Need to tell AI-made deepfakes from real pics? Call astronomers
Deepfakes – Artifex.News | Tue, 30 Jul 2024


Using strategies that astronomers employ to characterise the shapes of distant, dim galaxies, Adejumoke Owolabi, a master's student at the University of Hull, and her mentor Kevin Pimbblet, professor of astrophysics and director of the university's Centre of Excellence for Data Science, Artificial Intelligence and Modelling, have described a new technique to distinguish deepfakes created by machine-learning artificial intelligence (AI) from genuine photographs.

Their research findings were presented at the U.K. Royal Astronomical Society’s National Astronomy Meeting on 15 July this year.

Deepfake concern

In what The New York Times described as the "First A.I. Election", Argentina witnessed two rival candidates, Sergio Massa and Javier Milei, using AI-based deepfake technology to create hyper-realistic video, audio, and pictures during their recent election campaigns.

Not long ago, Russian cyber agents hacked a Ukrainian television channel and broadcast a deepfake video of Ukraine's President Volodymyr Zelenskyy, in which he appeared to ask his compatriots to lay down their weapons. The clip quickly went viral.

During the recent general elections in India, too, unscrupulous actors used AI tools to create avatars of Indian politicians. Alarmingly, these avatars were manipulated to spread audio and video messages undermining political rivals.

Others have used deepfake tools to defame actresses by creating pornography involving their images. Such manipulation can make it difficult for people at large to distinguish the real from the counterfeit, trap them in insidious narratives and beliefs, and ultimately undermine trust in democratic institutions.

The shapes of galaxies

A counterpunch to this problem may just lie in the stars.

Galaxies are vast cosmic islands of gas clouds, dust, and billions of stars held together by gravity. They come in various shapes and sizes and their appearance can give us clues about how they formed and evolved. For instance, if one galaxy formed after two older galaxies collided, it will contain some distinct signs of that collision. If a galaxy is pulled and consumed by a nearby massive galaxy, it will show signs of this interaction as well.

In the 1930s, the American astronomer Edwin Hubble classified thousands of known galaxies based on their appearance into four groups: elliptical, spiral, barred spiral, and irregular, plus various subclasses. Today, thanks to astronomical telescopes, we have data about millions of galaxies in the universe. And thanks to the efforts of half a million volunteers who participated in a crowdsourcing citizen-science programme called Galaxy Zoo, it has been possible to organise these galaxies using Hubble’s rubric.

Multiple large-scale astronomy projects are also currently in progress, including the ground-based Vera C. Rubin Observatory and space-based observatories such as the Nancy Grace Roman Space Telescope (formerly the Wide Field Infrared Survey Telescope) and the James Webb Space Telescope. The Rubin Observatory (previously known as the Large Synoptic Survey Telescope) is projected to produce 36 TB of data each night it scans the skies. Even a dedicated citizen-science effort would struggle to manage such a large output.

AI in astronomy

While visual inspection is the ideal way to determine a galaxy's shape and structure (its morphology), astronomers are turning to AI-based solutions to manage the enormous volumes of data that many new astronomy projects generate. These data pertain to the billions, even trillions, of galaxies across the observable universe, most of which are too dim to classify by eye.

Specifically, astronomers have been using machine learning and computer vision techniques to characterise galaxies’ morphologies. One popular set of parameters used in this exercise is called CAS, which stands for ‘concentration, asymmetry, and smoothness’. Computers evaluate these three parameters by analysing the level of light in each pixel of a digital image of a galaxy.

The concentration parameter quantifies the amount of light in the centre of a galaxy compared to its outer parts. The asymmetry index indicates the fraction of a galaxy's light that is non-symmetric, while the smoothness index indicates the fraction of light contained in clumps.
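The three measures can be sketched in a few lines of code. The snippet below is an illustrative simplification, not the exact formulae from the astronomy literature (which involve Petrosian radii, circular apertures, and background corrections); the function name, the central-box definition of concentration, and the 3x3 blur are all assumptions made for clarity.

```python
import numpy as np

def cas_parameters(img: np.ndarray, inner_frac: float = 0.3) -> dict:
    """Simplified CAS-style measures on a 2-D array of pixel intensities."""
    img = img.astype(float)
    total = img.sum()
    h, w = img.shape

    # Concentration: fraction of light inside a central box vs. the whole image.
    ch, cw = int(h * inner_frac), int(w * inner_frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    concentration = img[top:top + ch, left:left + cw].sum() / total

    # Asymmetry: normalised residual between the image and its 180-degree rotation.
    rotated = np.rot90(img, 2)
    asymmetry = np.abs(img - rotated).sum() / (2 * total)

    # Smoothness (clumpiness): residual between the image and a 3x3 box-blurred copy.
    smoothed = sum(np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    smoothness = np.abs(img - smoothed).sum() / total

    return {"concentration": concentration,
            "asymmetry": asymmetry,
            "smoothness": smoothness}
```

On a perfectly uniform image, asymmetry and smoothness are both zero, and concentration is simply the area fraction of the central box; structured or lopsided images score higher on the corresponding measures.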

Another measure, the Gini index, quantifies how unevenly light is distributed among a galaxy's pixels. It is also useful for identifying different morphologies, especially those of galaxies that are merging with each other.

Into the eye of a deepfake

Typically, in a photograph of a person's face, the eyes reflect ambient light, including the images of nearby people and objects. Adejumoke Owolabi noted that the reflections in the left and right eyes could be analysed using the CAS parameters and the Gini index to check whether they matched. If the reflections matched, the photograph was likely genuine; if they didn't, it was likely a deepfake.
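In spirit, the check compares a summary statistic of each eye crop. The sketch below is a crude illustration of that idea, not the authors' actual pipeline: the function name is hypothetical, and the mismatch threshold of 0.1 is an arbitrary value chosen for the example, not a figure from the published work.

```python
import numpy as np

def reflections_consistent(left_eye: np.ndarray, right_eye: np.ndarray,
                           threshold: float = 0.1) -> bool:
    """Flag a face image as suspicious if the light distributions in the
    two eye crops (summarised by their Gini indices) disagree too much."""
    def gini(x):
        # Standard sorted-value Gini formula on the pixel intensities.
        x = np.sort(np.abs(x).ravel().astype(float))
        n = x.size
        i = np.arange(1, n + 1)
        return ((2 * i - n - 1) * x).sum() / (x.mean() * n * (n - 1))
    return bool(abs(gini(left_eye) - gini(right_eye)) <= threshold)
```

Two crops with similar highlight structure pass; a pair where one eye's light is diffuse and the other's is concentrated in a single bright point fails the check.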

A series of real eyes showing largely consistent reflections in both eyes. | Photo Credit: Adejumoke Owolabi

A series of deepfake eyes showing inconsistent reflections in each eye. | Photo Credit: Adejumoke Owolabi

Ms. Owolabi analysed the reflections of light in the eyes of several authentic and AI-generated images in this way and found that the parameters were consistent in genuine photos, as one would expect from the laws of physics. But she also noticed a small but detectable (by computers, at least) mismatch in deepfakes generated using AI-based tools.

Further, she found that the Gini index predicted whether an image had been deepfaked more reliably than the CAS parameters did. Using the Gini index, she and Dr. Pimbblet detected digitally manipulated facial images correctly 70% of the time.

“It’s important to note that this is not a silver bullet for detecting fake images,” Dr. Pimbblet said in a press release issued by the Royal Astronomical Society. “There are false positives and false negatives; it’s not going to get everything. But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes.”

T.V. Venkateswaran is a science communicator and visiting faculty member at the Indian Institute of Science Education and Research, Mohali.



Another Alia Bhatt Deepfake Goes Viral, Fans Express Concern Over AI
Deepfakes – Artifex.News | Sun, 16 Jun 2024


Alia Bhatt’s deepfake video shows her getting ready in a black kurta

New Delhi:

Amid consternation and outrage over a series of deepfake videos, actor Alia Bhatt has fallen prey to the technology yet again.

Alia Bhatt's new deepfake shows her taking part in the 'get ready with me' trend in a video shared on Instagram. The video shows her getting ready in a black kurta and putting on makeup.

This is not the first time that a deepfake video of Alia has gone viral on social media.

Earlier, a deepfake video that merged Alia Bhatt's face with actor Wamiqa Gabbi's had gone viral. Another deepfake showed a woman with Alia Bhatt's morphed face making obscene gestures.

Several Instagram users have reacted to Alia Bhatt’s deepfake video expressing concern over the misuse of Artificial Intelligence (AI).

"AI is getting dangerous day by day," a user said. A second user said, "I am getting scared of AI now." "I really hope you have consent for using the AI that uses real human faces," said a third.

Deepfakes are a form of synthetic media crafted using artificial intelligence, employing sophisticated algorithms to manipulate both visual and audio elements.

Deepfakes of several celebrities – including Rashmika Mandanna, Kajol, Katrina Kaif, Aamir Khan, Ranveer Singh and Sara Tendulkar – had earlier surfaced on the internet.

The government has advised all intermediaries (social media platforms such as Instagram and X) to ensure users do not violate the "prohibited content" rules of the IT Act, as it bids to combat the worrying trend of deepfakes.

The Centre has said that the creation and circulation of deepfakes carry a stiff penalty: a fine of Rs 1 lakh and three years in jail.

Prime Minister Narendra Modi had also flagged the misuse of AI for creating deepfake videos and called it a “big concern.” “During the times of Artificial Intelligence, it is important that technology should be used responsibly,” he cautioned.




