A Blog by Jonathan Low

 

Feb 17, 2022

Humans Find AI-Generated Faces More Trustworthy Than Real Ones

The conclusion is that these images are becoming increasingly sophisticated and difficult to discern from real faces, a problem compounded by the fact that people tend to overestimate their ability to identify fakes.

The solution, according to researchers, is better tools for identifying and labeling fake images so they are not used maliciously or deceptively. JL

Emily Willingham reports in Scientific American:

After compiling 400 real faces matched to 400 synthetic versions, researchers asked one group of participants to distinguish real from fake. A second group got training in how to spot fakes. A third group rated a selection of the images for trustworthiness on a scale of one (very untrustworthy) to seven (very trustworthy). The first group had an average accuracy of 48.2%. The second failed to show improvement even with feedback about its choices. The group rating trustworthiness gave the synthetic faces 4.82, compared with 4.48 for real faces. Making tools for detection is important because people overestimate their ability to spot fakes and “the public has to understand when they’re being used maliciously.”

When TikTok videos emerged in 2021 that seemed to show “Tom Cruise” making a coin disappear and enjoying a lollipop, the account name was the only obvious clue that this wasn’t the real deal. The creator of the “deeptomcruise” account on the social media platform was using “deepfake” technology to show a machine-generated version of the famous actor performing magic tricks and having a solo dance-off.

One tell for a deepfake used to be the “uncanny valley” effect, an unsettling feeling triggered by the hollow look in a synthetic person’s eyes. But increasingly convincing images are pulling viewers out of the valley and into the world of deception promulgated by deepfakes.

The startling realism has implications for malevolent uses of the technology: its potential weaponization in disinformation campaigns for political or other gain, the creation of false porn for blackmail, and any number of intricate manipulations for novel forms of abuse and fraud. Developing countermeasures to identify deepfakes has turned into an “arms race” between security sleuths on one side and cybercriminals and cyberwarfare operatives on the other.

 

A new study published in the Proceedings of the National Academy of Sciences USA provides a measure of how far the technology has progressed. The results suggest that real humans can easily fall for machine-generated faces—and even interpret them as more trustworthy than the genuine article. “We found that not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces,” says study co-author Hany Farid, a professor at the University of California, Berkeley. The result raises concerns that “these faces could be highly effective when used for nefarious purposes.”

“We have indeed entered the world of dangerous deepfakes,” says Piotr Didyk, an associate professor at the University of Italian Switzerland in Lugano, who was not involved in the paper. The tools used to generate the study’s still images are already generally accessible. And although creating equally sophisticated video is more challenging, tools for it will probably soon be within general reach, Didyk contends.

The synthetic faces for this study were developed in back-and-forth interactions between two neural networks, examples of a type known as generative adversarial networks. One of the networks, called a generator, produced an evolving series of synthetic faces like a student working progressively through rough drafts. The other network, known as a discriminator, trained on real images and then graded the generated output by comparing it with data on actual faces.

The generator began the exercise with random pixels. With feedback from the discriminator, it gradually produced increasingly realistic humanlike faces. Ultimately, the discriminator was unable to distinguish a real face from a fake one.
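
The generator–discriminator feedback loop described above can be sketched in a few lines of code. The example below is a minimal, illustrative GAN training loop in PyTorch on tiny placeholder "images"; it is not the model actually used to make the study's faces, and the network sizes, learning rates, and toy data are assumptions chosen only to show how the two networks push against each other.

```python
# Minimal GAN sketch (PyTorch): a generator learns to mimic a toy "real image"
# distribution while a discriminator grades its output against real samples.
# Sizes and data are illustrative placeholders, not the study's actual model.
import torch
import torch.nn as nn

IMG_DIM = 64      # flattened 8x8 "image", a stand-in for real face photos
NOISE_DIM = 16    # the generator starts from random noise ("random pixels")

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 128), nn.ReLU(),
    nn.Linear(128, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),  # raw score: higher means "looks real"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def sample_real(batch: int) -> torch.Tensor:
    # Placeholder for a batch of real training images scaled to [-1, 1].
    return torch.rand(batch, IMG_DIM) * 2 - 1

for step in range(1000):
    real = sample_real(32)
    fake = generator(torch.randn(32, NOISE_DIM))

    # Discriminator: learn to score real images high and generated ones low.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: adjust its weights so its output fools the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Training alternates between the two losses: the discriminator's feedback is the only signal the generator receives, which is why, once the discriminator can no longer tell the two apart, the generated faces have become convincing.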

The networks trained on an array of real images representing Black, East Asian, South Asian and white faces of both men and women, in contrast with the more common use of white men’s faces in earlier research.

 

After compiling 400 real faces matched to 400 synthetic versions, the researchers asked 315 people to distinguish real from fake among a selection of 128 of the images. Another group of 219 participants got some training and feedback about how to spot fakes as they tried to distinguish the faces. Finally, a third group of 223 participants each rated a selection of 128 of the images for trustworthiness on a scale of one (very untrustworthy) to seven (very trustworthy).

The first group did not do better than a coin toss at telling real faces from fake ones, with an average accuracy of 48.2 percent. The second group failed to show dramatic improvement, reaching an average accuracy of only about 59 percent even with feedback about those participants’ choices. The group rating trustworthiness gave the synthetic faces a slightly higher average rating of 4.82, compared with 4.48 for real people.

The researchers were not expecting these results. “We initially thought that the synthetic faces would be less trustworthy than the real faces,” says study co-author Sophie Nightingale.

 

The uncanny valley idea is not completely retired. Study participants did overwhelmingly identify some of the fakes as fake. “We’re not saying that every single image generated is indistinguishable from a real face, but a significant number of them are,” Nightingale says.

The finding adds to concerns about the accessibility of technology that makes it possible for just about anyone to create deceptive still images. “Anyone can create synthetic content without specialized knowledge of Photoshop or CGI,” Nightingale says. Another concern is that such findings will create the impression that deepfakes will become completely undetectable, says Wael Abd-Almageed, founding director of the Visual Intelligence and Multimedia Analytics Laboratory at the University of Southern California, who was not involved in the study. He worries scientists might give up on trying to develop countermeasures to deepfakes, although he views keeping their detection on pace with their increasing realism as “simply yet another forensics problem.”

 

“The conversation that’s not happening enough in this research community is how to start proactively to improve these detection tools,” says Sam Gregory, director of programs strategy and innovation at WITNESS, a human rights organization that in part focuses on ways to distinguish deepfakes. Making tools for detection is important because people tend to overestimate their ability to spot fakes, he says, and “the public always has to understand when they’re being used maliciously.”

Gregory, who was not involved in the study, points out that its authors directly address these issues. They highlight three possible solutions, including creating durable watermarks for these generated images, “like embedding fingerprints so you can see that it came from a generative process,” he says.
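
The fingerprinting idea can be illustrated with a toy example. The sketch below, in Python with NumPy, hides a short binary fingerprint in the least significant bits of an image's pixels and reads it back. It is only a minimal illustration of embedding a traceable signal at generation time, under assumed names and parameters; a production-grade durable watermark of the kind the authors describe would have to survive cropping, compression, and other edits, which this toy version does not.

```python
# Toy watermark sketch: embed a short binary fingerprint in the least
# significant bit of the first pixels of an image, then recover it.
# Illustrative only; a robust watermark would need to survive re-encoding.
import numpy as np

def embed_fingerprint(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one fingerprint bit into the LSB of each of the first len(bits) pixels."""
    flat = image.astype(np.uint8).flatten()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_fingerprint(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the fingerprint back out of the least significant bits."""
    return image.astype(np.uint8).flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
synthetic_face = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
fingerprint = rng.integers(0, 2, size=32, dtype=np.uint8)             # 32-bit generator ID

marked = embed_fingerprint(synthetic_face, fingerprint)
assert np.array_equal(extract_fingerprint(marked, 32), fingerprint)
print("fingerprint recovered:", extract_fingerprint(marked, 32))
```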

The authors of the study end with a stark conclusion after emphasizing that deceptive uses of deepfakes will continue to pose a threat: “We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits,” they write. “If so, then we discourage the development of technology simply because it is possible.”
