A Blog by Jonathan Low
Aug 21, 2018

US Dept of Defense Created Tool To Catch Deep Fakes: Then Co-Evolution Kicked In

Faking the fake finders: a never-ending game of facial whack-a-mole. JL

Will Knight reports in MIT Technology Review:

Faces made using deepfakes rarely blink. When they do, the eye movement is unnatural. This is because deepfakes are trained on still images, which show a person with eyes open. Similar tricks for catching deepfakes: strange head movements, odd eye color. “We are working on exploiting physiological signals that, for now, are difficult for deepfakes to mimic.” (But) these tools may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths. A key problem is that machine-learning systems can be trained to outmaneuver forensics tools.
The first forensics tools for catching revenge porn and fake news created with AI have been developed through a program run by the US Defense Department.
Forensics experts have rushed to find ways of detecting videos synthesized and manipulated using machine learning, because the technology makes it far easier to create convincing fake videos that could be used to sow disinformation or harass people.
The most common technique for generating fake videos involves using machine learning to swap one person’s face onto another’s. The resulting videos, known as “deepfakes,” are simple to make and can be surprisingly realistic. Further tweaks by a skilled video editor can make them seem even more real.
Video trickery involves using a machine-learning technique known as generative modeling, which lets a computer learn from real data before producing fake examples that are statistically similar. A recent twist on this involves having two neural networks, known as generative adversarial networks, work together to produce ever more convincing fakes (see “The GANfather: The man who’s given machines the gift of imagination”).
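To make that adversarial setup concrete, here is a minimal GAN sketch in Python with PyTorch. Everything in it is illustrative rather than drawn from the article: the networks are tiny, and the “real” data is a toy 1-D Gaussian instead of face images, but the training loop is the same two-player game the article describes, with a generator learning to produce samples the discriminator can no longer tell from real ones.

```python
# Minimal GAN sketch in PyTorch. Purely illustrative: the "real" data is a
# toy 1-D Gaussian rather than face images, and the network sizes are arbitrary.
import torch
import torch.nn as nn

def real_samples(n):
    return torch.randn(n, 1) * 1.25 + 4.0  # "real" data: N(mean=4, std=1.25)

# Generator maps 8-D noise to a sample; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(5000):
    # 1) Train the discriminator: push real samples toward 1, fakes toward 0.
    loss_d = bce(D(real_samples(64)), ones) + \
             bce(D(G(torch.randn(64, 8)).detach()), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator: make the discriminator output 1 on its fakes.
    loss_g = bce(D(G(torch.randn(64, 8))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

with torch.no_grad():
    fakes = G(torch.randn(1000, 8))
print(f"fake mean {fakes.mean().item():.2f}, std {fakes.std().item():.2f}")  # ~4 and ~1.25
```

Scaled up to convolutional networks trained on face photos, this same loop is what yields fakes that are “statistically similar” to real data in the sense the article describes.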
The tools for catching deepfakes were developed through a program—run by the US Defense Advanced Research Projects Agency (DARPA)—called Media Forensics. The program was created to automate existing forensics tools, but has recently turned its attention to AI-made forgery.
"We've discovered subtle cues in current GAN-manipulated images and videos that allow us to detect the presence of alterations,” says Matthew Turek, who runs the Media Forensics program.

[Image: four video stills that mirror the original Tucker Carlson video, with actor Nicolas Cage’s face swapped onto the speaker. Caption: Tucker Carlson gets his own Nicolas Cage makeover. Credit: University at Albany, SUNY]
One remarkably simple technique was developed by a team led by Siwei Lyu, a professor at the State University of New York at Albany, and one of his students. “We generated about 50 fake videos and tried a bunch of traditional forensics methods. They worked on and off, but not very well,” Lyu says.
Then, one afternoon, while studying several deepfakes, Lyu realized that the faces made using deepfakes rarely, if ever, blink. And when they do blink, the eye movement is unnatural. This is because deepfakes are trained on still images, which tend to show a person with his or her eyes open.
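Here is a rough reconstruction of how such a blink test could work, not Lyu’s actual code: a standard proxy for eye openness is the eye aspect ratio (EAR) of Soukupová and Čech, computed from six eye landmarks per frame, which a face-landmark detector (for example, dlib’s 68-point shape predictor, assumed here) would supply. The ratio collapses when the eye closes, so counting sustained dips gives a blink count; the 0.2 threshold and the toy data are illustrative.

```python
# Sketch of a blink-rate check, reconstructed from the article's description
# rather than taken from Lyu's code. Assumes a face-landmark detector (e.g.,
# dlib's 68-point shape predictor) already supplies six landmarks per eye
# per frame; the 0.2 threshold and the toy data below are illustrative.
import numpy as np

def eye_aspect_ratio(eye):
    """EAR of Soukupova & Cech: eye is a (6, 2) array of landmark (x, y)
    coords ordered around the eye contour. Roughly 0.3 open, near 0 shut."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical opening, outer pair
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical opening, inner pair
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, closed_thresh=0.2, min_frames=2):
    """Count dips of the per-frame EAR below the threshold that last at
    least a couple of frames (i.e., plausible blinks)."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            run += 1
        else:
            blinks += run >= min_frames
            run = 0
    return blinks + (run >= min_frames)

# Toy demo: 300 frames (~10 s at 30 fps) of an open eye with one blink.
ears = 0.30 + 0.01 * np.random.randn(300)
ears[100:104] = 0.08  # eye closed for four frames
print(count_blinks(ears))  # -> 1; zero blinks in a clip this long is a red flag
```

A real face blinks every few seconds, so a video of any length whose blink count stays at zero is exactly the kind of physiological anomaly Lyu’s observation exploits.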
Others involved in the DARPA challenge are exploring similar tricks for automatically catching deepfakes: strange head movements, odd eye color, and so on. “We are working on exploiting these types of physiological signals that, for now at least, are difficult for deepfakes to mimic,” says Hany Farid, a leading digital forensics expert at Dartmouth College.
DARPA’s Turek says the agency will run more contests “to ensure the technologies in development are able to detect the latest techniques.”
The arrival of these forensics tools may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths. A key problem, says Farid, is that machine-learning systems can be trained to outmaneuver forensics tools.
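Farid’s point can be illustrated with a generic adversarial-example sketch, again ours rather than anything from the article: given gradient access to a detector (a stand-in model here), a forger can nudge a fake in the direction that lowers its “fake” score while barely changing its content, in the style of the fast gradient sign method.

```python
# Generic adversarial-evasion sketch (fast-gradient-sign style), not an attack
# on any real forensics tool: both the detector and the input are stand-ins.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
x = torch.randn(1, 64, requires_grad=True)  # stand-in for a flattened fake frame

score = detector(x).sum()  # higher score = "looks fake" to the detector
score.backward()

# One small step against the gradient lowers the score while barely
# perturbing the input itself.
x_adv = (x - 0.05 * x.grad.sign()).detach()

with torch.no_grad():
    print(detector(x).item(), detector(x_adv).item())  # second score is lower
```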
Lyu says a skilled forger could get around his eye-blinking tool simply by collecting images that show a person blinking. He adds that his team has developed an even more effective technique, but he is keeping it secret for the moment. “I’d rather hold off at least for a little bit,” Lyu says. “We have a little advantage over the forgers right now, and we want to keep that advantage.”