A Blog by Jonathan Low

 

Feb 13, 2020

Researchers Create New Tool To Detect Images Doctored By AI or Other Means

As deepfakes and other manipulated images or voices become more commonplace, software tools are being created to 'out' them.

This will continue to be a co-evolutionary battle, with each side attempting to outwit and stay ahead of the other. Whether the good guys can keep up with the bad guys remains to be seen. JL

Davey Alba reports in the New York Times:

Jigsaw, a company that develops cutting-edge tech and is owned by Google’s parent, unveiled a free tool that could help journalists spot doctored photographs, even ones created with the help of artificial intelligence. The tool is meant to verify the authenticity of images or show where they have been altered. When an image has been manipulated, traces of the changes may be left behind. A computer program trained to learn from being shown example after example of what it should detect can analyze an image and highlight where it thinks those traces are.
A doctored, phony image of President Barack Obama shaking hands with President Hassan Rouhani of Iran. A real photograph of a Muslim girl at a desk doing her homework with Donald J. Trump looming in the background on television. 
It is not always easy to tell the difference between real and fake photographs. But the pressure to get it right has never been more urgent as the amount of false political content online continues to rise. 
On Tuesday, Jigsaw, a company that develops cutting-edge tech and is owned by Google’s parent, unveiled a free tool that researchers said could help journalists spot doctored photographs — even ones created with the help of artificial intelligence. Jigsaw, known as Google Ideas when it was founded, said it was testing the tool, called Assembler, with more than a dozen news and fact-checking organizations around the world. They include Animal Politico in Mexico, Rappler in the Philippines and Agence France-Presse. It does not plan to offer the tool to the public.
“We observed an evolution in how disinformation was being used to manipulate elections, wage war and disrupt civil society,” Jared Cohen, Jigsaw’s chief executive, wrote in a blog post about Assembler. “But as the tactics of disinformation were evolving, so too were the technologies used to detect and ultimately stop disinformation.”
The tool is meant to verify the authenticity of images — or show where they may have been altered. Reporters can feed images into Assembler, which has seven “detectors,” each one built to spot a specific type of photo-manipulation technique.
When an image has been manipulated — for instance, two images were merged together or something was deleted from the background — traces of the changes may be left behind. With a computer program that has been trained to learn from being shown example after example of what it should detect, Assembler can analyze an image and highlight where it thinks those traces are.
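To make that idea concrete, here is a minimal, illustrative sketch of one classic forensic technique, error level analysis, which highlights regions that respond differently to JPEG recompression. This is an assumed stand-in written in Python with the Pillow library, not Assembler’s actual detector, and the function name error_level_map is hypothetical.

```python
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_map(path, out_path="ela_map.png", quality=90):
    """Illustrative error-level-analysis pass (not Assembler's method)."""
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality, then diff against the original.
    # Untouched areas tend to recompress consistently; spliced-in regions often do not.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf).convert("RGB")

    diff = ImageChops.difference(original, resaved)

    # The raw differences are faint; stretch them so they are visible to the eye.
    extrema = diff.getextrema()          # ((min, max), ...) per channel
    max_diff = max(hi for _, hi in extrema) or 1
    diff = ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

    diff.save(out_path)
    return diff

# error_level_map("suspect_photo.jpg")
```

Bright patches in the output are only a hint worth a closer look, not proof of tampering; real detectors like Assembler’s combine many such signals.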
Five of Assembler’s image detectors were developed by research teams at universities, including the University of California, Berkeley; the University of Naples Federico II in Italy; and the University of Maryland. The models can detect things like color pattern anomalies, areas of an image that have been copied and pasted several times over, and whether more than one camera model was used to create an image.
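For the copy-and-paste case specifically, a toy version of copy-move detection can be sketched by hashing small blocks of an image and flagging any block content that appears in more than one place. Again, this is an assumed illustration in Python with NumPy and Pillow, not the universities’ actual models, and find_duplicate_blocks is a hypothetical name.

```python
import hashlib
from collections import defaultdict

import numpy as np
from PIL import Image

def find_duplicate_blocks(path, block=16):
    """Toy copy-move check: report 16x16 blocks that repeat elsewhere in the image."""
    gray = np.asarray(Image.open(path).convert("L"))
    h, w = gray.shape
    seen = defaultdict(list)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = gray[y:y + block, x:x + block]
            # Coarse quantization so near-identical blocks hash the same way.
            key = hashlib.md5((patch // 8).tobytes()).hexdigest()
            seen[key].append((x, y))
    # Block content that shows up at more than one location is a cloning suspect.
    return {key: spots for key, spots in seen.items() if len(spots) > 1}

# duplicates = find_duplicate_blocks("suspect_photo.jpg")
```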
“These detectors cannot completely solve the problem, but they represent an important tool to fight disinformation,” said Luisa Verdoliva, a professor at the Naples university and a visiting scholar at Google AI.
The other two detectors were developed by Jigsaw. One was designed to identify “deepfakes,” realistic images that have been heavily manipulated by artificial intelligence in ways meant to mislead an audience.
Santiago Andrigo, a Jigsaw product manager, said Assembler might be “most helpful in a situation where a journalist from a large news organization receives a scandalous image and is under pressure to break the news.” It could also be used to verify an image that has gone viral, he said.
Jigsaw also announced an interactive platform showing coordinated disinformation campaigns from around the world over the past decade. They include Ukrainian soldiers receiving targeted disinformation encouraging them to defect during the 2014 Russian annexation of Crimea; associates of President Rodrigo Duterte of the Philippines hiring “click armies” to write pro-Duterte comments and stories online; and a small-town California hospital hiring a private firm, Psy-Group, to influence public opinion about a contested seat on the hospital board.
The database described the players involved in influence operations, common tactics used and how the falsehoods were spread on social media platforms. Jigsaw worked with the Atlantic Council’s Digital Forensic Research Lab to organize the set of around 60 disinformation cases, culled from over 700 investigations, articles and reports the lab published over the last five years.
Emerson Brooking, a resident fellow at the lab, said the goal was not to build an encyclopedic list of disinformation campaigns but to create a foundation for “a shared language” to describe the various efforts. That way, they could develop a taxonomy that could help other media outlets and groups studying disinformation, he said.
The two projects, Assembler and the disinformation interactive platform, were announced on Jigsaw’s new research publication, The Current.
