A Blog by Jonathan Low

 

Apr 13, 2023

The Reason AI Has Trouble Spotting AI-Generated Images

AI generators are continuously improving, in part because more people are creating and posting ever-better AI-generated images. 

This makes it harder for AI identifiers to keep up. JL 

Anne-Marie Alcantara reports in the Wall Street Journal:

Behind many AI images is Midjourney, which can turn text prompts into images, or blend existing images in novel ways. Its latest version blurs the line between reality and fiction. Other image generators, including DALL-E, which powers Microsoft’s new Bing Image Creator, are also getting better fast. When people share AI images, they choose the best of the fakes, so it becomes tougher to figure out what’s real. “We look at all the tools and every time they’re updating their models, we have to update ours and keep up.” Optic's tool had a 95% accuracy rate until Midjourney released its latest software, and accuracy dropped to 88.9%.

Artificial-intelligence image generators are evolving before our very eyes.

Way, way back in January, systems such as OpenAI’s DALL-E might have rendered a human with backward fingers or a floating extra eyebrow. By March, some images were realistic enough to fool a large number of people.

The fake image of Pope Francis in a white puffy coat tricked many people. (Illustration: Guerrero Art)

There was that image of Pope Francis in a white puffer coat, not to mention photos of Donald Trump getting arrested and French President Emmanuel Macron walking through a protest, among others. These images are part of a growing furor that led tech leaders such as Elon Musk and Apple co-founder Steve Wozniak to make a public plea to pause the development of AI tools.

While AI-generated content can be fun, it poses risks to industries and everyday interactions alike. It can be used to spread misinformation, infringe on intellectual property or sexualize photos of people. We’re already reaching a point where we need ways to discern human-made images from machine-generated ones.

Behind many of these AI images is Midjourney, which can turn text prompts into images, or blend existing images in novel ways. Its latest version has done the most to date to blur the line between reality and fiction, professors and researchers say. Other image generators—including DALL-E, which powers Microsoft’s new Bing Image Creator—are also getting better fast.

Some developers are building tools that can analyze images for signs of AI origin. The trouble is, unless the tools keep pace with the image generators they’re monitoring, even they can be fooled.

Examining the artifacts

Optic, an AI trust-and-safety company, launched AI or Not, a site where you can share a photo or illustration and the site determines whether it was made by a human or an AI. You can upload an image directly to the website or paste an image’s URL. There’s no limit to how many images you can upload.

 

You can also tweet or retweet an image at Optic’s Twitter account, @optic_xyz, along with the hashtag #aiornot, and you’ll get a reply telling you what it believes, along with its confidence level as a percentage. The company is also working on a Google Chrome extension.

Optic’s tool examines artifacts in each image that are invisible to the human eye, such as variations in brightness and color, says Chief Executive Officer Andrey Doronichev.
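
Optic hasn’t published the details of its detector, but as a rough illustration of what “artifacts invisible to the human eye” can look like in practice, here is a minimal Python sketch that summarizes an image’s per-channel color variation and local brightness texture. The function name, the 8x8 block size and the particular statistics are arbitrary choices for this example, not anything taken from Optic.

    import sys

    import numpy as np
    from PIL import Image

    def image_statistics(path):
        """Toy example: summarize brightness and color variation in an image."""
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0

        # Overall color balance and spread, per RGB channel.
        channel_mean = img.mean(axis=(0, 1))
        channel_std = img.std(axis=(0, 1))

        # Local brightness texture: standard deviation of luminance within
        # 8x8 blocks, averaged over the whole image.
        luminance = img @ np.array([0.299, 0.587, 0.114])
        h, w = luminance.shape
        blocks = luminance[: h - h % 8, : w - w % 8].reshape(h // 8, 8, w // 8, 8)
        local_std = blocks.std(axis=(1, 3)).mean()

        return {
            "channel_mean": channel_mean.round(3).tolist(),
            "channel_std": channel_std.round(3).tolist(),
            "mean_local_std": round(float(local_std), 3),
        }

    if __name__ == "__main__":
        print(image_statistics(sys.argv[1]))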

The tool had a 95% accuracy rate—until recently. Then Midjourney released its latest software, and the accuracy dropped to 88.9%. Optic’s team updated AI or Not so it could spot the new Midjourney images. Microsoft’s Bing Image Creator, which uses a newer DALL-E version, also threw off the tool.

‘An arms race’

Hive, an AI company that tags content, has also updated its free AI-generated content detector to keep pace with evolving image generators. The tool uses AI—trained on millions of images from DALL-E, Stable Diffusion and Midjourney—to determine the origin of both images and text. Hive limits users to 100 queries a day unless they pay for its premium service.
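
Hive hasn’t released its model or training data, but the general recipe the article describes, a classifier trained on labeled examples of real and AI-generated images, resembles the standard transfer-learning pattern sketched below. The folder layout, backbone and hyperparameters are assumptions made for the example, not details from Hive.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Assumed (hypothetical) folder layout: data/real/... and data/ai/...
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    dataset = datasets.ImageFolder("data", transform=transform)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

    # ImageNet-pretrained backbone with a new two-class head
    # ("real" vs. "AI-generated").
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:  # a single pass, purely for illustration
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()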

The company estimates that it accurately detects about 95% of AI-made images, but widely shared AI images tend to be more convincing than the rest. When people share AI images, they choose the best of the fakes, so it becomes tougher to figure out what’s real, says Hive CEO Kevin Guo.

As with Optic, Hive has some trouble spotting images from the new Bing Image Creator.

“It’s an arms race,” Mr. Guo says, noting that Hive will keep evolving. “We look at all the tools out there and every time they’re updating their models, we have to update ours and keep up the pace.”

For now, if you’re trying to find out whether an image is AI-made or real, run it through both Optic’s and Hive’s tools. If at least one indicates it’s AI-generated, it probably is.
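
That rule of thumb is simply an OR over the two verdicts. The few lines below encode it; the boolean inputs stand in for whatever results you read off the two sites, and no API for either service is assumed.

    def likely_ai_generated(optic_says_ai: bool, hive_says_ai: bool) -> bool:
        """Article's rule of thumb: one positive verdict is enough to flag.

        The two booleans stand in for the verdicts read off Optic's and
        Hive's sites; this does not use either service's API.
        """
        return optic_says_ai or hive_says_ai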

Context is key

Many AI image generators are setting their own guardrails. The Bing Image Creator flags and blocks user prompts that ask it to create images of prominent public figures, for example. Midjourney has human moderators, but it’s rolling out a way to moderate user requests with an algorithm, says David Holz, founder of the company.

You can also get better at detecting fake images with your own eyes.

Start with context: Where and when you see an image should help you determine whether it’s real, says Amit Roy-Chowdhury, a professor of electrical and computer engineering at the University of California, Riverside.

 

Look for parts of an image that seem out of place. One is a person’s expression, which might not appear realistic or might look off in some way in AI-generated photos, Prof. Roy-Chowdhury says. Some images may have missing elements, such as a mirror or glass window that doesn’t show a person’s reflection, he adds.

The real problem may not be that generative AI is getting harder to detect, but that more people are using it to trick others.

“It has the potential to be very, very harmful without being super smart,” he says.


