A Blog by Jonathan Low

 

Jan 9, 2020

How Companies Are Using AI Generated Fake People To Appear More Diverse

Dating sites need more women and everyone needs more 'minorities.' JL

Drew Harwell reports in the Washington Post:

Artificial intelligence start-ups are selling images of computer-generated faces that look like the real thing, offering companies a chance to create imaginary models and “increase diversity” in their ads without needing human beings.  Images come from online databases of free and uncopyrighted photos, and the system allows clients to easily superimpose different faces on a shifting set of bodies.  The images are a gift to purveyors of disinformation, because unlike real photos, they cannot be easily traced.
Artificial intelligence start-ups are selling images of computer-generated faces that look like the real thing, offering companies a chance to create imaginary models and “increase diversity” in their ads without needing human beings.
One firm is offering to sell diverse photos for marketing brochures and has already signed up clients, including a dating app that intends to use the images in a chatbot. Another company says it’s moving past AI-generated headshots and into the generation of full, fake human bodies as early as this month.
The AI software used to create such faces is freely available and improving rapidly, allowing small start-ups to easily create fakes that are so convincing they can fool the human eye. The systems train on massive databases of actual faces, then attempt to replicate their features in new designs. 
But AI experts worry that the fakes will empower a new generation of scammers, bots and spies, who could use the photos to build imaginary online personas, mask bias in hiring and damage efforts to bring diversity to industries. The fact that such software now has a business model could also fuel a greater erosion of trust across an Internet already under assault by disinformation campaigns, “deepfake” videos and other deceptive techniques.
Elana Zeide, a fellow in artificial intelligence, law and policy at the University of California at Los Angeles’s law school, said the technology “showcases how little power and knowledge users have in terms of the reality of what they see online.”
“There’s no objective reality to compare these photos against,” she said. “We’re used to physical worlds with sensory input … but with this, we don’t have any instinctive or taught responses on how to detect what’s real and what isn’t. It’s exhausting.” 
Icons8, an Argentina-based design firm that sells digital illustrations and stock photos, launched its online business Generated.photos last month, offering “worry-free, diverse models on-demand using AI.”
The site allows anyone to filter fake photos based on age (from “Infant” to “Elderly”), ethnicity (including “White,” “Latino,” “Asian” and “Black”) and emotion (“Joy,” “Neutral,” “Surprise”), as well as gender, eye color and hair length. The system, however, shows a number of odd gaps and biases: For instance, the only available skin color for infants is white. The company says its faces could be useful for clients needing to jazz up promotional materials, fill out prototypes or illustrate concepts too touchy for a human model, such as “embarrassing situations” and “criminal proceedings.” Its online guide also promises clients they can “increase diversity” and “reduce bias” by including “many different ethnic backgrounds in your projects.”

Companies infamously have embarrassed themselves through haphazard diversity-boosting attempts, Photoshopping a black man into an all-white crowd, as the University of Wisconsin-Madison did on an undergraduate booklet, or superimposing women into group photos of men.
But while the AI start-ups boast a simple fix — offering companies the illusion of diversity, without working with a diverse set of people — their systems have a crucial flaw: They mimic only the likenesses they’ve already seen. Valerie Emanuel, a Los Angeles-based co-founder of the talent agency Role Models Management, said she worried that these kinds of fake photos could turn the medium into a monoculture, in which most faces look the same.
“We want to create more diversity and show unique faces in advertising going forward,” Emanuel said. “This is homogenizing one look.”

Icons8 created its faces first by taking tens of thousands of photos of about 70 models in studios around the world, said Ivan Braun, the company’s founder. Braun’s colleagues — who work remotely across the United States, Italy, Israel, Russia and Ukraine — then spent several months preparing a database, cleaning the images, labeling data and organizing the photos to the computer’s precise specifications.
Clients can download up to 10,000 photos a month starting at $100. The models will not be paid residuals for any of the new AI-generated images built from their photo shoots, Braun said.
Another firm, the San Francisco-based start-up Rosebud AI, offers clients a chance at 25,000 photos of “AI-customized models of different ethnicities.” Company founder Lisha Li — who named it after an infinite-money cheat code she loved as a kid for the people-simulator game “The Sims” — said she first marketed the photos as a way for small businesses on online-shopping sites to invent stylish models without the need for pricey photography. 
Her company’s source images came from online databases of free and uncopyrighted photos, and the system allows clients to easily superimpose different faces on a shifting set of bodies. She promotes the system as a powerful tool to augment photographers’ abilities, letting them easily tailor the models for a fashion shoot to the nationality or ethnicity of the viewer. “Face is a pain point that the technology can solve,” she said.
The system is offered to a limited group of clients, whom she said the company assesses individually in the hope of blocking bad actors. About 2,000 prospective clients are on the waiting list.
Both companies rely on an AI breakthrough known as “generative adversarial networks,” which use dueling algorithms to refine their work: a creator system outputs a new image, which a critic system then compares with the original, informing the creator’s next design. Each iteration tends to beget a better copy than the last. 
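To make the “dueling algorithms” concrete, here is a minimal, illustrative sketch of a GAN training loop in PyTorch. The tiny fully connected networks, the random tensor standing in for real photographs and the hyperparameters are all assumptions chosen for brevity; production systems such as StyleGAN use far larger convolutional architectures trained on real face databases.

```python
import torch
import torch.nn as nn

LATENT = 64   # size of the random noise vector the creator starts from
IMAGE = 784   # a flattened 28x28 image, standing in for a face photo

# The "creator" (generator) turns random noise into an image-shaped output.
creator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMAGE), nn.Tanh(),
)
# The "critic" (discriminator) scores how real an image looks.
critic = nn.Sequential(
    nn.Linear(IMAGE, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_creator = torch.optim.Adam(creator.parameters(), lr=2e-4)
opt_critic = torch.optim.Adam(critic.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch: a real system trains on photographs of actual people.
real_photos = torch.randn(32, IMAGE)

for step in range(100):
    # Critic turn: learn to score real photos as 1 and the creator's fakes as 0.
    fakes = creator(torch.randn(32, LATENT)).detach()
    critic_loss = (loss_fn(critic(real_photos), torch.ones(32, 1))
                   + loss_fn(critic(fakes), torch.zeros(32, 1)))
    opt_critic.zero_grad()
    critic_loss.backward()
    opt_critic.step()

    # Creator turn: adjust the next design so the critic mistakes it for real.
    fakes = creator(torch.randn(32, LATENT))
    creator_loss = loss_fn(critic(fakes), torch.ones(32, 1))
    opt_creator.zero_grad()
    creator_loss.backward()
    opt_creator.step()
```

Each pass through the loop enacts the back-and-forth described above: the critic gets better at spotting fakes, which in turn forces the creator’s next design to be a more convincing copy.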
But the systems are imperfect artists, untrained in the basics of human anatomy, and can attempt to match only the patterns of all the faces they’ve processed before. Along the way, the AI creates an army of what Braun calls “monsters”: nightmarish faces pocked with inhuman deformities and surreal mutations. Common examples include overly fingered hands, featureless faces and people with mouths for eyes.

The software has in recent months become one of AI researchers’ flashiest and most viral breakthroughs, vastly reducing the time and effort it takes for artists and researchers to create dreamy landscapes and fictional people. A seemingly infinite stream of fakes can be seen at thispersondoesnotexist.com, as well as a companion AI system trained on images of cats, thiscatdoesnotexist.com. To test whether people can tell the difference between a generated fake and the real thing, AI researchers at the University of Washington also built the side-by-side website whichfaceisreal.com.
The machine-learning techniques are “open source,” allowing virtually anyone to use and build on them. And the software is improving all the time: A newer version of StyleGAN, unveiled last month by AI researchers at Nvidia, promises quicker generation methods, higher-quality images and fewer of the glitches and artifacts that gave old fakes away. 
Researchers say the images are a gift to purveyors of disinformation, because unlike real photos taken from elsewhere, they cannot be easily traced. Such forgeries are already in use, including on Facebook, where fact-checkers have found the images used to create fake profiles to promote preselected pages or political ideas.
In another case, the LinkedIn profile of a young woman supposedly named Katie Jones, which made connections with top officials around Washington, was found earlier this year to use an AI-generated image. Counterintelligence experts told the Associated Press that it carried the signatures of foreign espionage.
The technology is also the foundation for the face-swapping videos known as deepfakes, used for both parodies and fake pornography. The systems once required mountains of “facial data” to generate one convincing fake. But researchers this year have published details showing “few-shot” techniques that require only a couple of images to produce a convincing mimic.
Braun said there is a reasonable fear of AI-generated images being used for disinformation or abuse, adding, “We have to worry about it. The technology is already here, and there’s nowhere to go.” But the solution for that problem, he said, is not the responsibility of companies like his: Instead, it will require a “combination of social change, technological change and policy.” (The company does not use any authentication measures, like watermarks, to help people verify whether its images are real or fake.)
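By way of contrast, a visible provenance mark of the kind Generated.photos forgoes is straightforward to apply. The sketch below stamps a disclosure label onto a generated portrait using the Pillow imaging library; the file names are hypothetical, and a serious scheme would more likely rely on robust invisible watermarking or signed metadata rather than a label anyone can crop out.

```python
from PIL import Image, ImageDraw

# Load a generated portrait (file name is hypothetical).
img = Image.open("generated_face.png").convert("RGBA")

# Draw a semi-transparent disclosure label in the bottom-left corner.
overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
draw.text((10, img.height - 24), "AI-GENERATED", fill=(255, 255, 255, 180))

# Composite the label onto the photo and save the marked copy.
Image.alpha_composite(img, overlay).convert("RGB").save("generated_face_marked.jpg")
```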
Two models who worked with Icons8 said they were told only after the photo shoot that their portraits would be used for AI-generated imagery. Braun said the first shoots were intended for stock photography and that the idea of an AI application came later, adding, “I never thought of it as a problem.”
Estefanía Massera, a 29-year-old model in Argentina, said her photo shoot involved expressing a wide range of emotions for the camera. She was asked to look hungry, angry, tired and as if she had been diagnosed with cancer. Looking at some of the AI-generated faces, she said, she can see some similarities to her own eyes.
She compared the face-creating software to “designer baby” systems in which parents can choose the features of their children. But she’s less worried about how the technology could affect her work: The world still needs real models, she said. “Today the trends in general and for companies and brands is to be as real as possible,” she added.
Simón Lanza, a 20-year-old student who also sat for an Icons8 shoot, said he could see why people in the business might be alarmed.
“As a model, I think it would take the job from people,” he said. “But you can’t stop the future.”
