A Blog by Jonathan Low

 

Apr 7, 2024

Tech Cos Use AI-Created "Synthetic Data" To Train Their AI. What Could Go Wrong?

The demand for synthetic data comes from tech companies' need for more content on which to train their AI models. The problem is that they are running out of such material because of natural limits, compounded by lawsuits from content creators, publishers and news organizations that say their content has been taken without permission or compensation.

AI's faults in generating content are now well known, from producing things that aren't true to passing along biases and disinformation. So despite assurances from tech company 'experts,' the use of synthetic data is likely to generate new problems that further undermine confidence in AI. JL

Cade Metz and Stuart Thompson report in the New York Times:

Tech companies may exhaust the high-quality text the internet has to offer for the development of AI. And the companies are facing copyright lawsuits from authors, news organizations and computer programmers for using their works without permission. Tech companies like Google and OpenAI (want) to train their technology with data generated by other A.I. models. (Such) synthetic data, they believe, will reduce copyright issues and boost the supply of training materials needed for A.I. (But) A.I. models get things wrong and make stuff up. They have shown that they pick up the biases that appear in the internet data on which they have been trained. If companies use A.I. to train A.I., they amplify their flaws.

OpenAI, Google and other tech companies train their chatbots with huge amounts of data culled from books, Wikipedia articles, news stories and other sources across the internet. But in the future, they hope to use something called synthetic data.

That’s because tech companies may exhaust the high-quality text the internet has to offer for the development of artificial intelligence. And the companies are facing copyright lawsuits from authors, news organizations and computer programmers for using their works without permission. (In one such lawsuit, The New York Times sued OpenAI and Microsoft.)

Synthetic data, they believe, will help reduce copyright issues and boost the supply of training materials needed for A.I. Here’s what to know about it.

What is synthetic data?

It’s data generated by artificial intelligence.

Do tech companies really want to train A.I. this way?

Yes. Rather than training A.I. models with text written by people, tech companies like Google, OpenAI and Anthropic hope to train their technology with data generated by other A.I. models.
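To make the idea concrete, here is a minimal sketch of what generating synthetic text can look like, using the open-source Hugging Face transformers library and a small public model. The prompts, model choice and sampling settings are illustrative assumptions, not any company's actual pipeline.

```python
# Minimal sketch: producing synthetic training text with a small public
# language model. Prompts, model choice and sampling settings here are
# illustrative assumptions only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Explain how photosynthesis works:",
    "Summarize the causes of the French Revolution:",
]

synthetic_corpus = []
for prompt in prompts:
    result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.8)
    synthetic_corpus.append(result[0]["generated_text"])

# synthetic_corpus now holds machine-written text that could, in
# principle, be folded back into a training dataset.
```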

Is synthetic data reliable?

Not exactly. A.I. models get things wrong and make stuff up. They have also shown that they pick up on the biases that appear in the internet data on which they have been trained. So if companies use A.I. to train A.I., they can end up amplifying their own flaws.

Is synthetic data widely used today?

No. Tech companies are experimenting with it. But because of the potential flaws of synthetic data, it is not a big part of the way A.I. systems are built today.

How do companies plan to get around those flaws?

The companies think they can refine the way synthetic data is created. OpenAI and others have explored a technique where two different A.I. models work together to generate synthetic data that is more useful and reliable.

One A.I. model generates the data. Then a second model judges the data, much like a human would, deciding whether the data is good or bad, accurate or not. A.I. models are actually better at judging text than writing it.

“If you give the technology two things, it is pretty good at choosing which one looks the best,” said Nathan Lile, the chief executive of the A.I. start-up SynthLabs.

The idea is that this will provide the high-quality data needed to train an even better chatbot.
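As a rough illustration of that generate-then-judge loop, the sketch below pairs a generator with a judge that picks the better of two candidate responses. The function names, judge prompt and plain-callable model interfaces are assumptions made for illustration, not any company's actual technique.

```python
# A sketch of the generate-then-judge idea: one model drafts two
# candidate texts, a second model picks the better one. The model
# interfaces (plain callables) and the judge prompt are hypothetical.
def generate_candidates(generator, prompt, n=2):
    """Draft n candidate responses for a prompt."""
    return [generator(prompt) for _ in range(n)]

def judge_pair(judge, prompt, a, b):
    """Ask the judge model which of two candidates looks better."""
    verdict = judge(
        f"Prompt: {prompt}\n"
        f"Response A: {a}\n"
        f"Response B: {b}\n"
        "Which response is more accurate and helpful? Answer 'A' or 'B'."
    )
    return a if verdict.strip().upper().startswith("A") else b

def build_synthetic_dataset(generator, judge, prompts):
    """Keep only the judge-preferred candidate for each prompt."""
    dataset = []
    for prompt in prompts:
        a, b = generate_candidates(generator, prompt)
        best = judge_pair(judge, prompt, a, b)
        dataset.append({"prompt": prompt, "response": best})
    return dataset

# Usage with stand-in models (real systems would call actual LLMs):
# dataset = build_synthetic_dataset(my_generator, my_judge, ["What is DNA?"])
```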

Does that approach work?

Sort of. It all comes down to that second A.I. model. How good is it at judging text?

Anthropic has been the most vocal about its efforts to make this work. It fine-tunes the second A.I. model using a “constitution” curated by the company’s researchers. This teaches the model to choose text that supports certain principles, such as freedom, equality and a sense of brotherhood, or life, liberty and personal security. Anthropic’s method is known as “Constitutional A.I.”

Here’s how two A.I. models work in tandem to produce synthetic data using a process like Anthropic’s.
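A rough Python sketch of such a loop follows. The constitution strings echo the principles mentioned above; the function names, prompts and model interfaces are illustrative assumptions based only on the public description, not Anthropic's actual implementation.

```python
# A rough sketch of a Constitutional-A.I.-style loop: a first model
# drafts a response, a second model critiques it against written
# principles, and the first model revises. The constitution text and
# model interfaces are illustrative assumptions only.
CONSTITUTION = [
    "Support freedom, equality and a sense of brotherhood.",
    "Respect life, liberty and personal security.",
]

def constitutional_rewrite(generator, critic, prompt):
    draft = generator(prompt)  # step 1: draft a response
    for principle in CONSTITUTION:
        # step 2: the second model critiques the draft against a principle
        critique = critic(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Point out any way the response conflicts with the principle."
        )
        # step 3: the first model revises the draft using the critique
        draft = generator(
            f"Revise the response to address the critique.\n"
            f"Response: {draft}\n"
            f"Critique: {critique}"
        )
    return draft  # the revised text can then become synthetic training data
```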

Even so, humans are needed to make sure the second A.I. model stays on track. That limits how much synthetic data this process can generate. And researchers disagree on whether a method like Anthropic’s will continue to improve A.I. systems.

The A.I. models that generate synthetic data were themselves trained on human-created data, much of which was copyrighted. So copyright holders can still argue that companies like OpenAI and Anthropic used copyrighted text, images and video without permission.

Jeff Clune, a computer science professor at the University of British Columbia who previously worked as a researcher at OpenAI, said A.I. models could ultimately become more powerful than the human brain in some ways. But they will do so because they learned from the human brain.

“To borrow from Newton: A.I. sees further by standing on the shoulders of giant human data sets,” he said.
