A Blog by Jonathan Low


Oct 6, 2019

What Happened When The Economist Mag's Essay Contest Got An AI Submission

It didn't win, but it wasn't rejected outright either. And no writer's block. JL

Kelsey Piper reports in Vox:

More than 2,400 people responded, from over 110 countries. The Economist slipped one essay into the stack written by an artificial intelligence. On the whole, the judges were fooled, but they weren't especially impressed. Judges were asked to vote "yes," "no," or "maybe" for each of the essays they read — essays with the most votes were moved on to the next round. The AI essay garnered 4 "nos" and 2 "maybes." "It is strongly worded and backs up claims with evidence, but the idea is not incredibly original," wrote one judge who rated the essay a "maybe."
Earlier this summer, the Economist announced a competition for young people. They asked contestants to answer this question: “What fundamental economic and political change, if any, is needed for an effective response to climate change?”
More than 2,400 people responded, from over 110 countries. And the Economist slipped one essay into the stack of submissions that their judges would review: an essay written by an artificial intelligence.
The AI in question was GPT-2, a language-generating system developed by San Francisco AI lab OpenAI and announced this spring. The team at OpenAI hasn’t released the whole system to the public yet as they continue to research ways it might be abused — to spam reviews, boost bots, or spread disinformation. The team at the Economist fed a version of GPT-2 their prompt, got six different 400-word texts, and spliced three of them together to make a submission for their contest.
You can read the bot’s submission over at the Economist. Perhaps more entertainingly, you can read the judges’ responses. On the whole, the judges were fooled, but they weren’t especially impressed. Judges were asked to vote “yes,” “no,” or “maybe” for each of the essays they read — essays with the most votes were moved on to the next round of judging. The AI essay garnered 4 “nos” and 2 “maybes.”
“I do not think it shows a strong understanding of existing climate policy nor of the scientific literature coming out of the IPCC,” wrote one no-voting judge.
“It is strongly worded and backs up claims with evidence, but the idea is not incredibly original,” wrote another judge who rated the essay a “maybe.”
So the AI did not emerge victorious — but even this mediocre performance is a stunning change from what AI was capable of just a few years ago. Until recently, chatbots were very obviously bots, producing simplistic, confused, incoherent text that no one would mistake for the work of a competent human writer.
Now AI systems can be used by humans to generate text that seems perfectly adequate as an essay for a competition — not award-winning, but not suspiciously bad. (It seems awfully likely that the use of systems like GPT-2 for cheating on homework will eventually be widespread.)
And since the release of GPT-2, OpenAI’s competitors have developed highly sophisticated language models, too. In September, Salesforce announced a new language model called CTRL, even bigger than the ones OpenAI has released, which allows humans more options for tuning the text they want to generate — for example, by telling the AI they want it to write news, or horror, or poetry.
As researchers continue to make these models better and better, it’ll get even harder to tell the difference between an essay written by a robot and an essay written by a human. And natural language processing isn’t the only field where AI has been advancing rapidly, forcing us to consider the implications of our current advances and the alarming potential of future ones. AIs have also improved rapidly at generating faces of people who don’t exist, at enhancing low-resolution photographs, at strategy wargaming, and at taking science tests.
Perhaps an AI that can really tell us what to do about global warming will arrive in time for us to make use of its assistance.
