A Blog by Jonathan Low


Mar 25, 2023

Google and Microsoft AI Chatbots Cite Each Other, Spreading Misinformation

The problem is that AI chatbots - including ChatGPT and Google Bard - are, so far, unable to distinguish between reliable, authoritative sources and spurious content or deliberate disinformation.

Given the human propensity to treat information gleaned from the web as "true," this may invite further assaults on scientifically verifiable or otherwise trustworthy sources by those intent on profiting from misinformation. JL

James Vincent reports in The Verge:

We have an early sign we’re stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, impossible to map completely or debunk. All because Microsoft, Google, and OpenAI decided that market share is more important than safety.

If you don’t believe the rushed launch of AI chatbots by Big Tech has an extremely strong chance of degrading the web’s information ecosystem, consider the following:

Right now,* if you ask Microsoft’s Bing chatbot if Google’s Bard chatbot has been shut down, it says yes, citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event.

(*I say “right now” because in the time between starting and finishing writing this story, Bing changed its answer and now correctly replies that Bard is still live. You can interpret this as showing that these systems are, at least, fixable or that they are so infinitely malleable that it’s impossible to even consistently report their mistakes.)

[Image: Microsoft's Bing chatbot claims Google's Bard chatbot has been shut down, incorrectly citing a news story as evidence. Image: The Verge]

But if reading all that made your head hurt, it should — and in more ways than one.

What we have here is an early sign we’re stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail.

It’s a laughable situation but one with potentially serious consequences. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, a miasma that is impossible to map completely or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.

These companies can put as many disclaimers as they like on their chatbots — telling us they’re “experiments,” “collaborations,” and definitely not search engines — but it’s a flimsy defense. We know how people use these systems, and we’ve already seen how they spread misinformation, whether inventing new stories that were never written or telling people about books that don’t exist. And now, they’re citing one another’s mistakes, too.
