Google and Microsoft AI Chatbots Cite Each Other, Spreading Misinformation
The problem is that AI chatbots - including ChatGPT and Google Bard - are, so far, unable to distinguish between reliable, authoritative sources and spurious or deliberate disinformation.
Given the human propensity to treat information gleaned from the web as "true," this may launch further assaults on scientifically verifiable or otherwise trustworthy sources by those intent on benefiting from misinformation. JL
James Vincent reports in The Verge:
We have an early sign we’re stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail.

Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, impossible to map completely or debunk. All because Microsoft, Google, and OpenAI decided that market share is more important than safety.
If you don’t believe the rushed launch of AI chatbots by Big Tech has an extremely strong chance of degrading the web’s information ecosystem, consider the following:
Right now,* if you ask Microsoft’s Bing chatbot if Google’s Bard chatbot has been shut down, it says yes, citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event.
(*I say “right now” because in the time between starting and finishing writing this story, Bing changed its answer and now correctly replies that Bard is still live. You can interpret this as showing that these systems are, at least, fixable or that they are so infinitely malleable that it’s impossible to even consistently report their mistakes.)
Microsoft’s Bing chatbot thinks Google’s Bard chatbot has been shut down and incorrectly cites a news story to do so. Image: The Verge
But if reading all that made your head hurt, it should — and in more ways than one.
What we have here is an early sign we’re stumbling into a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities. In this case, the whole thing started because of a single joke comment on Hacker News. Imagine what you could do if you wanted these systems to fail.
It’s a laughable situation but one with potentially serious consequences. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, a miasma that is impossible to map completely or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.
These companies can put as many disclaimers as they like on their chatbots — telling us they’re “experiments,” “collaborations,” and definitely not search engines — but it’s a flimsy defense. We know how people use these systems, and we’ve already seen how they spread misinformation, whether inventing new stories that were never written or telling people about books that don’t exist. And now, they’re citing one another’s mistakes, too.
As a Partner and Co-Founder of Predictiv and PredictivAsia, Jon specializes in management performance and organizational effectiveness for both domestic and international clients. He is an editor and author whose works include Invisible Advantage: How Intangibles are Driving Business Performance. Learn more...