A Blog by Jonathan Low

 

Jul 6, 2022

The Reason It's Time To Stop the AI Hype and Adjust Expectations

Hyping capabilities beyond real performance has long been a feature of Silicon Valley culture. Investors and the public have tolerated a certain amount of it because the tech industry often met or exceeded expectations. 

But there is growing concern within the AI community - and in tech generally - that the hype about AI is becoming misleading, which could lead to misinformation about opportunities, minimization of potential problems - and misallocation of resources at a time of economic uncertainty. As a result, a growing number of senior people in the field are calling for less exuberant promotional rhetoric and a more sober assessment of possibilities. JL

Karen Hao and Miles Kruppa report in the Wall Street Journal:

After years of companies emphasizing the potential of artificial intelligence, researchers say it is time to reset expectations. AI ethicists and researchers warn that businesses are exaggerating the capabilities - hype that is brewing widespread misunderstanding and distorting policy makers’ views of the power and fallibility of such technology. The stakes have heightened because AI is now embedded everywhere and involves more companies whose software - email, search engines, newsfeeds, voice assistants - permeates digital lives.

After years of companies emphasizing the potential of artificial intelligence, researchers say it is now time to reset expectations.

With recent leaps in the technology, companies have developed more systems that can produce seemingly humanlike conversation, poetry and images. Yet AI ethicists and researchers warn that some businesses are exaggerating the capabilities—hype that they say is brewing widespread misunderstanding and distorting policy makers’ views of the power and fallibility of such technology.

“We’re out of balance,” says Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, a Seattle-based research nonprofit.

 

He and other researchers say that imbalance helps explain why many were swayed earlier this month when an engineer at Alphabet Inc.’s Google argued, based on his religious beliefs, that one of the company’s artificial-intelligence systems should be deemed sentient.

 

The engineer said the chatbot had effectively become a person with the right to be asked for consent to the experiments being run on it. Google suspended him and rejected his claim, saying company ethicists and technologists have looked into the possibility and dismissed it.

The belief that AI is becoming—or could ever become—conscious remains on the fringes in the broader scientific community, researchers say. In reality, artificial intelligence encompasses a range of techniques that largely remain useful for a range of uncinematic back-office logistics like processing data from users to better target them with ads, content and product recommendations.

Over the past decade, companies like Google, Facebook parent Meta Platforms Inc., and Amazon.com Inc. have invested heavily in advancing such capabilities to power their engines for growth and profit.

Google, for instance, uses artificial intelligence to better parse complex search prompts, helping it deliver relevant ads and web results.

A few startups have also sprouted with more grandiose ambitions. One, called OpenAI, raised billions from donors and investors including Tesla Inc. chief executive Elon Musk and Microsoft Corp. in a bid to achieve so-called artificial general intelligence, a system capable of matching or exceeding every dimension of human intelligence. Some researchers believe this to be decades in the future, if not unattainable.

 

Competition among these firms to outpace one another has driven rapid AI advancements and led to increasingly splashy demos that have captured the public imagination and drawn attention to the technology.

OpenAI’s DALL-E, a system that can generate artwork based on user prompts, like “McDonald’s in orbit around Saturn” or “bears in sports gear in a triathlon,” has in recent weeks spawned many memes on social media.

Google has since followed with its own systems for text-based art generation.

While these outputs can be spectacular, a growing chorus of experts warns that companies aren’t adequately tempering the hype.

Margaret Mitchell, who co-led Google’s ethical AI team before the company fired her after she wrote a critical paper about its systems, says part of the search giant’s sell to shareholders is that it is the best in the world at AI.

Ms. Mitchell, now at an AI startup called Hugging Face, and Timnit Gebru, Google’s other ethical AI co-lead—also forced out—were some of the earliest to caution about the dangers of the technology.

 

In their last paper written at the company, they argued that the technologies would at times cause harm, as their humanlike capabilities mean they have the same potential for failure as humans. Among the examples cited: a mistranslation by Facebook’s AI system that rendered “good morning” in Arabic as “hurt them” in English and “attack them” in Hebrew, leading Israeli police to arrest the Palestinian man who posted the greeting, before realizing their error.

Internal documents reviewed by The Wall Street Journal as part of The Facebook Files series published last year also revealed that Facebook’s systems failed to consistently identify first-person shooting videos and racist rants, removing only a sliver of the content that violates the company’s rules.

Facebook said improvements in its AI have been responsible for drastically shrinking the amount of hate speech and other content that violates its rules.

Google said it fired Ms. Mitchell for sharing internal documents with people outside the company. The company’s head of AI told staffers Ms. Gebru’s work was insufficiently rigorous.

The dismissals reverberated through the tech industry, prompting thousands within and outside of Google to denounce in a petition what they called its “unprecedented research censorship.” CEO Sundar Pichai said he would work to restore trust on these issues and committed to doubling the number of people studying AI ethics.

The gap between perception and reality isn’t new. Mr. Etzioni and others pointed to the marketing around Watson, the AI system from International Business Machines Corp. that became widely known after besting humans on the quiz show “Jeopardy.” After a decade and billions of dollars in investment, the company said last year it was exploring the sale of Watson Health, a unit whose marquee product was supposed to help doctors diagnose and cure cancer.

The stakes have only heightened because AI is now embedded everywhere and involves more companies whose software—email, search engines, newsfeeds, voice assistants—permeates our digital lives.

After its engineer’s recent claims, Google pushed back on the notion that its chatbot is sentient.

The company’s chatbots and other conversational tools “can riff on any fantastical topic,” said Google spokesperson Brian Gabriel. “If you ask what it’s like to be an ice-cream dinosaur, they can generate text about melting and roaring and so on.” That isn’t the same as sentience, he added.

Blake Lemoine, the now-suspended engineer, said in an interview that he had compiled hundreds of pages of dialogue from controlled experiments with a chatbot called LaMDA to support his research, and he was accurately presenting the inner workings of Google’s programs.

 

“This is not an exaggeration of the nature of the system,” Mr. Lemoine said. “I am trying to, as carefully and precisely as I can, communicate where there is uncertainty and where there is not.”

Mr. Lemoine, who described himself as a mystic incorporating aspects of Christianity and other spiritual practices such as meditation, has said he is speaking in a religious capacity when describing LaMDA as sentient.

Elizabeth Kumar, a computer-science doctoral student at Brown University who studies AI policy, says the perception gap has crept into policy documents. Recent local, federal and international regulations and regulatory proposals have sought to address the potential of AI systems to discriminate, manipulate or otherwise cause harm in ways that assume a system is highly competent. They have largely left out the possibility of harm from such AI systems’ simply not working, which is more likely, she says.

Mr. Etzioni, who is also a member of the Biden administration’s National AI Research Resource Task Force, said policy makers often struggle to grasp the issues. “I can tell you from my conversations with some of them, they’re well-intentioned and ask good questions, but they’re not super well-informed,” he said.
