A Blog by Jonathan Low

May 25, 2023

OpenAI Supports Regulating AI 'Superintelligence,' Which Conveniently Doesn't Exist

OpenAI's push for regulation is being attacked by many in the industry as a diversion intended to sap momentum from tech experts' calls for a 'pause' in development until the risks are better assessed and frameworks are established for managing them.

Either way, there does appear to be a growing consensus for more focus on risks and threats and less breathless hype about how wonderful it's all going to be. JL

Ellen Francis reports in the Washington Post:

The leaders of OpenAI, the creator of chatbot ChatGPT, are calling for the regulation of “superintelligence” and AI, suggesting an equivalent to the world’s nuclear watchdog would help reduce the “existential risk” posed by the technology. An international regulator would become necessary to “inspect systems, require audits, test for compliance with safety standards, (and) place restrictions on degrees of deployment and levels of security.” OpenAI’s business decisions contrast with these safety warnings, as its rapid rollout has created an AI arms race, pressuring companies to release products while policymakers are still grappling with risks.

The leaders of OpenAI, the creator of viral chatbot ChatGPT, are calling for the regulation of “superintelligence” and artificial intelligence systems, suggesting an equivalent to the world’s nuclear watchdog would help reduce the “existential risk” posed by the technology.


In a statement published on the company website this week, co-founders Greg Brockman and Ilya Sutskever, as well as CEO Sam Altman, argued that an international regulator would eventually become necessary to “inspect systems, require audits, test for compliance with safety standards, (and) place restrictions on degrees of deployment and levels of security.”

They made a comparison with nuclear energy as another example of a technology with the “possibility of existential risk,” raising the need for an authority similar in nature to the International Atomic Energy Agency (IAEA), the world’s nuclear watchdog.

Over the next decade, “it’s conceivable that … AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” the OpenAI team wrote. “In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there.”

The statement echoed Altman’s comments to Congress last week, where the U.S.-based company’s CEO also testified to the need for a separate regulatory body.

Critics have warned against trusting calls for regulation from leaders in the tech industry who stand to profit from continued development without restraints. Some say OpenAI’s business decisions contrast with these safety warnings, as its rapid rollout has created an AI arms race, pressuring companies such as Google parent Alphabet to release products while policymakers are still grappling with risks.

Few Washington lawmakers have a deep understanding of emerging technology or AI, and AI companies have lobbied them extensively, The Washington Post previously reported, as supporters and critics hope to influence discussions on tech policy.

Some have also warned against the risk of hampering U.S. ability to compete on the technology with rivals — particularly China.

The OpenAI leaders warn in their note against pausing development, adding that “it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing.”

In his first congressional testimony last week, Altman warned that AI could “cause significant harm to the world” while asserting that his company would continue to roll out the technology.

Altman’s stated willingness to work with lawmakers received a relatively warm reception in Congress, as countries including the United States acknowledge they must balance supporting innovation against managing a technology that is raising concerns about privacy, safety, job losses and misinformation.

A witness at the hearing, New York University professor emeritus Gary Marcus, highlighted the “mind-boggling” sums of money at stake and described OpenAI as “beholden” to its investor Microsoft. He criticized what he described as the company’s divergence from its mission of advancing AI to “benefit humanity as a whole” without the constraints of financial pressure.

The popularization of ChatGPT and generative AI tools, which create text, images or sounds, has dazzled users and also added urgency to the debate on regulation.

At a G-7 summit on Saturday, leaders of the world’s largest economies made clear that international standards for AI were a priority, but they have yet to produce substantive conclusions on how to address the risks.

The United States has so far moved slower than others, particularly in Europe, although the Biden administration says it has made AI a key priority. Washington policymakers have not passed comprehensive tech laws for years, raising questions over how quickly and effectively they can develop regulations for the AI industry.

In the immediate term, the ChatGPT makers called for “some degree of coordination” among companies working on AI research “to ensure that the development of superintelligence” allows for safe and “smooth integration of these systems with society.” The companies could, for example, “collectively agree … that the rate of growth in AI capability at the frontier is limited to a certain rate per year,” they said.

“We believe people around the world should democratically decide on the bounds and defaults for AI systems,” they added — while admitting that “we don’t yet know how to design such a mechanism.”
