A Blog by Jonathan Low


Mar 30, 2023

Why Tech Leaders Are Calling For Extraordinary 'Pause In Giant AI Experiments'

Not that this is likely to influence OpenAI backer Microsoft, Bard owner Google, or anyone else in tech licking their chops at the potential power and profit to be derived from generative AI.

But the fact that the letter was sent under the auspices of the Future of Life Institute should probably give every responsible person pause. JL

James Vincent reports in The Verge:

A number of well-known AI researchers - and Elon Musk - have signed an open letter calling on AI labs around the world to pause development of large-scale AI systems, citing fears over the “profound risks to society and humanity” they claim this software poses. The letter notes that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.” Signatories include Yuval Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, Andrew Yang, and a number of well-known AI researchers and CEOs.

A number of well-known AI researchers — and Elon Musk — have signed an open letter calling on AI labs around the world to pause development of large-scale AI systems, citing fears over the “profound risks to society and humanity” they claim this software poses.

The letter, published by the nonprofit Future of Life Institute, notes that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” says the letter. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Signatories include author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and a number of well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque. The full list of signatories can be seen here, though new names should be treated with caution as there are reports of names being added to the list as a joke (e.g. OpenAI CEO Sam Altman, an individual who is partly responsible for the current race dynamic in AI).

The letter is unlikely to have any effect on the current climate in AI research, which has seen tech companies like Google and Microsoft rush to deploy new products, often sidelining previously avowed concerns over safety and ethics. But it is a sign of the growing opposition to this “ship it now and fix it later” approach; an opposition that could potentially make its way into the political domain for consideration by actual legislators.

As noted in the letter, even OpenAI itself has expressed the potential need for “independent review” of future AI systems to ensure they meet safety standards. The signatories say that this time has now come.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” they write. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”
