A Blog by Jonathan Low


Jun 11, 2024

OpenAI Becomes 'Just Another Facebook' As Controversies Pile Up

OpenAI is now being called 'a next-generation Facebook' because of the scandals and controversies piling up around its behavior.

The arrogance of its leaders is also considered comparable to that of Facebook executives, all of which is detrimental to its longer-term prospects, since customers and investors tend to prefer less public turmoil. JL

Matteo Wong reports in The Atlantic:

OpenAI has become a full-fledged tech behemoth—a next-generation Facebook with an ego to match. Former researchers with OpenAI, at least one former board member, and national regulators have accused the company of, or are investigating it for, putting profit over safety, retaliating against employees, and stifling competition. The problem is that OpenAI is less building the future than adopting a tried-and-true Silicon Valley model: one accused of atrocious labor conditions, of sparking a mental-health crisis, of sucking the life out of the internet, and of enabling genocide; one facing federal antitrust lawsuits; and one whose employees speak out against unethical business practices.

OpenAI appears to be in the midst of a months-long revolt from within. The latest flash point came yesterday, when a group of 11 current and former employees—plus two from other firms—issued a public letter declaring that leading AI companies are not to be trusted. “The companies are behaving in a way that is really not in the public interest,” William Saunders, a signatory who, like several others on the letter, left OpenAI earlier this year, told me.

The letter tells a familiar story of corporate greed: AI could be dangerous, but tech companies are sacrificing careful safety procedures for speedy product launches; government regulation can’t keep up, and employees are afraid to speak out. Just last month, Vox reported on a nondisclosure and non-disparagement agreement that OpenAI employees were asked to sign upon leaving the company. Violators risk losing all their vested equity in the company, which can amount to millions of dollars—providing a clear reason for workers to remain silent, even about issues of significant societal concern. (An OpenAI spokesperson told me in an emailed statement that all former employees have been released from the non-disparagement clause, and that such an obligation has been scrubbed from future offboarding paperwork.)

“AI companies have strong financial incentives to avoid effective oversight,” the letter states, “and we do not believe bespoke structures of corporate governance are sufficient to change this.” To remedy this problem, current and former employees asked advanced-AI companies to establish a “Right to Warn” about their products from within and commit to substantive, independent oversight.

The Right to Warn letter is only the latest in a string of high-profile incidents suggesting that OpenAI is no longer committed to its founding goal—to build AI that “benefits all of humanity”—and is instead in thrall to investors. OpenAI leaders’ talk of a possible AI doomsday (or, conversely, utopia) has faded into the background. Instead, the company is launching enterprise software, beholden to Microsoft’s $13 billion investment, reportedly closing a massive deal with Apple, and debuting consumer products. In doing so, it has sparked other controversies, including a generative-AI assistant that sounds uncannily like Scarlett Johansson, despite her repeated refusal to give the company permission to use her voice. (The company denies copying Johansson’s voice and has paused that particular bot.) Former researchers with OpenAI, at least one former board member, and national regulators have accused the company of, or are investigating it for, putting profit over safety, retaliating against employees, and stifling competition. OpenAI has, in other words, become a full-fledged tech behemoth—a next-generation Facebook with an ego to match. (The Atlantic recently entered into a corporate partnership with OpenAI, independent of the editorial side of this organization.)


The first signs of a schism at OpenAI date back three years, when a group of high-ranking employees left to form a rival start-up, Anthropic, that claimed to place a greater emphasis on safely building its technology. Since then, a number of outside academics, pundits, and regulators have criticized OpenAI for releasing AI products rife with well-known risks, such as output that was misleading or hateful. Internal dissent began in earnest this past November, when Sam Altman was removed as CEO, reportedly because of concerns that he was steering the company toward commercialization and away from its stated “primary fiduciary duty [to] humanity.” (A review commissioned by OpenAI from the law firm WilmerHale later found that the ouster was not related to “product safety or security,” according to OpenAI’s summary of the investigation, although the report itself is not public.) Investors led by Microsoft pressured OpenAI to reinstate Altman, which it did within days, alongside vague promises to be more responsible.

Then, last month, the company disbanded the internal group tasked with safety research, known as the “superalignment team.” Some of the team’s most prominent members publicly resigned, including its head, Jan Leike, who posted on X that “over the past years, safety culture and processes have taken a backseat to shiny products.” Fortune reported that OpenAI did not provide anywhere near the resources it had initially and publicly promised for safety research. Saunders, who also worked on superalignment, said he resigned when he “lost hope a few months before Jan did.” Elon Musk—who was one of OpenAI’s founding investors in 2015 and started his own rival firm, xAI, last year—sued OpenAI in March for breaching, in favor of rapid commercialization, an alleged contractual obligation to build AI for the good of humanity.


All the while, OpenAI has steadily released new products and announced new deals with some of the biggest companies around. That all but two of the Right to Warn letter’s signatories are former OpenAI employees feels especially telling: This company, more than any other, has set the tone for the generative-AI era. But there are others, of course. Google and Anthropic, current and former employees of which also signed yesterday’s letter, have launched a slew of consumer- and business-facing AI tools in recent months, including the disastrous rollout of AI to Google Search. (The company appears to be removing AI Overviews from many queries, at least for the time being.)

An OpenAI spokesperson told me that the company takes a “scientific approach to addressing risk” and cited several instances in which OpenAI has delayed product launches until proper safeguards were in place. The company has previously noted that raising money is the only way to acquire the massive resources needed to build the advanced AI it believes could usher in a better future. But Saunders told me that “no one should trust the company saying that they’ve done this process properly.”


The threats that Leike, Musk, Saunders, and dozens of other insiders are worried about are largely speculative. Perhaps, some day, self-replicating, autonomous chatbots will design bioweapons, launch nuclear attacks, or disrupt critical infrastructure. Tech executives themselves have warned of these “existential” risks in the past; Altman has likened his company’s work to the invention of nuclear weapons. Ex-employees might disagree over how to handle those dangers, but they tend to agree that the company is building godlike computer programs. Critics of OpenAI’s approach to existential harms don’t have faith in the company to regulate itself, but they do have faith in the myth of Altman as the world’s savior or destroyer. The less fantastical but already present dangers of AI—its capacity to foment misinformation and bias, monopolize the internet, and take jobs—tend to vanish in these discussions.

But the Right to Warn letter indicts not a specific problem with generative AI so much as the entire process by which it is currently developed: a small number of companies “outrunning the law in the sense that they are developing the [technology] faster than regulation can be passed,” Saunders told me. Employees, the signatories believe, will have to fill in the gap and thus need “a right [to warn] no matter what your concern is,” he said. The problem, in other words, is that OpenAI is less building the future than adopting a tried-and-true Silicon Valley model. If today’s tech companies had invented electricity, we’d all be paying exorbitant prices to constantly upgrade wiring and outlets that are super sleek, pump out massive voltage, and occasionally short-circuit, singe our fingertips, or explode.

Just as scary as a profit-seeking company building a new god is yet another profit-seeking company shoving whatever software it wants down the global population’s collective throat. Apple once aimed to “advance humankind,” Google’s unofficial motto—until it became obviously farcical—was “Don’t be evil,” and Facebook’s mission is to “bring the world closer together.” Now these are among the most valuable companies in the world, dominating smartphones, online ads, and social media, respectively. They have been accused of atrocious labor conditions, sparking a generational mental-health crisis, sucking the life out of the internet, and enabling genocide around the globe. All three face federal antitrust lawsuits, and all three have had former employees speak out against unethical business practices following alleged or feared retaliation. Eschatology gives way to escrow.


Perhaps the most famous of these whistleblowers was Frances Haugen, who in 2021 released thousands of documents revealing that Facebook was aware of, and ignored, the ways in which it was undermining democracy around the world. The lawyer who represented her is now representing the Right to Warn signatories. Thankfully, their letter is less of an autopsy, as Haugen’s was, and more of an early diagnosis of still more havoc that Silicon Valley may wreak.
