A stream of staffers at Anthropic, OpenAI and xAI has resigned this week, many citing AI safety concerns.
On Wednesday, Zoë Hitzig, a former researcher at OpenAI, said she quit her job at the company after it started testing ads on ChatGPT.
Yet instead of just updating her job status on LinkedIn or texting friends and colleagues about her decision, Hitzig announced her resignation in the New York Times, writing a guest essay published yesterday titled "OpenAI is Making the Mistakes Facebook Made. I Quit."
Hitzig said that while she doesn't believe ads are "immoral or unethical," she has "deep reservations about OpenAI's strategy."
"People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent," she wrote.
A flurry of public resignations has gripped the artificial-intelligence industry this week. Safety researchers, co-founders and other insiders at top AI companies have recently chosen to leave. Several did so through heartfelt announcements on X, sounding alarms over AI's existential risks. Others have left their posts more quietly.
With AI innovation accelerating, insider exits are heightening anxieties about the pace of technological change and its potential safety impacts.
Hitzig and OpenAI did not immediately respond to requests for comment. But Hitzig's resignation letter in the New York Times was reminiscent of Greg Smith's "Why I Am Leaving Goldman Sachs" column in 2012, which tapped into anxieties around the 2008-09 financial crisis and the bailout of big Wall Street banks that followed it. The column led to an appearance on "60 Minutes" and a bestselling book for Smith.
As high as the stakes seemed then, the implications of AI may prove even more consequential. On Feb. 9, Mrinank Sharma, an AI researcher at Anthropic who led the company's Safeguards research team, announced his resignation.
Reflecting on his time at Anthropic in a note to colleagues posted on his X account, he said: "I've repeatedly seen how hard it is to truly let our values govern our actions."
He wrote in a comment to the post that he will be moving back to the U.K. to let himself "become invisible for a period of time."
Sharma did not respond immediately to a request for comment.
See also: Despite questions about AI's long-term profitability, OpenAI and Anthropic accelerate investment
On Feb. 10, Tony Wu, a former xAI co-founder, announced in an X post his resignation from the Elon Musk-led company. Within 24 hours, another xAI co-founder, Jimmy Ba, also resigned.
"We are heading to an age of 100x productivity with the right tools," Ba wrote in a post on X. He added: "It's time to recalibrate my gradient on the big picture."
These prominent exits follow the merger of xAI and SpaceX earlier this month, though details about the reasons for the departures remain unclear. xAI did not immediately respond to a request for comment.
Read more: OpenAI reportedly eyeing an IPO by year's end, ahead of Anthropic
Staff turnover in the AI world has long been common, however. Ba and Wu's departures come after half of xAI's 12 co-founders left the company in recent years.
Jan Leike, a researcher at Anthropic who formerly worked at OpenAI and DeepMind, according to his X bio and website, left OpenAI in 2024, also sounding alarms over AI safety concerns.
"I joined because I thought OpenAI would be the best place in the world to do this research," he said in a 2024 post on X.
On his website, Leike writes that his research focuses on solving the "hard problem of alignment."
"How can we train AI systems to follow human intent on tasks that are difficult for humans to evaluate directly?"
Leike added in his X post: "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."
The recent high-profile exits haven't always been initiated by staff. In January, OpenAI fired Ryan Beiermeister, one of the company's top safety executives, after she voiced concerns over the rollout of AI erotica in ChatGPT. The company told Beiermeister that her termination was tied to an allegation of sexual discrimination against a colleague, which she denies, according to the Wall Street Journal.
Dimitri Zabelin, a senior AI analyst at PitchBook, said in an interview that unless AI safety concerns bring regulatory hurdles that could meaningfully impact financial returns for AI companies, alarm bells over safety are unlikely to change general corporate or investment direction.
"[T]he topic of AI safety has not merited a sufficient level of concern amongst investors that would meaningfully alter fundraising trends and capital inflows," he said.
Zabelin noted that if resignations of technical staff begin to affect AI models' ability to operate well, "then you could see that begin to be reflected in investment flows and subsequent valuations."
Feb 14, 2026
More Senior AI Firm Staff Are Resigning, Issuing Warnings About Risks
While resignations at AI startups and more established firms have been common due to changing models, internal disputes and working conditions, a spate of recent senior departures is being tied more to concerns about AI risks, user manipulation and safety issues, with much of that driven by financial pressure from investors and founders bent on growth and domination at any cost.
It is increasingly apparent that much of the strategic decision-making is based as much on a desire to dominate public behavior as on financial returns, though those are expected to follow as a second-order 'benefit.' There is also concern that the combination of excessive demands and key technical staff resignations could begin to affect model performance. JL