A Blog by Jonathan Low


Mar 13, 2026

Anthropic's Standoff With Pentagon Has Provided Talent War Advantage

Anthropic may have lost $200 million in Pentagon contracts, but perhaps not surprisingly, it has become the darling of the tech (and especially AI) world for its principled stance. 

At least two senior OpenAI executives have quit since that company signed what many regard as a craven deal with the DoD, a deal that has forced its leadership into damage control over the perceived sell-out. Tech was, back in the dotcom days, a haven for people who claimed to have values and principles. That image has become tarnished over the years, but AI may be a new source of such behavior. When Meta attempted to raid Anthropic last summer and threw vast sums around, Anthropic refused to change its compensation and ended up losing only two people. This fight, like the war with Iran that Anthropic's models have helped manage, could be a long one. JL

Meghan Bobrowsky reports in the Wall Street Journal:

Anthropic's standoff with the Defense Department has cost it a customer, but has brought an advantage in the talent war between rival AI labs. High-level employees have resigned from OpenAI, citing values and principles, since it reached a deal with the Pentagon. OpenAI's recent defections offer a reminder that values have frequently trumped money for the most sought-after prospects. A group of employees of OpenAI and Google DeepMind filed a brief asking the court to side with Anthropic in its suit against the DoD. Last summer, Meta mounted an aggressive campaign to woo researchers and engineers from rival labs, including OpenAI and Anthropic, dangling compensation worth hundreds of millions of dollars. Anthropic told employees it wasn't going to alter compensation to respond to Meta. It lost only two employees to Meta, versus "several dozen" from OpenAI.

Anthropic’s standoff with the Defense Department has cost it Uncle Sam as a customer, but it has brought a momentary advantage in the ferocious talent war between rival artificial intelligence labs. 

At least two high-level employees have resigned from OpenAI, citing values and principles, since the company said it had reached a deal with the Pentagon to deploy its AI models in classified settings. Anthropic was renegotiating its own Pentagon deal and seeking to weave in new safeguards around domestic surveillance and autonomous weapons when talks broke down, prompting the Trump administration to declare the company a supply-chain risk.

Caitlin Kalinowski, who oversaw hardware within OpenAI’s robotics division, wrote in a post on X over the weekend that she was quitting over the contract.

“AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got,” she wrote. “This was about principle, not people.”

In a separate post a few days earlier, Max Schwarzer, a vice president of research at OpenAI, wrote on X that he was leaving to join Anthropic.

“Many of [the] people I most trust and respect have joined Anthropic over the last couple of years, and I’m excited to work with them again. I have also been very impressed with Anthropic’s talent, research taste and values, and I’m excited to be part of what the company does next!” he wrote, adding that he was also leaving to return to individual contributor research work. 

“We recognize that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world,” a spokeswoman for OpenAI said. 

Anthropic has seen a surge of public goodwill since Chief Executive Dario Amodei said the company wouldn’t compromise its red lines to do a deal. Its Claude chatbot hit no. 1 in downloads on the Apple App Store, surpassing OpenAI’s ChatGPT for the first time. San Franciscans chalked messages of appreciation outside its headquarters. Even pop star Katy Perry posted a screenshot of her subscribing to a paid Claude plan with a heart around the page.

But no front of competition has mattered more than the red-hot contest for technical talent. And while researchers juggling competing offers have been able to command pay packages worth hundreds of millions of dollars, OpenAI’s recent defections offer a reminder that values have frequently trumped money for the most sought-after prospects. 

Anthropic was founded in 2021 by former OpenAI researchers, including Amodei, who felt that the company was rushing to build a commercial product without prioritizing safety. Since then, it has cultivated a reputation as a haven for researchers focused on the potential downsides of powerful AI, even as its models have established themselves as the preference of a wide range of business customers. 

“I think leading from values first has kind of allowed us to build a team that really believes in what we’re doing,” Amodei said last week during a private session at Morgan Stanley’s Technology, Media & Telecom conference in San Francisco. 

The Pentagon sought to use Anthropic’s customer relationships as a point of leverage in threatening to declare Anthropic a supply-chain risk, saying the designation would cut off other Defense Department contractors from working with the company. But Anthropic said the status applies only to the work contractors do for the department, and on Monday it sued to challenge the designation. 

A group of 37 employees of OpenAI and Google DeepMind filed a brief asking the court to side with Anthropic.

A test of how much Anthropic's employees care about the company's ethical orientation arrived last summer when Meta Platforms, seeking to supercharge its model development, mounted an aggressive campaign to woo researchers and engineers from rival labs, including OpenAI and Anthropic, dangling compensation packages in some cases worth hundreds of millions of dollars.

OpenAI countered Meta's offers with bonuses, and in December it got rid of its vesting cliff, meaning new employees no longer had to wait a set period before their stock began to vest.

Anthropic, according to Amodei, told employees it wasn’t going to alter employees’ compensation structure to respond to Meta’s blitz. “The thing we said to everyone was look, you’re here for the mission,” he said in the Morgan Stanley session. “We have a system and we’re going to hold to that. If you leave, you leave.” 

Anthropic ended up losing only two employees to Meta, versus “several dozen” from OpenAI, he said. He noted that his company not only has all of its original co-founders but almost all of its first 20 employees. 

“If you look at our retention rate, it’s just the best in the industry,” Amodei said. “We’re just clearly doing something different.” 

OpenAI Chief Executive Sam Altman in Texas last year. Kyle Grillot/Bloomberg News

In the days after OpenAI signed its contract with the Pentagon, the company went into damage control mode. Saying the rush to get a deal had “looked opportunistic and sloppy,” Chief Executive Sam Altman posted messages on X trying to explain and assuage concerns.

Over the weekend following the contract signing, Altman hosted an “ask-me-anything” session on X, where he and two other employees answered questions. And on the next business day, Altman posted that he had been working with the Department of Defense to make some additions to the agreement “to make our principles very clear.”

The following day, Schwarzer resigned. Kalinowski announced she had quit a few days later.

“We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons,” the OpenAI spokeswoman said.

Anthropic isn’t immune to scrutiny of its own. 

An Anthropic safety researcher, Mrinank Sharma, said in early February that he was leaving the company to explore a poetry degree and wrote in a letter to colleagues that the world was in peril from AI. 

Sharma’s decision to leave was related in part to a recent company decision to modify a long-held safety policy, The Wall Street Journal previously reported.
