A Blog by Jonathan Low


Feb 17, 2026

Pentagon Threatens Anthropic Over Its Refusal To Permit Surveillance Of Americans, Autonomous Weapons Development

The Trump Defense Department is threatening to sever its contractual relationship with Anthropic over the AI company's refusal to permit use of its models for surveillance of Americans and for autonomous weapons development. While the loss of government contracts could be significant, especially if the administration expands its threat to cut off any company using Anthropic's models, the reality is that Anthropic is perceived to be far ahead of competitors for the myriad uses the military envisions.

In addition to pulling ahead of competitors as the professional AI of choice, Anthropic and its founder have been the most outspoken about the risks of AI, particularly in the hands of unscrupulous users. It is likely that some sort of compromise will be crafted - but it is also likely that the administration will then violate the terms of that agreement to suit its own ends. JL

The Pentagon is considering severing its relationship with Anthropic over the AI firm's insistence on maintaining some limitations on how the military uses its models. The Pentagon is pushing leading AI labs to let the military use their tools in the most sensitive areas of weapons development, intelligence collection, and battlefield operations. Anthropic has not agreed to those terms, and insists two areas remain off limits: the mass surveillance of Americans and fully autonomous weaponry. The Pentagon is getting fed up after months of difficult negotiations. (But) officials concede it would be difficult for the military to replace Claude, because "the other companies are behind" (for) specialized government applications.

The Pentagon is considering severing its relationship with Anthropic over the AI firm's insistence on maintaining some limitations on how the military uses its models, a senior administration official told Axios.

Why it matters: The Pentagon is pushing four leading AI labs to let the military use their tools for "all lawful purposes," even in the most sensitive areas of weapons development, intelligence collection, and battlefield operations. Anthropic has not agreed to those terms, and the Pentagon is getting fed up after months of difficult negotiations.

  • Anthropic insists that two areas remain off limits: the mass surveillance of Americans and fully autonomous weaponry.

The big picture: The senior administration official argued there is considerable gray area around what would and wouldn't fall into those categories, and that it's unworkable for the Pentagon to have to negotiate individual use cases with Anthropic — or have Claude unexpectedly block certain applications.

  • "Everything's on the table," including dialing back the partnership with Anthropic or severing it entirely, the official said. "But there'll have to be an orderly replacement [for] them, if we think that's the right answer."
  • An Anthropic spokesperson said the company remained "committed to using frontier AI in support of U.S. national security."

Zoom in: The tensions came to a head recently over the military's use of Claude in the operation to capture Venezuela's Nicolás Maduro, through Anthropic's partnership with AI software firm Palantir.

  • According to the senior official, an executive at Anthropic reached out to an executive at Palantir to ask whether Claude had been used in the raid.
  • "It was raised in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot," the official said.

The other side: The Anthropic spokesperson flatly denied that, saying the company had "not discussed the use of Claude for specific operations with the Department of War. We have also not discussed this with any industry partners outside of routine discussions on strictly technical matters."

  • "Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy."
  • "Anthropic's conversations with the DoW to date have focused on a specific set of Usage Policy questions — namely, our hard limits around fully autonomous weapons and mass domestic surveillance — none of which relate to current operations," the spokesperson said.

Friction point: Beyond the Maduro incident, the official described a broader culture clash with what the person claimed was the most "ideological" of the AI labs when it came to the potential dangers of the technology. 

  • But the official conceded that it would be difficult for the military to quickly replace Claude, because "the other model companies are just behind" when it comes to specialized government applications.

Breaking it down: Anthropic signed a contract valued at up to $200 million with the Pentagon last summer. Claude was also the first model the Pentagon brought into its classified networks.

  • OpenAI's ChatGPT, Google's Gemini and xAI's Grok are all used in unclassified settings, and all three have agreed to lift the guardrails that apply to ordinary users for their work with the Pentagon.
  • The Pentagon is negotiating with them about moving into the classified space, and is insisting on the "all lawful purposes" standard for both classified and unclassified uses.
  • The official claimed one of the three has agreed to those terms, and the other two were showing more flexibility than Anthropic.

The intrigue: In addition to CEO Dario Amodei's well-documented concerns about AI-gone-wrong, Anthropic also has to navigate internal disquiet among its engineers about working with the Pentagon, according to a source familiar with that dynamic.

The bottom line: The Anthropic spokesperson said the company was still committed to the national security space.

  • "That's why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers."
