A Blog by Jonathan Low


Jan 30, 2026

Anthropic and Pentagon At Odds Over Domestic Surveillance, Military AI Uses

Anthropic's Claude has emerged as the leading AI model for business because of its effectiveness and reliability. But the company is also doing something very unusual for a tech firm in the Trump era: insisting on limits to how its AI can be used, including prohibitions on domestic surveillance and autonomous lethal operations.

The result is growing tension between the company and the current administration, which claims there are no limits on anything it chooses to do. The issue is fraught for both sides: the administration could take action against Anthropic that would hurt the company's business and valuation, but the government would lose access to a model that is clearly superior to the alternatives for the uses it has in mind. JL

Keach Hagey and colleagues report in the Wall Street Journal:

AI leader Anthropic and the US Defense Department are at odds over the contractual terms of how Anthropic’s technology can be used. The tension could lead to the cancellation of a $200 million Pentagon contract intended to integrate Anthropic’s Claude models into defense operations as part of the government’s deployment of AI. Anthropic’s terms and conditions dictate that Claude can’t be used for any actions related to domestic surveillance. Anthropic's focus on safe applications of AI, and its objection to having its technology used in autonomous lethal operations, have frustrated administration officials, who object to the company dictating how its technology can be used.

Anthropic scored a major endorsement last summer when it won a contract worth up to $200 million from the Defense Department. Now, the AI startup’s relationship with the Pentagon is on the rocks. 

The company and agency are at odds over the contractual terms of how Anthropic’s technology can be used, according to people familiar with the matter. The tension could lead to the cancellation of the Pentagon contract, one of the people said. 

The contract was intended to integrate Anthropic’s Claude models into defense operations as part of the government’s deployment of AI. 

Tensions with the administration began almost immediately after the contract was awarded, in part because Anthropic’s terms and conditions dictate that Claude can’t be used for any actions related to domestic surveillance. That limits how law-enforcement agencies such as Immigration and Customs Enforcement and the Federal Bureau of Investigation could deploy it, people familiar with the matter said.

Anthropic’s focus on safe applications of AI—and its objection to having its technology used in autonomous lethal operations—have continued to cause problems, they said. Some administration officials were frustrated that the company was dictating how its technology could be used, including for legal activities, they said.

CEO Dario Amodei outlined fears about AI’s use in both mass surveillance and fully autonomous weapons capabilities in a recent essay. Friction between the startup and the Pentagon adds to existing tensions between the highly valued company and the Trump administration.

At an event earlier this month announcing that the Pentagon would be working with Elon Musk’s xAI, Defense Secretary Pete Hegseth said the agency would not “employ AI models that won’t allow you to fight wars.” He was referring to discussions administration officials have had with Anthropic, some of the people said.

Photo: Defense Secretary Pete Hegseth at an all-Senate briefing on Venezuela. (Al Drago/Bloomberg News)

Semafor earlier reported on Hegseth’s comments referring to Anthropic and tensions between the company and the agency. Other AI companies including OpenAI and Google are also working with the military.

The Pentagon declined to comment.

“Anthropic is committed to protecting America’s lead in AI and helping the U.S. government counter foreign threats by giving our warfighters access to the most advanced AI capabilities,” an Anthropic spokesman said in a statement. The startup said Claude is used “extensively” for U.S. national security missions and that it is “in productive discussions with the Department of War about ways to continue that work.”

The company’s latest models and coding tools have gained popularity in recent weeks, and it is in talks to raise billions from investors at a $350 billion valuation.

Amodei has said fast-developing AI can spur economic growth, but has also warned of its downsides, from safety risks to unemployment and inequality. “I don’t think there’s an awareness at all of what is coming here and the magnitude of it,” he said in an interview last week with The Wall Street Journal. He has also criticized Trump for allowing exports of Nvidia AI chips to China, a move he says poses national-security risks.

That has at times put the company in conflict with White House AI and crypto czar David Sacks, who has pushed for looser regulations that accelerate model development. Sacks has accused Anthropic of being “AI doomers” focused on slowing competitors to benefit its business. The company has denied those claims and said it has a good relationship with the administration overall. Anthropic has backed Trump’s approach to expanding energy production to power AI data centers.
