A Blog by Jonathan Low


Mar 5, 2026

Anthropic's Claude AI Has Been Central To US Attack On Iran Despite Pentagon "Ban"

The harsh reality for the Pentagon and for a White House attempting to bend the world to its will is that Anthropic has developed a superior product already deeply intertwined with the US military and security services' needs and uses. Replacing it may be possible, but not soon - and certainly not quickly enough for current operations. That probably explains some of the Administration's harsh urgency. 

But as in so many other realms, once the order to attack was given, necessity took priority over theory. Anthropic has already proven its worth. Whether it will ever really be 'banned' is hard to know, but unlikely, except as a piece of political rhetoric. JL

Anna Zhadan reports in Cyber News, and Marcus Weisgerber and colleagues report in the Wall Street Journal:

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools. Commands around the world, including U.S. Central Command in the Middle East, use Anthropic’s Claude AI tool, people familiar with the matter confirmed. Centcom declined to comment about specific systems being used in its ongoing operation against Iran. Claude’s use in high-profile missions, such as the Iran attack and the U.S. military operation that captured Venezuelan President Nicolás Maduro, shows why the administration said it would take six months to phase out the technology.

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.

Commands around the world, including U.S. Central Command in the Middle East, use Anthropic’s Claude AI tool, people familiar with the matter confirmed. Centcom declined to comment about specific systems being used in its ongoing operation against Iran.

The command uses the tool for intelligence assessments, target identification and simulating battle scenarios even as tensions between the company and the Pentagon ratcheted up, the people said, highlighting how embedded the AI tools are in military operations.

The administration and Anthropic have been feuding for months over how its AI models can be used by the Pentagon. Trump on Friday ordered agencies to stop working with the company, and the Defense Department designated it a security threat and a risk to its supply chain.

That came after Anthropic refused to let the Pentagon use its tools in all lawful scenarios during their contract negotiation. Anthropic’s lobbying against the administration’s AI policies and ties to organizations that are big Democratic donors have also upset administration officials.

Claude’s use in high-profile missions, such as the U.S. military operation that captured Venezuelan President Nicolás Maduro, shows why the administration said it would take six months to phase out the technology—a complicated process given how it is used by partners including data-mining firm Palantir.

Key takeaways:

On Saturday, the Israeli and US militaries launched "major combat operations" against Iran, killing Iran’s Supreme Leader Ayatollah Ali Khamenei.


Anthropic's tools are used by US Central Command in the Middle East, as well as other commands around the world, for intelligence assessments, target identification, and simulating battle scenarios, according to the WSJ, which quoted people familiar with the matter.

Full details about the extent to which Claude is used in the ongoing operation against Iran were not disclosed.

Over 200 people have been killed across Iran and more than 700 injured, the Red Crescent said on Saturday. Strikes hit 24 of Iran's 31 provinces. Iran retaliated by firing drones and missiles at US assets in Bahrain, Kuwait, Qatar, and the UAE.

Trump said the “heavy and pinpoint bombing” would continue throughout the week or for as long as needed.

The US military also reportedly used Claude AI in the operation that captured Venezuelan President Nicolás Maduro.


On Friday, Trump ordered the US government to immediately stop using Anthropic’s technology, while Defense Secretary Pete Hegseth confirmed that he is directing the Department of War to designate Anthropic a supply-chain risk to national security.

Anthropic has said it plans to challenge the decision to label it a "supply-chain risk" in court.


“Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers,” the company said in a public announcement.


The move came after weeks of tensions between Anthropic and the Pentagon. Anthropic has refused to allow its tools to be used for mass surveillance or autonomous lethal weapons.

The Pentagon set a deadline for the company to either agree to the terms or face consequences.

When the ban was announced, Trump gave agencies a six-month window to phase out their use of Anthropic products and emphasized that the company “better get their act together, and be helpful during this phase out period” unless it wants to face “major civil and criminal consequences.”
