A Blog by Jonathan Low

 

Feb 27, 2026

Anthropic's Pentagon Defiance Tied To Chinese Hacking, Chip Sales To UAE

The dispute with the Pentagon is not about Anthropic being woke, or obstinate. The company appears primarily concerned about national security, based on its own experience with Chinese AI companies, which created thousands of accounts under fake names to siphon data from Anthropic's systems to bolster their own, less advanced AI. Anthropic also believes, as any sensible company would, that it should retain control of its product and the intellectual property that drives it, not hand it off to some bureaucrat with a political agenda. 

Anthropic's model is evidently superior to its competitors' for the Pentagon's needs, since it is currently the only one with a top security clearance. In that light, Anthropic has been critical of the Trump administration's eagerness to sell AI chips to the United Arab Emirates for money that went not to the US government, but to private Trump business accounts. The company has understandable concerns about how those chips will be used, especially if, as feared, the UAE then re-sells some of them to China. So, in short, Anthropic does not trust this administration to protect the country's interests or the company's own commercial investment. And in defying Defense Secretary Hegseth, it is taking a prudent financial position that could benefit the entire US AI industry. JL

Robert McMillan and Raffaele Huang report in the Wall Street Journal, Ian Duncan and colleagues report in the Washington Post, and Rebecca Bellan reports in TechCrunch:

Anthropic CEO Dario Amodei said it was ready to continue working with the Pentagon, but would not change its stance (regarding use of its AI for) robotic weaponry and domestic surveillance. Amodei previously criticized the Trump administration’s drive to allow exports of American AI chips to China. He compared the policy to “selling nuclear weapons to North Korea.” Anthropic said three Chinese AI companies set up 24,000 fraudulent accounts with its Claude AI model to help their own systems catch up. The three companies, DeepSeek, Moonshot AI and MiniMax, prompted Claude more than 16 million times to siphon information from Anthropic to train their own products. Anthropic is the only AI lab with classified DOD access. The DOD doesn’t have a backup option currently. "This is a single vendor situation. If Anthropic cancels, it will be a serious situation for DOD."

U.S. artificial-intelligence startup Anthropic said three Chinese AI companies set up more than 24,000 fraudulent accounts with its Claude AI model to help their own systems catch up.

The three companies—DeepSeek, Moonshot AI and MiniMax—prompted Claude more than 16 million times, siphoning information from Anthropic’s system to train and improve their own products, Anthropic said in a blog post Monday.

Earlier this month, an Anthropic rival, OpenAI, sent a memo to House lawmakers accusing DeepSeek of using the same tactic, called distillation, to mimic OpenAI’s products.

Anthropic said distillation had legitimate uses—companies use it to build smaller versions of their own products, for example—but it could also be used to build competitive products “in a fraction of the time, and at a fraction of the cost.”

The scale of the different companies’ distillation activity varied. DeepSeek engaged in 150,000 interactions with Claude, whereas Moonshot and MiniMax had more than 3.4 million and 13 million, respectively, Anthropic said.

Representatives from DeepSeek, Moonshot and MiniMax didn’t respond to requests for comment.

Many Chinese companies including Moonshot and MiniMax have recently released their latest AI models, many of which feature enhanced reasoning and coding capabilities. DeepSeek is preparing to roll out its next-generation model soon.

When DeepSeek first captured the attention of AI enthusiasts last year, it raised concerns that China might be able to quickly catch up with U.S. AI companies even without having access to the most powerful AI chips. AI observers speculated that DeepSeek might have used distillation.

In a research paper updated in September, DeepSeek said that during a late stage of pretraining its flagship V3 model, it exclusively used plain webpages and ebooks, without incorporating any synthetic data. However, it said some webpages contained “a significant number of OpenAI-model-generated answers.” DeepSeek said its base model might have acquired knowledge from other powerful models indirectly by drawing on such webpages.

Synthetic data, often using distillation, has been increasingly adopted for training large foundation models as developers face a shortage of high-quality data and focus on giving models so-called agentic capabilities, meaning allowing them to take action proactively to complete tasks on behalf of users. In a technical report in July, Moonshot said it used synthetic data for training its Kimi K2 model.

Anthropic said the activity by the Chinese developers raised national-security concerns for the U.S. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems,” the company said. 

 

Anthropic CEO Amodei has also criticized the Trump administration’s drive to allow exports of American AI chips to China. On the sidelines of the World Economic Forum in Davos, Switzerland last month, Amodei compared the policy to “selling nuclear weapons to North Korea.” After meeting with Amodei this month on Capitol Hill, Sen. Elizabeth Warren (D-Massachusetts) said she would introduce legislation to sharply limit any export. 

 

Amodei said in a statement late Thursday that his company was ready to continue working with the Pentagon, but would not change its stance. Current AI systems are not reliable enough to power robotic weaponry without putting troops and civilians alike at risk, he said, and existing laws on domestic surveillance do not account for the sweeping potential of AI snooping tools. 

 

It’s a serious game of chicken, and Anthropic may not be the one to blink first. According to Reuters, Anthropic doesn’t plan on easing its usage restrictions. 

Anthropic is the only frontier AI lab with classified DOD access, according to several reports. The Department of Defense doesn’t have a backup option currently in play — though the Pentagon has reportedly reached a deal to use xAI’s Grok in classified systems. 

That lack of redundancy may help explain the Pentagon’s aggressive posture, analyst Ball argued. 

“If Anthropic canceled the contract tomorrow, it would be a serious problem for the DOD,” he told TechCrunch, noting the agency appears to be falling short of a National Security Memorandum from the late Biden administration that directs federal agencies to avoid dependence on a single classified-ready frontier AI system. 

“The DOD has no backups. This is a single-vendor situation here,” he continued. “They can’t fix that overnight.”
