U.S. artificial-intelligence startup Anthropic said three Chinese AI companies set up more than 24,000 fraudulent accounts with its Claude AI model to help their own systems catch up.
The three companies—DeepSeek, Moonshot AI and MiniMax—prompted Claude more than 16 million times, siphoning information from Anthropic’s system to train and improve their own products, Anthropic said in a blog post Monday.
Earlier this month, an Anthropic rival, OpenAI, sent a memo to House lawmakers accusing DeepSeek of using the same tactic, called distillation, to mimic OpenAI’s products.
Anthropic said distillation had legitimate uses—companies use it to build smaller versions of their own products, for example—but it could also be used to build competitive products “in a fraction of the time, and at a fraction of the cost.”
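The reporting above describes distillation done by prompting a rival's model at scale and training on its outputs; the textbook form of the technique instead trains a smaller "student" model to match the softened output distribution of a larger "teacher." A minimal sketch of that classic soft-target loss in plain Python follows. This is an illustration of the general technique only, not any company's actual pipeline, and the temperature value is an arbitrary example.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize to a probability
    # distribution (subtracting the max for numerical stability).
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened distribution and the
    # student's: the classic soft-target distillation objective.
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]
# A student that matches the teacher exactly incurs zero loss;
# a mismatched student incurs positive loss.
print(round(distillation_loss(teacher, teacher), 6))    # 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]) > 0)  # True
```

Minimizing this loss over many examples pulls the student's behavior toward the teacher's, which is why it can compress months of expensive training into "a fraction of the time, and at a fraction of the cost."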
The scale of the different companies’ distillation activity varied. DeepSeek engaged in 150,000 interactions with Claude, whereas Moonshot and MiniMax had more than 3.4 million and 13 million, respectively, Anthropic said.
Representatives from DeepSeek, Moonshot and MiniMax didn’t respond to requests for comment.
Several Chinese companies, including Moonshot and MiniMax, have recently released new AI models, many of them featuring enhanced reasoning and coding capabilities. DeepSeek is preparing to roll out its next-generation model soon.
When DeepSeek first captured the attention of AI enthusiasts last year, it raised concerns that China might be able to quickly catch up with U.S. AI companies even without having access to the most powerful AI chips. AI observers speculated that DeepSeek might have used distillation.
In a research paper updated in September, DeepSeek said that during a late stage of pretraining its flagship V3 model, it exclusively used plain webpages and ebooks, without incorporating any synthetic data. However, it said some webpages contained “a significant number of OpenAI-model-generated answers.” DeepSeek said its base model might have acquired knowledge from other powerful models indirectly by drawing on such webpages.
Synthetic data, often produced via distillation, has been increasingly adopted for training large foundation models as developers face a shortage of high-quality data and focus on giving models so-called agentic capabilities: the ability to take action proactively to complete tasks on behalf of users. In a technical report in July, Moonshot said it used synthetic data to train its Kimi K2 model.
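The synthetic-data recipe mentioned above generally follows a common pattern: sample completions from an existing model, filter out short or duplicate samples, and train on the survivors. A minimal sketch follows; every name here (the `generate` stand-in, the filtering thresholds) is illustrative and not drawn from any company's actual pipeline.

```python
def generate(prompt):
    # Stand-in for a call to an existing strong model; a real pipeline
    # would query a model API here.
    return f"Step-by-step answer to: {prompt}"

def build_synthetic_dataset(prompts, min_len=10):
    # Collect prompt/completion pairs, dropping completions that are
    # too short or exact duplicates: a crude quality filter.
    dataset = []
    seen = set()
    for p in prompts:
        completion = generate(p)
        if len(completion) < min_len or completion in seen:
            continue
        seen.add(completion)
        dataset.append({"prompt": p, "completion": completion})
    return dataset

data = build_synthetic_dataset(["What is 2+2?", "Explain recursion."])
print(len(data))  # 2
```

Real pipelines add much heavier filtering (deduplication across the corpus, reward-model scoring, decontamination), but the shape is the same: model outputs become the next model's training set.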
Anthropic said the activity by the Chinese developers raised national-security concerns for the U.S. “Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems,” the company said.
Anthropic CEO Amodei has also criticized the Trump administration’s drive to allow exports of American AI chips to China. On the sidelines of the World Economic Forum in Davos, Switzerland, last month, Amodei compared the policy to “selling nuclear weapons to North Korea.” After meeting with Amodei this month on Capitol Hill, Sen. Elizabeth Warren (D-Mass.) said she would introduce legislation to sharply limit such exports.
Amodei said in a statement late Thursday that his company was ready to continue working with the Pentagon, but would not change its stance. Current AI systems are not reliable enough to power robotic weaponry without putting troops and civilians alike at risk, he said, and existing laws on domestic surveillance do not account for the sweeping potential of AI snooping tools.
It’s a serious game of chicken, and Anthropic may not be the one to blink first. According to Reuters, Anthropic doesn’t plan on easing its usage restrictions.
Anthropic is the only frontier AI lab with classified DOD access, according to several reports. The Department of Defense doesn’t have a backup option currently in play — though the Pentagon has reportedly reached a deal to use xAI’s Grok in classified systems.
That lack of redundancy may help explain the Pentagon’s aggressive posture, Ball argued.
“If Anthropic canceled the contract tomorrow, it would be a serious problem for the DOD,” he told TechCrunch, noting the agency appears to be falling short of a National Security Memorandum from late in the Biden administration that directs federal agencies to avoid dependence on a single classified-ready frontier AI system.
“The DOD has no backups. This is a single-vendor situation here,” he continued. “They can’t fix that overnight.”