ForkLog

Anthropic accuses Chinese AI labs of ‘data theft’

Anthropic accused three Chinese AI startups — DeepSeek, Moonshot and MiniMax — of running a large-scale campaign to use Claude to improve their own models.

The Chinese labs generated more than 16 million interactions with the chatbot through roughly 24,000 fraudulent accounts, violating the terms of use and regional restrictions.

“We have a high degree of confidence linking each campaign to a specific firm based on correlations of IP addresses, request metadata, infrastructure signals and confirmations from partners in the industry. They targeted the most unique capabilities of Claude: agentic reasoning, tool use and programming,” Anthropic said.

The firms used distillation: training a less powerful neural network on the outputs of a stronger one.

It is a widely used and legitimate method. Leading AI labs regularly distil their own models to create compact, cheaper versions for clients.

“However, it can also be used illegally: competitors improve capabilities at the expense of another’s LLM in a fraction of the time and cost compared with developing on their own,” the Anthropic blog says.
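In its classic form, distillation trains the student to match the teacher's softened output distribution, typically by minimising the KL divergence between the two at an elevated softmax temperature. A minimal NumPy sketch of that loss, shown as an illustration of the general technique rather than of any lab's actual pipeline:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative preferences among non-top classes ("dark knowledge").
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between teacher and student soft distributions;
    # minimising it pushes the student to imitate the teacher's outputs.
    p = softmax(teacher_logits, temperature)   # teacher soft targets
    q = softmax(student_logits, temperature)   # student predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Toy logit vectors: a "teacher" and two hypothetical students.
teacher = np.array([4.0, 1.0, 0.5])
close_student = np.array([3.5, 1.2, 0.4])   # roughly matches the teacher
far_student = np.array([0.1, 3.0, 2.0])     # disagrees with the teacher
print(distillation_loss(close_student, teacher))  # small
print(distillation_loss(far_student, teacher))    # larger
```

In a real training loop this loss (or a weighted mix of it with the ordinary hard-label loss) is backpropagated through the student only; the teacher is frozen and merely queried, which is exactly why API access to a strong model is enough to attempt it.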

The company stressed that the window to respond to such “theft” is narrow, and the threat extends beyond a single firm or region. Addressing it will require swift, coordinated action by industry, regulators and the global AI community.

Why this is dangerous

Anthropic set out the risks: illegally distilled models do not retain the necessary safety mechanisms, creating national-security concerns.

American firms are deploying systems to prevent the use of AI in developing biological weapons, conducting malicious cyberattacks and other dangerous acts. Models created through unlawful distillation do not receive such constraints.

Foreign labs can integrate unprotected capabilities into military and intelligence systems, enabling authoritarian governments to use advanced AI for cyberattacks, disinformation and mass surveillance, the company added.

Countermeasures

Anthropic experts backed export controls to preserve US leadership in AI. In their view, distillation attacks undermine these measures, allowing overseas labs to narrow the technology gap.

“Without transparency into such attacks, the rapid progress of Chinese labs is mistakenly read as proof that export controls are ineffective. In practice, their achievements largely depend on extracting capabilities from American models, and scaling this approach requires access to cutting-edge chips,” the company blog says.

Anthropic also outlined its own countermeasures.

This is not the first such allegation. In January 2025, shortly after the explosive debut of DeepSeek‑R1, the company was suspected of stealing data from OpenAI.

Continuing clash with the Pentagon

Anthropic CEO Dario Amodei will meet Defence Secretary Pete Hegseth at the Pentagon to discuss ways the company’s AI models could be used by the military.

The sides have recently fallen out: Anthropic opposes using AI for mass surveillance of US citizens and for building autonomous weapons, while the Defence Department has made clear it intends to use LLMs “for all lawful scenarios” without restrictions.

The dispute has escalated to the point that the Pentagon said it may terminate its contract with Anthropic.

An AI vulnerability scanner

Shares of leading publicly listed cybersecurity firms fell after Anthropic launched Claude Code Security — an AI code vulnerability scanner.

The company says the new service “analyses the entire codebase for vulnerabilities, verifies each finding to minimise false positives and proposes fixes”.

Claude conducts analysis “like an experienced security researcher”: it understands context, traces data flows and detects vulnerabilities.
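Tracing data flows to find vulnerabilities is, at its core, taint analysis: mark untrusted inputs as tainted, propagate the taint through assignments, and flag any point where tainted data reaches a sensitive sink. A toy Python sketch of that idea follows; it is a generic illustration, not Claude Code Security's actual mechanism, and the statement format plus the source and sink names are invented for the example:

```python
def trace_taint(statements, taint_sources, sinks):
    """Propagate taint through a linear list of (target, inputs) statements.

    Returns (line_number, target) pairs where tainted data reaches a sink.
    """
    tainted = set(taint_sources)
    findings = []
    for lineno, (target, inputs) in enumerate(statements, 1):
        # A target becomes tainted if any of its inputs is tainted.
        if any(i in tainted for i in inputs):
            tainted.add(target)
            if target in sinks:
                findings.append((lineno, target))
    return findings

# Hypothetical program: user input flows into a SQL query unescaped.
program = [
    ("user_input", ["request.args"]),          # untrusted source
    ("query", ["user_input", "template"]),     # taint propagates
    ("db.execute", ["query"]),                 # sensitive sink
]
print(trace_taint(program, {"request.args"}, {"db.execute"}))
# → [(3, 'db.execute')]
```

Production scanners work on real control-flow graphs with sanitiser modelling rather than a flat statement list, but the core propagate-and-flag loop is the same shape.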

According to VentureBeat, Claude Opus 4.6 identified more than 500 critical vulnerabilities that had persisted for decades despite expert reviews.

The five largest US publicly traded cybersecurity firms by market value posted double-digit declines over the past five days amid the arrival of an AI competitor.

Palo Alto Networks share-price chart. Source: Yahoo Finance.

Wedbush analysts said the sell-off reflects worries over the so-called AI Ghost Trade. In their view, the market’s reaction is mistaken, and Palo Alto, CrowdStrike and Zscaler will prove their effectiveness in 2026.

In February, OpenAI together with Paradigm introduced EVMbench — a benchmark to assess AI agents’ ability to identify, fix and exploit flaws in smart contracts.
