Anthropic says Chinese AI firms used Claude in distillation attacks

March 21, 2026 · 2 min read · 2 sources

Anthropic has accused Chinese AI companies DeepSeek, Moonshot AI and MiniMax of using its Claude models in unauthorized “distillation” attacks, according to reporting by Infosecurity Magazine. The company says the firms queried Claude at scale to help train rival systems that could mimic some of Claude’s capabilities, a practice that, Anthropic says, violated its terms of service and involved attempts to bypass its safeguards.

The allegation centers on model distillation, a standard machine learning technique that becomes contentious when developers use outputs from a proprietary system without permission to build a competing product. In this case, Anthropic is framing the issue less as a conventional software exploit and more as model extraction through API abuse: repeated prompts, automated collection of responses and use of those outputs as training data.
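The extraction loop described above can be sketched in a few lines. The following is a minimal, illustrative toy, not anything tied to the actual incident: the `teacher` function stands in for a proprietary model queried through an API, and the “student” is a simple linear model fitted to the logged outputs by gradient descent. All names and numbers here are hypothetical.

```python
import random

# Hypothetical toy "teacher": stands in for a proprietary model behind an API.
def teacher(x: float) -> float:
    return 3.0 * x + 1.0  # the behavior the attacker wants to reproduce

# Step 1: query the teacher at scale and log (input, output) pairs.
random.seed(0)
dataset = [(x, teacher(x)) for x in (random.uniform(-1, 1) for _ in range(200))]

# Step 2: fit a "student" to the collected outputs via stochastic gradient
# descent on squared error -- distillation in its simplest possible form.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    for x, y in dataset:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # the student converges toward the teacher's parameters
```

The point of the toy is that no exploit is needed: ordinary API access, repeated enough times, yields a training set that transfers the teacher's behavior.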

Anthropic’s warning adds to a growing security concern for major AI providers. Unlike a traditional breach, there are no CVEs, malware indicators or publicly disclosed infrastructure compromises tied to the claim. The attack surface is the model interface itself. Providers typically respond with rate limits, account monitoring, prompt-pattern analysis and stronger abuse detection, but those controls can be difficult to enforce if actors spread activity across accounts or infrastructure.

The broader impact is commercial as much as technical. Frontier AI models are expensive to build, and large-scale extraction can let competitors reproduce useful behaviors at a fraction of the cost. The case also raises questions about whether model outputs should be treated more like protected intellectual property and whether AI vendors will further restrict access, logging and customer verification. For enterprise users, that could mean tighter controls on how models are accessed, tested and integrated, including for remote teams that already reach those models over a VPN.

As of the source report, none of the named firms had publicly responded to the allegations. Anthropic’s claims arrive amid intensifying US-China competition in generative AI, where disputes over model theft, training data and API misuse are becoming part of the wider cybersecurity and policy debate.
