The US-based AI firm Anthropic, in a detailed blog post, said that DeepSeek, MiniMax Group Inc., and Moonshot AI violated its terms of service by generating more than 16 million exchanges with its Claude models through 24,000 fraudulent accounts, using a technique called "distillation."
Anthropic said the labs “targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.”
The San Francisco-headquartered company warned that such campaigns are “growing in intensity and sophistication,” adding that “the window to act is narrow, and the threat extends beyond any single company or region.”
The accusations come amid debates over how strictly to enforce export controls on advanced AI chips, a policy aimed at curbing China’s AI development.
Anthropic tracked more than 150,000 exchanges from DeepSeek that seemed aimed at improving foundational logic and alignment, specifically around censorship-safe alternatives to policy-sensitive queries.
Moonshot AI had more than 3.4 million exchanges targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. Last month, the firm released a new open-source model, Kimi K2.5, and a coding agent.
Distillation is a common training method that AI labs use on their own models to create smaller, cheaper versions, but competitors can also use it to, in effect, copy another lab's homework.
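Loosely, and as a minimal illustrative sketch only (not Anthropic's or any named lab's actual pipeline), distillation trains a smaller "student" model to imitate the output distribution of a larger "teacher" model; the model sizes, temperature, and loss weighting below are assumed values for demonstration.

```python
# Toy knowledge-distillation sketch (illustrative assumptions throughout).
# A small "student" network is trained to match the softened outputs of a
# larger "teacher" network, blended with an ordinary hard-label loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

T, alpha = 2.0, 0.5                      # temperature and mixing weight (assumed)
x = torch.randn(32, 128)                 # a batch of toy inputs
labels = torch.randint(0, 10, (32,))     # toy hard labels

with torch.no_grad():
    teacher_logits = teacher(x)          # soft targets from the larger model
student_logits = student(x)

# KL divergence between softened teacher and student distributions,
# combined with cross-entropy on the hard labels.
soft_loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T * T)
hard_loss = F.cross_entropy(student_logits, labels)
loss = alpha * soft_loss + (1 - alpha) * hard_loss

loss.backward()
optimizer.step()
```

In the scenario Anthropic describes, the "teacher" signal would come not from a local model but from responses harvested through API accounts, which is why the company frames the activity as a terms-of-service violation rather than an ordinary training practice.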
OpenAI earlier this month sent a memo to House lawmakers accusing DeepSeek of using distillation to mimic its products.