Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI
Summary
Anthropic has accused three Chinese AI companies – DeepSeek, MiniMax, and Moonshot – of misusing its Claude AI model to enhance their own products through a process called “distillation,” in which a smaller model is trained on the outputs of a larger one. Anthropic claims these companies created approximately 24,000 fraudulent accounts and conducted over 16 million interactions with Claude. While distillation is a legitimate training method, Anthropic argues it was used illicitly here to acquire advanced capabilities quickly and cheaply. The company expresses concern that models trained this way are unlikely to retain existing safety safeguards, potentially enabling authoritarian governments to use frontier AI for harmful purposes such as cyberattacks and surveillance. DeepSeek, in particular, is accused of targeting Claude’s reasoning abilities and generating responses that circumvent censorship on sensitive political topics. Anthropic is urging industry peers, cloud providers, and lawmakers to address the issue, suggesting measures such as restricting chip access to limit illicit distillation.
(Source: The Verge)