Anthropic has accused three prominent Chinese artificial intelligence firms of using its Claude chatbot on a massive scale to secretly train rival models—an unexpected development in a yearslong global debate over where fraud ends and industry-standard practice begins.
In a blog post on Monday, San Francisco–based Anthropic alleged that Chinese labs DeepSeek, Moonshot AI, and MiniMax violated its terms of service by covertly interacting with Claude, its flagship AI assistant, at industrial scale. “We have identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract Claude’s capabilities to improve their own models,” the company said. “These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions.”
According to Anthropic, the Chinese companies relied on a technique known as “distillation,” in which one model is trained on the outputs of another, often a more capable system. The campaigns allegedly focused on areas that Anthropic considers key differentiators for Claude, including complex reasoning, coding assistance, and tool use.
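The mechanics of distillation can be sketched in a few lines. The toy below (all names are hypothetical, and a stand-in function replaces the teacher model) shows the core idea: a “student” is trained to imitate a “teacher’s” outputs rather than learning from original human-labeled data. The campaigns Anthropic describes would involve querying a hosted model over an API at vastly larger scale.

```python
# Minimal sketch of model distillation. Everything here is illustrative:
# the "teacher" is a canned lookup standing in for a capable proprietary
# model, and the "student" simply memorizes responses where a real student
# would fit a neural network to the collected prompt/output pairs.

def teacher_model(prompt: str) -> str:
    """Stand-in for a capable model that would normally be queried via API."""
    canned = {
        "2+2": "4",
        "capital of France": "Paris",
    }
    return canned.get(prompt, "I don't know.")


def collect_distillation_data(prompts):
    """Harvest (prompt, teacher output) pairs -- the raw material of distillation."""
    return [(p, teacher_model(p)) for p in prompts]


class StudentModel:
    """Toy 'student' that memorizes the teacher's outputs."""

    def __init__(self):
        self.table = {}

    def train(self, pairs):
        # A real training step would minimize a loss between the student's
        # predictions and the teacher's outputs; here we just store them.
        self.table.update(pairs)

    def answer(self, prompt: str) -> str:
        return self.table.get(prompt, "?")


data = collect_distillation_data(["2+2", "capital of France"])
student = StudentModel()
student.train(data)
print(student.answer("2+2"))  # → 4: the student reproduces the teacher's behavior
```

The point of the sketch is that the student never needs the teacher’s weights or training data—only its outputs, which is why terms-of-service and access restrictions are the main line of defense against unwanted distillation.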
Anthropic argues that while distillation is a “widely used and legitimate training method,” the Chinese firms’ use of it in this manner may have been “for illicit purposes.” Using sprawling networks of fake accounts to replicate a competitor’s proprietary model violates its terms of service and undermines U.S. export controls aimed at constraining China’s access to cutting‑edge AI, Anthropic said, urging “rapid, coordinated action among industry players, policymakers, and the global AI community.”







