Anthropic claims that DeepSeek and various Chinese companies are utilizing Claude to develop their AI.

DeepSeek reportedly focused on Claude’s cognitive abilities, while crafting ‘censorship-safe alternatives to politically charged inquiries.’

Emma Roth is a journalist who reports on the streaming wars, consumer technology, cryptocurrency, social media, and more. She formerly worked as a writer and editor at MUO.

Anthropic asserts that DeepSeek and two other Chinese AI firms have exploited its Claude AI model in an effort to improve their own products. In a statement on Monday, Anthropic said the "industrial-scale initiatives" involved roughly 24,000 fake accounts and more than 16 million interactions with Claude, as previously reported by The Wall Street Journal.

The three companies — DeepSeek, MiniMax, and Moonshot — are accused of "distilling" Claude, which means training a smaller AI model on the outputs of a more sophisticated one. Although Anthropic says distillation is a "valid training technique," it notes that it can "also serve illicit purposes," such as "gaining powerful capabilities from other labs much more quickly and at a lower cost compared to developing them independently."
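In general terms, distillation trains the student model to match the teacher model's output probabilities rather than raw labels. A minimal, illustrative sketch of that idea follows; the logits and the `distillation_loss` helper are hypothetical examples of the standard technique, not a description of how any company named here actually operates:

```python
import math

def softmax(logits, temperature=1.0):
    # Soften the distribution: a higher temperature yields flatter probabilities,
    # exposing more of the teacher's "dark knowledge" about wrong answers.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions;
    # training the student to minimize this pulls its outputs toward the teacher's.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits for one input, from a large teacher and a small student.
teacher = [4.0, 1.0, 0.2]
student = [2.5, 1.5, 0.5]
loss = distillation_loss(teacher, student)  # nonzero while the student disagrees
```

In a real training loop this loss would be computed over many prompts and minimized by gradient descent, which is why access to a frontier model's responses at scale is valuable.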

Anthropic contends that models obtained through illicit distillation are “unlikely” to retain existing protective measures. “Foreign laboratories that distill American models can subsequently incorporate these inadequate capabilities into military, intelligence, and surveillance systems — allowing authoritarian regimes to utilize cutting-edge AI for offensive cyber efforts, misinformation campaigns, and mass monitoring,” Anthropic states.

DeepSeek, which shook the AI sector with its potent yet more efficient models, engaged in over 150,000 exchanges with Claude and targeted its cognitive abilities, according to Anthropic. The company is also accused of using Claude to create "censorship-safe options for politically sensitive inquiries concerning dissidents, political leaders, or authoritarianism." In a letter to lawmakers last week, OpenAI also accused DeepSeek of "persistent efforts to leverage the capabilities built by OpenAI and other American frontier laboratories."

Moonshot and MiniMax logged over 3.4 million and 13 million interactions with Claude, respectively. Anthropic is urging others in the AI industry, cloud providers, and legislators to address the issue of distillation, arguing that "restricted chip access" could constrain model training and "reduce the extent of illicit distillation."
