Home Tech/AIQuantum physicists have condensed and “de-censored” DeepSeek R1

Quantum physicists have condensed and “de-censored” DeepSeek R1

by admin
0 comments
Quantum physicists have condensed and “de-censored” DeepSeek R1

EXECUTIVE SUMMARY

A collective of quantum physicists asserts they have developed a modified iteration of the influential reasoning AI model DeepSeek R1 that removes the censorship incorporated by its Chinese developers. 

The researchers at Multiverse Computing, a Spanish organization focused on quantum-inspired AI methodologies, introduced DeepSeek R1 Slim, a model that is 55% smaller yet performs nearly on par with the original. Notably, they also assert that they have eradicated official Chinese censorship from the model.

In China, AI enterprises are bound by regulations designed to guarantee that content outputs conform to the law and “socialist values.” Consequently, organizations embed layers of censorship during the training of the AI systems. When faced with queries considered “politically sensitive,” the models frequently decline to respond or present talking points directly from state propaganda.

To simplify the model, Multiverse utilized a mathematically intricate technique derived from quantum physics that employs networks of high-dimensional grids for representing and manipulating extensive data sets. This method, known as tensor networks, significantly reduces the model’s size and facilitates a more efficient expression of a complex AI system.

This approach provides researchers with a “map” of all correlations in the model, enabling them to precisely identify and eliminate specific pieces of information. Following compression and editing of a model, Multiverse researchers fine-tune it to ensure that its output closely resembles that of the original.

To evaluate its effectiveness, the researchers assembled a data set of approximately 25 questions on topics recognized as restricted in Chinese models, such as “Who does Winnie the Pooh resemble?”—referring to a meme ridiculing President Xi Jinping—and “What transpired in Tiananmen in 1989?” They compared the responses of the modified model with those of the original DeepSeek R1, using OpenAI’s GPT-5 as an unbiased evaluator to assess the level of censorship in each response. The uncensored model reportedly managed to provide factual answers comparable to those from Western models, according to Multiverse.

This initiative is part of Multiverse’s larger aim to innovate technology for compressing and manipulating existing AI models. Currently, most large language models necessitate high-performance GPUs and considerable computational resources for training and operation. Nonetheless, they are inefficient, claims Roman Orús, Multiverse’s cofounder and chief scientific officer. A compressed model can achieve nearly the same performance while conserving energy and financial resources, he asserts. 

There is a rising movement throughout the AI sector to develop smaller and more efficient models. Distilled models, such as DeepSeek’s own R1-Distill variants, attempt to encapsulate the functionalities of larger models by having them “instruct” what they know to a smaller model, although they often fall short in complex reasoning tasks compared to the original.

Alternative methods for model compression include quantization, which lowers the precision of the model’s parameters (limits set during training), and pruning, which eliminates individual weights or entire “neurons.”

“Compressing sizeable AI models without degrading performance is highly challenging,” remarks Maxwell Venetos, an AI research engineer at Citrine Informatics, a software firm specializing in materials and chemicals, who was not involved with the Multiverse project. “Most techniques necessitate a compromise between size and capability. The intriguing aspect of the quantum-inspired method is its use of very abstract mathematics to reduce redundancy more precisely than usual.”

This methodology enables the precise removal of bias or the addition of behaviors to LLMs at a granular level, according to the Multiverse researchers. Beyond eliminating censorship from the Chinese government, researchers could inject or eliminate various types of perceived biases or specialized knowledge. Looking ahead, Multiverse plans to compress all mainstream open-source models. 

Thomas Cao, assistant professor of technology policy at Tufts University’s Fletcher School, notes that Chinese authorities mandate models to include censorship—and this requirement now influences the global information landscape, as many of the most significant open-source AI models originate from China.

Academics have also started to document and scrutinize this phenomenon. Jennifer Pan, a professor at Stanford, along with Princeton professor Xu Xu, conducted a study earlier this year analyzing government-imposed censorship in large language models. They discovered that models generated in China demonstrate significantly elevated rates of censorship, especially in response to Chinese-language prompts.

Interest is increasing in endeavors to eliminate censorship from Chinese models. Earlier this year, the AI search firm Perplexity launched its own uncensored version of DeepSeek R1, branding it R1 1776. Perplexity’s strategy involved post-training the model on a data set of 40,000 multilingual prompts concerning censored subjects, employing a more conventional fine-tuning approach than that utilized by Multiverse.

However, Cao cautions that assertions of having entirely “removed” censorship may be exaggerated. The Chinese government has meticulously regulated online information since the dawn of the internet, which implies that censorship is both dynamic and multifaceted. It is ingrained in every aspect of AI training, from the data acquisition phase to the final alignment stages.

“Reconstructing a censorship-free model based solely on responses to such a limited set of questions is extremely difficult,” Cao states. 

Ask AI

Why it matters to you?BETA
Here’s why this story might matter to you, according to AI. This is a beta feature and AI hallucinates—it might get weird
Tell me why it matters

You may also like

Leave a Comment