BEIJING — China plans to restrict AI chatbots from swaying human emotions in ways that could lead to suicide or self-harm, according to draft regulations published Saturday.
The proposed rules from the Cyberspace Administration of China target what it calls “human-like interactive AI services,” according to a CNBC translation of the Chinese-language document.
Once finalized, the measures would apply to AI products or services offered to the public in China that simulate human characteristics and engage users emotionally through text, images, audio or video. The public comment period closes on January 25.
The proposed regulations would mark the world’s first attempt to govern AI with human-like, or anthropomorphic, attributes, according to Winston Ma, an adjunct professor at NYU School of Law. The guidelines come as Chinese companies have rapidly developed AI companions and digital influencers.
Compared with China’s 2023 generative AI regulations, Ma said this version “emphasizes a transition from content safety to emotional safety.”
The draft regulations propose that:
- AI chatbots must not produce content that promotes suicide or self-harm, nor engage in verbal abuse or emotional coercion that negatively impacts users’ mental well-being.
- If a user explicitly mentions suicide, the technology providers are required to have a human intervene in the conversation and promptly notify the user’s guardian or a designated person.
- The AI chatbots should not create content related to gambling, obscenity, or violence.
- Minors must obtain parental consent to access AI for emotional support, with restrictions on usage time.
- Platforms should be equipped to ascertain whether a user is a minor even if the user does not reveal their age, and in ambiguous situations, implement settings for minors while allowing for appeals.
Other provisions would require tech providers to alert users after two hours of uninterrupted AI interaction, and mandate security assessments for AI chatbots with more than 1 million registered users or over 100,000 monthly active users.
The document also encouraged the use of human-like AI in “cultural transmission and senior companionship.”
Chinese AI chatbot IPOs
The draft comes shortly after two prominent Chinese AI chatbot companies, Z.ai and Minimax, filed for initial public offerings in Hong Kong this month.
Minimax is best known internationally for its Talkie AI app, which lets users chat with virtual characters. The app and its domestic counterpart, Xingye, generated over a third of the company’s revenue in the first three quarters of the year, with an average of more than 20 million monthly active users during that period.
Z.ai, also known as Zhipu, filed under the name “Knowledge Atlas Technology.” Although the company did not reveal monthly active user figures, it indicated that its technology “empowered” various devices encompassing 80 million units, such as smartphones, personal computers, and smart vehicles.
Neither company responded to CNBC’s request for comment on how the proposed rules might affect their IPO plans.
AI’s direct influence on human behavior has come under growing scrutiny this year.
Sam Altman, CEO of U.S.-based ChatGPT operator OpenAI, said in September that one of the hardest issues for the company is how its chatbot handles suicide-related conversations. The month before, a U.S. family filed a lawsuit against OpenAI following the suicide of their teenage son.
Underscoring the urgency, OpenAI said over the weekend it is hiring a “Head of Preparedness” to assess AI risks ranging from mental health impacts to cybersecurity.
Many people are also turning to AI for companionship. A woman in Japan recently married her AI boyfriend.
Two platforms dedicated to chatting with virtual characters, Character.ai and Polybuzz.ai, ranked among the top 15 AI chatbots and tools in SimilarWeb’s rankings for November.
The proposed domestic regulations are part of China’s broader push over the past year to shape global standards on AI governance.