China Tightens Controls on Smart Chatbots to Protect Users

New Regulations Reflect a Shift from Content Safety to Emotional Safety

China plans to impose strict new restrictions on AI-powered chatbots, aiming to limit their emotional impact on users, particularly in sensitive areas such as suicide, self-harm, and gambling, according to draft regulations published Saturday.

The draft, issued by the Cyberspace Administration of China, targets what are known as "human-like interactive AI services"—systems that mimic human personality and interact with users emotionally through text, images, audio, or video. The public comment period is scheduled to continue until January 25.

Experts believe this move sets a global precedent, as China would be the first country to attempt to regulate human-like or anthropomorphic AI, according to a report by CNBC. Winston Ma, an associate law professor at New York University, said the new regulations reflect a shift "from content safety to emotional safety," given the rapid proliferation of "digital companion" apps and virtual celebrities in China.

Strict Content and Interaction Restrictions

The draft prohibits the creation of any content that encourages suicide or self-harm, or includes verbal abuse or emotional manipulation that could harm users' mental health. If a user expresses an explicit intention to commit suicide, the rules require tech companies to intervene immediately, contacting a parent or guardian directly.

The regulations also prohibit chatbots from producing content related to gambling, pornography, or violence. They impose specific restrictions on minors' use of AI-based "emotional companionship" technologies, including requiring parental consent and setting time limits for use. 

The rules require platforms to attempt to verify users' ages, even if they are not disclosed, and to apply age-specific settings when in doubt, while also providing an appeals mechanism.

The regulations also require companies to alert users after two hours of continuous interaction with AI and to conduct mandatory security assessments for bots with more than one million registered users or 100,000 monthly active users. Despite the stricter measures, the document encourages the use of human-like AI in areas such as cultural outreach and elderly care.

Sensitive Timing Amid IPO Boom

This regulatory move comes as two of China’s leading chatbot developers, Z.ai and Minimax, have filed for initial public offerings (IPOs) on the Hong Kong Stock Exchange.

Minimax is known for its Talkie AI app, which allows users to interact with virtual avatars and boasts tens of millions of monthly active users. Neither company has commented on the impact of the proposed rules on their IPO plans, but the timing underscores the increasing regulatory scrutiny of this rapidly growing sector.

Growing Global Concerns

The debate surrounding the impact of emotional AI is not limited to China: concerns are growing worldwide about the effects of these technologies on human behavior and mental health.

OpenAI CEO Sam Altman recently acknowledged the difficulty chatbots face in handling suicide communications, as the company announced the appointment of a new head of AI risk assessment, covering everything from mental health to cybersecurity.

This Chinese move reflects Beijing’s ongoing efforts to shape global AI governance rules at a time when individuals are increasingly reliant on these technologies, even in personal relationships, raising unprecedented ethical and regulatory questions.
