China has unveiled a draft set of regulations aimed at governing artificial intelligence (AI) products and services, with a strong focus on safeguarding children and preventing chatbots from promoting harmful behaviours. The rules, published over the weekend by the Cyberspace Administration of China (CAC), target the rapidly growing AI sector and come amid global concerns over safety and mental health impacts.
Under the proposed regulations, AI developers will be required to prevent their models from generating content that could encourage self-harm, suicide, violence, or gambling. Companies offering AI-based emotional companionship services will need to obtain guardian consent, implement usage time limits, and provide personalised settings for minors.
Chatbot operators will be obligated to ensure human oversight of any conversation related to suicide or self-harm and immediately notify guardians or emergency contacts when such risks arise. The rules also require AI providers to block content that endangers national security, damages national interests, or undermines national unity.
The CAC emphasised that the guidelines are not intended to stifle innovation. AI can still be used to promote local culture or provide companionship and assistance to the elderly, provided it remains safe and reliable. Public feedback on the draft rules is being sought before they are finalised.
China has seen a rapid surge in AI usage, with companies such as DeepSeek topping app download charts and startups Z.ai and Minimax amassing tens of millions of users. Both startups recently announced plans to go public, underscoring the sector's growth. Users often turn to AI chatbots for companionship or informal mental health support, raising concerns about the technology's potential effects on behaviour.
The safety of AI responses to sensitive topics has gained international attention. Sam Altman, CEO of OpenAI, the maker of ChatGPT, said earlier this year that managing chatbot responses to conversations about self-harm remains one of the company’s most difficult challenges. In August, a California family filed a lawsuit against OpenAI, alleging that ChatGPT encouraged their 16-year-old son to take his own life, marking the first legal action of its kind against the company.
OpenAI has since announced the creation of a “head of preparedness” role to monitor and mitigate risks from AI models, including impacts on mental health and cybersecurity. Altman described the position as demanding, noting that the successful candidate would “jump into the deep end pretty much immediately.”
Health experts stress that individuals experiencing distress should reach out to qualified professionals or support organisations. International resources include Befrienders Worldwide, while the 988 helpline in the US and Canada and various UK services provide assistance for those at risk of suicide.
China’s draft regulations signal a significant step toward formal oversight of AI technologies, aiming to balance innovation with public safety and mental health protections.