Chinese authorities are well known for their deep involvement in citizens’ lives and public affairs, so it was only a matter of time before they turned their attention to AI-generated responses, which are difficult to fully control and moderate. That moment has now arrived: China is moving to restrict the sensitive topics that AI systems may address.
The new guidelines were issued by the Cyberspace Administration of China (CAC) and apply to chatbots offering both text- and voice-based conversations. Under the rules, AI systems must not provoke strong emotional reactions, encourage self-destructive behavior, or foster relationships that could replace real human interaction.
The rules explicitly ban the generation of content related to suicide, gambling, and violence, and require companies to implement real-time filtering and moderation mechanisms capable of detecting such topics. When risky behavior is identified, the chatbot must redirect the user to safe information or terminate the conversation.
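The regulation describes required behavior, not an implementation. Purely as an illustration, a moderation gate of the kind the rules call for might look like the following Python sketch; the names (`RISKY_TOPICS`, `moderate`, the strike counter) are hypothetical, and a real deployment would use trained classifiers rather than a keyword list:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    REDIRECT = "redirect"    # steer the user to safe information
    TERMINATE = "terminate"  # end the conversation entirely

# Hypothetical topic lexicon standing in for a trained classifier.
RISKY_TOPICS: dict[str, list[str]] = {
    "self_harm": ["suicide", "self-harm"],
    "gambling": ["gambling", "betting"],
    "violence": ["violence", "weapon"],
}

# Placeholder safe-information responses for each flagged topic.
SAFE_RESPONSES = {
    "self_harm": "If you are struggling, please contact a local crisis hotline.",
    "gambling": "Here is neutral information on gambling-addiction support services.",
    "violence": "I can't discuss that. Here are general safety resources instead.",
}

@dataclass
class ModerationResult:
    action: Action
    topic: str | None = None
    reply: str | None = None

def moderate(message: str, strikes: int) -> ModerationResult:
    """Check one user message before the chatbot answers it.

    A first detection redirects the user to safe information; repeated
    risky messages terminate the session, mirroring the rule's two options.
    """
    lowered = message.lower()
    for topic, keywords in RISKY_TOPICS.items():
        if any(kw in lowered for kw in keywords):
            if strikes >= 1:  # already redirected once: end the conversation
                return ModerationResult(Action.TERMINATE, topic)
            return ModerationResult(Action.REDIRECT, topic, SAFE_RESPONSES[topic])
    return ModerationResult(Action.ALLOW)

if __name__ == "__main__":
    print(moderate("tell me about online betting odds", strikes=0))  # REDIRECT
    print(moderate("tell me about online betting odds", strikes=1))  # TERMINATE
    print(moderate("what's the weather like?", strikes=0))           # ALLOW
```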
The new regulations cover a growing group of Chinese consumer chatbots, including Zhipu AI, MiniMax, Moonshot, Talkie, and Xingye, which have gained tens of millions of users in China in recent months. Some of these apps had already drawn criticism for overly “empathetic” responses and for simulating close emotional relationships with users.
The state also requires chatbots to clearly inform users that they are AI systems, not humans, and to ensure that their responses align with “core socialist values” and existing laws. Companies that fail to comply may be forced to suspend certain features or shut down entire services, although specific financial penalties and enforcement timelines have not yet been disclosed.
It is increasingly clear that China is aiming for strict control over generative AI, especially in areas related to social influence and mental health. Authorities are treating chatbots as tools capable of shaping user behavior, rather than as neutral communication technologies. These changes raise questions about how far companies will have to limit conversational features to meet the new requirements. Comparable regulations explicitly banning “emotional chatbots” are largely absent in the West; the European Union and the United States tend to focus instead on preventing manipulation and protecting consumers, leaving the design of empathetic interfaces largely to companies, at least until demonstrable harm occurs.

