Prime Minister Keir Starmer said the government intends to strengthen online safety rules and close a loophole that currently allows AI chatbots to operate outside the protections applied to social media and public platforms. The goal is to require all chatbot providers to comply with safety standards, particularly those aimed at shielding children from illegal and harmful content, including non-consensual sexualized material, whether real or AI-generated.
The legislative proposals, expected to be introduced soon after public consultations conclude, would expand AI chatbot operators' obligations to monitor and remove content that could harm minors. They would also formally bring such tools under the UK's Online Safety Act, which has so far mainly covered social platforms and public forums and has not fully addressed one-to-one interactions with AI systems.
The government hopes the new rules will allow authorities to respond to risks within months rather than years, keeping pace with a fast-evolving technology. At the same time, Starmer and his ministers are consulting on a possible minimum age threshold, such as restricting access to certain online services for users under 16. Similar rules are already in force in Australia, where people under 16 have been barred since December 2025 from holding accounts on major social media platforms.
Stricter regulation of AI chatbots in the UK reflects a broader international trend of governments responding to risks tied to young users' access to AI. In Australia, alongside the minimum age rules for social media, regulators require companies to verify users' ages and can penalize platforms that fail to take adequate measures.
Other governments are taking similar action. Spain has launched investigations into platforms such as X, Meta, and TikTok over the possible distribution of AI-generated material involving minors, which could violate existing child-protection laws.
Experts note that the push for regulation is driven not only by cases of inappropriate content generation, but also by the growing number of children and teenagers who turn to chatbots for emotional support or information, which raises privacy and safety concerns of its own.

