
    UK tightens laws on chatbots to protect children from AI-related risks

By Mikolaj Laszkiewicz, February 17, 2026

Prime Minister Keir Starmer said the government intends to strengthen online safety rules and close a loophole that currently allows AI chatbots to operate outside the protections applied to social media and public platforms. The goal is to require all chatbot providers to comply with safety standards, particularly those aimed at shielding children from illegal and harmful content, including non-consensual sexualized material and AI-generated images.

    The current legislative proposals, expected to be introduced quickly after public consultations conclude, would expand obligations for AI chatbot operators to monitor and remove content that could harm minors. They would also formally bring such tools under the UK’s Online Safety Act, which until now has mainly covered social platforms and public forums but did not fully address one-to-one interactions with AI systems.

The government hopes the new rules will allow authorities to respond to emerging risks within months rather than years, keeping pace with the technology as it evolves. At the same time, Starmer and his ministers are consulting on a possible minimum age threshold, such as restricting certain online services for users under 16. Similar rules are already in force in Australia, where since December 2025 people under 16 have been barred from holding accounts on major social media platforms.

    Stricter regulation of AI chatbots in the UK reflects a broader international trend of governments responding to risks tied to young users’ access to AI. In Australia, alongside minimum age rules for social media, regulators require companies to verify users’ ages and impose penalties on platforms that fail to take adequate measures.

    Other countries and organizations are also taking action. Spain has launched investigations into platforms such as X, Meta, and TikTok over possible distribution of AI-generated material involving minors, which could violate existing child-protection laws.

Experts note that the push for regulation is driven not only by cases of inappropriate content generation, but also by the observation that children and teenagers increasingly turn to chatbots for emotional support or information, raising privacy and safety concerns.
