    UK tightens laws on chatbots to protect children from AI-related risks

    By Mikolaj Laszkiewicz, February 17, 2026

    Prime Minister Keir Starmer said the government intends to strengthen online safety rules and close a loophole that currently allows AI chatbots to operate outside the protections applied to social media and public platforms. The goal is to require all chatbot providers to comply with safety standards, particularly those aimed at shielding children from illegal and harmful content, including non-consensual sexualized material and AI-generated imagery.

    The current legislative proposals, expected to be introduced soon after public consultations conclude, would expand obligations for AI chatbot operators to monitor and remove content that could harm minors. They would also formally bring such tools under the UK’s Online Safety Act, which until now has mainly covered social platforms and public forums and has not fully addressed one-to-one interactions with AI systems.

    The government hopes the new rules will allow authorities to respond to risks far more rapidly, within months rather than years, as the technology evolves quickly. At the same time, Starmer and his ministers are consulting on the possibility of introducing a minimum age threshold, such as restricting access to certain online services for users under 16, similar to rules already in force in Australia, where people under 16 have been barred since December 2025 from holding accounts on major social media platforms.

    Stricter regulation of AI chatbots in the UK reflects a broader international trend of governments responding to risks tied to young users’ access to AI. In Australia, alongside minimum age rules for social media, regulators require companies to verify users’ ages and impose penalties on platforms that fail to take adequate measures.

    Other countries and organizations are also taking action. Spain has launched investigations into platforms such as X, Meta, and TikTok over possible distribution of AI-generated material involving minors, which could violate existing child-protection laws.

    Experts note that the push for regulation is driven not only by cases of inappropriate content generation, but also by observations that children and teenagers increasingly interact with chatbots for emotional support or information, raising potential privacy and safety concerns.
