2digital.news

    OpenAI and Other AI Companies Receive Warning Letters from Multiple State Attorneys General Demanding Transparency and Greater Accountability

By Mikolaj Laszkiewicz, December 11, 2025

AI has spread far more rapidly than most people expected, and regulation is struggling to keep pace with the technology. Many of the rules governing how AI systems may operate remain unclear. That is why state attorneys general are demanding answers about the potential risks associated with the use of AI.

They are asking, among other things, how companies test their systems before releasing them publicly, how they respond to reports of harmful content, and whether they implement mechanisms to reduce potential harms, such as tools that detect misinformation, deepfakes, or the use of models for potentially criminal activities. The letter stresses that companies too often shift responsibility onto users instead of proactively preventing issues caused by their technologies.

Officials also pointed to a growing number of incidents in which AI models generate false information about real people, including defamatory claims, fabricated crimes, and invented statements. The attorneys general warn that the consequences of such content can be serious and far-reaching, from reputational harm to political or electoral misuse. They have therefore requested information on how companies plan to curb the automatic creation of false material that could harm citizens.

    AI companies are not legally required to respond to these letters, as they carry no binding force. Attorneys general typically send such collective warnings to signal that selected firms are being monitored — and that failing to take appropriate action could lead to formal investigations, audits, or regulatory measures at the state level.

    For companies like OpenAI and Anthropic, the letter may be a sign that the era of nearly unrestricted technological experimentation without full legal accountability is coming to an end — and that they must now be far more cautious not to provoke backlash from government authorities.
