    OpenAI and Other AI Companies Receive Warning Letters from Multiple State Attorneys General Demanding Transparency and Greater Accountability

December 11, 2025

AI has spread far more rapidly than most people expected, and regulation is struggling to keep up with the technology. Many of the rules governing how AI systems may be built and deployed remain unclear, which is why state attorneys general are demanding answers about the potential risks of the technology's use.

They are asking, among other things, how companies test their systems before releasing them publicly, how they respond to reports of harmful content, and whether they implement safeguards to reduce potential harm, such as tools that detect misinformation, deepfakes, or the use of models for criminal activity. The letter stresses that companies too often shift responsibility onto users instead of proactively preventing problems caused by their technologies.

Officials also pointed to a growing number of incidents in which AI models generate false information about real people, including defamation, fabricated crimes, and invented statements. Attorneys general warn that the consequences of such content can be serious and far-reaching, from reputational harm to political or electoral misuse. They have therefore requested information on how companies plan to curb the automated creation of false material that could harm citizens.

    AI companies are not legally required to respond to these letters, as they carry no binding force. Attorneys general typically send such collective warnings to signal that selected firms are being monitored — and that failing to take appropriate action could lead to formal investigations, audits, or regulatory measures at the state level.

    For companies like OpenAI and Anthropic, the letter may be a sign that the era of nearly unrestricted technological experimentation without full legal accountability is coming to an end — and that they must now be far more cautious not to provoke backlash from government authorities.

    Mikolaj Laszkiewicz
