
    OpenAI and Other AI Companies Receive Warning Letters from Multiple State Attorneys General Demanding Transparency and Greater Accountability

December 11, 2025

AI has spread far more rapidly than most people expected, and regulation is struggling to keep pace with the technology. Many of the rules governing how AI systems operate remain unclear, which is why state attorneys general are now demanding answers about the potential risks these systems pose.

They are asking, among other things, how companies test their systems before public release, how they respond to reports of harmful content, and whether they have mechanisms in place to mitigate potential harm, such as tools that detect misinformation, deepfakes, or the use of models for criminal activity. The letter stresses that companies too often shift responsibility onto users instead of proactively preventing the problems their technologies create.

Officials also pointed to a growing number of incidents in which AI models generate false information about real people, including defamation, fabricated crimes, and invented statements. The attorneys general warn that the consequences of such content can be serious and far-reaching, ranging from reputational harm to political or electoral misuse. They have therefore requested information on how companies plan to curb the automatic generation of false material that could harm citizens.

    AI companies are not legally required to respond to these letters, as they carry no binding force. Attorneys general typically send such collective warnings to signal that selected firms are being monitored — and that failing to take appropriate action could lead to formal investigations, audits, or regulatory measures at the state level.

    For companies like OpenAI and Anthropic, the letter may be a sign that the era of nearly unrestricted technological experimentation without full legal accountability is coming to an end — and that they must now be far more cautious not to provoke backlash from government authorities.

    Mikolaj Laszkiewicz

    An experienced journalist and editor passionate about new technologies, computers, and scientific discoveries. He strives to bring a unique perspective to every topic. A law graduate.
