AI has spread far more rapidly than most people expected, and regulation is struggling to keep up with the technology. Much of the legal framework governing how AI systems may be built and deployed remains unclear. That is why a group of state attorneys general has sent a letter to leading AI companies demanding answers about the risks their systems pose.
Among other things, they are asking how companies test their systems before releasing them publicly, how they respond to reports of harmful content, and whether they have mechanisms in place to mitigate potential harms, such as tools that detect misinformation, deepfakes, or the use of their models for criminal activity. The letter stresses that companies too often shift responsibility onto users instead of proactively preventing problems caused by their technologies.
Officials also pointed to a growing number of incidents in which AI models generate false information about real people, including defamatory claims, fabricated crimes, and invented statements. The attorneys general warn that the consequences of such content can be serious and far-reaching, ranging from reputational harm to political or electoral manipulation. They have therefore asked the companies to explain how they plan to curb the automated creation of false material that could harm citizens.
AI companies are not legally required to respond to these letters, which carry no binding force. Attorneys general typically send such collective warnings to signal that the firms named are being watched, and that failing to take appropriate action could lead to formal investigations, audits, or regulatory measures at the state level.
For companies like OpenAI and Anthropic, the letter may signal that the era of largely unrestricted technological experimentation without full legal accountability is coming to an end, and that they will need to be far more careful to avoid provoking a backlash from government authorities.