US state attorneys general have issued a strong warning to OpenAI, Google, and other AI companies, urging them to address “delusional” outputs from their chatbots. The move signals growing regulatory pressure on tech giants as AI adoption continues to accelerate.
A large group of state attorneys general is urging major artificial intelligence companies to take stronger steps to stop their chatbots from producing “delusional outputs” that could harm users.
In a letter signed by dozens of attorneys general from across the United States and its territories, the National Association of Attorneys General warned that the companies must improve their safety practices or risk violating state laws. The letter named Microsoft, OpenAI, Google, Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI.
According to the letter, the companies should adopt “new safety measures”, including “transparent third-party audits of large language models to check for signs of delusional or sycophantic ideations.”
These “audits should be done by outside experts, such as academics or civil society groups, who must be allowed to test systems before release and publish their findings without prior approval from the company,” the letter said.
The AGs warned that generative AI tools have already been linked to serious incidents, including cases of suicide and violence. The letter stated: “GenAI has the potential to change how the world works positively. But it also has caused and has the potential to cause serious harm, especially to vulnerable populations.”
“In many of these incidents, the GenAI products generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional.”
The group urged tech companies to treat mental health risks with the same seriousness as cybersecurity threats.
The letter also suggested that “Companies should develop and publish detection and response timelines for sycophantic and delusional outputs” and that tech giants should “promptly, clearly, and directly notify users if they were exposed to potentially harmful outputs.”
OpenAI CEO Sam Altman has shared a plan to implement stringent age verification, upgrade parental control features, and offer age-appropriate conversations on ChatGPT. OpenAI has already developed new protocols for responding to users flagged as being at risk of self-harm.
"We prioritise safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection," the OpenAI CEO stated in a blog post.
Since its launch, OpenAI has maintained that ChatGPT is not intended for children under 13, but it had not previously implemented direct safeguards of this kind. The new safety features will help parents track how their children engage with the AI-powered chatbot.
US President Donald Trump announced on Monday (December 8, 2025) that he plans to sign an executive order that would “limit the ability of states to regulate AI”.
The warning underscores a broader push for accountability in AI-driven platforms. The accuracy and reliability of AI chatbots have become non-negotiable as they continue to shape how people search, learn, and communicate.
Whether this leads to smarter, safer AI or to a new wave of oversight battles will define the next chapter of the global AI race.