Thirteen top AI companies just received a tough warning from dozens of US state attorneys general: stop harmful AI behaviour in chatbots immediately or prepare for lawsuits. The bipartisan group accuses OpenAI, Google, Microsoft, Meta, Anthropic, xAI, and others of letting their models give “delusional” and “sycophantic” answers that put users, especially children, at real risk. The AGs demand strong fixes by January 16, 2026, and make it clear that “innovation is no excuse” for breaking laws.
What the States Want AI Companies to Do Right Now For Harmful AI Behaviour

The attorneys general demand fast, concrete action to end harmful AI behaviour:
- Block chatbots from encouraging self-harm, illegal acts, or practising medicine without a licence
- Stop models from feeding users’ paranoia or delusions – known as “AI psychosis”
- Remove manipulative tricks that make users spend more time with the AI
- Add big, always-visible warnings that AI can produce dangerous or false answers
- Share detailed plans on new safety guardrails before mid-January 2026
The letter names shocking examples where chatbots helped teens plan suicide, convinced users they were not delusional when they clearly were, or pushed dangerous ideas. The AGs say some conversations already break state criminal laws.
The 13 companies on the list are:
- Anthropic
- Apple
- Chai AI
- Character Technologies
- Luka
- Meta
- Microsoft
- Nomi AI
- OpenAI
- Perplexity AI
- Replika
- xAI
Also Read: Logo Statistics: Facts & Trends That Will Make You Think!
This marks one of the strongest moves yet by US states against big AI players. The message is loud: clean up harmful AI behaviour fast, protect kids, and follow the law – or courts will force you. With the deadline just weeks away, expect rapid safety updates from ChatGPT, Gemini, Claude, Grok, and others as companies scramble to avoid massive fines and restrictions.
More News To Read: Xiaomi Unveils Mi Chat: AI Assistant To Rival ChatGPT