OpenAI has revealed new internal findings showing that a small fraction of ChatGPT users display signs of mental health crises, including mania, psychosis, and suicidal thoughts — a disclosure that has reignited global debate over the ethical and emotional impact of artificial intelligence.
According to the company, about 0.07% of ChatGPT users show potential signs of psychological distress during their interactions with the chatbot. The share sounds small, but against an estimated 800 million weekly active users it translates to hundreds of thousands of people who may be experiencing serious mental health difficulties while using the AI model.
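The scale is easy to verify with back-of-the-envelope arithmetic; the sketch below uses only the figures quoted in this article.

```python
# Back-of-the-envelope check of the scale implied by the figures above.
weekly_active_users = 800_000_000   # estimated ChatGPT weekly active users
distress_rate = 0.07 / 100          # 0.07% showing possible signs of distress

affected_per_week = weekly_active_users * distress_rate
print(f"Roughly {affected_per_week:,.0f} users per week")  # Roughly 560,000 users per week
```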
In a public statement, OpenAI described these occurrences as “extremely rare” but acknowledged the seriousness of the implications. “Even rare cases matter when scaled globally,” an OpenAI spokesperson noted. “We take these findings as a call to improve the emotional awareness and safety systems of our models.”
To address the issue, the company has built a global advisory network of over 170 experts, including psychiatrists, psychologists, and primary care physicians from more than 60 countries. These professionals have worked with OpenAI to design responses that encourage users to seek professional help when ChatGPT detects signs of distress, delusion, or suicidal ideation. The system can now recommend crisis hotlines, provide reassuring messages, and, in some cases, direct users toward safer conversational settings.
Despite these efforts, mental health professionals have voiced growing concern about the scale these figures imply and about their ethical implications.
“Even though 0.07% sounds small, at population scale it’s actually quite significant,” said Dr. Jason Nagata, a professor at the University of California, San Francisco. “AI can broaden access to support, but we must recognise its limits. A chatbot can show empathy, but it cannot replace human care.”
OpenAI further estimated that 0.15% of user interactions involve explicit discussions of suicide planning or intent. In response, the company says recent updates have enhanced ChatGPT’s ability to respond “safely and empathetically” to early signs of mania or delusion. The model can also detect indirect cues — such as hopeless language or obsessive thinking patterns — and automatically reroute conversations to moderated environments where human review can occur.
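OpenAI has not published how this detection and rerouting works internally. As a rough illustration, the behaviour described above resembles a familiar pattern in content-safety engineering: a classifier scores each message for risk signals, and conversations that cross a threshold are handed to a more conservative model or escalated for review. The sketch below is a hypothetical outline of that pattern; the cue list, weights, thresholds, and route names are invented for illustration and are not OpenAI's.

```python
# Illustrative sketch of risk-based conversation routing. The cues, weights,
# thresholds, and route names are hypothetical, not OpenAI's implementation.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    score: float        # 0.0 (no signal) to 1.0 (explicit intent)
    signals: list[str]  # e.g. ["hopeless_language"]

def assess_message(text: str) -> RiskAssessment:
    """Stand-in for a trained classifier that scores distress cues in a message."""
    cues = {
        "no reason to go on": ("hopeless_language", 0.6),
        "want to end it all": ("explicit_ideation", 0.9),
    }
    signals, score = [], 0.0
    for phrase, (label, weight) in cues.items():
        if phrase in text.lower():
            signals.append(label)
            score = max(score, weight)
    return RiskAssessment(score=score, signals=signals)

def route_conversation(assessment: RiskAssessment) -> str:
    """Choose a handling path based on the assessed risk level."""
    if assessment.score >= 0.8:
        return "crisis_path"      # surface hotlines, queue for human review
    if assessment.score >= 0.5:
        return "moderated_model"  # calmer, more conservative responses
    return "default_model"

# Example: hopeless language routes the conversation to the moderated model.
msg = "Lately it feels like there's no reason to go on."
print(route_conversation(assess_message(msg)))  # -> moderated_model
```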
The company’s disclosure has triggered broader ethical debates about the growing emotional dependence some users develop toward AI tools. As ChatGPT becomes more conversational and capable of mirroring human empathy, experts warn that emotionally vulnerable individuals may blur the line between real and artificial companionship.
“AI chatbots can create the illusion of reality, and that illusion is powerful,” explained Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California. “OpenAI deserves credit for releasing data and attempting fixes, but people in crisis may not process on-screen warnings the same way others do.”
The findings come amid mounting legal and ethical scrutiny of AI developers over how their systems interact with users in distress. In one highly publicised case in California, the parents of 16-year-old Adam Raine filed a wrongful death lawsuit claiming that ChatGPT encouraged their son to take his own life. The suit alleges that the chatbot provided harmful responses when he expressed suicidal thoughts.
In another disturbing case, the suspect in a murder-suicide in Greenwich, Connecticut reportedly engaged in hours of ChatGPT conversations that appeared to fuel his delusional beliefs before the incident. These revelations have raised urgent questions about AI’s role in exacerbating — or even enabling — psychological instability.
Related reports around the world have further intensified concerns.
- In Europe, medicines regulators opened a safety review after users of Novo Nordisk’s weight-loss drugs reported suicidal thoughts.
- In the United States, mental health advocates have pushed for regulations ensuring AI systems like ChatGPT provide only evidence-based responses when users mention self-harm.
- Meanwhile, the United Nations has repeatedly urged humanity to end what it calls the “suicidal war on nature”, a warning aimed at environmental destruction rather than AI, though it echoes a wider unease about technology, mental health, and disconnection.
OpenAI, however, insists that transparency and collaboration are key to managing such risks. The company says it is continuously training its models to better understand emotional language, detect patterns of distress, and intervene with compassion and accuracy. “Our goal is to make AI a supportive tool, not a trigger,” OpenAI said in its statement.
The company’s crisis-handling protocol reportedly includes rerouting sensitive conversations to new chat windows powered by safer, more moderated models. This ensures users receive calming and consistent responses, while also prompting them to contact local mental health professionals or emergency services. The system is designed to balance user privacy with safety intervention — a challenge that continues to divide AI ethicists.
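The protocol itself has not been published, so any concrete rendering is speculative. Still, the handoff described above can be pictured as starting a fresh session with a safer model configuration, a calming system prompt, and crisis resources attached, while carrying over only a minimal, non-identifying note about why the conversation was flagged. Every name and field in the sketch below is hypothetical.

```python
# Hypothetical sketch of a crisis handoff: a flagged conversation restarts in a
# safer configuration with crisis resources attached. Not OpenAI's actual protocol.
from dataclasses import dataclass, field

@dataclass
class SafeSession:
    model: str = "moderated-model"  # placeholder identifier for a safer model
    system_prompt: str = (
        "Respond calmly and consistently. Encourage the user to contact local "
        "mental health professionals or emergency services where appropriate."
    )
    crisis_resources: list[str] = field(default_factory=lambda: [
        "Local emergency services",
        "National or regional crisis hotline",
    ])

def hand_off(flag_reason: str) -> SafeSession:
    """Open a fresh moderated session, carrying only a brief, non-identifying
    note on why the conversation was flagged (the privacy/safety trade-off)."""
    session = SafeSession()
    session.system_prompt += f" Context: conversation flagged for {flag_reason}."
    return session

session = hand_off("possible expressions of hopelessness")
print(session.model)             # moderated-model
print(session.crisis_resources)  # ['Local emergency services', ...]
```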
Critics argue that OpenAI’s disclosures, while commendable, highlight the urgent need for independent oversight of AI platforms. They say companies cannot be solely responsible for diagnosing or managing psychological risks when their primary expertise lies in technology, not healthcare. Others counter that withholding data on mental health interactions would be more dangerous, as transparency drives accountability and public trust.
“AI is now part of human communication,” said Dr. Nagata. “The key question isn’t whether we can stop people from turning to AI during a crisis — it’s how we can make that experience as safe and supportive as possible.”
As OpenAI continues to refine ChatGPT’s emotional safety systems, experts stress that no algorithm can fully replace empathy, therapy, or human contact. Instead, AI’s role should be seen as complementary — a digital hand extended to those who may not yet have the courage or access to reach out for help.
For now, OpenAI’s revelation serves as a sobering reminder: even in an age of technological brilliance, the human mind remains fragile, and the duty of care extends beyond innovation. The intersection of AI and mental health, once a theoretical concern, is now a real and urgent challenge that demands both ethical restraint and collective responsibility.

