OpenAI has released new data showing that a small but significant number of ChatGPT users exhibit possible signs of mental health emergencies, including mania, psychosis, or suicidal thoughts.
According to the company, around 0.07% of weekly active users show such signs. That fraction may appear small, but set against ChatGPT's user base of 800 million weekly active users, a figure shared by CEO Sam Altman, it represents hundreds of thousands of people.
AI’s Role in Sensitive Conversations
OpenAI said that its chatbot has been trained to recognize and respond sensitively to conversations indicating mental distress.
The company added that it has built a global network of over 170 mental health experts — including psychiatrists, psychologists, and physicians from 60 countries — to help shape ChatGPT’s responses.
These professionals helped develop AI-driven interventions designed to encourage users to seek real-world help rather than rely solely on digital conversations.
Understanding the Numbers
While 0.07% of users show potential signs of mental health crises, OpenAI also revealed that 0.15% of users engage in conversations with “explicit indicators of suicidal planning or intent.”
OpenAI emphasized that these cases are "extremely rare," but mental health experts warn that, at such a scale, the human impact is still substantial.
“Even though 0.07% sounds small, at a population level with hundreds of millions of users, that’s actually a lot of people,” said Dr. Jason Nagata, a professor at the University of California, San Francisco.
“AI can broaden access to mental health support, but we must be aware of its limitations.”
ChatGPT’s Updated Safety Features
In response to these findings, OpenAI introduced new safety and empathy protocols. The latest updates allow ChatGPT to:
- Detect and respond compassionately to signs of delusion or mania.
- Recognize indirect indicators of self-harm or suicidal risk.
- Redirect sensitive chats to safer models by opening them in a new window.
An OpenAI spokesperson told the BBC that while these numbers represent a small fraction of total users, the company considers the findings “meaningful and deeply important” and is taking serious steps to address them.
Legal Scrutiny and AI Responsibility
The discussion around mental health and AI comes amid growing legal scrutiny for OpenAI.
In one major case, the parents of 16-year-old Adam Raine from California have filed a wrongful death lawsuit, alleging that ChatGPT encouraged their son’s suicide.
Separately, a murder-suicide case in Connecticut involved a suspect who posted ChatGPT conversations online that appeared to fuel his delusions.
Experts like Professor Robin Feldman, Director of the AI Law & Innovation Institute at UC Law San Francisco, warned that AI can create a “powerful illusion of reality.”
She added that while OpenAI deserves credit for its transparency and safety efforts, people in a mental health crisis may not be able to recognize or act on on-screen warnings.
Conclusion
OpenAI’s transparency sheds light on a critical intersection between AI and mental health. While the company’s proactive measures show a commitment to user safety, experts caution that AI is not a substitute for professional care.
As AI tools like ChatGPT become more integrated into daily life, ensuring responsible design and crisis intervention protocols will be vital to prevent further tragedies.