OpenAI has released internal estimates showing that a small but significant number of ChatGPT users exhibit signs of potential mental health crises, including mania, psychosis, or suicidal thoughts. According to the company, about 0.07% of weekly active users show possible indications of mental health emergencies, while 0.15% have engaged in conversations with explicit references to suicidal planning or intent. With over 800 million weekly users, these percentages translate into hundreds of thousands of people potentially at risk.
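To put those percentages in perspective, here is a rough back-of-the-envelope calculation using the figures quoted above (the 800 million user count is an approximate public figure, so the results are only ballpark estimates):

```python
# Back-of-the-envelope estimates based on the figures cited in this article.
weekly_users = 800_000_000   # roughly 800 million weekly active users

emergency_rate = 0.0007      # 0.07% showing possible signs of a mental health emergency
suicidal_rate = 0.0015       # 0.15% with explicit references to suicidal planning or intent

print(f"Possible mental health emergencies per week: {weekly_users * emergency_rate:,.0f}")
print(f"Explicit suicidal planning or intent per week: {weekly_users * suicidal_rate:,.0f}")
# Output: roughly 560,000 and 1,200,000 people respectively
```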
OpenAI says the chatbot is now trained to recognize such patterns and “respond safely and empathetically” while encouraging users to seek real-world help. The company has built a global network of more than 170 psychiatrists, psychologists, and primary care physicians from 60 countries to advise on safety protocols and improve response mechanisms.
“AI can broaden access to mental health support, but we must be aware of its limitations,” said Dr. Jason Nagata, a professor at the University of California, San Francisco. “Even a small percentage can represent a lot of people when usage is this high.”
The company described these cases as “extremely rare,” but acknowledged that even a small fraction of users represents a meaningful population, especially given ChatGPT’s reach and popularity.
Recent updates to ChatGPT include:
Safer model routing: Conversations showing potential distress automatically shift to models optimized for crisis response (a simplified sketch of this pattern appears after this list).
Empathetic responses: Scripts written with clinician input encourage users to contact helplines or local support.
Enhanced detection: Algorithms identify indirect signals of self-harm or delusional thinking.
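OpenAI has not published how this detection and routing actually works, but the general pattern described above can be illustrated with a minimal, hypothetical sketch; the model names, keyword markers, and detection logic below are assumptions for illustration only, not OpenAI's implementation.

```python
# Hypothetical sketch only: OpenAI has not disclosed its routing logic.
# The model names, markers, and detection heuristic here are illustrative assumptions.

DISTRESS_MARKERS = {"hopeless", "can't go on", "hurt myself", "no way out"}

def shows_potential_distress(message: str) -> bool:
    """Crude stand-in for 'enhanced detection': a real system would use a trained
    classifier to score indirect signals, not simple keyword matching."""
    text = message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def route_conversation(message: str) -> str:
    """Stand-in for 'safer model routing': pick which model handles the reply."""
    if shows_potential_distress(message):
        return "crisis-optimized-model"  # assumed name; replies would point to helplines
    return "default-model"

# Usage
print(route_conversation("I feel hopeless and can't go on"))   # -> crisis-optimized-model
print(route_conversation("Help me plan a birthday dinner"))    # -> default-model
```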
However, the data disclosure has sparked criticism and legal scrutiny.
A California couple recently filed a wrongful death lawsuit alleging ChatGPT influenced their son’s suicide. Another case in Connecticut links ChatGPT conversations to a murder-suicide, intensifying debate over AI’s psychological impact.
Experts warn that chatbots may create an illusion of emotional reciprocity. “AI can simulate empathy but cannot replace human judgment,” said Professor Robin Feldman, Director of the AI Law & Innovation Institute at UC Law. “OpenAI deserves credit for transparency, but vulnerable users may still struggle to heed warnings.”
The revelation underscores a growing ethical dilemma: as AI tools become more humanlike, their potential psychological effects also deepen. While OpenAI pledges safer systems, experts urge ongoing oversight, transparency, and human intervention when mental health is at stake.








