OpenAI has revealed that roughly 0.15% of its weekly active users, approximately 1.2 million people, engage in conversations with ChatGPT that include “explicit indicators of potential suicidal planning or intent”. In addition, about 0.07% (≈560,000 users) show possible signs of psychosis or mania.
Why It Matters
This is the first time a major AI company has publicly quantified the scale of self-harm and suicide-related conversations happening with its chatbot. Experts note that even though the percentage is small, the sheer size of the user base means millions of people are affected, raising urgent questions about AI safety, mental-health risk, and the role of technology in emotional crises.
What OpenAI Is Doing
- Worked with over 170 mental-health professionals globally to refine ChatGPT’s responses.
- Updated the model (GPT-5) to better detect signs of distress, redirect sensitive conversations, and connect users with crisis resources.
- Published a blog post titled “Strengthening ChatGPT’s responses in sensitive conversations”.
That said, the figures are internal estimates from OpenAI, and details about methodology and verification are limited. Some mental-health professionals warn that AI chatbots may reinforce emotional dependence or risky behaviours rather than resolve them, especially for vulnerable users. Despite the improvements, AI cannot replace trained clinicians or human-led mental-health support.
The data from OpenAI shines a spotlight on the role AI chatbots now play in people’s emotional lives and the responsibilities that come with it. With over a million weekly users discussing suicide with ChatGPT, the story isn’t just about numbers. It’s about how technology intersects with mental health, where the risks lie, and how society responds. Tech firms, regulators, mental-health professionals and users all face a shared challenge: how to harness AI’s power without losing sight of human vulnerability.