The Surprising Number of ChatGPT Users Talking to the AI About Suicide


In a blog post on Monday, OpenAI highlighted the progress its flagship model, GPT-5, has made in recognizing and responding to signs of distress in users' messages, including suicidal ideation. New safeguards and input from mental health professionals during GPT-5's training have improved the AI's responses to mental health concerns. Still, the post disclosed some statistics that may come as a surprise.

In detailing GPT-5's ability to identify serious mental health concerns such as psychosis and mania, the post noted that such interactions with the chatbot are "unusual."

“While, as mentioned earlier, these discussions are challenging to pinpoint and quantify due to their rarity, our preliminary analysis suggests that about 0.07% of users active in a given week and 0.01% of messages show potential signs of mental health crises related to psychosis or mania.”

That share may sound small, but ChatGPT has roughly 800 million weekly users, a figure OpenAI CEO Sam Altman shared earlier this month at the company's DevDay.

If Altman's figure holds, that works out to roughly 560,000 ChatGPT users showing signs of psychosis or mania in a given week, and about 80,000 messages signaling such a mental health crisis, by Mashable's estimates.

OpenAI is also working to make its models better at detecting indicators of self-harm and at directing those users to support, such as suicide hotlines or their friends and family. The blog post says that conversations about self-harm on ChatGPT are uncommon, but it estimates that "0.15% of users active in a given week engage in conversations that feature explicit signs of possible suicidal planning or intent, and 0.05% of messages contain clear or implicit signs of suicidal thoughts or intent."

Against 800 million weekly users, that comes to roughly 1.2 million ChatGPT users talking to the AI about suicide in a given week, and about 400,000 messages showing direct or indirect signs of suicidal intent.
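For readers who want to check the math, here is a rough back-of-the-envelope sketch of how those headline numbers fall out of the percentages. It assumes Altman's 800 million weekly-user figure and, as the estimates above do, applies the message-level percentages to that same base.

```python
# Back-of-the-envelope reproduction of the article's estimates.
# Assumption: 800 million weekly users (Altman's DevDay figure); like the
# article, the message-share percentages are applied to that same base.

WEEKLY_USERS = 800_000_000

estimates = {
    "users showing signs of psychosis or mania":       0.0007,  # 0.07%
    "messages signaling a psychosis/mania crisis":     0.0001,  # 0.01%
    "users with signs of suicidal planning or intent": 0.0015,  # 0.15%
    "messages with signs of suicidal thoughts/intent": 0.0005,  # 0.05%
}

for label, share in estimates.items():
    print(f"{label}: ~{round(WEEKLY_USERS * share):,}")

# Prints roughly: 560,000 / 80,000 / 1,200,000 / 400,000
```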

"Even a very small percentage of our vast user base corresponds to a significant number of individuals, and that's why we take this issue so earnestly," an OpenAI spokesperson told Mashable, adding that the company sees ChatGPT's growing user base as a reflection of society, where mental health challenges and emotional struggles are "widely prevalent."

The spokesperson also stressed that the company's figures are rough estimates and that "the numbers we provided may vary notably as we gain more insights."

OpenAI is currently being sued by the parents of Adam Raine, a 16-year-old who died by suicide earlier this year after extensive use of ChatGPT. In a recently amended complaint, the Raines allege that OpenAI twice weakened its suicide prevention safeguards in the months before their son's death in order to boost user engagement.

If you are in crisis or having thoughts of self-harm, please reach out to someone. You can contact the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. The Trans Lifeline is available at 877-565-8860 or you can reach the Trevor Project at 866-488-7386. Text “START” to the Crisis Text Line at 741-741. You can also contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you prefer not to use the phone, consider the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.