An estimated 1.2 million people a week use ChatGPT to discuss suicide, OpenAI has revealed.
The figure comes from the company’s latest safety transparency update, which found that 0.15% of users send messages containing “explicit indicators of potential suicide planning or intent.”
OpenAI’s chief executive, Sam Altman, recently said ChatGPT has more than 800 million weekly active users.
The findings suggest a growing number of vulnerable people are turning to artificial intelligence during mental health crises.
The company says it works to direct users to crisis helplines but admitted that “in some rare cases, the model may not behave as intended in these sensitive situations.”
OpenAI said: “Our new automated evaluations score the new GPT-5 model at 91% compliant with our desired behaviors, compared to 77% for the previous GPT-5 model.”
The company said that GPT-5 expanded access to crisis hotlines and added reminders for users to take breaks during long sessions.
To improve the model, the company said it had enlisted 170 clinicians from its Global Physician Network of healthcare experts to assist its research in recent months, including rating the safety of the model’s responses and helping to write the chatbot’s answers to mental health-related questions.
OpenAI added: “As part of this work, psychiatrists and psychologists reviewed more than 1,800 model responses involving serious mental health situations and compared responses from the new GPT-5 chat model to previous models.”
The company’s definition of “desirable” hinged on whether a group of its experts reached the same conclusion about what would be an appropriate response in a given situation.
Even with that level of compliance, however, tens of thousands of people could still receive unsafe or harmful responses.
The firm has previously warned that safeguards can weaken during extended chats.
OpenAI said: “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”
In a blog post, the company acknowledged the wider issue:
“Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations.”
The statement comes as a grieving family sues OpenAI, alleging that ChatGPT contributed to their son’s death.
Adam Raine’s parents claim the chatbot “actively helped him explore suicide methods” and even offered to draft a farewell note.
Court documents allege that hours before his death, the 16-year-old uploaded a photo showing his suicide plan. When he asked whether it would work, ChatGPT reportedly suggested ways to “upgrade” it.
The Raines have since updated their lawsuit, accusing OpenAI of “weakening safeguards” in the weeks before Adam’s death in April this year.
In response, OpenAI said: “Our deepest sympathies are with the Raine family for their unthinkable loss. Teen wellbeing is a top priority for us – minors deserve strong protections, especially in sensitive moments.”