"And no one had to think about that even a year ago.”
OpenAI is rolling out new safety updates to ChatGPT as it prepares to launch its GPT-5 AI model this week.
The company says the changes will improve the chatbot’s ability to detect mental or emotional distress and provide “evidence-based resources when needed”.
OpenAI says it has worked with experts and advisory groups to enhance its approach.
In recent months, reports have emerged of people experiencing mental health crises linked to AI chatbot use.
Some families claim that interactions with chatbots appeared to amplify delusions in vulnerable individuals.
OpenAI rolled back an update in April 2025 that made ChatGPT too agreeable, even in risky situations.
At the time, the company said the chatbot’s “sycophantic interactions can be uncomfortable, unsettling, and cause distress”.
The company admits its GPT-4o model “fell short in recognising signs of delusion or emotional dependency” in some cases.
OpenAI said: “We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.”
OpenAI CEO Sam Altman previously expressed concern over people using ChatGPT as a therapist or life coach.
He said the legal confidentiality protections that exist between doctors and their patients, or between lawyers and their clients, do not apply in the same way to chatbots.
Altman said: “So if you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that. And I think that’s very screwed up.
“I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever.
“And no one had to think about that even a year ago.”
As part of promoting “healthy use”, ChatGPT, which now has nearly 700 million weekly users, will start showing reminders to take breaks during long sessions.
A notification will appear saying: “You’ve been chatting a while – is this a good time for a break?”
Users can then choose to “keep chatting” or end the conversation.
OpenAI says it will continue refining “when and how” these reminders appear.
Similar wellness prompts are already used by platforms such as YouTube, Instagram, TikTok, and Xbox.
Character.AI, which has licensed its technology to Google, has also introduced parental alerts after lawsuits alleged its chatbots promoted self-harm.
Another feature, said to be launching “soon”, will make ChatGPT less decisive in high-stakes situations.
OpenAI says that instead of giving a direct answer to a question such as “Should I break up with my boyfriend?”, the chatbot will guide users through their options.
OpenAI’s changes come amid rising scrutiny of how AI tools interact with vulnerable users.
The updates reflect a broader industry shift toward embedding mental health safeguards into consumer AI platforms.