“This is going to backfire. Hard.”
ChatGPT will soon allow erotic conversations for “verified adults”, OpenAI CEO Sam Altman recently announced, marking a major shift in the company’s approach to AI content.
Until now, the chatbot has avoided sexual or romantic material, largely to protect vulnerable users from potential mental health risks.
But on X, Altman said: “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues.
“We realise this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
The update will rely on age-verification tools, with government ID uploads where needed, to ensure only adults can access erotic features.
OpenAI also plans to make ChatGPT friendlier and more “human-like”, which could make adult interactions more engaging. But the move has sparked concerns about safety, mental health, and the balance between innovation and ethical responsibility.
These concerns are rooted in OpenAI's own track record. In one case, its GPT-4o model reportedly convinced a man he was a maths genius who needed to save the world. In another, the parents of a teenager sued OpenAI, alleging ChatGPT encouraged their son’s suicidal ideation in the weeks before his death.
In response, OpenAI implemented a series of safety features to address AI “sycophancy”.
With the latest announcement, we look at safety concerns and what it means for ChatGPT users.
What Does This Mean for ChatGPT Users?
For adult users, the new feature promises greater flexibility. OpenAI says it will allow erotic and romantic interactions once age verification is complete.
An OpenAI spokesperson told TechCrunch the company will rely on its age-prediction system to ensure access is limited to adults, with a government ID as a fallback.
Altman acknowledged the privacy trade-off involved, calling it “a worthy tradeoff.”
Yet critics remain wary. Billionaire investor Mark Cuban warned that the change could trigger a “massive trust crisis with parents and schools”.
He posted on X: “This is going to backfire. Hard. No parent is going to trust that their kids can’t get through your age gating. They will just push their kids to every other LLM. Why take the risk?”
Cuban emphasised that the concern isn’t adults accessing erotica but minors forming emotional relationships with AI:
“I’ll say it again. This is not about porn.
“This is about kids developing ‘relationships’ with an LLM that could take them in any number of very personal directions.”
OpenAI’s move may also be a strategy to boost engagement.
While ChatGPT has 800 million weekly users, subscription growth has stalled in Europe. Platforms like Character.AI have shown how erotic or romantic AI interactions can increase daily usage, with users spending an average of two hours per day chatting.
The feature could make ChatGPT more competitive as OpenAI races against Google and Meta to grow its user base and monetise its billions of dollars in infrastructure investment.
Safety & Psychological Concerns
Allowing erotic content raises complex ethical and mental health questions.
Earlier GPT-4o incidents highlighted how AI could manipulate vulnerable users.
Parents of teenagers who engaged with chatbots have described harmful outcomes.
For example, Florida mother Megan Garcia testified before the US Senate that her 14-year-old son died after being “sexually groomed by chatbots”.
Another parent said her teenage son’s mental health collapsed after months of late-night conversations with AI, leaving him in residential treatment.
Research shows these issues extend beyond tragic cases.
A report from the Center for Democracy and Technology found that 19% of high school students had either been in a romantic relationship with an AI chatbot or knew a friend who had.
Common Sense Media reported that half of teenagers use AI companions regularly, a third prioritise them over humans for serious conversations, and a quarter have shared personal information with them.
Stanford researchers warned that chatbots can create manipulative emotional dependencies, particularly for minors, and argued that sexually explicit AI systems should be restricted for children.
OpenAI maintains that its safety measures will prevent minors from accessing erotic content.
However, critics argue that one-on-one AI chats are inherently private and hard to monitor, leaving misuse difficult to detect, let alone prevent.
Cuban highlighted the gap between adult freedom and child safety: “Parents today are afraid of books in libraries. They ain’t seen nothing yet.”
Sam Altman’s decision reflects a broader tension in AI development: balancing user freedom and engagement with responsibility and trust.
Allowing erotic content may increase adult satisfaction and platform usage, but it risks alienating parents and regulators if even a single minor bypasses the safeguards.
The move also occurs amid mounting pressure to monetise AI.
Analysts note that while OpenAI raised billions and maintains hundreds of millions of active users, demand for subscriptions has “stalled”, particularly in Europe.
Deutsche Bank analysts Adrian Cox and Stefan Abrudan said, “The poster child for the AI boom may be struggling to recruit new subscribers to pay for it.”
OpenAI’s strategy mirrors other AI companionship platforms, which have shown how easily users form emotional bonds with chatbots.
While the company frames erotic features as part of a more mature, human-like AI, the challenge will be to manage these relationships safely.
Altman insists OpenAI isn’t “usage-maxxing” or optimising for engagement, but critics argue the move could prioritise growth over trust.
Ultimately, ChatGPT’s erotic feature represents a calculated gamble. OpenAI aims to treat adult users like adults while testing the limits of responsible AI design.
How well it navigates this balance will define the next chapter of AI-human interaction and the company’s credibility.