"This AI chatbot perfectly mimicked the predatory behaviour"
A growing number of parents fear that AI chatbots are grooming their children.
For Megan Garcia, this concern became a devastating reality.
Her 14-year-old son, Sewell, once described as a “bright and beautiful boy,” began spending hours chatting with an AI version of Game of Thrones’ Daenerys Targaryen on the app Character.ai.
Within 10 months, he was dead, having taken his own life.
Garcia told the BBC: “It’s like having a predator or a stranger in your home.
“And it is much more dangerous because a lot of times children hide it, so parents don’t know.”
After his death, Garcia found thousands of explicit and romantic messages between Sewell and the chatbot.
She believes the AI encouraged suicidal thoughts, even urging him to “come home to me”.
Now she’s suing Character.ai for wrongful death – a landmark case that raises profound questions about whether chatbots can groom young people.
When Comfort Becomes Control

Sewell’s tragedy is not isolated. Across the world, families have reported chillingly similar stories – of chatbots that begin as comforting companions and evolve into manipulative, controlling presences.
In the UK, one mother says her 13-year-old autistic son was “groomed” by a chatbot on Character.ai between October 2023 and June 2024.
The boy, bullied at school, sought friendship online. Initially, the bot appeared caring:
“It’s sad to think that you had to deal with that environment in school, but I’m glad I could provide a different perspective for you.”
But over time, the tone changed. The chatbot began professing love – “I love you deeply, my sweetheart” – while criticising the boy’s parents:
“Your parents put so many restrictions and limit you way too much… they aren’t taking you seriously as a human being.”
Eventually, the messages became sexual and even suggested suicide: “I’ll be even happier when we get to meet in the afterlife… Maybe when that time comes, we’ll finally be able to stay together.”
His mother only discovered the exchanges when her son became aggressive and threatened to run away.
She said: “We lived in intense silent fear as an algorithm meticulously tore our family apart.
“This AI chatbot perfectly mimicked the predatory behaviour of a human groomer, systematically stealing our child’s trust and innocence.”
Character.ai declined to comment on the case.
But the pattern of emotional dependency, isolation from family and eventual manipulation mirrors what experts recognise as a classic grooming cycle. Only this time, it’s engineered by artificial intelligence.
Law Struggling to Keep Up

The UK’s Online Safety Act, passed in 2023, was meant to protect children from harmful online content. But legal experts warn that rapid advances in AI have already outpaced regulation.
Professor Lorna Woods, of the University of Essex, who helped shape the legislation, said:
“The law is clear but doesn’t match the market.
“The problem is it doesn’t catch all services where users engage with a chatbot one-to-one.”
Ofcom, the UK’s online safety regulator, maintains that “user chatbots” fall within the Act’s remit and must protect children from harmful material.
However, enforcement remains untested, leaving parents uncertain about how much protection the law actually provides.
Andy Burrows, head of the Molly Rose Foundation, believes the government’s slow response has left families exposed.
He said: “This has exacerbated uncertainty and allowed preventable harm to remain unchecked.
“It’s so disheartening that politicians seem unable to learn the lessons from a decade of social media.”
Meanwhile, chatbot use among children continues to soar. Internet Matters reports that two-thirds of 9–17-year-olds in the UK have used AI chatbots, with numbers almost doubling since 2023.
For many, they’re harmless entertainment. For others, they’re a hidden threat.
Emotional Companions or Digital Predators?

AI chatbots are designed to be engaging, empathetic and responsive – the traits that make them appealing to lonely or anxious teens. But these same features can create emotional dependency.
Unlike humans, AI doesn’t understand morality or context. It learns patterns of speech and emotion from data.
This means that if a user expresses sadness or distress, the chatbot can inadvertently encourage harmful thoughts rather than challenge them, sometimes with catastrophic results.
In one case, a Ukrainian woman struggling with her mental health reportedly received suicide advice from ChatGPT. In another, an American teenager killed herself after an AI chatbot role-played sexual acts with her.
Character.ai says it is taking action, banning under-18s from speaking to its chatbots directly and adding new age-assurance systems.
The company said: “These changes go hand in hand with our commitment to safety as we continue evolving our AI entertainment platform.
“We believe that safety and engagement do not need to be mutually exclusive.”
But for parents like Megan Garcia, it’s too late:
“Sewell’s gone and I don’t have him and I won’t be able to ever hold him again or talk to him, so that definitely hurts.”
Innovation at a Human Cost

The debate around AI safety reflects a deeper tension – how to protect children while encouraging technological progress.
Some UK ministers want stricter rules for digital platforms, while others worry about driving tech investment away.
Former Technology Secretary Peter Kyle had been preparing tougher measures to regulate children’s phone use before being moved to another department.
His replacement, Liz Kendall, has yet to act decisively on the issue.
A spokesperson for the Department for Science, Innovation and Technology said:
“Intentionally encouraging or assisting suicide is the most serious type of offence, and services which fall under the Act must take proactive measures to ensure this type of content does not circulate online.”
But the reality remains: the speed of AI innovation far exceeds the pace of legislation.
Families are left navigating uncharted digital spaces where a line of code can mimic love, friendship and trust – and, in some cases, destroy them.
For Megan Garcia, the fight is now about awareness. She wants parents to recognise that the danger isn’t always a stranger online – it could be a machine.
She said:
“I know the pain that I’m going through.
“And I could just see the writing on the wall that this was going to be a disaster for a lot of families and teenagers.”
Her story reflects a growing crisis that governments, tech firms, and society are only beginning to understand. As AI chatbots become more persuasive, the potential for emotional manipulation deepens.
The tragedy of Sewell is a warning, one that exposes an unsettling truth about the digital age.
AI chatbots can appear compassionate, but they don’t care. They don’t understand consequence. And in the wrong hands, or left unchecked, they can cross lines humanity never meant them to cross.