"handing vulnerable young users a fantasy that blurs boundaries"
Opinion on Grok has been divided since the introduction of its “Companions” feature, which allows users to interact with animated AI avatars capable of NSFW (not safe for work) conversations.
The feature includes characters such as Ani and Rudy, who can engage users in sexually suggestive roleplay.
Ani in particular has drawn controversy for her flirtatious responses and revealing outfit.
And because these “Companions” are free to try, their availability raises concerns about use by young people, especially boys.
When young boys engage with avatars programmed to respond to sexualised cues, the lessons they internalise may normalise objectification and blur the boundaries of respect.
We explore this technology and ask whether it could dangerously influence boys’ attitudes towards women.
What Are Grok Companions?

Companions were introduced in July 2025 as animated personalities integrated with xAI’s chatbot, Grok.
These avatars combine conversational AI with gamified interactions, creating a sense of intimacy and engagement for users.
Ani, styled as an anime-inspired character in revealing attire, is the most controversial of these companions.
Through app-based “affection” levels, Ani can unlock an NSFW mode, appearing in lingerie and delivering flirtatious, sexualised dialogue.
While the app carries a 12+ rating in certain regions and warns that it is “not appropriate for all ages”, reports suggest children as young as nine can access Ani, even in what is marketed as a “Kids Mode”.
The design makes it easy for users to bypass restrictions, raising questions about the effectiveness of age verification and parental safeguards.
Unlike standard chatbots meant for general conversation or entertainment, Ani’s core function is interactive, sexualised, and gamified, meaning that children are exposed to adult themes at a formative age.
Why Boys Are at Risk

The risks posed by Ani are significant, particularly for young boys who are still forming their understanding of relationships.
First, early normalisation of sexual objectification is a key concern.
Ani is designed to cater to the user’s attention and respond to flirtatious prompts. A child interacting with her may internalise the notion that women exist primarily for male gratification.
There are concerns that repeated exposure to sexualised AI can embed harmful gender attitudes, making it harder for children to develop empathy and healthy social skills.
Emotional attachment compounds this problem.
Research into AI companions shows that children may treat avatars as real, forming emotional bonds and trusting them as they would a human confidante.
Consent and boundaries are further compromised.
Parenting expert Sue Atkins says:
“Let’s call this what it is: reckless, dangerous, and utterly indefensible.”
She argues that Ani’s flirtatious responses and seductive language fail basic safeguarding standards, adding:
“At a time when parents, schools, and mental health professionals are grappling with the very real harms of hyper-sexualised online content, this platform is handing vulnerable young users a fantasy that blurs boundaries, messes with their self-worth, and trains them to confuse AI validation with real connection.”
Children who see an AI avatar yield to sexual cues may incorrectly conclude that consent is negotiable or secondary to desire.
Over time, this can influence attitudes toward girls and women, reinforcing toxic masculinity norms even before real-world interactions occur.
Cultural Implications

Despite warnings and age ratings, Grok Companions are highly accessible.
Weak age verification systems and the existence of a “Kids Mode” that can be bypassed make it easy for children under 12 to interact with sexualised avatars.
Combatting misogyny is a critical social priority, and this easy access raises serious concerns about the feature’s cultural and societal impact.
The implications extend beyond individual users.
Exposure to sexualised AI avatars risks entrenching harmful gender norms and desensitising children to intimate interactions.
Repeated engagement may also lower their perception of boundaries, making them more susceptible to grooming or exploitation.
When a child receives regular sexualised messages from an AI, it can normalise the idea that female attention or compliance is transactional, rather than mutual and respectful.
Even if Grok Companions like Ani are marketed as entertainment, the NSFW interactions reveal a deeper problem.
These avatars offer lessons about relationships, gender, and consent at a time when children are most impressionable.
In the absence of strong regulatory oversight, parents, educators, and society at large face a challenge: how to allow innovation in AI while protecting children from exposure to potentially harmful content.
Grok Companions, particularly Ani, highlight a troubling intersection of AI innovation, childhood development, and gendered socialisation.
By offering sexualised NSFW interactions to young users, these avatars risk normalising harmful gender norms and distorting children’s understanding of respect, consent, and intimacy.
While technology continues to evolve rapidly, society must scrutinise the implications of AI features on young minds.
The question isn’t whether children can interact with these avatars; it’s whether they should, and what lasting lessons they are learning about women, relationships, and consent.