"The fact that they can be so easily misused"
AI deepfakes and ‘nudify’ apps are driving Indian women off the internet, as new research shows a sharp rise in digitally altered sexual images.
For Mumbai-based law graduate Gaatha Sarvaiya, the fear of becoming a target shapes her online presence.
She told The Guardian: “The thought immediately pops in that, ‘OK, maybe it’s not safe. Maybe people can take our pictures and just do stuff with them’.”
Many Indian women are trying to build careers that depend on being visible online. But as artificial intelligence becomes more advanced, that visibility feels increasingly dangerous.
A report released by Tattle found that AI tools are increasingly used to humiliate and control women.
The report, which draws on data from the Rati Foundation, warns that artificial intelligence has created “a powerful new way to harass women and gender minorities”.
Rohini Lakshané, a gender and digital rights researcher, said: “The chilling effect is true.
“The fact that they can be so easily misused makes me extra cautious.”
According to the report, roughly one in ten online harassment cases now involves AI-generated images.
These include manipulated nudes, as well as other altered visuals that, while not sexually explicit, can still carry stigma in conservative communities.
The report noted: “AI makes the creation of realistic-looking content much easier.”
India has become a testing ground for artificial intelligence. It is one of OpenAI’s largest markets, with rapid adoption across professional and creative sectors.
But that popularity has also exposed a darker trend: the normalisation of deepfake abuse.
Celebrities have been early victims. Asha Bhosle’s likeness and voice were cloned using AI and shared on YouTube.
Investigative journalist Rana Ayyub was targeted in a doxing campaign that led to deepfake sexual images being spread across social media.
While such cases fuel national debate, ordinary women often bear the damage most quietly.
Tarunima Prabhakar, co-founder of Tattle, said: “The consequence of facing online harassment is actually silencing yourself or becoming less active online.”
Prabhakar’s team spent two years holding focus groups to understand how digital abuse shapes behaviour.
She said: “The emotion that we have identified is fatigue. And the consequence of that fatigue is also that you just completely recede from these online spaces.”
Lakshané has also changed how she participates publicly. She uses an illustration instead of a photo for her profile picture and declines to be photographed at events.
She explained: “There is fear of misuse of images, especially for women who have a public presence, who have a voice online, who take political stands.”
The report also describes how AI-powered ‘nudify’ apps are fuelling an epidemic of extortion and digital blackmail.
In one case, a photograph a woman had submitted with a loan application was turned against her when she stopped making payments.
The report said: “When she refused to continue with the payments, her uploaded photograph was digitally altered using a ‘nudify’ app and placed on a pornographic image.”
That image, along with her phone number, circulated on WhatsApp. Strangers began calling her and sending explicit messages.
The woman told Rati’s helpline that she felt “shamed and socially marked, as though she had been ‘involved in something dirty’.”
Deepfakes currently fall into a legal grey zone in India. No law specifically bans them, although existing provisions for online harassment and intimidation may apply.
Yet victims say navigating the justice system is daunting.
Law enforcement agencies say getting social media platforms to act is also difficult. A separate report from Equality Now describes the process as “opaque, resource-intensive, inconsistent and often ineffective”.
Although Apple and Meta have taken limited steps to curb ‘nudify’ apps, the report says responses to victims remain slow and inadequate.
In one case, WhatsApp deleted abusive content only after it had already spread. In another, Instagram failed to act until the victim pursued the complaint repeatedly.
Rati’s researchers call this pattern “content recidivism” – the repeated resurfacing of abusive material even after removal.
The report continued: “One of the abiding characteristics of AI-generated abuse is its tendency to multiply.
“It is created easily, shared widely and tends to resurface repeatedly.”
The organisation is urging major tech companies to be more transparent and share data that could help track and remove such material. Without stronger intervention, experts warn, the problem will only deepen.
For many Indian women, the message is already clear: staying visible online carries growing risks.