“The images replicate the visual grammar of poverty”
AI-generated images depicting extreme poverty and survivors of sexual violence are increasingly appearing on stock photo sites and being used by major health NGOs, prompting experts to warn of a modern form of “poverty porn.”
Noah Arnold, of Fairpicture, told The Guardian:
“All over the place, people are using it. Some are actively using AI imagery, and others, we know that they’re experimenting at least.”
Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp who studies global health imagery, said the visuals mimic familiar poverty tropes.
He explained: “The images replicate the visual grammar of poverty – children with empty plates, cracked earth, stereotypical visuals.”
Alenichev has collected over 100 AI-generated images used in social media campaigns against hunger and sexual violence.
These include highly exaggerated scenes, such as children huddled in muddy water or an African girl in a wedding dress with a tear streaking her face.
In a comment piece in The Lancet Global Health, Alenichev described the phenomenon as “poverty porn 2.0.”
Though the exact scale of use is hard to measure, experts say it is growing. The trend has been fuelled by budget constraints and concerns about obtaining consent for real photography.
Alenichev said: “It is quite clear that various organisations are starting to consider synthetic images instead of real photography, because it’s cheap and you don’t need to bother with consent and everything.”
Stock photo sites such as Adobe Stock and Freepik host numerous AI-generated images under search terms like “poverty”.
Many carry captions such as “Photorealistic kid in refugee camp,” “Asian children swim in a river full of waste,” and “Caucasian white volunteer provides medical consultation to young black children in African village.”
Alenichev continued: “They are so racialised. They should never even let those be published because it’s like the worst stereotypes about Africa, or India, or you name it.”
According to Freepik CEO Joaquín Abela, the platforms themselves are not responsible for how images are used. The images are generated by the platform’s global community of contributors, who earn licensing fees when their images are purchased.
He added that Freepik has attempted to address bias in its library by promoting diversity in professional images, such as lawyers and CEOs, but admitted that customer demand largely drives what is created and sold.

AI-generated visuals have already been used by NGOs.
In 2023, Plan International’s Dutch branch released a campaign video against child marriage using AI-generated images of a girl with a black eye, an older man, and a pregnant teenager.
The UN also posted a video on YouTube featuring AI-generated “re-enactments” of sexual violence in conflict, including testimony from a Burundian woman describing her rape by three men in 1993. The video was later removed.
A UN Peacekeeping spokesperson said: “The video in question, which was produced over a year ago using a fast-evolving tool, has been taken down, as we believed it shows improper use of AI, and may pose risks regarding information integrity, blending real footage and near-real artificially generated content.
“The United Nations remains steadfast in its commitment to support victims of conflict-related sexual violence, including through innovation and creative advocacy.”
Arnold said the proliferation of AI imagery follows long-standing debates about ethical representation in global health:
“Supposedly, it’s easier to take ready-made AI visuals that come without consent, because it’s not real people.”
Kate Kardol, an NGO communications consultant, said the images were alarming:
“It saddens me that the fight for more ethical representation of people experiencing poverty now extends to the unreal.”
Experts warn that generative AI often reproduces existing social biases.
Alenichev noted that widespread use of such images in global health campaigns could exacerbate these problems, as they may be incorporated into future AI training datasets, amplifying prejudice.
A spokesperson for Plan International said the NGO has now “adopted guidance advising against using AI to depict individual children” and said the 2023 campaign employed AI to protect “the privacy and dignity of real girls.”