AI slop is now reshaping social media at a pace few could have predicted.
Billions of users scroll through feeds that mix the real, the creative, and the absurd, often without being able to tell the difference.
Platforms such as Meta and YouTube are leaning into this new era, offering tools that make it easier than ever to generate and remix content.
Yet while some see it as a creative boon, others are frustrated by gory, misleading, or just plain pointless clips.
We look at how AI slop has transformed social media, the backlash it is creating, and what it means for the way we consume information.
Social Media’s Third Phase

Mark Zuckerberg has called it the “third phase” of social media.
The first phase centred on posts from friends and family; the second brought in creator content. Now, AI is set to produce yet another enormous stream of content.
In October 2025, he told Meta shareholders:
“Now, as AI makes it easier to create and remix content, we’re going to add yet another huge corpus of content.”
Meta has not only allowed AI-generated posts but has also launched tools to produce them, from image and video generators to advanced filters.
YouTube is doing the same.
Neal Mohan, the platform’s CEO, revealed that over one million channels used YouTube’s AI tools in December alone.
He compared AI to past technological revolutions like CGI and Photoshop, calling it a boon for creators while acknowledging concerns about “low-quality content, aka AI slop”.
Research from AI company Kapwing shows just how pervasive the problem has become.
Around 20% of the content recommended to a new YouTube account is now low-quality AI material, with short-form video particularly saturated.
One channel, India’s Bandar Apna Dost, has racked up 2.07 billion views and an estimated £2.9 million in annual earnings, highlighting how lucrative AI content can be.
Backlash

However, users are hitting back.
French student Théodore reported numerous AI-generated cartoons, aimed at children, that he found disturbing.
Clips included gory and surreal content, such as “a woman in a nightdress who eats a parasite and then turns into a giant angry monster that is eventually healed by Jesus”.
YouTube removed the channels for violating community guidelines, saying it was “focused on connecting our users with high-quality content, regardless of how it was made”.
Even lifestyle platforms like Pinterest have been affected.
Frustrated users prompted the company to introduce an opt-out system for AI-generated content, though it depends on users declaring their content as AI-made.
Across social media, backlash is now commonplace.
On TikTok, Threads, Instagram and X, comments condemning AI slop often receive more engagement than the original posts.
One video showing a snowboarder rescuing a wolf from a bear had 932 likes, while a commenter exclaiming “Raise your hand if you’re tired of this AI s**t” received 2,400.
Engagement, of course, still benefits platforms, feeding the very algorithms users dislike.
What is the Cognitive Cost?

Experts warn that AI slop is more than just annoying; it may be altering attention spans and critical thinking.
Emily Thorson, associate professor at Syracuse University, notes that its impact depends on why people are on the platform.
She said: “If a person is on a short-video platform solely for entertainment, then their standard for whether something is worthwhile is simply ‘is it entertaining?’.
“But if someone is on the platform to learn about a topic or to connect with community members, then they might perceive AI-generated content as more problematic.”
Alessandro Galeazzi, a social media researcher at the University of Padova, explains that verifying AI content requires mental effort.
Over time, he fears users may stop checking, letting misinformation slip through.
Even comical AI videos, such as gorillas lifting weights or fish wearing shoes, can contribute to what he calls “brain rot” by encouraging rapid consumption of content that is “not only unlikely to be real, but probably not meaningful or interesting”.
Real-World Risks

AI slop is not just frivolous. Some content can be manipulative or harmful.
Elon Musk’s AI firm xAI and his platform X faced criticism after the chatbot Grok was misused to digitally undress women and children.
Political events are also at risk: after the US attack on Venezuela, fake videos circulated showing people supposedly celebrating in the streets, potentially shaping public perception.
Dr Manny Ahmed, CEO of OpenOrigins, emphasised the need for verification tools:
“We need a new way for real content posters to be able to prove their clips and pictures are genuine.”
Platforms are experimenting with moderation and detection systems, but the scale of AI slop is overwhelming.
Billions of users consume content faster than it can be verified, raising urgent questions about truth, trust, and engagement in the digital age.
Social media is in uncharted territory, blending creativity, manipulation, and algorithm-driven consumption.
While platforms profit from engagement, the rise of low-quality, misleading, or disturbing content is prompting user backlash and raising concerns among experts.
From cognitive strain to misinformation, the implications are significant.
Social media now faces a pressing challenge: how to embrace AI innovation without eroding trust, authenticity, or meaningful engagement.
In an era where the line between reality and AI is increasingly blurred, finding that balance may determine the future of online culture.