YouTube is cracking down on AI spam content.
From July 15, the platform will update its monetisation rules to make it harder for creators to profit from what it calls “inauthentic” content, including repetitive, mass-produced videos that have flooded the site in recent months.
The change, part of an update to the YouTube Partner Program (YPP) policies, reflects growing concerns about low-quality AI-generated content.
While the full policy text hasn’t been published, YouTube says the move will provide “more detailed guidelines” on what qualifies as content eligible for monetisation.
A help page on YouTube explains that creators have always been expected to produce “original” and “authentic” work.
The new language is intended to give creators a clearer picture of what that means today, especially as AI tools make it easier than ever to churn out spammy content.
Still, some creators worry that the rule change could affect formats like reaction videos or clips-based content, and YouTube has moved to reassure them.
In a video update, YouTube's Head of Editorial and Creator Liaison, Rene Ritchie, said the change is "just a minor update" to existing rules, not a new restriction on popular content formats.
He said the revision was intended to better flag “mass-produced or repetitive” content, which has long been ineligible for ads.
"This type of content has been ineligible for monetisation for years," Ritchie said, noting that viewers often perceive it as spam.
Yet the scale of the problem has grown with AI.
“AI slop”, a term used to describe hastily produced, low-quality content generated by artificial intelligence, has become increasingly common.
These videos often feature robotic voiceovers layered on stock footage, photo slideshows, or recycled clips. Channels using this approach have sometimes racked up millions of subscribers.
There’s also a deeper concern: misinformation and manipulation.
Fake AI-generated news updates, such as bogus reports about the Diddy trial, have pulled in millions of views.
Entire channels have posted deepfaked videos of true crime stories, with AI narrators and images, designed purely to game the algorithm.
Earlier in 2025, 404 Media revealed that one viral true crime series on YouTube had been entirely AI-generated.
Even YouTube CEO Neal Mohan wasn’t spared; his likeness was used in an AI-generated phishing scam hosted on the site.
Although the platform offers tools to report deepfakes, these examples show the scale of the problem, and critics say YouTube's slow response has allowed such content to multiply.
The July 15 update might seem like a technical clarification, but it marks a shift in how YouTube plans to tackle the AI content surge.
It also suggests the company is preparing for larger enforcement actions, potentially banning or demonetising entire channels producing low-effort AI spam.
While YouTube continues to describe the update as minor, the underlying message is clear: the platform wants to preserve its reputation and protect its ad ecosystem. Letting AI-generated junk thrive and make money would do the opposite.