An AI-generated video of UK Prime Minister Keir Starmer standing at a Downing Street lectern in Islamic dress began circulating online shortly after the outbreak of the US-Iran war.
Within days, it had amassed around 400,000 views across Facebook and Instagram.
In the clip, the avatar delivers a fabricated and inflammatory speech, including Islamophobic language and a racial slur targeting Pakistani people.
At first glance, it appears to be yet another example of AI-driven political misinformation spreading at speed across social media platforms.
But the origin of this content complicates that assumption in an unexpected way.
According to The Bureau Investigates, the creator is a Pakistan-based Muslim man who also runs religious accounts.
His case exposes a growing and uncomfortable reality: AI tools are not only democratising content creation, they are also collapsing the boundary between ideology, intent and profit.
What emerges is an online ecosystem where outrage is manufactured at scale, often detached from belief, understanding, or even language proficiency.
The Creator behind Britain Today
The man behind the content operated a network of social media pages targeting UK audiences.
One of the most prominent was a Facebook page called Britain Today, which amassed 192,000 followers, alongside an Instagram account with 44,000 followers and a TikTok presence with 11,000 followers.
Together, they generated millions of views through a steady stream of AI-generated memes, videos and repurposed clips.
The Bureau Investigates reported that the creator maintained that his primary motivation was financial survival, saying he earned around $1,500 a month from a single page.
He told the outlet: “You are aware of the conditions in Pakistan, how petrol and circumstances are. Whoever is doing this work is doing it to make an earning.
“We have no interest in news. I haven’t even looked at what is being said in the videos, what has been written and what hasn’t been written.”
The content itself ranged from AI-generated depictions of British politicians to memes promoting conspiracy theories such as the “great replacement” narrative, which falsely claims non-white immigrants are deliberately replacing white populations.
Other posts described Muslims praying in public as a “dominance strategy” and an “invasion of the West”.
Meta later removed the accounts after they were flagged, but not before their content had circulated widely across platforms.
A spokesperson said: “We have clear community standards that prohibit hate speech, harassment, harmful misinformation and inauthentic behaviour and we have removed these accounts for violating our policies.”
The Industrialisation of Outrage
What makes this case particularly significant is not only the content but the method of production.
The creator said: “All of the content, even the name of the page and everything, the content that is in the videos, that is AI.”
Tools such as Grok, Google’s image generator Whisk, Gemini, CapCut and ChatGPT were part of a workflow designed to automate content creation.
In practice, this meant searching for trending topics, copying summaries, pasting them into video tools and generating AI-driven visuals and captions in minutes.
He told The Bureau Investigates: “If there are protests and things in the UK, those videos I pick from Twitter or TikTok.
“All of these things are copy-paste … I lift these from there and put them on Facebook.”
He also claimed to have learned monetisation techniques through YouTube tutorials and paid guidance from other creators. Meta’s own monetisation systems, which reward high-engagement content through ad placements and bonuses, formed part of his income strategy.
This model reflects a broader shift in digital content ecosystems, where engagement rather than intent or accuracy determines visibility and reward.
Sam Stockwell, senior researcher at the Centre for Emerging Technology and Security, described this as a “shadow influencer” economy.
“This model prioritises passive income over political ideology, turning divisive content into a profitable commodity.
“Exploiting the way social media algorithms prioritise high-engagement metrics, these creators are realising that xenophobic or anti-establishment narratives are the most efficient pathways to virality and therefore money.”
The result is a system where misinformation does not require ideological commitment. It only requires attention.
Political Impact
The content published through Britain Today directly intersected with real-world political discourse in the UK, particularly around immigration, Islamophobia and public figures such as London Mayor Sadiq Khan.
Some posts used genuine footage of Khan speaking about prejudice faced by British Muslims, but recontextualised it with captions alleging government support for Muslim organisations “while [Muslims] rape women and children”.
Other posts described public religious gatherings, such as an iftar event in Trafalgar Square, as a “colonisation event”.
A spokesperson for Sadiq Khan responded to the findings by calling the case further evidence of an “outrage economy” online.
They said: “This appalling example is yet more evidence of an ‘outrage economy’ where people are profiting from the poisonous narratives they push online, including those targeting Muslim Londoners.
“Social media firms must do far more to stop the spread of lies and hatred on their platforms, and prevent those creating and disseminating them from being financially rewarded.”
The political concern extends beyond individual cases.
Emily Darlington MP, who sits on the Science, Innovation and Technology select committee, warned of systemic vulnerability, saying there is “clearly a market for hate content in the UK”.
The creator behind the AI-generated Islamophobic content claimed he did not fully understand the content he was distributing:
“I don’t speak proper English and then I don’t understand what they have and haven’t written.”
“But now what’s done is done. It’s a good thing you’ve told me. I am thankful to you.”
He told The Bureau Investigates that he would delete the posts and avoid similar content in future. However, many of the offensive videos remained online until Meta intervened.
What this case ultimately reveals is not just a failure of moderation, but a structural shift in how online influence now operates.
AI tools have lowered the barrier to entry so significantly that content can be produced, translated and monetised without understanding, intent or even language fluency.
Combined with algorithm-driven monetisation systems, this has created a global pipeline for “outrage-for-profit” content.
In this new ecosystem, ideology is optional. Virality is not.