As artificial intelligence becomes an ever-greater presence in our lives, it has made its way into the sporting world, with footballers swept up in a wave of AI slop.
On TikTok, you might see Lionel Messi and Cristiano Ronaldo cutting each other’s hair or boarding the Titanic in Edwardian dress. There are clips of Kylian Mbappé on a ski-lift with a turtle.
None of them are real.
Many are almost indistinguishable from authentic footage, and as deepfakes become harder to detect, the line between parody and deception is narrowing.
It may seem like harmless fun, but is there a point at which players and clubs will try to draw the line?
Image Rights and Legal Grey Areas

Modern football is a commercial juggernaut. Players and clubs invest heavily in brand protection. That often means trademarking names, slogans and celebrations.
Take Chelsea’s Cole Palmer, who has trademarked the term ‘Cold Palmer’ with the UK Intellectual Property Office. The 23-year-old has also protected his name, autograph and ‘shivering’ celebration.
Those measures guard against unauthorised merchandising and endorsements.
AI presents a different scale of challenge.
In the UK, there is limited standalone legislation covering a person’s likeness, commonly referred to in football as image rights. Enforcement typically relies on intellectual property law, passing off, or data protection principles.
Jonty Cowan, legal director at Wiggin LLP, said AI was presenting “lots of novel challenges”.
He said:
“Various governments around the world are trying to figure out… how do we react to AI?”
Unless a player can show commercial or reputational damage, legal options are narrow.
Cowan added: “If it is a deepfake that is showing them in a compromising position, let’s say, that’s different.”
The Data (Use and Access) Act came into force in February 2026. It makes it a criminal offence to create, share or request a sexually explicit deepfake.
That offers stronger protection in extreme cases but it does not address the flood of realistic but non-explicit AI content.
A related concern is passing off, which occurs when a third party unfairly associates their product with the goodwill of an established brand or player. Consumers may be misled into believing there is an endorsement.
That can damage trust and revenue.
In December 2024, as part of an AI-related consultation, the UK government said it was considering “introducing some kind of personality right”.
Such a right could give players clearer grounds to challenge the misuse of their likeness.
For now, enforcement remains complex and reactive.
Blurred Realities

AI is no longer confined to obvious parody. It is being used to insert players into plausible real-world scenarios.
When Antoine Semenyo and Marc Guéhi were unveiled by Manchester City in January, official photographs showed each player with director of football Hugo Viana.
Yet before those images were even taken, AI-generated pictures circulated online, depicting the two players signing contracts alongside manager Pep Guardiola.
Another showed Semenyo being greeted at the training centre by former midfielder Yaya Touré.
None of those moments occurred but the images looked genuine.
In February, an image appeared of Manchester United interim head coach Michael Carrick with supporter Frank Ilett, who has pledged not to cut his hair until the club wins five consecutive games. Again, it was fabricated.
Cowan said it was difficult to pursue action when content is presented “in a non-contentious manner”.
From a reputational standpoint, the question becomes whether audiences are genuinely misled. Some content may be dismissed as implausible. Other examples are more damaging.
An AI-generated video showed Celtic’s Luke McCowan punching an assistant referee. Even if unlikely, such imagery can circulate rapidly before verification.
Clubs may rely on trademark and design rights where their kits and crests are replicated. Cowan said:
“Where you’ve got, for example, the Man City kit they could look at other IP rights.
“Have they infringed the trademark in their crest? Or design rights in their shirt? For that kind of image, that’s what a club or an individual would likely be looking at.”
Platforms Under Pressure
Court action is costly and slow. Cowan believes challenging platforms directly is often more realistic.
He said: “The Online Safety Act has been introduced in the UK recently, and that is putting an obligation on platforms to tackle illegal content.
“It may well be that we will see more mechanisms that platforms will introduce to have that content taken down. Often, that is the easiest and quickest way to tackle these images.”
The Act increases duties on companies hosting user-generated content. It may encourage faster takedown processes and improved detection systems.
In 2025, the Oversight Board, which hears appeals over Meta’s content decisions, ruled that a gambling-app advert on Facebook should be removed.
The advert used AI to manipulate a video of Brazilian icon Ronaldo and imitate his voice. It was not initially detected by automated tools.
Meta was instructed to create “easily identifiable indicators that distinguish AI content” to prevent “significant amounts of scam content”.
Governing bodies have faced similar issues.
During Euro 2024, The Football Association dealt with fake AI interviews targeting then-England head coach Gareth Southgate. The fabricated clips showed him making derogatory remarks about his players.
The videos breached TikTok’s AI policy, which forbids content that “falsely shows public figures in certain contexts”. They were removed after being viewed and shared by millions.
Cowan pointed to emerging transparency requirements under the EU AI Act, though it does not apply in the UK:
“Under advertising regulations, influencers have to disclose where a video they produce has been sponsored.
“I suspect we may end up with similar transparency requirements. A little ‘#AI generated’ or similar label in the corner.”
“If you’ve got those egregious videos, where someone’s putting out a hideous deepfake, they’re not going to worry about adding that label.”
Enforcement will hinge on platform design, moderation capacity and regulatory will.
Opportunity and Exploitation

AI is not solely a threat.
Clubs and players are already exploring synthetic media for marketing. Promotional material can be generated without travel. Multilingual campaigns can be localised instantly. Archive footage can be reimagined for new audiences.
The commercial upside is clear. So are the risks.
Unauthorised parties can take a player’s likeness and attach it to products, including gambling apps and financial schemes.
Audiences may struggle to distinguish official endorsements from manipulated content.
As AI tools improve, bad actors will find it easier to scale deception.
Automated voice cloning, photorealistic rendering and generative video lower technical barriers.
Detection tools are improving, but they are locked in an arms race.
For players, the stakes are personal and financial. For clubs, they are structural.
Trust underpins sponsorships, broadcasting deals and fan engagement. If audiences cannot easily verify authenticity, that trust erodes.
Digital rights management firms are expanding their services.
Some use AI to scrape websites and apps, identifying unauthorised uses of intellectual property. They can issue takedown requests on behalf of clients.
This indirect approach often proves faster than litigation.
The broader regulatory landscape remains unsettled.
The UK is weighing further reforms. The EU has introduced more explicit transparency obligations. Global platforms operate across both regimes, adding complexity.
AI-generated content featuring footballers is accelerating in volume and realism.
What began as playful experimentation now raises questions about where the line should be drawn.
UK law offers partial remedies, particularly where reputational or financial harm is clear, but it does not yet provide comprehensive protection against everyday ‘AI slop’.
Platforms are under growing pressure to label, detect and remove misleading content.
Governing bodies and clubs are adapting, though many still rely on fans to distinguish official from fabricated material.
As synthetic media becomes harder to spot, that assumption may weaken.
Football has always adapted to technological change, from broadcast rights to social media, but the AI era presents a more complex test.