"we’re testing a new way of detecting celeb-bait scams."
Meta, the owner of Facebook and Instagram, is set to introduce facial recognition technology to try to tackle scammers who fraudulently use celebrities in adverts.
Elon Musk and finance expert Martin Lewis are among those to have fallen victim to such scams, which typically promote investment schemes and cryptocurrencies.
Mr Lewis previously said he receives “countless” reports of his name and face being used in such scams every day, and had been left feeling “sick” by them.
Meta already uses an ad review system which uses artificial intelligence (AI) to detect fake celebrity endorsements.
We look at what it is doing to tackle celebrity scam adverts and the issues Meta faces.
What Is Being Done?

Meta has said it is testing the service as part of a crackdown on ‘celeb-bait’ scams.
The company said: “We know security matters, and that includes being able to control your social media accounts and protect yourself from scams.
“That’s why we’re testing the use of facial recognition technology to help protect people from celeb-bait ads and enable faster account recovery.
“We hope that by sharing our approach, we can help inform our industry’s defences against online scammers.
“Scammers often try to use images of public figures, such as content creators or celebrities, to bait people into engaging with ads that lead to scam websites, where they are asked to share personal information or send money.
“This scheme, commonly called ‘celeb-bait,’ violates our policies and is bad for people that use our products.
“Of course, celebrities are featured in many legitimate ads. But because celeb-bait ads are designed to look real, they’re not always easy to detect.”
Detailing the new changes, Meta added:
“Our ad review system relies primarily on automated technology to review the millions of ads that are run across Meta platforms every day.
“We use machine learning classifiers to review every ad that runs on our platforms for violations of our ad policies, including scams.
“This automated process includes analysis of the different components of an ad, such as the text, image or video.
“Now, we’re testing a new way of detecting celeb-bait scams.
“If our systems suspect that an ad may be a scam that contains the image of a public figure at risk for celeb bait, we will try to use facial recognition technology to compare faces in the ad to the public figure’s Facebook and Instagram profile pictures.
“If we confirm a match and determine the ad is a scam, we’ll block it.”
“We immediately delete any facial data generated from ads for this one-time comparison, regardless of whether our system finds a match, and we don’t use it for any other purpose.”
David Agranovich, director of global threat disruption at Meta, said:
“This process is done in real-time and is faster and much more accurate than manual human reviews, so it allows us to apply our enforcement policies more quickly and to protect people on our apps from scams.”
Deepfakes

The problem of celebrity scams has been a long-running one for Meta.
It became so big in the 2010s that Mr Lewis took legal action against Facebook, although he dropped the case when the tech giant agreed to introduce a button allowing people to report scam ads.
In addition to introducing the button, Facebook also agreed to donate £3 million to Citizens Advice.
But these scams have become more complex and much more realistic due to so-called deepfake technology, where a realistic computer-generated likeness or video is used to make it seem like the celebrity is backing a product or service.
Meta has faced pressure to take action against the growing threat of these scam adverts.
Mr Lewis urged the government to give the UK regulator, Ofcom, more powers to tackle scam ads after a fake interview with Chancellor Rachel Reeves was used to trick people into giving away their bank details.
Meta acknowledged:
“Scammers are relentless and continuously evolve their tactics to try to evade detection.”
Facial Recognition Controversy

Although the new steps include facial recognition, the widespread use of it is controversial.
Facebook previously used it but ditched it in 2021 over privacy, accuracy and bias concerns.
Meta now says the video selfies uploaded for the account-recovery test will be encrypted and stored securely, and will not be shown publicly; facial data generated for the ad comparison is deleted after the check.
Meta says these measures are aimed at curbing the exploitation of public figures’ images in fraudulent adverts that mislead users. A wider rollout is planned for December 2024, although regulatory constraints mean certain regions, including the UK, the EU, South Korea, and the US states of Texas and Illinois, will not initially be covered. How the trials perform will shape whether the protections are extended across Meta’s platforms.