In the age of rapidly advancing technology, deepfakes are becoming increasingly difficult to detect.
As realistic yet fabricated images, videos and audio, deepfakes pose major risks to privacy, security, and trust in digital content.
They also pose a risk to people’s reputations. Deepfakes have fuelled a dark corner of the internet, where thousands of people gather to share fabricated sexual videos of celebrity women.
In November 2023, India saw a spate of deepfakes in which Bollywood actresses were depicted in risqué scenarios.
The most notable was a video of Rashmika Mandanna wearing a low-cut unitard.
As artificial intelligence continues to evolve, so do the techniques used to create convincing deepfakes.
However, while these digital forgeries can be highly deceptive, they often contain telltale signs that, with the right knowledge, can be identified.
An AI expert shares the top tips and strategies for spotting deepfakes, highlighting the subtle yet critical flaws that can reveal manipulated media.
Verify the Source
On X, fake news spreads more rapidly than legitimate news.
This is concerning given that almost two-thirds of young adults in the UK use social media as a regular news source.
Always look at the credibility of the sources behind the content you consume.
Does the information come from a reputable news outlet or a verified official account?
If the source is unfamiliar or appears suspicious, cross-check the content’s authenticity using reliable news organisations or fact-checking platforms like Google Fact Check Tools.
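For those comfortable with a little code, Google also exposes the same fact-check database through its Fact Check Tools claim-search API. The short Python sketch below shows one way to query it; it assumes you have a (placeholder) API key from Google Cloud, and the response fields follow the claims:search documentation as best understood here, so treat it as a starting point rather than a finished tool.

```python
import requests  # pip install requests

# Claim-search endpoint of the Google Fact Check Tools API (v1alpha1).
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"
API_KEY = "YOUR_API_KEY"  # placeholder: obtain a key from Google Cloud

def search_fact_checks(claim_text, language="en"):
    """Return any published fact-checks that review the given claim."""
    params = {"query": claim_text, "languageCode": language, "key": API_KEY}
    response = requests.get(ENDPOINT, params=params, timeout=10)
    response.raise_for_status()
    hits = []
    for claim in response.json().get("claims", []):
        for review in claim.get("claimReview", []):
            hits.append({
                "claim": claim.get("text"),
                "publisher": review.get("publisher", {}).get("name"),
                "rating": review.get("textualRating"),
                "url": review.get("url"),
            })
    return hits

if __name__ == "__main__":
    # The example query text is a placeholder; use the claim you want to check.
    for hit in search_fact_checks("viral video of a Bollywood actress"):
        print(f"{hit['rating']}: {hit['claim']} ({hit['publisher']}) {hit['url']}")
```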
Misinformation often plays on emotional triggers like fear, anger, or outrage to cloud judgement.
When encountering content that provokes strong emotions, it’s important to pause and reassess to ensure you are not being used as a pawn in someone else’s game.
Look at Facial Expressions
Deepfakes often struggle to replicate the subtle intricacies of facial expressions and natural movements.
Key areas to focus on include microexpressions around the eyes and mouth.
Watch for unnatural blinking patterns, irregular eye movements, or abrupt head motions, and assess whether facial expressions match the emotions being conveyed.
Additionally, details such as the uniformity of teeth, hair texture, and overall facial structure can help identify deepfakes.
For example, a deepfake video of Ranveer Singh criticising Prime Minister Narendra Modi went viral.
Although convincing, the video had a big giveaway: the shape of the face and ears. Deepfakes often get the measurements of these features slightly wrong, with ears being particularly difficult to replicate.
Pausing the video and examining the facial features can help you spot these irregularities.
Another thing to look at is body shape. Typically, the faces of female celebrities are superimposed onto other women’s bodies and shared online for lewd purposes.
One instance saw Katrina Kaif’s towel fight scene in Tiger 3 doctored. The deepfake replaced the towel with a revealing two-piece, and her body was also photoshopped to exaggerate her curves, with her hands repositioned for a more sensual pose.
Use Reverse Image Search
Take advantage of reverse image and video search tools to trace the origins of visual content and assess its authenticity.
For images, uploading the file to tools like Google Reverse Image Search can help identify whether the image has been AI-generated, digitally altered, or used in a misleading context.
By finding other instances of the image online, you can determine where it first appeared, and whether it has been manipulated or misused.
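Reverse image search itself happens in the browser, but a quick local check can complement it: a perceptual hash comparison tells you whether the copy you were sent differs visibly from an earlier version you traced online. The sketch below uses the third-party Pillow and ImageHash libraries; the file names and the distance threshold are illustrative assumptions, not fixed rules.

```python
from PIL import Image   # pip install pillow
import imagehash        # pip install ImageHash

def hash_distance(path_a, path_b):
    """Hamming distance between perceptual hashes of two images; small values
    mean visually near-identical, larger values suggest edits."""
    return imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))

if __name__ == "__main__":
    # Placeholder file names: the copy you received vs. the earliest
    # version you found through a reverse image search.
    distance = hash_distance("suspect.jpg", "earliest_found_online.jpg")
    if distance <= 5:  # illustrative threshold
        print(f"Distance {distance}: effectively the same image.")
    else:
        print(f"Distance {distance}: the copies differ - look for crops, "
              "composites or other edits.")
```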
For videos, more specialised tools like InVID or WeVerify are useful for analysing footage.
These tools allow you to break down videos into frames, conduct reverse image searches on individual frames, and examine metadata to detect edits, alterations, or whether the footage has appeared in a different context.
These techniques enable you to uncover discrepancies, verify authenticity, and spot any signs of tampering or reuse.
Together, these searches can help ensure that visual content is not being misrepresented or maliciously altered.
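If you want to reproduce the frame-by-frame step yourself, the sketch below uses the opencv-python library to save roughly one frame per second of a clip, each of which can then be put through a reverse image search by hand. The file name and sampling interval are placeholders.

```python
import os
import cv2  # pip install opencv-python

def extract_frames(video_path, out_dir="frames", every_n_seconds=1):
    """Save one frame every N seconds so each can be reverse-searched."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back if FPS metadata is missing
    step = max(1, int(fps * every_n_seconds))
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:04d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    count = extract_frames("suspect_clip.mp4")  # placeholder file name
    print(f"Saved {count} frames for manual reverse image searches.")
```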
Look for Inconsistencies
Deepfakes can often be identified by subtle digital flaws such as blurriness or unnatural pixelation, particularly around the edges of faces or objects.
It’s important to watch for inconsistencies in lighting, shadows, and reflections, as these can be signs of manipulation.
Even small details like an extra finger or misplaced features might indicate something is off.
Additionally, examining the background for distortions or irregularities can help reveal manipulation, as these elements can disrupt the overall coherence of the scene.
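One simple way to make this kind of local tampering visible is error level analysis (ELA): re-save a JPEG at a known quality and amplify its difference from the original, since edited regions often recompress differently and show up brighter. The Pillow-based sketch below is a rough illustration of that idea with placeholder file names; ELA produces hints worth inspecting, not proof of manipulation.

```python
from PIL import Image, ImageChops, ImageEnhance  # pip install pillow

def error_level_analysis(path, out_path="ela.png", quality=90, boost=15):
    """Write an ELA map: bright regions recompress differently and deserve
    a closer look (they are not automatically fake)."""
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    difference = ImageChops.difference(original, resaved)
    ImageEnhance.Brightness(difference).enhance(boost).save(out_path)
    return out_path

if __name__ == "__main__":
    print("ELA map written to", error_level_analysis("suspect.jpg"))
```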
A widely circulated deepfake of Alia Bhatt making suggestive expressions and gyrating showed anomalies such as distortion around her mouth and a mismatched skin tone.
A fun way to help you recognise AI-generated content is to play Which Face is Real, a game created by professors at the University of Washington.
Audio-Visual Synchronisation
Spotting a deepfake often hinges on closely observing the movements of the lips, as our mouths naturally form distinct shapes when pronouncing certain letters.
AI systems frequently struggle to replicate these subtle movements with precision, making this a key area to focus on.
In fact, nearly a third of deepfake videos fail to accurately sync lip movements with sounds, particularly for letters like “M”, “B” and “P”, which require specific lip formations.
When the speaker’s lips don’t match the sounds being produced, or if there’s a noticeable delay or mismatch in the audio-visual synchronisation, it should raise suspicion.
Additionally, pay attention to any irregularities in the speaker’s tone, pitch, or speech rhythm.
These can also be indicators of a manipulated video, as AI-generated audio often lacks the natural inflexions and cadence of human speech.
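Automated checks work in the same spirit: if mouth movement and speech loudness barely track each other, the clip deserves scrutiny. The sketch below is a rough heuristic along those lines, not the analysis cited above. It uses MediaPipe face landmarks for mouth opening and librosa for audio energy, and it assumes the audio track has already been extracted to a WAV file (for example with ffmpeg -i clip.mp4 clip.wav); the landmark indices and file names are assumptions for illustration.

```python
import cv2                # pip install opencv-python
import numpy as np        # pip install numpy
import librosa            # pip install librosa
import mediapipe as mp    # pip install mediapipe

def mouth_openings(video_path):
    """Per-frame gap between the inner upper and lower lip landmarks."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    openings = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            lm = result.multi_face_landmarks[0].landmark
            openings.append(abs(lm[13].y - lm[14].y))  # inner-lip landmarks
        else:
            openings.append(0.0)  # no face found in this frame
    cap.release()
    return np.array(openings), fps

def sync_score(video_path, audio_path):
    """Correlate mouth opening with audio loudness, one value per frame."""
    openings, fps = mouth_openings(video_path)
    audio, sr = librosa.load(audio_path, sr=None)
    hop = int(sr / fps)  # align one loudness value with each video frame
    loudness = librosa.feature.rms(y=audio, hop_length=hop)[0]
    n = min(len(openings), len(loudness))
    return float(np.corrcoef(openings[:n], loudness[:n])[0, 1])

if __name__ == "__main__":
    # Placeholder file names. A low correlation is a reason for suspicion,
    # not proof that the clip is a deepfake.
    print("lip/audio correlation:", sync_score("clip.mp4", "clip.wav"))
```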
Christoph C Cemper, an AI expert at AIPRM, said:
“The World Economic Forum has flagged disinformation as a top risk for 2024, with deepfakes emerging as one of the most alarming uses of AI.
“If you come across a potential deepfake, the best course of action is to refrain from sharing it.
“The power of a deepfake lies in its ability to spread, and its impact diminishes if it doesn’t disseminate widely.
“If you see someone else sharing it, take a moment to courteously inform them and point them to reliable fact-checking resources, especially if the fake has been debunked.
“Additionally, leverage reporting features on social media platforms to limit its reach.
“It’s vital that we all play a role in raising awareness – share your experience and insights on how you recognised deepfakes to help others counteract similar threats.
“By staying vigilant about the content we consume online, we can collectively fight misinformation and safeguard the integrity of our digital environment.”
As deepfakes become more sophisticated, the ability to identify them is crucial for maintaining trust in digital content.
While AI-generated forgeries may seem convincing, they often leave behind subtle clues.
By understanding these indicators and employing tools like reverse image searches and video analysis, we can better protect ourselves from misinformation and manipulation.