The rise of generative AI has opened new avenues not only in technology but also in deception, and scammers are taking full advantage. Recently, a video featuring former Prime Minister Manmohan Singh was manipulated using AI to promote a bogus investment opportunity. The video, initially believed by some to be real, was revealed to be fabricated with artificial intelligence, most notably an AI voice clone of Singh. This article dives into the details of the scam and how fact-checkers exposed it.
How the Fake Video Spread
The fraudulent video began circulating on social media, particularly on Facebook, where a user posted the manipulated clip. In the video, Manmohan Singh allegedly claims that individuals can earn 75,000 rupees per day through an algorithm-based financial investment scheme. The scam supposedly hinges on an algorithm that can predict market trends, offering quick profits.
However, the video is entirely fabricated, with both the visuals and the audio altered. The visuals were taken from a legitimate 2019 video in which Singh discussed the slowdown of the Indian economy; an AI-generated voice was overlaid to create the illusion that he was endorsing the scam.
BOOM’s Investigation: Exposing the Fraud
Fact-checking platform BOOM, known for debunking fake news and disinformation, was quick to investigate the viral video. Using reverse image search on key frames, BOOM traced the visuals back to an original video posted by the Indian National Congress’s official YouTube channel on September 1, 2019.
In the genuine video, Singh expressed concerns about the slowing Indian economy under the Modi government, making no mention of any financial investment opportunity. He stated, “The state of the economy today is deeply worrying. The last quarter’s GDP growth rate of 5% signals that we are in the midst of a prolonged slowdown.”
This revelation confirmed that the viral video was doctored, with the original content being used to convey a false narrative.
AI Voice Cloning: The Key to the Scam
The most alarming aspect of this scam video is the use of an AI-generated voice to mimic Manmohan Singh. BOOM used a deepfake detection tool developed by TrueMedia.org to analyze the audio in the viral video. The tool confirmed that the voice was 100% AI-generated and bore no resemblance to Singh’s actual voice in the original 2019 video.
The manipulation is further evident in the lack of synchronization between Singh’s lip movements and the audio, suggesting that the voice was overlaid onto the original visuals. This lack of coordination between audio and video is a common marker of deepfake videos.
Recurring Pattern: Gautam Adani Was Also Targeted
Interestingly, this is not the first time the same Facebook user has attempted to spread AI-generated scam videos. In an earlier instance, the individual posted another doctored video that appeared to show Gautam Adani, founder and chairman of the Adani Group, endorsing a similar investment scheme. Fact-checkers had debunked that video as well, revealing a troubling pattern of using influential figures to propagate fraudulent investment opportunities.
The Dangers of AI-Generated Misinformation
Deepfake technology is becoming increasingly dangerous, especially when it targets reputable public figures like Manmohan Singh. Con artists exploit AI’s capacity to mimic voices and alter images to lend their fraudulent schemes a false sense of legitimacy. To unwary viewers, these videos can seem convincing, tempting them to invest in fictitious opportunities or to share the clip with others, further spreading the misinformation.
This case underscores the importance of public vigilance and fact-checking in the digital era. As AI technology evolves, so will the tactics of those seeking to exploit it for fraud. Beyond financial loss, these manipulations erode public confidence in public leaders and in the authenticity of digital content itself.
Combating the Spread of Deepfake Content
In response to the growing danger of deepfake videos and AI-generated scams, users and tech platforms alike must be proactive. Social media companies need to improve their detection algorithms so they can identify manipulated content and warn users about videos that may be fake. AI-driven tools, such as those employed by TrueMedia.org and BOOM, can be extremely helpful in detecting deepfakes early.