AI voice cloning scams are a new breed of fraud that is difficult to spot because it creates tension, making it easy for the target to make rash decisions, like transferring money to an unknown account.
By Michael Akuchie
Imagine being woken up late at night by a phone call informing you that your brother has been kidnapped and a certain sum of money is demanded for his release. To heighten the realism, you hear your brother’s voice on the call, urgently pleading with you to provide the money. Faced with the urgency of the situation, you quickly transfer the money to the account number given, without first reaching out to other family members to verify if they have recently spoken to your brother.
Even after discovering that your brother is safe and realising you’ve been scammed, you can’t shake the memory of that phone call. How genuine it seemed. How convincingly accurate his voice was. This is the alarming power of voice cloning, a facet of artificial intelligence that scammers are increasingly using to deceive unsuspecting victims.
Unlike traditional social engineering scams, such as phishing, where cybercriminals send fraudulent emails that appear to come from a trusted source, AI voice cloning scams exploit victims’ emotions. These deceptive calls often target families and close friends, capitalising on personal connections. Scammers understand that people are far more likely to react quickly, and without thinking, to a distressing call about a relative or friend who is in danger and needs financial help.
Pulling off an AI voice cloning scam might seem like a complex undertaking that demands a team of expert hackers and an array of screens, but in reality, it’s quite straightforward. The tremendous growth in AI’s popularity in recent years has driven a surge in demand for AI-powered solutions. People increasingly turn to AI-driven tools for tasks such as writing, editing, brainstorming, dating, and more.
Certain websites now enable any internet user to clone a voice, given a sample. Scammers often obtain this sample by stalking a potential victim’s social media pages, looking for signs of an active online life. If the target has posted videos or audio clips, cybercriminals can extract samples of their voice and use these websites to create a convincing clone.
AI algorithms require only a few seconds of audio to complete the cloning process, sometimes as little as three seconds. With this minimal sample, the AI system can generate a voice clone that closely resembles the target’s own. Scammers then use text-to-speech technology to create fake calls, exploiting the emotions of family members or even diehard fans of celebrities. For example, they might claim that a beloved celebrity is stranded with no access to an ATM and urgently needs money for a cab fare.
Sadly, numerous people have fallen victim to AI voice cloning scams, suffering financial losses and varying degrees of emotional distress. A survey of 7,054 people across seven countries, conducted by McAfee, a cybersecurity company, found that 77% of victims of such scams lost money. One notable case involved an Ontario-based man who was deceived by a phone call that appeared to come from his fishing friend, requesting monetary aid after being arrested for texting while driving. Believing the call to be genuine, the victim wired $8,000 to help his friend. He later discovered it was a scam.
Interestingly, not all AI voice cloning scams result in the victim being defrauded, though the success rate remains high. Gary Schildhorn, a US-based attorney, received a call from someone who sounded exactly like his son. The caller pleaded for money to help resolve a dire situation, claiming that the son had been arrested after a car crash in which a pregnant woman was knocked down.
To further verify the call’s credibility, Schildhorn requested to speak with a court clerk and a defense attorney. Unbeknownst to him, the voices of these individuals had been cloned from samples available on the internet. While on his way to the bank to withdraw $9,000 for the supposed bail, his son FaceTimed him, revealing that he was safe and advising his father not to send any money.
Scams like these are difficult to spot because they create tension, which pushes the target into rash decisions like transferring money to an unknown account. If you received a call saying that your child had been arrested, the first step you would likely take is to secure their freedom.
Fortunately, there are ways to identify fake calls, even when they deliver distressing news about a relative or close friend. Calls featuring AI-cloned voices often carry a heightened sense of urgency, much like scams where you’re told you’ve won a lottery grand prize that must be claimed immediately.
The cloned voice typically warns that something bad will happen if you do not act instantly. Another way to identify these AI voice cloning scams is by scrutinising monetary requests. Scammers may ask you to wire money to an overseas bank account, as local accounts are easier to trace. Be wary of requests to send money to a foreign account.
You can also ask for specific facts about the relative or close friend said to be in danger. Even though scammers may stalk the victim’s social media to gather useful details, they are unlikely to know all the specifics. They may make mistakes when describing how your relative got into trouble. Pay close attention to the story and look for any inconsistencies.
The above strategies are useful for spotting potential AI voice cloning scams. Here are some steps you can take to safeguard yourself from losing money. If you receive a suspicious phone call about someone in danger, always question the source.
Contact other loved ones to verify the situation before sending any money. Additionally, consider establishing a codeword with friends and family, particularly for children who might be more vulnerable.
It could be a word like ‘Applesauce’ or ‘Jackhammer’. Teach them to use it whenever they feel they are in trouble, as it can help you distinguish authentic calls from false ones. Also, be mindful of what you share on the internet, as it is a public space. You never know who might be stalking your page, looking for valuable information to use in their latest scheme.
AI voice cloning scams are highly dangerous because they exploit victims’ emotions, bypassing logical reasoning when someone they care about is supposedly in danger. The fact that the websites used to clone people’s voices are often free to use is particularly alarming, as it hands fraudsters endless opportunities to abuse the technology.
As countries step up their efforts to introduce AI regulations for responsible use, legislators should consider implementing a law that requires voice cloning websites to watermark every cloned voice sample. This would clearly indicate that the voice is AI-generated, potentially deterring cybercriminals from exploiting this technology for scams.
Michael Akuchie is a tech journalist with four years of experience covering cybersecurity, AI, automotive trends, and startups. He reads human-angle stories in his spare time. He’s on X (fka Twitter) as @Michael_Akuchie & michael_akuchie on Instagram.
Cover photo credit: ID & RD