The New York Times has dubbed deepfake Elon Musk the biggest scammer on the internet, thanks to the proliferation of AI-powered videos using his likeness. The same technology can be turned against corporate executives and leadership, says KnowBe4’s Perry Carpenter, who offers some common-sense advice for security teams.
AI-powered voice cloning isn’t just a nuisance for celebrities, politicians and other public figures. The technology has emerged as a significant threat in the cybersecurity landscape as a vector for social engineering attacks. While many are familiar with its use in entertainment and disinformation, the application of voice cloning in targeted cyberattacks, fraud and extortion schemes poses a growing concern for organizations worldwide.
This evolving threat, known as vishing (voice-based phishing), leverages advanced AI to clone voices of individuals, creating convincing impersonations that can fool even the most vigilant employees. As the technology becomes more sophisticated and accessible, businesses must adapt their security strategies to protect against these novel attack methods.
Corporate security teams now face the challenge of defending against attacks that exploit innate trust in familiar voices, potentially compromising sensitive information, financial assets and even personal safety.
A new generation of attacks
Traditional vishing attacks rely on robocalls and pre-recorded messages from callers posing as technical support, bank representatives, IRS officials or someone from a well-known or trusted company. Today’s vishing attacks are entirely different: threat actors and scammers are using voice cloning technology to run dangerous, highly targeted attacks against specific individuals, in some cases even threatening physical harm.
High-value CEO fraud
Scammers recently cloned the voice of Mark Read, the CEO of WPP, and tried to convince a senior executive to set up a new business venture with the aim of extracting money and personal details. In 2021, it was reported that scammers had cloned the voice of a company director and convinced bankers to authorize fund transfers to the tune of $35 million. And the New York Times has dubbed deepfake versions of Elon Musk the internet’s biggest scammer.
Virtual kidnapping
An Arizona woman was left shaken after she received a phone call from an unknown person threatening to harm her daughter if she did not hand over thousands of dollars. The scammer had cloned the daughter’s voice to make it sound as though she had been abducted and was crying for help.
Grandparent scams
The elderly are increasingly targeted by what is known as the grandparent scam, in which victims receive an unexpected call from someone who sounds like a grandchild claiming to be caught in a family emergency (an accident, an arrest, a lost wallet, trouble while traveling abroad) and begging to have money sent immediately.
How it works & why it’s such a risk
In a recent social engineering contest, the John Henry Competition, other researchers and I demonstrated that it is possible to prompt-engineer large language models and fuse them with audio generation tools to create AI-powered voice phishing bots that operate autonomously and even outperform experienced human social engineers. Judging by the pace of AI innovation, organized cyber gangs will soon not only clone people’s voices but also design and unleash AI-powered bots that conduct targeted, automated vishing attacks at massive scale.
The internet is already chock full of videos, images and audio recordings of millions — even billions — of people. A McAfee survey found that more than half of all adults share their voice data online (social media, voice notes, etc.) at least once a week. Business executives, too, are easy to find online, as they regularly appear in media interviews, podcasts, events and webinars.
Today’s technology is so powerful that threat actors can clone a person’s voice from just a few seconds of recorded audio.
Thwarting vishing attempts
Below are best practices and recommendations that can help organizations mitigate these threats:
- Keep staff members informed and aware: Regularly remind employees about the prevalence of vishing attacks. Ask them to limit the amount of personal information they share online, to resist urgency, pressure or emotional manipulation, to avoid responding to unexpected requests or unsolicited calls and to report threats or intimidation rather than give in to them.
- Update security measures: Revise security policies and processes to account for vishing attacks, and deploy tools that can help verify callers. Use phishing-resistant multi-factor authentication, zero-trust network access and other advanced security controls to limit what a successful social engineering attempt can reach.
- Vishing training and exercises: Run vishing simulation exercises and real-world training so employees practice staying calm, withholding sensitive information and reporting suspicious calls. Advise them to agree on a “secret code” with close colleagues and family members that only they would know, and to verify unexpected requests over a channel other than the incoming call (see the sketch after this list).
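To make the caller-verification and “secret code” advice concrete, here is a minimal, hypothetical sketch of an out-of-band verification step a finance or helpdesk team might adopt: hang up, call back on a number already on file and confirm a pre-agreed passphrase before acting on a sensitive request. The contact details, passphrase and function names below are illustrative assumptions, not any specific product’s workflow.

```python
# Hypothetical sketch of an out-of-band caller-verification step, assuming an
# internal directory of pre-registered callback numbers and shared passphrases.
# All names, numbers and helpers here are illustrative only.

import hmac
import hashlib
from dataclasses import dataclass


@dataclass
class Contact:
    name: str
    registered_callback: str  # number on file, NOT the inbound caller ID
    passphrase_hash: str      # salted hash of the pre-agreed "secret code"
    salt: str


def hash_passphrase(passphrase: str, salt: str) -> str:
    """Store only a salted hash so the shared code never sits in plain text."""
    return hashlib.sha256((salt + passphrase.strip().lower()).encode()).hexdigest()


def verify_request(contact: Contact, callback_number_used: str, spoken_passphrase: str) -> bool:
    """Approve a sensitive phone request only if BOTH checks pass:
    1. The verifier called back on the number already on file.
    2. The caller supplied the pre-agreed passphrase.
    """
    callback_ok = callback_number_used == contact.registered_callback
    supplied = hash_passphrase(spoken_passphrase, contact.salt)
    passphrase_ok = hmac.compare_digest(supplied, contact.passphrase_hash)
    return callback_ok and passphrase_ok


# Example: a "CEO" calls asking for an urgent wire transfer.
salt = "f3a9"
ceo = Contact(
    name="Jane Doe (CEO)",
    registered_callback="+1-555-0100",
    passphrase_hash=hash_passphrase("blue heron", salt),
    salt=salt,
)

# The analyst hangs up, dials the number on file and asks for the code.
print(verify_request(ceo, "+1-555-0100", "Blue Heron"))   # True  -> proceed
print(verify_request(ceo, "+1-555-0199", "blue heron"))   # False -> escalate
```

The design point is that neither the inbound caller ID nor the voice itself is trusted; approval hinges on a callback channel and a shared secret that a cloned voice alone cannot supply.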