In early 2024, an employee at a multinational firm joined what appeared to be a routine video call with the company’s CFO and several colleagues. The faces looked right. The voices sounded right. Following instructions from the call, the employee approved a series of wire transfers totaling roughly $25 million. Only the executives on the screen weren’t real. Every person in the meeting was a deepfake.
If you can’t trust a live video call, can you trust anything digital anymore? For MSPs, the question is no longer rhetorical: Protecting clients from deepfakes is now part of the job.
The Biggest Business Risk
Deepfakes — a mashup of deep learning and fake — manipulate a person’s image or voice for deceptive purposes. While synthetic video grabs headlines, fake voices remain the bigger day-to-day risk for businesses because the technology is cheap, fast, and widely available. Criminals no longer need Hollywood-level skills. A few seconds of audio is often enough.
“Imagine how fake advice from Elon Musk could affect the stock market,” said Joseph Steinberg, a cybersecurity adviser in New York.
The Danger of Scale
Steinberg once warned that deepfake attacks were imminent. Now they’re here. Advances in AI have made synthetic voices and faces easier to create, cheaper to deploy, and more convincing than ever.
The real risk is scale. Once criminals refine a deepfake technique, they can reuse it across hundreds of targets, according to Joe Palmer, chief innovation officer of iProov, a United Kingdom-based biometric authentication firm. This turns what was once a bespoke attack into an automated business.
Financial fraud and benefits abuse remain among the most common outcomes. “If a gang does the work, they can scale their crimes,” Palmer said. “An example is filing false unemployment claims for hundreds of people, routing the checks to themselves.”
‘Crime as a Service’ Abounds
Deepfake-enabled fraud is no longer limited to high-profile government targets or highly skilled attackers. Tools for cloning voices and generating realistic video are now widely available, lowering the barrier to entry. Criminal groups increasingly package these capabilities into services, mirroring the “crime-as-a-service” model already familiar from botnets and ransomware.
These techniques are most often used to supercharge social engineering attacks. Fake executive voices, synthetic video messages, and AI-generated emails are being combined to pressure employees into approving payments, resetting credentials, or installing malware. The goal is speed and credibility, not perfection.
Remote and hybrid work environments amplify the risk. When employees rely on video calls, messaging platforms, and voice-only communication, attackers have more opportunities to impersonate trusted executives or coworkers, Steinberg explained. The informal “just double-check in person” safeguard no longer exists for much of the workforce.
Technology Alone Isn’t Enough
Defending against deepfakes requires more than a single control. Biometric verification remains a useful tool, but it works best as part of a layered approach rather than a standalone solution.
Modern facial verification systems have moved well beyond basic “liveness” cues that could be fooled by a printed photo or replayed video. Today’s platforms analyze depth, motion, and subtle lighting changes to help distinguish a real person from a synthetic image or video. These techniques make casual spoofing far more difficult, and costs have dropped enough to put advanced biometric tools within reach of many SMBs using standard webcams and smartphones.
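To make the idea concrete, here is a deliberately simplified Python sketch of how a liveness decision might combine depth, motion, and lighting cues. Every signal name and threshold below is invented for illustration; commercial products learn these decisions from trained models rather than hand-tuned rules.

```python
from dataclasses import dataclass

@dataclass
class FrameSignals:
    """Per-frame cues a vision pipeline might extract (names are hypothetical)."""
    depth_variance: float     # parallax cue; a flat screen held to a camera scores low
    motion_score: float       # frame-to-frame micro-movement; a static photo scores near zero
    lighting_response: float  # how the face reacts to a screen-emitted light challenge

def passes_liveness(frames: list[FrameSignals],
                    depth_min: float = 0.2,
                    motion_min: float = 0.1,
                    lighting_min: float = 0.15) -> bool:
    """Toy decision rule: every cue family must clear a floor.

    The thresholds are placeholders, not values from any real product.
    """
    if not frames:
        return False

    def avg(values: list[float]) -> float:
        return sum(values) / len(values)

    return (avg([f.depth_variance for f in frames]) >= depth_min
            and avg([f.motion_score for f in frames]) >= motion_min
            and avg([f.lighting_response for f in frames]) >= lighting_min)
```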
For high-risk actions — such as wire transfers, payroll changes, or access to sensitive systems — organizations increasingly combine facial verification with device trust, behavioral signals, and contextual checks. The goal is not to prove identity once, but to reduce the likelihood that a single compromised factor can lead to fraud.
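As an illustration of that layering, the sketch below shows one way a policy engine might combine those factors before approving a high-risk action. The field names and tiering are invented for the example; real deployments weight and score these signals rather than counting booleans.

```python
from dataclasses import dataclass

@dataclass
class VerificationContext:
    """Signals gathered for one high-risk request (all names hypothetical)."""
    face_verified: bool    # biometric check passed
    device_trusted: bool   # request came from a known, managed device
    behavior_normal: bool  # time, location, and usage match the user's baseline
    request_typical: bool  # amount and recipient consistent with history

def decide(ctx: VerificationContext) -> str:
    """Illustrative policy: no single factor is ever sufficient on its own."""
    positives = [ctx.face_verified, ctx.device_trusted,
                 ctx.behavior_normal, ctx.request_typical].count(True)
    if positives == 4:
        return "allow"
    if positives == 3:
        return "step-up"          # demand an additional factor first
    return "deny-and-review"      # route to a human for out-of-band checks
```

In this toy policy, a request with a valid face match but an anomalous behavioral signal returns "step-up" rather than "allow", which captures the point of the paragraph above: One compromised or spoofed factor should never be enough to move money.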
Even so, technology has limits.
Trust Can’t Be Automated
Authentication tools often improve only after criminals find ways around them, Steinberg noted. That makes training and process just as essential as technology. Employees need clear rules for verifying high-risk requests, especially those that appear to come from an executive or trusted vendor. They need training to slow down, recognize pressure tactics, and verify unusual requests through a secondary channel.
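One way to turn that “verify through a secondary channel” rule into a repeatable procedure is a callback check like the minimal sketch below. The directory, email address, and phone number are placeholders; the key design choice is that the callback number comes from a pre-existing record, never from the suspicious message itself.

```python
import secrets

# Hypothetical directory of callback numbers, maintained separately from email and chat.
CALLBACK_DIRECTORY = {"cfo@example.com": "+1-555-0100"}

def confirm_out_of_band(requester_email: str, request_summary: str) -> bool:
    """Sketch of a callback rule: never confirm a request on the channel it
    arrived on. Call a number from the pre-existing directory and have the
    requester read back a one-time code."""
    number = CALLBACK_DIRECTORY.get(requester_email)
    if number is None:
        return False  # unknown requester: escalate rather than improvise
    code = f"{secrets.randbelow(10**6):06d}"
    print(f"Call {number}, describe the request ({request_summary}), "
          f"and ask the requester to read back code {code}.")
    return input("Code read back: ").strip() == code
```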
If a voicemail, video message, or chat demands immediate action, the safest response is still the simplest one: Stop and confirm it independently. In an era of convincing fakes, Zero Trust — especially when applied to human behavior — may be the most effective security control.
James E. Gaskin contributed to this article.
This article was updated on 1/7/2026.
Featured image: iStock