May 14, 2024

Deepfakes + Generative AI = Major Problems for Business

Deepfakes that can’t be distinguished from reality threaten to shatter the fundamental hierarchy of human trust and impact businesses.

In February 2024, a finance worker at a multinational firm received a message from the company’s U.K.-based CFO requesting a secret transfer of HK$200 million (about $25.6 million U.S.).

At first, the worker assumed it was a phishing attempt, but a video call with people he thought were several other colleagues was enough to put aside his early skepticism. Only they weren’t real: despite looking and sounding like members of staff he had come to know, each was a deepfake recreation made by generative AI.

Given how elaborate the scam was, one could conclude it was an inside job and that some of the money made its way back to the finance worker. Regardless, there is no denying the massive surge in deepfake video and audio being used for nefarious purposes.

Joshua Liberman of Net Sciences Inc.

Deepfake technology isn’t new. But adding generative AI capabilities to tools that are already powerful, easy to use, and readily available is rapidly changing the threat landscape.

“Generative AI based on large language models is a force multiplier,” explained Joshua Liberman, president of Net Sciences, describing how AI can realistically impersonate human voices, down to the minute flaws in someone’s speech, from as little as a few seconds of captured audio.

As we learned from the finance worker in Hong Kong, deepfake video is quickly following suit. According to Ian Richardson, principal consultant at Fox & Crow Group, everyone is at risk. “If you’re visible in the marketplace, doing public appearances, webinars, podcasts, live streams, or a YouTube channel, you’re sharing yourself with the world. Not just your likeness, but your body mannerisms, the way you dress, your hairstyle, and your voice.”

With very little effort required, bad actors can feed those samples into an AI model. The resulting avatar can convincingly “request system access, data, money, you name it. And that’s the ballgame,” said Richardson.

That means almost anyone with malicious intent can create deepfakes, even those who aren’t particularly smart or tech savvy.

Ian Richardson of Fox & Crow Group LLC

“What used to take a small band of people months to do might be executable in weeks or even days. You could be some 400-pound kid in his grandmother’s basement running these scams that would previously take a nation-state level of capabilities,” Liberman said.

The Threat to MSPs and Their Customers

The ability to digitally turn nearly anyone into a virtual puppet has opened the door to new forms of attack. But criminals are starting with the “low-hanging fruit,” per Liberman, to cash in on opportunities for quick and easy money.

By leveraging effective infiltration techniques like email compromise, a threat actor can build detailed profiles of an MSP’s staff, customers, and communication patterns. That represents everything needed to put together a customized and convincing deepfake campaign.

The approach “targets the very core of the IT managed services space,” said Richardson.

Social Engineering as the Next Digital War

Businesses are facing more than just financial risks from deepfakes. Shidarion Clark, chief information officer at Lannan Technologies, sees the reputational damage from social engineering and misinformation as having the largest impact.

Shidarion Clark of Lannan Technologies

“Business growth often hinges on word of mouth. If somebody wanted to use deepfakes to paint your company in a negative light, it’s going to be that much harder [to recover from it].”

Clark’s insight should rattle any business owner. Anyone with a grudge against a company, or even a competitor, could wield deepfakes and disinformation as a weapon. To make matters worse, it’s an incredibly effective one.

An example of this is the recent onslaught of convincing deepfake videos of political and public figures. Even when the content is proven to be fake, the damage is already done.

“Disinformation is the most powerful, destructive force there is, and it’s really hard to grasp how profound and pervasive it is in some societies,” noted Liberman.

“Most people reason quite emotionally and make a quick decision, and later facts don’t have much impact on them,” he added. “Because this misinformation has the force multiplier of the internet and social media, it stays out there. So sure, it’s a fake video, but it may still get played another billion times.”

Legal consequences can serve as a deterrent for those who use deepfakes to commit crimes, such as fraud, but when it comes to social engineering, there’s little on the books to dissuade them.

“Assuming you even knew who was doing it, I’m not aware of any strict remedies under the law that would prevent that type of stuff,” said Joseph Brunsman, best-selling author and managing member at Brunsman Advisory Group. “You start running into fair-use and First Amendment issues.”

Using the example of a video that makes someone appear to say something outrageous, Brunsman walked through the myriad of complex questions deepfakes raise. “It’s parody, but then parody is in the eye of the beholder. Was that made with malicious intent or not? Was there malice or forethought to make this to specifically lie to people to construct some final goal? Or is it just, ‘I thought this was funny?’”

Joseph Brunsman of Brunsman Advisory Group

Brunsman says most of the generative AI legislation on the books or being introduced deals with the use of AI for things like hiring and firing; none of it touches on deepfakes.

The Next Level of Zero Trust – and the Opportunity for MSPs

At a societal level, deepfakes that can’t be distinguished from reality threaten to shatter the fundamental hierarchy of human trust. If you can’t believe what you see, then what can you believe?

That has severe implications for how businesses will need to operate, taking the concept of Zero Trust to a whole new level and adding verification to nearly every business process — which undoubtedly will impact the speed at which businesses run. “From a workflow standpoint, things might have to slow down just so you can vet where these things are coming from and the sources,” said Clark.

Resetting customer expectations will be part of that shift. As Richardson put it, “We’ve got to slow down the immediacy in the world that was brought via email and start to reset expectations for clients where no matter what you get on your computer in any way, shape, or form, you need to verify as the new default instead of trust as the default.”
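As a rough illustration of what “verify as the new default” could look like in practice, the hypothetical sketch below refuses to act on any instruction that arrives over an unverified channel until it is confirmed out of band. The channel names, requester fields, and confirmation callable are assumptions made for illustration, not a workflow described by anyone quoted here.

```python
# Hypothetical sketch of "verify as the default": nothing that arrives over an
# unverified channel is acted on until it is confirmed on a second, known channel.
UNVERIFIED_CHANNELS = {"email", "chat", "video_call", "voicemail"}

def handle_instruction(instruction, confirm_out_of_band):
    """Gate an inbound instruction behind out-of-band confirmation.

    `instruction` is a dict such as {"channel": "video_call",
    "requester": "cfo@example.com", "action": "wire_transfer"}.
    `confirm_out_of_band` is a callable that reaches the requester through a
    pre-established channel (e.g., a known phone number) and returns True/False.
    """
    if instruction["channel"] in UNVERIFIED_CHANNELS:
        # Verify first, no matter how convincing the message looks or sounds.
        if not confirm_out_of_band(instruction["requester"], instruction):
            return "held: requester could not be verified out of band"
    return "cleared: proceed, with the verification recorded"
```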

As the threat grows, your customers will be looking to you to help protect them, which is a new opportunity to bring value to the relationship and generate additional revenue.

The place to start is awareness. Customers need to understand that businesses of all sizes are targets for this form of attack.

“Educating people on what deepfakes are is the first step. You can’t trust what you see or hear or read on a computer by default; if it’s coming in off of a computer, it could be manipulated,” Richardson said.

Both Clark and Liberman highlighted user training as key to the fight against deepfakes. Common sense could go a long way. Liberman emphasized the importance of slowing down and analyzing situations from a calm, rational perspective. “You have to maintain situational awareness as a potential ‘dupee.’ Don’t fall for it. Do you normally pay for things by gift card? Do you normally rush through a six-figure order to a new bank?”

Deepfakes could also be an opportunity for MSPs to work with customers on shoring up their operational maturity to stay compliant with cyber liability coverage. When asked whether cyber insurance would cover damages from deepfakes, or if he expects to see new policies specifically for loss against social engineering or deepfakes, Brunsman responded, “It could. It depends on what your exact policy says. For example, some policies may require you to have a pre-arranged callback number to a known entity before you send any money over X amount, and you have to demonstrate you did that before the insurance company will reimburse you.”
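A minimal sketch of the kind of policy condition Brunsman describes might look like the following: payments above a set amount are held until a callback to a pre-arranged number succeeds, and the callback is recorded so it can later be demonstrated to the insurer. The $25,000 threshold, contact list, and log format are illustrative assumptions, not language from any actual policy.

```python
import json
from datetime import datetime, timezone

# Hypothetical values: the threshold and pre-arranged numbers would come from
# the customer's own procedures and the wording of their cyber policy.
CALLBACK_THRESHOLD_USD = 25_000
PREARRANGED_CALLBACKS = {"cfo": "+1-555-0100"}

def release_payment(requester_role, amount_usd, perform_callback, audit_log_path):
    """Release a payment only after a logged callback to a known number."""
    if amount_usd >= CALLBACK_THRESHOLD_USD:
        number = PREARRANGED_CALLBACKS.get(requester_role)
        if number is None or not perform_callback(number):
            raise PermissionError("Callback verification failed; payment held.")
        # Keep evidence of the callback so it can be shown to the insurer later.
        with open(audit_log_path, "a") as log:
            log.write(json.dumps({
                "requester_role": requester_role,
                "amount_usd": amount_usd,
                "callback_number": number,
                "verified_at": datetime.now(timezone.utc).isoformat(),
            }) + "\n")
    return "payment released"
```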

Richardson believes the industry will develop and use new security technologies, opening potential revenue opportunities around generative AI defense. One such opportunity, he said, will focus on defending against the appropriation of a person’s likeness, an evolution he compared to the shift from antivirus to EDR’s real-time detection and response.

“There’s going to be some sort of generative AI detection and response stuff or an AI operations center. First it was the NOC, then the SOC, next it will be the AIOC. You heard it here first.”


Image: iStock

