Deepfakes: A Rising Cyber Threat

Posted by Mark Greisiger

A Q&A with John Farley of Gallagher

One of the most dangerous cyber threats emerging on the landscape is also among the most difficult to detect or prevent. Deepfake technology enables perpetrators to mimic the voices and images of real people, and it has significant consequences for companies, individuals, and the democratic process. John Farley, managing director of the cyber liability practice at Gallagher, gave us an update on this concerning development.

What are deepfakes and how are they created?
Deepfakes are videos and audio recordings essentially created to make it appear as if a person did or said something they never actually did or said. Artificial intelligence and deep-learning techniques are used to analyze real video and audio featuring these individuals and to imitate them in a believable fashion.
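
For readers curious about the underlying mechanics, here is a minimal, illustrative sketch, in PyTorch, of the shared-encoder, dual-decoder autoencoder design behind classic face-swap deepfakes. Everything in it is a placeholder assumption: random tensors stand in for aligned face crops, and a real pipeline would also involve face detection, alignment, and blending, which are omitted here.

```python
# Conceptual sketch of a face-swap deepfake autoencoder (hypothetical data).
# Two people share one encoder but each has their own decoder; the "swap"
# comes from decoding one person's latent code with the other's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 3x64x64 face crop to a 256-dim latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a 3x64x64 face crop from a 256-dim latent code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()                          # shared: learns pose/expression features
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person's likeness

# Placeholder stand-ins for aligned face crops of two people.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# Train each decoder to reconstruct its own person from the shared code.
for step in range(100):
    optimizer.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The swap: encode person B's expressions, decode with person A's decoder,
# producing person A's likeness "performing" person B's movements.
with torch.no_grad():
    fake_a = decoder_a(encoder(faces_b))
```

Audio deepfakes follow an analogous recipe, with samples of a target's voice standing in for the face crops.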

How are you seeing deepfakes being used?
So far, the vast majority of deepfakes have been found on pornography sites, with prominent women from a wide range of professions being targeted and mimicked. Lately we've begun to see the technology used in more mainstream ways, often with a political agenda. These deepfakes make it appear as if someone from an opposing party did something that would alienate voters or sway their opinions. With a presidential election coming up, that's a worrying prospect. There hasn't been wide-scale damage so far, but the general public is not equipped to tell what is real from what is fake.

Who creates deepfakes?
They can come from different sources. As with ransomware, initially only a handful of threat actors knew how to use the technology, but over time it has become more widespread. Deepfakes could come from nation-states, organized crime groups, or anyone who knows how to navigate the dark web and hire someone to carry out these crimes for them. Most often they are distributed through social media, the easiest and fastest way to disseminate this disinformation. Recently, an actor [Jordan Peele] made a deepfake video of President Obama saying something negative about President Trump. In that case, it wasn't done to be harmful, but rather to raise awareness of the technology and its risks.

How might a company incur some cyber risk or liability from deepfakes?
There is a real risk of financial crime associated with deepfakes. It has already happened in the UK, where an energy company executive's voice was mimicked on a phone call, leading an employee to transfer $243,000 to an account controlled by the criminal. This is just one example of how the technology can be used in conjunction with social engineering. Deepfakes can cost a company reputational harm, lost funds, business interruption, and litigation costs.

What can be done about this cyber threat?
Right now, the challenge is that no single person, organization, or technology can control the creation and dissemination of digital content on an end-to-end basis. The trick is figuring out how to monitor it in a way that won't stop the flow of legitimate information. Last year, a group called the Deep Trust Alliance convened the best and brightest minds from all walks of life to create solutions. There are many issues to work out, not just the technology needed to combat the threat but also how we think about deepfakes in the context of free speech. If they're used for art, is that protected by free speech, or is it going to be considered a crime?

Where does cyber insurance come into play?
As I mentioned, deepfakes can lead to losses such as reputational harm to the company or brand, loss of funds, and business interruption. Cyber insurance has always risen to the challenges posed by evolving cyber risks, but this threat has only been around since 2017, so it's not explicitly accounted for yet. While some of these losses could be paid by cyber insurance, the nature of deepfakes might not trigger the policy. For example, a policy often requires a network intrusion to make a loss claim, but making a deepfake in most cases would not require penetrating the network. While we haven't seen any direct solutions proposed by the cyber insurance industry, we know these companies need to stay competitive, and we expect clarifications to these policies soon.

What advice do you have for risk managers anticipating the proliferation of deepfakes?
I would just say that this is a very different type of threat, and you really can't implement preventive controls as you might for other cyber threats such as ransomware. What you can do is implement an incident response plan now that addresses a deepfake attack scenario, such as devoting PR resources to issuing a public statement that could help mitigate the damage.

In summary…
We want to thank Mr. Farley for his overview of this new form of cyber risk. It will be interesting to see how deepfake technology, along with the potential liability from its misuse, evolves over time. Often, it's only after the first class-action lawsuit that the industry wakes up and starts to better understand the theories of liability unfolding. Looking into the cyber crystal ball is always challenging, and new forms of cyber risk can be difficult to forecast until they're all around us. The ransomware avalanche perfectly exemplifies the inherent difficulty of getting ahead of an emerging threat. As such, I commend John for his willingness to speak out and sound the warning bell about this new peril.