2020 was a remarkable year in many ways but one: we did not see social order destroyed by deepfakes, the AI-powered media manipulation technique. Yet the technology carries nuclear-grade potential for disruption, and experts fear that the subsiding hype over manipulated content can create a false sense of security.

On Christmas Day, a minor outrage campaign hit the UK’s social media space. Channel 4, a British public-service broadcaster, aired an alternative version of Queen Elizabeth II’s annual Christmas speech. The head of state joked about Prince Harry and Meghan Markle leaving for Canada and, quite unusually for a monarch, did a TikTok dance. None of it was real, of course.

It was a deepfake: a video generated using artificial intelligence, made to look and sound like Queen Elizabeth II. Channel 4 broadcast the simulation, convincing enough to agitate numerous commentators on social media, in order to raise awareness of the technology that made it possible.

We start noticing cases which show that the attacks are moving towards being a threat to average internet users and private individuals,

Giorgio Patrini.

And there is plenty to warn about. As Reuters found out last July, a well-crafted deepfake, coupled with a little bit of social engineering, can fool even journalists whose livelihood depends on separating fact from fiction. Not only did Oliver Taylor, a fake persona with an AI-generated face, manage to publish his ideas in a handful of blog posts, but he also received attention from major Israeli news outlets. This is precisely the type of danger Channel 4 was trying to warn about – a world where we can no longer believe our eyes.

Murky beginnings

The term “deepfake” originated on Reddit, where members of a subreddit of the same name used AI to put celebrity faces on porn actors. Unsurprisingly, a 2019 report on deepfakes by Sensity, an Amsterdam-based visual threat intelligence company, found that the vast majority of deepfakes online are used for porn.

The prevalence of deepfakes involving celebrities is no accident. The technology that creates deepfakes is very data-hungry, Giorgio Patrini, CEO and co-founder of Sensity, explained to CyberNews: even a short deepfake video requires thousands of real pictures, and few people have as many publicly available photos as celebrities do. But with advances in AI, anyone can become a target.

“We start noticing cases which show that the attacks are moving towards being a threat to average internet users and private individuals. Then some cases involve more offensive types of exploitation and weaponization of tech related to harassment, blackmail, and public shaming,” he said.

Recently, Sensity published a report about a bot network on the Telegram platform where pictures of women, often taken from their social media accounts, were “stripped” of clothing using artificial intelligence. The mostly male users targeted more than a hundred thousand women without their consent.

How it’s made

Usually, programs that generate deepfakes use two or more different AIs working together. The first AI scans images (or video, or audio) of the subject to be faked and then generates a doctored version of that media.

The second AI then examines these fakes and compares them to real images. If the differences are too stark, it marks the image as an obvious fake and feeds that verdict back to the first AI.

The first AI takes this feedback and keeps adjusting the fake until the second AI can no longer tell a fake from the real thing. This system is called a generative adversarial network, or GAN for short. A few years ago, this technology was accessible only to a handful of savvy researchers with specialized tools and equipment. According to Patrini, this is no longer the case.
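For the technically curious, a stripped-down sketch of that generator-versus-discriminator loop might look like the PyTorch snippet below. The toy data, tiny networks, and training length are invented purely for illustration; real deepfake software uses large convolutional models, face-alignment pipelines, and thousands of genuine photos of the target.

```python
# Minimal, illustrative GAN training loop (PyTorch).
# The "first AI" is the generator, the "second AI" is the discriminator.
import torch
import torch.nn as nn

LATENT, DATA = 16, 64  # size of the random input and of each toy "image"

generator = nn.Sequential(          # first AI: produces fakes from random noise
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, DATA), nn.Tanh(),
)
discriminator = nn.Sequential(      # second AI: scores how real a sample looks
    nn.Linear(DATA, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def real_batch(n=32):
    # Stand-in for a folder of genuine photos of the target (toy data only).
    return torch.randn(n, DATA) * 0.5 + 1.0

for step in range(2000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), LATENT))

    # 1) Train the discriminator to tell real samples from fakes.
    d_opt.zero_grad()
    d_loss = loss(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the (just-updated) discriminator.
    g_opt.zero_grad()
    g_loss = loss(discriminator(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    g_opt.step()
```

Each pass makes the discriminator slightly better at spotting fakes and the generator slightly better at producing them, which is why GAN output keeps improving until the two networks reach a rough stalemate.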

Every time people would describe at least one incident when someone accused of some wrongdoing said that you can’t believe video evidence because it’s all video manipulation now,

Sam Gregory.

“This tech is commodified very much to the point that in most cases, you would find open-source code that will support new users, with graphical interfaces and sometimes even webinars. This is now accessible to people with very, very limited tech skills,” he explained.

With advances in computing power, the trend is unlikely to change. Easier access, lower costs, and shrinking data requirements will invite more people to use this relatively new way of manipulating media as they see fit.

The future is now

Back in August, UCL published a report ranking deepfakes as the most severe AI crime threat to date. Apart from the obvious danger of public shaming and fake revenge porn, experts point to fake audio and video content being used for extortion.

“Recent developments in deep learning, in particular using GANs, have significantly increased the scope for the generation of fake content. Convincing impersonations of targets following a fixed script can already be fabricated, and interactive impersonations are expected to follow,” claims the report.

That has already happened. In August 2019, The Wall Street Journal reported that the CEO of an unnamed UK-based energy firm believed he was on the phone with his boss, the chief executive of the firm’s German parent company, when he followed orders to immediately transfer €220,000. The fraudsters had used AI voice technology to imitate the German chief executive.

Excerpt from Channel 4’s doctored video of Queen Elizabeth II

Such calls could become a standard modus operandi for online crimes targeting parents, who would hear an imitation of their child over the phone, allegedly kidnapped and pleading for a ransom to be paid to whoever is calling. Not to mention insurance fraud or stock-price manipulation made possible by deepfake technology in the wrong hands.

More pressingly, though, deepfakes might interfere with the security systems we use for authentication every day. When we submit a government ID, perhaps along with a selfie or a video, technology on the other end verifies our identity and lets us open a bank account, for example. The growing capacity to fake a face may render such remote identification useless.

“This is a problem for all of us. As tech moves to be remote-first with no physical presence, we’re opening to the weaponization of deepfakes on the field of biometrics and identification,” fears the CEO of Sensity.

Attack on reality

Echoes of the disinformation campaigns that surrounded the 2016 US presidential election fueled fears that the 2020 election would be remembered for deepfakes. Sam Gregory, a program director at Witness, a New York-based NGO focused on using video evidence to protect human rights, told CyberNews that these fears did not materialize.

“I think because of that, there’s also a little bit of a false sense of security about deepfakes. And everyone’s like, deepfakes, total bust, never going to happen. And I usually say to them, I would love if that were true,” Gregory said.

As tech moves to be remote-first with no physical presence, we’re opening to the weaponization of deepfakes on the field of biometrics and identification,

Giorgio Patrini.

Even before GAN technology allows for flawless faked videos of politicians or activists, deepfakes may cause harm indirectly. For example, they create a liar’s dividend: a situation where perpetrators can claim that video evidence of their misbehaviour was manipulated.

“In all the different meetings we had, every time people would describe at least one incident when someone accused of some wrongdoing said that you can’t believe video evidence because it’s all video manipulation now. And this is interesting because it’s really the power of rhetoric,” Gregory said.

Experts claim that we are a few years away from fake videos being indistinguishable from real ones. This means that there is ample time to figure out how bad actors might use the emerging technology.

According to Gregory, discussing the issue with journalists and human rights activists worldwide offers insight into what the future has in store. Many of them have already experienced being framed as criminals by their governments, and they hold no illusions about how AI-manipulated media will affect their daily lives in the future.

“Amplification of synthetic media technologies is going towards much more accessibility and much more flexibility around the technique. And if you match those capabilities to the threat models that you hear across a wide range of people, we should be preparing. We shouldn’t panic,” explained Gregory.

Here to stay

It doesn’t have to be all about crime. There are less heinous applications for GAN-generated content. For example, the advertising company WPP used deepfakes in training videos in July, tailoring AI-generated audio to match the different languages spoken by its employees.

In mid-October, Matt Stone and Trey Parker, creators of the legendary animated series South Park, released the first episode of a deepfake-based show called “Sassy Justice”. Thanks to AI, the show’s characters bear an uncanny resemblance to well-known politicians and business leaders.

The creators of the documentary “Welcome to Chechnya,” which details the lives of the LGBTQ+ community in southern Russia, used deepfake tech to hide the faces of the people who were filmed, thus minimizing the threat of unwanted identification.

Tech giants such as Facebook are banning deepfakes while simultaneously developing tools to better detect such content. Both Gregory and Patrini agree that deepfakes are not going anywhere, and that it is essential to educate the public about the possible dangers and learn to co-exist with the technology.

“I am convinced that the technology behind deepfakes is very new, so we should not judge it, per se. What we have to judge is its users. This technology will bring positive innovations regardless of the fact that there is a dark side to it,” said Patrini.

After all, why would anyone ban videos of Bill Hader doing Arnold Schwarzenegger impressions with a touch of AI for comic effect, or of Mark Zuckerberg posing as a small-town salesperson?