There are likely thousands of active bank accounts that were opened with the help of AI-powered image manipulation software.

Deepfake videos and images can easily penetrate the defenses companies put up for remote identity verification, a recent report from Sensity.AI, a visual threat intelligence company, shows.

Academic research shows that deepfakes are almost five times more effective at spoofing verification solutions than traditional methods. Traditional spoofing techniques, such as printed photos held in front of the camera or realistic 3D masks, succeed in 17.3% of cases. Deepfake-enabled spoofing techniques, meanwhile, succeed in 86% of cases.

According to Francesco Cavalli, Co-Founder of Sensity.AI, banks and fintech companies detect up to 1,500 deepfake-based spoofing attacks every month. However, the fakes are mainly caught by humans and likely after the fraud scheme has been completed.

“Given the accuracy of the current deepfake models, it is likely a lot of them will never be discovered manually, so the entirety of the problem is likely underestimated,” Cavalli told Cybernews.

Successfully completed verification process with the assistance of deepfakes. Image by Sensity.AI/Cybernews.

Know your customer

Most companies employ Know-Your-Customer (KYC) verification practices to comply with anti-money laundering legislation and confirm that individuals are who they claim to be.

The most common KYC practices involve ID verification, face matching, or liveness verification. The first mode requires the user to show an ID card or a passport for recognition, while the second asks the user to take a selfie that is matched against the photo on the ID.

Liveness verification checks fall into two categories: active and passive. During active liveness checks, the user is asked to perform specific actions such as blinking, smiling, or moving their head on command. The passive mode simply analyzes the captured images with proprietary tests.
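For illustration only, the face-matching step can be sketched in a few lines of Python. The sketch below uses the open-source face_recognition library rather than any vendor's proprietary system, and the file names and threshold are placeholder assumptions, not details from the report.

```python
# Minimal sketch of the face-matching step in a KYC flow.
# Uses the open-source face_recognition library; real vendors rely on
# proprietary models and layer liveness checks on top of this.
# "id_document.jpg" and "selfie.jpg" are placeholder file names.

import face_recognition

# Load the photo extracted from the ID document and the live selfie
id_image = face_recognition.load_image_file("id_document.jpg")
selfie_image = face_recognition.load_image_file("selfie.jpg")

# Compute 128-dimensional face embeddings for each image
id_encodings = face_recognition.face_encodings(id_image)
selfie_encodings = face_recognition.face_encodings(selfie_image)

if not id_encodings or not selfie_encodings:
    raise ValueError("No face found in one of the images")

# Compare the embeddings; a smaller distance means a closer match
distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
match = distance < 0.6  # the library's common default threshold

print(f"Face distance: {distance:.3f} -> match: {match}")
```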

According to the report by Cavalli’s team, all of the standard KYC practices are highly vulnerable to deepfakes.

Attackers only need to capture the victim's appearance, create a deepfake using face swap or facial reenactment techniques, and inject the deepfake into the face biometric identification system.

No survivors

To test how vulnerable KYC solutions are, the researchers built a deepfake testing toolkit (DOT) and deployed it against five KYC verification solutions that together hold a 24% global market share.

According to the researchers, the test focused on banking, marketplace, gambling, lending, and government sectors with millions of customers worldwide.

The report claims that Cavalli’s team spoofed every ID verification and active liveness solution tested and four out of five passive liveness solutions.

Passive liveness fared better because it detects pixel-level inconsistencies in the face, as well as the unrealistic brightness and blurriness typical of deepfake models.
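The report does not disclose how the vendors implement these checks, but the kind of pixel-level cue described above can be illustrated with a simple OpenCV heuristic measuring sharpness and brightness; the thresholds below are arbitrary placeholders, not values from Sensity's report.

```python
# Toy illustration of pixel-level cues a passive liveness check might use:
# blurriness (variance of the Laplacian) and unrealistic brightness.
# Thresholds are arbitrary placeholders, not values from Sensity's report.

import cv2

def looks_suspicious(image_path: str,
                     blur_threshold: float = 100.0,
                     brightness_range: tuple = (40, 220)) -> bool:
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Low Laplacian variance indicates a blurry, possibly synthetic frame
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Mean brightness far outside a natural range is another weak signal
    brightness = gray.mean()

    too_blurry = sharpness < blur_threshold
    odd_brightness = not (brightness_range[0] <= brightness <= brightness_range[1])

    return too_blurry or odd_brightness

# "kyc_frame.jpg" is a placeholder file name
print(looks_suspicious("kyc_frame.jpg"))
```

Production systems combine many such signals with learned detectors, which is why only one of the five tested passive liveness solutions resisted the attack.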

Amplified threats

Worryingly, successfully bypassing a KYC verification process and setting up an illegitimate account at a financial institution might cost next to nothing.

“It can be 100% free. The attacker only needs to find a good face swap library on Github and understand how to inject the model in the camera feed during the KYC process,” Cavalli explained.

While the advancements in deepfake technology might not create new threats, they certainly amplify existing ones. For example, fraudsters who turn to the dark web to purchase a stolen ID to set up a bank account often struggle with biometric checks. Using deepfakes allows scammers to work around built-in defenses.

“Once the face swap model setup is done, the attack can be replicated again and again without using any other photo editing/animation tool. So deepfake face swap models represent a scalable opportunity to improve the spoofing quality but also to speed up fraud operations,” Cavalli said.

How deepfakes are made

To grossly oversimplify, programs that generate deepfakes use two different AIs working against each other. The first AI scans images (or video, or audio) of the subject to be faked and then creates a doctored image or other type of media.

The second AI then examines these fakes and compares them to real images. If the differences are too stark, it marks the image as an obvious fake and reports back to the first AI.

The first AI takes this feedback and continually adjusts the fake image until the second AI can no longer tell a fake from the real thing. This system is called a Generative Adversarial Network, or GAN for short.
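The loop described above can be sketched in a few dozen lines of PyTorch. The toy below trains on random vectors rather than faces, so it is nothing like the large face-generation models behind real deepfakes, but it shows the adversarial generator/discriminator dynamic.

```python
# Toy GAN training loop illustrating the two-network setup described above:
# a generator produces samples, a discriminator learns to reject them, and
# the generator is updated until the discriminator can no longer tell real
# from fake. Real deepfake models are vastly larger and operate on faces.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # "Real" data stands in for genuine images of the target
    real = torch.randn(64, data_dim) + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # 1) Discriminator: score real samples high and generated fakes low
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Generator: adjust the fakes until the discriminator is fooled
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```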

A report from University College London (UCL) listed deepfakes as the most severe AI crime threat to date. Apart from spoofing, experts point to the use of fake audio and video content for extortion.

“Recent developments in deep learning, in particular using GANs, have significantly increased the scope for the generation of fake content. Convincing impersonations of targets following a fixed script can already be fabricated, and interactive impersonations are expected to follow,” claims the UCL report.

