Written by Suzanne Smalley

The FBI is deeply worried that cybercriminals and nation-state adversaries are growing more precise in their attacks and taking advantage of innovations in artificial intelligence, a combination that will compound the digital threat in the years to come, FBI Assistant Director for Cyber Bryan Vorndran said Wednesday.

“When we think about software as a service or even supply chain attacks, what happens when the adversary understands that there is perhaps one software factory that services the entire community?” said Vorndran, who oversees 1,000 FBI agents focused on cybercrime nationwide, in a speech at a Fordham University cybersecurity conference.

“If they’re that precise on targeting, it could shut down the entire commercial real estate industry. That is a huge problem,” he said.

China is already displaying greater precision in targeting its victims, but that level of sophistication isn’t yet commonplace among other nation-state adversaries, Vorndran said. For now, his No. 1 recommendation for business leaders is to understand how shared-service software is used across their organizations, because that software is often malicious hackers’ easiest target.

The FBI is also concerned that Chinese, Spanish, Portuguese and other foreign cybercriminals will begin carrying out ransomware attacks with the same intensity as Russia’s cybercrime syndicates. It’s a concern that’s further complicated by what Vorndran called the “blended threat.”

“It is very difficult to understand where the nation-state threat starts and stops and where the criminal threat starts and stops,” he said. “When we don’t see command and control from the nation-state over[head], it muddies the water in terms of understanding clearly what an adversary is after, and that affects our ability to deploy the right resources.”

Finally, Vorndran said the emergence of synthetic content such as deepfakes, together with advances in artificial intelligence, will create massive and vexing problems for democracies. Citing the work of University of California, Berkeley scholar Hany Farid, an expert in digital analysis, Vorndran said he believes that within two to three years the world will be saturated with synthetic audio and video content that is virtually indistinguishable from reality.

“It is a huge threat that we need to get our arms around,” Vorndran said. “There’s obviously tremendous downstream effects to deepfakes and synthetic content.”

For example, he said, it is possible that in the near term, prosecutors will not be able to prove criminal guilt because of deepfakes that copy human voices with staggering precision.

“Think about the argument in court where somebody says, ‘That’s not my client’s voice on a Title III intercept. That’s a fake,’” Vorndran said. “How do we authenticate that even in the basic rule of law?”

Vorndran said artificial intelligence is also a technology the FBI is closely watching. U.S. Cyber Command Executive Director Dave Frederick recently said CyberCom is likewise devoting increasing resources to monitoring and developing AI capabilities.

“The fear of how AI is going to expand and the fact that we may not be able to control it should be of deep concern to us,” Vorndran said. He declined to discuss this threat in further detail, calling it “too amorphous.”

He acknowledged that at the beginning of the ransomware threat, the FBI didn’t fully anticipate the scope of the problem. “Our intent is always to stay at pace with the threat,” he said. “Sometimes that’s simply not possible.”