With the rise of the cloud and remote work, the old model of a fortified corporate network you could protect from outside attack has crumbled. The perimeter has stretched so far that it’s nearly impossible to defend. Instead of trying to learn everything about attackers, says Darktrace’s global head of threat analysis Toby Lewis, a better strategy is learning about the network’s users.
“That gives us a really good baseline of normal, because fundamentally, an attacker in an environment is gonna be operating outside those bounds of normal,” Lewis says. “And that’s probably the easiest way we can approach this use of AI [to] identify unusual behavior.”
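Darktrace does not publish its models, but the "baseline of normal" idea Lewis describes can be illustrated with a deliberately simple statistical sketch: learn each user's typical activity level, then flag behavior far outside it. Everything here (the function names, the use of daily activity counts, the three-standard-deviation threshold) is a hypothetical stand-in for a real behavioral model, not Darktrace's method.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn a per-user 'normal' from past daily activity counts
    (e.g., files accessed per day). Returns (mean, stdev) per user."""
    return {user: (mean(days), stdev(days)) for user, days in history.items()}

def is_anomalous(baseline, user, todays_count, threshold=3.0):
    """Flag activity more than `threshold` standard deviations
    above the user's own learned baseline."""
    mu, sigma = baseline[user]
    if sigma == 0:
        return todays_count != mu
    return (todays_count - mu) / sigma > threshold

history = {"alice": [40, 42, 38, 45, 41, 39, 44]}
baseline = build_baseline(history)
print(is_anomalous(baseline, "alice", 43))   # within her normal bounds
print(is_anomalous(baseline, "alice", 900))  # far outside: flagged
```

The key property, as Lewis notes, is that the detector needs no knowledge of the attacker at all; it only needs a good model of the legitimate user.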
AI can be used in a number of ways, he says, starting with behavior analysis and threat detection. The logical next step is to automate response, according to Lewis. “How are we going to contain this threat in a way that doesn’t impact the end user, it doesn’t impact the original customer’s networks — allows them to still operate, but also uses the benefits of operating at machine speed?” he asks. “It doesn’t necessarily need a human to go to investigate it and to click some buttons and to make it happen.”
Artificial intelligence can bring swifter and more flexible threat remediation than traditional cybersecurity tools. But in order to maximize the benefits, says Lewis, you need to build trust with your customers by, say, demonstrating the AI and then putting it in human confirmation mode. “So it’s now able to take action, but the human’s still in control,” he says.
The next step would be full autonomy. But, Lewis points out, “that step from human confirmation mode to fully autonomous doesn’t have to be a binary on or off. You can configure it to only do that at certain times of day, out of hours or at the weekend.” He also says the AI can be scoped by behavior type: always stopping ransomware-related activity, whatever the time or day, while leaving less urgent detections for human responders to analyze. That allows the AI to supplement the human cybersecurity staff while keeping them in charge.
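The graduated autonomy Lewis describes amounts to a small policy decision: contain ransomware autonomously around the clock, act autonomously on other detections only when no analysts are on duty, and otherwise wait for human confirmation. The sketch below is a hypothetical illustration of that configuration, not any vendor's actual policy engine; the alert types, hours, and action names are assumptions.

```python
from datetime import datetime

# Hypothetical staffed hours: Monday-Friday, 09:00-17:59.
BUSINESS_HOURS = range(9, 18)

def staff_on_duty(now):
    """True during staffed weekday business hours."""
    return now.weekday() < 5 and now.hour in BUSINESS_HOURS

def decide_response(alert_type, now):
    """Pick a response mode for an alert, mirroring the tiered
    configuration described in the text."""
    if alert_type == "ransomware":
        return "autonomous-containment"   # always, any time of day
    if not staff_on_duty(now):
        return "autonomous-containment"   # out of hours or weekend
    return "human-confirmation"           # analyst approves the action

print(decide_response("ransomware", datetime(2024, 3, 12, 11, 0)))     # Tuesday morning
print(decide_response("unusual-login", datetime(2024, 3, 12, 11, 0)))  # staff on duty
print(decide_response("unusual-login", datetime(2024, 3, 16, 11, 0)))  # Saturday
```

The point of structuring it this way is exactly the one Lewis makes: autonomy is not a binary switch but a dial, turned up where speed matters most and down where human judgment is available.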
To truly trust off-leash AI, those humans need to understand why an automated tool takes the actions it does. By helping people understand its decisions, Lewis says, the AI builds even more trust. “If you start to bring AI into your environments, yes, you get some great efficiency savings in terms of your ability to respond to an attack,” he says. “But … [people] still need to understand what’s going on. And that’s still really, really critical, if you’re using AI or not.”