Cyber-incident attribution gets a lot of attention, for good reason. Identifying the actor(s) behind an attack makes it possible to take legal or political action against the adversary and helps cybersecurity researchers recognize and prevent future threats.
As I wrote in the first part of this article, attribution is both a technical and an analytical process. Extracting the necessary data therefore requires collaboration across many information and intelligence disciplines. Attribution is also getting harder as tradecraft improves and malicious actors find new ways to obfuscate their activity. Human intelligence frequently comes into play, which is part of what makes the work of government intelligence agencies like the FBI and CIA so valuable.
There are multiple factors involved in trying to attribute an event. Here is a general framework you can apply in your attribution activities.
Victimology
Finding out as much as you can about the victim (e.g., yourself) through analysis can yield some surprising results. To paraphrase Sun Tzu, “know your enemy and you will win a hundred battles; know yourself and you will win a thousand.” What you make or manufacture, what services you provide, and who your corporate executives are all have a direct bearing on the adversary’s motives. Who wants what you have? Is a nation-state fulfilling collection requirements? Does someone want to reproduce your intellectual property?
Tools
Categorize the adversary’s tools you find during your investigation and analyze each group. What did the adversary use? Are they open source? Are they open source but customized? Were they possibly written by the actors themselves? Are they prevalent or common? Unfortunately, tools used in a breach are often transient or lost due to time and anti-forensic techniques (such as malware that exploits a vulnerability). Different tools can maintain persistence, escalate privileges, and move laterally across a network. Tools become harder to detect the longer the adversary remains in your network.
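To keep that categorization consistent across an investigation, it can help to track each tool as a small structured record. The sketch below is illustrative only; the categories, field names, and example tools are assumptions, not a standard taxonomy.

```python
from dataclasses import dataclass
from collections import Counter

# Illustrative origin categories for tools observed during an investigation.
ORIGINS = {"open_source", "open_source_customized", "custom", "commodity"}

@dataclass
class ObservedTool:
    name: str        # e.g., "psexec" or "custom_loader" (hypothetical examples)
    origin: str      # one of ORIGINS
    purpose: str     # e.g., "lateral_movement", "persistence", "credential_access"
    first_seen: str  # ISO 8601 timestamp of the earliest forensic evidence
    notes: str = ""  # customization details, hashes, build strings, etc.

def summarize_by_origin(tools: list[ObservedTool]) -> Counter:
    """Count tools by origin to see at a glance how much is custom vs. commodity."""
    return Counter(t.origin for t in tools)

inventory = [
    ObservedTool("psexec", "commodity", "lateral_movement", "2024-03-02T11:40:00Z"),
    ObservedTool("custom_loader", "custom", "persistence", "2024-03-01T08:15:00Z",
                 notes="unique PDB path; no public samples found"),
]
print(summarize_by_origin(inventory))  # Counter({'commodity': 1, 'custom': 1})
```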
Time
Looking and behaving like everyone else in your environment is crucial to an adversary’s longevity. They tend to use what is available to them on the corporate network (“living off the land”) or innocuous tools that won’t arouse suspicion, making them harder to detect. An adversary backed by a strong military-industrial complex or sophisticated intelligence apparatus has the time, resources, and patience to linger in your network. In contrast, time is money for cybercriminals and ransomware groups, so their dwell time may be significantly lower.
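One number worth tracking in this context is dwell time: the gap between the earliest evidence of compromise and the moment of detection. A minimal sketch, assuming you already have ISO 8601 timestamps for your earliest indicators (the example values are hypothetical):

```python
from datetime import datetime

def dwell_time_days(earliest_indicators: list[str], detected_at: str) -> float:
    """Days between the earliest observed indicator and detection.
    Timestamps are assumed to be ISO 8601 strings with UTC offsets."""
    first = min(datetime.fromisoformat(t) for t in earliest_indicators)
    detected = datetime.fromisoformat(detected_at)
    return (detected - first).total_seconds() / 86400

# Hypothetical example: earliest beacon and first suspicious logon vs. case-open date.
print(round(dwell_time_days(
    ["2024-01-10T06:30:00+00:00", "2024-01-14T22:05:00+00:00"],
    "2024-04-02T09:00:00+00:00",
), 1))  # 83.1
```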
Infrastructure
Investigate what type of infrastructure the malicious actors used, especially elements related to command and control (C2) functions. Was it leased infrastructure, a virtual private server (VPS), a virtual private network (VPN), compromised space, or a botnet? Did they use Tor or another anonymous network? Was C2 hard-coded into the malware? How does the C2 work? Unique infrastructure is easier to identify, whereas commodity infrastructure makes attribution more difficult.
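One of the questions above, whether C2 is hard-coded into the malware, can sometimes get a quick first answer from a triage pass over the strings in a sample. The sketch below is a rough filter for analyst review, not a substitute for reverse engineering; the regular expressions are simplistic and will produce false positives.

```python
import re
import sys

# Rough patterns for IPv4 addresses and domain-like strings; expect noise.
IPV4 = re.compile(rb"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN = re.compile(rb"\b[a-z0-9][a-z0-9-]{1,62}(?:\.[a-z]{2,10}){1,3}\b", re.IGNORECASE)

def candidate_c2_indicators(path: str) -> set[bytes]:
    """Pull IP- and domain-looking byte strings out of a file for manual review."""
    with open(path, "rb") as f:
        data = f.read()
    return set(IPV4.findall(data)) | set(DOMAIN.findall(data))

if __name__ == "__main__":
    for hit in sorted(candidate_c2_indicators(sys.argv[1])):
        print(hit.decode(errors="replace"))
```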
Implementation
It’s not enough to identify the adversary’s tools and infrastructure; it is also critical to review how they were implemented during the attack. How tactics, techniques, and procedures (TTPs) are implemented can tell you whether someone is intentionally trying to mislead you (i.e., using false flags). If data was exfiltrated from your network, do a detailed analysis to understand what they took or targeted.
Logging internal user actions can help if the adversary moved laterally and took on an administrator’s or employee’s persona. If they did a “smash and grab,” taking everything, well, you’ve got some work to do. If the attack was unique and there are no benchmarks to compare against, that novelty is itself an indicator worth noting.
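If you do retain authentication logs, even a simple filter for administrator accounts appearing on hosts they do not normally use can surface the lateral movement described above. A minimal sketch, assuming a generic CSV export of logon events; the column names and account names are illustrative assumptions.

```python
import csv

def unusual_admin_logons(log_path: str, admin_accounts: set[str],
                         expected_hosts: dict[str, set[str]]) -> list[dict]:
    """Return logon rows where an admin account appears on an unexpected host.
    Assumes a CSV with 'timestamp', 'account', and 'source_host' columns."""
    flagged = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            account = row["account"].lower()
            if account in admin_accounts and row["source_host"] not in expected_hosts.get(account, set()):
                flagged.append(row)
    return flagged

# Hypothetical usage:
# hits = unusual_admin_logons("logons.csv", {"admin.jsmith"}, {"admin.jsmith": {"it-jump01"}})
```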
Attacks rarely work that way, though. Adversaries tend to go with what they know: they learn a way of doing things and try to stick with it. While the tools of the trade (hacking tools, exploited vulnerabilities, infrastructure) change, tradecraft is more difficult to change wholesale.
Next Steps
Once you collect the intelligence or evidence you need, consider: What is the fidelity of the information captured, i.e., how accurate is it? How exclusive is it? Is what you know about the attack tied to a particular actor or organization?
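One way to keep those questions honest is to score each piece of evidence on fidelity and exclusivity and see how much of the case rests on weak or widely shared indicators. The scales and formula below are purely illustrative, not an established methodology.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    description: str
    fidelity: int      # 1 (uncorroborated) .. 5 (verified from multiple sources)
    exclusivity: int   # 1 (shared by many actors) .. 5 (tied to a single actor)

def attribution_support(evidence: list[Evidence]) -> float:
    """Average of fidelity * exclusivity, normalized to 0..1. Illustrative only."""
    if not evidence:
        return 0.0
    return sum(e.fidelity * e.exclusivity for e in evidence) / (25 * len(evidence))

case = [
    Evidence("custom loader also seen in two earlier intrusions", 4, 4),
    Evidence("open-source tool used by dozens of groups", 5, 1),
]
print(round(attribution_support(case), 2))  # 0.42
```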
When you make an assessment, you inevitably have information gaps — either missing material information or indicators that are not neatly explained by your strongest theory. If a government needs more information, it probably has the resources to close the intelligence gaps. Any other type of organization must find other ways to derive attribution for defensive purposes.
Final Thoughts
Many people and organizations want to rush attribution and take immediate action. The desire for speed, however, does not eliminate the need for a thorough investigation. On the government side, rushing a response to a cyber event to set a foreign policy standard or meet a perceived national security objective is a recipe for disaster.
The attribution process should be strengthened, not short-circuited; otherwise, highly skilled false-flag and deception operations will draw companies and countries into conflict while playing into the hands of a determined adversary. Foreign policy strategy is a game of chess in which you must always anticipate the adversary’s countermoves.
Attribution often requires a whole-of-government and private sector effort; rarely does one agency or company have all the necessary information to put the pieces together. We need to incorporate and formalize threat intelligence and attribution into academic curricula and give it the attention it deserves. This is not something any nation or the cybersecurity community can afford to get wrong.