
A simulated phishing attack against more than 82,000 workers found that e-mails with a personal impact resulted in more clicks, and that technical teams, such as IT workers and DevOps teams, clicked just as often as nontechnical teams and reported suspected phishing attacks less often.

Software-security firm F-Secure worked with four multinational organizations to create campaigns featuring one of four different phishing e-mails: a purported message from human resources, a fake CEO fraud message, a spoofed document-sharing message, and a fake notice of a service failure. On average, 12% of users clicked on the phishing e-mail in their inboxes, but the rate depended significantly on the content.

In addition, the median time to report a suspected phishing attack was 30 minutes, which is good but somewhat problematic, as a quarter of those who clicked on a phishing e-mail did so in the first five minutes, says Matthew Connor, F-Secure’s service delivery manager and lead author of the study report.

“The identification of an attack and a successful phish is by far the most important part here,” he says. “It is all well and good to train your staff so they don’t click on an e-mail, but if the e-mails that do get through your network and to the inboxes, if you yourself haven’t picked that up, you need to know that someone is going to report that.”

Fast Phish Reporting Is Key
The lesson is perhaps the study’s most important: Speed is critical, Connor says. One way to improve both the reporting rate and the reporting speed is to make reporting very simple, such as the click of a button. Two companies that did not have such an easy way to report suspected phishing attacks had an average reporting rate of less than 15%, while a third company that did have a ubiquitous reporting button had a 45% reporting rate.

Because companies need to rely on workers to report phishing as soon as possible, keeping the process as friction-free as possible is important, said Riaan Naude, director of consulting at F-Secure, in a statement announcing the results.

“The evidence in the study clearly points to fast, painless reporting processes as common ground where security personnel and other teams can work together to improve an organization’s resilience against phishing,” he said. “Getting this right means that an attack can be detected and prevented earlier, as security teams may only have a few precious minutes to mitigate a potential compromise.”

The study also found that working in IT or DevOps did not equate to better judgment when evaluating potential phishing attacks. In the two organizations that had dedicated IT or DevOps teams, both groups clicked the test e-mails as often as, or more often than, other departments in their organizations, F-Secure stated in the report.

In one organization, 26% of DevOps team members and 24% of IT team members clicked on the test phishing payload, compared with 25% for the organization overall, while 30% from DevOps and 21% from IT clicked on the phishing payload in the second organization, compared with 11% overall.

The results likely reflect the difference between workers who have technical training and those who have the suspicious nature that complements a position in IT security.

“Phishing gets lumped into the category of information-security problems — and it is — but it is also just a vessel for a scam,” Connor says. “It is just the same as having something snatched from you when you are looking elsewhere. And there is a mentality to defending against that.”

One Click
Whether this phishing study is a true representation of the workforce’s state of vulnerability is debatable. The study measured only two outcomes, clicks and reports, with a single click counted as a successful attack. Typically, a phishing attack requires multiple steps as part of its attack chain: The user clicks on an attachment or link in the e-mail and then allows a program to run or enters information on a website that seems legitimate but is under attacker control.

Often, the user will think twice before entering their information on a website or clicking the OK button to allow a program to run.

However, F-Secure also had to make concessions in constructing the e-mail lures. The rules of the simulation did not allow the company to tailor e-mail content to classes of users, instead requiring the messages be broadly applicable. In addition, the e-mail messages could not include a logo or use languages other than English. More customization would likely have led to more successful attacks, F-Secure stated in the report.

For example, the document-sharing and service-issue notification e-mails were not branded as coming from a well-known service, which likely affected their click rates, the company argued, adding that “[s]ecurity teams should be aware that a real attacker might mimic a well-known service and have more success.”