Enter boardroom, set hair on fire. How not to tackle incident response
Event anomalies can be an indicator of attack, but they can also just be an IT problem. New research suggests the latter might be more common than you think.

The Incident Response Report, published today by F-Secure and summarising its own investigations, shines a light on both attack methodologies and corporate attack reporting.

Email inboxes, via the dual whammy of phishing and malicious attachments, are the most common source of breaches (34 percent combined). The single biggest individual attack source was the exploitation of Internet-facing service vulnerabilities (21 percent). Neither is exactly a surprising statistic, to be honest.

That 13 percent of the reported incidents investigated by F-Secure turned out to be false alarms is, perhaps, more so. The number of such false alarms certainly took Tom Van de Wiele, F-Secure's principal security consultant, by surprise, and reveals an enterprise struggle with detecting what is and isn't an attack. "Sometimes we'll investigate and discover an IT problem rather than an attack," Van de Wiele says, "which drains resources and distracts everyone from dealing with the real issue."

While every incident response (IR) process starts with the same question of 'is it an incident?', just how quick and accurate the enterprise is will, Van de Wiele concludes, "define the cost of the answer."

So, how can the enterprise best answer the 'is it a security incident?' question and avoid sending incident responders on a time-consuming and expensive wild goose chase?

"At no point in the incident response procedure, process or checklist is 'enter boardroom, set hair on fire' in the manual," Ian Trump, chief technology officer at Octopi Research Lab (UK), points out. It's a serious warning, too: over-responding can be as disruptive and damaging to an organisation as under-responding, and often both. Trump argues that this isn't surprising, though, as what's absent from the dialogue and understanding of IR is the actual monotony of the job. "It's actually fairly (and blessedly) rare to encounter a working cyber-fire," Trump told SC Media. "Unless you have a highly elevated threat and risk profile, it's unlikely the IR team will be encountering malware associated with APT 1-100 on a daily, weekly or monthly basis."

Travis Smith, principal security researcher at Tripwire, doesn't think that goose chases in anomaly-based detection are necessarily a bad thing. "Chasing down wild geese allows both the detection system and the security analyst to gain a better understanding of what normal looks like for their environment," Smith told SC Media, "and having a solid baseline of what normal looks like will help to reduce the false positive rate over time."
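Smith's baselining point can be illustrated with a toy sketch. This is not Tripwire's product or any real detection system, just a hypothetical z-score check on an invented metric (daily login counts), showing how a baseline of "normal" drives the anomaly decision and how folding investigated false positives back in widens that baseline over time:

```python
import statistics

def is_anomalous(baseline, value, threshold=3.0):
    """Flag `value` as anomalous if it sits more than `threshold`
    standard deviations from the baseline mean (a simple z-score test)."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # guard against zero variance
    return abs(value - mean) / stdev > threshold

# A week of "normal" daily login counts observed for one host (invented data).
baseline = [102, 98, 110, 95, 105, 99, 101]

print(is_anomalous(baseline, 104))  # an ordinary day: not flagged
print(is_anomalous(baseline, 450))  # a spike: flagged for investigation

# If investigation shows the spike was a benign IT event (say, a patch
# cycle), appending it to the baseline broadens "normal" and reduces
# repeat false positives for similar spikes later.
baseline.append(450)
```

The design choice Smith describes is exactly this feedback loop: every chased goose, genuine or not, refines the model of normal that future alerts are measured against.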

Smith's colleague at Tripwire, director of security research and development Lamar Bailey, doesn't sound so convinced. "False positives are the bane of existence for security teams," Bailey insists. "When a vulnerability is reported and the system admin goes to fix the issue, if the information is wrong, the tool used loses credibility."

Math-based anomaly tools add to the problems faced by security teams, argues Andy Norton, director of threat intelligence at Lastline. "Heuristic alerts tell you nothing about the nature of the potential attack or how to remediate it quickly," Norton told SC Media. "Microsoft operate 12 security operation centres and have found that the optimal way to answer the 'is it a security incident?' question is to base escalations and investigation on behavioural analysis gained from attacks in a contained and instrumented malware analysis platform." This approach ensures accuracy and speed of remediation, focused on actual intent rather than anomalous circumstance, he concludes.

We will leave the last word to AlienVault's security advocate, Javvad Malik, who suggests it's essential to bear in mind that most of the work for incident response needs to be done up front, in the preparation phase. "This would include knowing what and where critical assets are, where vulnerabilities lie, what normal operations look like, and having a reliable method to pull all data together to generate high quality alarms." If the work isn't done up front, or if information is scattered across too many disparate security technologies, then, Malik warns, "it can be very difficult to pinpoint when an actual incident is occurring versus not..."