Mark Kedgley, CTO, New Net Technologies (NNT)

“Fake news” is big news right now. Anyone can join in – we all have the same free-of-charge, instant, mass communication capabilities provided by social media, and even if you don't want to create fake news content, you can still participate by liking and forwarding. With over 44 percent of Americans reportedly using Facebook as their primary news source, it becomes a major problem when misinformation, half-truths, and completely made-up stories are indistinguishable from facts and reality.

But even before this current era of the Information Age, too much information, whether real or fake, was already a problem. In 1971, political scientist and economist Herbert A. Simon wrote that “a wealth of information creates a poverty of attention.”

In the IT security world, this “poverty of attention” becomes “alert fatigue”. The scale of the problem starts with the sheer volume of zero-day threats being created: 430 million new malware variants in 2015 according to Symantec, or well over a million every day. It's the reason why you need forensic-level, real-time integrity change monitoring more than ever. As we all know, zero-day threats, including ransomware, are designed to evade anti-virus detection.

This means that in anything other than an airtight static estate, you will need to review a virtually endless stream of file integrity change events, 99.999 percent of which will be benign, safe, and of your own making. But you still need to review all of them in order to spot the remaining 0.001 percent that represents zero-day threat activity.
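To make the baseline-and-compare principle behind file integrity monitoring concrete, here is a minimal sketch in Python. The BASELINE_FILE and MONITORED_DIR values are purely illustrative, and a commercial FIM product would do this continuously and in real time, capturing far richer metadata (permissions, ownership, registry keys) than a simple content hash.

```python
import hashlib
import json
import os

BASELINE_FILE = "baseline.json"   # hypothetical location for the stored baseline
MONITORED_DIR = "/etc"            # example directory; a real deployment covers far more

def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root):
    """Walk the monitored tree and record a hash for every readable file."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                state[path] = hash_file(path)
            except OSError:
                continue  # skip unreadable or transient files
    return state

def compare(baseline, current):
    """Yield (event, path) pairs for added, modified and deleted files."""
    for path, digest in current.items():
        if path not in baseline:
            yield ("ADDED", path)
        elif baseline[path] != digest:
            yield ("MODIFIED", path)
    for path in baseline:
        if path not in current:
            yield ("DELETED", path)

if __name__ == "__main__":
    current = snapshot(MONITORED_DIR)
    if os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)
        for event, path in compare(baseline, current):
            print(f"{event}: {path}")   # every one of these events needs triage
    with open(BASELINE_FILE, "w") as f:
        json.dump(current, f)           # current state becomes the new baseline
```

Even this toy example makes the alert-fatigue problem obvious: on a busy server, routine patching alone will generate hundreds of ADDED and MODIFIED events, every one of which looks identical to an attacker dropping a file.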

This is borne out by findings from the recent EMA report, “Achieving High-Fidelity Security”. When asked which security analytics were used, 79 percent of respondents cited File Integrity Monitoring, making it the most-valued “Indicator of Compromise” data source, ahead of installed software changes, registry modifications, and monitoring of processes. However, in the same report, 80 percent of organisations receiving 500 or more severe/critical alerts per day currently investigate fewer than one percent of them.

There is plenty of evidence to show that this kind of “alert fatigue” is often to blame for hacks being successful. Target was a prime example of how you can be doing everything right in terms of monitoring and security, but still be breached: it took weeks for Target to discover they had been compromised, even though their subsequent security autopsy showed that their defence systems had detected and alerted on the breach activity much earlier.

The conclusion is that we are now in an era where our tools are sensitive enough to detect and expose the subtlest breach activities, but where most of us still lack the capability to analyse the results and prioritise our investigation resources.

Today, the new frontier for solutions seeking to deliver both detection and analysis is broadly covered by “threat intelligence”, with the vision being an entirely automated solution with sufficient intelligence to distinguish between malware and legitimate application and IT operations activity, such as patching.

The emphasis in the market today is back onto passive monitoring rather than active intervention (blocking processes and removing files is always dogged by false positives, with the risk that innocent bystanders get hurt in the battle). On the horizon, Microsoft is flexing its muscles with a strategy to phase out EMET in favour of the more fit-for-purpose Windows Defender Advanced Threat Protection. The potential scale of the worldwide Windows community makes this a compelling concept, since bigger is always better where subtle trends in low-level malware behaviours are concerned, but the zero-day threat will always mean that someone, somewhere, must be sacrificed first before everyone else benefits.

Which is why, for now, a combined blacklist/whitelist-based analysis is still the most definitive (literally “black and white”) decision analysis for breach detection.

Organisations need to adopt real-time perceptive analysis of events, referencing whitelisted file-reputation data for “known safe” changes, for example Extended Validation (EV) certificate-signed manufacturer patches, which are the overwhelming source of change noise within any IT estate. The greater the built-in knowledge of “safe” files, the more change noise is muted, exposing the remaining minority of genuinely suspicious, unrecognised files. Within this minority there will be legitimate, non-whitelisted files, such as bespoke applications and the occasional left-field niche product, which should be re-classified once assessed. But this “no reputation” classification will also contain the zero-day malware: the millions of Trojans and other APT and ransomware vectors. With today's “poverty of attention”, that is the stuff we really need to know about.
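As a simple illustration of the whitelist side of that analysis, the sketch below classifies a change event against known-good hashes and trusted code-signing publishers. The hashes, signer names, and file paths are hypothetical; a real solution would consult a large file-reputation service rather than a hard-coded dictionary.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative whitelist data; a production system would query a file-reputation
# service holding millions of vendor-published hashes and trusted signers.
KNOWN_GOOD_HASHES = {
    "3f79bb7b435b05321651daefd374cd21...": "example vendor patch file (hypothetical)",
}
TRUSTED_SIGNERS = {
    "Microsoft Corporation",
    "Adobe Inc.",
}

@dataclass
class ChangeEvent:
    path: str
    sha256: str
    signer: Optional[str]   # subject of the code-signing (e.g. EV) certificate, if any

def classify(event: ChangeEvent) -> str:
    """Mute whitelisted change noise; surface only unrecognised files for review."""
    if event.sha256 in KNOWN_GOOD_HASHES:
        return "KNOWN_SAFE"        # exact match against file-reputation data
    if event.signer in TRUSTED_SIGNERS:
        return "KNOWN_SAFE"        # signed by a trusted publisher
    return "UNRECOGNISED"          # bespoke app, niche product... or zero-day malware

# Example triage run over two hypothetical change events
events = [
    ChangeEvent(r"C:\Windows\System32\patch.dll",
                "3f79bb7b435b05321651daefd374cd21...", "Microsoft Corporation"),
    ChangeEvent(r"C:\Users\Public\svch0st.exe", "a1b2c3d4...", None),
]
for e in events:
    print(classify(e), e.path)
```

The value of the approach is in what it removes: the signed patch file is silenced automatically, while the unsigned, unrecognised executable is the one event that actually reaches an analyst.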

Contributed by Mark Kedgley, CTO, New Net Technologies (NNT)

*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media or Haymarket Media.