Zeroing in on zero-day vulnerabilities with looping

Zero-day vulnerabilities are a fact of life in cyber-security, which is why looping is so essential, says Darren Anstee.


A zero-day vulnerability is an exploitable flaw in a product or application that is either unknown to the vendor or for which no patch is yet available. Threats targeting zero-day vulnerabilities are a key concern because security solutions often fail to detect them: they do not know what to look for. This allows an attacker to gain an undetected foothold within a network, from which they can sometimes steal information over extended periods whilst remaining hidden.

Most organisations have focused their security on preventing threats from entering their networks, so security architectures tend to involve layered solutions at the network perimeter. Once a threat has made it through these defences, many organisations have very limited threat detection capability. However, security strategies are changing: the ease with which attackers can build new malware variants, obfuscate known threats and manipulate network traffic to bypass security solutions is driving organisations to focus on detecting threats already inside their networks much more quickly.

Why do we need to loop?

Traditionally, security solutions compare traffic or network telemetry to current threat intelligence in near real-time, allowing them to detect threats that have been seen and analysed elsewhere. Some solutions extend these capabilities with heuristic, behavioural and sandboxing mechanisms that identify suspicious behaviours or traffic patterns, to try to prevent zero-day exploits and new malware variants getting through. What is common to all of these technologies is that they look at traffic (or telemetry) once and, if nothing is identified given current information, simply move on.
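The one-pass model described above can be sketched as follows. This is a minimal illustration, not any vendor's implementation; the record format and the indicator feed are hypothetical simplifications of real traffic and threat intelligence data.

```python
def inspect_once(traffic_stream, current_intel):
    """Conventional one-pass inspection: each record is compared to the
    threat intelligence available *right now*, then discarded. If the
    matching indicator is only published later, the evidence is gone."""
    alerts = []
    for record in traffic_stream:
        if record["dst_ip"] in current_intel:
            alerts.append(record)
        # The record is not retained, so a future intelligence update
        # cannot re-check it.
    return alerts

# Hypothetical traffic records and a current indicator set.
stream = [
    {"dst_ip": "198.51.100.9"},   # looks benign at inspection time
    {"dst_ip": "192.0.2.44"},     # matches a known-bad indicator
]
alerts = inspect_once(stream, current_intel={"192.0.2.44"})
```

Only the second record is flagged; if "198.51.100.9" were later identified as malicious, this model would never revisit it.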

As we all know from the media, or from our own experience, threats are getting through these defences, and to an extent organisations should now expect this. As mentioned above, this is becoming a hot topic, with organisations now looking at how they can more quickly detect threats that have made it inside their networks. Data from this year's Verizon Data Breach Investigations Report shows a continuing trend: a high proportion of assets can be compromised in days (or less), but a relatively low proportion of breaches are detected in days; in fact the time to detect can be much longer. Looping aims to reduce the time to detect a threat that is already inside an organisation.

Highest fidelity of data

What we are doing with looping is paralleled in other walks of life. In athletics, for example, samples taken from athletes are now routinely stored for extended periods so that they can be tested for new types of doping as they come to light and tests become available. The idea is to catch cheats, even if the offence occurred in the past.

Looping is a very simple concept. Threat intelligence evolves over time as new data is gathered, vulnerabilities are identified, and new threats and threat variants are analysed. If we could retrospectively and repeatedly apply new threat intelligence to historical network traffic, we should be able to detect threats that made it through our perimeter defences undetected in the past. One barrier is that we need the highest fidelity of base information for this to work: historical packet captures. If we have these, plus up-to-date threat intelligence feeds, then we just need a mechanism for quickly, easily and repeatedly applying one to the other, along with a way of visualising and investigating any results.
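The looping idea can be sketched in a few lines. This is a toy model under stated assumptions: stored traffic is reduced to simplified flow records (real looping would work over full packet captures), and threat intelligence is reduced to a set of indicator IP addresses; the names `FlowRecord` and `loop_detect` are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRecord:
    """A simplified stand-in for a stored packet capture record."""
    timestamp: str
    src_ip: str
    dst_ip: str

def loop_detect(history, indicators):
    """Re-scan retained historical traffic against the *current*
    indicator set. Matches surface even when the indicator was
    unknown at the time the traffic was captured."""
    return [r for r in history if r.dst_ip in indicators]

# Historical traffic, captured and retained before the indicator was known.
history = [
    FlowRecord("2015-03-01T10:00Z", "10.0.0.5", "203.0.113.7"),
    FlowRecord("2015-03-02T11:30Z", "10.0.0.8", "198.51.100.2"),
]

# First pass: the intelligence feed has no matching indicator, so
# nothing is found.
assert loop_detect(history, {"192.0.2.1"}) == []

# Later, the feed is updated with a newly analysed command-and-control
# address; looping over the same history now surfaces the old compromise.
updated_intel = {"192.0.2.1", "203.0.113.7"}
hits = loop_detect(history, updated_intel)
```

The key design point is that the history is retained and re-scanned on every intelligence update, which is exactly what a one-pass inspection model cannot do.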

One thing is certain: as network and service architectures become more complex and more porous, and attackers continue to overcome our defences, it is increasingly important for organisations to identify any breach as quickly as possible. The costs of losing intellectual property or customer information can be significant, and minimising the time attackers have to leverage a foothold within an organisation is key. Technologies that give incident response teams the ability to quickly analyse and identify both current and historic security breaches are increasingly necessary to combat today's threats.

Contributed by Darren Anstee, director of solutions architects at Arbor Networks