The cyber-threat has grown dramatically in recent years, and arguably with a step change over the last 12 months. We have seen extreme examples of random ‘collateral damage’ to a wide range of organisations, caused by apparently state-sponsored attacks (e.g. WannaCry or NotPetya) or by high-grade hacking tools and sophisticated viruses becoming readily accessible to hackers and criminals on the internet. These viruses are now often polymorphic – slightly different every time they are deployed – making traditional signature-based approaches to anti-virus protection difficult to sustain.
Using the ‘six degrees of separation’ analogy, a virus can connect you to anyone in the world through acquaintances of acquaintances – viruses go where they want, not necessarily where intended. These days you don't need to be a target to be infected, just connected to someone who is connected to someone who is. And beyond those directly hit by an attack, a far greater number are often affected by the need to take remedial action in response to the incident.
We now live in a world of constantly increasing connectivity – the Internet of Things, the integration of operational and information technologies, and cloud-based services – all of which expand the attack surface. Indeed, there are even ‘Cyber-Attack as a Service’ offerings on the dark web.
We have seen attack capabilities and vectors grow in sophistication. We have had to accept that perimeter defences – keeping attackers out of your infrastructure – while still a key part of overall protection, are porous. That makes it necessary to segregate networks better and to deploy controls that contain an infection in as small an area as possible, preventing it from spreading and from destroying, encrypting or exfiltrating data. Organisations have an obligation to protect the data, particularly personal data, entrusted to them by citizens and customers. With the General Data Protection Regulation coming into force in May next year, failure to meet that obligation will carry substantially increased penalties.
To manage this situation, we need to know not only what is happening within our infrastructure and at its boundaries, but also out in the wilds of the dark web. It is therefore necessary to monitor the plethora of data our computing activity generates, and to understand how the risk is changing: who may be motivated to attack us, those with whom we have a digital connection, or the equipment and software we rely on. That means collecting an enormous volume of log data and monitoring-tool output, and assessing what insight threat intelligence gives us. As the complexity of networks and threats increases, the sheer volume of that data is mushrooming.
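As a toy illustration of the volume-reduction step that this kind of monitoring implies, the sketch below counts security events per source and surfaces only the noisy ones. All field names, event types and thresholds here are invented for illustration and are not drawn from any particular product or standard:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical event record; the fields are illustrative assumptions.
@dataclass
class LogEvent:
    source_ip: str
    action: str  # e.g. "login_failed", "conn_blocked"

def flag_noisy_sources(events, threshold=3):
    """Count events per source and flag those at or above a simple threshold.

    A toy stand-in for the aggregation a monitoring pipeline performs
    before any alert reaches an analyst.
    """
    counts = Counter(e.source_ip for e in events)
    return {ip: n for ip, n in counts.items() if n >= threshold}

events = [
    LogEvent("10.0.0.5", "login_failed"),
    LogEvent("10.0.0.5", "login_failed"),
    LogEvent("10.0.0.5", "login_failed"),
    LogEvent("192.168.1.9", "conn_blocked"),
]
print(flag_noisy_sources(events))  # {'10.0.0.5': 3}
```

In practice this kind of reduction happens at vastly larger scale, but the principle is the same: turn raw event volume into a shortlist worth a human's attention.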
In parallel with these changes in the cyber-security challenge, Big Data analytics capabilities have been increasingly deployed in the scientific and commercial worlds to turn large or complex data sets into business value. Advances in the sophistication of analytic algorithms have been matched by vastly increased computing capability: exascale computing that achieves a billion billion calculations a second, and, in the foreseeable future, quantum computers powerful enough to render much pre-existing encrypted data readable, requiring even more complex ‘post-quantum’ encryption capabilities to be developed.
In understanding what is happening in our networks, we have traditionally relied on reactive security capabilities, which tell us what has happened. Layered on top of that, we have made significant strides in recent years in developing predictive capabilities, which suggest what is likely to happen next. But to capitalise on the quantity, quality and diversity of data now available from system monitoring and threat intelligence, we can go further and achieve prescriptive security capability. This brings together enhanced threat intelligence, advanced analytics, artificial intelligence and machine learning: either intervening autonomously, on the basis of what it has learned, to prevent an adverse impact, or flagging issues and providing valuable insight to assist human decision-making in as near real time as possible.
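A minimal sketch of that triage decision – acting autonomously on high-confidence detections, escalating uncertain ones to a human – might look like the following. The thresholds and the three-way split are assumptions made for illustration, not a description of any particular platform:

```python
def prescribe(alert_score, auto_threshold=0.9, review_threshold=0.5):
    """Route an alert by confidence score (0.0 to 1.0).

    Illustrative policy: high-confidence alerts trigger autonomous
    containment; mid-range alerts are escalated with context for a
    human decision; the rest are logged for later threat hunting.
    """
    if alert_score >= auto_threshold:
        return "contain"    # e.g. isolate the affected host automatically
    if alert_score >= review_threshold:
        return "escalate"   # flag for analyst review with supporting insight
    return "log"            # retain for retrospective analysis

print(prescribe(0.95))  # contain
print(prescribe(0.60))  # escalate
print(prescribe(0.10))  # log
```

The point of the design is the division of labour the article describes: the machine handles what it has learned to handle, and humans spend their time only on the cases where judgement adds value.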
Prescriptive Security Operations Centres will be the next generation of cyber-security infrastructure. They will enable organisations to thrive in the digital economy while protecting their assets, from valuable business data to citizen or customer personal data. Automating the routine handling of suspected or real incidents frees analyst time for ‘threat hunting’ – looking for potential vulnerabilities and risks – helping the machines learn to operate more autonomously, and making real-time use of the decision support now available to them.
To misquote Henry Ford, prescriptive security is akin to the invention of the motor car rather than training faster horses.
Contributed by Sandy Forrest, client executive, Cyber Security Capability, Atos UK&I and member of the Mayor of London's Cyber Security Advisory Panel.
*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media or Haymarket Media.