With the average organisation acutely aware of the harsh, everyday realities of cyber-crime, it is becoming apparent that it takes more than human intuition to successfully fight against the sophisticated methods of cyber-criminals.
But will new advancements in machine learning and Artificial Intelligence (AI) eventually replace the need for human involvement in identifying advanced threats? Will autonomous machines ever take over and fight threats entirely on their own?
AI has huge potential to accelerate the detection of threats, to discover problems and attacks that it was not explicitly programmed to look for, and to find patterns and anomalies. Machines can now even be programmed to detect the so-called “unknown unknowns”, which enables defenders to stay one step ahead. However, when it comes to identifying real threats, context is everything, and this is where we need human input: to read the underlying signals at play, such as intent, reason, and motivation. It is this interplay of human reasoning and the machine's power to process and correlate data that we should harness to best effect to bring to light previously unknown threats; those cyber-threats and risks that we can't even anticipate.
Into the Unknown
Managing security is, in essence, about managing risk. We can't stop every threat, but we can use all the information and intelligence that is available to keep risks at an acceptable level. To do this, the key questions that we're asking are:
- What's happening in the computer networks?
- Is there something unusual?
- If there is something unusual, what is it, and why is it unusual?
- Is there a risk? If there is, should I do something about it, and if so, what?
In security, risk management is about taking raw data, turning it into actionable insight, and then acting on it. We only know what the threat is when we have information. Sometimes that information is explicit – specific action patterns known to potentially carry risk (multiple failed logins), Indicators of Compromise (e.g. a ransom notice) or known malware data patterns.
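To illustrate how explicit an indicator like "multiple failed logins" can be, here is a minimal sketch in Python. The log format, the threshold of three failures and the two-minute window are all assumptions for illustration, not a real product's rules:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical authentication log: (timestamp, username, login succeeded?)
events = [
    (datetime(2023, 1, 1, 9, 0, 0), "alice", False),
    (datetime(2023, 1, 1, 9, 0, 20), "alice", False),
    (datetime(2023, 1, 1, 9, 0, 40), "alice", False),
    (datetime(2023, 1, 1, 9, 5, 0), "bob", True),
]

def flag_failed_logins(events, threshold=3, window=timedelta(minutes=2)):
    """Flag users with >= threshold failed logins inside a sliding time window."""
    failures = defaultdict(list)
    flagged = set()
    for ts, user, success in sorted(events):
        if success:
            continue
        failures[user].append(ts)
        # Keep only failures within the window ending at this event
        failures[user] = [t for t in failures[user] if ts - t <= window]
        if len(failures[user]) >= threshold:
            flagged.add(user)
    return flagged

print(flag_failed_logins(events))  # {'alice'}
```

A rule like this is trivial for a machine to apply at scale; the hard part, as the rest of this piece argues, is the risk we have no rule for.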
However, there can be blind spots: either we don't have the data, or we have the data but don't know how to make sense of it. Perhaps we don't have the tools to analyse it or to make meaningful correlations. Increasingly, a large part of the challenge in fighting cyber-crime is addressing things we don't know: changes we couldn't anticipate, threats we have never considered or foreseen, and even risks that we don't think apply to us.
Here, we're increasingly in the realm of “unknowns”. How do we begin to manage risk when we don't know what the threat is?
Adding to this challenge, executives can suffer from information overload. The attack surface has expanded: there are more devices which need to be protected, more data held on-premises and in the cloud, and more security tools and appliances gathering information. We can be stymied by too much data, which means that executives can't draw any meaningful conclusions about where the real risks lie.
The Human Factor
Now, advances in machine learning are helping us to get better at identifying these unknown risks by processing large volumes of data at speed, detecting advanced threats and using algorithms to make predictions about behaviour. These can be applied to identify patterns that we haven't seen yet, deviations from the norm, or changes in the frequency, volume or pattern of behaviour.
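The simplest form of "deviation from the norm" detection can be sketched with a statistical baseline. The traffic figures, the z-score approach and the threshold of three standard deviations below are illustrative assumptions, far cruder than the machine learning models the article describes:

```python
import statistics

# Hypothetical daily outbound traffic volumes (GB) for one host
baseline = [2.1, 1.9, 2.3, 2.0, 2.2, 1.8, 2.4, 2.1, 2.0, 2.2]
today = 9.7  # sudden spike - e.g. possible data exfiltration

def is_anomalous(history, value, z_threshold=3.0):
    """Flag a value whose z-score against the historical baseline
    exceeds the threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return (value - mean) / stdev > z_threshold

print(is_anomalous(baseline, today))  # True
```

Even here the machine only reports that today is unusual; deciding whether the spike is a backup job or an exfiltration attempt still needs the human context discussed next.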
Humans naturally bring different skills: we have empathy and understanding, and we can attribute intent and anticipate effects in ways that machines simply can't. Whilst machines are better suited to routine cognitive tasks, the human brain is better at unstructured problem solving.
This is where I see the greatest difference between computers and humans. Computers can pattern-match and find reason only if previously explicitly told to do so. Skilled humans may be able to infer reason or intent even if they have never before seen an exact pattern pointing to malicious behaviour. As Professor Lee Hadlington, cyber psychologist at De Montfort University comments:
“We're always going to need some element of the human within any machine-based system, particularly when it comes to making a final decision. The key issue is that humans are very uncomfortable with machines taking over critical aspects of decision making. So it isn't necessarily that humans are better than machines, but more that humans still don't trust machines.”
So my advice would be to work to the strengths of, and understand the limitations of, both humans and machines. There are critical decisions that must still be made by individuals when judging security incidents, so invest in the skills of staff so that they can interpret data. Use human creativity to improve systems and processes, and to evaluate and apply critical thinking to data that has been processed by machines. Invest in the best possible security tools and automate what needs to be automated.
Ultimately, machines will not replace humans in cyber-security, but rather will allow us to focus our resources on activities that are truly adding value to the fight against cyber-crime.
Contributed by Sándor Bálint, security lead for applied data science, Balabit
*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media or Haymarket Media.