While waiting for true AI, humans still needed on the cybersec frontline

Artificial Intelligence and machine learning are fascinating, nascent technologies which hold exciting possibilities for many industries, cyber-security amongst them. It's easy to see why the industry is so keen to harness AI's potential. As hackers and their tools become more sophisticated, AI is widely viewed as critical to combatting the next generation of cyber-criminals.

With the cyber-security skills shortage at tipping point, AI is also being heralded as a means of filling cyber-security jobs and replacing cyber-security professionals. The narrative is that we need machines to do the tasks that humans are unwilling or unable to perform.

With so much attention around AI and such high expectations of its potential, it's important to take stock of where expectations and reality align. Despite advancements in AI technology, it is too early to disregard the most intelligent, creative and effective means of tackling cyber-crime: human ingenuity.

Use of AI in security

There's no denying the fact that computers are much better at performing certain security tasks, such as log collection and monitoring. Humans lack the ability to rapidly sift through large volumes of data to identify patterns of behaviour, but machines have no such issue and are perfect for these kinds of processes. 

Perhaps the biggest current failing of AI, however, is that computers still need, for the most part, to be taught what is normal and what is not. A typical server will generate many different types of log events, some of which are seen only occasionally, and an infrequent network event is not always a firm indicator of attack.

Humans are also important for reviewing alerts generated by machines, since AI can struggle to confirm whether events are genuinely malicious. AI systems are also susceptible to errors. Data poisoning attacks, where attackers introduce false learning data, are just one example of how AI can be duped into making incorrect decisions.

Wide-ranging definitions of AI, and of what actually constitutes intelligence, are also an issue. AI is often misunderstood or misused, and if an organisation relies on AI solutions which are not as advanced as anticipated, it may weaken rather than enhance its security posture. For instance, if you showed Siri or Alexa to someone fifty years ago, they might classify it as AI.

However, we don't classify it that way in 2018, because voice recognition and personal computers have existed for decades. As rulesets become more complex and allow computers to accomplish more impressive feats, it will be increasingly easy for people to confuse sophisticated machine learning algorithms with AI. We might not understand how an algorithm makes a decision, but this does not necessarily make it AI.

The need for a human touch

We may be inefficient at sifting through large amounts of data, but humans understand humans better. Only a human can review a security alert and assess it instinctively, rather than by rules alone.

A VPN connection from China might be valid during the week that an organisation's sales team is in the country for a trade show. However, a similar request the following week may suggest that an employee's credentials have been compromised. A computer may struggle to identify the difference between the two instances, but a human wouldn't.

AI is quicker, and in many cases, better at handling scenarios where there are obvious indicators of compromise. In situations that are less clear cut, humans are needed to contribute additional insight to determine whether alerts are actually genuine and judge how to respond accordingly.
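The VPN scenario above can be sketched as a toy comparison: a purely rule-based check flags every login from an unusual country, while the contextual judgement a human analyst applies depends on knowledge (such as an approved business trip) that the rule does not encode. This is a minimal illustration only; the function names, dates, and travel data are hypothetical, not from any real product.

```python
from datetime import date

# Hypothetical allow-list of approved travel windows: country -> (start, end).
# A human analyst knows the sales team is in China this week for a trade
# show; a naive rule-based system does not.
APPROVED_TRAVEL = {
    "CN": (date(2018, 6, 4), date(2018, 6, 8)),  # trade-show week
}

def naive_rule(country: str, home_country: str = "GB") -> bool:
    """Flag any VPN login originating outside the home country."""
    return country != home_country

def context_aware(country: str, login_date: date,
                  home_country: str = "GB") -> bool:
    """Flag foreign logins only when no approved travel explains them."""
    if country == home_country:
        return False
    window = APPROVED_TRAVEL.get(country)
    if window and window[0] <= login_date <= window[1]:
        return False  # expected: staff are travelling
    return True  # unexplained foreign login -> escalate to an analyst

# During the trade show, the naive rule raises a false alarm while the
# context-aware check stays quiet; a week later, both rightly flag it.
print(naive_rule("CN"))                       # True
print(context_aware("CN", date(2018, 6, 5)))  # False
print(context_aware("CN", date(2018, 6, 12))) # True
```

Even the "context-aware" version is still just a richer ruleset; the article's point is that the context itself (who is travelling, and why) is something humans supply.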

Humans and AI: a multi-layered approach to threat detection

While we wait for the technology to mature, businesses need to view AI as a tool to be harnessed by skilled cyber-security professionals, not used in place of them. Any organisation that undertakes proactive security monitoring should have technology as the first line of defence, supported by experienced cyber-security experts to help maximise its benefits, reduce false alarms, and coordinate swift incident response.

Rather than looking to technologies like AI to solve all of our cyber-security problems, businesses need to prioritise staffing and training. We need more intelligent machines too, but a larger, better trained, and more diverse pool of security talent must come first. 

Contributed by Andy Kays, CTO at threat detection and response specialist, Redscan  

*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media UK or Haymarket Media.