From omniscient personal assistants to supercomputers that could fight crime, artificial intelligence (AI) was heralded as the technology that would change our lives in more ways than the internet did. While AI is undeniably changing the world, is it making the same strides in tackling cyber-crime? AI is up against an opponent that is constantly evolving and finding new ways to change the rules of engagement.
With the rise of the Internet of Things, and an increasingly large and diverse range of devices and data sets, we need AI to cope with the scale and complexity of the threats we are now facing. It can help to spot things that we would otherwise miss, and almost all of today's Threat Intelligence technologies tout their AI and machine learning capabilities. Such techniques can also offer the automation of response actions, reducing the dependency on human involvement and potentially increasing the speed of defence.
AI has been used in cyber-security for quite some time through behavioural profiling and anomaly detection. These approaches don't have much scope without AI – statistical methods can work to some degree, but AI can pull out more subtle patterns and help to derive new rules that would otherwise be unlikely to be identified. AI allows us to find the ‘needle in the haystack' when performing analysis and correlation across increasingly large and diverse data sets.
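To make the contrast concrete, here is a minimal sketch of the statistical baseline that such techniques improve upon – a simple z-score check that flags values far from the mean. The function name, data, and threshold are illustrative assumptions, not anything from a real product; an ML-based detector would learn per-user behavioural profiles rather than rely on a single global threshold like this.

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean.

    A crude statistical baseline: a single extreme outlier also inflates
    the standard deviation, which is one reason subtler, learned models
    can outperform fixed-threshold rules like this.
    """
    mu = mean(values)
    sigma = stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

# Hypothetical hourly login counts for one user; the last value is a spike.
logins = [12, 15, 11, 14, 13, 12, 16, 240]
print(zscore_anomalies(logins))  # → [240]
```

A method like this catches the obvious spike, but it says nothing about patterns spread across many features or many users – the kind of correlation across large, diverse data sets where the article argues AI earns its keep.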
From the automation perspective, AI can be linked to areas such as incident monitoring. Recognising that speed of response is often critical in limiting impact, it can help to reduce the dependency upon human involvement in deciding how to respond – potentially increasing our speed of defence, as well as learning how to differentiate fake attacks from genuine ones.
However, as highlighted by The Malicious Use of Artificial Intelligence report, AI has the potential to be a double-edged sword, with an increasing chance of being used against security as well as for it. It could also change the way that attacks are targeted and launched, and help to automate, personalise, and disguise attacking activities.
As with all technologies, there are potential negative applications, and it is reasonable to assume that AI will become a feature of future attacks. Given that the security community has already foreseen various scenarios in which AI could be applied, it is practically inconceivable that attackers haven't already noted them as well. At present, the need to use AI for attack is arguably limited by a plenitude of low-hanging fruit in terms of unpatched or misconfigured systems that can be targeted without it.
However, as the technology becomes standard in defence, it is likely to become equally standard in attack – with one side using AI to spot patterns of misuse and malicious activity, while the other uses it to find vulnerabilities and evade detection; taken to the extreme, this may reduce things to a machine-to-machine conflict, in which humans become the observers rather than participants. Cyber-attack and defence could literally become a case of each side pressing their ‘go' buttons, and waiting for a message to tell them who won!
While AI offers a speed of response that human analysts would be unable to match, people bring elements such as intuition and creativity that an AI system may not be able to replicate. The potential for ingenuity and out-of-the-box thinking may be the key to defending against machine-driven attacks – doing unexpected things that the underlying logic of an AI may not anticipate.
Contributed by Professor Steven Furnell, senior member of the IEEE and Professor of Information Systems Security at the University of Plymouth.
*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media UK or Haymarket Media.