Artificial intelligence has the potential to revolutionise cyber-security but we must understand its limits if we are to make it work, according to Dr Jamie Graves, CEO of ZoneFox.
Graves told an audience at IPExpo 2018 to be wary of the artificial-intelligence ‘hype cycle’ when looking at how to deploy it in cyber-security operations.
He said it was vital to understand the strengths and limitations of the technology to avoid inappropriate use of AI. The real strength of AI, he said, lies in parsing huge datasets or constantly monitoring certain processes, then presenting the results to a human operator who makes the final determination on their meaning and significance.
"The technology is fantastic and it has some really good applications in certain specific areas but I think the demise of the human has been trumpeted a little too much," he told SC Magazine UK (see video).
Code analysis is a "really good application of this particular technology because as humans we are very bad at doing detailed, complex work like that so offloading onto machines will hopefully reduce the number of zero days and free up a lot of time for patching and the various other issues that come with it," he said.
While AI has made great strides in mastering narrow areas of expertise – such as driving and playing chess and Go – other applications of AI, such as visual-recognition algorithms, can be easily fooled by manipulating a few pixels.
He warned that proprietary AI systems left users in the dark as to how decisions were made, which could lead to unintended consequences.
He identified the areas where AI could be used most effectively:
- Replacing humans where errors are common and problematic
- Replacing humans in fast-moving domains such as malware detection and response
- Augmenting humans through superior pattern detection, learning common responses and, finally, offering (without implementing) solutions to problems
Taking data from across an organisation and analysing it with AI can help you determine good from bad, he said. "If you got humans to do that, you’d need millions of them sat in front of screens, like monkeys in front of typewriters – it’s really difficult for humans to do and we can help condense that problem down," he told us.
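The "condensing" Graves describes can be illustrated with a minimal sketch: reduce a large volume of per-user activity to a short list of outliers for a human analyst to review. This is an illustrative example, not ZoneFox's method; the data and threshold are invented, and a median-based score is used so the outliers being hunted do not skew the baseline.

```python
from statistics import median

def flag_anomalies(event_counts, threshold=3.5):
    """Flag users whose activity deviates sharply from the organisation's
    norm, using a median-based score that is robust to the very outliers
    we are trying to catch."""
    counts = list(event_counts.values())
    med = median(counts)
    mad = median(abs(n - med) for n in counts)  # median absolute deviation
    if mad == 0:
        return []  # no variation to measure against
    # 0.6745 rescales the MAD score so it is comparable to a z-score
    return sorted(user for user, n in event_counts.items()
                  if 0.6745 * abs(n - med) / mad > threshold)

# Hypothetical daily file-access counts: one account stands out.
activity = {"alice": 120, "bob": 130, "carol": 125, "dave": 118, "mallory": 900}
print(flag_anomalies(activity))  # → ['mallory']
```

A human would struggle to spot this pattern across millions of events; the machine condenses it to one name for a person to judge.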
He said that ultimately augmented humans would be more powerful thanks to AI, but cautioned against the idea that machines could replace humans any time soon.
In one case his company was involved in, AI helped them analyse a vast amount of security data at an engineering company and identify an employee who was attempting to steal engineering, sales and financial data prior to leaving the company and joining a competitor.
This was a prime example of how AI could be applied to the insider threat kill chain. He demonstrated how AI could be deployed at various stages in the chain and programmed either to detect or automatically disrupt unwanted activity at key stages.
Each step can involve a human or be automated, and how far you go in the automation process depends on your threat appetite, he said.
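The detect-versus-disrupt choice at each stage can be sketched as a simple policy map. The stage names and the notion of "risk appetite" as a tunable parameter are illustrative assumptions, not ZoneFox's actual kill-chain model: a lower appetite automates disruption earlier in the chain, while a higher one keeps more stages under human judgement.

```python
from enum import Enum

class Action(Enum):
    ALERT = "alert human analyst"       # detect and hand off to a person
    BLOCK = "automatically disrupt"     # act without human intervention

# Hypothetical insider-threat kill-chain stages, ordered early to late.
KILL_CHAIN = ["reconnaissance", "collection", "staging", "exfiltration"]

def build_policy(risk_appetite):
    """Map each stage to alert-only or auto-block.

    risk_appetite: how many early stages are left to human judgement;
    every later stage is disrupted automatically.
    """
    return {stage: (Action.ALERT if i < risk_appetite else Action.BLOCK)
            for i, stage in enumerate(KILL_CHAIN)}

policy = build_policy(risk_appetite=2)
print(policy["collection"].value)    # → alert human analyst
print(policy["exfiltration"].value)  # → automatically disrupt
```

An organisation comfortable with more automation would lower `risk_appetite`, pushing automatic disruption toward the start of the chain.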