Artificial intelligence (AI) is everywhere today. Watch a film on Amazon, stream music on Spotify or chat with a bot on a travel website and you have interacted with AI. It's no wonder that the concept is buzzing in the networking and security industries too.
Microsoft's acquisition last year of the AI cyber-security firm Hexadite (reportedly for US$ 100 million/£74 million) only served to turn up the heat further. With cyber-crime going up and the relative number of suitably skilled cyber-security professionals going down, could AI be the answer to filling the void? One recent report estimated that by 2021 there will be 3.5 million unfilled cyber-security positions worldwide. In many industries AI is seen as a serious threat to jobs, but perhaps the security industry should be welcoming it with open arms.
However, here I would like to strike a note of caution. Often what is talked about as AI is, in reality, simply a knowledge-based system, a collection of algorithms or an application of machine learning. The point of AI in this context is typically to drive operational efficiencies, reducing much of the ‘heavy lifting' that human operators would otherwise be expected to do. It's all about automation and crunching data as quickly as possible, rather than some higher knowledge that humans are yet to attain.
Yet let's not underestimate the value of being able to leave technology to do the spadework. At the moment many cyber-security operations are merely reactive, relying on threat patterns based on previous behaviour. If these algorithms can be used to predict and screen threats, and then take corrective action, so much the better.
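To make the reactive-versus-predictive distinction concrete, here is a toy sketch, not any vendor's product: a signature lookup (reactive, catches only what has been seen before) alongside a simple statistical screen that flags traffic far outside a learned baseline (closer to the predictive screening described above). All names, hashes and thresholds are illustrative assumptions.

```python
from statistics import mean, stdev

KNOWN_BAD_HASHES = {"e99a18c4", "ab56b4d9"}  # hypothetical malware signatures


def signature_match(file_hash: str) -> bool:
    """Reactive detection: flag only threats we have seen before."""
    return file_hash in KNOWN_BAD_HASHES


def is_anomalous(history: list, latest: float, k: float = 3.0) -> bool:
    """Predictive screen: flag activity far outside the learned baseline."""
    if len(history) < 2:
        return False  # not enough data to form a baseline
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) > k * sigma


# A human analyst still decides what to do with the flags --
# decision support, not decision-maker.
requests_per_min = [102, 98, 110, 105, 99, 101, 97, 104]
print(signature_match("deadbeef"))            # unseen hash: no signature hit
print(is_anomalous(requests_per_min, 480.0))  # sudden spike: flagged
```

The signature check misses anything novel; the anomaly screen catches novelty but produces flags, not verdicts, which is precisely why a human still belongs in the loop.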
But unless a company can buy the equivalent of IBM's Watson, its algorithms will be limited. I am not trying to downplay the skills of those who develop this technology, but I am left wondering whether it's as radical as the hype surrounding it suggests. Quantum computing is revolutionary; machine learning is evolutionary.
Unfortunately, the data in itself will not enable users to take decisions about networking and/or security. I believe that those in our industry getting carried away by the promise of AI should refer back to a well-known information sciences model, known as DIKW or ‘Data, Information, Knowledge, Wisdom'.
Data needs context, meaning and insight to move it up to the level of wisdom or understanding. Once that level is reached and businesses grasp the purpose of their data, they will have the insight they need to reach a decision. AI should always be seen as decision support rather than decision-maker. The latter role is best left to the human mind.
Again and again it's been shown that cyber-criminals have calculating and devious minds. According to TechCrunch: “They already know very well how machines and machine learning works and how to circumvent their capabilities. Many attacks are carried out through minuscule and inconspicuous steps, often concealed in the guise of legitimate requests and commands.”
Besides, if a new piece of malware is created every 4.2 seconds then it's reasonable to assume that some cyber-criminal is designing a way to bring down AI and machine learning as I write. Like a double agent, a seemingly legitimate system could be working for the enemy without being detected for years.
In other words, use automation and machine learning to bridge the skills gap by doing all the tedious routine sifting and searching – but it's not a magic fix. Remember, all you humans out there: we still need you.
Contributed by Dave Nicholson, technical sales consultant, Axial Systems.
*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media UK or Haymarket Media.