AI in cyber-security - are we trying to run before we can crawl?
Intelligent and automated systems are currently being touted as the next step in cyber-security to help combat the 'always-on' cyber-criminal, but are they right for us? And are we prepared for them?
Don't worry, we're still a while away from a robot uprising.
While walking around the larger industry shows (those hosting, say, more than 140 vendors), it doesn't take long to realise that artificial intelligence and machine-learning are the current 'it' girls of the cyber-security industry.
In an effort to define what 'artificial intelligence' actually is, we can turn to Luger and Stubblefield, who in their 2004 book on artificial intelligence described an ideal "intelligent" machine as a flexible rational agent that perceives its environment and, based on a complex set of calculations, takes actions that maximise its chance of success at some goal.
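That definition can be sketched, very loosely, as a perceive-then-act loop. The percept, action list and scoring function below are all illustrative assumptions, not anything from the book:

```python
# A toy rational agent: perceive the environment, then choose the action
# whose estimated outcome best advances the goal. All names here
# (percept, expected_success, the alert scenario) are hypothetical.

def rational_agent(percept, actions, expected_success):
    """Pick the action that maximises the estimated chance of success."""
    return max(actions, key=lambda action: expected_success(percept, action))

# Invented scenario: an agent deciding whether to escalate a security alert.
def expected_success(percept, action):
    # Assumption: escalating high-scoring alerts advances the goal,
    # dismissing low-scoring ones does too.
    if action == "escalate":
        return percept["alert_score"]
    return 1.0 - percept["alert_score"]

percept = {"alert_score": 0.8}
choice = rational_agent(percept, ["escalate", "dismiss"], expected_success)
print(choice)  # escalate, since the alert score is above 0.5
```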
As notifications from UBA, SIEM and threat intelligence systems continue to grow, artificially intelligent systems are being touted as the solution to the fatigue experienced by SOC teams, who must work out what to do with each threat and whether it warrants further investigation.
Research from Hexadite, a security automation company, claimed that 37 percent of cyber-security professionals face 10,000 alerts per month, with 52 percent of alerts turning out to be false positives.
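At those volumes, even crude automated triage pays off. A minimal sketch of the idea, assuming a feed of alert records and a hypothetical list of known-benign signatures (both invented for illustration):

```python
# Minimal alert-triage sketch: suppress likely false positives before a
# human ever sees them. The signatures and severity scale are assumptions.

KNOWN_BENIGN = {"scheduled-scan", "dns-health-check"}  # hypothetical signatures

def triage(alerts):
    """Split alerts into those worth human review and likely noise."""
    review, noise = [], []
    for alert in alerts:
        if alert["signature"] in KNOWN_BENIGN or alert["severity"] < 3:
            noise.append(alert)
        else:
            review.append(alert)
    return review, noise

alerts = [
    {"signature": "scheduled-scan", "severity": 1},
    {"signature": "lateral-movement", "severity": 8},
    {"signature": "dns-health-check", "severity": 2},
]
review, noise = triage(alerts)
print(len(review), len(noise))  # 1 2
```

With half of alerts turning out benign, even a filter this simple roughly halves the queue a SOC analyst has to work through.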
SCMagazineUK.com asked David Thompson, LightCyber's senior director of product management, what kind of tasks machine-learning is best suited for. He responded: “Highly repetitive and intricate tasks may be well suited for a machine rather than a human. On the other hand, making a firm conclusion or judgement and then acting upon it is better suited for a human. Machine-learning can do a lot of the busy work and can more readily keep a comprehensive perspective that is both broad and historical. It's hard for humans to keep too many things at the forefront of their minds, and even harder to institutionalise this perspective among multiple people.”
Commenting on the idea that AI is perhaps being used as a misnomer for any intelligent system, Thompson said: “Unfortunately, there is a tremendous amount of hype and confusion around the terms artificial intelligence (or AI), machine-learning and data science when it comes to security. In some circles, AI is the broader and more theoretical field, and machine learning is the more specific application of these technology principles. A common refrain in the industry is AI is defined as whatever outcome machine-learning is not (yet) capable of achieving, which is an ever-receding target.”
Perhaps owing to the hype, artificial intelligence and machine-learning are being taken very seriously: recent research by security company Ipswitch shows that investment in intelligent business systems and automation is well underway across the globe.
Top current application deployment areas cited by respondents include digital customer engagement systems (55 percent), process automation and workflow systems (52 percent), and automated risk monitoring and management solutions (50 percent).
Gareth Lauder, director of Cyberseer, which describes itself as “specialists in advanced threat detection and cyber-incident resolution”, told SC: “The interest in machine-learning is coming from customers, as CISOs currently feel the tools they have are somewhat inadequate and not capturing everything.”
In the search for the ‘known unknown', Lauder said: “The increase in logs and alerts being created has meant that customers both large and small are looking to try and catch the full range of abnormal behaviour, from the malicious insider to the infiltration from a third party supplier. Companies simply don't have the capacity or manpower to catch all of these manually from in between all the data.”
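In practice, catching abnormal behaviour amid that data usually starts with baselining what normal looks like and flagging statistical outliers. A hedged sketch of the idea, with invented daily log counts; real products use far richer models than this:

```python
# Toy behavioural-baseline check: flag activity that sits far outside a
# user's historical norm. The figures and threshold are illustrative only.
from statistics import mean, stdev

def is_abnormal(history, todays_count, threshold=3.0):
    """Flag a count more than `threshold` standard deviations above baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return todays_count != mu
    return (todays_count - mu) / sigma > threshold

# A user who normally touches around 50 files a day suddenly touches 400.
daily_file_accesses = [48, 52, 50, 47, 53, 49, 51]
print(is_abnormal(daily_file_accesses, 400))  # True
print(is_abnormal(daily_file_accesses, 55))   # False
```

The appeal to the CISOs Lauder describes is that this kind of check runs across every user and every log source at once, a breadth no manual review team can match.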