AI defence versus IoT threat: and the winner is?

News by Davey Winder

In a research study, 'Closing the IT Security Gap with Automation & AI in the Era of IoT', HPE Aruba and the Ponemon Institute conclude that AI-based security automation tools are needed to halt advanced attacks that originate from IoT devices in the workplace.

Looking specifically at the data from the UK, the researchers found that 68 percent of the security and IT professionals asked thought cyber-security 'gaps' were being created through a lack of adequate controls over user and device activity connected to enterprise IT infrastructure. Just over half, 53 percent, also said that a lack of skilled security staff was the main enabler for these gaps appearing. More than three-quarters of respondents thought their own IoT devices were not secure, and 60 percent said that even the simplest of IoT devices pose a real threat.

Interestingly, in a sector not exactly under-populated by Artificial Intelligence hyperbole, nearly three-quarters (73 percent) of respondents really do think that AI-enabled technology holds the key to making security teams more effective at detecting attacks.

With a diminishing IT perimeter being matched by an equally shrinking pool of skilled security professionals, it should come as no real surprise that many enterprises are struggling to deal with increasingly sophisticated threat actors and attack methodologies. Only 25 percent of those asked could confirm that they were currently using an AI-based security solution, although a further 26 percent did state they were planning on such a deployment within the next year. But is AI really the key to unlocking victory when faced with stealthy threats from IoT devices within their own IT infrastructures?

Larry Ponemon, chairman of the Ponemon Institute, certainly seems to think so. "Despite massive investments in cyber-security programs, our research found most businesses are still unable to stop advanced, targeted attacks, with 45 percent believing they are not realising the full value of their defense arsenal," he says, adding: "Against this backdrop, AI-based security tools, which can automate tasks and free up IT personnel to manage other aspects of a security program, were viewed as critical for helping businesses keep up with increasing threat levels."

SC Media UK asked security professionals whether they agreed with those who see AI technologies as key to improving security teams' ability to fight off IoT-instigated threats.

"Our threat graph data indicates that it takes an intruder an average of one hour and 58 minutes to begin moving laterally to other systems in the network after entry," Zeki Turedi, Technology Strategist at CrowdStrike, told SC. "Emerging best practice advises that when an attack is in progress an organisation has, on average, one minute to detect it, ten minutes to understand it, and one hour to contain it."

AI and machine learning are providing security teams with the capability required to thwart zero-day attacks and advanced persistent threats, reckons Sam Haria, global SOC Manager at Invinsec. "Security professionals now have the ability to combat attacks with the support of AI, without fear of being replaced; a true enhancement to security teams," Haria says.

Deploying these enhanced technologies is critical to protecting enterprise IoT devices by making them self-aware of manipulation, according to Dr Scott Zoldi, Chief Analytics Officer at FICO. "Continuous monitoring of connected systems by self-learning AI at network and end-point levels is a must for real-time detection of compromises," Dr Zoldi concludes.

But what about the report's more specific findings: that AI products will reduce false alerts (68 percent), provide greater investigation efficiencies (60 percent) and increase security teams' effectiveness (63 percent)?

Richard Lush, Head of Cyber Operational Security at CGI UK, told SC Media that the use of expert systems is resulting in a reduction in false positives and an increase in items of interest. "From a managed service provider perspective," Lush says, "we need to ensure that we are only dealing with items worthy of investigation, and the sheer volume of data we are presented with each day means we have to use machine learning and AI to reduce this workload."

Chris Morales, Head of Security Analytics at Vectra, agrees, but warns it isn't all about the perimeter or false alerts. "Perimeter detection is designed to prevent attacks from occurring in the first place, but that is always a law of diminishing returns, even when applying AI to breach prevention," he says, adding: "Even the best tools only achieve 99 percent efficacy, regardless of vendor claims."

That said, we can expect to see elements of AI and ML playing a role across enterprise technologies, as Charl van der Walt, Chief Strategy Officer at SecureData, reminds us. "Microsoft’s March announcement that they will include the ability to run machine learning models natively with hardware acceleration," he notes, "put native ML capabilities in almost every developer’s hands. Other frameworks, tools and services from Microsoft and Google, plus a plethora of courses, libraries, tools and other resources for almost every platform, suggest that ML will soon become just another tool in the developer’s toolbox."

Of course, the industry does have to bear in mind that the bad guys are also onto the benefits of AI. "While AI will certainly help us identify attacks faster and more easily," warns Danny O'Neill, Head of Managed Security EMEA at Rackspace, "organisations should be aware that attackers are using it for their own purposes as well..."
