Is AI hype weakening trust in cyber-security vendors and enterprise security postures?

News by Davey Winder


Some 75 percent of IT decision makers questioned reckon that artificial intelligence (AI) is a silver bullet when it comes to dealing with the challenges of cyber-security, according to newly published research from ESET. The same report also suggests that only 53 percent of organisations actually understand the difference between AI and machine learning (ML).

Things appear to be worse in the US, where 82 percent look to AI as a security panacea, compared to 67 percent in the UK and 66 percent in Germany. Overall, though, 79 percent of the organisations questioned across Europe and the US thought AI would help them detect and respond to threats more quickly, and 77 percent thought it would be the answer to the security skills shortage.

When it comes to ML specifically, the research found that 89 percent of organisations in Germany, 87 percent in the US and 78 percent in the UK were already using the technology in their endpoint protection products.

SC Media wondered whether the industry itself agrees that AI is being over-hyped by vendors and media alike, with little real effort made to distinguish between current levels of ML implementation and the more fanciful definition of AI promoted through books, movies and the media. "I don’t think anybody expects fully conscious machines to be running our security operations for a while," says Mike Bursell, Chief Security Architect at Red Hat, "but there are applications where machine learning provides abilities that are either too time-consuming for humans or that involve deep and extended pattern-matching that doesn’t come naturally to most of us."

Andy Kays, CTO at Redscan, told SC Media that he thinks the problem is that "there is a disconnect between the current capabilities of AI and people’s expectations." So, while there are many benefits to the latest technologies, these systems should not be expected to work out of the box. "Just because they don’t understand how an algorithm makes a decision," Kays concludes, "this does not necessarily make it AI." While agreeing that the term AI is over-hyped, Pascal Geenens, Radware EMEA security evangelist, goes further, insisting: "I am yet to see a security application or system that can intelligently adapt and evolve to different situations and not just continuously perform a single, repetitive task."

Sam Curry, Cybereason CSO, says that claims of machine learning, and especially of AI, "where almost all with a few exceptions are false", can be "a massive help to vendors in getting funding, driving FUD and seeming cutting edge." And Alesis Novik, CTO and co-founder of AimBrain, thinks we are creating a hype-bubble that threatens to put style over substance. "In research circles, it’s accepted that AI and ML are connected. It's more of a cool dinner table chat than a scientific discussion," Novik insists, adding that "a much more valuable discussion is what cyber-security vendors are able to deliver, rather than how they are doing it."

So, given all of this, what can businesses do to better understand, and vendors to better explain, where we really are regarding 'AI' and its role in helping to secure the enterprise?

"Vendors should focus on the education of the target audiences and help with de-hyping the discussions around AI/ML" Ondrej Kubovic, security awareness specialist at ESET, said in conversation with SC Media UK. "There is a lot of talk about using artificial intelligence in security products or services" he continues "but only a few vendors add that by this they really mean machine learning."

Not that ML is a silver bullet either, according to Prof. Kevin Curran, senior member of the IEEE and professor of cyber-security at Ulster University. "It's important that vendors explain the limitations, for the most part due to computing power and the actual sophistication of the machine learning algorithms," Prof. Curran says. "A computer can only solve problems it is programmed to solve; it does not have any generalised analytical ability, and for many that is the limitation which ultimately hinders machine learning."

David Dufour, VP of Engineering & Cybersecurity at Webroot, agrees that vendors must clearly outline the current capabilities of available technology and have a "frank and transparent conversation with customers, not only about how it works but also why it’s needed in a constantly evolving threat landscape."

It could, however, take some time for businesses to truly understand and define use cases for 'AI' as far as security posture is concerned. Not least because, as Prakasha Mandagaru Ramachandra, AVP of technology and innovation at Aricent, says, they "find it difficult to define ROI on cyber-security investments and there is certainly vendor fatigue..."
