Can AI smarts replace humans in the Security Operations Centre?

News by Davey Winder

Newly published research suggests 27 percent of enterprise security teams see more than one million alerts per day, and more than half of IT professionals admit they struggle to separate critical incidents from false positives. Is AI the Saint Bernard that can rescue those buried under this security alert avalanche?

Imperva researchers surveyed IT professionals during RSA 2018 to determine how security alert overload is affecting enterprise security teams. The results were released this week, and they make for sobering reading.

The headline figures include the 27 percent of respondents at the thick end of a million threat alerts each day, while more than half (55 percent) see in excess of 10,000. It's hardly surprising, therefore, that 53 percent also admitted their security operations centre (SOC) struggles to separate critical security incidents from harmless noise.

Equally unsurprising, although more than a little alarming, this influx of alerts led to certain categories being ignored completely by 30 percent of those surveyed, and 56 percent admitted to ignoring alerts based on previous false-positive experiences. Only 10 percent said they hired more SOC staff to tackle the problem, with 57 percent preferring to 'tune' policy to reduce alert volume.

"By harvesting the power of AI, we've provided a solution that cuts through the noise to pinpoint the threats that matter most" says Eldad Chai, senior vice president of product management at Imperva. 

AI, or more accurately ML (machine learning), is certainly making an impact here. Researchers from the Cyentia Institute and Kenna Security analysed the effectiveness of a machine learning-based predictive model. "We found that it performs 2-8 times more efficiently," says Ed Bellis, CTO at Kenna Security, "with equivalent or better coverage of vulnerabilities, when compared against the 15 other remediation strategies in the research."
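Kenna's model itself is proprietary, but the underlying idea can be sketched in a few lines: learn from historical data which vulnerabilities are likely to be exploited, then remediate in descending order of predicted risk. The features, training data, and CVE names below are invented purely for illustration.

```python
# A minimal sketch of a predictive remediation model: train a classifier
# on historical vulnerability outcomes, then rank the open backlog by the
# predicted probability of exploitation. All values here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per CVE: CVSS base score, days since disclosure,
# public exploit code available (0/1), affected vendor widely deployed (0/1).
X_train = np.array([
    [9.8,  30, 1, 1],
    [5.3, 400, 0, 0],
    [7.5,  10, 1, 0],
    [4.0, 900, 0, 1],
])
y_train = np.array([1, 0, 1, 0])  # 1 = later exploited in the wild

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a backlog of open vulnerabilities and fix the riskiest first.
backlog = np.array([[8.1, 5, 1, 1], [6.2, 200, 0, 0]])
risk = model.predict_proba(backlog)[:, 1]
for cve, score in sorted(zip(["CVE-A", "CVE-B"], risk),
                         key=lambda pair: pair[1], reverse=True):
    print(f"{cve}: predicted exploitation risk {score:.2f}")
```

A real deployment would train on thousands of historical CVEs with far richer features; the point is that remediation order is driven by a learned risk score rather than a static rule such as "patch everything above CVSS 7".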

JASK, another vendor applying AI to the problem, won the 'hot cyber-security startup' award at the NetEvents IoT, Cloud and CyberSecurity Awards 2018 in San Jose last week, which SC Media UK attended. "Using ML to vet and add contextual elements can enhance and produce meaningful alerts, thus helping analysts focus on their core tasks," Rod Soto, director of security research at JASK, told us.
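JASK's platform isn't public, but the enrichment idea Soto describes can be illustrated with a toy example: fold context about the asset and the user into each raw alert, then triage by a single composite priority instead of raw volume. The context tables and weights below are invented; in a real ML system the weighting would be learned rather than hand-set.

```python
# A toy illustration of contextual alert enrichment: attach asset and
# user context to each raw alert and compute one priority score for
# triage. All data sources and weights here are hypothetical.
RAW_ALERTS = [
    {"rule": "port_scan", "src": "10.0.0.5", "user": "svc_backup"},
    {"rule": "cred_stuffing", "src": "10.0.0.9", "user": "cfo"},
]

ASSET_CRITICALITY = {"10.0.0.5": 0.2, "10.0.0.9": 0.9}  # e.g. from a CMDB
USER_RISK = {"svc_backup": 0.3, "cfo": 0.8}             # e.g. from an HR feed
RULE_SEVERITY = {"port_scan": 0.4, "cred_stuffing": 0.7}

def enrich_and_score(alert):
    """Attach context and a composite priority in [0, 1]."""
    alert["asset_criticality"] = ASSET_CRITICALITY.get(alert["src"], 0.5)
    alert["user_risk"] = USER_RISK.get(alert["user"], 0.5)
    alert["priority"] = round(
        0.4 * RULE_SEVERITY[alert["rule"]]
        + 0.3 * alert["asset_criticality"]
        + 0.3 * alert["user_risk"], 2)
    return alert

# Analysts see a short, ranked queue instead of an undifferentiated flood.
for a in sorted(map(enrich_and_score, RAW_ALERTS),
                key=lambda a: a["priority"], reverse=True):
    print(a["priority"], a["rule"], a["user"])
```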

As Tyler Moffitt, senior threat research analyst at Webroot, said in conversation with SC Media UK, "humans simply aren't built to process and apply intelligence to that amount of information and quickly become overwhelmed, which is exactly what this research showed. This becomes even more worrying as under GDPR the law now requires a maximum 72-hour window to report cyber-security breaches once detected."

Not everyone is convinced that AI is the answer to the security alerts problem, though, at least not yet. Take Pascal Geenens, Radware's EMEA security evangelist, who told SC Media that the question remains whether "incremental advancements in deep learning combined with adversarial studies will ultimately lead to the next generation of fully automated cyber-defensive solutions, or if we need another breakthrough in machine learning and neural networks to achieve the ultimate goal of fully autonomous cyber-defence."

And Stephen Gailey, solutions architect at Exabeam, warns that some ML-based User and Entity Behaviour Analytics (UEBA) systems are "just more tuning and correlation rules." Real UEBA systems, Gailey says, "use anomaly detection and predictive algorithms, together with organisational context to distil real information from the mass of raw data now available to large security monitoring teams."
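To see the distinction Gailey draws, consider a minimal per-entity anomaly check: rather than a fixed correlation rule with a hand-tuned threshold, each user is compared against a baseline of their own behaviour. The data and z-score threshold here are illustrative assumptions, not any vendor's algorithm.

```python
# A minimal sketch of per-user anomaly detection, as opposed to a static
# correlation rule: the threshold adapts to each entity's own baseline.
from statistics import mean, stdev

# Hypothetical baseline: megabytes downloaded per day by one user.
baseline = [120, 95, 140, 110, 130, 105, 125, 150, 100, 115]

def is_anomalous(observation, history, z_threshold=3.0):
    """Flag the observation if it sits far outside the user's own norm."""
    mu, sigma = mean(history), stdev(history)
    z = (observation - mu) / sigma
    return z > z_threshold, round(z, 1)

flag, z = is_anomalous(900, baseline)  # e.g. a sudden 900 MB spike
print(f"anomalous={flag}, z-score={z}")
```

The same 900 MB transfer that would be invisible to a one-size-fits-all rule stands out immediately against this user's history, which is the "organisational context" point Gailey is making.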

Meanwhile, Rashmi Knowles, field CTO EMEA at RSA Security, told SC Media that security engineers must lay the right foundations before handing tasks over: establishing a well-developed manual process that ties in with business-critical systems and keeps up with the human hackers on the other side of the frontline. "Only once this process has been ironed out, and is found to have an extremely low rate of failure," Knowles says, "can it be taken on by AI to reduce the likelihood of human errors and significantly speed up the security alert sifting process."

Ofer Maor, director of solutions management at Synopsys, agrees that humans are going to have to become far more skilled at guiding and optimising machine learning technologies, "especially as attackers start to use machine learning to automatically improve their attacks and respond to defensive measures."

But everyone concurs that humans are not about to be replaced by AI in the SOC anytime soon. Kevin Stear, lead threat analyst at JASK, is certain that almost every analytic implementation still requires human operators engaged in the triage of results and fine-tuning of the algorithms. "This is especially true for security data sets," Stear told SC Media, "where most organisations require a human in the loop for incident response, mitigation, and remediation."

We'll leave the final word to Ed Macnair, CensorNet CEO, who thinks the reality is that right now "machine learning is the only possible solution for dealing with alert overload. While we can talk about addressing the cyber-skills shortage, and improving technology to remove false negatives, neither is realistically going to make a dent in one million alerts a day."
