Security skills shortage forces CISOs to increase reliance on machine learning

News by Jay Jay

With enterprises struggling with a massive shortage of experienced cyber-security professionals, today's CISOs are placing more faith in machine learning, which they believe will be important to their IT security functions over the next two years.

According to a recent report from F5 Labs, CISOs across hundreds of enterprises are placing a lot of faith in machine learning technologies, both for the speed at which such technologies handle alerts, vulnerabilities, and threat feeds, and because of a severe lack of qualified security professionals.

A recent survey conducted by F5 along with the Ponemon Institute showed that 70 percent of CISOs considered machine learning to be important to their IT security functions in the next two years, as it helped them address security staffing shortages. With 77 different security certifications and 33 distinct areas of cyber-security work defined by the National Institute of Standards and Technology (NIST), it is nearly impossible to find professionals who excel in all of these areas, the firm said.

"We don't have enough people to look at all the alerts, vulnerabilities, and threat feeds. Worse, under a deluge of data, humans get tired and produce inconsistent results. Because of the wide spectrum of expertise, training, and experience, human bias can creep into the results.

"Yet, a machine learning system, once trained with enough correct statistics, can produce consistent and usable results. Machine Learning excels at classifying a population of data into buckets, which makes it good at anomaly detection and finding hidden relationships," it added.

However, while acknowledging that machine learning technologies are still far from perfect, F5 Labs said CISOs can use machine learning as a "first cut", allowing such technologies to analyse alerts, vulnerabilities, and threat feeds, and letting humans address the most interesting results.
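In practical terms, what the report describes is unsupervised anomaly detection used as a triage filter. The sketch below is purely illustrative and is not code from F5 Labs or Ponemon: it assumes scikit-learn's IsolationForest and invented traffic features, scores a batch of alerts, and hands only the most anomalous handful to human analysts.

```python
# Illustrative "first cut" triage: score alerts with an unsupervised model and
# queue only the most anomalous ones for human review. All feature values here
# are synthetic and the queue size is an arbitrary assumption.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-alert features, e.g. requests per minute and bytes transferred.
historical_alerts = rng.normal(loc=[100, 2_000], scale=[10, 200], size=(5_000, 2))
todays_alerts = rng.normal(loc=[100, 2_000], scale=[30, 600], size=(500, 2))

model = IsolationForest(random_state=0).fit(historical_alerts)

# Lower decision_function scores mean "more anomalous"; hand the bottom 20 to
# analysts instead of asking them to read all 500 alerts.
scores = model.decision_function(todays_alerts)
human_queue = np.argsort(scores)[:20]
print("Alert indices for analyst review:", human_queue.tolist())
```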

"No matter how advanced machine learning technology is, cyber-criminals will continue to develop increasingly sophisticated attacks. This means we will always need human input to validate that the AI-based technology meets security requirements and, if it doesn't, re-evaluate, refine and enhance its performance accordingly. Ideally, machine learning will help tackle real-time threats and free up people to monitor new and evolving dangers to keep ahead of the ever-evolving cyber-security landscape," said Tristan Liverpool, director of systems engineering at F5 Networks.

Giovanni Vigna, CTO and co-founder at Lastline, also warns against placing too much emphasis on machine learning, stating that it is sometimes presented "as the ultimate solution to every problem, because 'it's math. How could it be wrong?'"

"Unfortunately, the community is largely ignoring the problem of adversarial machine learning: what if a cyber-criminal knows what process is used to perform machine learning and what parameters have been learned? Will the cyber-criminal be able to change his/her attacks so that they appear to conform to what has been learned to be “normal”?" he asks.

"In the future, we will see them explicitly targeting machine learning and artificial intelligence. We should be careful not to rely blindly on these seemingly “perfect” approaches. Machine learning can help security, but one has always to assume that the cyber-criminal knows what is learned, and how it is learned, and prepare accordingly," he adds.

"Whilst Machine Learning (ML) and Artificial Intelligence (AI) systems can play an important role in alleviating the pressures on short-staffed security teams, there is still a significant amount of human input required in order to provide context and judgment," believes Stuart Clarke, head of security and Intelligence solutions at Nuix.

"ML also requires access to vast amounts of representative data in order to train the underlying models to stand a chance of making repeatable and accurate decisions, which also requires human input. It is critical that ML and AI are utilised to augment human intelligence and not replace it. Companies must ensure that they are still hiring and training security analysts as well as deploying ML and AI systems in order to create the most fully rounded security program," he adds.