Can hype-scarred cyber-security pros dare to be hopeful about artificial intelligence as a means to ease the acute information security labour shortage?
The answer is a highly qualified "yes," say several industry players who are usually skeptical of trendy cyber solutions but remain on the lookout for any tech that promises greater efficiency in labour-intensive grunt work.
"A lot of the cyber-security tasks today are mundane and manual," says Chenxi Wang, founder and general partner at Rain Capital. "From that lens, AI can help eliminate those tasks. Human time can be better spent on the more sophisticated tasks such as threat hunting, versus things like fixing configurations or processing low-level data."
Certainly information security executives are already looking to artificial intelligence to bring some relief to SOCs inundated with information as they struggle to spot and respond to threats.
According to a mid-2018 Ponemon Institute survey, "The Value of Artificial Intelligence in Cybersecurity," some 69 percent of respondents credit AI with increasing the speed of analysing threats. Another 64 percent conclude that AI assists in containing compromised systems. Survey respondents were somewhat less confident about AI’s ability to help identify application security vulnerabilities, with 60 percent agreeing that it does so.
But when it comes to AI’s potential role in cyber labour saving, the Ponemon survey found that the 603 respondents were sharply divided. Some 52 percent anticipated that AI will actually increase their need for in-house expertise, and half reported that AI requires too much staff to implement today.
"We’re going to have a difficult time finding people with the [AI] skills," concludes Larry Ponemon, who oversaw the survey. "There is not yet a program at MIT or Caltech for people to assume control over an AI platform" in cyber-security, he says. Like Wang, Ponemon says AI’s rollout is best focused on the bottom of the cyber-security to-do list – mundane jobs where human labor can more easily be replaced with less complexity. "AI can help with routine tasks and your typical level one work; however, there still needs to be labour for higher end analysis," adds a CISO at a major financial institution based in Chicago.
The gap seen in the Ponemon study between predictions of AI’s promised role in cyber-security and today’s limited deployment in cyber-security drudgery is to be expected, says Rahul Kashyap, former CTO of Cylance and now CEO at Awake Security. With cloud computing bringing vast and affordable computational power within reach of organisations of all sizes, AI-based products and services will no longer be prohibitively expensive and will gather momentum. But "it takes five to 10 years to mature any real wave," Kashyap says. "We’re right in the middle of that right now."
Even some AI proponents are careful about raising expectations about what the technology can do to improve cyber-security. "AI learns biases," says Arran Stewart, founder of Job.com, which uses AI-enabled blockchain technology to automate the recruitment process. As a result, a savvy malicious actor can manipulate AI defenses, he adds. "An attacker could send a fake attack in for 100 days in a row" to make the machine learning system learn a pattern. "Then, on the 101st day, it does something completely different."
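One variant of the poisoning Stewart describes can be sketched in a few lines. The toy detector below (all scores, thresholds and names are invented for illustration, not drawn from any real product) learns "normal" from whatever it observes – so a patient attacker who feeds it a slow ramp of controlled activity drags the baseline along until the real attack no longer looks anomalous:

```python
# Toy online anomaly detector: learns "normal" as a running mean/std of
# event scores (Welford's algorithm) and flags large deviations from it.
class OnlineAnomalyDetector:
    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations
        self.threshold = threshold

    def observe(self, score):
        """Fold a new event score into the baseline."""
        self.n += 1
        delta = score - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (score - self.mean)

    def is_anomalous(self, score):
        if self.n < 2:
            return True        # no baseline yet: treat everything as suspect
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(score - self.mean) > self.threshold * max(std, 1e-9)

# A detector trained only on honest, stable traffic flags the spike at once...
clean = OnlineAnomalyDetector()
for _ in range(30):
    clean.observe(10.0)
assert clean.is_anomalous(100.0)

# ...but one fed a slow ramp of attacker-controlled scores for 100 "days"
# has had its baseline dragged upward, and waves the same spike through.
poisoned = OnlineAnomalyDetector()
for day in range(100):
    poisoned.observe(10.0 + 0.9 * day)
assert not poisoned.is_anomalous(100.0)
```

The clean detector flags the attack immediately; the poisoned one, having folded 100 days of attacker-controlled observations into its notion of normal, misses it entirely.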
An AI cyber reality check
If Kashyap is correct, cyber-security teams will have to make their first forays into the new technology amid the unresolved debate over just what distinguishes artificial intelligence. According to a recent article in the MIT Technology Review, "the broad definition of AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can." Algorithms that find patterns constitute machine learning; deep learning is a still more powerful subset of AI.
The power of deep learning is already apparent in applications in many fields, and it has bedazzled cyber-warriors looking for a means to defend (and attack) critical infrastructure, government and military targets the world over. But autonomous cyber-defences will be limited by the algorithms developed by researchers who base their work on known threats, says Jane LeClair, president of the Washington Center for Cybersecurity Research and Development.
"Generally speaking, AI can effectively identify and defend against known threats and attackers but as of now is limited in its ability to match wits with the evolving and sophisticated attacks of skilled protagonists," LeClair says.
While the world waits for a cyber-security machine-learning breakthrough comparable to face recognition, beleaguered SOCs are looking for off-the-shelf AI solutions that help them tackle the challenges of the day. Typical applications include recognition and blocking of malware based on machine learning of known samples. On that basis, AI-enabled malware detection can identify a possible variation of the malware and move to quarantine it or take other action. (Hackers, of course, can do likewise: IBM recently applied its machine learning tech to typical malware to build a hard-to-detect, AI-enhanced proof-of-concept ransomware called DeepLocker.)
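As a rough illustration of variant detection – a deliberately naive sketch, not any vendor's engine, with made-up byte strings standing in for malware samples – a detector can compare a file's byte-trigram profile against profiles of known samples and treat a close match as a likely variant:

```python
from collections import Counter
import math

def trigram_profile(data: bytes) -> Counter:
    """Crude feature vector: counts of overlapping 3-byte sequences."""
    return Counter(data[i:i + 3] for i in range(len(data) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def looks_like_variant(sample: bytes, known: list, threshold: float = 0.7) -> bool:
    """Flag the sample if it closely resembles any known-bad sample."""
    profile = trigram_profile(sample)
    return any(cosine_similarity(profile, trigram_profile(k)) >= threshold
               for k in known)

known_malware = [b"MZ\x90\x00evil_payload_routine_v1" * 4]   # stand-in sample
variant = b"MZ\x90\x00evil_payload_routine_v2" * 4           # small mutation
benign = bytes(range(64))                                    # unrelated bytes
```

A single-byte mutation leaves most trigrams intact, so the variant still scores well above the threshold, while the unrelated file scores near zero – the same intuition that lets sample-trained classifiers catch variations of known malware.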
Another benefit of AI in cyber-defence is better automation of incident response – for example, analysing an email reported as a phishing attempt, checking it against known viruses and routing it for further investigation. "Advanced AI can also act as an advisor to analysts, helping them quickly identify and connect the dots between threats," says Limor Kessem, executive security advisor, IBM Security.
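That kind of routing automation can be sketched as a simple decision chain. The indicators, domains and disposition labels below are invented for illustration:

```python
import hashlib

# Hash of a payload from a previously confirmed campaign (invented example).
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"click here to verify your account").hexdigest(),
}

def triage(email_body: bytes, sender_domain: str, trusted_domains: set) -> str:
    """Route a user-reported phishing email to a disposition."""
    digest = hashlib.sha256(email_body).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return "auto-quarantine"        # exact match to a known campaign
    if sender_domain not in trusted_domains:
        return "escalate-to-analyst"    # unknown sender: needs human eyes
    return "close-benign"               # trusted sender, no known indicator

trusted = {"corp.example"}
triage(b"click here to verify your account", "evil.example", trusted)
# -> "auto-quarantine"
```

The payoff is triage speed: exact matches to known campaigns are dispatched automatically, and only the genuinely ambiguous reports consume analyst time.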
IT inventory management – almost always a headache for cyber-security teams, who typically don’t control lists of assets – can also be boosted by AI. Too often, the issuance of a CVE or discovery of a bug sends vulnerability management teams rummaging through outdated or incomplete inventories, a problem that grows more complex with the spread of cloud-native ephemeral infrastructure such as containers.
Then there’s the even bigger headache of identifying dependencies of vulnerable apps that require patching. By tracking system additions, decommissions and changes, AI-enabled asset management – even if outside the scope of cyber-security teams’ work – will make cyber-security teams’ lives easier.
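The bookkeeping such a system automates can be pictured with a toy (deliberately non-AI) inventory index that maps software components to the assets running them, so a newly published CVE resolves directly to affected hosts instead of a manual rummage. Asset and component names are invented:

```python
from collections import defaultdict

class AssetInventory:
    """Live index of which assets run which software component."""
    def __init__(self):
        self.by_component = defaultdict(set)

    def register(self, asset: str, components: list):
        """Record an asset addition or change."""
        for component in components:
            self.by_component[component].add(asset)

    def decommission(self, asset: str):
        """Drop a retired asset from every component's membership."""
        for assets in self.by_component.values():
            assets.discard(asset)

    def affected_by(self, component: str) -> list:
        """Answer the post-CVE question: which assets run this component?"""
        return sorted(self.by_component[component])

inventory = AssetInventory()
inventory.register("web-01", ["nginx", "openssl"])
inventory.register("db-01", ["openssl"])
inventory.affected_by("openssl")   # -> ["db-01", "web-01"]
```

The AI angle in commercial tooling is keeping this index current automatically – ingesting change events and inferring dependencies – rather than the lookup itself, which is trivial once the data is trustworthy.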
More generally, AI-based big data applications can ingest the vast amounts of data that don’t typically set off alarms, establish a baseline of expected behavior, and flag anomalous activity on systems or networks that a human may not see.
AI can also amp up SIEM with threat intelligence. An early application is user behavior analytics, which sorts through data to find outliers – such as user logins from unexpected locations – and correlates activity with behavior deemed suspicious. "The real problem from a technical perspective is about becoming smarter about the events that are showing in the SOC," says Dhananjay Sampath, co-founder and CEO at Armorblox. "What AI brings to the table is to make sure the alerts are going to the right person at the right time."
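A minimal sketch of the login-location rule mentioned above – with invented users and countries, and none of the statistical weighting a real behavior-analytics product would apply – might look like this:

```python
from collections import defaultdict

class LoginBaseline:
    """Per-user history of login source countries."""
    def __init__(self):
        self.seen = defaultdict(set)   # user -> set of countries observed

    def record(self, user: str, country: str):
        self.seen[user].add(country)

    def is_outlier(self, user: str, country: str) -> bool:
        history = self.seen[user]
        # With no history we can't judge; with history, a never-before-seen
        # country for this user is the outlier worth surfacing to the SOC.
        return bool(history) and country not in history

baseline = LoginBaseline()
for _ in range(30):
    baseline.record("alice", "CA")

baseline.is_outlier("alice", "CA")   # False: matches her history
baseline.is_outlier("alice", "RU")   # True: never seen for this user
```

Real products score many signals at once (time of day, device, velocity between logins); the value AI adds is correlating them and routing the resulting alert to the right analyst, as Sampath notes.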
Further, information security auditors can add AI tools to their repertoire to great effect, says the author of a 2016 white paper from the American Audit Association: "Face and voice recognition software and archives can serve as supporting evidence for cyber-security, or more prosaically as authorisation and separation of duties, controls and meta-controls."
Similarly, AI can also be a balm to identity and access management (IAM) teams tasked with tracking both user activity and increasingly complex access to containers and functions as a service. In that context, machine learning can help IAM teams keep pace with dynamic, cloud-native environments and minimise the need for multi-factor authentication (MFA) and other restrictions on trusted users and systems.
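Risk-based step-up authentication of the sort described here can be sketched as a simple scoring function: sessions matching a user's learned pattern skip the extra friction, while riskier ones trigger MFA. The features, weights and threshold below are invented for illustration:

```python
def session_risk(known_device: bool, usual_hours: bool, usual_network: bool) -> float:
    """Score a login session; higher means less like the user's baseline."""
    score = 0.0
    if not known_device:
        score += 0.5   # unrecognised device is the strongest signal here
    if not usual_network:
        score += 0.3
    if not usual_hours:
        score += 0.2
    return score

def requires_mfa(risk: float, threshold: float = 0.4) -> bool:
    """Only step up to MFA when the session risk clears the bar."""
    return risk >= threshold

requires_mfa(session_risk(True, True, True))    # False: trusted pattern
requires_mfa(session_risk(False, True, False))  # True: new device + network
```

In a real deployment the "usual" features would themselves be learned from history (which is where the machine learning comes in), so trusted users see MFA prompts only when something about the session genuinely deviates.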
Another cyber-security issue ripe for AI is the intrusion detection system/intrusion prevention system (IDS/IPS). While combined IDS/IPS capabilities are increasingly considered a must-have for enterprise-class organisations, the notorious volume of false positives around intrusion prevention has pressured SOCs to turn down the noise. AI-enabled IDS/IPS, vendors say, could allow a smoother transition from learning mode to blocking mode, as machine learning enables the systems to distinguish truly malicious user behavior from a rare but innocent action by a legitimate user.
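One vendor-neutral way to picture that learning-to-blocking transition is a per-rule gate that only promotes a signature to blocking mode once its alerts have proven precise enough during learning mode. All counts and thresholds are invented for illustration:

```python
class RuleGate:
    """Tracks analyst verdicts on one rule's alerts during learning mode."""
    def __init__(self, min_alerts=50, min_precision=0.95):
        self.true_positives = 0
        self.false_positives = 0
        self.min_alerts = min_alerts
        self.min_precision = min_precision

    def record_verdict(self, confirmed_malicious: bool):
        """Log whether an alert from this rule was a real intrusion."""
        if confirmed_malicious:
            self.true_positives += 1
        else:
            self.false_positives += 1

    @property
    def mode(self) -> str:
        total = self.true_positives + self.false_positives
        if total < self.min_alerts:
            return "learning"            # not enough evidence yet
        precision = self.true_positives / total
        return "blocking" if precision >= self.min_precision else "learning"

gate = RuleGate()
for _ in range(49):
    gate.record_verdict(True)
gate.mode                      # "learning": only 49 alerts judged so far
gate.record_verdict(True)
gate.record_verdict(False)
gate.mode                      # "blocking": 50/51 correct, ~0.98 precision
```

A rule that keeps tripping on rare-but-innocent user actions never clears the precision bar and stays in alert-only mode – which is the behavior SOCs want when they turn down the noise.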
In the near term, AI is likely to have its biggest impact in helping SOCs round up the disparate sources of data from the raft of cyber-security tools typically employed in large organisations, says Jenn Black, a cyber-security executive currently with EY. "Currently, I’ve seen [AI] best used in orchestration efforts – how to orchestrate workflows and integrate between existing point solutions," she says.
But today’s AI-based incremental enhancements will soon become baseline cyber-security, especially in heavily regulated environments, says Darin Hurd, CISO at mortgage provider Guaranteed Rate – which means cyber-security professionals must master AI skills or be prepared to pay up in an already expensive AI labor market.
"As companies continue to build their cyber-security programs to support evolving threats, laws and regulations, AI tech will more likely play a role in these updated programs," Hurd says. "Companies won’t be able to build or operate the AI-enabled tech without properly trained people to make it happen."
This article was originally published on SC Media US.