The research, which Carbon Black says looked “Beyond the Hype”, found that the roles of AI and ML in preventing cyber-attacks have been met with both hope and scepticism.
The vast majority (93 percent) of the 400 security researchers interviewed for the research said non-malware attacks pose more of a business risk than commodity malware attacks and, crucially, that these attacks are often not stopped by traditional anti-virus offerings.
Mike Viscuso, co-founder and CTO of Carbon Black, told SC Media UK: “Researchers have reported seeing an increase in the number, and sophistication, of non-malware attacks. These attacks are specifically designed to evade file-based prevention mechanisms and leverage native operating system tools to keep attackers under the radar.”
One respondent explained: “Most users seem to be familiar with the idea that their computer or network may have accidentally become infected with a virus, but rarely consider a person who is actually attacking them in a more proactive and targeted manner.”
Two-thirds of security researchers are not confident that legacy antivirus (AV) products, such as those discussed in the recent WikiLeaks data dump of CIA files, could protect an organisation from non-malware attacks.
Nearly half (47 percent) said their AV solution had missed malware during 2016, or that they weren't sure whether it had.
According to 70 percent of security researchers, attackers can bypass machine learning-driven security technologies, and nearly one-third said ML-driven security solutions are easy to bypass.
Most security researchers consider artificial intelligence (AI) to be in its development stages and not yet able to replace human decision making in cyber-security to protect businesses from attack.
Three-quarters (74 percent) of researchers said AI-driven cyber-security solutions are still flawed, and 87 percent said it will be more than three years before they trust AI to lead cyber-security decisions.
Some of the most common types of non-malware attacks researchers reported seeing were remote logins (55 percent), WMI-based attacks (41 percent), in-memory attacks (39 percent), PowerShell-based attacks (34 percent) and attacks leveraging Office macros (31 percent).
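These “living off the land” techniques abuse legitimate tools, so defenders often look for suspicious usage patterns rather than malicious files. The sketch below is a minimal, illustrative heuristic in Python for flagging risky PowerShell command lines; the patterns are common red flags (encoded commands, hidden windows, in-memory downloads), but this is my own toy example, not how Carbon Black or any product actually works.

```python
import re

# Illustrative heuristics only: real endpoint products rely on much richer
# behavioural telemetry than command-line pattern matching.
SUSPICIOUS_PATTERNS = [
    re.compile(r"-enc(odedcommand)?\b", re.IGNORECASE),   # obfuscated payload
    re.compile(r"-windowstyle\s+hidden", re.IGNORECASE),  # hidden console window
    re.compile(r"downloadstring", re.IGNORECASE),         # in-memory download
    re.compile(r"invoke-wmimethod", re.IGNORECASE),       # WMI abuse
]

def is_suspicious(command_line: str) -> bool:
    """Flag process command lines matching known 'living off the land' patterns."""
    return any(p.search(command_line) for p in SUSPICIOUS_PATTERNS)

print(is_suspicious("powershell.exe -WindowStyle Hidden -enc SQBFAFgA"))  # True
print(is_suspicious("powershell.exe Get-ChildItem C:\\Logs"))             # False
```

A rule set like this illustrates the detection trade-off discussed throughout the research: tighten the patterns and attackers slip past; loosen them and legitimate administrative PowerShell triggers false alarms.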
Although PowerShell is commonly used for malicious purposes, Rick McElroy, security strategist for Carbon Black, speaking at the event, highlighted its legitimate value: “That's how Netflix stays up, they use PowerShell to spin up new instances whenever they are needed. So it isn't going anywhere soon.”
However, there are risks associated with using AI-driven security solutions too.
If you become too reliant on AI to make security decisions, you may end up making more mistakes.
High false positive rates are still proving to be a challenge, as AI requires huge data sets to learn what does and does not warrant alerting a security analyst.
AI can also slow down security operations: analysts end up investigating more alerts, and security teams often don't have the bandwidth to keep up.
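A back-of-the-envelope calculation shows why even a small false positive rate buries an analyst team. The event volume, false positive rate, and triage time below are illustrative assumptions of my own, not figures from the research:

```python
# Why a "small" false positive rate still overwhelms analysts.
# All numbers are illustrative assumptions, not figures from the survey.
benign_events_per_day = 1_000_000   # telemetry events a mid-size org might see
false_positive_rate = 0.001         # 0.1 percent, optimistic for an ML classifier
seconds_per_triage = 300            # roughly 5 minutes to clear one alert

false_alerts = benign_events_per_day * false_positive_rate
analyst_hours = false_alerts * seconds_per_triage / 3600

print(f"{false_alerts:.0f} false alerts per day")     # 1000 false alerts per day
print(f"{analyst_hours:.0f} analyst-hours per day")   # 83 analyst-hours per day
```

Under these assumptions a 0.1 percent error rate generates around ten full-time analysts' worth of wasted triage every day, which is the bandwidth problem the researchers describe.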
And finally, AI-driven solutions are easy for an attacker to bypass if the attacker understands how that specific system operates.
Viscuso concluded: “Eighty-seven percent of cyber-security researchers indicated it will be at least three years before they trust artificial intelligence to lead cyber-security decisions. That's because AI must rely heavily on human experiences and training to arrive at ‘decisions.' Researchers noted a number of reasons they don't yet trust AI in cyber-security. Most security researchers know that cyber-security is still very much a human vs. human battle. These researchers are able to see through the marketing hype of current AI solutions and understand that AI is a component to modern information security programs and should be used primarily to assist and augment human decision making - not replace it.”