AI provides autonomous cyber-incident response for threats in progress

There are plenty of companies claiming that AI will automate the security function, but one product promising more automation than most is Antigena, recently launched by Darktrace, which the company says can react autonomously to in-progress cyber-threats. The system is being promoted not as a complete incident response system that would let security analysts put their feet up, but as a means of freeing up time for busy Security Operations Centre (SOC) teams.

Essentially, the technology uses machine learning and probability mathematics to learn the normal ‘pattern of life’ for every user and device on a network. It then responds automatically to serious threats, taking proportionate, remedial action that neutralises them and buys the security team precious time to catch up. A compromised laptop, for example, might be allowed to keep working but be blocked from writing to the corporate drives.
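Darktrace has not published Antigena's internals, but the general idea of learning a per-device baseline and then responding proportionately can be illustrated with a simple sketch. Everything below is hypothetical: the class names, thresholds and the respond() helper stand in for whatever probabilistic models the product actually uses.

```python
# Illustrative only: Darktrace has not published Antigena's internals.
# This sketch shows the general "learn a baseline, respond proportionately"
# idea with a simple per-device frequency model; the class names, thresholds
# and the respond() helper are all hypothetical.
from collections import defaultdict

class PatternOfLife:
    """Tracks how often each device performs each kind of action."""

    def __init__(self, min_observations=50, rarity_threshold=0.01):
        self.counts = defaultdict(lambda: defaultdict(int))  # device -> action -> count
        self.totals = defaultdict(int)                        # device -> total actions seen
        self.min_observations = min_observations
        self.rarity_threshold = rarity_threshold

    def observe(self, device, action):
        """Record one action (e.g. 'write:corporate_share') during the learning phase."""
        self.counts[device][action] += 1
        self.totals[device] += 1

    def is_anomalous(self, device, action):
        """An action is anomalous if there is enough history and the action is very rare."""
        total = self.totals[device]
        if total < self.min_observations:
            return False  # not enough history to judge
        frequency = self.counts[device][action] / total
        return frequency < self.rarity_threshold


def respond(model, device, action):
    """Proportionate response: block only the unusual action, let the device keep working."""
    if model.is_anomalous(device, action):
        print(f"Blocking '{action}' from {device}; other traffic unaffected")
        return "block"
    return "allow"


if __name__ == "__main__":
    model = PatternOfLife()
    # Learning phase: the laptop normally browses the web and reads email.
    for _ in range(100):
        model.observe("laptop-42", "http:browse")
        model.observe("laptop-42", "smtp:read_mail")
    # A ransomware-like write to corporate shares is out of pattern and gets blocked.
    print(respond(model, "laptop-42", "write:corporate_share"))  # block
    print(respond(model, "laptop-42", "http:browse"))            # allow
```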

Darktrace's director of technology, Dave Palmer, speaking with SC Media UK, said the company is looking to differentiate itself in how it tackles threats. Darktrace says it takes very targeted action – for example, slowing down or stopping a compromised connection or device without impacting normal business operations.

Palmer told SC: “Security people we speak with are endlessly speaking of wanting to have time to move from Windows XP and Server 2003, both of which have reached their end of life, and we're hoping our technology could help security teams achieve this. We want teams focusing on bigger picture, security posture stuff, rather than being distracted by manually executing tasks every time a threat is spotted.”

Darktrace says its customers report that the product is able to augment their human security teams, taking automatic action against the evolving cyber-threats targeting their networks. To date, 30,000 previously unknown in-progress attacks have been detected.

Steve Drury, COO of Family Building Society, commented: “We were impressed with the power of Darktrace Antigena when we saw it in action during the Proof of Value. After a period of learning, the Antigena logic demonstrated its power to detect and contain potential ransomware attacks by blocking unusual traffic instantaneously, proving that Darktrace Antigena's ability to fight against in-progress threats is a real game-changer.”

Palmer told SC: “Whenever you mention artificial intelligence, people assume you're speaking of general artificial intelligence, akin to Skynet or Terminator. We're nowhere near that yet. We're doing what might be referred to as narrow AI, which does lots of probability calculations to make decisions in split seconds. We've effectively taken this idea and scaled it for the enterprise, so it is able to make decisions in a contextual and thoughtful way.”

Despite these high promises, recent research by Carbon Black, conducted to gauge how security researchers perceive non-malware attacks and how effective artificial intelligence (AI) and machine learning (ML) are at stopping them, confirms that the roles of AI and ML in preventing cyber-attacks have been met with both hope and scepticism.

Seventy percent of the security researchers interviewed by Carbon Black said attackers can bypass machine learning-driven security technologies, and nearly three-quarters (74 percent) said AI-driven cyber-security solutions are still flawed. A further 87 percent said it will be more than three years before they trust AI to lead cyber-security decisions.

Most of the security researchers interviewed consider AI to be in its development stages and not yet able to replace human decision-making in cyber-security to protect businesses from attack.

Late last year, SC spoke with Simon Crosby, chief technology officer of Bromium, who argued that “there's no silver bullet in security.” He says the idea that “you can just detect bad guys and stop attacks is hugely misleading.” This is because many attacks are carried out through tiny steps that often don't seem like much, concealed in the guise of legitimate requests and commands.

Crosby explains: “In cyber-security you're often up against criminals who already know very well how machines and machine learning work and how to circumvent their capabilities.” More and more SOC teams say they are experiencing breach notification fatigue, which is presumably down to the increase in these small steps taken to try to breach a company's perimeter.

And this is why it's very difficult to argue that machine learning doesn't have a ‘business case’ within the cyber-security industry. Everyone interviewed for this article agreed that machine learning is not perfect, but that it is a great companion for sifting through the thousands of notifications the average SOC team sees each month.

As Crosby said, “Having tools which can help find the needle in the haystack is amazing.” In other words, people generally sing machine learning's praises when it comes to analysing large data sets, and it is because of this that it is helping improve the fight against cyber-crime, even though, as Crosby put it, “we still don't feel comfortable leaving it to its own devices.” It allows SOC teams to concentrate on what matters and investigate the things that are potentially the biggest threats to their systems.
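None of the vendors quoted here describe their scoring internals, but the needle-in-the-haystack use case Crosby describes amounts to ranking a large alert queue so analysts see the riskiest items first. The following is a minimal, hypothetical sketch of that kind of triage; the features and weights are made up purely for illustration.

```python
# Illustrative only: a minimal sketch of alert triage, assuming alerts arrive
# as dicts with a handful of simple features. Real SOC tooling and the vendors
# quoted above use far richer models; these weights are invented.
RISK_WEIGHTS = {
    "unusual_destination": 3.0,   # traffic to a host the device never contacts
    "off_hours": 1.5,             # activity outside the user's normal working hours
    "new_process": 2.0,           # binary never seen on this host before
    "failed_logins": 0.5,         # per failed login attempt
}

def score(alert):
    """Weighted sum of the alert's features; higher means more worth investigating."""
    total = 0.0
    for feature, weight in RISK_WEIGHTS.items():
        total += weight * alert.get(feature, 0)
    return total

def triage(alerts, top_n=10):
    """Return the top_n highest-scoring alerts so analysts see the likely needles first."""
    return sorted(alerts, key=score, reverse=True)[:top_n]

if __name__ == "__main__":
    queue = [
        {"id": 1, "failed_logins": 2},
        {"id": 2, "unusual_destination": 1, "new_process": 1, "off_hours": 1},
        {"id": 3, "off_hours": 1},
    ]
    for alert in triage(queue, top_n=2):
        print(alert["id"], score(alert))
```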