Artificial intelligence and the future of cyber-security
Alexandre Arbelet and Daniel Brown explain the role of artificial intelligence in enhancing cyber-security
Alexandre Arbelet, security researcher and Daniel B Brown, security consultant, FarrPoint
As more cyber-security threats arise every day, extensive research into prevention and detection schemes is being conducted globally. One of the biggest issues is keeping up with the sheer mass of new threats emerging online. Traditional detection schemes are rule or signature based: creating a rule or signature relies on prior knowledge of a threat's structure, source and operation, making it impossible to stop threats that have never been seen before.
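This limitation is easy to illustrate with a toy sketch (the payloads and signatures below are entirely hypothetical): a hash-based signature check catches only samples it has already seen, so even a trivially mutated variant slips through.

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known malicious payloads.
KNOWN_SIGNATURES = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
    hashlib.sha256(b"malicious_payload_v2").hexdigest(),
}

def is_known_threat(payload: bytes) -> bool:
    """Flag a payload only if its hash matches a stored signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_SIGNATURES

# A known sample is caught...
print(is_known_threat(b"malicious_payload_v1"))   # True
# ...but a trivially mutated variant evades detection entirely.
print(is_known_threat(b"malicious_payload_v1!"))  # False
```

Any change to the payload, however small, produces a different hash, which is why purely signature-based schemes cannot stop novel or disguised threats.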
Manually identifying every new and disguised threat is simply beyond human capability. One solution that is gaining global recognition is the use of artificial intelligence (AI).
The subset of AI most relevant to cyber-security is its 'learning' branch. This field, called machine learning, concerns the capability of a computer to learn from data and improve over time. AI can use the knowledge it gains to detect threats, including those yet to be discovered, by identifying shared characteristics within families of threats.
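A minimal sketch of the idea (with invented feature values): learn the typical characteristics of known threat families, then classify a previously unseen sample by the family it most closely resembles.

```python
import math

# Toy training data: feature vectors for samples of two hypothetical
# malware families (features might be file entropy, suspicious API calls...).
training = {
    "ransomware": [(7.9, 12.0), (7.7, 10.0), (8.0, 11.0)],
    "adware":     [(5.1, 3.0), (4.8, 2.0), (5.0, 4.0)],
}

def centroid(points):
    """Average the feature vectors of a family's known samples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

centroids = {family: centroid(pts) for family, pts in training.items()}

def classify(sample):
    """Assign a never-before-seen sample to the nearest family centroid."""
    return min(centroids, key=lambda f: math.dist(sample, centroids[f]))

# A new, unseen variant with ransomware-like characteristics is still caught.
print(classify((7.8, 11.5)))  # ransomware
```

Real machine-learning detectors use far richer features and models, but the principle is the same: the system generalises from known samples to catch variants it has never seen.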
It is not hard to imagine AI one day replacing a room full of humans monitoring a network. The human brain is limited in its ability to consider multiple variables simultaneously during decision-making: psychologist Graeme Halford estimated this limit at between four and five variables at any one time. AI fills this gap by providing a tool that can take hundreds of variables into account, whilst processing millions of records per second.
AI is most accurate when it receives feedback on the decisions it makes. One thing AI cannot currently borrow from humans is the ability to assess a decision based on the situation and environment. Combining the situational and environmental awareness of a human with the data processing and pattern recognition abilities of AI makes for the strongest possible detection scheme. This was recently demonstrated by a research team at MIT with their AI2 ('AI squared') detection scheme, which reinforced AI learning with a security analyst giving the system feedback on its most unusual decisions. This reduced the number of false positives the AI was making by a factor of five.
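A highly simplified sketch of such an analyst-in-the-loop feedback cycle, loosely inspired by the AI2 idea described above (all scores, labels and thresholds below are illustrative, not from the MIT system):

```python
# Each event carries an anomaly score; the analyst's verdict is the
# ground truth the system cannot know on its own.
events = [
    {"score": 0.95, "malicious": True},   # genuine attack
    {"score": 0.80, "malicious": False},  # unusual but benign
    {"score": 0.75, "malicious": False},  # unusual but benign
    {"score": 0.30, "malicious": False},  # ordinary traffic
]

threshold = 0.7  # initial, naive alerting threshold

def alerts(evts, thr):
    return [e for e in evts if e["score"] >= thr]

# Day 1: the system raises three alerts; two are false positives.
false_positives = [e for e in alerts(events, threshold) if not e["malicious"]]

# The analyst reviews the most unusual decisions and labels them.
# Feedback: lift the threshold just above the highest benign score seen.
benign_scores = [e["score"] for e in false_positives]
if benign_scores:
    threshold = max(benign_scores) + 0.01

# Day 2: with the analyst's feedback folded in, only the true attack alerts.
print(len(alerts(events, threshold)))  # 1
```

A production system would retrain a model on the analyst's labels rather than nudge a single threshold, but the loop is the same: human judgement corrects the machine, and the machine's false-positive rate falls.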
Cyber-threats are rapidly evolving. Attacks are stealthier, more targeted and more evasive than ever before. Because of this, the industry tends to move away from pure prevention towards detection, relying on a human component to take action. What is abnormal is not necessarily intrusive, and what looks legitimate might not be within a given context: attackers can use stolen credentials and access systems posing as a legitimate user, and without situational awareness there is no way to differentiate them from a regular user. Humans must therefore remain in the detection scheme, whether to take action or to provide the feedback that helps the system improve.
What to expect
The future of cyber-security will continue as it always has, as a game of cat and mouse. Attackers will create new methods of concealment and defenders will create new methods of detection. The difference with AI is that we are trying to make something that will adapt to the changes the attackers make. Current research suggests we will soon see distributed AI detection schemes operating similarly to the human immune system, giving some form of environmental awareness. Like the human immune system, one part would be dedicated to addressing common threats (innate immune system), whilst another part would investigate anomalies to detect threats that have not yet been seen by the system (adaptive immune system).
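A hedged sketch of how such a two-layer, immune-system-style scheme might be structured (the signature set, traffic baseline and thresholds below are all invented for illustration):

```python
import hashlib
import statistics

# 'Innate' layer: fast lookup of common, already-known threats.
KNOWN_BAD = {hashlib.sha256(b"known_worm").hexdigest()}

# 'Adaptive' layer: a learned baseline of normal behaviour
# (here, a toy series of bytes-per-second readings).
baseline = [100, 110, 95, 105, 102]
mean, stdev = statistics.mean(baseline), statistics.stdev(baseline)

def innate_check(payload: bytes) -> bool:
    """Catch threats the system has already seen, by signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

def adaptive_check(rate: float, z_limit: float = 3.0) -> bool:
    """Investigate anomalies: readings far outside the learned baseline."""
    return abs(rate - mean) / stdev > z_limit

def detect(payload: bytes, rate: float) -> str:
    if innate_check(payload):
        return "blocked by innate layer"
    if adaptive_check(rate):
        return "flagged by adaptive layer"
    return "allowed"

print(detect(b"known_worm", 101))     # blocked by innate layer
print(detect(b"novel_threat", 900))   # flagged by adaptive layer
print(detect(b"normal_traffic", 99))  # allowed
```

As in the biological analogy, the cheap innate layer handles the common cases, freeing the adaptive layer to investigate only what is genuinely unusual.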
The primary objective of AI-based security solutions is to help detect what other controls fail to. Many researchers and vendors claim unheard-of accuracy rates for their AI detection schemes. Specific families of threats can indeed be detected very accurately; however, emerging families may display changing characteristics, or characteristics purposely designed to trick AI detection. This makes accuracy metrics relative, as researchers and vendors can only assess detection performance against a small set of threats amongst near-infinite real-world possibilities. There are good products out there that will genuinely enhance your security posture, but accuracy statistics are more of a selling point than a feature to be relied upon. For now, AI detection schemes are strongest alongside human decision-makers. This will make them commonplace in environments such as security operations centres in the near future, alleviating a huge workload with help from AI.
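Why a headline accuracy figure can mislead is easy to show with some base-rate arithmetic (all figures below are hypothetical): when genuine threats are rare, even a detector that is right 99% of the time buries its true alerts under false positives.

```python
# Illustrative base-rate arithmetic with invented figures.
events_per_day = 1_000_000
threat_rate = 1 / 100_000          # genuinely malicious events are rare
true_positive_rate = 0.99          # detector catches 99% of real threats
false_positive_rate = 0.01         # ...and wrongly flags 1% of benign traffic

threats = events_per_day * threat_rate      # 10 real threats per day
benign = events_per_day - threats

true_alerts = threats * true_positive_rate  # ~9.9 genuine alerts
false_alerts = benign * false_positive_rate # ~10,000 false alarms

# Precision: the fraction of raised alerts that are actually threats.
precision = true_alerts / (true_alerts + false_alerts)
print(f"{precision:.1%}")  # roughly 0.1% — almost every alert is false
```

This is exactly the gap a human analyst in the loop helps close: the machine narrows a million events to a shortlist, and human judgement separates the handful of real threats from the noise.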