It is only a matter of time before artificial intelligence (AI) boosts cyber-crime, warn security researchers and industry leaders. With digital transformation expanding the attack surface of organisations, eight out of 10 security decision makers think AI-supercharged attacks are inevitable, shows a survey by Forrester Consulting on behalf of Darktrace.
A majority of respondents agreed that AI should be used as part of their defence mechanisms. The survey drew responses from the United States (23 percent), the United Kingdom (23 percent), the Netherlands (15 percent), Singapore (25 percent) and Australia (16 percent).
“AI will supercharge existing attacks like ransomware, helping criminals run broader and more contextualised campaigns. Offensive AI will make detecting attacks far more difficult,” Max Heinemeyer, director of threat hunting at Darktrace, told SC Media UK.
To keep pace with AI-driven threats, survey respondents said, organisations need cyber-security defences that cover everything from cloud to email and respond with surgical precision at machine speed, as well as AI augmentation that can adapt to shifting threat vectors and reduce the time spent on detection and analysis.
However, these kinds of attacks have already existed for several years, ImmuniWeb founder and CEO Ilia Kolochenko told SC Media UK. AI is just a vector to enhance, augment or accelerate existing types of widespread attacks, not a fundamentally new type or class of attack, he explained.
“Many cyber-gangs leverage machine learning to better profile the most susceptible victims, randomise and improve spear-phishing emails or drive-by-download web content to spread ransomware.”
Heinemeyer agreed, pointing to the proliferation of AI-manipulated deepfakes.
“Offensive AI will be used to create realistic digital fakes designed to supercharge phishing campaigns. AI cuts out the hours of social network research and reconnaissance that today’s hackers conduct in order to launch a highly targeted attack – the AI attacker can do this in seconds,” he said.
Relatively new vectors are used to bypass existing security mechanisms (eg, WAF or anti-fraud systems), accelerate application fuzzing or falsify someone's voice in vishing attacks, Kolochenko noted.
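To make the fuzzing point concrete, the sketch below shows the kind of brute-force process that machine learning is said to accelerate: a fuzzer feeds random inputs to a parser and records which ones crash it. Everything here is a toy assumption for illustration — the `parse_record` target, the crash criterion and the input generator are all hypothetical, not tooling described in the article.

```python
import random
import string

def parse_record(data: str) -> list[str]:
    """Toy parser under test: expects 'key=value' pairs separated by ';'."""
    fields = []
    for pair in data.split(";"):
        key, value = pair.split("=")  # raises ValueError on malformed input
        fields.append(f"{key}:{value}")
    return fields

def fuzz(iterations: int = 1000, seed: int = 0) -> list[str]:
    """Feed random strings to the parser; collect every input that crashes it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        candidate = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(1, 20))
        )
        try:
            parse_record(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

crashes = fuzz()
print(f"{len(crashes)} of 1000 random inputs crashed the parser")
```

A learning-based fuzzer replaces the blind random generator with a model that mutates inputs toward the crashes already found, covering the same search space in far fewer iterations.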
“But we are still dealing with the very same security vulnerabilities exploited for over a decade, ranging from different types of buffer overflows to SQL injection attacks. We are still very far from seeing a strong AI capable of creating and launching novel attacking techniques.”
Still, the capabilities that AI can offer to evasion techniques are tremendous, said Heinemeyer.
“An agent using some form of decision-making engine for lateral movement might not even require command and control traffic to move laterally. Eliminating the need for command and control traffic drastically reduces the detection surface of existing malware.”
Traditional security controls are already struggling to detect attacks that have never been seen before in the wild – be it malware without known signatures, new command and control domains or individualised spear-phishing emails, he explained.
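The weakness Heinemeyer describes can be seen in a deliberately simplified sketch of signature-based detection: an exact-match scheme flags only payloads it has seen before, so even a one-byte variant slips through. The hash database and payloads below are hypothetical illustrations, not any real product's signature format.

```python
import hashlib

# Toy signature database: SHA-256 hashes of known-bad payloads.
KNOWN_BAD = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def signature_match(payload: bytes) -> bool:
    """Exact-match detection: flags a payload only if its hash is known."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

print(signature_match(b"malicious payload v1"))   # True  - known sample
print(signature_match(b"malicious payload v1 "))  # False - one-byte variant evades
```

Real signature engines are more sophisticated than a hash lookup, but the structural limit is the same: any detector keyed to previously observed artefacts fails on content that has never been seen in the wild, which is exactly what automated mutation produces at scale.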
“There is no chance that traditional tools will be able to cope with future attacks as this becomes the norm and easier to realise than ever before.”
One of the most alarming elements of AI-powered attacks is that they can target the entire digital environment at the same time. Existing IoT environments, which are often under-secured, will almost certainly be targeted, warned Heinemeyer. An AI-powered attack against industrial IoT could constitute a devastating blow to critical national infrastructure, making AI-powered attacks a serious cause for national security concerns, he added.
Interestingly, securing IoT specifically was not the top concern for security leaders in the event of weaponised AI becoming mainstream. System/business interruption was the highest concern (75 percent), closely followed by IP or data theft (74 percent). Loss of trust and reputational damage came a distant third, at 45 percent, the survey found.
“When a major leap in innovation occurs, such as AI-driven attacks, defenders have to think beyond regular security. Time to respond, time to investigation, and time to remediation must be cut down. We cannot bring a human to a machine fight – AI-driven defence will be critical,” said Heinemeyer.
“Given that the cyber-crime industry managed to extract considerable value from the practical usage of machine learning, we will likely see a further surge of AI-powered attacks, though they are unlikely to change our threat landscape in a substantial manner,” Kolochenko added.