The latest DARPA Grand Challenge is looking to artificial intelligence to seek out and destroy vulnerabilities in software.
The US Defense Advanced Research Projects Agency (DARPA) Cyber Grand Challenge will see seven teams battle it out to determine whether machine learning can find and fix exploits better than humans.
The agency said on its competition website that identifying threats and remediating them can take over a year from first detection to the deployment of a solution, by which time critical systems may have already been breached. “This slow reaction cycle has created a permanent offensive advantage,” reads the blurb.
DARPA wants the competition to automate this cyber defence process, “fielding the first generation of machines that can discover, prove and fix software flaws in real-time, without any assistance. If successful, the speed of autonomy could someday blunt the structural advantages of cyber offense.”
At a teleconference, Mike Walker, program manager of the Cyber Grand Challenge at DARPA, said that he was “hoping to see proof that the entire computer security cycle of responding to a flaw can be automated.”
He added that the bar for the competition has been set deliberately high so that no team will find and fix all the flaws. Qualifying heats held last year examined a number of pieces of software and, while no single team found all the vulnerabilities, when the results of all the teams were combined, 100 per cent of the test code had been patched by the competition's finish.
Seven teams have been selected for the final and will be given a computer put together by DARPA with the task of finding and fixing faults. The software will be undisclosed to finalists and once the challenge starts no further human interaction will be allowed.
"The machines have to understand the language of the software, author the logic for the software, write network clients and arrive at the path of the new vulnerabilities entirely on their own,” said Walker.
The challenge will take place at Def Con 24, held in Las Vegas, in early August. All code from the teams and DARPA's test code will be available after the event under an open source licence.
Marc Laliberte, information security threat analyst at WatchGuard Technologies, told SCMagazineUK.com that artificial intelligence in the cyber-security realm boils down to smart, unsupervised decision making by computer systems.
“An AI-driven solution monitors data and identifies attacks based on matched indicators and machine learning from previous experiences. Instead of relying on human-provided signatures for known attacks, an AI solution can be proactive, identifying previously unknown threats on its own.”
“I think in the next year we are really going to see machine learning as a feature take off in the world of cyber-security. Human security experts shouldn't worry about their jobs being replaced quite yet though, as they will still be needed to identify false positives and missed attacks,” he added.
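The distinction Laliberte draws between human-provided signatures and learned detection can be sketched in a few lines of code. The sketch below is purely illustrative and makes up its own data: `KNOWN_SIGNATURES`, the payloads, and the scoring threshold are all assumptions, not any real product's logic. The point is only that a signature check misses a payload it has never seen, while a simple statistical model of past benign traffic can still flag it as anomalous.

```python
# Illustrative contrast between signature matching and a toy "learned"
# anomaly score. All names and data here are hypothetical examples.

KNOWN_SIGNATURES = {"DROP TABLE", "../../etc/passwd"}


def signature_match(payload: str) -> bool:
    """Classic approach: flag only payloads containing a known signature."""
    return any(sig in payload for sig in KNOWN_SIGNATURES)


def anomaly_score(payload: str, baseline_lengths: list) -> float:
    """Toy learned approach: measure how far the payload's length deviates
    (in standard deviations) from previously observed benign traffic."""
    mean = sum(baseline_lengths) / len(baseline_lengths)
    var = sum((x - mean) ** 2 for x in baseline_lengths) / len(baseline_lengths)
    std = var ** 0.5
    return abs(len(payload) - mean) / (std or 1.0)


baseline = [40, 42, 38, 41, 39]   # lengths of benign requests seen so far
novel_attack = "A" * 500          # no known signature, but wildly anomalous

print(signature_match(novel_attack))                 # the signature check misses it
print(anomaly_score(novel_attack, baseline) > 3.0)   # the anomaly score flags it
```

A real ML-driven product would of course model far richer features than request length, but the proactive property Laliberte describes is the same: the detector needs no prior knowledge of the specific attack.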
Giovanni Vigna, CTO and co-founder at Lastline, told SC that more AI is definitely a good thing as far as the security industry is concerned.
“The algorithms and the approaches that are developed as part of this competition will help security professionals by allowing them to focus on the important parts of an application, without having to waste time on trivialities.”
István Szabó, product manager at BalaBit IT Security, told SC that AI and machine learning has the “potential to improve the effectiveness of cyber-security by enabling a more human-centric approach, continuously monitoring what is happening in the environment, automatically building a baseline of what is considered normal behaviour and automatically detecting anomalies.
“AI intervenes to eliminate the threat by improving the reaction speed and preventing data breaches at an earlier stage of the attack.”
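The "build a baseline, then detect anomalies" loop Szabó describes can be sketched minimally as follows. Everything here is an assumed toy model, not BalaBit's implementation: the event format (user login hours), the `tolerance` parameter, and the class name are illustrative choices.

```python
from collections import defaultdict

# Hypothetical sketch of continuous behavioural baselining: observe normal
# activity, then flag events that fall outside each user's learned pattern.


class BehaviourBaseline:
    def __init__(self):
        self.hours = defaultdict(list)  # user -> login hours seen so far

    def observe(self, user: str, hour: int) -> None:
        """Continuously fold observed normal activity into the baseline."""
        self.hours[user].append(hour)

    def is_anomalous(self, user: str, hour: int, tolerance: int = 2) -> bool:
        """Flag logins far outside the user's usual hours."""
        seen = self.hours[user]
        if not seen:
            return True  # never-seen user: treat as anomalous
        return all(abs(hour - h) > tolerance for h in seen)


baseline = BehaviourBaseline()
for h in (9, 10, 9, 11, 10):      # a user who normally logs in mid-morning
    baseline.observe("alice", h)

print(baseline.is_anomalous("alice", 10))  # within the normal pattern
print(baseline.is_anomalous("alice", 3))   # a 3am login stands out
```

Reacting at this point in the attack, before credentials are abused further, is the earlier-stage intervention the quote refers to.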