The presentation was effectively a demolition job on the effectiveness of CVSS (the Common Vulnerability Scoring System) as a means of prioritising vulnerabilities for remediation.
Looking at the common approach of ranking threats by CVSS score and then tackling all those above a certain threshold, typically nine or higher, Roytman, a data scientist at Risk I/O, said that even remediating vulnerabilities scoring 9.5 and above addressed only about 3.5 percent of the attacks actually likely to occur. To illustrate how poorly the system works, he noted that this result is not much better than a blind 'random' approach, which would counter about 2 percent of attacks.
The reasons lie primarily in how CVSS scores are constructed, which Roytman says does not always reflect reality. Tackling known vulnerabilities that are not being exploited can be a waste of time if it diverts resources from vulnerabilities that are being exploited but are not yet resulting in data exfiltration, or that target less high-value systems, as these can simply be steps in a larger attack.
The most efficient approach, Roytman suggested, is to target the vulnerabilities that are most often exploited, including those for which patches are available: the mere availability of a patch can reduce a CVSS score from 10 to 8.5 even if the patch has not been applied.
By examining 50 million live vulnerabilities across 1.5 million networks at 2,000 organisations, Roytman found 3 million breaches. Analysing this big data on actual exploits, what was used and what would have stopped them, showed that the biggest risk came from the 'script kiddie' end of the market: attackers using Metasploit and Exploit DB. Random prioritisation would have stopped 2 percent of attacks, and remediating everything with a CVSS score of 10 less than 4 percent, whereas prioritising vulnerabilities listed in Exploit DB and Metasploit would have prevented 30 percent of attacks.
As such, Roytman concluded that this provides a more effective approach to prioritisation, one that could yield a success rate up to nine times better than CVSS.
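The contrast between the two prioritisation strategies can be sketched in a few lines of Python. This is purely illustrative and not Risk I/O's actual pipeline; the field names, CVE identifiers and data below are invented for the example.

```python
# Hypothetical vulnerability records: CVSS score plus flags indicating
# whether public exploit code exists in Metasploit or Exploit DB.
vulns = [
    {"id": "CVE-A", "cvss": 10.0, "in_metasploit": False, "in_exploit_db": False},
    {"id": "CVE-B", "cvss": 8.5,  "in_metasploit": True,  "in_exploit_db": True},
    {"id": "CVE-C", "cvss": 6.8,  "in_metasploit": False, "in_exploit_db": True},
]

def by_cvss(vulns, threshold=10.0):
    """CVSS-driven strategy: remediate everything at or above a score threshold."""
    return [v["id"] for v in vulns if v["cvss"] >= threshold]

def by_exploit_availability(vulns):
    """Exploit-driven strategy: remediate anything with a public exploit."""
    return [v["id"] for v in vulns if v["in_metasploit"] or v["in_exploit_db"]]

print(by_cvss(vulns))                  # ['CVE-A']
print(by_exploit_availability(vulns))  # ['CVE-B', 'CVE-C']
```

The toy data mirrors Roytman's argument: the CVSS-only view surfaces the single "perfect 10" vulnerability, while the exploit-driven view surfaces the two flaws attackers actually have off-the-shelf tooling for, including one whose score was lowered precisely because a patch exists.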