Even the most popular signature-based anti-virus solutions detect fewer than 19 per cent of new malware threats.
A report by Cyveillance claimed that traditional anti-virus vendors ‘continue to lag behind online criminals when it comes to detecting and protecting against new and quickly evolving threats on the internet'.
Cyveillance tested 13 popular anti-virus solutions to determine their detection rate over a 30-day period, and found that popular solutions only detect an average of 18 per cent of new malware attacks. By day eight the solutions averaged a 45.7 per cent detection rate, rising to 56.6 per cent on day 15, 60.3 per cent by day 22 and 61.7 per cent after 30 days.
The most capable solution at zero-day detection, according to the report, was F-Secure with a 27 per cent detection rate, followed by Kaspersky Lab and McAfee on 22 per cent each, then Symantec on 21 per cent and Sophos on 20 per cent.
It claimed that, as vendors take an average of 11.6 days to ‘catch up' with new malware, ‘users should not rely on the AV industry as their only line of defence'.
Cyveillance COO Panos Anastassiadis said: “Even after 30 days, many anti-virus vendors cannot detect known attacks, making it critical for enterprises to take a more proactive approach to online security in order to minimise the potential for infection.
“To increase protection, users can't forget the basics – avoid unknown or disreputable websites, increase security settings on their web browser and leverage supplemental malware block lists to increase security on their devices. Only through both proactive and reactive tools can a solid security platform be achieved.”
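The supplemental block lists Anastassiadis mentions typically work by matching hostnames against a curated list of known-bad domains. As a minimal sketch of the idea — the list entries and helper function here are hypothetical illustrations, not a real threat feed:

```python
# Minimal sketch of a supplemental block-list check on outbound URLs.
# BLOCK_LIST entries are hypothetical; a real deployment would load a
# curated feed and refresh it regularly.
from urllib.parse import urlparse

BLOCK_LIST = {"malware.example", "phish.example"}  # hypothetical entries

def is_blocked(url: str) -> bool:
    """Return True if the URL's host, or any parent domain, is block-listed."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the full host and each parent domain against the list,
    # so subdomains of a blocked domain are also caught.
    return any(".".join(parts[i:]) in BLOCK_LIST for i in range(len(parts)))

print(is_blocked("http://evil.malware.example/payload.exe"))  # True
print(is_blocked("http://example.org/"))                      # False
```

The parent-domain walk matters because attackers routinely rotate subdomains under a single malicious domain.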
Randy Abrams, director of technical education at ESET, said that the report ‘is a textbook example of how to do anti-malware testing fundamentally wrong'. He said that a sample set of 1,708 unconfirmed malicious files was used, and as ESET sees around 200,000 unique new samples each day, 1,708 is not a statistically significant sample set.
He said: “Cyveillance claims to have confirmed the samples, but their methodology proves otherwise. The way you confirm a sample is malicious is by examining what the sample actually does. Cyveillance discarded any samples that were not detected by three anti-virus companies. There are two interesting aspects to this approach. For one, if a harmless file is detected by three scanners (false positive) and correctly not detected by the rest of the scanners, then the scanners with the false positives are rewarded and the scanners with accurate detection are penalised.”
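Abrams' objection can be put in numbers. In the following back-of-the-envelope sketch — all figures are hypothetical — a harmless file that three scanners falsely flag is admitted to the test set under Cyveillance's rule, and a less accurate scanner ends up scoring the same as a more accurate one:

```python
# Hypothetical illustration of the false-positive reward effect Abrams
# describes: a harmless file flagged by three scanners is counted as
# "confirmed malware", inflating the score of scanners that flag it.
TRUE_MALWARE = 10   # genuinely malicious samples in the set
FALSE_SAMPLE = 1    # harmless file admitted by the three-scanner rule
TOTAL = TRUE_MALWARE + FALSE_SAMPLE

# Scanner A: catches 8 real samples and correctly ignores the harmless file.
score_a = 8 / TOTAL
# Scanner B: catches only 7 real samples, but also flags the harmless file.
score_b = (7 + 1) / TOTAL

print(round(score_a, 3), round(score_b, 3))  # 0.727 0.727
```

Scanner A detects more real malware (8 of 10 versus 7 of 10), yet both score identically once the false positive counts as a hit — the accurate scanner is effectively penalised.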
He claimed that it is true that the anti-virus industry has room for improvement, but the flaws in the testing methodology mean that the Cyveillance test has enormous room for inaccuracy and is not a scientific illustration of the problem.
He said: “Seriously, if a professional security organisation such as Cyveillance can make such obviously novice errors, how do you expect the average user to have knowledge to contribute to the establishment of scientific standards? It would be great if Cyveillance joined AMTSO, or at least read some of the documentation on the site.”
David Harley, senior research fellow at ESET, claimed that it is an ‘interesting case of a correct conclusion based on inadequate data'. He too commented that the results are based on too small a sample population to be statistically useful, and that comparative performance cannot be ranked meaningfully on that basis without demonstrably accurate weighting for prevalence, of which there is no indication in the report.
He said: “Confirmation that each file is malware by vendors used in the test is not validation: it sounds as if it should eliminate false positives but it doesn't (remember the Kaspersky experiment demonstrating cascading FPs due to insufficient validation of shared samples?) That doesn't necessarily mean that the results were significantly skewed by false positives: it just casts doubt on their understanding of what they're really testing.
“It's not clear what they meant by real-time testing: presumably using an on-access scanner, which is better than pure static testing using on-demand scanning. However, since they say ‘the samples were fed to the anti-virus solutions in real-time and only consisted of confirmed malicious files', it doesn't sound like true dynamic or whole product testing. Which is okay: a test can be static, WildList-based and so on and still be valid and even in accordance with AMTSO principles.
“But only if it's clear to the tester and to his audience what the limitations of the test really are. The trouble with most tests is that the testers draw big conclusions from tests that only look at a single detection behaviour.”
Alan Bentley, SVP international at Lumension, commented that the overall message is that the market is being attacked from different places and companies are seeing a huge increase in the amount of malware.
Talking to SC Magazine, he said: “Now big businesses are trying to circumvent known vulnerabilities in their technology. Companies are only looking at signature-based protections. There will be a point where all you have is signatures and companies need to move to a more proactive approach where they determine what they allow to execute in their environment.
“What the report did not say is that organisations are under constant pressure to protect against multiple malware attacks and the threats are going up and up. Traditional anti-virus vendors will struggle to compete as the attackers are changing tactics.”
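The proactive approach Bentley describes — determining what is allowed to execute, rather than trying to recognise every new piece of malware by signature — is commonly implemented as hash-based application allow-listing. A minimal sketch, with purely illustrative data:

```python
# Hedged sketch of the allow-list approach Bentley describes: execution
# is permitted only if a binary's SHA-256 hash appears on a pre-approved
# list, so unknown binaries are denied by default. The sample bytes and
# policy here are illustrative, not a real product's implementation.
import hashlib

APPROVED_HASHES = set()  # in practice, populated from a managed policy

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def may_execute(binary: bytes) -> bool:
    """Allow execution only for binaries whose hash is pre-approved."""
    return sha256_of(binary) in APPROVED_HASHES

trusted = b"known-good program bytes"
APPROVED_HASHES.add(sha256_of(trusted))

print(may_execute(trusted))         # True: on the approved list
print(may_execute(b"new malware"))  # False: unknown, denied by default
```

The contrast with signatures is the default: a signature scanner permits anything it does not recognise as bad, while an allow-list denies anything it does not recognise as good — which is why novel malware that evades signatures still fails the allow-list check.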