Misunderstanding anti-virus test results can cost organisations money and resources.
As an active, veteran member of the anti-virus community and a pioneer at one of the earliest AV companies, I have spoken with thousands of people interested in anti-virus over the last two decades. One thing I have consistently come across is the number of myths surrounding anti-virus software. These myths cause misunderstandings and can cost organisations money and resources. This is particularly evident in comparative testing: differences among testing methodologies can make one anti-virus engine appear more powerful than another, when in a real-life situation the opposite may be true. I have summarised three of these myths below.
Myth 1: Anti-virus software can only detect specific, known viruses.
In the very early days of anti-virus technology this was certainly the case. Today, however, most AV products use a combination of reactive (signature/fingerprint) and proactive technologies – primarily heuristics. There is a general trend towards increased use of heuristics, due to their capability for early detection, as well as their speed.
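The two approaches can be contrasted with a minimal sketch. This is an illustration only, not any vendor's actual method: the byte pattern, signature name, heuristic indicators, weights and threshold below are all invented for the example.

```python
# Invented signature database: exact byte fingerprints of known samples.
SIGNATURES = {
    b"\xde\xad\xbe\xef": "Example.Virus.A",  # hypothetical pattern/name
}

# Invented heuristic indicators with weights; a real engine uses far
# richer static and behavioural analysis than substring matching.
HEURISTIC_RULES = [
    (b"CreateRemoteThread", 40),   # code-injection API string
    (b"cmd.exe /c del",     30),   # self-deletion command line
    (b"IsDebuggerPresent",  20),   # anti-debugging check
]
HEURISTIC_THRESHOLD = 50           # assumed cut-off for this sketch

def scan(data: bytes) -> str:
    # Reactive pass: exact match against known fingerprints.
    for pattern, name in SIGNATURES.items():
        if pattern in data:
            return f"known malware: {name}"
    # Proactive pass: score suspicious traits; flag above a threshold.
    # This is how a never-before-seen sample can still be caught.
    score = sum(weight for needle, weight in HEURISTIC_RULES if needle in data)
    if score >= HEURISTIC_THRESHOLD:
        return f"heuristic detection (score {score})"
    return "clean"
```

The reactive pass can only ever catch samples already in the database; the heuristic pass trades that certainty for the ability to flag new files that merely look suspicious, which is why its threshold also governs the false-positive risk discussed below.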
Testing the proactive capabilities of an anti-virus product is best done with the most recent virus sets. Some testing organisations actually disable the product's signature update capabilities in order to evaluate its proactive detection of these newer viruses (e.g. those less than one week old).
Myth 2: If an anti-virus has flagged or blocked a file, this file is definitely malware.
Anti-virus engines may mistakenly block files even if they do not contain viruses. A few incorrectly blocked files (false positives) are not necessarily a major cause for concern, but they should be taken into account when comparing AV engines. Another type of non-malware file that may trigger a malware indication is ‘pseudo-malware’: corrupted malware, garbage files and ‘intended malware’ (samples written to be viruses but too broken to function).
Pseudo-malware may appear during testing of anti-virus solutions. Many organisations that maintain malware collections also save these files and use them in tests. For this reason, many AV companies detect these files as malware. On the other hand, some anti-virus engines do not identify these types of files as malware, since, after all, there is no risk associated with them. It is therefore not enough simply to compare the numbers of viruses detected by different anti-virus engines – one must analyse the types of files that were flagged.
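The point about analysing what was flagged, rather than how many files were flagged, can be made concrete with a small sketch. The test set, sample names, category labels and engine verdicts below are all invented for illustration.

```python
from collections import Counter

# Ground-truth labels for a tiny, invented test set.
TEST_SET = {
    "sample1": "real_malware",
    "sample2": "real_malware",
    "sample3": "corrupted_malware",  # pseudo-malware: cannot execute
    "sample4": "garbage_file",       # pseudo-malware: random bytes
    "sample5": "clean",
}

def summarise(flagged: set) -> Counter:
    # Break a raw detection list down by what was actually flagged.
    return Counter(TEST_SET[name] for name in flagged)

# Hypothetical verdicts from two engines on the same test set.
engine_a = {"sample1", "sample2"}                        # real threats only
engine_b = {"sample1", "sample2", "sample3", "sample4"}  # also flags pseudo-malware

print("Engine A:", summarise(engine_a))
print("Engine B:", summarise(engine_b))
```

On raw counts, engine B "detects" twice as many files as engine A, yet both catch exactly the same real malware; the extra detections are harmless pseudo-malware and add no protection.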
Myth 3: Testing an anti-virus solution should be done by throwing as many viruses at it as possible.
Whenever an anti-virus product is evaluated, it is important to test with a mix of clean (non-malicious) files and malware to ensure that the product not only detects viruses, but also generates few or no false positives. Testing with clean files has another important purpose: measuring the performance and resource utilisation of the AV solution.
Performance differences translate into actual costs – an anti-virus engine that requires four servers is twice as expensive to run as one requiring only two. To test actual anti-virus performance or system impact, a test using only clean files should be run. The reason for this is that AV engines in production spend most of their time scanning legitimate files, and only a small fraction of their time scanning malware.
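The server arithmetic above can be sketched as a back-of-the-envelope cost model. The daily volume and per-server throughput figures are assumptions invented for the example, not benchmark results.

```python
def servers_needed(msgs_per_day: int, clean_msgs_per_server_per_day: int) -> int:
    # Ceiling division: a partially loaded server is still a server you pay for.
    return -(-msgs_per_day // clean_msgs_per_server_per_day)

DAILY_VOLUME = 2_000_000  # messages per day (assumed)

# Assumed clean-file scanning throughput for two hypothetical engines.
fast_engine = servers_needed(DAILY_VOLUME, 1_000_000)
slow_engine = servers_needed(DAILY_VOLUME, 500_000)

print(f"fast engine: {fast_engine} servers, slow engine: {slow_engine} servers")
```

Under these assumed figures the slower engine needs four servers where the faster one needs two – roughly double the hardware, power and administration cost – which is why throughput on clean files, not malware, dominates the running cost.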
This discussion is but a small indication of the complexity of comprehensive anti-virus testing and the interpretation of detection and performance results. Before making a purchase decision organisations should be aware of the test methodologies and virus samples used. In addition, differences in AV engine behaviour such as proactive and reactive algorithms and flagging of pseudo-malware should be closely examined.
About the author: Helmuth Freericks is general manager, anti-malware solutions at Commtouch, an internet security company that serves Google and McAfee with its anti-virus solutions. He has been fighting malware for nearly two decades.