Why traditional antivirus is facing increasing criticism

Traditional antivirus (AV) products have been taking a beating in the media recently, but why? The reason is simple: they cannot and do not protect you from new malware.

There are only three types of files in the world, and there only ever will be: known bad (confirmed malware), known good (confirmed 'trusted' files) and unknown files. The first two are easy to deal with, via blacklisting and whitelisting respectively, but what do you do with the unknown files, where neither blacklist nor whitelist rules apply?

You follow an 'if/else/then' routine. If you have ever written any code, you'll get this straight away…

  • If [New File] is not on the blacklist…
  • …and [New File] is not on the whitelist…
  • …then launch [New File] in an isolated, contained environment where it is impossible for [New File] to contaminate the host OS (a code sketch follows below).
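
Here's a minimal sketch of that routine in Python. The set names, hash placeholders and action strings are hypothetical, for illustration only, not any real product's API:

    # Hypothetical sketch of the blacklist/whitelist/sandbox routine above.
    BLACKLIST = {"<sha256-of-known-malware>"}   # known bad
    WHITELIST = {"<sha256-of-trusted-file>"}    # known good

    def handle_new_file(file_hash: str) -> str:
        if file_hash in BLACKLIST:
            return "block"      # known bad: never runs
        elif file_hash in WHITELIST:
            return "run"        # known good: runs normally on the host
        else:
            return "sandbox"    # unknown: runs only in the isolated environment

    # The 'default allow' products criticised below effectively replace the
    # final branch with "run", letting unknown files execute on the host OS.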

The core reason so many infections take place daily is that many solutions don't have the 'then' part of the above procedure and launch the file anyway.

The problem is clear at first glance. If the application isn't already blacklisted, the default action is to let the execution go ahead: 'default allow'.

But what if [New File] is bad but just not on a blacklist yet? How many infections will take place before it gets added to the blacklist? Will I be one of those infected?

Some solutions take the above a step further by using heuristics: their procedure checks for suspicious activity within a default timeout period before launching the file.

Once again: 'default allow' the application/script if it is not on a blacklist and it looks OK. The critical errors with this approach are twofold. First, you're relying on the application/script to visibly do something suspicious. CryptoLocker didn't seem to do anything suspicious. What's so suspicious about encrypting files? BitLocker does it every day.

Secondly, you are relying on the application/script to perform its suspicious action within a certain period of time. All your bad guy (who has thoroughly researched all the vendors' flaws) has to do is insert a 'sleep' command as the first line of code in his script, where the 'sleep' value is greater than the heuristic timeout value. Solutions that use virtualised sandboxing have this same problem.
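
To illustrate how trivial that time-bomb is, here is a hedged sketch; the timeout value and payload function are invented for illustration and carry no real malicious logic:

    import time

    HEURISTIC_TIMEOUT = 60   # seconds the scanner observes the file (assumed value)

    def payload():
        pass                 # stand-in for whatever the attacker wants to run

    time.sleep(HEURISTIC_TIMEOUT + 10)  # sleep past the observation window...
    payload()                           # ...then act once the scanner has moved on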

So what's the alternative? 'Default deny' rather than 'default allow'. But that's a little harsh, isn't it? What if the file is genuine?

I would have to go around and whitelist everything that belongs to me, plus all its dependencies, and repeat the process every time a file is upgraded. This, while a great way to secure the environment, is an administrative nightmare.
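
A hedged sketch of what that whitelisting chore looks like in practice; the install path and file filter are invented for illustration:

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        # A whitelist entry is typically a cryptographic hash like this.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    # Every executable AND every dependency needs its own entry...
    app_dir = Path("C:/Program Files/MyApp")   # hypothetical install directory
    whitelist = {
        sha256_of(p)
        for p in app_dir.rglob("*")
        if p.is_file() and p.suffix.lower() in {".exe", ".dll"}
    }

    # ...and every upgrade changes the hashes, so the whole exercise repeats.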

Clearly these methods, or a combination of them, still don't work.

The solution is sandboxing: contain the application/script in an isolated environment. I can hear the groans of despair: "Yeah, that's what Sandboxie did and that failed. That's what Java did and failed. That's what Google did and failed." However, it comes down to how your sandbox works. There are criteria for creating a fail-safe sandbox.

Firstly, don't use virtualisation: it's expensive at the desktop level, and desktop virtualisation is trivial to bypass. The two most favoured bypass techniques are time-bombs and gaming the heuristic timeout, as described above.

Secondly, use persistence. Load the sandbox at Windows start-up and create a persistent layer on top of the OS; anything not on the blacklist and not on the whitelist can ONLY run inside this layer and therefore CANNOT touch the OS. Why? Because the sandbox got there first, so anything that application tries to do can only be done inside the sandbox. This means it is impossible for the application to escape the sandbox.
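
Conceptually, the persistent layer is copy-on-write redirection. Here is a hedged sketch of the idea, with invented paths; a real implementation lives in a kernel driver, not Python:

    from pathlib import PureWindowsPath

    SHADOW_ROOT = PureWindowsPath("C:/Sandbox/shadow")   # hypothetical shadow storage

    def redirect_write(requested: PureWindowsPath) -> PureWindowsPath:
        # Map a write aimed at the real OS into the sandbox layer. The process
        # believes it wrote to the real path; the host file system is untouched.
        return SHADOW_ROOT / requested.relative_to(requested.anchor)

    # An unknown app trying to drop C:/Windows/evil.dll actually writes to:
    print(redirect_write(PureWindowsPath("C:/Windows/evil.dll")))
    # -> C:\Sandbox\shadow\Windows\evil.dll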

It also means there is zero reliance on heuristic detection. It is irrelevant whether that application behaves suspiciously or not. If it isn't on the blacklist and it isn't on the whitelist, then it goes in the sandbox. You Shall Not Pass. Job done.

Thirdly, make it non-interactive: remove the human element. Do not put pop-ups on users' screens asking them to make a judgement call as to whether the file should be allowed to perform [RPC call] to [COM]. They have absolutely no idea what is being requested. All they know is that they double-clicked something, a pop-up came up asking for permission to allow or sandbox something and, because they double-clicked that something, it must be OK.

In addition, do not put pop-ups on the administrator's screen. Admins don't have four hours to Google whether a previously unknown application should be allowed to perform [RPC call] to [COM]. They'll also end up with the user on the phone complaining that the PDF attachment they double-clicked isn't working and the computer is broken. Sooner or later, an overworked, underpaid admin will, just this one time, click 'allow' and get infected.

The sandboxing process needs to be fully automatic while still allowing the user to interact with the 'contained' application/data. The administrator needs to be able to manage the sandboxed applications, whitelisting the good stuff and uninstalling/deleting the bad stuff at their leisure.

Opt for a non-virtualised, persistent, non-heuristic-dependent auto-sandbox. Then allow the admin to see all sandboxed applications/scripts/whatever in the sandbox manager, so that they can do whatever they need to, whenever they feel like doing it.

Melih Abdulhayoğlu is CEO and founder of Comodo
