Cyveillance responds to criticism over malware testing research

Opinion by Dan Raywood

Last week's news was dominated by the report that claimed that anti-virus detections were not of a decent standard.

The report from Cyveillance claimed that zero-day anti-virus detections are inadequate, and led to vendors dismissing the method of research and stating that they found the results to be poorly determined and tested.

Following the furore, I spoke with Eric Olson, vice president of solutions assurance at Cyveillance. He explained that while Cyveillance is not an anti-virus vendor, what it does is look at the internet as a standard user would and understand what malware is doing and how people get infected.

Asked what Cyveillance actually does, Olson explained that it ‘hoovers up the detail with one machine, and uses another to find the needle in the haystack – that being the malware or malicious URL'. Its work, he said, is to pick the needle out of the haystack, but more rapidly and at the network side rather than at the user's machine.

He said: “We see it as a publicly accessible internet and find things that are available and malicious. We see source code, intellectual property and uploaded financial information, what seeps out of the seams of the enterprise.”

Looking at the report, Olson said that one of Cyveillance's activities is to report in real-time, as an undetected URL may be installing malicious code – pulling a needle from a haystack.

“We generally see URLs and the overriding majority are installing an executable file on a computer without a user's approval so they are force-fed malware without their permission,” he said.

I asked if this report was a typical piece of research from Cyveillance. He said: “Yes, the company and I personally work on situations and put out a whitepaper, so we are not saying that we had just shown up to do this – we spend 365 days a year looking at threats. We decided to take 36 hours at random, take a sample and see how long the anti-virus vendors take to detect it as a threat that was getting force-fed.

“To be clear, the paper says that we are not systems evaluators and we browse the way a user does. What got lost was the last point, which was: when a given anti-virus vendor covers them in isolation, it says what is malware, and how long is taken to detect the things that they consider to be malware.”

The last point that Olson referred to was the ‘average lag time in days', which showed anything from two to 27 days could be taken to recognise a threat.

Olson said: “The point is this: the critique was that if you take a long time to call a file malware because a vendor said it was, you would be saying it is malware when it is not. But if somebody says that there was a long time taken, how long is it from infection to detection? People ask how we selected the binaries for this report.

“The client does not suffer from product failing, of the things the product calls malware, it is how long between infiltration and protection.”

One of the criticisms from anti-virus vendors was that 1,708 pieces of malware were ‘not a statistically significant sample set', according to ESET, which said that it sees around 200,000 unique new samples each day.

Olson said that a random 36-hour period was selected, during which 1,708 different pieces of malware were detected. He said: “We are not trying to stake performance versus malware, it was a cycle that was emulating a surfer's behaviour, and the numbers are accurate. Vendors may get feeds in from an academic institution or from data sharing; we looked at what is a threat now, and for that sample our findings are accurate.

“Malware regenerates every time it is installed, something that can generate in a way with one space in a different system. We have a view of the world that the vendors don't have - our speciality is what is happening on the web now.”

He concluded by saying that its customers, typically Fortune 500 businesses, ask what the payload is and what impact it has. Olson said that this sort of paper is done twice a year, and that it set out to answer what the lag window is from when a user is infected.

“At the time that you are infected with it, are you protecting me an hour or a month in advance? How long does it take before you are eventually protected? The answer is several days or weeks, depending on which vendor you use,” he said.

What has been an incendiary piece of research was always going to draw criticism from the mass of vendors, and with Olson's claim that this research will be followed, either this year or next, with more findings, I doubt we have heard the end of Cyveillance's thoughts.

You could say that the findings are interesting in showing how quickly vendors are able to respond to zero-day threats, but with the questions raised over testing methods and the selection of samples, I doubt the research is going to draw much sympathy.

