Is the truth about vulnerability disclosure too dangerous for the public to know?

The issue of vulnerability disclosure is a difficult one to assess: on one side there is the public acknowledgement of a flaw, and on the other the perceived ‘announcement’ of it to would-be attackers.

Will Hogan, VP of sales and marketing at Idappcom, looks at vulnerability disclosure, asking when it is appropriate and whether we should be disclosing vulnerabilities at all.

A debate has been raging for at least the last ten years over the rights and wrongs of vulnerability disclosure. It became the topic of the day for security professionals recently when two giants of the industry, Microsoft and Google, disagreed publicly about how to handle disclosure after a Google researcher went public with a vulnerability only days after Microsoft had been informed.

For those who might be unaware of this subject, or why it is important, it concerns telling the users of software about any faults in it that might allow unauthorised persons to gain access to their systems: what is normally called hacking.

A huge amount has been written about this subject over the years and anyone with enough time and energy to research it all would probably conclude that we are nowhere near resolving the fundamental issues. Do we disclose, and if so, when?

On one side of the argument we have the vendors and the ethical researchers. They know that users need to be told, but they want a period of time between vulnerability discovery and public disclosure, giving them an opportunity to produce a patch. This is referred to as ‘responsible disclosure’, or ‘coordinated vulnerability disclosure’ (CVD) in Microsoft speak.

On the other side you have the users, some ethical researchers and the ‘not so ethical’ researchers. They argue that disclosure should be immediate, so that users can be aware of all the threats and take action to defend themselves. This is referred to as ‘full disclosure’ or ‘zero-day disclosure’. There are merits in each position and it is difficult to say who is right.

What is clear is that there is no real consensus in the industry on how long the gap between discovery and disclosure should be. TippingPoint gives vendors six months to fix a flaw, Google says 60 days, CERT says 45 days and IBM ISS gives vendors 30 days to respond.

Some researchers seem to be disenchanted with this confusion and have taken the initiative themselves. A Russian security company released zero-day details for major vendors in January 2010, and the Abysssec Security team are doing the same in September. So, is it possible to say what a reasonable amount of time is? Probably not: if the vulnerability is a serious one, such as a kernel-level flaw, a patch can take a long time to produce, yet in the meantime end-users need to be protected.

It is easy to sympathise with the vendors' position. It can hurt them commercially if users, or potential users, know about problems in their software. Just imagine a company in the middle of evaluating new software, or an upgrade, hearing of a vulnerability disclosure. It is strongly rumoured that exactly this has happened in the past, when a company with 70,000 users deferred an operating system upgrade because of known problems.

One argument against full disclosure is that by keeping vulnerabilities secret, hackers won't find out about them and create exploits. This is flawed logic. Hackers are good at their jobs; they are all looking for their 15 minutes of fame and they work very hard at it. Indeed, it is argued that full disclosure is what made vendors more responsive to vulnerabilities.

As an example, consider this: in April 2010 Tavis Ormandy disclosed a flaw in the Java virtual machine that made it easy for hackers to install malware on end-user machines. Oracle told Ormandy that the threat did not warrant a fix outside of the next scheduled patch release. Ormandy published the details and, five days later, the vulnerability was fixed. So, does full disclosure work? Well, maybe.

Another group with an interest in this is the network security appliance vendors. These vendors produce IP filtering devices that can recognise attacks and keep them out. This group needs to know about vulnerabilities as soon as possible so that they can create a security rule to defend the network. If they don't know, their appliances are ineffective against any exploit that might be developed for the vulnerability.
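To make that dependence concrete, here is a minimal sketch, in Python, of the signature-matching principle such appliances rely on. It is an illustration only: the rule names and byte patterns are hypothetical, and a real IPS engine reassembles streams, decodes protocols and matches large compiled rule sets rather than scanning single payloads. The point is in the final line: where no rule exists for a vulnerability, the exploit simply passes.

    # A toy signature-based filter. Rule names and patterns are
    # hypothetical; real appliances inspect reassembled, decoded
    # traffic against far larger rule sets.
    KNOWN_BAD_PATTERNS = {
        "example-traversal-rule": b"../../../../etc/passwd",
        "example-nop-sled-rule": b"\x90" * 16,
    }

    def inspect(payload: bytes):
        """Return the name of the first matching rule, or None to allow."""
        for rule, pattern in KNOWN_BAD_PATTERNS.items():
            if pattern in payload:
                return rule  # drop the traffic, log the rule that fired
        return None  # no rule matched: the payload is allowed through

    print(inspect(b"GET /../../../../etc/passwd HTTP/1.1"))  # blocked by a rule
    # A payload exploiting an undisclosed vulnerability matches nothing,
    # so it is allowed; this is why appliance vendors want early disclosure.
    print(inspect(b"payload for an undisclosed flaw"))  # None: allowed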

Many of them have their own researchers, or pay independent researchers, to find vulnerabilities before the hackers do. They can then report these to the vendor before disclosure and in the meantime produce a security rule to stop any attacks. It sounds good, but does it work? In a recent test of IP filtering devices run by NSS, an independent testing company in the USA, none of the security appliances on test was able to block 100 per cent of the attacks. The appliances were from the major suppliers in the industry, and the worst performer stopped only 17.5 per cent of the attacks. Even the best stopped only 89.5 per cent, and that was after the appliance vendor had its own highly skilled engineer configure the device for maximum attack coverage. Based on these results, it seems that a little more disclosure could be the order of the day.

So, where does this leave the ordinary end-users who have to protect their networks against exploits for vulnerabilities that they might not even know about? They certainly need to protect their networks with an IP filtering device (IPS, IDS, UTM, firewall), and they need to keep its security rules up to date.

Without ‘full disclosure’ this is a difficult task. One thing they can do is ensure that their security devices are working correctly by using a vulnerability assessment tool. These tools allow the user to replay exploits through the security devices to see whether any get through.

The more comprehensive tools have libraries of threat traffic files that contain the malicious content of the original exploits. Crucially, they must also ensure that the replayed traffic does not actually penetrate the network and reach an endpoint, such as a user's PC.
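As an illustration of the replay idea, the sketch below sends one threat traffic file through the device under test towards a sink host on the far side and reports whether it arrived. Everything specific in it is assumed for the example: the host name, port and threat file path are hypothetical, a sink service is presumed to be listening behind the device, and real assessment tools replay complete captured conversations rather than a single TCP send.

    import socket

    # Hypothetical names for the example: a sink host behind the device
    # under test, and one file from a threat traffic library.
    SINK_BEHIND_DEVICE = ("sink.example.local", 9999)
    THREAT_FILE = "threat-library/example-exploit.bin"

    def replay(path: str) -> bool:
        """Send the payload through the device; True if it was delivered."""
        with open(path, "rb") as fh:
            payload = fh.read()
        try:
            with socket.create_connection(SINK_BEHIND_DEVICE, timeout=5) as sock:
                sock.sendall(payload)
                # The sink acknowledges whatever it receives; silence or a
                # reset suggests the device in the path dropped the traffic.
                return len(sock.recv(4096)) > 0
        except OSError:
            return False  # connection blocked or reset: traffic was stopped

    verdict = "REACHED THE ENDPOINT (device failed)" if replay(THREAT_FILE) else "blocked"
    print(f"{THREAT_FILE}: {verdict}")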

Of course, it is the job of vendors such as Idappcom to make sure that their traffic libraries stay current with the latest exploits available, so they too need to know about vulnerabilities as soon as possible.

When we look at the whole picture it is difficult to come down on the side of either full disclosure or responsible disclosure, which is why the debate still rages on. What we can say is that there must be disclosure at the earliest possible moment. This cannot be driven by the vendors' need to protect themselves commercially. It has to be driven by the needs of the users.

When an exploit is created for a vulnerability, and gets used successfully, it is the users who suffer, not the vendor. However, it is not unreasonable to allow the vendor a period of time to produce a fix before full disclosure, and that is the dilemma, and the reason why there is no consensus.
