There's an old saying: “When you assume, you make an ass of you and me.” A series of assumptions led to the widely reported problems recently caused by the Heartbleed bug. At the core of the issue is a simple coding error – the type that any developer could make. But the error wasn't the real problem; it was the assumptions made by thousands of people globally that led to the headlines and the rush to close off the vulnerability.
· Assumption #1: someone had double-checked and security-tested the code before it was added to the OpenSSL software repository.
· Assumption #2: because open-source development is driven by a community of programmers working together free of commercial imperatives, it produces software with fewer bugs.
· Assumption #3: because OpenSSL is used by tens of thousands of websites and products worldwide, including some of the biggest online brands, and in a range of vendors' security solutions, it's fully tested, robust and secure.
With Heartbleed, all of these assumptions were proved wrong. Let me be absolutely clear: I am not criticising or blaming open-source software development, nor am I criticising any of the individuals involved in developing, deploying or using OpenSSL. The world's most popular commercial software, worked on by huge development teams and used by hundreds of thousands of companies, is just as prone to serious bugs and vulnerabilities. If it wasn't, we wouldn't have Patch Tuesday every month.
But security has to be based on more than just people's assumptions, and more than the fact that thousands of other companies worldwide use the same security code on their websites and as part of their products. There is no global Internet security task force that seeks out and closes off vulnerabilities when they are discovered. The rush to deploy OpenSSL – because everyone else was using it and, being open-source, it was cheap – magnified the scale and seriousness of the Heartbleed flaw's impact. People trusted, but failed to verify that their trust was deserved.
What's in the box?
This also raises the question of why OpenSSL was built into vendors' networking and security software and solutions without being rigorously checked. A huge number of these solutions must also be updated, as the list at the Computer Emergency Response Team (CERT) website shows – creating a headache for corporate IT teams that have to apply those updates.
OpenSSL was probably used for much the same reasons as outlined above: it's cheap, widely deployed, helps to make development and adding new features faster, and is assumed – there's that word again – to be secure.
But it's not good security practice to use a large open-source library as part of a security solution without rigorous checking, because you are then trusting another party to have properly reviewed and tested the code.
It was recently announced that an open-source software group was creating a simpler, cleaner version of OpenSSL, and had already removed nearly 250,000 lines of code and content that wasn't needed. If so many lines could be removed, how many more are completely irrelevant for use on a network security gateway or firewall? How could the vendors building the OpenSSL code into their solutions be sure that all the code was secure? Put simply, they weren't sure: they assumed.
Security solutions need to be rigorously developed, tested and re-tested to ensure any vulnerabilities are removed. They should not include, or rely on, off-the-shelf code that has not been verified as secure, no matter how appealing it is to try to accelerate the development of products. Security must be about trust, founded on a solid technical basis. The Internet already has enough threats and vulnerabilities, and we don't need any more being introduced by dangerous assumptions.
Contributed by David Sandin, product manager, Clavister