Linus's Law, named after Linux creator Linus Torvalds, states that 'given enough eyeballs, all bugs are shallow.' The maxim has often been used to argue that open source applications are somehow 'more secure' than proprietary software.
If proof were needed that such sweeping statements are never a good idea when it comes to risk analysis, a new report from Black Duck reveals significant cross-industry risks related to open source vulnerabilities.
The 2017 Open Source Security and Risk Analysis (OSSRA) is based on an audit of more than 1,000 applications by the Black Duck Center for Open Source Research & Innovation (COSRI). Of these, 96 per cent contained open source components and more than 60 per cent also had open source security vulnerabilities.
Within the financial industry, audited applications were found to contain an average of 52 open source vulnerabilities per app, and 60 per cent of those applications contained vulnerabilities categorised as high-risk.
Retail and e-commerce fared even worse, with 83 per cent of audited applications having such high-risk vulnerabilities.
With estimates suggesting that anywhere between 80 and 90 per cent of code in the applications we use today is open source, this level of vulnerability makes for uncomfortable reading.
Speaking to SC Media UK, Mike Pittenger, vice president of security strategy at Black Duck Software, said that open source is neither more nor less secure than proprietary software, and that the many eyes argument is one of its strengths.
"Black Duck's COSRI security and risk analysis isn't about vulnerabilities not being identified by those many eyes," Pittenger insisted, continuing: "The audits clearly show that very few organisations are doing an adequate job detecting, remediating and monitoring open source vulnerabilities in their applications, even when those vulnerabilities have been identified months, even years, earlier."
SC Media UK asked security professionals whether they thought the 'more eyes make safer software' argument was now dead in the water.
Neil Cook, chief security architect at Open-Xchange, is insistent that "years of evidence shows that a commitment to open source software helps to improve software security by virtue of source code transparency, and the extra pairs of eyes available to spot its flaws."
Yet Alex Mathews, lead security evangelist at Positive Technologies, reckons the argument was thrown out long before the Black Duck analysis. "Openness itself doesn't make the code safe. This year, an expert from our company found a critical Linux kernel vulnerability that had gone unnoticed for seven years!"
Elmar Eperiesi-Beck, CEO at eperi, counters this, saying: "Generally speaking, the more people discuss a topic and add value to it, the better the outcome will be."
Patrick Wardle, director of research at Synack, pointed out to SC that just because something is open source doesn't mean people will audit it. Of course, this doesn't make closed-source projects any more secure, as the number of Microsoft patches each month attests. "The opposite of each of these arguments (open source is more insecure) is the inverse of the counter (closed-source is more secure)," Wardle insists.
That said, Craig Parkin, associate partner at Citihub Consulting, argues that with such a lucrative market for zero-day vulnerabilities, "a zero-day discovered in a non-open source product is perhaps likely to generate more money for the seller as the vulnerability is likely to stay private for longer."
Going back to Synack for a moment, we also spoke with Mikhail Sosonkin, director of research and data, who put forward the example of a recently discovered vulnerability in the Linux kernel dating back to December 2003. The bug was in one of the most exposed parts of the OS: the network stack. So why did all those eyes take so long to spot it?
"I think the answer is because there is no coordination involved," Sosonkin argues. "How would independent eyes know which pieces of the code were looked at, when and how thoroughly?" We need better ways to utilise the eyes and let them self-organise to be more effective, Sosonkin says.
Art Swift, president of the prpl Foundation, counters with the fact that in Linux "it is relatively easy to make choices in the development process to harden or armor the kernel", while admitting that unfortunately "many teams don't make the effort either because they don't know how, or don't think it's important."
Thomas Owen, head of security at Memset, was perhaps most blunt in his answer to our question: "It's a nonsensical comparison," he said. And he has a point.
Open source and closed source are both just software, and all software has vulnerabilities. Owen told us that the eyes of the community include professional pen-testers and the eyes of the developing organisation itself. "The risk posed by a piece of software," he explains, "is a combination of the initial quality of its coding, the quality of developer/owner support for patching, the app's exposure to threat actors and researchers looking for vulnerabilities, and the extent to which it's deployed in the wider world."
Balázs Scheidler, co-founder and CTO at Balabit, agrees the comparison is nonsensical, arguing that "claiming that open source is less secure than proprietary code is misleading and wrong", and pointing out that deploying an unpatched Windows XP is a risky business.
Kyle Wilhoit, senior security researcher at DomainTools, broadly agrees, insisting that "the largest application security risk is a lack of fundamental security awareness by many developers in today's world."
So, bearing all this in mind, is the Black Duck suggestion that open source vulnerability exploits are the 'biggest application security risk that most companies have' a valid one? Not so, according to Mel Llaguno, open source product manager at Synopsys, who says "the biggest challenge most companies face is something more prosaic: keeping their existing software up to date given known vulnerabilities."