Open source security: know your code

Adopting open source software is no longer a question of "if" but of "when", suggests Mike Pittenger.

Mike Pittenger, VP product strategy, Black Duck Software

Forrester reported that in 2015, four out of five developers surveyed had used some form of open source software for development or deployment over the previous 12 months.

The adoption of open source is a good thing overall, leading to faster time-to-market and lower development costs. But if we are relying on open source so widely (and we are), we have an obligation as security professionals to understand what we're deploying. Since 2014, more than 6,000 new vulnerabilities associated with open source have been disclosed. And the fact that the open source code you use today is free from vulnerabilities doesn't mean that it will remain that way in the future.

Good code hygiene goes beyond one-time analysis

When we talk of “code hygiene,” we're referring to minimising vulnerabilities as well as reducing code complexity. Good code hygiene requires visibility into all the components used to build the application, along with information on characteristics that are important to us for a particular use case.

Several activities in the software development lifecycle support good code hygiene, including threat modelling and automated testing (that is, static and dynamic analysis). The shortcoming of these activities is that they cannot identify many of the types of vulnerabilities found in open source, and they provide only a point-in-time snapshot of code hygiene, so they can't account for a changing threat landscape.

The inability of automated tools to identify vulnerabilities in open source is often overlooked, but the record bears it out. Of the thousands of open source vulnerabilities reported each year through NIST, almost all are found by security researchers; very few are first discovered by automated scanning tools. The National Security Agency's Centre for Assured Software reported that the average application security testing tool covers only 14 percent of the code it scans. Heartbleed, for example, was a surprisingly small bug in OpenSSL's implementation of the TLS heartbeat extension. The code had likely been scanned thousands of times by a variety of tools, yet the vulnerability went undetected for two years before a security researcher spotted what turned out to be a classic coding error. The ShellShock vulnerability in Bash was even longer-lived: the bug was introduced to the code in 1989 and not discovered (again, by a researcher) until 2014.
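The bug class behind Heartbleed is easy to sketch: the responder trusts the length field in the request instead of the size of the data actually received. The toy model below is a simplified illustration only, not the real OpenSSL code; the simulated memory layout and function names are invented for the example.

```python
# Toy simulation of the Heartbleed bug class (not the real OpenSSL code).
# "Process memory" holds the received payload followed by unrelated secrets
# that happen to sit next to it in memory.
MEMORY = bytearray(b"hello" + b"\x00" * 3 + b"SECRET_KEY=hunter2")
PAYLOAD_LEN = 5  # number of bytes the client actually sent

def heartbeat_vulnerable(claimed_len: int) -> bytes:
    """Echo back `claimed_len` bytes, trusting the request's length field."""
    # If claimed_len lies, the read runs past the payload and leaks
    # whatever lies beyond it -- the essence of the Heartbleed over-read.
    return bytes(MEMORY[:claimed_len])

def heartbeat_fixed(claimed_len: int) -> bytes:
    """Validate the attacker-supplied length before copying."""
    if claimed_len > PAYLOAD_LEN:
        return b""  # silently drop the malformed request
    return bytes(MEMORY[:claimed_len])

# An honest request gets its payload back; a lying one leaks adjacent
# "memory" from the vulnerable version but is dropped by the fixed one.
print(heartbeat_vulnerable(26))  # includes bytes of SECRET_KEY
print(heartbeat_fixed(26))       # empty response
```

The point of the example is how little code separates the two versions: a single missing comparison, invisible to a scanner that doesn't reason about attacker-controlled lengths, sat in plain sight for two years.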

Knowing your code

A fundamental rule of security is that you cannot defend against threats you are not aware of. Open source enters an organisation's application code through many channels, including in-house developers, vendors, and outside contractors. As a result, most organisations lack visibility into which open source components they are using and where. In fact, audits conducted by Black Duck indicate that 98 percent of companies are using open source software they don't know about, leaving them blind to the security vulnerabilities in those components.

Even within organisations with mature policies for open source use, controls that enforce those policies are often missing. In the typical organisation we speak with, security personnel are in the dark about which open source projects are being used. Most track open source through manual processes that are unreliable and inaccurate.

For example, a large financial services institution requires a listing of licences, alternative libraries, code complexity, and vulnerability history before approving any open source component for internal use. Yet it acknowledged that no controls are in place to ensure that only approved versions of approved projects are used, or to prevent unapproved components from entering the code base. In essence, its policies, however well-meaning, are unenforceable.

What policies do you have in place for open source use?

Your development team probably has a general awareness of the security risks associated with open source, especially given the high visibility of issues like the Heartbleed vulnerability. However, they may not have a full sense of your organisation's level of exposure, due in large part to the difficulty of tracking down the open source actually in use and the need to continuously track vulnerabilities in those projects. To avoid becoming tomorrow's security headline, sit down with the head of your application development team and ask these questions:

  • What policies do we have for using open source code?
  • Do we have an up-to-date list of open source components in our applications?
  • How was that list created? Who maintains it, and at what frequency?
  • What controls do we have in place to enforce our policies?
  • Who (specifically) is tracking vulnerabilities for all our components over time?
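In practice, answering these questions comes down to keeping a machine-readable inventory of components and rechecking it as new advisories appear. A minimal sketch of that idea follows; the component names, versions, and the CVE identifier are made-up placeholders, and a real pipeline would pull its inventory from a build manifest and its advisory data from a feed such as NIST's National Vulnerability Database rather than hard-coded dictionaries.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """One open source component in use, identified by name and version."""
    name: str
    version: str

# What the build actually uses (hypothetical inventory).
inventory = [
    Component("examplelib", "1.0.2"),
    Component("otherlib", "4.1.0"),
]

# Known-vulnerable versions keyed by (name, version) -- placeholder data
# standing in for a real advisory feed.
advisories = {
    ("examplelib", "1.0.2"): ["CVE-0000-0001"],
}

def flag_vulnerable(components, advisories):
    """Pair each component with its known advisories (empty list if none)."""
    return {
        (c.name, c.version): advisories.get((c.name, c.version), [])
        for c in components
    }

report = flag_vulnerable(inventory, advisories)
for (name, version), cves in report.items():
    status = ", ".join(cves) if cves else "no known advisories"
    print(f"{name} {version}: {status}")
```

The check itself is trivial; the hard part, as the questions above suggest, is keeping the inventory accurate and rerunning the check continuously, since a component that is clean today may acquire a disclosure tomorrow.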

Developing secure software has always been a challenge. While using open source code makes business sense for efficiency and cost reasons, open source can undermine security efforts if it isn't well managed with policies, controls, and the right tools in place.

Contributed by Mike Pittenger, VP product strategy, Black Duck Software