Container Security: The Code You Don't Know About

Mike Pittenger discusses what he believes is the most dangerous code in your application, whether standalone or containerised

Mike Pittenger, vice president, security strategy, Black Duck Software

A few years ago, the Applied Crypto Group at Stanford University published a research paper titled “The most dangerous code in the world: validating SSL certificates in non-browser software”, which contended that SSL certificate validation was completely broken in many security-critical applications and libraries. It was just one of a long list of security issues associated with the SSL protocol, including the DROWN vulnerability, which allows attackers to read and steal sensitive communications – including passwords, credit card numbers, trade secrets, and financial data – from servers still supporting SSLv2 connections.

While the concerns around SSL security are more than valid, I'd argue that the most dangerous code in your application – whether standalone or containerised – could be the code you may not even know is in your container.

Do you know what open source you're using?

As organisations turn to containers to improve application delivery and agility, the security of container contents is coming under increased scrutiny. Your containers almost certainly bundle third-party software and Linux modules, and may be using outdated and insecure open source components. Popular container images contain hundreds – sometimes thousands – of open source packages, comprising libraries, application frameworks and other utilities and middleware. And with more than 6,000 open source vulnerabilities discovered since the beginning of 2014, organisations need to know what open source code is inside their containers – and ensure it is up to date and secure.
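For Debian- or Ubuntu-based images, even a rough inventory of operating-system packages is a useful starting point. The sketch below is illustrative only: it assumes the Docker SDK for Python and a local Docker daemon, the image name is a placeholder, and it covers only OS-level packages, not language-level dependencies pulled in by pip, npm or Maven.

    # A rough inventory of OS-level packages in a Debian/Ubuntu-based image.
    # Assumes the Docker SDK for Python ("pip install docker") and a running
    # local Docker daemon; "myorg/myapp:latest" is a placeholder image name.
    import docker

    client = docker.from_env()

    output = client.containers.run(
        "myorg/myapp:latest",
        ["dpkg-query", "-W", "-f=${Package} ${Version}\\n"],
        remove=True,  # discard the throwaway container afterwards
    )

    # Each line is "<package> <version>"; this list is the raw material for a
    # bill of materials that can be checked against vulnerability databases.
    packages = [line.split(" ", 1)
                for line in output.decode().splitlines() if " " in line]
    print(f"{len(packages)} packages found")
    for name, version in packages[:10]:
        print(name, version)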

Yet the majority of firms don't know what open source they're using, and are blind to the vulnerabilities hidden within the code inside their containers. A recent study, "The State of Open Source Security in Commercial Applications," found that 98 percent of the commercial applications reviewed contained open source code. This isn't surprising, as open source use has consistently proven to reduce development costs and to accelerate time to market. But what was surprising was the lack of insight companies had into the open source they were using. The study found over 100 discrete open source components in the average application – in most cases, more than twice the number that respondents thought they were using. Fully two-thirds of the applications included components with publicly disclosed security vulnerabilities.

If companies don't have visibility into the open source they use, and don't put processes into place to track its ongoing security, they provide adversaries with a simple path to attack. With the popular use of open source, its security management can't be ignored without putting organisations at risk of being blindsided by the next high-profile and costly security exploit.

Static application security testing, dynamic application security testing, and run-time application self-protection (usually referred to as SAST, DAST, and RASP) are all essential for finding application vulnerabilities in custom code. But on their own they provide an incomplete picture of risk. Of the more than 6,000 open source vulnerabilities reported since 2014 noted earlier, only a handful were detected by traditional static or dynamic testing tools.
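The reason is that a known open source vulnerability is identified by matching a component name and version against a published advisory, not by analysing your own source code. A minimal, illustrative check using the published affected range for Heartbleed (CVE-2014-0160), which hit OpenSSL 1.0.1 through 1.0.1f:

    # Known vulnerabilities are matched on component identity and version,
    # not by scanning application source. Heartbleed (CVE-2014-0160)
    # affected OpenSSL 1.0.1 through 1.0.1f; 1.0.1g contained the fix.
    HEARTBLEED_AFFECTED = {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c",
                           "1.0.1d", "1.0.1e", "1.0.1f"}

    def flags_heartbleed(component: str, version: str) -> bool:
        """Return True if this component/version pair matches the advisory."""
        return component == "openssl" and version in HEARTBLEED_AFFECTED

    # The match succeeds without inspecting a single line of your own code.
    print(flags_heartbleed("openssl", "1.0.1e"))  # True  -> vulnerable
    print(flags_heartbleed("openssl", "1.0.1g"))  # False -> patched release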

Detection, management & ongoing monitoring of open source

Organisations need a solution that can detect, manage, and monitor open source vulnerabilities: one that builds a bill of materials, cross-references it against vulnerability databases, and notifies the developer or operations manager if a problem exists. There is another piece needed to complete the puzzle. Identifying open source components and versions is essential to knowing whether you are using software that contains a known vulnerability, but you also need a process to continually update that information even after your container is deployed. Applications age, new versions are released and new vulnerabilities are discovered, all widening the potential attack surface of your container.
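As a rough sketch of that cross-referencing step, the example below queries the public NVD REST API for each entry in a small, hypothetical bill of materials and raises an alert when matches are found. The endpoint and keywordSearch parameter are assumptions about the NVD 2.0 API; a production scanner would match on CPE identifiers and affected version ranges, and would re-run on a schedule so CVEs published after deployment are still caught.

    # Cross-reference a (hypothetical) bill of materials against the NVD.
    # Assumes the NVD 2.0 REST API and its keywordSearch parameter; unkeyed
    # requests are rate-limited, so a real pipeline would use an API key.
    import requests

    NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    # Placeholder bill of materials extracted from a container image.
    bill_of_materials = [("openssl", "1.0.1f"), ("struts", "2.3.15")]

    for component, version in bill_of_materials:
        resp = requests.get(NVD_API,
                            params={"keywordSearch": f"{component} {version}"},
                            timeout=30)
        resp.raise_for_status()
        cve_ids = [item["cve"]["id"]
                   for item in resp.json().get("vulnerabilities", [])]
        if cve_ids:
            # A real pipeline would notify the developer or operations
            # manager (ticket, e-mail, chat alert) rather than just print.
            print(f"ALERT: {component} {version} matches {len(cve_ids)} CVEs, "
                  f"e.g. {cve_ids[:3]}")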

Verifying the provenance of containerised application code is not enough. Issues such as exploitable vulnerabilities in application components require processes that ensure the security of your containerised applications. Whether building applications for containers or for traditional deployment, it's critical to use security-testing tools to gain visibility into your code and identify vulnerabilities in it. Equally important is understanding the components (including open source) in your applications, and any risk those components may introduce. Without that visibility, organisations risk exposing their containerised applications to attack.

Contributed by Mike Pittenger, vice president, security strategy, Black Duck Software