It wasn't long ago that containers were just a handy way to quickly provision systems for prototyping and testing. Times have changed: enterprises now run containers in production, often across a variety of cloud vendors. The danger in this brave new world is that adoption always precedes security. Securing these new cloud-native architectures requires a new approach in terms of both technology and enforcement.
The allure of cloud
Whether deployed as a public or private model, one way that cloud technology and its underlying tools improve developer productivity is by leveraging existing code from open source repositories. However, this opens the door to vulnerabilities, because you no longer have detailed knowledge of the entire code base. You will need to introduce proper systems to scan for, and highlight, any potential risks.
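The kind of scan described above can be sketched as a simple check of pinned dependency versions against an advisory list. The package names and advisory data below are purely illustrative; a real pipeline would pull from a live vulnerability database rather than a hard-coded map.

```python
# Hypothetical advisory data: package -> set of known-vulnerable versions.
# Illustrative only; real scanners query a maintained vulnerability feed.
KNOWN_VULNERABLE = {
    "examplelib": {"1.0.2", "1.0.3"},
    "otherpkg": {"2.1.0"},
}

def scan_dependencies(pinned):
    """Return (package, version) pairs that match a known advisory."""
    findings = []
    for package, version in pinned.items():
        if version in KNOWN_VULNERABLE.get(package, set()):
            findings.append((package, version))
    return findings

pinned = {"examplelib": "1.0.2", "safe-pkg": "3.4.5"}
print(scan_dependencies(pinned))  # flags examplelib 1.0.2
```

Running such a check on every build, rather than at release time, is what lets vulnerable code be caught before it is ever introduced.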
Another goal in the development process is to strive for a portable solution, allowing a multi-cloud platform choice and the potential to swap providers, whether to lower costs or gain new functionality. In fact, containers deliver some of the core properties that architects have struggled to achieve for decades: decoupled microservices that by their very nature offer scale, redundancy, and isolation.
Thanks to containers and Agile methodologies, developers now continuously update applications, or components of applications, and push those updates live to add new functionality, support the latest devices, or simply fix bugs. There are, however, several models for deploying cloud-native applications:
Managed containers - either in the data centre or on a cloud provider's servers
Serverless containers - where the cloud provider manages the entire container infrastructure
Serverless functions - these eliminate concerns about where the service is running
Hybrid - many applications are already using combinations of these approaches, implementing each where they fit best
No matter which option you choose, it is vital that the system is secure end to end, especially if any part of it handles sensitive data. Each model, however, poses unique challenges when it comes to implementing a consistent security policy and monitoring the overall system.
The first major challenge is the dynamic nature of the environment. The flexibility to provision new services automatically, often at little cost and at the developer's discretion, means that the security team may no longer have adequate time to evaluate risks and provide late-stage guidance to ensure compliance. There may no longer be a window for DevSecOps to review the application and its infrastructure at its "leisure".
Another new problem is the loss of complete control over the physical network infrastructure. For example, the services could be moved across data centres in different locations, or the IT operations and security teams might not even know where the services are running, as is the case in serverless models.
Traditional security tools cannot handle the velocity, scale, and dynamic networking capabilities of containers and serverless infrastructure. Adapting to this new reality requires supporting three key requirements:
* Integrating security into the build process, also referred to as "Shift Left" - the goal is to identify any issues early on and prevent vulnerable code from being introduced in the first place.
* Applying consistent rollout and enforcement policies - regardless of the organisation's choice of cloud provider or technology "stack".
* Implementing tight controls at the application level - for example, whitelisting and baselining can tightly limit the ability of services to behave in ways inconsistent with their intent.
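The third requirement, application-level whitelisting, amounts to a deny-by-default policy: a service's permitted actions are declared up front, and everything else is refused. The service and action names below are made up for illustration.

```python
# Hypothetical whitelist: each service declares the only actions it may take.
ALLOWED_ACTIONS = {
    "payments-service": {"read:orders", "write:payments", "connect:db"},
}

def is_permitted(service, action):
    """Deny by default: only explicitly whitelisted actions pass."""
    return action in ALLOWED_ACTIONS.get(service, set())

assert is_permitted("payments-service", "write:payments")
assert not is_permitted("payments-service", "spawn:shell")  # outside intent
```

Because the list is derived from what the service is *meant* to do, even a compromised container is limited to behaviour consistent with its declared intent.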
The security benefits of cloud
Cloud-native environments offer several properties that make securing systems easier, as long as they are managed correctly. Firstly, containers are created as images and deployed; they are meant to be immutable, which removes the dangers of in-place patching because only whole images are ever deployed. Secondly, containers often underpin applications built on a microservices architecture, so each container implements a separate, simple function. Lastly, container images are declarative, so their contents can be used to determine their intended use.
Organisations can, and should, take advantage of a new type of security model in which container images and serverless functions are examined, and then made immutable at runtime. Comparing running workloads against the original images flags up security problems.
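The image comparison described above can be sketched with content digests: record each approved image's digest at build time, then check what is actually running against that record. This toy version hashes raw bytes with `hashlib`; real registries compute digests over image layers, but the drift-detection idea is the same.

```python
import hashlib

def image_digest(image_bytes):
    """Content digest of an image; a stand-in for a registry digest."""
    return hashlib.sha256(image_bytes).hexdigest()

# Digests recorded when images were scanned and approved at build time.
approved = {"api:v1": image_digest(b"original image contents")}

def verify(tag, running_bytes):
    """True only if the running image matches the approved digest."""
    return approved.get(tag) == image_digest(running_bytes)

assert verify("api:v1", b"original image contents")
assert not verify("api:v1", b"tampered image contents")
```

Any mismatch means the workload has drifted from the image that was scanned, which is exactly the signal the immutability model relies on.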
The simplicity of containers used in microservices applications facilitates behavioural profiling. This can be used to generate whitelists of actions and resources, preventing a function from accessing anything it shouldn't. Machine learning is now being used to analyse applications at runtime and to alert DevSecOps teams if anything out of the ordinary occurs. This shifts the model away from relying on malware solutions, which only handle already-discovered attacks, towards reacting to potentially any new threat as it occurs.
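In its simplest form, behavioural baselining records the actions a container performs during a profiling window and alerts on anything new at runtime. The toy below is a stand-in for the ML-driven profiling described above; the event strings are invented for illustration.

```python
def build_baseline(observed_events):
    """Learn the set of actions seen during the profiling window."""
    return set(observed_events)

def detect_anomalies(baseline, runtime_events):
    """Return runtime events that were never seen during profiling."""
    return [event for event in runtime_events if event not in baseline]

baseline = build_baseline(["open:/etc/config", "connect:10.0.0.5:5432"])
alerts = detect_anomalies(baseline, ["connect:10.0.0.5:5432", "exec:/bin/sh"])
print(alerts)  # ['exec:/bin/sh']
```

Because a microservice does one simple thing, its baseline is small and stable, which is what makes this approach practical for containers where it would be hopeless for a monolith.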
Cloud-native environments might be a whole new challenge for DevSecOps, but this new approach to application security creates a highly controlled system where the risks can be reduced, not only before deployment, but also during runtime. This next generation of applications can be more secure and more reliable, whilst still supporting and benefitting from the fast-paced development and time-to-market of Agile practices.
Contributed by Benjy Portnoy, CISSP, CISA, director of DevSecOps, Aqua Security.
*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media UK or Haymarket Media.