Anyone responsible for information security will understand the struggle to balance what business leaders want to achieve with the commensurate security requirements. Being greeted late on a Friday and told that a “super-important project” has to go live over the weekend is likely a familiar scenario. The problem? Supporting these projects is important, but security's hands are tied. It has to ask the dreaded question: “Have the endpoints, servers and networks been penetration tested?” This gets a mixed reaction of confusion and anger, leaving business leaders of the opinion that security is a blocker – the “department of no” that prevents “live projects” from going live.
The vital testing tool
So, what is penetration testing and why do we have it? In simple terms, “pen testing” is an assurance mechanism – a vital tool to aid risk management. It's a way to identify vulnerabilities in IT systems, networks and applications that would otherwise remain undiscovered.
Though pen testing isn't there to “fix” or remove all vulnerabilities, it helps to build an understanding of an environment's security posture. It also allows an organisation to make balanced risk decisions, assuming qualified security professionals are on hand to receive the results and contextualise them for a business audience. Two paradigm shifts now inhibit our ability to provide this important, cost-effective security assurance: cloud computing and agile development.
Agile and flexible
Rightly or wrongly, penetration testing is perceived to be expensive, both financially and in terms of resource consumption. Testing is performed at certain project milestones, which are relatively rigid. This point-in-time model is disconnected from project lifecycle processes, and security is begrudgingly engaged because it has to be. Yet, despite its unpopularity, this method has, for the most part, always worked.
In a new world, where this approach is no longer possible, project managers and development teams see agile and lean methodologies as the answer to slice through unnecessary bureaucracy and organisational project baggage. For the security team, agile is seen as a euphemism for “winging it”: a nebulous, unmeasurable series of parallel activities.
Agile is a development methodology, not a project methodology – an often overlooked but salient security consideration. Agile methods consist of a series of short, iterative sprints, often with multiple workstreams running in parallel. Agile allows an organisation to “fail fast” and to work without prescriptive, often burdensome, requirements. It is not a means to throw the project management rulebook out of the window, but to aid development cycles. Organisations have adopted agile methodologies with empirical success. Yet many security functions are still trying to shoe-horn waterfall-style testing into this model, and square pegs in round holes help no-one. If security assurance is to succeed in an agile world, then security must be a key part of the sprint planning process.
It needs to be embedded in a cross-functional development workforce providing on-hand consultancy. The earlier security testing can be performed, the quicker (and cheaper) remediation becomes. For application testing, enterprises must deploy automated code scanning capabilities that can be embedded into the development planning processes. If code can be “checked in” to repositories and scanned periodically (ideally daily) for vulnerabilities, developers are provided with suitable timeframes for remediation and retesting. This approach can be used to incentivise and empower developers to fix their own code but with the safeguards to avoid a “poacher turned gamekeeper” situation.
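As a minimal sketch of what a daily scan gate might look like, the script below blocks a build when the scanner's report contains high-severity findings, while letting lower-severity items flow into the backlog for later sprints. The report format (one “SEVERITY description” finding per line) and the blocking severities are illustrative assumptions, not any particular scanner's interface.

```shell
#!/bin/sh
# scan_gate REPORT_FILE
#   Returns 0 when the report contains no blocking findings, 1 otherwise.
#   "CRITICAL" and "HIGH" as the blocking severities are a hypothetical
#   policy choice; real teams would agree their own threshold.
scan_gate() {
    report="$1"
    # Count findings the team has agreed must block a release.
    blocking=$(grep -cE '^(CRITICAL|HIGH) ' "$report") || blocking=0
    if [ "$blocking" -gt 0 ]; then
        echo "FAIL: $blocking finding(s) must be fixed before release"
        return 1
    fi
    echo "PASS: nothing blocking; remaining items go to the sprint backlog"
}

# Demo: a daily report with one blocking and one informational finding.
printf 'HIGH sql injection in /login\nLOW verbose server banner\n' > daily-report.txt
scan_gate daily-report.txt || echo "build blocked"
```

Run nightly against each repository, a gate like this gives developers a predictable remediation window rather than a surprise at the project milestone.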
Trade-offs come into play regarding infrastructure testing in a DevOps and agile world. DevOps teams need infrastructure that is available instantaneously. Security teams must support this requirement or risk being quickly bypassed. That means ensuring automated system configurations can be deployed quickly and consistently.
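To illustrate what “quickly and consistently” can mean in practice, here is a minimal, idempotent configuration-enforcement sketch: each run drives a config file toward a hardened baseline, and a second run changes nothing. The baseline keys and target file are illustrative; a real estate would typically use a configuration management tool such as Ansible, Chef or Puppet rather than hand-rolled scripts.

```shell
#!/bin/sh
# enforce KEY VALUE FILE
#   Ensure FILE contains the line "KEY VALUE", replacing any existing
#   setting for KEY. Running it twice leaves the file unchanged.
enforce() {
    key="$1"; value="$2"; file="$3"
    if grep -q "^$key " "$file" 2>/dev/null; then
        # Setting exists with some value: rewrite it in place.
        sed -i "s/^$key .*/$key $value/" "$file"
    else
        echo "$key $value" >> "$file"
    fi
}

# Demo: drive an sshd_config-style file toward a hardened baseline.
: > demo.conf
enforce PermitRootLogin no demo.conf
enforce PasswordAuthentication no demo.conf
enforce PermitRootLogin no demo.conf   # second run is a no-op
cat demo.conf
```

Because the same script produces the same end state wherever it runs, newly provisioned systems start from a known-good configuration instead of waiting for a point-in-time test to find the drift.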
Pen testing in the cloud
As organisations shift to the cloud, a change is often also needed concerning trust. As data is moved “off-prem” and under the control of a cloud service provider (CSP), there is often a need to “trust but verify” with the CSP. This is certainly true of the software-as-a-service model, where end-to-end infrastructure and application testing simply isn't possible.
For years, on-premise pen tests have been scoped to test each layer of a tiered architecture, at a time and date to suit the customer, using a blend of misuse cases and exploit scenarios. In a world of shared resources and multi-tenancy, infrastructure testing like this is more complicated.
There are still some myths surrounding cloud testing, notably that “cloud” and “black-box” are intrinsically linked terms. Some feel that cloud adoption means less governance or security, though many challenge this.
If we're trusting our CSPs, trust needs to be built and periodically re-evaluated. Establishing security requirements for all environments, on-premise or otherwise, is key. Location of data should not be a consideration here. It's imperative that organisations understand the classification of the data they're sending to the cloud and ensure that controls exist commensurate with that data. Enterprises must make sure that if they are selecting infrastructure in the cloud, the environment has readily available security capabilities to mitigate the threats associated with malicious and accidental threat actors. Where infrastructure testing isn't possible, organisations should look for mitigation. Does the cloud provider have ISO 27001 certification? Can it provide information on its legal and regulatory position? A “take it or leave it” stance from a vendor is not one to be handled lightly…
Contributed by Chris Hodson, CISO EMEA, Zscaler
*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media or Haymarket Media.