Invite attacks to identify weaknesses

Intelligence-led, third-party red-teaming can identify the blind spots that in-house teams thought they had covered, suggests Simon Saunders.


Every business would like to know how it is likely to be attacked, and what the outcome would be, in advance of a real attack happening. This type of information is exactly what intelligence-led testing is designed to provide.

If risk management were a perfect science there would never be such a thing as a major incident. Rather, there would be scores of minor incidents, all tolerable, but nothing that had a tangible impact on the business. However, risk management is not a perfect science. Firstly, a risk must be identified; a business cannot manage a risk it does not know about. Secondly, throughout the assessment process, risk levels must be calculated correctly based on current, accurate information. If a risk is misunderstood, all subsequent decision making may prove unwise. It is also true that risk calculations can date relatively quickly, and seemingly disparate risks can become intertwined over time to represent an unexpected cumulative exposure.
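As a purely illustrative sketch (the scoring scale, tolerance threshold and scenarios below are invented for this article and are not drawn from any standard methodology), the following shows how two risks that each look tolerable on a simple likelihood-times-impact calculation can chain together into an exposure neither register entry suggested:

```python
# Illustrative only: a toy risk register, not any standard methodology.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    def score(self) -> int:
        # A common simplification: score = likelihood x impact
        return self.likelihood * self.impact


TOLERANCE = 10  # hypothetical risk appetite threshold

# Two risks that each look tolerable in isolation...
weak_vpn = Risk("Legacy VPN without MFA", likelihood=2, impact=4)
shared_admin = Risk("Shared local admin password", likelihood=3, impact=3)

for risk in (weak_vpn, shared_admin):
    verdict = "tolerable" if risk.score() <= TOLERANCE else "act"
    print(risk.name, risk.score(), verdict)

# ...but chained together (VPN foothold, then lateral movement via the
# shared password) the combined scenario is both more likely and more
# damaging than either register entry implies.
chained = Risk("VPN foothold escalated via shared admin password",
               likelihood=3, impact=5)
print(chained.name, chained.score())  # 15, above the tolerance threshold
```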

The steady stream of media-reported incidents at large, security-conscious organisations is real-world evidence of this: it is unlikely the risks at the core of those incidents would have been accepted had they been known about or properly understood. Every organisation has risk blind spots, and it is within these areas that real-world attackers thrive, especially the more professional attackers who will be at the heart of most significant incidents.

As a family of services, intelligence-led testing aims to identify the unknown or miscalculated risks. Once identified, they can be brought into risk management and addressed appropriately. The specific techniques required to achieve this outcome will be dictated by the security posture of the business and the types of threats being faced.

For organisations at the start of their information security journey, there is often a ready acceptance that risks are unknown and/or miscalculated, and almost any information security professional should be able to identify threats and calculate risks with little need to call on outside sources. For more mature organisations, however, it becomes harder for those close to the business to identify what has been overlooked or miscalculated. The first step is often a workshop bringing together IT, information security, the wider business and an external security consultancy to discuss the various threats and how they have been met. Quite often, discussion alone will identify risks that haven't historically been considered, or where the mitigating action is flawed. This is a relatively inexpensive exercise that can deliver a lot of value.

If this kind of collaboration already occurs, particularly in larger organisations with a wide array of in-house consultative capabilities, the next step is invariably red teaming. Red teaming involves one or more penetration testers being given relatively free rein to attack an organisation, with the wider workforce unaware of the exercise. Testers will naturally focus on the weakest areas, identified through intelligence gathering. As well as identifying these weaknesses, this type of exercise also establishes the effectiveness of the controls and countermeasures that are meant to detect or stop attacks. These exercises can't cover or identify every unknown or miscalculated risk, but in addition to the specific weaknesses identified, subsequent root cause analysis can often reveal further similar weaknesses, all of which can be subject to risk treatment.

The next level up is a CBEST-style project. The Bank of England, FCA, HM Treasury and the financial sector came together to produce a framework for conducting high-end intelligence-led testing. Whilst a formal CBEST engagement will only be conducted in the finance sector and under specific circumstances, the general model has wider applicability. In addition to a penetration tester, CBEST also takes advantage of a relatively new type of provider: an expert in threat intelligence. Through a range of means, threat intelligence organisations can provide detailed insight into the current threats to an organisation: the types of information likely to be targeted, by whom and how. This provides a very specific brief for the penetration testing organisation to follow, using a range of techniques and specialist tools to closely match what a real-world attacker is likely to do. This allows an organisation to understand the effectiveness of its current countermeasures against a current and real threat.
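As a rough illustration only (the field names and scenario below are hypothetical and do not represent the formal CBEST deliverables), a threat-intelligence-derived brief can be thought of as a small set of structured scenarios that the testing team then emulates:

```python
# Hypothetical sketch of a threat-intelligence-derived test brief; field
# names and content are invented for illustration, not a CBEST artefact.
from dataclasses import dataclass, field


@dataclass
class ThreatScenario:
    actor: str                 # who is likely to attack
    target_information: str    # what they are after
    likely_techniques: list    # how they tend to operate


@dataclass
class TestBrief:
    organisation: str
    scenarios: list = field(default_factory=list)


brief = TestBrief(
    organisation="Example Retail Bank",
    scenarios=[
        ThreatScenario(
            actor="Financially motivated crime group",
            target_information="Payment switch credentials",
            likely_techniques=["spear phishing", "credential reuse",
                               "living-off-the-land lateral movement"],
        ),
    ],
)

# The testing team works through each scenario, emulating the named actor's
# techniques rather than running a generic vulnerability scan.
for scenario in brief.scenarios:
    print(f"Emulate {scenario.actor} targeting {scenario.target_information}")
    for technique in scenario.likely_techniques:
        print("  -", technique)
```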

What marks out these types of work as different from the norm is their use of threat intelligence, to a greater or lesser extent, to identify more weaknesses to bring into the scope of risk management. Organisations often get stuck in a process-derived rut and assume that it is effectively identifying their risks. Whilst annual and project-based security reviews identify a range of risks, they do not identify every risk, simply due to limitations in scope. Media-reported security incidents demonstrate this, and that is before considering what occurs behind closed doors. That the Bank of England has taken the lead here and is pushing the financial sector towards high-end intelligence-led testing is strong evidence that organisations have the means to better understand their weaknesses, but not enough are using it.

Contributed by Simon Saunders, managing consultant, Portcullis Computer Security.
