China in your band(width)

Put simply, politically motivated hacking is no longer 'the next threat' to national security.

The blame game that both China and the US are currently engaged in illustrates quite nicely how a foreign hacker's targeting of a country's critical national infrastructure (CNI) is real, is 21st century espionage and, to a certain extent, is silent. 

Admittedly, the story is largely made up of inflated rhetoric, but despite the hype and fear surrounding the subject and the West's vulnerability to attack, war waged with computers rather than troops on the ground has been going on every single day for years.

When nation states were establishing themselves and their borders centuries ago, having a permanent military force disciplined in tactics ultimately translated into international power and might. Countries now need a disciplined digital front line to retain (or grow) their authority on the world stage. This monumental change to the concept of war raises questions about the definition of armed conflict itself.

Military tactics
This is why the recent Mandiant APT1 report, 'Exposing One of China's Cyber Espionage Units', has proved controversial. The report is a riveting read and gives a rare insight into the threat actors behind attacks - the people behind the technologies.

Crucially, it highlights that 'cyber warfare' is an active part of nation states' warfare activities and strategy - just like an air force or a navy. I would recommend that anyone interested in the weight respective nations place behind their cyber 'armies' look at the United States Cyber Command. While the vast majority of the US Army is bracing itself for all-too-familiar recession belt-tightening, the Cyber Command is on a hiring spree.

It must be made clear that the report has not just been blindly accepted as gospel. There have been repeated accusations that the screenshots displaying malicious activity were faked, and questions have been raised over the conclusions and possible 'expectation bias' - we expect it to be China, therefore it is China.

The Chinese government and military have denied the link between APT1 and PLA Unit 61398, but then again, would anyone really expect otherwise? Additionally, the report's credibility has been further undermined by fake, malware-laden versions circulating online and being used to stage attacks.

The times they aren't a-changin'
One of the most notable surprises revealed by the report is that the attack methodologies and approaches used have remained unchanged for a number of years.

Broadly speaking, the methodology is the basis for all penetration tests. Admittedly, the focus of initial compromise has shifted from attacks on the perimeter to attacks against the human, but ultimately this is simply a reflection of areas where security programs are working.

A number ('number' being the operative word – it certainly isn't everyone) of defenders have finally cottoned on to perimeter protection. Clearly, the cost of exploit development in the face of increasingly robust external-facing systems has become prohibitive - which leaves the most traditional of attacks, those against people, as the most obvious, cost-effective method remaining.

Silo security
Despite the report highlighting a multitude of controversial points, such as state-sponsored hacking and the lifecycle of an APT1 attack, the most pertinent issue is simple – it illustrates how security teams are not protecting their networks.

Irrespective of whether APT1 has links to China or not, the report details specific techniques that haven't changed notably in a number of years. It is an illustration of the total and abject failure of widely accepted approaches to securing computer systems, and of the narrow, isolated focus security assurance programs take when attempting to protect assets.

In my opinion, the most interesting and provocative parts of the report are the small sections called 'move laterally' and 'complete mission'. These are the crucial points of failure and make it clear that information security has gone too far into the weeds. This is where everything goes wrong – because we all know very well that there is no ‘complete mission' when it comes to security.

Here's the official line - security is seen as a 'project' step, a design goal for a 'system', a box to be ticked in the grand (albeit time-constrained) project plan for the 'next big venture'.

By moulding and packaging organisational security into a discrete deliverable, such as a project cost against a specific system, organisations neglect to identify the risks inherent in the connected nature of their networks.

This commoditisation of security, promoted in part by frameworks (or, increasingly, bureaucratised mandates) such as the PCI DSS, leads to major failings.

Organisational information systems need to communicate, share information and trust each other. Without this triumvirate of dependence, an information system working in isolation will always fail to achieve its desired goal.

Being human
When exposing a system to audit functions such as penetration testing, its interdependent systems must not be the only components taken into account. We need to look at the bigger picture from, dare I say it, a high level.

Management should look across the whole security landscape, and their message should form the context and the entire foundation on which any test is built. Testing a system in isolation, and setting bounds and limits on that testing, will always fail to identify risks and to effectively protect our national infrastructure.

Furthermore, a user will never operate on just one system within a corporate or government network. The human being and their computer need to be treated as another 'system' forming part of a holistic approach to security. As human beings we make mistakes by nature, and cyberspace – now an integral part of our lives – has proven very effective at enabling and amplifying those mistakes.

The only common-sense approach to identifying, through security testing, the risks posed to a system is to remove the boundaries and limitations placed on that testing and to examine the entire organisation as a single information system. This can only be achieved by abstracting information-system risk away from project owners and assessing the bigger picture.

Mark Crowther is a consultant at Information Risk Management