Fred Piper and Malcolm Marshall discuss risk mitigation and coming developments that may make your current methodology largely ineffective.
Nothing is totally safe, a truism that people often forget even when they claim to be practising ‘risk management'. The latter implies that something might go wrong, but that the likelihood of this happening, and/or the impact if it does, has been reduced to a level that is judged acceptable when compared with the benefits of taking the risk.
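The trade-off just described – likelihood and impact weighed against the benefits of taking the risk – is often expressed as an expected-loss calculation. The sketch below is purely illustrative: the figures, the `acceptable_loss` threshold and the `worth_taking` helper are invented for the example, not taken from the article, and real risk decisions are rarely this tidy.

```python
# Toy expected-loss model of risk acceptance: risk is treated as
# likelihood x impact, and a risk is taken only if the residual
# expected loss sits within an agreed threshold and the benefit
# still outweighs that loss plus the cost of any mitigation.

def expected_loss(likelihood: float, impact: float) -> float:
    """Expected loss of an event: probability times cost if it occurs."""
    return likelihood * impact

def worth_taking(benefit: float, likelihood: float, impact: float,
                 acceptable_loss: float, mitigation_cost: float = 0.0) -> bool:
    """Accept the risk only if the residual expected loss is within the
    agreed threshold and the benefit exceeds loss plus mitigation cost."""
    loss = expected_loss(likelihood, impact)
    return loss <= acceptable_loss and benefit > loss + mitigation_cost

# Unmitigated: a 10% chance of a 500k loss, against a 100k benefit.
print(worth_taking(benefit=100_000, likelihood=0.10, impact=500_000,
                   acceptable_loss=20_000))          # residual loss 50k -> False

# Mitigated: controls costing 15k cut the likelihood to 2%.
print(worth_taking(benefit=100_000, likelihood=0.02, impact=500_000,
                   acceptable_loss=20_000, mitigation_cost=15_000))  # 10k -> True
```

Note how the model also hints at the aggregation problem raised below: a loss that is acceptable to one organisation may be anything but acceptable when many organisations are exposed to it simultaneously.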
Managing risk is by no means straightforward. Changing the likelihood of a bad event is probably not entirely under the risk owner's control. It might be uneconomic to reduce the impact of the event happening (even if investment is initially made in a fall-back system, for example, justifying the continuing maintenance of the fall-back becomes increasingly hard as time passes without it being needed). What is judged acceptable after a bad incident might be entirely different from what was judged acceptable beforehand. And one has to ask: for whom is the residual risk judged to be acceptable? In particular, while a certain level of risk might be acceptable to one or a few organisations, if many organisations take the same risk the collective effect can be much worse – as we have seen most recently in the banking crisis.
With technological developments in particular, there has been a recurring pattern in the way that risks are dealt with. In transportation, power generation, manufacturing and agriculture, we have seen the same evolving process: an advance is introduced; it gains popularity because of the benefits it delivers; the risks of the advance are increasingly realised; and modifications are introduced to reduce the risk while still delivering the benefits. Whether it is electrical safety standards, brakes on cars, signalling systems for trains, or pesticide use in farming, improvements happen in an evolutionary fashion to keep risks under control. Even when major new risks have been discovered, such as the long-term environmental damage of CFCs (widely used as refrigerants until the damage to the ozone layer was recognised) or DDT (used for pest control, but then found to accumulate in the food chain), the ‘solution' has been to develop alternative chemicals that deliver the benefits with less damage.
Information and communications technology has developed along a similar path. The security of the first computers was established by putting locks on doors. Additional security measures were introduced, first to protect the operating system from user programs, and then to protect one user's programs from another's. Anti-virus scanners were introduced to reduce the risk that compromised programs pose to a user (and to the system), and, as more machines became networked, firewalls protected machines from each other. A plethora of security measures have now been implemented which, to varying extents, reduce the risks that accompany all the benefits of being online.
Can we expect this trend to continue? Will an evolution of our current approaches to information security keep risks at an acceptable level? Our thesis is that it will not. We believe that the coming changes will be so substantial and so quick that, like the cartoon rabbit, we will find ourselves running over the edge of a cliff and only realising too late that we have no means of support.
What are the changes?
Cloud computing has received a lot of attention over the past few years. Some say it is just another form of outsourcing, but we believe that this is only true in the same sense that the iPhone is just a smaller and faster version of ENIAC. The economy and agility provided by cloud computing will fundamentally change the way that businesses view IT. The pressure to accept the security that the cloud provider delivers ‘out of the box' will be overwhelming. For many organisations, this security may well be better than that provided in-house, but for some it won't. Security failings might be unlikely, but if they happen they will affect many organisations at the same time. Most importantly, though, users will not have the opportunity to manage their own risks – they will simply have to accept what they are given, probably without even understanding what those risks are. And in our complex and highly interconnected society, the aggregated risk might be much greater than anyone realises until the bad event happens.
Mobile devices, smartphones and tablets have as much storage and computing power as the desktop machines of only a few years ago. But their mobility, and additional functionality such as cameras and GPS receivers, associate these devices much more intimately with their users. The devices hold their users' contacts and day-to-day conversations; videos and photographs taken spontaneously in a wide variety of situations; the user's precise geographical position at many times in the day; and details of large and small financial transactions. Individuals' daily lives are increasingly recorded on these devices, but the physical and technical protection the devices provide is limited. In particular, the trend towards broad app marketplaces, from which users can download and install programs with ease and at low cost, exposes them to risks that they have no way of evaluating, let alone mitigating.
Then there is ‘consumerisation', where companies continue to move away from the in-house provision of IT services and encourage employees to use their own devices and applications for business purposes. Again, there are upfront cost advantages for the business, as well as advantages in giving employees the freedom to use whatever devices they are happy with. These devices will include not just home desktops or laptops, but mobile devices and, in the near future, cars, televisions and so on. But the mix of corporate and individual data and applications on the same devices raises unresolved problems for individual privacy, corporate due diligence, legal discovery, and the protection of corporate information and systems.
Finally, we note the growing prevalence of networked devices. IMS Research indicated in 2010 that there were about the same number of networked devices as there were people in the world; and it estimated that there would be around 22 billion networked devices by 2020. These include desktops and mobile devices, TVs and cars, and an increasing number of machine-to-machine devices such as streetlights and smart meters. Most of these will be invisible to the end user, but many will have limited provision for security, and we have no model for how security would work in this environment.
For example, who will ‘patch' internet TVs? Will the manufacturers wish to take on this responsibility? These TVs might be running operating systems with broadband connectivity – and might be used by criminals as proxies.
We know that the current security models are broken. Cost and efficiency advantages are driving business models that militate against security, our protective measures (anti-virus scanners, firewalls, intrusion detection systems) are increasingly ineffective, and the impact of failure is becoming ever more serious.
The approach of ‘cyber security', rather than just ‘information security', recognises this challenge. Improved defences are still important, but are nowhere near sufficient. We need greater awareness of the risks, to make the environment more hostile for those seeking to cause harm, to adjust the business environment to increase resilience, and to develop new methods of detecting and reacting to attacks.
The question is: even with the changes that we can foresee, and the pace at which we are rushing to embrace them, will we get the solutions we need in time – or will we be over the cliff before we even realise that we've reached the edge?
Fred Piper is a senior professor at the Information Security Group at Royal Holloway, University of London. Malcolm Marshall is partner, KPMG I-4 Program.