The difficulty of implementing information security in a business

Implementing information security in a business is hard, with hidden questions and unexpected costs, say Steve Marsh and Fred Piper.

A recurring theme in information security is the question of ‘return on investment', or ‘how much of my IT budget should I spend on security?'

This is not a simple question to answer, because it is impossible to reduce risk to zero. No matter how much is spent on security, there will always be a residual expected loss. Spending more can reduce that expected loss, but security spending is subject to diminishing returns, so there comes a point at which further spending costs more than the additional protection is worth. The simplistic, straightforward answer, then, is to spend the amount that minimises the sum of the cost of the security measures and the expected loss that remains after implementing them.
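
To make that arithmetic concrete, here is a minimal sketch of the trade-off in Python; the loss curve and every figure in it are invented for illustration, since (as discussed below) the real values are rarely knowable:

```python
# Illustrative sketch of the 'minimise cost plus expected loss' argument.
# All figures are invented; real costs and losses are rarely knowable
# with this precision.

def expected_loss(spend):
    """Expected loss falls as security spend rises, with diminishing returns."""
    baseline_loss = 1_000_000  # assumed expected loss with no security spend
    return baseline_loss / (1 + spend / 50_000)

def total_cost(spend):
    """Total cost = what we spend on security + the loss we still expect."""
    return spend + expected_loss(spend)

# Search a range of budgets for the spend that minimises the total.
budgets = range(0, 1_000_001, 10_000)
best = min(budgets, key=total_cost)
print(f"Optimal spend: {best:,}; total cost: {total_cost(best):,.0f}")
```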

The main complication appears to be selecting the security measures that are most cost-effective, and such a selection is far from straightforward. For example, the initial ‘cost' of implementing a username and password scheme is small, but the properties of a good scheme – long, hard-to-guess passwords that are regularly changed and stored nowhere but in the user's head – are in direct conflict with the way human memory works, and may force users to contravene the scheme. Conversely, functionality that makes users' or customers' lives easier – automatic execution of code on customers' machines, say, or automatic repair of a failed security mechanism – may be subverted for criminal purposes.
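
The conflict is easy to quantify with back-of-envelope entropy arithmetic (the policies and symbol counts below are standard illustrative choices, not drawn from any particular scheme):

```python
# Why 'good' password rules fight human memory: the entropy arithmetic.
# Figures are standard back-of-envelope estimates, not from the article.

import math

def entropy_bits(alphabet_size, length):
    """Bits of entropy for a uniformly random password."""
    return length * math.log2(alphabet_size)

print(f"12 random chars from 94 symbols: {entropy_bits(94, 12):.0f} bits")
print(f"8 random lowercase letters:      {entropy_bits(26, 8):.0f} bits")
# ~79 bits vs ~38 bits: the strong option is exactly the kind of string
# people cannot memorise, let alone re-memorise after a forced change.
```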

Even setting that aside, this ‘straightforward' answer hides a multitude of further questions. Firstly, information systems usually involve many ‘actors': the business owner, the system user, the business customer, the system administrator, perhaps a software or system vendor, and possibly a system integrator. The ‘cost' of security measures varies for each of these, as does the residual expected loss.

So the first hidden question is: what function of the actors' costs and expected losses are we minimising? It is not simply a weighted sum, because the actions of the various actors are interdependent, and the minimisation is further complicated because costs and losses do not fall in proportion on each actor. Even if we knew this function, its minimum would be unlikely to coincide with the minimum of any individual actor's own costs and losses. None of the actors will be fully satisfied with the solution – but will any of them even believe that it is ‘good enough'?
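
A toy model makes the point; the postures, actors and numbers below are all invented, but the shape of the problem survives:

```python
# Toy illustration: the posture minimising aggregate cost-plus-loss need not
# minimise any single actor's own cost-plus-loss. All numbers are invented.

# (cost + residual expected loss) per actor, for three candidate postures
postures = {
    "minimal":  {"owner": 12, "user": 1, "vendor": 2},
    "moderate": {"owner": 6,  "user": 4, "vendor": 3},
    "strict":   {"owner": 3,  "user": 9, "vendor": 5},
}

aggregate_best = min(postures, key=lambda p: sum(postures[p].values()))
print("Aggregate optimum:", aggregate_best)  # 'moderate' (total 13)

for actor in ("owner", "user", "vendor"):
    own_best = min(postures, key=lambda p: postures[p][actor])
    print(f"{actor} would prefer: {own_best}")
# The owner prefers 'strict' and the user and vendor prefer 'minimal':
# no actor individually prefers the aggregate optimum.
```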

The second hidden question is: what are the costs of security measures? There are many, and they fall in differing proportions on the different actors. For example: the through-life cost of technical security measures, of accreditation and of auditing the security controls; the inconvenience to the user or customer of operating the security measures; the cost to the vendor of building a more secure product; business opportunity missed when security measures restrict system functionality or delay time to market; the cost of training and education; additional project costs from the delay and complexity of implementing security; and, for system integrators, the cost of complying with security requirements. There may also be broader societal costs: the impact of security measures on privacy and human rights, or the wider economic costs of closed, or locked-down, systems. The question of trust also arises: to what extent will someone accept that government, or a business, or a software vendor is, in fact, acting in that person's best interest?

These costs are also interdependent, and complicated by disparities in when they fall: for example, a software vendor who skimps on security testing may be able to sell a product that is cheaper initially but that causes the purchaser significant additional costs months or years later, as the security vulnerabilities become exploitable.
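
A rough sketch of that timing effect, assuming an illustrative 5 per cent annual discount rate and invented figures:

```python
# Sketch of how timing skews the picture: a product that is cheaper today can
# be dearer over its life once later breach costs are counted. Figures invented.

DISCOUNT_RATE = 0.05  # assumed annual discount rate

def present_value(amount, years_ahead, rate=DISCOUNT_RATE):
    """Discount a future cost back to today's money."""
    return amount / (1 + rate) ** years_ahead

cheap_now = 80_000                       # product with skimped security testing
cheap_later = present_value(150_000, 3)  # breach clean-up expected in ~3 years
secure_now = 120_000                     # properly tested alternative

print(f"'Cheap' product, through-life: {cheap_now + cheap_later:,.0f}")
print(f"Secure product, through-life:  {secure_now:,.0f}")
```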

The third hidden question is: what is the expected loss? We can break this down into three component questions: what is the real likelihood of a security breach? What is the perceived likelihood of a security breach? And what is the loss or impact of a security breach?
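
The calculation we would like to perform is easy to state; the sketch below shows its shape, with an assumed multiplicative combination of threat factors and invented numbers throughout. As the next paragraph explains, the practical difficulty is that none of these inputs is reliably known:

```python
# The expected-loss calculation one would like to perform. The structure and
# all numbers here are illustrative assumptions, not measured values.

def breach_likelihood(motivation, capability, opportunity, vulnerability):
    """Threat is some function of motivation, capability and opportunity;
    likelihood combines threat with the system's vulnerability. A simple
    multiplicative combination is assumed here."""
    threat = motivation * capability * opportunity
    return min(1.0, threat * vulnerability)

def expected_loss(scenarios):
    """Sum likelihood x impact over the breach scenarios we can enumerate,
    which silently excludes the ones we cannot."""
    return sum(
        breach_likelihood(*s["threat_factors"], s["vulnerability"]) * s["impact"]
        for s in scenarios
    )

scenarios = [  # invented numbers, for shape only
    {"threat_factors": (0.8, 0.5, 0.9), "vulnerability": 0.3, "impact": 500_000},
    {"threat_factors": (0.2, 0.9, 0.7), "vulnerability": 0.1, "impact": 2_000_000},
]
print(f"Expected loss: {expected_loss(scenarios):,.0f}")
```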

The likelihood of a breach depends on the threat posed by an attacker (a function of motivation, capability and opportunity) and on the vulnerabilities of the system. An attacker's motivation will itself be a complicated function of perceived risk and reward, depending, for example, on the likelihood of being caught and punished. Motivation is generally increasing as more value moves online, and the internet is similarly raising both capability (as exploits are shared and a marketplace in exploits develops) and opportunity (as greater connectivity allows attacks from anywhere in the world). But although one may gather statistics for specific, known attacks (in particular, those that fail), quantitative figures on threat are not available. The same is true of vulnerabilities: we may estimate the number of faults in code of a given size, and we know that more complex systems have more faults, but we do not know which of those faults will lead to a security vulnerability, nor how such a vulnerability might be exploited.

So we believe that the likelihood of attacks is growing, but we don't know what this likelihood is (though reports of successful attacks may give a lower bound). We do know, however, that people's perception of risk bears little relationship to the actual risk. Well-known psychological effects enhance the perception of a rare but highly visible incident – such as a train crash – while habituating people to more common events, such as car crashes.

The impact of a security breach may also be diverse: direct financial or material loss, damage to reputation, contractual loss, compliance penalties, or loss of information that might be used to commit other crimes or to steal business from your company. In extremis, the breach may even lead to loss of life – for example, in a safety-critical situation. Again, the impact will differ for each of the actors involved and may fall on those who have no culpability for the breach. And as systems become more complex, interconnected and interdependent, so the impact of a potential breach becomes harder to determine. The removal of ‘slack' in systems and supply chains increases the rate at which disruption propagates, and makes disruption harder to contain.

So the current approach to security rests – exaggerating only slightly – on a return-on-investment calculation that uses a formula we don't understand and values we don't know, and delivers an answer that most or all of the parties involved believe to be less than ideal.

This is why information security is hard. It is why experience and professionalism for security practitioners are so important. It is why greater awareness and understanding are needed, from the shop floor to the boardroom. It is why more research and innovation are required. And it is why, if we are to reap the full benefits of the continuing remarkable progress in information and communications technologies, the public and private sectors, academia and individuals, must work together to solve these challenges.

Steve Marsh, deputy director, Office of Cyber Security, Cabinet Office, is a visiting professor at the Defence Academy of the UK, Cranfield University, Shrivenham. Fred Piper, a professor at Royal Holloway, University of London, is director of its Information Security Group.
