GDPR is a top-to-bottom reform of European data privacy law and deals with a much wider range of topics than information security. Nevertheless, security is a key element of GDPR's overall policy objective of promoting transparency, accountability and trust in organisations which deal with people's data, and its security provisions are a critical part of achieving that objective.
GDPR requires all organisations that hold, analyse, store, host or otherwise process personal data to take “appropriate technical and organisational measures to ensure a level of security appropriate to the risks” to the “rights and freedoms of natural persons” which arise out of those activities. The appropriate level of security is to be assessed against:
- the likelihood and severity of the risk
- the “state of the art”
- the cost of implementation
That three-way balancing act is not new, and closely corresponds to current data privacy law. However, GDPR makes two important changes:
The first is that this obligation applies to data controllers and data processors alike. Under the current regime, the data processor has no direct obligations to the regulator or to the data subject. Under GDPR, the data processor – typically the vendor/supplier in the technology world – has direct liability and accountability for security breaches, both to the regulator and to those affected.
GDPR now includes a list of particular matters which organisations must be able to show that they have considered as part of their security measures, and either have implemented or have a good reason for not doing so. That list includes “a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing”. In other words, you need to have a documented, systematic means of ensuring that your assessment of your security measures against the “state of the art” and the level of risk remains valid and effective as those risks morph and as the state of the art changes.
What does this have to do with open source? The answer is patch management. Today's software is built on a core of open source, and open source use is pervasive across every industry vertical. Black Duck's latest Open Source Security and Risk Analysis (OSSRA) found open source in 96 percent of the 1,000+ applications scanned, and nearly 70 percent of those applications had vulnerabilities in the open source components they used.
Open source's sheer ubiquity, and the challenges in keeping its use under control, mean that it needs to be managed as a discrete issue. Unlike commercial code, where the vendor provides patches or other fixes, the management burden falls squarely on the shoulders of those who use and incorporate open source components into their applications.
Given the sheer volume of open source being reused today, the challenge organisations often face is simply knowing what open source they are using. Manually tracking and monitoring re-used open source is unlikely to be considered “state of the art”. Automated tools with real-time security monitoring and vulnerability alert notifications are becoming the expected norm.
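To make the idea concrete, the sketch below shows the core logic of such tooling: matching an inventory of open source components against a vulnerability feed. The advisory table here is a hard-coded, illustrative stand-in; real tools query live databases such as the NVD, and the package inventory is a hypothetical example.

```python
# Minimal sketch of automated open source auditing: match a component
# inventory against a vulnerability feed. Illustrative only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    version: str

# Stand-in advisory feed mapping (package, affected version) to advisory IDs.
# In practice this would be a live query to a vulnerability database.
ADVISORIES = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],  # Heartbleed
    ("struts2", "2.3.31"): ["CVE-2017-5638"],
}

def audit(components):
    """Return the components with known advisories, so patching can be prioritised."""
    findings = {}
    for c in components:
        hits = ADVISORIES.get((c.name, c.version))
        if hits:
            findings[c] = hits
    return findings

# Hypothetical application inventory.
inventory = [Component("openssl", "1.0.1f"), Component("zlib", "1.2.13")]
for comp, advisories in audit(inventory).items():
    print(f"{comp.name} {comp.version}: {', '.join(advisories)}")
```

The hard part in practice is not this lookup but producing the inventory in the first place, which is why automated scanning, rather than manual tracking, is increasingly treated as the baseline.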
The law itself is technology-neutral. However, we already know that the regulators view a failure to patch OSS vulnerabilities as having the potential to breach data privacy law. To take just one small example, the ICO (the UK regulator) recently fined Gloucester City Council for a failure to patch the Heartbleed vulnerability in its edge firewalls, leading to a compromise of internal emails. According to the ICO's published monetary penalty notice (MPN), the Gloucester Council breach was relatively small-scale, affecting 30 to 40 people. Nevertheless, the ICO considered a monetary penalty to be appropriate.
A lot has been made of the greatly increased maximum fining powers in GDPR, and it is too easy to shout about the potential for “MASSIVE FINES!!!” and leave it at that. The ICO has made it clear that it has no intention of imposing huge fines for less serious breaches, and that it views its fining powers as a last resort. However, consider something on the scale of the recent Equifax breach, which may have been caused by a failure to patch an OSS vulnerability. If a breach on that scale were to happen under the GDPR regime, the consequences for the organisation concerned would likely be very serious indeed.
*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media or Haymarket Media.