Shift Left - how to improve security in your developers' code - do it earlier

There was a pretty simple premise behind last week's Shift Left conference, organised by CheckMarx at the Bulgari hotel in Knightsbridge.  The software development cycle typically runs from setting requirements through design, build, test, deploy and maintain.  All too often, security considerations don't get a look-in until after testing and just before deployment.  But the further to the left of that process you engage with security, the greater the impact, because later design changes are more complex and difficult to make, more costly, and more time-consuming – even if they are still possible.  And a recall of deployed systems is the costliest of all.

While a nice drawing of this development continuum portrayed it as a straight lever lifting a rock – the earlier you exert security effort, the more dramatic the impact – it was acknowledged that it's not really that simple. Security needs to be addressed at every stage of the process; the later you leave it, the more it costs.

Troy Hunt, Pluralsight author and Microsoft MVP for developer security, explained that one reason organisations are reluctant to implement security is the difficulty of evaluating code. Because developers often don't consider that people will do bad things with their software, the security review happens late – sometimes only after data has appeared on Pastebin!  And when it is done at the end of a project, the project is out of time and money and everything prior has been signed off as acceptable. So compromises are made: either deliver late, or deliver with problems that still need to be fixed.

Bad code is common. And sometimes this bad code is made available for others to download, which propagates the problem. But the earlier bugs are fixed, the cheaper the fix.  So how do you fix it?

Of course there is no one fix – a multifaceted approach is advised.  Training usually pays off. Static code analysis is part of it, including AI-assisted tools, and it should run both in the IDE as code is written and via continuous integration (CI).  Dynamic analysis is required for brownfield sites running outdated code, and should be done at each release – and you should find fewer and fewer faults as you progress, not more and more.  Penetration testing is effective but expensive, so you don't want to be paying pen-testers to find the simple stuff: well-known vulnerabilities should already have been tackled.
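To make the static-analysis point concrete, here is a toy sketch of the kind of rule such a tool applies as code is written or committed. The deny-list and function names are illustrative, not drawn from any real analyser:

```python
import ast

RISKY_CALLS = {"eval", "exec"}  # illustrative deny-list, not a real tool's rule set


def find_risky_calls(source: str) -> list[int]:
    """Return the line numbers where a risky built-in is called.

    A toy static check: parse the source into an AST and flag direct
    calls to names on the deny-list -- the sort of rule a real
    analyser runs in the IDE or in a CI pipeline.
    """
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            flagged.append(node.lineno)
    return flagged


sample = "x = eval(user_input)\ny = len(user_input)\n"
print(find_risky_calls(sample))  # the eval call on line 1 is flagged
```

Real analysers apply hundreds of such rules, plus data-flow analysis, but the principle – catch the pattern at write time rather than at release – is the same.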

What will drive change?  Trade bodies and organisations can help by demanding certain levels of security and raising standards; PCI-compliant sites do get hacked, so more can be done there.  Fines under the EU GDPR can change the ROI proposition – though the ICO's penalties were seen as too weak to have an effect.

While some companies may point to how share prices have bounced back after breaches, there have been significant falls – of up to a third of a company's value – after the initial breach revelation, and the Dow dipped by £1 billion when AP was hacked. And if companies lose trust, they can lose everything.

Amit Ashbel, director of product marketing and cyber-security evangelist at CheckMarx, called for greater AppSec awareness, telling developers why they need to do security.  Citing the Ponemon figures for each development phase, he said that if implementing security costs US$80 at the development stage, it costs US$240 at build, US$960 at the QA testing stage and US$7,600 once it's a product.
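Those figures are easier to compare as multipliers over the development-stage cost. A quick sketch of the arithmetic (phase names are shorthand for the stages quoted above):

```python
# Ponemon-style cost figures quoted above, in US$ per flaw.
costs = {"development": 80, "build": 240, "QA testing": 960, "product": 7600}

base = costs["development"]
for phase, cost in costs.items():
    # Express each phase's cost as a multiple of fixing it in development.
    print(f"{phase}: ${cost} ({cost // base}x)")
```

In other words, a flaw that escapes to production costs roughly 95 times as much to fix as one caught while the code is being written.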

One of the problems is that developers are usually measured by how fast they develop, not by how secure the code they write is.

To encourage them to adopt secure coding standards, gamification of education has been used – such as Game of Hacks, with materials for spotting vulnerabilities, competitions, quizzes and rewards in the finals. The most effective approach was education delivered in real time and with contextual background, so teaching people how to hack their own code paid dividends.

Dr Achim Brucker, a senior lecturer in the Department of Computer Science at the University of Sheffield, showed how many basic vulnerabilities are overlooked by the majority of developers and called for security training at the outset of a developer's professional education.  But companies also need to prioritise risk. Plans should be about mitigating risk: security is not binary, with things either secure or insecure – they are more secure or less secure. Secure development should avoid known risks – "There is no justification for an SQL injection," he said, with just a few databases given as exceptions. Security testing should include an external assessment, and a first customer should provide security validation to check for flaws before wider deployment – repeated each time there is integration into a new environment. There should also be a security response plan, as well as monitoring of any third-party components.
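The "no justification" point is worth illustrating, since the fix is a one-line change. A minimal sketch using Python's standard sqlite3 driver (the table, data and function names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")


def find_user_unsafe(name: str):
    # Injectable: user input is concatenated straight into the SQL,
    # so name = "' OR '1'='1" returns every row in the table.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()


def find_user_safe(name: str):
    # Parameterised query: the driver treats name as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()


payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks the whole table
print(find_user_safe(payload))    # returns nothing
```

Every mainstream database driver supports parameterised queries in exactly this way, which is why the vulnerability is considered inexcusable.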

Decentralised development teams that were empowered to choose their own approaches and tools, and given a level of responsibility, were found to be more receptive to implementing security than centrally managed teams that took a tick-box approach.

Measuring success is difficult, as finding fewer flaws does not necessarily mean you are getting better – and nor does finding more.  An interesting, counter-intuitive measure suggested by Brucker was to look at what share of your flaws come from vulnerability classes you have previously covered versus classes you have not.  If an increasing share of your vulnerabilities comes from areas not previously covered, you are improving, because you are getting better at the vulnerabilities you have covered.  So the bigger the percentage of your flaws that fall outside what your security tools and training already cover, the better.
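Brucker's metric can be sketched in a few lines. The class names and quarterly data below are invented purely to show the calculation:

```python
def uncovered_share(findings: list[str], covered: set[str]) -> float:
    """Fraction of findings whose vulnerability class is not already
    covered by existing tools and training. A rising value suggests
    the covered classes are being written out of the code."""
    if not findings:
        return 0.0
    new = sum(1 for f in findings if f not in covered)
    return new / len(findings)


covered = {"sql-injection", "xss"}  # classes the programme already targets

last_quarter = ["sql-injection", "sql-injection", "xss", "ssrf"]
this_quarter = ["xss", "ssrf", "deserialisation", "ssrf"]

print(uncovered_share(last_quarter, covered))  # 0.25
print(uncovered_share(this_quarter, covered))  # 0.75 -- improving
```

Here the raw flaw count is flat at four per quarter, yet the rising uncovered share signals progress: the old, well-understood classes are disappearing from the codebase.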

However, while the message was about increasing developer awareness of security, companies were warned that they should not expect developers to become penetration testers – it is a different mindset.  Also, don't ask developers to break company rules to do security testing: explicitly allow hacking for security purposes so that developers can become experts.  Delegate power and accountability and your developers are more likely to be on side.