CRASH report: UK comes last in analysis of secure coding practices

News by Davey Winder

An analysis of more than one billion lines of code finds that the UK ranks last for the security of its code, and that teams of fewer than 10 produce more secure code than teams of 20 or more.

The code quality of software has been put under the microscope in new research published today, and the UK fares miserably.

The CRASH report, from software analysts CAST, looked at more than one billion lines of code across 2,000 enterprise applications.

What it found does not make for happy reading, especially as far as UK-based developers are concerned.

The UK scored the worst out of all countries globally (France did best), and CAST underlines just how badly it did in terms of software security.

If that doesn't sound bad enough, financial services software code was revealed to be particularly susceptible to security risk.

The takeaway from this report is that poor code quality exposes businesses to unacceptable risk, and legacy IT is leaving systems creaking under the strain.

Development teams of fewer than 10 performed best, while teams of more than 20 created the least secure code.

SC Media UK spoke to Lev Lesokhin, EVP strategy and analytics at CAST and one of the report authors, who told us that there is "a well-established canon of software weaknesses in the Common Weakness Enumeration (CWE), developed by the US Department of Homeland Security, of which the 22 most important weaknesses are specified as a standard security metric by the Consortium for IT Software Quality (CISQ)."

When looking at security, the CRASH Report analyses an application's susceptibility to unauthorised entry, theft of data, memory management problems and other data integrity issues – "The types of software engineering flaws that allow SQL injections or cross-site scripting," as Lesokhin points out.
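The SQL injection flaws Lesokhin mentions typically arise when user input is concatenated straight into a query string. A minimal illustrative sketch in Python (not taken from the report; the table and payload are invented for the example) contrasts the vulnerable pattern with a parameterised query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: the input is concatenated into the SQL string, so the
# payload rewrites the WHERE clause and matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(unsafe)  # returns alice's row despite the bogus name

# Safe: a parameterised query treats the input as data, not as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # returns no rows
```

The fix is the kind of engineering discipline the CWE catalogue codifies: never build queries by string concatenation from untrusted input.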

He adds: "It does this across the languages most commonly used in enterprise applications such as Java, .NET and C++, as well as, importantly, legacy codebases such as COBOL."

COBOL, of course, is still of importance courtesy of its continuing widespread usage in the financial services sector. Indeed, the spread of security scores was extremely wide in this sector according to Lesokhin. "In fact, between 30 percent and 40 percent of the applications in financial services were below a minimally acceptable threshold, making them a target for hackers," he said.

Ben Taylor, VP of engineering at drie, argues that one problem is that in large organisations there's often a trade-off between security and productivity caused by 'security by checklist' syndrome. "This is really dangerous because it assumes the checklist will never be ignored to meet a deadline," Taylor told SC Media.

Dave Levy, associate partner at Citihub Consulting, blames the fact that for too long "developers have focused on functional requirements only", expecting the infrastructure to provide the 'non-functional requirements' they couldn't be bothered to capture and write down.

And Winston Bond, EMEA technical director at Arxan, warns that "weak coding will also likely fail to protect the cryptographic keys which perform key tasks such as proving identity and binding devices – the holy grail for any cyber-criminal."

So what can be done to turn this situation around and get developers coding in a more secure manner?

BSI Espion security consultant Philip Close agrees that poorly coded software remains one of the biggest concerns for security teams. "By delivering insecure, poorly coded software," he warns, "developers are increasing the attack surface for hackers to exploit." A good place to start coding more securely, Close says, is for developers to at least code against the vulnerabilities found in the OWASP Top 10 Project.
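Cross-site scripting, one of the vulnerability classes in the OWASP Top 10 that Close points developers towards, is a case in point. A minimal sketch (the comment string is invented for the example) shows why untrusted input must be escaped before it is rendered into a page:

```python
import html

comment = '<script>alert("stolen cookies")</script>'

# Rendering untrusted input verbatim into a page enables stored XSS:
# the browser would execute the embedded script.
unsafe_html = "<p>" + comment + "</p>"

# Escaping on output neutralises the markup, so the browser displays
# the comment as plain text instead of executing it.
safe_html = "<p>" + html.escape(comment) + "</p>"
print(safe_html)
```

Most modern template engines perform this escaping automatically, but the principle carries over to any language on the OWASP list.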

Meanwhile, Chris Carlson, vice president of Product Management at Qualys, argues that instead of treating security as a trade-off made at the end of the cycle, once software is already in production, developers should bring security into the development and DevOps processes, making it part of the entire pipeline through continuous integration.

"DevSecOps allows you to do different things earlier and better things earlier," Carlson explains, "so if we think about how you can apply security earlier in DevOps process, that is part of that continuous integration." In other words, in order to do and apply security earlier and in a better way, security needs to ensure it's not a process blocker to the development teams.

Bharat Mistry, cyber security consultant at Trend Micro, is a fan of the Secure Software Development LifeCycle (SSDLC) approach to ensure that secure coding practices are built-in from the ground up. "There are some basic steps developers can take to ensure their code is safe," Mistry told SC, "such as having peer-to-peer reviews to QA the code, and running it through a Code Analyzer program such as Fortify or Veracode."

Javvad Malik, security advocate at AlienVault, sees many issues stemming from poor coding while carrying out penetration testing. According to Malik, the approach to better-quality code can be broken down into the following broad steps:

  1. Developer training: ensuring developers are aware of security issues and code securely by default.

  2. Developer access to testing tools: being able to run static and dynamic testing against code helps developers pick up on issues early.

  3. External assurance: a third party conducts tests to confirm the code has been developed securely.

While these steps are becoming common to one degree or another in large enterprises, they are often cost-prohibitive for smaller companies. "Until secure coding training and testing can be made widely available to developers," Malik warns, "this will likely remain a significant issue."
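The static testing Malik describes need not mean an expensive commercial tool; at its simplest it is an automated scan of source code for known-dangerous patterns. A toy sketch using Python's standard `ast` module (a deliberately minimal stand-in for a real analyser such as those mentioned above) flags calls to `eval` and `exec`, a common source of code-injection findings:

```python
import ast

# Names whose calls this toy check treats as dangerous.
DANGEROUS = {"eval", "exec"}

def find_dangerous_calls(source: str):
    """Walk the module's AST and report (line, name) for risky calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(find_dangerous_calls(sample))  # [(1, 'eval')]
```

Running even a check this crude in continuous integration gives developers the early feedback Malik argues for, before code reaches external assurance.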
