Top 5 most common security development errors

Over the past year, a lot of attention has been focussed on the security failures of software and devices. When these vulnerabilities are presented by clever and talented security experts, it is easy to imagine that protecting software from hackers requires equivalent talent and ingenuity. The reality is quite the opposite.

Most of the prevalent vulnerabilities stem from some fairly common failures in software development. We highlight five of these failures here, but there are many more. The trick to making hackers' lives difficult is to get really good at the basics and pick all the low-hanging fruit during the development process. Avoiding these five errors is not everything, but it's a really good start. Advanced protections in software will not succeed unless the basic requirements are all met.

API validation – The world is evolving into connected “things”. Each “thing” that is networked – a camera, phone, television, printer, electric meter or medical device – has an API for interacting with it. It will accept messages and it will respond to them. For every API, you must answer two questions: Who are you talking to? Are they allowed to have what they're asking for? On the web, this is fairly obvious (even if we still get it wrong too often). Someone asks for a web page, we check who they are and their authorisation, and then return appropriate results. However, these two requirements don't change when coding the navigation system in a car. We must distrust callers until they prove their identity and authority. This is notoriously hard to do from scratch, but there are lots of protocols at lots of layers that we can leverage.
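
As a rough illustration, the Python sketch below shows the shape of an API handler that answers both questions before acting. The token, permission and reading tables are illustrative stand-ins for whatever identity and authorisation mechanism your platform actually provides.

    TOKENS = {"secret-token-abc": "meter-reader-17"}          # token -> caller identity
    PERMISSIONS = {("meter-reader-17", "read", "meter-42")}   # (caller, action, resource)
    READINGS = {"meter-42": 1234.5}

    def handle_request(auth_token, meter_id):
        # Who are you talking to? Reject callers who cannot prove their identity.
        caller = TOKENS.get(auth_token)
        if caller is None:
            return {"status": 401, "error": "authentication required"}

        # Are they allowed to have what they're asking for?
        if (caller, "read", meter_id) not in PERMISSIONS:
            return {"status": 403, "error": "not permitted"}

        # Only after both checks do we act on the request.
        return {"status": 200, "reading": READINGS.get(meter_id)}

    print(handle_request("secret-token-abc", "meter-42"))   # authorised caller
    print(handle_request("guessed-token", "meter-42"))      # rejected: unknown caller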

Information leakage – Modern software leaks information everywhere. We leak it in error messages, parameter names and in the source and object code. To top it off, software and devices often record verbose logs that tell an observer which functions were called, the values of some parameters, the order in which they were called and the results. Anyone who looks to attack software will get a wealth of information from these logs. Developers must be extremely aggressive in minimising the output of any errors, standard headers, progress messages and state indicators. Any code that runs outside your data centre should be obfuscated, whether it's mobile, embedded or just JavaScript in the browser. Security through obscurity is never sufficient, but it is no longer superfluous, either.
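
As a minimal sketch of that discipline (Python, with an illustrative failing lookup), the caller sees only a generic message and an opaque reference, while the detailed traceback stays in an internal log that never leaves your infrastructure.

    import logging
    import uuid

    logging.basicConfig(filename="internal.log", level=logging.ERROR)

    def lookup_account(account_id):
        # Stand-in for a real operation that fails unexpectedly.
        raise KeyError(account_id)

    def handle_lookup(account_id):
        try:
            return {"status": 200, "account": lookup_account(account_id)}
        except Exception:
            # Full detail goes to the internal log, keyed by an opaque reference...
            ref = uuid.uuid4().hex
            logging.exception("lookup failed ref=%s account_id=%r", ref, account_id)
            # ...while the caller gets nothing that describes our internals.
            return {"status": 500, "error": "internal error", "ref": ref}

    print(handle_lookup("acct-999"))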

Input validation – Like the 17-year cicada, input validation lies dormant for a while and then suddenly springs to life, making a lot of noise and nuisance for everyone. Years ago, a stray hole in a paper tape might crash the whole mainframe. A few decades later, we feared buffer overflows in C++ programs. Cross-site scripting and SQL injection were last decade's big problems, and today we revisit many of the old buffer overflows on new platforms, like Android, iOS, cars and smart TVs. We have to be incredibly pessimistic and pedantic with our input – we can't cut corners and make assumptions about language, character set or encoding. We have to enforce validation strictly throughout the data's lifetime. The fact is that a massive number of bugs leading to major vulnerabilities turn out to be input validation failures.
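
A minimal sketch of that pessimism, in Python with an illustrative username rule: validate against an explicit allow-list describing exactly what a value may look like, and reject everything else, rather than trying to strip out characters that look dangerous.

    import re

    # Explicit allow-list with fixed length bounds; anything outside it is rejected.
    USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

    def parse_username(raw):
        if not isinstance(raw, str):
            raise ValueError("username must be a string")
        if not USERNAME_RE.fullmatch(raw):
            raise ValueError("username does not match the expected format")
        return raw

    print(parse_username("alice_01"))                 # accepted
    try:
        parse_username("alice'; DROP TABLE users;--") # classic injection attempt
    except ValueError as exc:
        print("rejected:", exc)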

Output encoding – The other side of input validation is making data safe for release to the wild. The two big problems are knowing the storage format of the data and knowing the output context. There are numerous examples of bugs related to double encoding – or lack of encoding – because developers did not know what format the data was in when it was stored. Knowing the output context, however, is extremely complicated. Consider HTML: the characters that are special differ depending on whether the output is going into text, an HTML tag, an HTML attribute, an HTML comment or a JavaScript context. The less-than character – “<” – might be very special in some contexts, and not at all special in others. It is notoriously tricky to encode correctly based on context, but that's exactly what we have to do.
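
The sketch below, using only the Python standard library, shows the same untrusted value being prepared for three different contexts; the surrounding HTML is illustrative.

    import html
    import json

    untrusted = '<script>alert("1")</script>'

    # HTML text context: escaping <, > and & is enough.
    as_text = html.escape(untrusted, quote=False)

    # HTML attribute context: quotes must be escaped as well.
    as_attribute = html.escape(untrusted)

    # JavaScript string context: JSON-encode, then escape "<" so the value
    # cannot close the surrounding <script> block.
    as_js = json.dumps(untrusted).replace("<", "\\u003c")

    print(f"<p>{as_text}</p>")
    print(f'<input value="{as_attribute}">')
    print(f"<script>var name = {as_js};</script>")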

Testing – A major limitation on developing software successfully is testing it successfully. Too many development organisations relegate “testing” to merely checking that functions execute as expected. Not nearly enough open-ended, exploratory testing is performed with the explicit aim of finding previously unknown defects that arise when the whole system is integrated. In addition to human exploration, fuzz testing is a valuable technique for teasing out strange edge cases and behaviours. Penetration testing, a popular late-lifecycle security technique, rarely has enough scope, domain knowledge or time to cover a significant percentage of code and functions. It can find egregious oversights, but cannot match the level of coverage of functional testing. Functional testers must incorporate basic security testing activities into their regression tests and build regression tests from penetration test findings to stop known vulnerabilities from recurring.
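
A minimal sketch in Python of both ideas: a regression test that pins the exact payload from a penetration test finding, plus a small, reproducible fuzz loop. The function under test and the payloads are illustrative.

    import random
    import string
    import unittest
    import html

    def render_comment(text):
        # Function under test: must never emit raw markup from user input.
        return "<p>" + html.escape(text) + "</p>"

    class SecurityRegressionTests(unittest.TestCase):
        def test_xss_payload_from_pen_test(self):
            # The exact payload reported by the penetration test, kept forever.
            rendered = render_comment('<script>alert(1)</script>')
            self.assertNotIn("<script>", rendered)

        def test_fuzz_never_leaks_markup(self):
            rng = random.Random(0)   # fixed seed keeps the fuzz run reproducible
            for _ in range(1000):
                junk = "".join(rng.choice(string.printable)
                               for _ in range(rng.randint(0, 64)))
                rendered = render_comment(junk)
                self.assertNotIn("<script", rendered.lower())

    if __name__ == "__main__":
        unittest.main()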

In summary, a grand master of any martial art earns rank by spending countless hours practising basic moves: stances, punches, kicks, etc. This expert can probably win nearly any fight with just those rudiments.

Writing secure software is much the same. This list of top five errors is the coding equivalent of those basic moves on the path to secure development mastery. If development teams practise and apply these techniques to their software consistently, they will win many more fights with hackers.