Coverity's technical director doesn't believe in surprises. The company runs a triage process for resolving defects in a timely fashion, helping its clients develop secure applications. Those clients span the globe, and include financial giants such as Barclays. By Paul Fisher.

What work is the Coverity Scan project doing around open source?
Coverity Scan has been running since 2006. We have found 50,000 defects across more than 300 open source projects, and we make this information freely available to the organisations and groups that develop those open source products.

I believe over 100 of those projects actually make active use of the issues that we find.

Android is probably the most successful and well-known open source project today, yet isn't it full of holes?
We're not saying that! We found around 350 defects in total. We applied the Coverity integrity rating and found that Android meets the threshold defined for the mobile and telecom sector, which is one defect per 1,000 lines of code.

Having said that, we also found 88 ‘high-risk' defects, but we're not saying that equates to 88 vulnerabilities. They are risk indicators – a bit like when you go to the doctor's and they measure your vital signs.
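The threshold quoted here boils down to a defects-per-KLOC (thousand lines of code) figure. A minimal sketch of that arithmetic, using illustrative numbers rather than Coverity's actual rating formula:

```python
def defect_density(defects, lines_of_code):
    """Return defects per 1,000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# Illustrative only: ~350 defects would sit exactly at a
# 1.0-per-KLOC threshold if roughly 350,000 lines were scanned.
print(defect_density(350, 350_000))   # 1.0
print(defect_density(50, 100_000))    # 0.5, comfortably under threshold
```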

Is that a trade-off, though – that users benefit from a wider choice of platforms and applications, but must accept the greater risk of vulnerability?
Well, yes. There is a big can of worms around disclosure and what's fair and so on. The message is not so much for Google, you know. It's more for the people who are implementing these off-the-shelf components of software, all the way through what we call the software supply chain, where one bad ingredient can turn up in multiple products.

We are all moving to mobile working, but so far we haven't seen a serious piece of malware targeted towards mobile devices. Will this change?
Yes. We are not waiting for one critical attack, but for an increase in dependency on mobile devices. The more these platforms open up, the more opportunity there is for software defects to become accessible.

In the past, these devices may have had any number of software defects that wouldn't have necessarily been exposed, because of the closed nature of the platform. So as we are seeing things opening up, there will be more opportunity there.

That means the device manufacturers have to pay attention to even minor defects, such as simple coding errors or oversights.

So what social trend in mobile use would you see having an impact on mobile security?
It is just an increasing dependence, really. Phones today are mobile computers, so we'll probably see a lot of the security issues that in the past were confined to our computers attacking us on mobile devices instead. Again, that will be down to the device manufacturers. Manufacturers that use Android have a base made up of all these different ingredients, and they need a really good handle on the risks they're exposed to by using platform X or platform Y.

Do you think they do? Do you think people do know the risks?
Yes, absolutely, most will recognise at a high level that there is going to be an element of risk in what they do and actually it's about quantifying that risk and managing that risk. Security in the past has been this kind of gate in the development lifecycle.

Most sensible security groups realise that they have to look a bit further back, because it's no good knowing about security issues a week before a product launches. This is backed up by research: according to Forrester, it is 30 times more expensive to investigate and resolve software defects post-release, at the end of the lifecycle, than it would be to address them in the design and architecture phase, and six times more expensive than during implementation.

We use that as part of the training that we give to our customers. Every customer I show that statistic to says 30 times is far too conservative an estimate!

Most security groups recognise that they need to be active much sooner in the lifecycle, but the challenge they face is rolling out a tool. It is no good just giving a bunch of developers a security tool; it needs to be a tool that has credibility with developers – which is where Coverity has created its niche.

How will virtualisation affect the integrity of code?
It obviously brings new challenges. Unlike a standalone physical machine that you can control, a single virtualisation host may be running a number of different virtual machines.

If you can compromise one and break into the lowest level, you can achieve control of a whole set of devices.

All it takes is a relatively straightforward defect that hasn't been addressed, hasn't been identified, for the payoff to increase.

It is also about the evolution of dependency – we are placing all our eggs into one basket, whether it amounts to putting our whole life onto a mobile phone, or moving all of our servers into a virtual server.

When your web browser or your mobile phone communicates with, say, a website, your packets are going to pass through all manner of devices: routers and other pieces of hardware infrastructure.

Typically, Coverity is very strong in that space and very active. Chances are, when you visit a website, your packets will go through one or more devices that come from a manufacturer that has run Coverity against that product to make sure it's free of software defects.

In the sort of world where a body such as WikiLeaks can prompt people to attack multinational corporations quite easily, isn't this a further concern?
On WikiLeaks, there has been an evolution in the way that attacks are taking place. So, in the past, it has been a case of, ‘well I have more bandwidth than you. I can bombard you with traffic'.

Now attackers are looking for a flaw in applications. They look for a piece of your application that will slow down your server, or a task that will run and consume a lot of your memory or CPU time. That could be down to a defect in the software, or to a piece of code that has been written inefficiently. Instead of just bombarding you with traffic, they bombard you with a piece of traffic that they know will trigger that sort of defect.
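The attack pattern described here can be sketched with a toy example. This is a generic illustration of algorithmic-complexity abuse, not any specific Coverity finding: a quadratic de-duplication routine stands in for the inefficiently written code, and a crafted all-unique payload drives it to its worst case, while the linear version shrugs the same input off.

```python
import time

def dedupe_naive(items):
    """Quadratic de-duplication: each membership test scans a growing list."""
    seen = []
    for item in items:
        if item not in seen:   # O(n) list scan per item -> O(n^2) overall
            seen.append(item)
    return seen

def dedupe_fixed(items):
    """Linear de-duplication: a set gives O(1) membership tests."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# Crafted "traffic": all-unique items hit the naive routine's worst case.
payload = list(range(10_000))
start = time.perf_counter(); dedupe_naive(payload); slow = time.perf_counter() - start
start = time.perf_counter(); dedupe_fixed(payload); fast = time.perf_counter() - start
print(f"naive: {slow:.4f}s  fixed: {fast:.4f}s")
```

Both routines return the same result; the attacker's leverage comes entirely from the cost of producing it, which is exactly why such defects count as availability risks, not just quality issues.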

It is going to be quite concerning, isn't it, for a business, if it has got to worry about that potentially happening?
A lot of people group security issues and business continuity issues together. When you look at it like that, this one piece of inefficient code, this one defect, is threatening your operations as much as any conventional or traditional security vulnerability could.

They wouldn't know it is there, would they? Unless they have got someone such as you who has already looked for it?
Yes, absolutely, unless there has been a proper scan. With many defects, quite often it is the symptoms you find first, and you then have to track those back down to the root cause.

That is incredibly difficult and expensive to do when a security incident is happening and you have to figure out what to do in the middle of it all. Which goes back to the Forrester figures that we spoke about earlier.

Was Stuxnet an example of that? It was written especially for a specific type of hardware.
I don't think the Siemens controller is inherently insecure, because it is pretty common in industrial scenarios, but it is a case of relatively straightforward defects that have been turned into vulnerabilities and exploited. We now have the situation where they presumably have to go back and identify what the root cause of that vulnerability was.

Isn't it surprising that something as important and serious as a power plant controller had that problem or that defect, that Siemens didn't know there was this defect?
Yes, but I guess the full details of the background to Stuxnet have yet to come out. I suppose there are some groups out there that are a bit more exposed to this than others. A lot of what is happening at the moment is, not so much guesswork, but again looking at the symptoms, rather than understanding what the root cause was.

It is possible that users only have a security assessment at the end of a lifecycle. You have to prioritise what you fix, and there may well be issues that just become accepted as a risk.

However, what we are saying is, you have to go back through that development lifecycle, you have to say, ‘we want to be active, we want to be proactive in asking developers to scan their code, to find these defects'.

We have a whole structure where we manage what we call a triage process for resolving defects and get security insight into that. So that when we get to the end of the lifecycle, there are no surprises.