How security could have saved Yahoo! from its Axis woes
The realities of cloud-based encryption and key management reveal a lack of control
Last week Yahoo! released its Axis extension for Chrome, but accidentally leaked a private key, allowing anyone to digitally sign malicious extensions as if they came from Yahoo! itself.
The background is that Yahoo! launched a new standalone browser called Axis for mobile devices that also acts as an extension for Firefox, Chrome, Safari and Internet Explorer. The good news is that Yahoo! uses code signing to allow customers to validate the integrity and authenticity of the code it publishes; in practice this means the browser (IE, for example) can validate the software plug-in before it allows the plug-in to run.
In this case, the plug-in for the Chrome browser included the private signing key, when normally it would only include the public key in the form of the signing certificate. Consequently, anyone could misuse the private key to sign and publish their own plug-ins as if they were Yahoo!'s, and the browser (Chrome, in this case) would not be able to tell the difference.
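To see why leaking the private key is so serious, it helps to recall how code signing works: the publisher signs a hash of the code with the private key, and anyone holding only the public key can verify it. The toy sketch below illustrates this asymmetry with deliberately tiny, hardcoded textbook-RSA parameters (real code signing uses 2048-bit or larger keys via a proper cryptographic library; the key values and function names here are purely illustrative).

```python
import hashlib

# Toy RSA parameters for illustration only -- far too small for real use.
p, q = 1000003, 1000033          # two primes (assumed for this sketch)
n = p * q                        # public modulus
e = 65537                        # public exponent, shipped in the certificate
d = pow(e, -1, (p - 1) * (q - 1))  # PRIVATE exponent -- the secret Yahoo! leaked

def digest(code: bytes) -> int:
    # Hash the code and reduce mod n so it fits the toy key size.
    return int.from_bytes(hashlib.sha256(code).digest(), "big") % n

def sign(code: bytes, priv_d: int) -> int:
    # Only the holder of the private exponent can produce this value.
    return pow(digest(code), priv_d, n)

def verify(code: bytes, sig: int) -> bool:
    # Anyone with the public key (e, n) can check the signature.
    return pow(sig, e, n) == digest(code)

plugin = b"legitimate Yahoo! plug-in bytes"
sig = sign(plugin, d)
print(verify(plugin, sig))                     # True: genuine code verifies
print(verify(b"malicious plug-in bytes", sig)) # False: digest doesn't match
```

The point of the asymmetry is that verification requires no secrets; the whole system collapses only when the private exponent `d` escapes, exactly as it did here.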
So, for example, an attacker could write a browser plug-in that captured passwords, cookies or web history, and it would look (and work) just like a trusted Yahoo! plug-in. Fake plug-ins could also be used to carry malware that takes effect outside the browser, or even brings the host computer to a standstill.
There is no suggestion that the private key was stolen; it seems to have been pure human error. Needless to say, this is a great example of where a hardware security module (HSM) would have helped. An HSM would have ensured that copies of the private key were simply never available to humans, so there would be far less scope for anyone to do something silly. An HSM reduces the risk of human error, not just the threat of attacks.
An HSM could have enforced even greater controls, enacting "dual control" or shared-responsibility policies under which more than one person is required to actually sign the code: a final check and balance before the code is published, again reducing the risk of human error (as well as human misuse).
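The dual-control policy described above can be sketched as a simple quorum check. In a real deployment the HSM enforces this in hardware before it will perform a signing operation; the Python stand-in below (with an invented approver list and function name) just shows the policy logic.

```python
# Hypothetical set of people authorised to approve a code release.
AUTHORISED = {"alice", "bob", "carol"}

def release_allowed(approvals: set, quorum: int = 2) -> bool:
    # Signing is permitted only when at least `quorum` DISTINCT,
    # authorised approvers have independently signed off.
    # Unauthorised names are simply ignored.
    return len(approvals & AUTHORISED) >= quorum

print(release_allowed({"alice"}))             # False: only one approver
print(release_allowed({"alice", "bob"}))      # True: dual control satisfied
print(release_allowed({"alice", "mallory"}))  # False: mallory isn't authorised
```

Because a set holds each name only once, one person approving twice still counts as a single approver, which is the essence of the "more than one person" requirement.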
Remember that this is not an APT attack (as with RSA or DigiNotar); this is a security error that left the door open for that sort of attack. Even if the loophole had been exploited, it wouldn't qualify as an APT, since it wouldn't be 'targeted': in this case it could potentially affect every instance of the Chrome browser.
One of the downsides of auto-update processes is that if a newly published plug-in contains an error, it can propagate to a large number of computers very quickly, since they all fetch the new software as soon as it is published.
Richard Moulds is vice-president of product strategy at Thales e-Security