We shouldn't let the potential misuse of a product in the wrong hands blind us to its benefits in the right ones.
If, like me, you came down with the lurgy last winter, you might have read reports on the lifting of the ‘bird flu' research moratorium with some interest (http://preview.tinyurl.com/muse187-0).
I've always been fascinated by biological viruses, and one of my regular diversions from information security is This Week in Virology (www.twiv.tv). I'll be honest, much of the detail goes over my head, but it's fascinating and entertaining nonetheless.
I was particularly interested in the discussion last year about research into how H5N1, a potentially nasty variant of influenza, might mutate and start spreading between humans as typical seasonal flu does. To explore this, researchers used ferrets to create a mutant H5N1. This caused uproar in the tabloid press, with the usual scare stories of “Frankenstein viruses” and similar misinformed hyperbole. Many considered these experiments dangerous, and a moratorium was imposed.
Most of the objections focused on the risk that the mutant strain might get out, or be used by terrorists in some nefarious Outbreak-style scenario. Few of the objectors bothered to consider the benefits of the research: that it gave us valuable insight into what mutations would be required for airborne transmission, and what we might be able to do to reduce the risks.
This is a familiar story in infosec. We often get complaints about ‘dual use' technology or research that the bad guys will pick up on and use to their advantage. The endless debate about ethical disclosure and even attempts to criminalise the possession of ‘bad' software tools illustrate how seriously this issue is treated.
But what about cases where our good tools definitely are used for bad purposes? Recently, in an online discussion about a sex offender using the anonymous routing tool Tor to avoid capture, one of my associates said he would not want to be linked with any tool used in that fashion, and couldn't see how the project team could accept the risk of its misuse in this way. Similar claims are often made about disk and file encryption products.
So how prevalent is the use of such tools by the criminal element? It's hard to get accurate figures; my law enforcement friends, understandably, won't discuss details. The closest I've come to objective evidence is in the latest Office of Surveillance Commissioners report, which lists 57 uses of a RIPA s49 “give me the password or else” notice. While hardly definitive, this hints that encryption is not blocking many court cases.
It's easier to demonstrate the value of these tools to those on the right side of the ethics line. In her 44Con presentation last year, the Tor project's Runa Sandvik described an arms race between Tor's developers and the techies working for oppressive regimes. The list of countries attempting to block Tor is a veritable who's who of the Amnesty International naughty list. It's hard to believe they're doing it for benevolent reasons.
As a security researcher, I make regular use of Tor to protect my identity, as do many of my colleagues. It's a valuable tool for anyone interested in privacy and anonymity. That shouldn't make me a criminal.
There are very few, if any, tools that are purely black hat. It's easy to do a scary demo for management with Metasploit, Aircrack, SET and the like, but it's equally straightforward to show how they can be used to strengthen corporate infrastructure.
This is not just a computing issue – kitchen knives are the most common weapon used in crime, and burglars are rather fond of screwdrivers.
It's important to consider the overall benefits of a tool or research project and balance these against the detrimental aspects. Of equal importance, however, is an open and honest debate – after all, it's foolish to brush misuse under the carpet, even if on balance it is an acceptable risk.