Is it ethical to use malware when disrupting cyber-crime?

As the FBI declares that its malware-like software cannot be malware because it is used without malicious intent, we ponder the ethics of the good guys using the same tools as the bad guys.

When is it ok to kick in the metaphorical cyber frontdoor? (Pic: Rizuan/Wikipedia)

The FBI has declared that software it used during a sting operation, evidence from which judges have sought to throw out of court for lack of appropriate warrants, could not be malware because the FBI did not have malicious intent.

Say what?

According to legal briefs filed by the FBI, "Malicious, in criminal proceedings and in the legal world, has very direct implications, and a reasonable person or society would not interpret the actions taken by law enforcement officers pursuant to a court order to be malicious."

The 'court order' part of that statement seems to us to be the most important, and Nathan Dornbrook, CTO at ECS Security, agrees. "The deployment of computer tools should meet the same standard as the deployment of any other electronic attack tool," he told SCMagazineUK.com. "A judge should determine whether there is enough cause to issue a warrant."

Dornbrook also thinks that it's about time that, when it comes to such usage of 'offensive security', we "should acknowledge it, make it public, and put in place strong oversight to patrol its use".

Not everyone agrees, however. "The idea that law enforcement, government and the security industry could be using the same tools as the bad guys sets a dangerous precedent," said Kevin Bocek, vice president of security strategy at Venafi. "And is certainly not something the public should accept."

Bocek suggested that law enforcement agencies are hardly the most competent custodians of technology and data. "By creating malware that does exactly the same job as that of a cyber-criminal, the government is essentially releasing the weapon, or the designs to build new weapons."

Cris Thomas, a strategist at Tenable Network Security, agreed that the big problem with software of this kind is one of control. "Law enforcement usually likes to keep tight control over its assets," Thomas told SC. "Distributing software like this introduces the possibility of the program getting into the hands of the bad guys."

It's not just that such tools could be turned against law enforcement; criminals who obtained the software could also negate its value as an asset by recognising when it was being used against them.

And what about if, or rather more likely when, such software appears on an organisation's network – how should it be handled? "If it was installed on my system or my company's systems without notice or authorisation," said Adrian Sanabria, senior security analyst at 451 Research, "I'd be obligated to view it as a threat."

Of course, that would be different if law enforcement had gone through an employer to install this software, or requested assistance in an investigation. The trouble is that, due to the clandestine nature of many investigations, it might not be possible from an individual, company or government perspective to distinguish between malware and court-sanctioned monitoring software installed under a warrant.

"This could be handled through some sort of inter-departmental mechanism to authenticate software as being used in accordance with an investigation," Sanabria suggested, but even if such a thing existed it would be "just a matter of time before criminals found a way to subvert that system and pose as law enforcement".

So let's say that using the same kind of tools as the bad guys to fight fire with fire, as it were, is OK. But what about the security industry itself doing the same? "To me, there is a much larger grey area when it comes to the private sector's role in going after cyber-criminals," said Troy Gill, manager of security research at AppRiver. "Their level of flexibility with what tools or techniques they are free to use should reflect a little more restraint."

Whatever the semantics, the ethics of using such tools must surely come into play. Ilia Kolochenko, CEO of High-Tech Bridge, sees it as a similar problem to the firearms dilemma: "Is it ethical to kill an armed criminal?"

As Kolochenko said, "It's probably a good question for ethicists, not for cyber-security experts."

Back to the FBI and the semantics question.

Ian Trump, global security lead at LOGICnow, insisted that "equating software with an ethical label based upon good and evil is about as nonsensical as saying a shovel used by terrorists is an evil shovel".

As Trump said, the term malware is simply being used incorrectly here. The software is being used as a tool, and in this case one whose use has been sanctioned by a court. "It may have characteristics, behaviours and capabilities similar to criminal software," Trump concluded, "but the software itself is morally ambiguous..."

Professor Steven Furnell, senior member of the IEEE and professor of IT security at Plymouth University, agreed that few technologies are inherently good or bad in themselves. However, he summed up the dilemma using the analogy of fighting fire with fire – the irony being "that you end up with more fire!"