Is responsible disclosure responsible enough?

Much of that battleground centres on the definition of white hat versus black hat researchers, which can be a difficult call to make. "My personal view is that if someone tries to make money, gain personal advantage, or unnecessarily damage others from a vulnerability (other than bug bounties offered by the vendor or improving their own reputation as a researcher) then that counts as black hat," says Andrew Conway, a security analyst at Cloudmark. "If they are just trying to make people aware of a vulnerability to protect potential victims and make sure it gets fixed, then that is white hat, however they go about it," he insists.

Conway also thinks that responsible disclosure has to be a two-way street, with the vendor receiving the disclosure having to be 'responsible' in how it handles the report. All too often, according to Conway, this is not the case. "Some companies may simply ignore reports, or worse still, threaten legal action against the security researcher for reverse engineering their code," he told SCMagazineUK.com, continuing: "The legal department should be reserved for black hat hackers who are trying to profit from the vulnerability, not the white hats who are trying to get it fixed". If vendors are also responsible, then Conway thinks responsible disclosure is absolutely the right approach. However, given that many vendors are still not as responsible as they should be, he is also willing to accept that sometimes it makes sense for security researchers to publish vulnerabilities before they are fixed. "If a vendor responds to this by attempting to sue the researcher and suppress the bug information rather than fixing the problem," he concludes, "that's usually a sign it was the right decision to publish."

Ryan O'Leary, senior director of WhiteHat Security's Threat Research Centre, is adamant that while responsible disclosure is a difficult topic, it should always start with notifying the responsible party, in most cases the vendor or developer, of the potential threat. "In extreme cases where people's lives may be in danger it is important to get this information out to the vendor quickly and stress the importance of the issue," O'Leary says. "Where this gets tricky is when the vendor ignores the issue entirely." Even so, he remains convinced that when an issue has a tie to public safety, the status quo should be to give the vendor more time to fix the vulnerability. "All avenues should be exhausted before going public with the vulnerability, including going to CERT or outside agencies that can reach people inside the organisation," O'Leary insists, continuing: "The problem with vendors that do not fix the vulnerability is that anyone can find this same issue at any time, and they may not be as ethical as the person that responsibly disclosed it".

Which is where the moral justification for publishing vulnerabilities to force the vendor's hand enters the debate. But O'Leary maintains that in cases where lives are potentially in danger, it is important to draw a boundary. That boundary, when a vendor is unwilling to fix a vulnerability that could cause people harm, is that any public disclosure should not include full details of how the vulnerability is exploited. "Only the end result should be shared and the potential implications the vulnerability carries with it," O'Leary told SCMagazineUK.com, citing the Jeep hack as an example. The researchers released the implications ("we can control a Jeep") but didn't give the exploit code out to the world.

Some commentators, such as Ilia Kolochenko, CEO of High-Tech Bridge, are of the opinion that medical device vulnerabilities, which carry the potential for injury or death rather than data loss or financial loss, cannot be handled in the same way as other software bugs. "Vulnerabilities that may affect human lives should ideally never ever be disclosed or even mentioned in public until we can make sure that every single medical device has this vulnerability patched," Kolochenko told us, insisting, "otherwise security researchers will also be responsible for all the bad things that may happen." Kolochenko also thinks the law needs to change so that medical device manufacturers and vendors bear direct financial and legal responsibility for security. While acknowledging that it is impossible to develop totally flawless code, Kolochenko argues that many security researchers disclose vulnerabilities in medical devices because manufacturers and vendors ignore them, or worse, try to buy silence without any intention of releasing patches. "Security researchers should be able to report such vendors to the nearest police station," he says.

Stephen Cox, chief security architect at SecureAuth, meanwhile, doesn't think that the rules of responsible disclosure change when there is a potential danger to life. Instead, he told SC that determining the risk and the life-threatening nature of a vulnerability is best done through collaboration between the vendor and the researcher. "The major problem we face today is that there is still a gigantic trust chasm between vendors and security researchers," says Cox, adding: "We need to work on building that trust relationship."
