Is responsible disclosure responsible enough?

News by Davey Winder

We ask industry experts: when life and limb are at risk, is responsible disclosure of vulnerabilities enough, or should disclosure be mandated?

A Jeep taken over from 10 miles away via its in-car entertainment system this summer, and just this week news breaking of critical medical devices being 'owned' by botnet operators. Vulnerabilities in your web browser are one thing, but when they are in your car or an MRI scanner the potential impact takes on a different hue. As, indeed, does the small matter of how the security researchers who most often uncover these coding flaws disclose them.

New research from AlienVault reveals that 64 percent of security professionals think that when researchers get no response from a vendor after disclosing a vulnerability with 'life-threatening implications', the vulnerability should be made public. Some 19 percent of the 650 IT security pros questioned at Black Hat in Las Vegas earlier this year went as far as to say the vulnerability should be fully disclosed to the media. This is in stark contrast to the traditional process of responsible disclosure, whereby all stakeholders agree a set period for a fix to be produced before any such publication.

Maybe it's not that surprising, given that when a group of researchers disclosed a flaw in the Chevy Impala's software that could give hackers control of the vehicle, it took General Motors five years to fix it. Fast forward to this year, and the public Jeep hacking stunt, if not actually a public disclosure, led to an almost immediate recall and fix from Chrysler. AlienVault security advocate Javvad Malik told SCMagazineUK.com that "there's only so much public shaming that will remain effective until fatigue sets in and you'll end up with 'oh there's another hacker claiming the sky is falling' and lose effectiveness." Indeed, Malik insists that often when a vulnerability is disclosed publicly it's not necessarily public opinion that gets a vendor to change, "but rather a government or regulatory body that applies pressure." What Malik wants is a "mechanism whereby researchers could work with regulatory bodies directly", although he admits it is something of a tricky question with many moving parts. SCMagazineUK.com wondered what other industry insiders thought, so we asked them:

Ian Trump, security lead at LogicNow, rightly calls this a huge issue, and one for which security professionals largely find themselves in uncharted waters when dropped into the middle of it. "I don't think when human life is at stake we can rely on a system which is semi-voluntary and is governed by people doing the right thing," Trump said, talking exclusively to SCMagazineUK.com. "This situation reminds me of the debates about regulatory bodies versus free market advocates," he continued. "Was it not the automobile industry that was outraged back in the day with mandatory safety standards like seat belts?" Trump thinks that the Internet of Things has thrown us right back into this debate, and that government (or licensing, regulatory or insurance agencies) needs to gain visibility into the risk to human safety and be empowered to act. "This process, invented largely when the Internet could not be imagined to hurt anyone, has to adapt to the new reality that planes, trains and automobiles are really fast-moving data centres," Trump insists, adding: "It's not morally or ethically acceptable to disclose anything which could hurt or harm a third party".

Nick Pollard, UK general manager of Guidance Software (which has trained some 50,000 cyber-investigators), agrees that "a framework for truly responsible disclosure to the public should be implemented by government agencies who have the resources and capabilities to enforce along with the accepted mandate of jurisdiction in these matters." This framework should take the form of an international standard for vulnerability handling and disclosure, according to Pollard, who says it should give private vulnerability testers a set disclosure window with the vendor, establish guidelines for vendor handling and response times, and define the recourse or accepted allowances available to testers should vendors not respond. "It should not fall to individuals or commercial interests to have this responsibility and make that possibly fatal decision," Pollard told SC, adding that "there has long been a palpable tension between security researchers and software vendors." With vendors incentivised to focus on new features over closing security holes, and researchers often selling vulnerabilities that require immediate fixes, it's no wonder the debate surrounding responsible disclosure has been marked by lawsuits and PR holy wars.

Much of that battleground centres on the distinction between white hat and black hat researchers, which can be a difficult call to make. "My personal view is that if someone tries to make money, gain personal advantage, or unnecessarily damage others from a vulnerability (other than bug bounties offered by the vendor or improving their own reputation as a researcher) then that counts as black hat," says Andrew Conway, a security analyst at Cloudmark. "If they are just trying to make people aware of a vulnerability to protect potential victims and make sure it gets fixed, then that is white hat, however they go about it," he insists.

Conway also thinks that responsible disclosure has to be a two-way street, with the vendor receiving the disclosure having to be 'responsible' in how it deals with the data. All too often, according to Conway, this is not the case. "Some companies may simply ignore reports, or worse still, threaten legal action against the security researcher for reverse engineering their code," he told SCMagazineUK.com, continuing: "The legal department should be reserved for black hat hackers who are trying to profit from the vulnerability, not the white hats who are trying to get it fixed". If vendors are also responsible, then Conway thinks responsible disclosure is absolutely the right approach. However, given that many vendors are still not as responsible as they should be, he is also willing to accept that it sometimes makes sense for security researchers to publish vulnerabilities before they are fixed. "If a vendor responds to this by attempting to sue the researcher and suppress the bug information rather than fixing the problem," he concludes, "that's usually a sign it was the right decision to publish."

Ryan O'Leary, senior director of WhiteHat Security's Threat Research Centre, is adamant that while responsible disclosure is a difficult topic, it should always start with notifying the responsible party, in most cases the vendor or developer, of the potential threat. "In extreme cases where people's lives may be in danger it is important to get this information out to the vendor quickly and stress the importance of the issue," O'Leary says. "Where this gets tricky is when the vendor ignores the issue entirely." Even so, he remains convinced that when an issue arises that touches on public safety, the status quo should be to give the vendor more time to fix the vulnerability. "All avenues should be exhausted before going public with the vulnerability, including going to CERT or outside agencies that can reach people inside the organisation," O'Leary insists, continuing: "The problem with vendors that do not fix the vulnerability is that anyone can find this same issue at any time, and they may not be as ethical as the person that responsibly disclosed it". Which is where the moral justification for publishing vulnerabilities to force the vendor's hand enters the debate. But O'Leary maintains that in cases where lives are potentially in danger, a boundary must be drawn: when a vendor is unwilling to fix a vulnerability, even though it could cause people harm, any public disclosure should not include full details of how the vulnerability is exploited. "Only the end result should be shared and the potential implications the vulnerability carries with it," O'Leary told SCMagazineUK.com, citing the Jeep hack as an example. The researchers released the implications, "we can control a Jeep," but didn't give the exploit code out to the world.

Some commentators, such as Ilia Kolochenko, CEO of High-Tech Bridge, are of the opinion that you cannot handle medical device vulnerabilities, which carry the potential for injury or death rather than data loss or financial loss, in the same way as other software bugs. "Vulnerabilities that may affect human lives should ideally never ever be disclosed or even mentioned in public until we can make sure that every single medical device has this vulnerability patched," Kolochenko told us, insisting, "otherwise security researchers will also be responsible for all the bad things that may happen." Kolochenko also thinks the law needs to change so that medical device manufacturers and vendors carry direct financial and legal responsibility for security. While acknowledging that it's impossible to develop totally flawless code, Kolochenko argues that many security researchers disclose vulnerabilities in medical devices because manufacturers and vendors ignore them, or worse, try to buy their silence without any intention of releasing patches. "Security researchers should be able to report such vendors to the nearest police station," he says.

Stephen Cox, chief security architect at SecureAuth, meanwhile, doesn't think the rules of responsible disclosure change when there is a potential danger to life. Instead, he told SC that determining the risk and the life-threatening nature of a vulnerability is best served by collaboration between the vendor and the researcher. "The major problem we face today is that there is still a gigantic trust chasm between vendors and security researchers," says Cox. He adds: "We need to work on building that trust relationship."

What Cox recommends is a trusted, non-profit, non-government-backed third party with a charter to ensure that vulnerability information is delivered to the vendor securely, that the researcher is recognised and compensated for his or her work, and that ethical practices are encouraged across the board. Adam Winn, senior manager at OPSWAT, is of a similar opinion. He told us that researchers "should not be forced to follow a separate set of rules when researching vulnerabilities in medical equipment; discoveries often happen simultaneously, and when this happens there's certainly no guarantee that each discovery is by a white hat." Extended delays to public notification could themselves prove harmful and should not even be considered unless backed by empirical justification, Winn says, arguing that "once a vulnerability is revealed to the public, IT administrators are in a much better position to mitigate attacks and detect compromised devices." We will leave the last word, for now, to Qualys CTO Wolfgang Kandek, who agrees that the overriding issue in disclosure is "the see-saw between trust and vulnerability – the security community has to work together for the benefit of everybody."

What do you think? Do the accepted rules of responsible disclosure need to change when life or limb is at risk? Where should the line be drawn when it comes to responsible disclosure, and who has the ethical right to draw it? Join in the debate by commenting below...
