Do drug pump disclosures point to culture of insecurity in healthcare?

News by Davey Winder

Researchers have uncovered security vulnerabilities in an insulin pump that had the potential to cost lives. SC asks if medical hardware device vulnerabilities are purely a technical problem, or whether a culture of insecurity is to blame...

Researchers at Rapid7 disclosed three vulnerabilities in the Animas OneTouch Ping insulin pump that were, admittedly, at low risk of wide-scale exploitation. That said, the potential was still there for an attacker to deliver unauthorized insulin injections to the user.

There are a number of reasons why this particular incident raised so many red flags here at SC. Firstly, the potential payload is death; there's no getting around that one simple fact. Secondly, Jay Radcliffe, the researcher involved, had disclosed vulnerabilities in an insulin device some five years earlier. Thirdly, we warned about drug infusion pump vulnerabilities ourselves last year. Finally, the responses from Animas' parent company, Johnson & Johnson, left much to be desired.

Let's start with the last of these, the manufacturer response. Radcliffe himself, a senior security consultant at Rapid7, told SC that "Johnson & Johnson has done a great job of working to understand this issue and act accountably." He explained that Rapid7 and the manufacturer had worked together in order to validate the vulnerabilities and present solutions to users.

However, Johnson & Johnson told the BBC that the risk of exploit was "extremely low" and that it would require "technical expertise, sophisticated equipment and proximity to the pump." It went on to say that the pump remains "safe and reliable", and suggested that patients who were worried could stop using the radio frequency remote (the component that has the potential to be hacked) or limit the maximum dose.

Sorry, but that kind of off-the-cuff response is indicative of the lack of understanding of security as a core competence that leads to such hardware being insecure in the first place. All it would have taken to secure this device was for the radio frequency remote to encrypt its transmissions rather than send them in the clear.
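To illustrate the point rather than describe any real pump protocol, here is a minimal sketch of what protecting remote commands can look like. It uses a shared key and an HMAC plus a monotonic counter to reject forged or replayed dose commands; all names (`make_packet`, `BOLUS:2.0`, the pairing step) are hypothetical, and a production design would also encrypt the payload, not just authenticate it.

```python
import hmac
import hashlib
import secrets
import struct

# Hypothetical shared key, provisioned when the remote is paired to the pump.
SHARED_KEY = secrets.token_bytes(32)

def make_packet(command: bytes, counter: int) -> bytes:
    """Build an authenticated command packet: counter + command + HMAC tag."""
    body = struct.pack(">Q", counter) + command
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    return body + tag

def verify_packet(packet: bytes, last_counter: int):
    """Return (counter, command) if the packet is authentic and fresh, else None."""
    body, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted packet
    counter = struct.unpack(">Q", body[:8])[0]
    if counter <= last_counter:
        return None  # replay of a previously captured command
    return counter, body[8:]

pkt = make_packet(b"BOLUS:2.0", counter=1)
assert verify_packet(pkt, last_counter=0) == (1, b"BOLUS:2.0")
assert verify_packet(pkt, last_counter=1) is None  # replayed packet rejected
```

The counter matters as much as the tag: Rapid7's class of attack includes replaying captured traffic, which authentication alone does not stop.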

That encrypted communication wasn't standard at the time of the device's design, in 2008, is no excuse; the manufacturer should have been driving secure design, not tailgating everyone else.

Speaking to SC, Radcliffe said that "often we forget that these devices are designed to be in use for 8-10 years or longer. This is much longer than a consumer device." However, we note that it hasn't been recalled, nor is there a recommendation that users replace it with a more secure model.

Not that more modern devices, the Internet of Medical Things, are necessarily secure by design either. Billy Rios, a security researcher well known for his work with medical devices including insulin pumps, says that he has not walked away from any medical device he's investigated "without discovering at least one serious issue."

So is the problem primarily technical, or is there a culture of insecurity in the healthcare hardware world? John Steven, internal CTO at Cigital, told SC that from his vantage point the problem appears largely cultural. "Medical device suppliers should be leading adoption of secure embedded software and network components," Steven insists, adding that such components should be "developed defensively." A threat model covering hardened device code, secured communication of data with the back-end and assured provenance of updates isn't exactly rocket science, after all.
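The "assured provenance of updates" piece of that threat model can be sketched just as briefly. The fragment below, with entirely hypothetical names and data, refuses to apply a firmware image whose digest does not match an entry from a trusted manifest; real designs would use asymmetric signatures over the manifest rather than a bare digest.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical firmware image; its trusted digest would normally arrive in a
# separately authenticated (signed) manifest, not be computed locally.
firmware = b"\x7fFW example image bytes"
trusted_digest = sha256_hex(firmware)

def apply_update(image: bytes, expected_digest: str) -> bool:
    """Flash the image only if it matches the trusted manifest entry."""
    if sha256_hex(image) != expected_digest:
        return False  # tampered or corrupted image: refuse to install
    # ...flash the image here...
    return True

assert apply_update(firmware, trusted_digest) is True
assert apply_update(firmware + b"tamper", trusted_digest) is False
```

None of this is exotic, which is Steven's point: the controls are well understood; adopting them is a cultural decision.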

And, as Chris Day, a security researcher at MWR Infosecurity says, it's also true that "embedded, Internet of Things and medical devices are receiving increasing levels of attention from the whitehat security community."

Day hopes that this will encourage improvements in the security of the devices. We are not so sure; after all, healthcare is almost unique in that it spins conventional security thinking 180 degrees. Usability and security are not equal bedfellows, and if usability (in other words, the delivery of patient care) suffers, then security gets kicked out from under the covers.

Therein lies the rub, as Nettitude principal security consultant Matt Gough points out. "Encryption would have solved this issue but this would likely impact the battery life due to the additional load," Gough says. So even if it had been an option, it could easily have been ruled out on grounds of usability over security. After all, what is the likelihood of the right attacker, with the right skills, getting close enough to the right user to do harm?

Yet talk of likelihood and probability, which brings us back to where we started, with Johnson & Johnson insisting this is a low-risk exploit, is often misleading in the realm of health and safety. "It is the relationship between ease of attack and seriousness of consequence that should guide our decisions," Sean McBride, attack synthesis lead at FireEye, told SC, continuing: "we can't control the 'who' or 'how often' but we can do things to reach a threshold of difficulty."

So, let's assume that Rapid7's conclusions are valid and that someone with the right tool (and limited knowledge is required once we've reached the tool stage) could exploit the vulnerabilities to shut down, or ramp up, dosage delivery from the other side of a rugby field. That, as McBride concludes, "should give us all some pause for thought..."
