Security researcher blasts United Airlines' bug bounty programme
Security researcher claims United Airlines sat on serious bug for five months which would have allowed an attacker to access customers' flight details and even cancel flights.
United Airlines launched its bug bounty programme in May 2015
A security researcher has claimed that United Airlines left a severe bug affecting its mobile app unfixed for five months after he reported it.
United Airlines launched its bug bounty programme six months ago to much media fanfare with the promise of paying bug finders in air miles rather than cold, hard cash. Two months after launching the scheme, the airline announced it had awarded one million air miles for a single bug to security researcher Jordan Wiens.
But according to Randy Westergren, a security researcher and senior software developer at XDA Developers, a vulnerability in an API endpoint he reported to the firm was met with silence for five months.
The flaw exposed the personally identifiable information of any member of United's MileagePlus rewards scheme, ironically the same scheme in which bug hunters collect their bounty payouts.
The vulnerability surfaced in a request used to retrieve a customer's upcoming flight details and United Club pass information.
After creating a second test account, Westergren substituted its MileagePlus number into a request made from his original account, and was served the second account's data in response.
“Using just these two values, an attacker could completely manage any aspect of a flight reservation using United's website. This includes access to all of the flight's departures, arrivals, the reservation payment receipt (payment method and last 4 of CC), personal information about passengers (phone numbers, emergency contacts), and the ability to change/cancel the flight,” said Westergren in a blog post.
He said that an attacker could likely gain access to the United Club by “spoofing another customer's barcode value at the entrance, essentially stealing his purchased pass”.
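The flaw Westergren describes is a textbook insecure direct object reference (IDOR): the server trusts the MileagePlus number supplied in the request rather than tying the lookup to the authenticated session. A minimal sketch of the pattern, with invented account numbers, fields and function names (United's actual endpoint is not public in this detail):

```python
# Hypothetical illustration of the IDOR class of bug, not United's real API.
RESERVATIONS = {
    "MP111111": {"passenger": "Alice", "flight": "UA123", "last4_cc": "4242"},
    "MP222222": {"passenger": "Bob",   "flight": "UA456", "last4_cc": "9999"},
}

def get_reservation_vulnerable(session_mp, requested_mp):
    """Trusts the client-supplied MileagePlus number: any authenticated
    user can fetch any other member's reservation."""
    return RESERVATIONS.get(requested_mp)

def get_reservation_fixed(session_mp, requested_mp):
    """Checks the requested record against the authenticated session
    before returning data, the class of check a patch would add."""
    if session_mp != requested_mp:
        return None  # an HTTP handler would return 403 Forbidden here
    return RESERVATIONS.get(requested_mp)

# Alice (MP111111) requests Bob's record (MP222222):
leak = get_reservation_vulnerable("MP111111", "MP222222")
blocked = get_reservation_fixed("MP111111", "MP222222")
print(leak["passenger"])  # Bob's details leak to Alice
print(blocked)            # None: cross-account request denied
```

The fix is purely server-side: hiding or obfuscating the number in the client app would not help, since an attacker can craft the request directly.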
Westergren then submitted his report on the flaw and, while he expected some delay in the response, he did not anticipate it stretching to five months.
After five months, he said he had no choice but to tell the airline that he would publicly disclose the unpatched vulnerability on 28 November, giving it a few more weeks to get a patch in place first. United patched the flaw on 14 November.
Westergren said that while bug bounty programmes are “a great step in the right direction” for IT security, running one effectively is critical.
“Though the intention to publicly disclose the vulnerability appears to have pressured United to fix it, I suspect that the request for comment by media personnel ultimately forced them to take the necessary action,” he said.
United Airlines said in a statement: “The protection of our customers' information is one of our top priorities, and we have extensive security measures in place to safeguard their personal data. We have addressed this issue and are confident that our systems are secure. We remain vigilant in protecting against unauthorized access and will continue to use best-practices on cyber-security to maintain our effectiveness.”
Rahul Kashyap, chief security architect at Bromium, told SC that responsible disclosure should be the “first choice”.
“We need to look at the issue objectively, from both sides – ultimately, the aim of vulnerability disclosure is to produce better, more secure code,” he said.
“In rare cases, the bug discovered might genuinely require major changes and testing efforts from the vendor. In this case, the vendor should respond accordingly and request more time, and the vulnerability discoverer should certainly consider providing more time.”
Gavin Reid, VP of threat intelligence at Lancope, told SC that while we don't know if United “sat” on the bug, “fixing a bug or changing code (especially in home-grown applications) can often have multiple dependencies prompting other work that must be done to support the fix.”
He added that there should be good communications between the researcher and the impacted organisations, letting both sides know exactly where they stand. “There are good reasons to use a third party to run a service like this to ensure consistency of approach – like HackerOne.”
Fraser Kyne, principal systems engineer at Bromium, told SC Magazine that companies always have to balance hypothetical risk with the real cost of managing changes to their infrastructure – but the balance has swung towards being as proactive as possible.
“It's bad enough if something happens and the organisation pleads ignorance. Imagine the fallout if a compromise occurred, followed by the public announcement that the risk was disclosed in advance and nothing was done to prevent it. As far as the duty of care for the researcher: surely finding the problem and reporting it privately to the affected party is enough?”
Ilia Kolochenko, CEO of High-Tech Bridge, told SC that bug bounty programmes are not easy to manage.
“Bug bounties are also expensive to maintain as they require your technical team to be very responsive for any submission (including tons of garbage like automated scans reports). Otherwise, situations like this one will regularly happen damaging your business reputation,” he said.
He said that he had reported an XSS flaw to United just after the bounty programme was announced, and that the airline took 30 days to reply, only to tell him someone else had already reported the same fault.
“This was at the beginning of the year, and the XSS is still unpatched by the way,” he said.
He said many companies blindly follow the bug bounty trend and face very serious problems afterwards; bounties, he argued, mainly suit large companies that can genuinely benefit from crowd testing.
“For example an e-banking application should never be tested with a public bounty, as you will have to give away sensitive information, allow unknown people to test your system and face many other problems (we saw cases when researchers run DDoS attack and claimed ‘bounty' for putting servers down). Despite that you will pay only for results, at the end of the day it will cost you more than a reliable penetration test with guaranteed methodology, timing, schedule and results,” said Kolochenko.
“On the other hand, a new Google or Facebook web service would definitely benefit [more] from a bug bounty than from a penetration test.”