RSA 2015: Bug bounties - accepted but concerns remain

News by Tony Morbin

Bug bounties often get results quicker than in-house teams and pen testers - but concerns remain that there may be unintended consequences.

At the RSA panel discussion 'Bug bounties: Internet saviour, hype, or somewhere between', moderator Jake Kouns, CISO of Risk Based Security, asked whether bug bounties – where companies pay freelance researchers for discovering flaws – are the way ahead, given that many companies that previously said they would never run a bounty now do.

Casey Ellis, CEO of Bugcrowd responded: “They are effective - that's the main thing. There are not enough people on the good side and this connects (software developers) with more resources.”

He was backed up by fellow panellist and bug-bounty user Nate Jones, technical programme manager at Facebook who said: “It's a good way to pay for results, not for time – so it helps us, rather than paying for pen-testing. We won't get the same coverage from pen-testers (who will have to ration their limited time) – so that's really positive.”

Chris Evans, 'Troublemaker' (researcher) at Google, added: “You launch one of these and you increase your results. Your own internal team won't be able to compete with thousands and thousands of people (participating in a bug bounty).”

Ellis, however, did voice a word of caution: “Yes, all companies should have one, but it's not one-size-fits-all. Google and Facebook are able to respond to reported vulnerabilities.”

An informed speaker from the floor added: “If you are not ready, you shouldn't do a bug bounty. If you haven't done internal pen-testing you are not ready. Only when you are ready is a bug bounty appropriate and then you can choose to go direct, go to third party, or crowd source.”

Jones, who said that Facebook received 14,000 reports last year as a result of bounties, added: “They help us incentivise people to reach out to us in a way we would like, and they can impact how they report, what they report, allowing us to communicate internally and externally. We do need to discuss it with our internal teams.”

A major downside identified was the potentially high volume of responses, with Evans pointing out that many people believe they are reporting really critical issues when mostly they are not, so it can be difficult to cut through the noise.

Ellis pointed out: “A lot of people participating have English as their second language so frameworks need to be as complete as possible to get the response you want. You need to be able to differentiate good behaviour from bad. If customers are using our infrastructure for single source IP, different hacks, using tagging etc, it's not a problem and while we can't solve it 100 percent, we can ensure it's manageable.”

Another speaker from the floor pointed out that not all bugs are equal in terms of exploitability, and asked the panel, “How do you decide how much to pay?”

Evans responded: “We tell researchers as much as we can, we have a tabular format showing how much we pay for what bug type, and the likely price range. Certain bugs are particularly exploitable, and the more likely they are to be used, the better the bonus.”


Ellis said it's necessary to align expectations and be consistent with follow-through. “Decide what behaviour you want to attract from researchers. A high level of creative effort goes into (discovering bugs), so encourage them to do it again – and tell friends to do it again.

“It's a marketplace, so prices are going up – the more you offer, the more you get… I believe prices paid will eventually standardise – but we're nowhere near that yet.”

Evans noted that one control to regulate response is the price you set, and this can affect volume, so you should start with a lower price, get some good bugs identified and dealt with, then turn the dial up.

A speaker from the floor added that a better model is needed to reward bounty hunters, and that there should be a trade-off, because companies are paying less than they pay pen-testers and getting better results. It was noted that researchers are usually working on something else when they find bugs.

Ellis explained the disparity between what companies pay for a vulnerability and its value on the black market, saying that the risk model is 100 percent in favour of the supply side: closing a vulnerability is a one-off action and saving, whereas for criminals an exploit can be used multiple times, so it's worth more. It was accepted that currently it is the ‘hunters' taking on the risk, and that while there are people in India and the Philippines making a living this way, that is not the case in the West.

“How do you wind down a bounty programme without pissing off researchers?” asked one delegate. Jones answered: “You are going to upset someone, either because you are not meeting their expectations or simply because you are winding back the operation. So you need to align expectations – then wind back.”

Kouns, however, questioned why, if a bounty programme is uncovering a wide range of security issues, a company wouldn't want to keep running it.

While there was no single answer on how to prepare a product for testing, given the permutations of what's being tested and their risk profiles, the panel advised that companies put thought into this aspect.

Jones noted how Facebook runs an entire test-Facebook with test users, and gets people to research with test information rather than real users. And Evans advised that you should have a good idea of how buggy the product is beforehand, based on traditional testing.

A further question from the floor: how much overlap is there between the legal and illegal sale of exploits – is there a grey market where the two meet, or are they separate economies?

Panellists believed that most individuals ‘choose their hat colour' beforehand, rather than deciding after finding an exploit. One delegate suggested, however, that researchers may go to a different buyer if they can't get what they want from the vendor.

He went on to complain that the corporate world drastically underpays in its bug-bounties, saying criminals and governments vastly outbid them “if I have good bugs.”

Ellis agreed that there needs to be some economic parity. But a call for governments to subsidise bug bounties failed to get much traction, and it was noted that many researchers do this for more than just money – including recognition and contributing to the greater good – with the comment: “We decided to use our skill for good, not to rob the bank.”
