Wanted: Patched or Hacked

Written by Harry Papadopoulos on 13/10/2017

It is a well-known fact that everyone wants their life to be, more or less, as simple as possible. Continuous advances in technology show that this is achievable, possibly sooner rather than later. This is also where the problem appears: the faster we move, the sloppier security becomes. A lot of companies, in order to prove that they do care about security, openly expose their services in a kind of bug bounty hunt. In these cases, everything is nice and peachy... until it isn’t.

A fistful of dollars

What’s the actual value of a bug bounty programme? By actively challenging everyone to attack your services in exchange for money, you don’t only attract security researchers (whitehats) who will raise the issue with the company itself, but also the malicious types (blackhats) who are in it for the exploitation. The main difference between the two is that whitehats do it for better security, for the benefit of digital society as a whole, whereas blackhats are in it for the lulz and the profit. Blackhats can benefit by finding a serious bug and selling it as a 0-day vulnerability to the highest bidder. After some time has passed, they can even disclose the same vulnerability to the company itself and claim the bounty too.

Once upon a time in the West

What’s new to the whole process, you ask? Well, to start with, if something goes wrong during the exploitation of a vulnerability, a blackhat could simply disclose all the details (along with a huge apology) and claim it wasn’t clear where to stop. As they used to say in the old wild west: “You said that you wanted the wanted man alive, not that he must be able to walk too.” The argument in favour of bug bounties is to do with transparency and the importance a company places on security. It promotes better measures and continuous improvement, and users are happy to hear that their favourite website patched a serious vulnerability. So am I saying that patching vulnerabilities is bad? No, of course not. I’m saying it’s an interesting discussion point: instead of making sure there are no vulnerabilities in their application in the first place, companies release code they know probably contains security flaws and hope other people will sort it out. It’s a sort of unofficial subcontracting of their security to the whitehats and blackhats.

Before you start shaming me, let me make it clear that I am not against bounty hunting, and that I may or may not be a bounty hunter myself. As a security professional I like to test the security of the products and applications I use, and what’s better than having the company’s permission to do it? Even better, if you find a vulnerability you get paid! If I were being humorous, I’d write it as the following equation: Fun + Research + Skill Improvement = Moneyz! On the other hand, once problems start getting reported, they also need to be fixed. And sharpish, as blackhats may well be using the exploit in the wild. The vendor’s teams rush into fixing them, but without a dedicated internal security team there’s a chance that the fix opens up a new, undiscovered hole.

Bad day at black hat

Thankfully, the bigger companies tend to have an internal security team (or a third party as a managed service) that takes care of security issues and mostly prevents anything serious from happening. As we security wizards are not almighty, we might miss something, and on these occasions it is good that fellow wizards might pick up what we missed and fix it. This can be considered good practice, but it makes the job of said wizards even more difficult. “You said that not having a security team is bad. Now you are saying that having one is even worse. What is the problem with you?” Hang on. Hold your horses. I’m not saying that having a security team is bad. I am saying that in the case of an actual incident, their response time might be slower or even non-existent. When you openly give everyone permission to assess your security and try to find your holes, it is twice as hard to spot a malicious act. To put it even more simply: when everyone has permission to break your locks, leaving the door wide open, how will you know they didn’t take a sneak peek at what’s inside?

The good, the bad, and the money

Eventually it all comes down to this: internal security or open season? The answer cannot be clear-cut, as each approach has its own benefits. The decision must be taken only after a very serious risk assessment of everything at stake, including the reputation of the company. Having open season and waiting for others to find the issues means you must trust your product enough to have faith that there won’t be (m)any issues. A definitive answer cannot be given, only a suggestion that everything matters. It’s well known that a lot of companies stopped their bounty hunt programmes weeks after starting them, as the number of bugs and reports was so big they needed time to patch everything up. Another thing that must be taken into serious consideration is scope. Be absolutely clear about the scope of the programme and set the right limits. It will save you a lot of time when something actually happens.

