
As vulnerabilities continue to take center stage and organizations look to launch bug bounty and security assurance programs, the competition for good researchers is fierce. But it can be hard for prospective researchers to evaluate these programs. Common questions include: What are the current rates for common vulnerabilities? How experienced is the team performing report triage? Does a security team or expert decide what qualifies as a vulnerability?

Researchers generally doubt that a product and development team can fairly classify a bug report as a security vulnerability because of a perceived lack of security expertise. Instead, they tend to trust a program more if a dedicated security team (ProdSec, AppSec, PSIRT, and the like) makes the call on whether a report is a security vulnerability.

Security researchers also like to review program statistics, which often include total payouts for the previous year. Higher total payouts often indicate there are more bugs to be found, while lower payouts may mean fewer bugs and/or smaller rewards. They also care about average time to triage: the time between submission and acknowledgement of a report as a valid or invalid vulnerability. This tells researchers whether the program is experienced and sufficiently supported.

Time between submission and payment, or average time to bounty, is also important. Again, shorter is better; it often means the program doesn't have funding issues and values the researcher's time. Longer bounty turnarounds sow uncertainty.
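As a rough illustration, here is a minimal sketch of how a program might compute these two metrics from its own report records. The field names and sample data are hypothetical, not taken from any particular bounty platform:

from datetime import datetime
from statistics import mean

# Hypothetical report records; a real program would pull these from
# its bug bounty platform or ticketing system.
reports = [
    {"submitted": "2024-01-03", "triaged": "2024-01-05", "paid": "2024-01-20"},
    {"submitted": "2024-01-10", "triaged": "2024-01-12", "paid": "2024-02-01"},
    {"submitted": "2024-02-02", "triaged": "2024-02-08", "paid": None},  # no bounty yet
]

def days_between(start: str, end: str) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

# Average time to triage: submission -> valid/invalid decision.
avg_triage = mean(days_between(r["submitted"], r["triaged"]) for r in reports)

# Average time to bounty: submission -> payment, over reports paid so far.
paid = [r for r in reports if r["paid"]]
avg_bounty = mean(days_between(r["submitted"], r["paid"]) for r in paid)

print(f"Average time to triage: {avg_triage:.1f} days")
print(f"Average time to bounty: {avg_bounty:.1f} days")

The numbers themselves matter less than the habit of measuring and publishing them; that is what lets researchers judge whether a program is staffed and funded to respect their time.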

It’s no secret that vendors prefer lower severity ratings and lower payouts, while researchers prefer higher ones. A low payout average may lead researchers to believe that vulnerabilities will be downplayed and that higher severity issues won’t get the attention they deserve. Very high payouts may signal that no low-hanging fruit remains, which can dissuade new researchers from joining the program; it can also signal that the time investment required to find a vulnerability is very high. A moderate payout average is often best.

Another area that can dissuade researchers involves the legality of vulnerability reporting. Coordinated vulnerability disclosure policies can be tricky to follow when one party doesn’t feel valued. Safe Harbor is meant as a promise but can also read as a threat.

Safe Harbor is a clause in a vulnerability disclosure program policy stating that researchers need not fear legal consequences from the target of their security research (and may even receive extra protections) as long as that research is performed in good faith and in cooperation with the target. Researchers therefore want to know how vendors handle violations and how to get support when a vendor is not following its own stated policies. These are important processes to have defined.

What do researchers value in a program? In my experience, there are six primary areas.

Speed

As mentioned previously, faster is better. Researchers want decisions made, responses sent, vulnerabilities disclosed, and rewards distributed quickly.

Humanity

Researchers want to hear from people, not a corporation or its lawyers. They want to be treated and spoken to like human beings, not research robots.

Transparency and Accessibility

There should be as few barriers as possible to submitting a bug report. Do researchers need a tax ID, an email address, or encryption software? How can a researcher best engage with the vendor and program? What tools, software, hardware, and training do researchers need to invest in? And how does, or will, the vendor invest in its researcher population? Answer these questions and be transparent.

Expertise

Researchers spend time learning about a product and how it works before they can find ways to break it or make it do things it was not designed for. As a result, they expect that the people reading their reports have at least that same level of expertise.

Advocacy

Researchers don’t work for the vendor; that’s one reason why they submit through a bug bounty program. They can’t be involved in all the conversations, so they need an advocate inside the company.

Recognition/Rewards

Bug bounty programs are an evolution of the vulnerability disclosure program (VDP), which is built entirely on the concept of “See something, say something.” A bounty program adds a reward incentive rather than relying on researchers to report simply because it is the right thing to do. Ensuring that the incentives offered match the researcher’s goals is crucial. Do they want to be recognized or stay hidden? Do they want payment? Are they even allowed to receive rewards?

For vendors, there’s a lot to consider when attracting talented researchers. Present program statistics at least annually (if not in real time). Publish the decision matrix for rewards and recognition so researchers know what it takes to earn a top bounty. Make documentation available that explains the process a bug report undergoes once it leaves the researcher’s hands. Set service-level agreements (SLAs) for key steps in the process and publish stats on how well the team meets them. Use human language, and understand that everyone makes mistakes in this process, but they’re rarely malicious. Seek first to understand before passing judgment.
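As one illustration of that last point on SLAs (the targets and measurements below are hypothetical), publishing SLA adherence can be as simple as counting how many reports met each target:

# Hypothetical SLA targets, in days; each program defines its own.
SLA = {"triage": 5, "bounty": 30}

# Example per-report measurements: days elapsed for each step.
measurements = [
    {"triage": 3, "bounty": 21},
    {"triage": 7, "bounty": 25},
    {"triage": 4, "bounty": 45},
]

for step, target in SLA.items():
    met = sum(1 for m in measurements if m[step] <= target)
    pct = 100 * met / len(measurements)
    print(f"{step}: {met}/{len(measurements)} reports within {target}-day SLA ({pct:.0f}%)")

Even a simple report like this, published regularly, tells researchers that the program holds itself accountable to the same standards it asks of them.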

Finally, if you want to learn more about bug bounties, consider these organizations: Bug Bounty Community of Interest, FIRST, and OWASP. As bug bounty programs mature and become more humanized, vendors and researchers can work together to create policies and processes that are rewarding for all involved.