March 7, 2024 Marie H.

Handling HackerOne Bug Bounty Reports: An Engineer's Perspective


Most engineers interact with security through internal processes — quarterly scans, pen tests on a fixed schedule, maybe a security champion rotation. Bug bounty is different. Reports arrive unpredictably, the researchers range from methodical professionals to people who found one thing and want their $200, and the technical depth required to assess a finding varies enormously. I spent time as the internal technical point of contact for HackerOne submissions on a recent project, and this is what the process actually looks like from the engineering side.

How Reports Come In

The HackerOne workflow from an engineer's perspective: a researcher submits a report through the HackerOne platform. The security team does initial triage — they're looking at whether the report is intelligible, whether it's in scope, and whether it's a known/duplicate issue. If it passes that gate, an engineer gets looped in for technical assessment.

That engineering handoff is where things get slow if you don't have a clear process. The security team often can't fully assess technical impact without someone who knows the codebase. I've seen reports sit for 2-3 weeks at this stage because the right engineer wasn't identified quickly, or because nobody took ownership of getting the technical verdict down in writing on the platform.

The practical fix: make sure your security team has a clear escalation path to specific engineering owners for each product area. Not "the backend team" — a named person and a backup. When a report comes in for your scope, you have 24-48 hours to provide a technical assessment. That's the SLA that matters.
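As a sketch, that ownership map can be as simple as a checked-in dictionary that triage tooling reads from (the areas and addresses here are hypothetical, not from any real program):

```python
# Hypothetical ownership map: each product area names a primary owner
# and a backup, so triage never stalls on "ask the backend team".
ESCALATION = {
    "payments-api": {"owner": "alice@example.com", "backup": "bob@example.com"},
    "web-frontend": {"owner": "carol@example.com", "backup": "dan@example.com"},
}


def assessment_owner(area: str) -> str:
    """Return the named engineer responsible for technical assessment."""
    entry = ESCALATION.get(area)
    if entry is None:
        # An unmapped area is a process bug: fix the map, don't guess.
        raise KeyError(f"no engineering owner registered for {area!r}")
    return entry["owner"]
```

The point of the lookup failing loudly is that "report arrived for an unmapped surface" should be visible on day one, not discovered three weeks into the SLA.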

SSRF Findings

Server-Side Request Forgery shows up in bug bounty reports regularly because it's a class of vulnerability that's hard to detect with static analysis and easy to find with manual testing. The pattern: an attacker can supply a URL or hostname to a server-side function, and the server makes an HTTP request to that destination, potentially on the attacker's behalf.

When you get an SSRF report, the first question is: can the SSRF reach internal endpoints? On GCP, the metadata endpoint is 169.254.169.254. A successful SSRF to that address can expose the service account credentials of the instance or node running the vulnerable service. That's a full credential compromise, not a theoretical finding. Treat any SSRF that can reach the metadata endpoint as critical.

Assessment checklist for SSRF:
- Can the attacker control the full URL, or only part of it?
- Is there a response returned to the attacker (full-read SSRF) or just a server-side effect (blind SSRF)?
- Can it reach 169.254.169.254, internal VPC addresses, or other non-public endpoints?
- What service is making the request? What permissions does its service account have?

Remediation patterns that actually work: an allowlist of valid destination domains/IPs (not a blocklist — blocklists get bypassed), validation that the resolved IP is not in RFC 1918 or link-local ranges before making the request, and response validation so the function doesn't return arbitrary HTTP responses to the caller. GCP's metadata server does require the Metadata-Flavor: Google header, which blocks naive SSRF, but that shouldn't be your only control.
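A minimal sketch of the allowlist-plus-resolved-IP pattern in Python, using only the standard library. The allowed host is hypothetical, and production code should connect to the validated IP rather than re-resolving the name, to avoid a DNS-rebinding race:

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hypothetical allowlist of destinations this service is permitted to call.
ALLOWED_HOSTS = {"api.partner.example.com"}


def validate_outbound_url(url: str) -> str:
    """Return a validated public IP for the URL, or raise ValueError."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"unsupported scheme: {parsed.scheme!r}")
    host = parsed.hostname
    if host is None or host not in ALLOWED_HOSTS:
        raise ValueError(f"host not on allowlist: {host!r}")
    # Check every resolved address, not just the first: an allowed name
    # must not resolve to a private, link-local, or loopback address.
    infos = socket.getaddrinfo(host, parsed.port or 443, proto=socket.IPPROTO_TCP)
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_link_local or ip.is_loopback or ip.is_reserved:
            raise ValueError(f"host resolves to non-public address {ip}")
    return str(ipaddress.ip_address(infos[0][4][0]))
```

Note that the allowlist check rejects 169.254.169.254 and internal names before any request is made, which is exactly the property a blocklist fails to guarantee.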

NTLM Relay Findings

NTLM relay findings are less common in cloud-native stacks but still show up in enterprise environments because of Windows-integrated authentication endpoints that haven't been fully migrated. The attack: if an endpoint accepts NTLM authentication and is reachable from an adversary-controlled position, an attacker can relay a user's NTLM handshake to a backend service and authenticate as that user without ever learning the password.

When you get this report, the assessment questions are:
- Is the endpoint externally reachable, or is it internal-only?
- Does it actually handle NTLM auth? Some endpoints advertise NTLM in the WWW-Authenticate header as a fallback but primarily use other auth.
- What backend service would a relayed credential reach? A read-only reporting service versus an admin panel is a very different risk.
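A quick way to answer the second question without touching production auth is to request the endpoint unauthenticated and look at which schemes the challenge advertises. A standard-library sketch (it assumes the endpoint answers an anonymous request with a 401 challenge; the URL is whatever the report names):

```python
import urllib.error
import urllib.request


def advertised_auth_schemes(url: str) -> set[str]:
    """Collect auth schemes from the WWW-Authenticate headers of an
    unauthenticated request. Returns an empty set if no challenge."""
    req = urllib.request.Request(url, method="GET")
    try:
        urllib.request.urlopen(req, timeout=10)
        return set()  # 2xx with no credentials: no challenge issued
    except urllib.error.HTTPError as e:
        # A 401 carries one or more WWW-Authenticate headers, e.g.
        # "NTLM" or "Negotiate"; keep just the scheme token.
        headers = e.headers.get_all("WWW-Authenticate") or []
        return {h.split(None, 1)[0] for h in headers}
```

Seeing NTLM in the output tells you it's advertised, not that it's actually exercised, so this narrows the assessment rather than settling it.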

Remediation: the right fix is to move to Kerberos where possible, which is resistant to relay attacks. Where you can't do that immediately, require Extended Protection for Authentication (EPA), which uses channel binding to tie the NTLM handshake to the underlying TLS connection and prevents cross-service relay. Disable NTLMv1 everywhere — there's no reason for NTLMv1 to be enabled in 2024. And if the endpoint doesn't need NTLM at all, disable it and require modern auth.

Access Control Findings

This is the most common class of report in any bug bounty program, and the quality varies enormously. At one end you have researchers who have actually exfiltrated sensitive data and documented it cleanly. At the other end you have reports that are theoretically valid but require a chain of preconditions that don't hold in practice.

The triage question is always: is this actually exploitable in the production environment? A broken object-level authorization finding that requires an authenticated session in a role that doesn't have external-facing access is different from one that works with a freshly registered free account. Assess the actual exploitability, not just the technical existence of the vulnerability.

For IAM-level findings (overly permissive GCP IAM roles, public GCS buckets), the data sensitivity is the primary driver. An over-permissioned service account that can read a dev logging bucket is very different from one that can read a bucket containing PII.

The Triage Framework

We use: Impact (CVSS score) + Exploitability + Data sensitivity = priority. Importantly, do not auto-prioritize on CVSS alone. CVSS is calculated in isolation from your environment. A CVSS 9.8 in a third-party library whose vulnerable codepath your application never exercises is less urgent than a CVSS 6.5 in a user-facing authentication endpoint that millions of people hit every day.

The questions that determine actual priority:
1. What is the realistic attacker profile to exploit this? Anonymous internet user, authenticated user, or does it require a privileged insider position?
2. What data or systems are exposed if exploited?
3. Is there evidence of exploitation in logs? (For reported vulnerabilities that have been in place for some time, check your audit logs.)
4. What is the fix complexity? A one-line patch that can deploy today is handled differently than a 3-month architectural change.

High CVSS + low exploitability + no sensitive data = fix it in the next sprint.
Medium CVSS + trivially exploitable + PII exposure = treat as P0 regardless of CVSS.
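Those rules of thumb are easy to encode. A sketch of the framework, where the weights, thresholds, and category names are all invented for illustration and would need tuning to a real environment:

```python
from dataclasses import dataclass


@dataclass
class Finding:
    cvss: float            # 0.0-10.0, as reported
    exploitability: str    # "anonymous", "authenticated", or "insider"
    data_sensitivity: str  # "none", "internal", or "pii"


# Hypothetical weights: discount CVSS by how hard the attacker position
# is to obtain and how sensitive the exposed data actually is.
EXPLOIT_WEIGHT = {"anonymous": 1.0, "authenticated": 0.6, "insider": 0.3}
DATA_WEIGHT = {"none": 0.3, "internal": 0.6, "pii": 1.0}


def priority(f: Finding) -> str:
    """Environment-adjusted priority, not raw CVSS."""
    # Trivially exploitable PII exposure is P0 regardless of CVSS.
    if f.exploitability == "anonymous" and f.data_sensitivity == "pii":
        return "P0"
    score = f.cvss * EXPLOIT_WEIGHT[f.exploitability] * DATA_WEIGHT[f.data_sensitivity]
    if score >= 7.0:
        return "P0"
    if score >= 4.0:
        return "P1"
    return "next-sprint"
```

The useful property is that the PII override and the exploitability discount fall out of the two rules above: a 6.5 hit by any fresh account outranks a 9.8 that needs an insider and exposes nothing sensitive.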

Communicating With Researchers

Bug bounty researchers invest real time into finding and documenting findings. A report with a working proof-of-concept, clean reproduction steps, and a clear impact assessment represents hours of work. Treat it accordingly.

Respond within your published SLA — for us that was 5 business days for initial response, which I think is too long. Two business days is achievable and sets a better tone. Acknowledge valid findings clearly. If you're rejecting a report, explain why — "out of scope" with no elaboration is not a useful response to someone who spent a weekend on a finding.

On timelines: give a realistic fix timeline and stick to it. If the fix is delayed, communicate that proactively. Researchers who've had good experiences with your program submit better reports and notify you first when they find something serious, before they start thinking about other disclosure paths. That relationship has real value.

One thing I didn't fully appreciate until doing this: bug bounty researchers often find things that your internal security scans miss entirely. Scanners look for known patterns. Researchers reason about your application logic. The SSRF and NTLM relay findings we received weren't things any automated scan would have flagged. They were found by someone who sat down, thought about how the application worked, and looked for the places where that logic could be abused.

For Engineers Who Haven't Done This

If you haven't worked security from the engineering side: bug bounty reports are worth taking seriously. The researchers are often more skilled than the reports initially look, and the findings are real. The instinct to dismiss a finding as "theoretical" is almost always wrong when a researcher has taken the time to document it.

Get your escalation paths documented before the first report arrives. Know who owns what surface area. Have a technical assessment template ready so you're not writing from scratch under time pressure. And if a finding is valid, say so clearly and get a fix in the pipeline. The alternative — sitting on a known vulnerability while the fix stalls in the backlog — is how small findings become large incidents.