Bug Bounty Guide

Everything I know about finding vulnerabilities, from recon to report. Written from experience, not theory.


Introduction

I started bug bounty in 2021, during my studies. I spent almost a year finding nothing. Not for lack of motivation, just because I was doing it on the side and hadn't figured out how to approach a program yet.

The turning point came during an internship at BZHunt, a French pentest and bug bounty company. I saw people actually finding bugs, day in day out, and I realized it was possible for me too. My first real bug came shortly after: 100 euros. Not huge, but symbolic.

Today, four years later, I do this full time. Around 100k dollars a year, bug bounty only.

What changed most between 2021-me and now isn't the technique. It's understanding that it was possible, and starting to believe it. The mental side is a huge chunk of the job, more on that later. The other big shift is AI, these past two years. Not as a gadget, as leverage. It handles everything tedious (recon, POCs, parsing, routine), I focus on what actually makes the difference: business logic and ideas.

This guide is my current methodology. Simple, minimalist, and working for me day to day. The idea: you walk away with two or three concrete things to apply tomorrow, not with theory you won't remember.

Before you jump in

Quick reality check because it matters. Bug bounty is hard. Really hard. AI doesn't change that, it just makes the tedious parts less tedious. You're going to spend weeks finding nothing. Months, sometimes. You'll watch other hunters drop 10k bounties in an evening while you've been grinding for 15 days on a program that just N/A'd you because they didn't understand your bug.

If you're in it for easy money, stop right here. There are a hundred easier ways to make cash. The ones who last in bug bounty are the ones with real passion for problem solving and hacking, the ones who can sit in the void, the ones for whom the game itself is the reward.

Without that passion, you'll crack before making a living from it. Might as well tell you now.

What this guide is, and what it isn't

My methodology. Not THE methodology. My point of view, my choices, my biases. There are 50 ways to do bug bounty, I'm showing one.

What you won't find here:

  • Not a course for complete beginners. You should already know web basics, the classic OWASP vulns, how to read an HTTP request. If you don't know what a session cookie is, this isn't for you.
  • No "OWASP top 10", no "how to install Burp", no basic XSS tutorial. You find that everywhere.
  • Not an exhaustive guide. I only cover what I personally use.

If you're starting from zero: PortSwigger Academy, HackTheBox, disclosed reports on HackerOne, and writeups on other great blogs. Come back here once you can find a bug on your own and want to level up. And honestly, in 2026, I'm not sure it's the right time to start bug bounty from scratch. That's my take.

In the AI era

Everyone's asking: is AI going to kill bug bounty? Short answer: no, but it will transform it deeply, and those who don't adapt will disappear. I've written the full analysis in this article.

What to take away:

  • Simple bugs, the classic low-hanging fruit, will increasingly be picked up by automated LLM-powered tools.
  • Value is shifting to business logic, complex workflows, edge cases that AI doesn't understand without human context.
  • The hunter using AI as leverage beats the hunter who ignores it. But they lose against the hunter who combines AI with human creativity.
  • Your competitive edge no longer comes from typing commands fast, it comes from asking the right questions and having original ideas.

This guide is written with that in mind. AI is everywhere in my methodology, as a multiplier, not a gadget.

About

I'm Cassim, aka aituglo. I've been doing bug bounty full time for 4 years now and I love exploring other fields. I started as a developer and moved to the attacker side later.

I post my thoughts on aituglo.com and X if you want to follow what I do day to day.

If you want the next guides and articles as soon as they're out, subscribe here.

Choosing a program

TL;DR. Pick a program you like, built around an app you actually use or would use daily. It keeps you focused instead of hunting bugs just to hunt bugs. Pick a program that respects you on your own criteria, and stick with it long enough to get your bearings and know the scope by heart.

Enough context. Let's start with the most underrated decision in bug bounty: picking your program. It's a big part of the job. Pick wrong and you'll burn weeks for nothing. Take your time.

Which platform to pick

Personal choice. Each platform has its pros. I've mostly hunted on YesWeHack and HackerOne. YesWeHack has a responsive team and efficient triage (slower since the AI boom, but still solid). And as a French hunter it's convenient, a lot of the programs are sites and apps I use every day.

HackerOne has tons of programs. Some triagers don't really get your bugs, which gets frustrating fast. Personally, I prefer programs that manage their own triage without going through the platform. More efficient, no middleman.

Wide or deep

There are roughly two kinds of programs. Both work, but not with the same approach.

|  | Wide | Deep |
|---|---|---|
| Scope | Large (`*.company.com`) | One app, few assets |
| Recon effort | Heavy, it's the core of the job | Light, quickly done |
| Bug types | Subdomain takeovers, forgotten services, default configs, low-hanging fruit on abandoned surface | Business logic, complex IDORs, auth flows, race conditions |
| Time to first bug | Fast if you scan well | Slow, you need to understand the app |
| Strategy | Sweep wide, prioritize what surfaces | Become the app's expert, dig deep |
| Examples | Yahoo, Epic Games, Shopify | A B2B SaaS, a fintech app, a marketplace |

Personally, I prefer deep. You spend more time before your first bug, but once you know the app better than some of the company's own devs, you find things nobody else will. Especially now, with AI letting me parallelize multiple angles on the same app.

Scope quality

Read the scope. Actually read it.

How many assets are in-scope? Which vulns are accepted or rejected? Are there weird restrictions (no automated scans, no testing on prod)? Is the program paid or VDP?

A tiny scope with 5 endpoints on one domain is almost guaranteed wasted time. Unless the company pays very well and the surface is genuinely complex.

Program responsiveness

Before investing, check the public stats: average time to first response, average time to resolution, bounty-per-report ratio.

A program that takes 3 months to respond is a program where you'll die waiting. Avoid it, unless you love it and you don't mind. Everyone has their own criteria. I prefer active programs even if they pay a bit less. Velocity matters more to me than average ticket size, but again, it depends on you.

Ask other hunters too. A program that consistently pays within 3 months is a strong signal. A responsive program, open to discussion, is one you can invest in. Sure, one hunter may have had a great experience where another got burned. Don't take it at face value, but it helps you form an opinion.

Public or private

I know a lot of hunters really want private invites. Sure, it's nice. But it doesn't fix everything. Fewer invited hunters doesn't automatically mean the program is full of bugs waiting for you.

Public programs are often bigger, with a bigger scope, and there's room for everyone.

That said, private programs are still a goldmine. To get in:

  • Polish your early reports on public programs
  • Aim for a high signal (few N/As, few invalids)
  • Be patient, invites come
  • Do the challenges. On YesWeHack for example, the Dojo often gets you private invites pretty quickly

Enjoying the program

A fun program is often one built on an app you already use every day. It helps you stay focused and really know the application. Most of the time, I hunt on apps I already subscribe to or use daily.

Setup and recon

TL;DR. Five tools mastered beat forty tools installed. Claude plus MCP drive your recon in 30 minutes instead of a full day. One CLAUDE.md per program, otherwise you redo the same work three times.

You've got your program. Before you touch the target, two things to lock down: your setup and your recon.

Setup-wise, everyone has their own style. A lot of hunters build stacks with 40 tools. Mine is simple and minimalist. You don't need 40 tools, you need 5 mastered ones. Recon-wise, it's the most important phase and the one everyone rushes. Done right, it saves you a lot of time.

Claude

My Swiss army knife. Today, I barely hunt without AI. Not out of laziness, for velocity.

  • Claude Code locally for long hunt sessions. It orchestrates, calls the MCPs, and holds the project context via the CLAUDE.md.
  • The web interface for brainstorming, one-shot questions, and analyzing specific requests I paste in.
  • I'm on the Claude Max plan to have enough tokens.

For those who prefer something less Anthropic-dependent, I've heard great things about opencode, where you can plug in any LLM, local or remote.

The trap to avoid: asking the AI to "find vulns". It'll hallucinate. The right use is a junior assistant you delegate repetitive tasks to (recon, JS parsing, payload generation, writing POCs), and who you brainstorm with.

MCP

MCPs change everything. Without them, it's like a dev without an IDE. Especially in our field, with our own tools, you want to give Claude easy access to the right ones.

An MCP gives Claude context and tools it can call directly.

The ones I mainly use for hunting:

  • Playwright: a must for anything that needs to be logged in or needs a real Chrome. Much faster than the official Claude in Chrome extension.
  • Caido: I mostly use the Caido skill (more on that below), but there's an MCP too. Both work and let Claude drive the proxy: read request context, replay requests, and dig into findings with fresh ideas.

There are plenty of others and it's easy to plug new ones in. Depends on what you're trying to achieve.

The proxy: Caido

Essential for seeing requests, replaying them, understanding them. Two tools dominate the community: Burp Suite and Caido.

Both are great, and I used Burp for a long time. But today I largely prefer Caido.

Lighter, more customizable, and the team ships new features every week. It's also super easy to add plugins or workflows to really tailor your setup.

I'm biased as a Caido ambassador. But try it, you'll see.

VPS

A small VPS at 5 euros a month is more than enough. Good for:

  • Receiving callbacks (SSRF, blind XSS, OOB)
  • Running long scans without eating your machine
  • Continuous monitoring (DNS changes, new subdomains, JS diffs)
  • Launching and managing Claude sessions
  • Hosting payloads or a domain for callbacks

Hetzner or Contabo do the job.
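For callbacks, you don't need anything fancy. Here's a minimal sketch of a logger I'd run on the VPS, standard library only (the port and log path are placeholders, adapt them):

```python
import http.server
import datetime

LOG_FILE = "callbacks.log"  # placeholder path

class CallbackHandler(http.server.BaseHTTPRequestHandler):
    def _log_hit(self):
        # Record timestamp, source IP, method, path and headers for every hit
        entry = (
            f"{datetime.datetime.now(datetime.timezone.utc).isoformat()} "
            f"{self.client_address[0]} {self.command} {self.path}\n"
            f"{self.headers}\n"
        )
        with open(LOG_FILE, "a") as f:
            f.write(entry)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    # Same handler for every method we care about
    do_GET = do_POST = do_PUT = _log_hit

    def log_message(self, *args):
        pass  # keep stdout quiet; everything goes to the log file

def make_server(host: str = "0.0.0.0", port: int = 8000):
    """Build the listener; call .serve_forever() on the result."""
    return http.server.HTTPServer((host, port), CallbackHandler)
```

Run `make_server().serve_forever()` on the VPS, point blind payloads at `http://your-vps:8000/some-marker`, and tail `callbacks.log`. For anything serious (DNS interaction, correlation IDs), a dedicated tool like interactsh does more, but this covers the basics.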

Notes

Not a hacking tool, but central. A good note system is the difference between picking up a program 3 weeks later in 5 minutes versus having to re-read everything for 2 hours.

These days, keeping notes in .md is super useful: portable, versionable, usable by Claude Code directly. Obsidian is a good pick, more on that in the toolbox section.

Personally, I also use pen and paper to think. It helps me see what I need to do and where I am. But anything structural lives in my .md files, sorted by target.

Claude-driven recon

Give Claude the scope, the tools via MCP, and let it sweep. It will enumerate subdomains, identify tech stacks, pull JS files, tag suspicious endpoints.

You get in 30 minutes what used to take a day. You come back to validate and prioritize.

The program's CLAUDE.md

For each program, a dedicated folder with a CLAUDE.md at the root. It's the file Claude automatically reads at the start of every session, and it's also what lets you come back to a program 3 weeks later without having forgotten everything.

Here's the skeleton I use, adapt it to your program:

# [Program Name] — Context

## Scope

### In-scope
- `app.example.com`
- `api.example.com`
- `*.example.com` (except the exceptions below)

### Out-of-scope
- `blog.example.com`
- `status.example.com`
- Third-party integrations (Stripe, Mailgun, Segment, etc.)

### Program restrictions
- No automated scanning above 5 req/s
- No social engineering
- No testing on real user data
- No using found bugs beyond the POC

## Program

- Platform: HackerOne / YesWeHack / Direct
- First response SLA: 3 days
- Company self-triages: yes / no
- Bounty table: low 100 to 250, medium 500 to 1500, high 2000 to 5000, critical 5000+

## Tech stack

- Frontend: React 18 + Next.js 14
- Backend: Node.js (Express) + Postgres
- Auth: JWT + sessions, Google OAuth
- Cloud: AWS, Cloudflare CDN
- Upload hosting: S3 via pre-signed URLs

## Test accounts

- **User A** (standard member, org 1): credentials in bitwarden
- **User B** (standard member, org 2): credentials in bitwarden
- **Admin** (if available): credentials in bitwarden

You need at least 2 accounts (A and B) in different orgs for cross-tenant IDOR testing.

## Roles and theoretical permissions

- **Owner**: full rights on the org
- **Admin**: manages users and billing, cannot delete the org
- **Member**: read and write on shared resources
- **Viewer**: read-only

Documenting what each role can do helps spot privilege jumps.

## Mapped surface

### Interesting endpoints
- `GET /api/v1/users/:id` — direct IDOR candidate
- `PUT /api/v1/organizations/:id/members/:user_id` — verify role check
- `/api/admin/*` — check if a non-admin can hit it directly

### Feature flags found in the JS
- `BETA_NEW_BILLING`: toggleable via user settings
- `INTERNAL_DEBUG`: seems only enabled in staging

### Weird observations
- `/api/v1/search` returns 503 on some payloads, dig in
- The JS on `/settings/billing` references an undocumented `/api/v1/admin/refund` endpoint

## Current findings

### 01. IDOR on GET /api/v1/organizations/:id/invoices
- Status: confirmed locally, not yet written up
- Confidence: 80%
- Potential impact: read other orgs' invoices (high)
- Next: reproduce cleanly, record a video, write the report

### 02. Race condition on POST /api/v1/coupons/apply
- Status: suspected
- Confidence: 40%
- Next: test with Turbo Intruder, 50 parallel requests

## Dismissed findings

Log here what I tried that didn't work, and why. Otherwise in 3 days I re-test the same thing for nothing.

- `PATCH /api/v1/users/:id/email`: direct IDOR, IDOR via org_id, race condition. Three angles tested, protection solid.
- `search` param of `/api/v1/search`: sanitization OK, no injection.

## What I've learned about the app

Free-form notes on the architecture, weird behaviors, hypotheses.

- Authz middleware is very solid on the `Organization` model, but not consistently applied on `Invoice` and `AuditLog` resources.
- The frontend checks feature flags client-side. Some backend endpoints don't enforce the flag server-side.
- Uploads go through S3 pre-signed URLs. Check bucket isolation.

## Reporting conventions

- Template: skill `report-writer`
- Short Loom video when possible
- Severity justified by a business scenario, not just CVSS

What matters is that Claude has everything it needs to pick up where you left off. The more up-to-date this file is, the more efficient your sessions. Same for you when you come back in three weeks.

JS recon

Modern JS files are a goldmine:

  • Undocumented API endpoints
  • Hidden admin routes
  • API keys hanging around (Stripe, Algolia, Mapbox, etc.)
  • Business logic exposed client-side
  • Feature flags revealing non-public features

Ask Claude to download everything, beautify, and hand back a structured list: endpoints, routes, potential secrets, suspicious behaviors. Then you dig into whatever looks juicy.
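A hedged sketch of that first pass, the kind of thing I'd delegate: crude regexes over a beautified bundle to surface endpoints and obvious key formats. The patterns are illustrative, not exhaustive:

```python
import re

# Illustrative patterns only: API paths plus a few well-known key formats
ENDPOINT_RE = re.compile(r'["\'](/api/[A-Za-z0-9_\-/:.{}]+)["\']')
SECRET_RES = {
    "stripe_live_key": re.compile(r"sk_live_[A-Za-z0-9]{10,}"),
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_js(source: str) -> dict:
    """Return deduplicated endpoints and suspected secrets from a JS bundle."""
    findings = {
        "endpoints": sorted(set(ENDPOINT_RE.findall(source))),
        "secrets": {},
    }
    for name, pattern in SECRET_RES.items():
        hits = sorted(set(pattern.findall(source)))
        if hits:
            findings["secrets"][name] = hits
    return findings
```

Feed it every downloaded bundle, then hand the endpoint list back to Claude for prioritization. Claude's own pass usually catches things regexes miss (business logic, feature flags in context), so treat this as the floor, not the ceiling.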

DNS and infra recon

  • Subdomains (amass, subfinder, passive sources)
  • Open ports on interesting hosts
  • SSL certificates (crt.sh often reveals internal subdomains)
  • Tech stack (Wappalyzer, HTTP headers)
  • Cloud provider (often visible through CNAMEs)

No need to do it all by hand. This is exactly the kind of task you delegate to a Claude-orchestrated tool chain.
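For the certificate angle, crt.sh exposes a JSON output (`https://crt.sh/?q=%25.example.com&output=json`). A small sketch of turning that response into a clean subdomain list; the HTTP call itself is left to you or to your tool chain:

```python
import json

def subdomains_from_crtsh(raw_json: str, root: str) -> list[str]:
    """Extract unique in-scope hostnames from a crt.sh JSON response.

    A certificate entry often lists several names (name_value is
    newline-separated), including wildcards, which we strip.
    """
    names = set()
    for entry in json.loads(raw_json):
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower().lstrip("*.")
            if name.endswith(root):
                names.add(name)
    return sorted(names)
```

Pipe the result into httpx or your port scanner, and diff it against yesterday's list for your monitoring.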

Recon on past reports

HackerOne disclosed, Bugcrowd disclosure, Twitter, personal writeups from known hunters. For each program, look at:

  • Which bug types were found?
  • On which recurring endpoints?
  • How did they handle the fixes?

Tip: bugs often come back at the same spot after a half-baked fix. If you see 3 IDORs fixed on the same endpoint in the disclosed reports, go check, it's probably still broken from another angle.

Clean setup before the hunt

Before you start hunting, everything should be ready:

  • Proxy configured, cert installed
  • Accounts created (ideally 2 normal users and 1 admin if possible)
  • MCPs plugged in and tested
  • Project folder initialized with CLAUDE.md
  • Log file created

The hunt

TL;DR. Never ask Claude for bugs, ask for leads. Systematically challenge its findings, 80% fall. Verify everything by hand before touching a report.

This is where it happens. It's also where people do crazy stuff with AI.

Run multiple sessions in parallel

Don't stay on a single monolithic session. Fire up several agents with different angles:

  • One session with a custom "IDOR hunt" skill that knows what to test
  • One session with an "auth bypass" skill focused on login and SSO flows
  • One raw session, no skill, exploring freely, which sometimes surprises you

You get three complementary angles in parallel. Custom skills are well worth the time you put into them. A well-built skill means a part of the job is done right.

Keep a log

Everything. What Claude finds, what you test, what you dismiss, and especially why you dismiss it. Otherwise, in 3 days, you're redoing the same tests on the same endpoint. The log saves you dozens of hours on a program that runs for weeks.

Simple format: one dated markdown file per session. Bullet list, not a novel.
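A sketch of what one session's file can look like; every entry here is invented for illustration, shape it however you like, the point is dated, scannable bullets:

```markdown
# 2025-01-14 — session 3 — acme program

## Tested
- `PUT /api/v1/members/:id` — role tampering (Claude session 2): 403 everywhere, middleware solid
- Coupon stacking at checkout: blocked server-side

## Dismissed (and why)
- IDOR on `/api/v1/avatars/:id`: IDs are UUIDv4, not enumerable

## To dig next
- `/api/v1/export` takes a `template` param, smells like server-side rendering
```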

Watch your context

Beyond 50% context used, Claude starts forgetting things from early in the conversation and hallucinating. When you approach the limit, either compact (clean summary and reset) or start a new session with a clear brief of the current state.

Five clean sessions beat one 3-hour session that drifts.

Playwright for logins

Instead of struggling to explain how to log in ("click here, wait for the 2FA, type this"), spin up Playwright and let it do the work. Then you observe what the app does: requests, redirects, tokens exchanged. Often, the bugs are in that flow itself.

Ask for leads, not bugs

Huge efficiency difference:

  • Bad prompt: "Find me a vuln in this app"
  • Good prompt: "Give me 10 suspicious spots to dig into in this app, and explain why"

You turn Claude into an assistant that clears the terrain. You decide where to dig. You keep the strategy, you delegate the mass analysis.

Here's the kind of prompt I actually use, adapt to your program:

Context:
- Program: [program name]
- Target: see @CLAUDE.md for the full scope
- My profile: logged in as a standard user (account A, org 1234),
  account B available in another org for cross-tenant tests

Goal:
Give me 10 priority spots to dig into in this app, ordered by
likelihood of an exploitable vuln. For each lead:
- The endpoint or feature involved
- The suspected vuln type
- Why you think it's suspicious (observed pattern, abnormal behavior,
  broken convention, etc.)
- The fastest way to validate or rule out the lead

Constraints:
- No generic suggestions like "test for classic XSS". I want leads
  specific to this app only.
- Prioritize business logic bugs, access control, and complex chains.
  Scanners already pick up the rest.
- If you lack context on part of the app, say so rather than making
  things up.

You walk away with 10 structured leads you can tackle one by one, knowing exactly what to test and why. The rest of your time goes into digging, not hunting for a starting point.

Concretely, this let me tackle big programs that intimidated me a bit when I was on my own. Pointing Claude at a huge scope and having it spit out 10 priority leads unblocks you mentally. On several programs like that, I got really interesting leads and a few solid bugs I probably wouldn't have dug into on my own.

And most importantly, it gives you ideas. You can then fire up a session to have Claude dig into specific leads itself.

Challenge its findings

Claude is overconfident. On every "I found a vuln", challenge it systematically:

  • "Are you sure it's exploitable?"
  • "What's the possible false positive here?"
  • "How do we confirm manually?"
  • "Could there be a protection you missed?"

80% of the "findings" fall at this stage. The remaining 20% are either real (the ones worth gold) or pure AI slop to throw out. Skip this step and you'll send N/As non-stop and destroy your reputation.

Verify by hand

Always. No report leaves without an end-to-end manual verification. Non-negotiable. You reproduce the bug yourself, in your browser or your proxy, step by step. If you can't, it's not a bug.

You also need to challenge the impact. Claude will often tell you it's critical, but if you think about it a bit, it often ends up being a low, or a medium at best.

Browse manually too

AI doesn't do everything. Spend time clicking through the app like a normal user. You catch things Claude misses:

  • Hidden or disabled fields
  • Weird behaviors after certain actions
  • Silent errors in the console
  • Features that unlock under specific conditions
  • Complex multi-step workflows

You then hand all that back to Claude to dig into.

Typical lead workflow

  1. Identify a lead (via Claude or manual intuition)
  2. Formulate it clearly (hypothesis and potential impact)
  3. Ask Claude to dig, run tests
  4. Challenge the results
  5. Verify by hand
  6. If confirmed: clean POC and report
  7. If invalidated: log it so you don't come back

Reduce friction

Every step that slows you down is a step to automate, script, or eliminate. Hunting is an endurance sport. Every drop of energy you burn on setup or repetition is energy you don't have for creativity. And creativity is what separates a good hunter from an average one.

Reporting and communication

TL;DR. A clear, short report sells the bug. Verbose or AI slop and you lose half the bounty. Video, screenshots, numbered steps, honest severity.

This is where many hunters lose their bounties, or watch them get slashed.

Verify the finding

One more time. Before writing a single line of the report. If you can't reproduce it yourself, step by step, in your browser, it's not ready.

Clean, clear POC

One or two requests max. Numbered steps. A triager who's never seen the app should be able to reproduce it in under 5 minutes.

Standard format:

  1. Log in with account A
  2. Do X action, intercept the request
  3. Change parameter X to value Y
  4. Resend the request
  5. Observe: account B is compromised

No useless details. Don't tell your life story.
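When it helps, the numbered steps translate into a self-checking script you can attach. A sketch for an invoice IDOR like the one used as an example later in this guide, stdlib only, with a hypothetical endpoint; the fetch function is injectable so the logic is easy to rehearse before a live run:

```python
import urllib.request
import urllib.error

def http_status(url: str, cookie: str) -> int:
    """Fetch a URL with a session cookie and return the HTTP status code."""
    req = urllib.request.Request(url, headers={"Cookie": cookie})
    try:
        return urllib.request.urlopen(req).status
    except urllib.error.HTTPError as e:
        return e.code

def idor_confirmed(fetch, base, cookie_a, own_org, victim_org) -> bool:
    """Steps 2-4 of the repro: same session, own org then a foreign org.

    `fetch(url, cookie)` returns a status code; pass `http_status` for a
    live run, or a stub while rehearsing.
    """
    own = fetch(f"{base}/api/v1/organizations/{own_org}/invoices", cookie_a)
    other = fetch(f"{base}/api/v1/organizations/{victim_org}/invoices", cookie_a)
    # Bug confirmed only if the foreign org's invoices come back 200 too
    return own == 200 and other == 200
```

On the real target: `idor_confirmed(http_status, "https://app.example.com", "session=...", "1234", "5678")`. One request pair, a boolean verdict, nothing for the triager to decipher.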

Video if possible

Loom or a plain screen recording. It speeds up triage a ton because the triager sees the bug live instead of having to reproduce it.

Watch out for AI slop reports

You spot the pattern a mile away:

  • "I am pleased to report a critical vulnerability..."
  • "This vulnerability has significant implications..."
  • "Executive Summary" sections on a 200-dollar IDOR
  • Bullet lists everywhere, hyper-formal tone

Triagers hate it. You lose credibility before they've even read the bug.

If you use Claude to write (totally fine), re-read and cut everything that sounds AI: bloated phrasing, useless transitions, useless sections ("Background", "Methodology", "Recommendations" on a simple bug), polite disclaimers that serve nothing.

Keep your tone, your words, your conciseness.

Short and precise

A good report fits on one page:

  • Clear title (bug type and endpoint)
  • Impact in 1 or 2 sentences
  • Numbered steps to reproduce
  • POC (request and screenshot or video)
  • Suggested fix in 1 or 2 sentences (optional but appreciated)

Here's what my standard template looks like:

**Title**: IDOR on GET /api/v1/organizations/:id/invoices — access to other organizations' invoices

## Impact

An authenticated user can read any other organization's invoices by
changing the ID in the URL. Invoices contain company name, address,
total amount, VAT number. All SaaS customers are affected.

## Steps to reproduce

1. Log in with account A (org_id = 1234)
2. GET /api/v1/organizations/1234/invoices → OK, org A's invoices
3. Replace 1234 with 5678 (org_id of another test account)
4. GET /api/v1/organizations/5678/invoices → returns org B's invoices
5. No ownership check is performed server-side

## POC

- Loom video (45 seconds): [link]
- Annotated screenshots attached

## Suggested severity

High (CVSS 7.5). Exposure of sensitive business data (amounts,
commercial partners) across all SaaS customers. Exploitable in a
single request by a free-tier user.

Direct impact, steps reproducible in 5 minutes, POC with video, severity justified by a concrete scenario.

Honest severity

The severity you propose must be justified. If you mark "Critical" on a bug that needs 12 conditions and an admin account, the triager will downgrade and you'll look like a beginner. Be honest, propose the real severity. You build credibility long-term.

Communicating with triagers

Underrated section. How you communicate directly affects your bounties.

Don't push too early. The program announced a 5-day SLA for first response? Wait 5 days before nudging. Pinging the next day makes you look impatient and burns the relationship. And no "any update sir" after 2 days of silence either.

If they downgrade your severity, argue calmly, with facts. Describe the real exploitation scenario, the business impact. If it's justified, accept it. If you feel it's unfair, push once cleanly, then let it go. Fighting 3 weeks for 200 euros isn't worth it. Use the CVSS spec to back up your points too. Some programs don't know the spec by heart.

If they mark it N/A, ask why, calmly. Sometimes it's a misunderstanding on the repro. Sometimes it's a protection you missed. Sometimes it's just unfair. In that case, log it for next time and move on.

Build a relationship. If you report multiple bugs on the same program, the triager gets to know you. Be reliable, be clear, and you become a hunter they take seriously. It pays off on severities, on retests, on private invites.

Mindset

TL;DR. The part nobody talks about. Without passion, you'll crack. Dry spells are part of the game. Go where others don't, dig into business logic, take breaks, know when to pivot.

The thing nobody talks about enough. Probably the most important section of this guide.

Have a direction

Without a goal (money, technical, learning), you'll drift and quit after a month. Be clear on why you're doing this:

  • Make X per month?
  • Learn a specific stack?
  • Build a name in the community?
  • Find a specific bug type?

Without direction, every program looks the same and you'll flutter around.

Impostor syndrome

You'll have it. Everyone does. Top hunters have it. You'll read writeups from some insanely good hunter and tell yourself "I know nothing". You'll see 50k bounties and tell yourself "I'll never get there". That's normal, everyone goes through it, and it keeps coming back even after years. Keep going anyway. The only real impostor is the one who quits because of that feeling.

Go where others go less

The hyper-visible programs (GAFAM, ultra-famous unicorns) can be saturated. Hundreds of hunters fight for them, and yes you have a shot, but you need to think differently from the crowd. Look instead for:

  • Private programs
  • Newly opened programs
  • Public programs, but with your own angle on specific scopes
  • Technical niches (certain stacks, certain industries)
  • Unsexy B2B companies
  • Programs with weak public comms (fewer hunters on them)

No low-hanging fruit

If it was easy, it was already found. On any program active for more than 6 months, the low-hanging fruit is gone. Go where others avoid:

  • Complex workflows (multi-step, multi-role)
  • Rarely-used features (export, advanced settings, third-party integrations)
  • Business edge cases (unlikely action combinations)
  • Business logic (bugs that require understanding the domain)

Harder. Also where the big bounties are.

Duplicates are part of the game

The number-one frustration in bug bounty isn't finding nothing. It's finding a juicy bug, thinking you've landed a critical, and seeing "Duplicate" two days later. A program can take 3 months to fix a bug, and in the meantime ten other hunters stumble on it. Nothing you can do, you go home empty-handed.

It happens to me often. What I've learned: the simpler a bug is technically, the more you risk a duplicate. A single IDOR, a well-hidden XSS, multiple hunters can trip over it in one evening. On the flip side, a complex chain that combines two or three bugs for real impact is where you really stand out. The pattern is too specific for multiple hunters to converge on it at the same time.

When you see a simple bug, grab it and send it fast. When you see a pattern that could chain, take your time, dig. The biggest bounties that don't get duplicated are there.

Motivation

It goes up, it goes down. Weeks without a bug is normal. Months without a bug is normal too. Dry spells are part of the game.

To avoid cracking:

  • Work in cycles (1 program full-on for X weeks, then switch)
  • Celebrate small wins (a solid finding, even if it's not critical)
  • Keep a side project outside bug bounty so you don't put it all there

Dumb ideas

The best bugs often come from a "what if I did this absurd thing". Keep that playful side. When you tell yourself "nah that's too dumb, no way it works", try it anyway. You'll be surprised.

Intuition

After a while, you feel apps. Something seems off without you knowing why. An endpoint has a weird name. A response behaves oddly. Dig. It almost always pays off. Intuition is just your experience talking faster than your analytical brain.

Take breaks

Hunting for 12 hours non-stop guarantees you miss obvious things. Your brain needs to breathe. Go out, run, sleep, see people. Your brain works in the background during that time, and it's often in the shower that you finally see the bug.

Know when to let go

Not every program pays off. If after 2 or 3 weeks of serious hunting you have nothing out and you feel the surface is saturated, switch. Grinding out of pride is the worst strategy. Good hunters know when to pivot.

Building for the long run

TL;DR. A hunter who writes their own tools compounds. One who's been typing the same commands for 3 years plateaus. Claude skills for execution, Obsidian for memory, monitoring for passive bounties, public profile for invitations.

You pick well, you set up clean, you hunt well, you report well, you hold mentally. Great. Now, how do you make all of it compound over the years instead of starting from zero on each program?

That's what turns the average hunter into one who scales: the tools you write for yourself, the knowledge you accumulate, the monitoring that brings bounties without you, and the public presence that attracts invitations.

Claude skills

A Claude skill is a folder with a SKILL.md file containing specialized instructions. Claude loads it automatically when the task matches, or you can invoke it explicitly. The official docs are here: docs.claude.com/en/docs/agents-and-tools/agent-skills.

Why you want to write them for bug bounty:

  • You codify your methodology for a given vuln type
  • Claude applies it systematically, without missing a step
  • You iterate on the skill, not on each conversation
  • Findings become more consistent

Think of it like training a junior: once, properly, in writing. After that, they apply.

Anatomy of a skill

The SKILL.md file starts with YAML frontmatter, then markdown instructions:

---
name: idor-hunt
description: Systematically tests IDORs on a web app. Use when the user wants to hunt IDORs, test authorization, or analyze an endpoint that manipulates user resources (/users/:id, /orders/:id, etc.).
---

Rule number 1: the description is everything. It decides whether Claude loads the skill at the right moment. Be specific, include the keywords you use when you want this skill, give examples of triggers.

Rule number 2: instructions in the body. You write your methodology as you'd explain it to a junior. Clear steps, checks not to forget, classic traps.

Rule number 3: additional files. You can add referenced files (wordlists, POC templates, disclosed finding examples). Claude only loads them when needed, so you can be exhaustive without bloating the context.
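Putting the three rules together, a full SKILL.md might look like this. This is a sketch: the steps, headings, and the wordlist path are illustrative, not a canonical methodology.

```markdown
---
name: idor-hunt
description: Systematically tests IDORs on a web app. Use when the user wants
  to hunt IDORs, test authorization, or analyze an endpoint that manipulates
  user resources (/users/:id, /orders/:id, etc.).
---

# IDOR hunting

## Steps
1. List every endpoint that takes a user-owned identifier (path, query, body).
2. For each one, replay the request with a second account's session.
3. Diff the two responses: an identical body on a user-owned resource is a finding.

## Classic traps
- Numeric IDs: test adjacent values, not just +1.
- UUIDs: check whether they leak elsewhere (JS bundles, other responses).

## References
- wordlists/id-params.txt for common identifier parameter names.
```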

Skills to write first

Start with the vulns you hunt the most. My priorities:

  • idor-hunt: full IDOR methodology (ID enumeration, testing across roles, diffing responses)
  • auth-bypass: login flows, SSO, JWT, session fixation, password reset
  • ssrf-hunt: detecting URL-fetching endpoints, payloads per cloud environment (AWS, GCP, Azure)
  • js-recon: how to analyze a JS bundle, what to look for, secret patterns
  • report-writer: your report template, your tone, your structure, to prevent Claude from producing AI slop when you ask it to write things up

Each skill is 1 to 2 hours of work the first time. But once written, it serves you on every program.
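To make the idor-hunt example concrete, its core check is tiny: fetch the same resource with two accounts and diff. A sketch, with local fixtures standing in for the two authenticated requests (in real hunting they'd be curl calls with each account's session cookie, and the target URL is hypothetical):

```shell
# Two responses for GET /orders/1337, one per account. In real life:
#   curl -s -H "Cookie: session=$TOKEN_A" https://target/orders/1337
WORK="${TMPDIR:-/tmp}/idor-demo"
rm -rf "$WORK" && mkdir -p "$WORK" && cd "$WORK"

cat > resp_victim.json <<'EOF'
{"order":1337,"owner":"victim","total":49.90}
EOF
cat > resp_attacker.json <<'EOF'
{"order":1337,"owner":"victim","total":49.90}
EOF

# Identical bodies for two different sessions on a user-owned resource
# is the classic IDOR signal.
if diff -q resp_victim.json resp_attacker.json >/dev/null; then
  echo "IDOR candidate: attacker session reads victim's order"
fi
```

The point of putting this in a skill rather than a script is that Claude fills in the boring part (enumerating which endpoints to run it against) and you keep the judgment call on what the diff means.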

Best practices

  • Keep SKILL.md short (max 500 lines). Details go in separate files referenced from the skill.
  • Actionable descriptions. Not "skill for doing bug bounty" but "tests IDORs by enumerating IDs, comparing responses across roles, and hunting mass assignments, use when...".
  • Add 2 or 3 concrete examples inside the skill. Activation reliability goes from ~70% to ~90% with examples.
  • Progressive disclosure. Everything heavy (long docs, wordlists, templates) goes in separate files that Claude loads on demand.
  • Iterate by testing. Write, test on a real program, adjust. A skill is never finished.

Anthropic's official skill-creator can help you get started: describe what you want, it generates the structure. Use it as a base, then customize.

Obsidian as a knowledge base

Obsidian is just a markdown editor with links between files. Nothing magical. But for a hunter, it's perfect:

  • Everything in .md: portable, versionable, usable by Claude Code directly
  • Bidirectional links: you find the connections between your notes without searching
  • Instant full-text search: type "JWT" and see all your notes on it
  • No lock-in: if you want to switch tools, your files are just .md in a folder

Vault structure

Nothing sacred, adapt to your brain. Mine looks like this:

/vault
├── /programs
│   ├── program-A
│   │   ├── CLAUDE.md
│   │   ├── recon.md
│   │   ├── findings.md
│   │   └── notes.md
│   └── program-B
├── /vulns
│   ├── idor.md
│   ├── ssrf.md
│   ├── jwt.md
│   └── race-conditions.md
├── /techniques
│   ├── oauth-flows.md
│   ├── graphql-hunting.md
│   └── js-analysis.md
├── /writeups
│   └── [notes on writeups I've read]
├── /tools
│   ├── caido-tips.md
│   └── claude-prompts.md
└── /skills
    ├── idor-hunt/SKILL.md
    └── auth-bypass/SKILL.md

Connecting Obsidian and Claude Code

The Obsidian vault is just a folder of markdown. So:

  • Claude Code can read your vault. Point it at the folder, it gets access to all your notes.
  • Your program's CLAUDE.md can reference your vuln notes. For example: "For IDOR analysis, see ~/vault/vulns/idor.md".
  • Your skills can draw from your writeups. An "ssrf-hunt" skill can reference ~/vault/writeups/ssrf/ to see how you handled similar cases.
  • You keep the knowledge in one single place. You update a JWT note, and all your skills and programs benefit.

The typical flow: you learn something on a program, you note it in /vulns/ or /techniques/. Next time Claude works on a similar case, it has access to that knowledge.

Version your vault

Git, always. Private GitHub, commit regularly. Bonus: you see your knowledge evolve over time, and you sync easily across machines.
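The habit itself is a few lines, safe to drop in a cron. A sketch, demoed in a throwaway directory (substitute your actual vault path, your identity, and a private remote):

```shell
# Demo vault in a temp dir; in real life VAULT is your Obsidian folder.
VAULT="${TMPDIR:-/tmp}/vault-demo"
rm -rf "$VAULT" && mkdir -p "$VAULT" && cd "$VAULT"
git init -q
git config user.email "you@example.com" && git config user.name "you"

echo "# JWT notes: alg=none, kid injection, weak secrets" > jwt.md

# Commit only when something actually changed, so cron runs stay quiet.
git add -A
git diff --cached --quiet || git commit -q -m "vault sync $(date +%F)"
git log --oneline
```

Add a `git push` at the end once a private remote is configured and the sync across machines comes for free.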

Monitoring and retesting

Bounties are sleeping everywhere. Most hunters do a one-shot on a program, ship their findings, and move on. Mistake.

Programs change constantly: new endpoints, new features, redeploys, infra changes. The hunter who monitors is front-row when a new surface appears.

Why nobody does it

Monitoring is annoying to set up, invisible day to day, and results come with a delay. That's exactly why it pays: the entry friction filters out 90% of the competition.

Once set up, you get alerts without doing anything. You check, you see if it's interesting, you hunt.

What to monitor

JS endpoints. JS bundles change with every release. Diff the versions, you see new endpoints appear, new admin routes, feature flags flipping. Download the JS periodically, diff against the previous version, alert on changes.
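The download-and-diff loop fits in a few lines with git as the storage. A sketch, using a local write to simulate the remote bundle (swap the `printf` lines for a curl of the real URL; the endpoint names are made up):

```shell
# In real life the bundle comes from:
#   curl -s https://target/static/app.js -o app.js
WORK="${TMPDIR:-/tmp}/js-monitor-demo"
rm -rf "$WORK" && mkdir -p "$WORK" && cd "$WORK"
git init -q && git config user.email "you@example.com" && git config user.name "you"

printf 'fetch("/api/v1/users")\n' > app.js
git add app.js && git commit -q -m "baseline"

# Next cron run: a release shipped a new endpoint.
printf 'fetch("/api/v1/users")\nfetch("/api/v1/admin/export")\n' > app.js
git add app.js
if ! git diff --cached --quiet; then
  git commit -q -m "app.js changed $(date +%F)"
  echo "ALERT: JS changed"   # swap for your Telegram/Discord webhook call
  git diff HEAD~1 -- app.js | grep '^+fetch'
fi
```

The git history doubles as your archive: every past version of the bundle stays greppable.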

Sensitive endpoint responses. Found an endpoint returning suspicious behavior? Monitor it. If the response changes (new field, new HTTP code), you get alerted.

Fixes on your past bugs. A program fixed your bug? Tag it, come back in 3 months to test if the fix holds. Regressions are frequent, especially on business logic bugs. A fix bypass is often easier than finding the original bug.

Infra changes. New cloud provider, new tech detectable via headers, new CDN. Every migration introduces temporary bugs.

The program scope. Scopes expand regularly. An alert or a scraper on the program page, and you're notified when a new domain becomes in-scope.

Minimal setup

A VPS, a cron, and a small script that sends you a notification (Telegram, Discord, email) when something changes. No need to reinvent the wheel:

  • axiom or shot-scraper for periodic screenshots
  • subfinder + httpx in cron for new live subdomains
  • nuclei scheduled to re-scan targets
  • git to version downloaded JS and see the diffs

And with Claude you can put together a small automation super easily now.
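One pattern covers most of these monitors: run a command, diff its output against the last run, notify on change. A generic wrapper sketch (`watchcmd` and the state path are my naming, not a standard tool; the `echo` is where your webhook call goes):

```shell
# Run any command, compare its output to the previous run, alert on change.
watchcmd() {
  name="$1"; shift
  state="${TMPDIR:-/tmp}/monitor/$name"
  mkdir -p "$(dirname "$state")"
  "$@" > "$state.new" 2>/dev/null
  if [ -f "$state" ] && ! diff -q "$state" "$state.new" >/dev/null; then
    echo "CHANGED: $name"             # swap for your webhook notification
    diff "$state" "$state.new" | head -20
  fi
  mv "$state.new" "$state"
}

# In cron you'd wrap your real pipelines, e.g.:
#   watchcmd subs-target sh -c 'subfinder -silent -d target.example | httpx -silent'
# Demo with a command whose output changes between runs:
watchcmd demo echo "v1"
watchcmd demo echo "v2"
```

First run stores the baseline silently; the second prints the change. Point it at subdomains, JS, headers, scope pages, whatever.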

Structured retesting

Regressions are super frequent. Tag your fixed bugs, schedule a retest date, and work through them methodically. A fix bypass can be reported as a new finding, often at the same severity as the original.

The long-term edge

A hunter who has monitored a program for 2 years knows the app better than some of the company's own employees. They know the history, the development patterns, the probable teams behind a given feature. That knowledge compounds, and a new hunter showing up can't catch up.

That's why top hunters stay on the same programs for years. Not because they're lazy, but because they have an asymmetric edge they won't give up.

Public profile

Yes it's annoying. Yes it's important. No, you don't need to become an influencer.

Why it matters

In bug bounty, your resume is your public profile. When a private program is looking to invite, they check:

  • Your HackerOne (or similar) profile (signal, severities, number of reports)
  • Your Twitter / X (are you active, technical, reliable)
  • Your blog (how you think, what interesting things you've found)

Without a public presence, you stay invisible, except via your own platform stats. With a public presence, the invites come to you. That's how hunters move into the interesting circles.

Second reason: the bug bounty community is one of the best tech networks out there. People help each other, share leads, challenge each other on complex findings. You miss all of that if you're solo.

Twitter / X, the bare minimum

It's the bug bounty platform. Not LinkedIn. Not some obscure other app. X, despite all its flaws.

What works:

  • Sharing your writeups when you're allowed to (after disclosure)
  • Commenting on sec news with a technical angle
  • Asking technical questions, the community often answers well
  • Sharing small tricks that saved you time
  • Engaging with other hunters, for real, not just liking

What doesn't:

  • Recycled generic "10 tips for bug bounty" threads
  • Low-value beg bounty tips
  • AI slop (posts that reek of ChatGPT a mile away)

You don't need to be funny or have 50k followers. You need to be identifiable as a serious hunter.

The blog

Twitter is for visibility, the blog is for depth. A good writeup on an interesting bug puts you on the map and shows what you can do.

What works on a bug bounty blog:

  • Writeups of disclosed bugs (with context, methodology, what you learned)
  • Methodology posts (how you approach this kind of app, this kind of vuln)
  • Retrospectives
  • Tools and scripts you've written and share

For me, this blog is what brought me the most opportunities. Whether it's invites or requests to write articles elsewhere, it helps a lot.

Talks and conferences

The next level up. Speaking at a conference (Le Hack, nullcon, DEF CON, etc.) puts you in a different category. It's a real investment (prep, travel, stage fright), but it proves real knowledge of the field.

Start with local meetups. The jump is much more accessible than you'd think. And many conferences run short rump sessions: low stakes, good practice.

Resources

People to follow, no bullshit.

  • Critical Thinking Bug Bounty Podcast: the best bug bounty podcast right now, listen to every episode.
  • Monke: excellent methodology content. His guide still holds up today and his blog is super interesting.
  • Stök: YouTube. More vibes and inspiration than pure technique, but useful for motivation.
  • NahamSec: Twitch, YouTube. Solid educational content, especially for intermediates.
  • X / Twitter: the real-time info feed. Follow active hunters, not security influencers. Watch who they retweet, build your timeline. You can look at who I follow or who the hunters you like follow, to find others worth tracking.
  • Blogs of hunters who post writeups regularly, like mizu.re or worty.fr. Reading writeups, CTF ones included, is one of the best ways to learn.

If you read French:

  • Fransosiche: great cybersec content on YouTube.
  • Laluka: really good lives and technical cybersec content, perfect for keeping up with what's happening: X and linktr.ee/TheLaluka.

Docs to bookmark: