Big Tools / Frameworks

I'd like to use free tools, but many of the best tools have huge price-tags. The following tools are free unless indicated otherwise.

There are a zillion tools available, and many of them are GUIs or frameworks that call other tools. Many people have just written scripts or GUIs that duplicate other efforts and don't add much value. Some tools have multiple versions, with the really good functionality available only in the very expensive "Pro" version. Some tools were hot 10 years ago and haven't been maintained since then.

[From this point, everything assumes Linux as your test-driver machine.]

The main classes of big tools of interest for web-app testing (I think):

  • Browser-proxy and app-logic GUIs.

  • Vuln-scanners with exploits and payloads.

  • Automated testing drivers.

There is lots of overlap, and tools can import/export among each other.

Be careful about just running "try everything" in some big tool, or some script that calls lots of tools. You may hammer your target with port-scanning, brute-forcing, and attack traffic, setting off alerts at the target and at your ISP, possibly causing a DoS at the target, and getting yourself in trouble.

Browser-proxy and app-logic GUIs

  • Look at the developer tools / debugger in your browser.

  • Burp:

    GUI app with intercepting-proxy, mainly does web-app testing.
    Can import nmap scan output into it.

    PortSwigger's "Burp Suite Editions"
    Professional version costs €349 per year.

    The Free edition has the proxy and good traffic-recording, repeating, modification, and parameter brute-forcing/fuzzing, but lacks the Scanner (a great module for trying classes of app vulns/exploits). Also, the Free edition is throttled, so it runs iterations much more slowly.

    ZeroSec's "Learning the Ropes 101: Burp Suite Intro"
    InfoSec Institute's "Quick and Dirty BurpSuite Tutorial"
    PortSwigger's "The Burp Methodology"
    Jean Fleury's "Burp Suite For Beginners"
    Bugcrowd University - Introduction to Burp Suite (video)
    PortSwigger's "Burp Suite Support Center"
    The Defalt's "Bypass File Upload Restrictions Using Burp Suite"
    OccupyTheWeb's "How to Hack Web Apps, Part 3 (Web-Based Authentication)"
    OccupyTheWeb's "How to Hack Web Apps, Part 4 (Hacking Form Authentication with Burp Suite)"
    OccupyTheWeb's "How to Crack Online Web Form Passwords with THC-Hydra & Burp Suite"
    Ryan Wendel's "Burp Suite Tips - Volume 1"
    Articles in Hacking Articles' "Web Penetration Testing"

    Yeah Hub's "19 Most Useful Plugins for Burp Suite"
    Offensive Security by Automation's "Worthwhile BurpSuite Plugins"
    Regala / burp-scope-monitor
    Trust Foundry's "The Top 8 Burp Suite Extensions That I Use to Hack Web Sites"

    HUNT Suite Proxy Extensions (indicates parameters where you should look manually for bugs)
    Do Son's "HUNT Burp Suite Extension"
    Ranjith's "HUNT - Burp Suite Pro/Free and OWASP ZAP Extensions"

    PortSwigger's "Burp Collaborator"
    integrity-sa / burpcollaborator-docker

    Backslash Powered Scanner (Burp extension):
    PortSwigger / backslash-powered-scanner
    James Kettle's "Backslash Powered Scanning: hunting unknown vulnerability classes"

    Get lists from danielmiessler / SecLists and fuzzdb-project / fuzzdb and plug them into the Intruder module.
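A minimal sketch of prepping a payload list before loading it into Intruder; the file names here are illustrative stand-ins for real SecLists / fuzzdb paths:

```shell
# Illustrative stand-ins for real SecLists / fuzzdb wordlist files.
printf 'admin\nroot\n' > /tmp/list1.txt
printf 'root\nguest\n' > /tmp/list2.txt
# Merge and deduplicate, then load the result into Intruder.
sort -u /tmp/list1.txt /tmp/list2.txt > /tmp/intruder-payloads.txt
wc -l < /tmp/intruder-payloads.txt   # prints 3
```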

    Installed Burp Community Edition on my Mint 19.1 system, using
    PortSwigger's "Getting started with Burp Suite":
    # Initial download is a 95 MB ".sh" script file !
    sudo bash ./
    # says it's taking about 530 MB
    # installing into /opt/BurpSuiteCommunity
    # creating symlinks in /usr/local/bin
    # go to Start menu and find Burp Suite Community Edition
    # Get Welcome screen that says you can only do Temporary projects with this edition
    # Click through next screen, accepting default config values
    # Get to main screen - Temporary Project
    # Don't exit the app; keep it running while setting up the browser
    # in following steps:
    # In Firefox browser, go to Preferences / General / Network Settings.
    # Select Manual Proxy Configuration, HTTP Proxy, Port 8080,
    # and Use this proxy server for all protocols.
    # Click "Intercept is On" button in app to make
    # it change to "Intercept is off".
    # Install Burp's trusted root certificate in the browser, so the proxy can
    # generate a new cert for each SSL session/site it intercepts
    Go to http://burp/ in browser
    # Request should be intercepted by Burp application
    # See a page saying "Welcome to Burp Suite Community Edition."
    # If instead you get some domain-parking service, the proxy is not working.
    # Click on "CA Certificate" in upper-right of page.
    # Save certificate file to disk.
    # In Firefox browser, go to Preferences / Privacy & Security /
    #		Certificates / View Certificates / Authorities
    # Click Import.  Choose the file you saved to disk.
    # Check "Trust this authority to identify websites".
    # Normal browsing works with app running, proxy used, intercept off.
    # See "HTTP History" tab to look at all the traffic that went through.
    # Proxy works fine with Windscribe VPN running too.
    # Some web sites react strangely to this setting, turn it off:
    # In ZAP, go to Tools / Options / Local Proxy and turn off "Remove Unsupported Encodings"
    # I don't see anything for overall project management: OSINT, reporting, etc.
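The CA certificate from the steps above can also be reused by CLI tools. Burp exports its CA in DER format, while curl's --cacert wants PEM; a sketch of the conversion, using a throwaway self-signed cert here to stand in for Burp's real cacert.der:

```shell
# Throwaway self-signed cert standing in for Burp's exported cacert.der.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-ca.pem -days 1 2>/dev/null
openssl x509 -in /tmp/demo-ca.pem -outform der -out /tmp/cacert.der
# The conversion you'd run on Burp's real exported certificate:
openssl x509 -inform der -in /tmp/cacert.der -out /tmp/burp-ca.pem
# With Burp running, curl can then go through the intercepting proxy:
# curl --proxy http://127.0.0.1:8080 --cacert /tmp/burp-ca.pem https://example.com/
```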

    It's good practice to set the Target - Scope before connecting to the proxy and the network. You don't want to hit anything unintentionally.

    To run Burp on your computer and have Wi-Fi traffic from your phone go through Burp, set your phone to use an HTTP proxy, with the Server address set to your computer's address and the Port set to 8080 (as usual), and set Burp's Proxy Listener to "All Interfaces" instead of just "Loopback".
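For the phone's Server address you need your computer's LAN IP, not 127.0.0.1. One way to list candidate addresses (a sketch; interface names and addresses vary per machine):

```shell
# List IPv4 addresses on this machine; pick the LAN one (often 192.168.x.x).
ip -4 addr show | awk '/inet /{print $2}'
```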

    Later decided to remove Burp and concentrate on OWASP ZAP instead:
    sudo /opt/BurpSuiteCommunity/uninstall

  • green check-mark  OWASP ZAP (zaproxy; Zed Attack Proxy):

    GUI app with intercepting-proxy, mainly does web-app testing.
    Has a limited port-scanner; I think it can't import nmap output.
    I don't see any connection to Metasploit.
    Has some kind of Selenium component in it, so it can drive a browser and do AJAX spidering.
    Uses dirbuster code in the "forced browsing" component to find pages or files that don't appear anywhere in the application pages.
    Uses Wappalyzer.
    Uses sqlmap core.
    Has a CMS scanner.


    OWASP Zed Attack Proxy Project
    HUNT Suite Proxy Extensions

    zaproxy / zaproxy Wiki
    reddit's /r/OWASP
    OWASP ZAP User Group
    zaproxy's "FAQmobile"

    Kali Linux's "ZAP -- Most Used Web Vulnerability Scanner"
    Devopedia's "OWASP ZAP"
    zaproxy / zap-core-help (to run ZAP from CLI)
    Grunny / zap-cli (to control ZAP from CLI)

    Has a scripting language (Zest) based on JSON. Used to send bug-reproducing scripts to companies (if you wish), define scan-rules, more.

    Understands concepts of:
    • Context: a set of URLs, usually representing one web-app.
    • Session Management Method: how web sessions are handled by the server (cookie-based, HTTP authentication, query-param based, etc).
    • Authentication Method: how a new session is established.
    • User Management: relating users to authorization for operations.
    See "Session Properties" dialog (icon for it on toolbar).
    And fields in that dialog can be filled in automatically after you log in through the browser; select the recorded HTTP request and send it to the dialog.
    There are regex properties to tell app when a state is logged-in or logged-out.
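As a sketch of what those logged-in/logged-out indicator regexes match, tried by hand against a saved response (the pattern here is a common but hypothetical example, not ZAP's default):

```shell
# A typical "logged-in indicator": presence of a logout link in the response.
printf '<html><body><a href="/logout">Sign out</a></body></html>' > /tmp/resp.html
if grep -Eq 'href="/logout"' /tmp/resp.html; then echo "state: logged in"; fi
```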

    Installed OWASP ZAP on my Mint 19.1 system:
    sudo sh -c "echo 'deb /' > /etc/apt/sources.list.d/home:cabelo.list"
    wget -nv -O Release.key
    sudo apt-key add - < Release.key
    sudo apt update
    sudo apt install owasp-zap
    # launch from CLI
    # In Firefox browser, go to Preferences / General / Network Settings.
    # Select Manual Proxy Configuration, HTTP Proxy localhost, Port 8080,
    # and Use this proxy server for all protocols.
    # In OWASP ZAP, go to Tools / Options / Dynamic SSL Certificates.
    # You should see a certificate's text in the big text pane.
    # Click on Save and save a ".cer" file to disk.
    # earlier, I created a profile called "testing" for Firefox, so run:
    firefox -P testing -no-remote
    # In Firefox browser, go to Preferences / Privacy & Security /
    #		Certificates / View Certificates / Authorities
    # Click Import.  Choose the file you saved to disk.
    # Check "Trust this authority" for everything.

    Installed OWASP ZAP on my Kubuntu 20.10 system:
    flatpak install ZAP
    # Couldn't get it to work: Quick Start / Manual wouldn't launch any browser,
    # and I couldn't figure out how to set the cert and proxy in
    # any browser except Firefox.  I don't want to use Firefox, since that's
    # my main browser.
    snap install --classic zaproxy
    # Same problems.
    # launch from GUI
    # update add-ons
    # go to Safe mode
    # Unable to change proxy settings inside any chromium-based browser
    # installed as snap or flatpak.
    # Firefox is installed as deb, but it's my main browser, don't want to mess with it.
    sudo apt install midori
    # In Midori browser, go to Hamburger / Preferences / Network.
    # Select HTTP Proxy Server, HTTP Proxy http://localhost, Port 8080.
    # In OWASP ZAP, go to Tools / Options / Dynamic SSL Certificates.
    # You should see a certificate's text in the big text pane.
    # Click on Save and save a ".cer" file to disk.
    # Use a chromium browser to save that certificate into the system store.
    # Settings / Privacy and security / Security / Manage certificates / Authorities / Import / All Files

    Getting started:
    1. Launch ZAP ("owasp-zap" on CLI, or through GUI). In upper-left corner, select Safe Mode.

    2. Launch the browser. Make sure it's using the ZAP proxy.

    3. Don't type a URL into ZAP's "Quick Start" tab; ZAP would start crawling the site right away.

    4. Go to the browser and browse the target site a bit.

    5. Then go to ZAP and look in the Sites and History tabs.

    6. ZAP analyzes all the requests and responses it records ("passive scanning"), and reports any potential issues in the Alerts tab.

      Select a request in the Alerts tab, and see the request or response headers and data. Double-click on the request in the Alerts tab to get an explanation of the issue.

    7. Set a limit on things by adding the specific target site to the "Default Context". Delete any traffic to other sites out of the Sites tab.

    8. Each "Context" is supposed to represent a web-application.

    9. Once you've set the Context, you can change from "Safe" mode to "Protected" mode. This will let you attack anything in the Context.

      Safe == can't do anything dangerous,
      Protected == can do dangerous things only in Context/Scope,
      Standard == can do anything to anything,
      Attack == automatically attacks any new items that appear in Context/Scope.

    10. At the right end of the tabs list that starts with Alerts, click on the green "+" symbol. Click on Spider, and click on New Scan.

      It should spider only the site you specified, finding all the pages. Maybe click the Stop icon after it has spidered a dozen pages. Look at Alerts again.

    11. If you have permission to attack the site, find Active Scan and click New Scan. A dialog will appear to define the scope; click Select and then Default Context (where you put the site). Then start the scan. (Not sure what to do with Recurse)

      It should scan only the site you specified, trying lots of parameters. Maybe click the Stop icon after it has done a few pages. Look at Alerts again.

    12. When you get going on a real app, after manual browsing but before Active Scanning, go to Tools / Options / Anti CSRF Tokens and add any token names that might be custom. Also go to Tools / Options / Active Scan and enable "Handle anti CSRF tokens".

    Later un-installed OWASP ZAP:
    # went into my testing profile of Firefox
    # deleted OWASP proxy out of list in FoxyProxy
    # deleted OWASP's CA certificate out of certificate store
    sudo apt remove owasp-zap
    cd ~
    rm -fr .ZAP
    Installed the weekly release of OWASP ZAP:
    # downloaded weekly .zip file from
    extract from .zip to somewhere such as /usr/local/bin
    # create file /usr/local/bin/owasp-zap containing:
    cd /usr/local/bin/ZAP_D-2019-03-04
    chmod a+x   # on that file
    # launch from CLI
    # earlier, I created a profile called "testing" for Firefox, so run:
    firefox -P testing -no-remote
    # In Firefox browser, go to Preferences / General / Network Settings.
    # Select Manual Proxy Configuration, HTTP Proxy localhost, Port 8080,
    # and Use this proxy server for all protocols.
    # In OWASP ZAP, go to Tools / Options / Dynamic SSL Certificates.
    # You should see a certificate's text in the big text pane.
    # Click on Save and save a ".cer" file to disk.
    # In Firefox browser, go to Preferences / Privacy & Security /
    #		Certificates / View Certificates / Authorities
    # Click Import.  Choose the file you saved to disk.
    # Check "Trust this authority" for everything.
    # the proxy threw some error, had to quit browser and ZAP and
    # then run again, all fine now
    # new HUD feature is annoying, in ZAP go to Tools / Options / HUD
    # to turn it off for now
    If you launch ZAP and a new weekly release is available, ZAP will tell you about it, and give you a "Download" button. Download it, but don't launch it. Quit out of ZAP. Then:
    sudo bash
    cd ~/.ZAP_D/plugin
    mv ZAP_WEEKLY_D-*.zip /usr/local/bin
    cd /usr/local/bin
    rm -fr ZAP_D-*
    unzip ZAP_WEEKLY_D-*.zip
    xed owasp-zap		# and change it to point to the new tree
    rm ZAP_WEEKLY_D-*.zip

    If menu items are greyed-out and you can't figure out how to edit a Request, check to make sure the mode is set to "Standard" or higher.

    From someone on reddit 5/2022:
    "There are certain challenges in portswigger labs that cannot be done using zap. For example, the host header injection won't work in zap due to the way zap programmed when resolving the url. SSRF labs as well because the server of the labs are not connected to the internet and only connected to the burp collaborator server."

    Cyber Army's "Authenticated Scan using OWASP-ZAP"

  • Caido:


  • Vega Vulnerability Scanner:

    Subgraph's "Vega Vulnerability Scanner"

    TokyoNeon's "Scan Websites for Potential Vulnerabilities Using Vega in Kali Linux"

  • Arachni:

    Arachni / arachni

    Mozes Cermak's "Scan Websites for Vulnerabilities with Arachni"

  • Pappy:

    Synex's "Pappy Proxy"

  • SecApps Suite from Websecurify:

    SecApps Suite

  • mitmproxy:

    MobSF / httptools

  • Telerik Fiddler:

    Telerik Fiddler

  • Pown CDB:

    pownjs / pown-cdb

Web-driving engines

+/- Non-GUI ways of driving web pages or accessing URLs:

  • A few HTTP operations, no JavaScript:
    • curl:

      "curl PAGEURL"
      "curl --head PAGEURL"
      From curl docs: "We hide HTTP/2's binary nature and convert received HTTP/2 traffic to headers in HTTP 1.1 style."
      "curl --head --http2 PAGEURL"
      "curl --head --http3 PAGEURL"
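A runnable sketch of the body-vs-headers distinction, kept offline via curl's file:// support:

```shell
printf 'hello' > /tmp/page.txt
curl -s file:///tmp/page.txt           # full body
curl -s --head file:///tmp/page.txt    # headers only (Content-Length etc.)
# Against a live server, -w can report the negotiated protocol version:
# curl -s -o /dev/null -w '%{http_version}\n' https://example.com/
```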

    • http:

      "sudo dnf install httpie"
      "http --headers"
      Apparently always does HTTP 1.1; no way to specify HTTP 2 or 3.

    • fetch:

    • wget:

      "wget --server-response"
      "wget --server-response --inet6-only"
      Apparently always does HTTP 1.1; no way to specify HTTP 2 or 3.
      Linux Shell Tips article

    • wget2:

      Bobby Borisov article
      Supports HTTP 1.1 and 2.

  • A sequence of HTTP operations, with detailed control and visibility:
    GUI or headless browser using a proxy, then:

    • Burp with scripting:

    • ZAP with scripting:

  • A lot of HTTP operations, with varying parameters:
    GUI or headless browser using a proxy, then:

    • Burp with module:

    • ZAP with module:

  • Non-GUI to complicated pages with lots of JavaScript:
    Headless or text browser driven by something.


    • Headless Firefox:

      MDN's "Headless mode"
      "About:profiles", create a profile, "firefox -P NEWPROFILENAME -headless"

    • Headless Chrome:

      google-chrome --headless --disable-gpu --dump-dom https://somedomain/somepage

    Run a GUI app from a cron job: run it as a valid user instead of as root, with DISPLAY set:
    * * * * * su MYUSERNAME -c "DISPLAY=:0.0 /usr/bin/firefox -new-window";


  • GUI to complicated pages with lots of JavaScript:
    GUI browser with macro-engine extension.

    • iMacros for Firefox:

      Firefox browser extension.

      But in the free version: only macros saved as bookmarks can be saved/played, and it can't save data to a file.

    • UI.Vision Kantu for Firefox:
      +/- Firefox browser extension.
      Selenium IDE commands

      But in Kantu 5.1.9 after doing an "open" command in a macro, I get "No ipc available for the playing commands tab".
      Have to install XModules - Extension Modules for Kantu ?
      On Linux, download ZIP file, extract files to a directory "XModules", then:
      sudo bash
      mv XModules /opt
      cd /opt/XModules
      chmod +x *
      bash ./
      But that didn't fix the problem.
      Found that, after opening Kantu window, only first macro run will have the "open-IPC" problem. Just run the macro again and it works. Filed a bug report.

      Kantu window stuck in foreground, in front of normal browser window, so can't get "Enter" to work. Filed a bug report.

      You can create a bookmarklet to run a Kantu Macro:
      javascript:(function() {try {var evt = new CustomEvent('kantuRunMacro', {detail: {name: 'NAMEOFMACRO',from: 'bookmark',storageMode: 'browser',closeKantu: true} }); window.dispatchEvent(evt);} catch (e) {alert('Kantu Bookmarklet error: ' + e.toString());} })();

      Empty tabs/windows:
      +/- Weirdness with empty tabs/windows:

      • You can run a macro from an empty tab when started from the Kantu extension. The Kantu extension can load a new webpage by itself.

      • You cannot run a macro from an empty tab when started from a bookmark. These JavaScript "bookmarklets" need a "normal website" to work, so on an empty tab or e.g. the Chrome settings page the bookmark fails to run, and thus Kantu is never started.

Vuln-scanners with exploits and payloads


Automated testing drivers

Some of the big tools in the previous two sections (such as OWASP ZAP and Metasploit) have APIs and/or CLI interfaces and headless operation that let them be driven as testing engines.


Pentesting / hacking distros and tool bundles

+/- 0ut3r Space's "Linux distributions for hackers"

My opinion at the moment:
"Number of things installed" does not equal "power". Better to install each tool yourself, so you know something about how it works and what it's doing. And you're probably not going to test all the areas covered by the Kali tools; maybe you'll test web apps, so the tools for Wi-Fi cracking and password brute-forcing and malware reverse-engineering and smartphone-exploitation and such are just distractions. I'm just installing individual tools on Linux Mint and using them there.

From /u/subnetq1:
> So after messing around with Kali, then Kali Light and Black Arch,
> then Arch w/ Black Arch Repos, I was just curious. What are some
> major differences between the latter of the two, or is it just a
> matter of preference? I know there are some obvious differences:
> 1. Kali Light includes xfce, while Arch doesn't really include anything.
> 2. Kali uses apt, Arch uses pacman.

Metasploit is Metasploit whether you run on Arch or Kali, the package manager makes no difference. What you are really buying into when you decide 'black arch' or 'kali' is a set of default configurations, default packages, default desktop environments (all of which are changeable), and a specific support team (how fast will they update packages, and provide new releases, will they do this in a timely manner for your favorite packages?, how well integrated are the packages? Do they consistently work?)

All of this is why you might choose to use a distribution like Kali, or Black Arch, for pentesting. You can install Metasploit or most other common pentesting tools in Ubuntu. But they are not a priority, and may not be updated as frequently, or integration bugs fixed as fast as with Kali or Black Arch - these distributions have a commitment to supporting these packages as "mission essential" for the distribution.



Note: if "View Page Source" doesn't show you much about a page, open browser's DevTools and look in Inspector.

Browser Add-ons

  • green check-mark  FoxyProxy: easily switch among proxy settings.

    To stop Mozilla/Firefox/Google traffic from showing up in Burp (or similar for ZAP):
    liamosaur / foxyproxy.json and set FoxyProxy to "Use Enabled Proxies By Patterns and Priority".

  • Reload Skip Cache Button (by Button Guy): reload current page without using the browser cache.

    Green circular-arrow icon in toolbar to open it.

  • d3coder: encode and decode selected text.

    Chrome only.

  • Cryptext (by cscarpa): encode and decode selected text.

    Green "C" icon in toolbar to open it. To copy/paste text you have to right-click and use the context menu; you can't Ctrl-C/Ctrl-V, because as soon as you hit Ctrl the window closes. And as soon as you go to the browser's address bar, the Cryptext window closes and you lose anything in there.

    Better to use Code Beautify web site.
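The same encode/decode work can be done offline in a shell, avoiding the clipboard problems entirely (base64 shown here; other encodings have similar standard tools):

```shell
payload='<script>alert(1)</script>'
encoded=$(printf '%s' "$payload" | base64)
echo "$encoded"
# Round-trip back to the original:
printf '%s' "$encoded" | base64 -d
```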

  • green check-mark  BuiltWith: see what technologies a web site uses.

    Green "bw" icon in toolbar to open it. Getting 1 "details" listing per day is free; getting 5 requires a free account; more than that costs $144/year. The simpler free "tech" listing seems sufficient, but it doesn't list frameworks such as AngularJS or Angular.

  • green check-mark  Wappalyzer: see what technologies a web site uses.

    Purple (usually) icon in address bar to open it, or you can have the icon change with each web site's technology. Also has a telemetry preference you should turn off, probably.

  • What CMS Is This: see what CMS a web site uses.

    Blue "W" icon in toolbar to open it. Failed to find a CMS on any site I tried, while BuiltWith reported several CDNs. It does detect references to GitHub.

  • HackBar (v2, by Khoiasd): performs encryption, encoding, decryption, POST data manipulation, inject code generation, more, on HTTP requests.

    But doesn't show HTTP responses. Shows up as an icon in the browser's debugger (Firefox shift-F5).

  • HackBar Quantum (by DLS): same as HackBar by Khoiasd, plus some payloads and auto-pwns.

    UI less convenient. Green globe icon in toolbar or F9 to open it.

  • Tamper Data (for FF Quantum, by Pamblam): manipulate GET and POST requests.

    Blue cloud icon in toolbar to open it. Can't figure out how to use it: my HTTP requests hang, never go out or get a response or something.

  • Tamper Dev (for chromium): manipulate GET and POST requests.

    Tamper Dev

  • HTTP request maker (by stefano): manipulate requests.

    Ctrl+Shift+Y to open, but in Firefox that combination means "open Download history window".

  • HTTP Header Live (by Martin Antrag): manipulate requests.

    Blue hexagon icon in toolbar to open it. Awkward: it shows activity by all tabs in one stream.

  • WebSecurify: capable of finding XSS, XSRF, CSRF, SQL Injection, File upload, URL redirection, more.

  • XSS Me:

  • XSS chef: connects to the XSS Chef framework, installed separately.

  • XSS Rays:

    beefproject / beef / Xss Rays

  • SQL Inject Me:

  • HPP Finder:

  • SPAudit

    Vladimir's "Single-page applications need better auditing"

Pavitra Shankdhar's "18 Extensions For Turning Firefox Into a Penetration Testing Tool"
Pavitra Shankdhar's "19 Extensions to Turn Google Chrome into Penetration Testing tool"
Firefox-only to find hidden links on pages: SixOrNot, LinkGopher, IPvFoo.
mazen160 / Firefox-Security-Toolkit (install lots of useful extensions)

If you want to grab or examine code from a site, but it's all on one line, process it with: Prettier.
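If the one-line blob turns out to be JSON rather than JavaScript, Python's stdlib can pretty-print it offline (a small aside; Prettier is still the tool for JS):

```shell
printf '{"a":[1,2],"b":{"c":3}}' | python3 -m json.tool
```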

External web site to send data to

For some challenges, mainly XSS, you need an external site the victim will access, and a way for you to pick up the params they sent to that site.

One way to do that is to use RequestBin. Go there in your browser, click on the "Create a RequestBin" button, and get a URL with a random token on the end, such as "". Have the victim do a GET or POST to that URL, equivalent to:

curl -X POST -d "fizz=buzz"
# or
curl -X GET\?param1\=5555
Then in browser, go to "" to see the data that came across.
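A local stand-in for this flow, useful for checking what your injected payload actually sends before involving a real external site (throwaway server on a hypothetical port 8000):

```shell
# Throwaway local web server standing in for the external RequestBin site.
python3 -m http.server 8000 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV=$!
sleep 1
# The "victim" request; here we just confirm it goes through (HTTP 200).
code=$(curl -s -o /dev/null -w '%{http_code}' "http://127.0.0.1:8000/?param1=5555")
echo "$code"
kill $SRV
```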


Recording your work

See the Recording Desktop Activity and Recording CLI Activity sections of my Linux page.

Browser add-on "Nimbus", called "Nimbus Screenshot & Screen Video Recorder" on Chrome and "Nimbus Screen Capture: Screenshot, Edit, Annotate" on Firefox.

For Windows: Greenshot.

Managing the project

I want something that will:
  • Maintain a list/diagram of apps and services.
  • Maintain a list/diagram of servers and networks and IP addresses.
  • Maintain a list/diagram of domains and other public services (email, FTP).
  • Maintain a list of tests run against everything.
  • Maintain a list of vulnerabilities found.
  • Maintain a list of vulnerabilities exploited.
  • Maintain a list of permissions achieved on apps and services and servers.
  • Maintain a list of changes made to the target.
  • Produce reports of all of the above.
  • Hold attached files such as a saved testing Context from OWASP ZAP.
  • Free.
A lot to ask.

There seem to be a lot of tools for managing N people testing one business.
I want a tool for 1 person testing N businesses/apps.

Refined my thinking a bit, and asked this:
Looking for a test-organizing app for bug-bounty-hunting

I am looking for some "dashboard" app that presents a matrix of combinations: role in app, type of client device, type of client browser, app functional area. Then for each point in the matrix, there are buttons to launch apps such as Burp Suite, OWASP ZAP, Metasploit, nmap. Also buttons to list vulnerabilities found at that point in the matrix.

I would use this to manage my bug-bounty-hunting process. Within each app such as Burp Suite, some operations would be automatic and some manual. But I'm not looking for the test-organizing app to run any of the tests, just to be a dashboard and connect me to the appropriate lower-level apps, probably giving a label such as "normal user, using desktop Firefox, doing login/logout".

Does anything like this exist ? I've looked at a few things, such as OpenVAS. Couldn't get Dradis install to work. Looked at sh00t. I've used Burp Suite and OWASP ZAP and nmap, haven't tried Metasploit yet. Many other apps on my list to install and try.

Does something like Selenium do this ? I don't want to run automated tests, I want to manage the process and point to other tools.

Thanks for any help.

I don't want to replicate any of the port-mapping or page-tracking or report-generating features of big suites such as Burp or ZAP. I want a dashboard where I can see which areas have been covered and which haven't, and click to launch into the appropriate tool to do testing or to see the existing vuln/exploit/report or to see the relevant app pages and documentation pages.
"test matrix application"
"Test management tool"
"requirements traceability matrix"
But I don't need multi-user, I don't need graphs and reports and data analysis, don't need links to version control, don't need build control, trouble tickets.

Actual so far

  • SwiftnessX:

    Heitor Gouvea's "How to better organize your notes while hunting for bugs"
    ehrishirajsharma / SwiftnessX

  • sh00t:

    Task manager, to-do checklists, reporting.

    pavanw3b / sh00t

    Installed sh00t on my Mint 19.1 system:
    # I already had Python3 and pip installed.
    cd ~
    git clone
    cd sh00t
    pip install -r requirements.txt
    # failed with "No matching distribution found for Django==2.0.8"
    pip install Django
    # but it installed "Django-1.11.20 pytz-2018.9"
    pip install -r requirements.txt
    # failed with "No matching distribution found for Django==2.0.8"
    pip3 install -r requirements.txt
    # worked !
    python3 migrate
    python3 createsuperuser
    # create user account; asks for username, email address, password
    # password must be 8+ chars and not "simple"
    # to pre-load with content:
    # alarming message about resetting everything in database
    # I said yes
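The pip failures above came down to pip targeting Python 2 while Django 2.0.8 requires Python 3 (hence "No matching distribution found" until pip3 was used). A quick way to check which interpreter each pip targets:

```shell
pip --version 2>/dev/null || true    # may show "python 2.7": the root cause
pip3 --version 2>/dev/null || true
python3 -c 'import sys; print(sys.version_info[:2])'
```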

    To run sh00t:
    cd ~/sh00t
    python3 runserver
    # go to browser
    # go to
    # log in
    # when finished, kill the sh00t server:
    The app is organized as a hierarchy of Project - Assessment - Flag - Sh0t (although confusingly the order is shown as P - A - S - F). You can define Project and Assessment as you wish, but a Flag is a test case to be tested, and a Sh0t is a confirmed bug.
    Maybe reasonable definitions would be:
    • Project: company you're testing + month.
    • Assessment: app + methodology
    • Flag: test case.
    • Sh0t: confirmed bug.
    So you might have:
    "Project / Assessment / Flag / Sh0t".
    "Amazon + Jan 2019 / retail app + OWASP / SQL injection / bug1".
    "Amazon + Jan 2019 / associates app + OWASP / SQL injection / bug1".

    The "Configuration" is organized as a hierarchy of Methodology Master - Module Master - Case Master - Template.
    The definitions seem to be:
    • Methodology: OWASP or WAHH.
    • Module: class of activity (such as "testing error handling")
    • Case: strategy (such as "OSINT") and directions on how to do it.
    • Template: ???
    But the README says nothing about any of the things in "Configuration".

    Start by adding a Project, then adding an Assessment in that Project. You will get to enable a Methodology or various Flags, and now interesting things will appear under Flags / All. But there are no Templates.

    Much of the content seems to be vintage 2014 or so, including references to tool names and such.

    Submitted three Issues on GitHub, dev responded within a day.

  • Jira:

    Atlassian's Jira
    wikipedia's "Jira (software)"
    Costs $10/month.
    Way too complex for my needs.

  • Vulnreport:

    salesforce / vulnreport
    Malicious.Link's "VulnReport Install"

    Has an "organization and users" structure that a solo hunter doesn't need.

  • Dradis:
    Dradis on GitHub
    Dradis - Installing Dradis on Ubuntu

    Haxf4rall's "Dradis Framework - Collaboration and reporting for IT Security teams"

    Community edition is free. Plug-ins to import from Qualys, Nexpose, Acunetix, Burp, Nessus, nmap, more.

    [Bad] Installed Dradis CE on my Mint 19.1 system 1/2018, mostly using
    Dradis - Installing Dradis on Ubuntu:
    gpg --keyserver hkp:// --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
    bash -s stable < <(curl -s
    source /home/user1/.rvm/scripts/rvm
    rvm -v   # got 1.29.7
    for package in zlib openssl libxslt libxml2; do rvm pkg install $package; done
    # got several errors about configuration or make failures
    # script said do "rvm reinstall all --force", but that gave error
    rvm install 1.29.7  # not found
    rvm install 1.9.3  # not supported any more
    rvm install 2.6.0  # worked
    ruby -v  # got "ruby 2.6.0p0 (2018-12-25 revision 66547) [x86_64-linux]"
    echo "gem: --no-rdoc --no-ri" > ~/.gemrc
    gem install bundler
    bundle -v
    mkdir dradis-git
    cd dradis-git/
    git clone server
    for file in verify reset start; do curl -O$; done
    chmod +x *.sh
    cd server/
    [Missing command in the web page; stopped and reported it.]
    [But Support is useless, keeps not answering my questions, says they're working
    on new guides.  Referred me to
    which seems totally different from the steps I'm halfway through.]
    [Went back and forth with Support, and finally they said the instructions I used are a 5-year-old deprecated page that they thought no one had access to ! Use a different page: ]

    [Bad] Installed Dradis CE on my Mint 19.1 system 1/2018, using
    agreenbhm's "How-To: Install Dradis-CE 3 on Ubuntu Server 16.04":
    sudo apt update
    sudo apt install -y git redis-server ruby ruby-dev gcc make zlib1g-dev libsqlite3-dev libmysqlclient-dev g++
    cd /opt
    sudo git clone
    cd /opt/dradis-ce
    sudo ruby bin/setup
    # got a couple of "can't find gem bundler" errors, failed
    # Support said do
    bundle install --path /opt/dradis-ce
    # it failed, Gemfile not found, but that dir does contain a Gemfile
    # and Support sends me to the community forum
    # (Dradis framework Community forums),
    # I guess they can't help me
    # but they kept helping
    sudo gem install bundler
    bundle install --path /opt/dradis-ce
    cd /opt/dradis-ce
    sudo gem uninstall bundler
    # but got "bundler not installed" error
    cd /opt/dradis-ce
    sudo gem install bundler -v 1.16.4
    ruby bin/setup
    # failed with perms on ~/.bundle
    sudo chmod 777 ~/.bundle
    cd /opt/dradis-ce
    ruby bin/setup
    # did lots of fetching and installing, then another perm error, more chmod'ing
    ruby bin/setup
    # some more fetching and installing, then some kind of database access error

  • Faraday:

    Gathers info from many different tools, to show visuals and reports and analysis.

    Community edition is free, but doesn't include reporting and analysis.

    infobyte / faraday

    Seems to emphasize collaboration, multi-user, manager, CISO.

  • AttackForge:

    [Have to register to see any info about it.]

    Cyber Security Hub

  • Serpico:

    Mostly a report-template tool ?

    SerpicoProject / Serpico
    Shellntel's "The Number One Pentesting Tool You're Not Using"

  • TestLink:

    Test plans, test specifications, links back to requirements.

    franciscom / TestLink


    Bug Tracker extension connects to Bugzilla and GitHub.

Ghostwriter by SpecterOps. PlexTrac.

Ceos3c's "The different Phases of a Penetration Test"
Luke Rixson's "Hacking how-to's: Developing your process"
Barrow's "How to Organize Your Tools by Pentest Stages"
Occupy4eles's "Use Magic Tree to Organize Your Projects"
OccupyTheWeb's "The Hacker Methodology"

Your own OpSec

You may create new vulnerabilities in the target. You may create a tunnel that violates all of their security policies. You may see trade secret or proprietary or PII data. Your report is confidential, unless and until the client approves release of it. How are you going to protect those things from someone else coming in and trying to exploit/grab them ?

Assume someone smarter than you is trying to get into the same target that you are, and may be targeting YOU, trying to piggyback on you. Is some new plug-in or script or exploit that you grab from somewhere really safe, does what it says it does, can you trust it ? Are your tools updating themselves over unencrypted connections ?

Have you changed default passwords on Kali, the big tools, etc ? Are you using 2FA on your important online accounts ? Are you storing data in encrypted containers, that are open only when you're using them ?

Catalin Cimpanu's "Years-long campaign targets hackers through trojanized hacking tools"
DEF CON 23 - Wesley McGrew - I Hunt Penetration Testers: More Weaknesses in Tools and Procedures (video)
"Be careful what you OSINT with"
My "Computer Security and Privacy" page

Probably a bigger risk is that some ISP or big corp might blacklist you:
Mike Felch article (ignore the title)

Monty Python's "How Not To Be Seen" (video)

The main tools I'll be using for web-app testing (I think)


Tools / Methodology

Barrow's "How to Organize Your Tools by Pentest Stages"
Bugcrowd's "Researcher Resources - Tools"
JDow's "Web Application Penetration Testing Cheat Sheet"
"Web Application Penetration Testing Cheat Sheet"
Apriorit's "Web Application Penetration Testing: Minimum Checklist Based on the OWASP Testing Guide"
OWASP Testing Guide v4 Table of Contents

These are loosely organized into the phases where you'd use them. But many tools straddle several phases. And the exact names and definitions of the phases differ from source to source.

The organization I've chosen


[Reconnaissance]

  1. Start on a target.

  2. Learn the application: log in as user, do normal things, understand the application.

  3. Domain/server Discovery: OSINT and DNS work to get lists of domains and servers.

  4. Port scanning those domains/servers: scanning to verify domains and servers and ports exist.

  5. Verifying domains/servers/services: scanning to get banner pages etc to show what services are running.


[Analysis]

  1. Site/server Analysis: get software versions and patch levels etc.

  2. Content Discovery: find files on servers.


[Attack]

  1. Probe pages/scripts with bad parameters: attack bad input-handling.

  2. Attack application code and logic: more complicated attacks (XSS, SQLi, etc).

[Cleanup and Reporting]

  1. Remove anything you've installed or modified.

  2. Report what was done and results.

  3. Post-Reporting.

  4. Trying Again Later.


Some tools or techniques are forbidden in some bounty-hunting programs, maybe because they generate so much network traffic or tie up the servers or affect real users.

Use a VPN (unless you're doing custom traffic inside a LAN). Some clients may have automatic software that bans IP addresses that produce suspicious traffic, even if you're authorized to do testing. And it may add your IP to a blacklist that many companies use, not just the target. [This raises the question: are you going to get your VPN company blacklisted ?]

Don't just push the "scan" button on some huge framework and hope the right thing happens. Set the scope and configuration for the scanning, know what it's going to be doing.

From "Penetration Testing" by Georgia Weidman:
"Be forewarned: Not all public exploit code does what it claims to do. Some exploit code may destroy the target system or even attack your system instead of the target. You should always be vigilant when running anything you find online and read through the code carefully before trusting it."

"Scanning for vulns" is not the same as "penetration testing". Scanners make mistakes or give false positives. Follow up each hit with manual testing, and make sure you know what is happening, and try to broaden the scope of the problem. Don't just report scanner results and expect a bounty. Clients often have contracted with expensive pentesting companies that produce huge lists of scanner-hits, but then the client finds only 3 of them are worth fixing.


See "Strategies for choosing a target"

[Reconnaissance]

  1. Start on a target (tactics):

    From /u/cym13 on reddit:
    The advice with bug bounties is always the same: look for things nobody else thought of in places nobody else thought of.

    It is good practice for websites setting up a bug bounty program to first perform a security assessment of the platform, or at the very least launch automatic detection tools.

    Furthermore you're in competition with thousands of other researchers, so finding the obvious is not something you should strive for: if it's obvious, someone else will have found it before you. Maybe you'll be the lucky one but that's a game where there's not always a winner and always thousands of losers.

    This means your efforts are best spent:
    • Looking for things not usually found by such detection tools (I'd recommend against XSS on that part as that's the most basic thing ever; things like CSRF, Oauth misconfig or SSRF would be better in that regard).

    • Looking for websites that just started a bug bounty (to decrease the number of other researchers having already worked on it).

    • Looking for forgotten servers on old and big websites (nobody might notice if Google sets up a new debug server somewhere, and that's something you can take advantage of).

    I wonder about this: suppose you find a bug in some foundational library or product, such as Electron or libssl ? Can you make reports to N companies who all use that dependency, collecting a bounty from each of them ? [I guess you'd have to show POC for each of them, giving specific URLs and demonstrations for each app.] Or do you just report once to the source of the problem, getting one (smaller or zero) payment from that source ?

    A variant of this: find a misconfiguration or misuse of some common library or product, and see if N other companies make the same mistake.

    zseano's "Turning your time into bugs"
    zseano's methodology
    Ben Sadeghipour's "Doing recon like a boss" (video)

  2. Learn the application:

    [Maybe most of this is more applicable to corporate apps, not consumer apps. But apps are getting more complex all the time.]

    • RTFM. Read sales literature or watch videos. Is there a demo on the target's web site ? Can you subscribe to a newsletter ?

    • Log in to the app, do normal things, understand the application. Look at the sitemap. Don't just hit standard things such as login, search, file upload. Explore things that other bug-hunters may not get to.

    • Maybe diagram the flow and reach of the application. What are the roles, the data, the operations/transactions, the states of the application ? Make a matrix of roles and permissions ? (See ZAP's "session comparison" feature.)
      Alex Wauters' "How to get started with Threat Modeling"

    • What is the most valuable information in the application ?
      How important is availability/uptime of the application ?
      Are some parts critical to regulations such as PCI, HIPAA, GDPR, DFARS, FERPA, COPPA ?
      Where is there money, where is there PII ?
      If it's a messaging app, the integrity of the messages is a key item, compromising that is severe. Look for similar features that are important to the app, where a logic compromise or something can violate the integrity, instead of having to find some tricky technical flaw.

    • Is there a privacy policy page ?
      Can users control collection of their data, get a copy of their data, delete their data, delete their account ? Do these things work and conform to regulations ?

    • Are there things where one user could affect another user ? Such as messaging, creating a new public theme, creating a new store, offering items for sale, commenting on another user's page ?

    • Are there points where a user uploads content (files, notes, comments, URLs, requests, problem reports, themes) into the application ?

    • Are there points where the user is sent to somewhere "else" ? How is that done ? Look at any place where a path or filename or page name is in an URL parameter.

    • Try different roles ("authorizations", or "auth-z"s) in the application, different transactions, maybe create multiple users, try deep features that may be less-tested, try unusual features such as password reset, change username, delete account, cancel order. [The bounty program may impose rules about how to create test users, and what operations are allowed.] Try desktop and mobile, different human languages, different browsers.

    • What are the default or standard accounts and passwords ? Are there demo or example or admin accounts ? Suppose the installer blindly followed the examples and defaults in the manual, what accounts and passwords and server names would be created ? Are there interesting URLs in the PDF documentation of the app ?

    • Are there demo or example pages ? Or a complete example application, that might accidentally have been left on the server ?

    • Does the application require that users modify their computers, installing a certificate or app or applet or browser extension, or naming the web-app's domain in a "trusted" security zone of the browser ? What behavior do those things have ? Is there messaging between them ? What kind ?

    • If the application handles internal corporate users as well as public users, are the internal users required to use some ancient browser such as IE6 ? Do they use ActiveX controls ?

    • What frameworks and technologies and libraries does the application use ?
      Katie Explains: Modern Web Development (video)
      Are some scripts loaded dynamically, as in ad-networks ? Ad code is more likely to have vulnerabilities or provide a path to create a vulnerability in the application.

    • Does the application use old, deprecated technologies, such as Flash or Silverlight ? PDF documents, while not deprecated, have their problems.

    • How is authentication ("auth-n") done, and persisted ? Are there different login points, different types of authentication ? Encryption ? Is there rate-limiting, timeout, lockout ? Rules to enforce strong passwords ? Can usernames be enumerated somehow ?

    • Are there different parts of the application that look different or are built differently ? Are parts of it "legacy" and parts of it new ? Are parts of it free and other parts behind a paywall ? Check how each part is made, and the boundaries between them. How is authentication done, and passed between them ?

    • Are there sub-domains or parts of the application that are listed as "out of scope" for testing ? Maybe they're neglected or full of bugs. You might look at them to see if anything in them might be replicated in the in-scope areas.

    • After you've learned the application a bit, go back and re-read the bounty program rules, which may make more sense now.
      Vickie Li's "Out of Scope"

    • Learn and use "out of scope" parts of the application, but don't attack them. Understanding them might help you understand the in-scope parts better.

    • Is there an issues or to-do list on GitHub or somewhere else ? A forum where users are grousing about problems ? Same for any of the frameworks or major libraries the app is using.

    • Can you install the application locally, on your own machine(s) ? This will make it much easier and safer to learn it, brute-force it, create privileged users, dig into internals and source code, examine log files, etc. Where are the log or audit files ? Is there a master config file ? (ghostlulz's "Exposed Log and Configuration Files") Is there a debug mode ? Are there hooks or modes for testing ? Where and how are credentials stored ? What OS user is the app server code running as ? How does it update or get patched ? How is it backed up and restored ? How are patches applied ? Are there cron jobs or daemons ? Can you extract version numbers of internal modules, packages or libraries ? Does the app depend on any other services ? Can you install those locally too ?

    • If you can get the source code, you could try running static code-analysis tools on it. And read it.
      Will Butler's "How to Find Vulnerabilities in Code: Bad Words"
      Vickie Li's "Code Review 101"
      Seth & Ken's Excellent Adventures (in Code Review)
      wireghoul / graudit
      Philippe Arteau's "OWASP Find Security Bugs" (PDF)
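
    As one concrete example from the list above (username enumeration), here is a minimal sketch; the login URL and form-field names are placeholders, not any real app's API:

```python
# Sketch: username enumeration via a login endpoint. If the app answers
# differently for "unknown user" vs "wrong password", account names can be
# confirmed one at a time. URL and field names below are hypothetical.
import urllib.parse
import urllib.request

def login_response(url: str, username: str, password: str) -> bytes:
    """POST a login form and return the raw response body."""
    data = urllib.parse.urlencode(
        {"username": username, "password": password}).encode()
    req = urllib.request.Request(url, data=data)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

def leaks_usernames(url: str, known_user: str, bogus_user: str) -> bool:
    """True if failed logins reveal whether an account exists."""
    return (login_response(url, known_user, "wrong-password")
            != login_response(url, bogus_user, "wrong-password"))
```

    In practice you would also compare status codes, response times, and headers, not just bodies.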

    This is a lot of work. Maybe if you're very good, or very specialized, or feeling reckless, or just looking for a quick score, you can skip much of this learning, and just plunge into the app and see what the pages look like.

    But learning the app may give you a big edge over other hunters, and you may be able to test features they can't get to. If the same app is used by other targets, maybe learning it well is worthwhile. What company wrote this app ? Maybe look at other apps they've written.

    You could always alternate both styles: take a quick shot at the app, read the manual a bit, take another shot, learn more about the app, do some more poking, etc.

    Static code analysis:
    A Bug'z Life's "Bug Hunting Methodology from an Average Bug Hunter"
    n00bie's "Web Application Hacking - Analyzing the Application"

  3. Domain/server Discovery:
    OSINT and DNS work to get lists of domains and servers.

    Also see OSINT.

    [For testing corporate web apps, probably this whole phase is almost useless. The company's bug-bounty program will define a scope that lists the exact domains to be tested.]

    Don't re-invent the wheel, especially when it comes to scanning across the internet. There are a bazillion tools already available. Use Google Search, see Crawler.Ninja, Common Crawl, Shodan, more.

    redhuntlabs / Awesome-Asset-Discovery
    Fox-IT's "Getting in the Zone: dumping Active Directory DNS using adidnsdump"
    Adam Todd's "Active Directory for Script Kiddies"
    Adam Todd's "More Active Directory for Script Kiddies"
    adrecon / ADRecon
    ghostlulz's "Certificate Transparency Logs"
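
    A sketch of the Certificate Transparency approach: it assumes crt.sh's JSON output mode, with newline-separated hostnames in a "name_value" field, so verify the format before relying on it:

```python
# Sketch: pull candidate hostnames out of Certificate Transparency logs
# via crt.sh. The "name_value" field format is an assumption about that
# service's JSON output; check it before depending on it.
import json
import urllib.request

def hosts_from_crtsh_json(raw: str) -> set:
    """Extract unique hostnames from a crt.sh-style JSON response body."""
    hosts = set()
    for entry in json.loads(raw):
        for name in entry.get("name_value", "").splitlines():
            hosts.add(name.strip().lstrip("*."))
    return hosts

def crtsh_lookup(domain: str) -> set:
    """Query crt.sh for certificates covering domain and its subdomains."""
    url = "https://crt.sh/?q=%25." + domain + "&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hosts_from_crtsh_json(resp.read().decode())
```

    Each hostname found this way still has to be resolved and confirmed in-scope before you touch it.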

  4. Port scanning those domains/servers:
    Scanning to verify domains and servers and ports exist.

    John Anderson's "Still Scanning IP Addresses? You're Doing it Wrong"

    [For testing corporate web apps, probably this whole phase is almost useless. The company's bug-bounty program will declare this out of bounds; they don't want their network or servers bombarded, they want you to find application logic or coding errors.]

    But even if port-scanning is outlawed, try opening a few ports manually:
    • 88, 464, 543, 544, 749-754, 760, 1109: Kerberos.
    • 118, 156: SQL Service.
    • 161: SNMP.
    • 389, 636, 3268, 3269: LDAP.
    • 396: Novell Netware.
    • 445: Microsoft-DS (Active Directory, SMB, more).
    • 901: Samba.
    • 902, 903, 8222, 8333, 9443: VMWare.
    • 1433, 1434: MS SQL Server.
    • 1512: MS WINS.
    • 1521, 1522, 1525, 1527, 1529, 2483, 2484: Oracle SQL.
    • 2049: NFS.
    • 2375-2377, 4243, 5000, 7946: Docker.
    • 2638: SQL Anywhere.
    • 3000: Ruby on Rails development default, and others.
    • 3020: CIFS.
    • 3306: MySQL.
    • 3389: RDP.
    • 3702: WS-Discovery.
    • 3872, 4444, 5555, 5556, 6201, 7777, 16000, 16225: Oracle Enterprise Manager and other Oracle.
    • 4125: Microsoft Remote Web Workplace.
    • 4848: Java, Glassfish Application Server administration default.
    • 5000: uPNP, Flask, Docker, more.
    • 6379: Redis.
    • 8000: Django Development Webserver.
    • 8009, 8080, 8243, 8280, 8443, 8983, 9006, 9042: Apache various.
    • 8082, 8083, 8443, many more: Citrix.
    • 8172: MS IIS remote admin.
    • 8840: Opera Unite.
    • 8880, 9043, 9060, 9080: IBM WebSphere various.
    • 9001: Microsoft SharePoint.
    • 9200: Elasticsearch.
    • 9800: WebDAV.
    • 10000: Webmin.
    • 10250: Kubelet / Kubernetes.
    • 11371: OpenPGP HTTP key server.
    • 12201: Graylog.
    • 20000: Usermin.
    • 24444: NetBeans.
    • 27017: MongoDB.
    • 33848: Jenkins.
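
    Even without nmap, the list above can be walked with a plain TCP connect; a minimal sketch (127.0.0.1 is a stand-in -- substitute an in-scope host):

```python
# Sketch: check a handful of interesting ports with plain TCP connects,
# instead of a full port-scan. Target host here is a local placeholder.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (1433, 3306, 6379, 8080, 9200, 27017):
    if port_open("127.0.0.1", port):
        print("port", port, "open")
```

    Keep it to a handful of ports; sweeping the whole list across many hosts is exactly the port-scanning that may be outlawed.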

    sanspentest's "Web Application Scanning Automation"

    See the Port scanning or router testing section of my Testing Your Security and Privacy page.

  5. Verifying domains/servers/services:
    Scanning to get banner pages etc to show what services are running.

    Public Suffix List


See Chapter 4 "Mapping the Application" in "The Web Application Hacker's Handbook" by Stuttard and Pinto.

[Analysis]

  1. Site/server Analysis:
    Run standard tests that you'd run against your own personal web site, to see if the basics are covered. See the "Periodically check your site" section of my "Your Personal Web Site" page.

    Get software versions and patch levels etc. Get the site headers / policies (htaccess). Are the security settings tight ?

    Also see Web Apps.

    Once you know what libraries or products the app is using, look for CVEs for those.

    Guru99's "How to Hack a Web Server"
    Anant Shrivastava's "Web Application finger printing"
    David Fletcher's "Finding: Server Supports Weak Transport Layer Security (SSL/TLS)"
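
    For the headers part of this phase, a minimal sketch; the header list is illustrative, not a complete checklist:

```python
# Sketch: fetch a page and note which common security headers are absent.
# The header list below is a small illustrative subset.
import urllib.request

SECURITY_HEADERS = (
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
)

def missing_security_headers(headers) -> list:
    """Return which of the expected security headers are absent."""
    present = {name.lower() for name in headers}
    return [h for h in SECURITY_HEADERS if h.lower() not in present]

def check_site(url: str) -> list:
    """GET the URL and report its missing security headers."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return missing_security_headers(dict(resp.headers))
```

    A missing header alone is usually not bounty-worthy (see the Reporting notes below about picky weaknesses), but it tells you how much care went into the deployment.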

  2. Content Discovery:
    Find files on servers.

    Try various user-agent strings; application may have different files for different clients.

    Try logging in as users with various privilege levels; application may have different files for different privilege levels.
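
    A sketch of the user-agent idea: fetch the same path as a few different clients and flag any difference. The UA strings are abridged stand-ins; use full real ones in practice:

```python
# Sketch: request the same URL with different User-Agent strings and flag
# when any client receives a different body. UA strings are abridged.
import urllib.request

AGENTS = {
    "desktop": "Mozilla/5.0 (X11; Linux x86_64)",
    "mobile": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X)",
    "bot": "Googlebot/2.1 (+http://www.google.com/bot.html)",
}

def fetch_as(url: str, user_agent: str) -> bytes:
    """GET the URL pretending to be the given client."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

def serves_different_content(url: str) -> bool:
    """True if any of the user-agents above receives a different page."""
    bodies = {fetch_as(url, ua) for ua in AGENTS.values()}
    return len(bodies) > 1
```

    Dynamic pages (timestamps, CSRF tokens) will make naive body comparison noisy; in practice you'd normalize or diff the responses.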

    Kathan Patel's "How You Can Use JavaScript In BugBounty"

    Page not found on empty toilet-paper roll


[Attack]

  1. Probe pages/scripts with bad parameters:
    QA engineer walks into a bar
    Attack bad input-handling.

    Generally, by now (or in earlier phases), you're using a special "intercepting proxy" between you (browser or app) and the network. The proxy supports recording the outgoing requests and the incoming results, and then analyzing them, repeating them, altering them. Examples are the proxies in Burp, OWASP ZAP, and Telerik Fiddler.

  2. Attack application code and logic:
    More complicated attacks (XSS, SQLi, etc).

    What is the structure of a web page ? Is the application using frameworks ?
    Katie Explains: Modern Web Development (video)
    Are there iframes ? Is there messaging among parts of a page ? Is data on app server being changed via form posts, or page-gets ? How are sessions identified ?

    A key thing is to track where inputs go to, what they affect. Are they sanitized ? How are special characters handled ? Do inputs change tags on the page ? How are they sent down to the app server ?

    Sanitizing/escaping probably should be done differently for URLs, form fields, and variables. If they're all done the same way, probably one of them is vulnerable.
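
    A quick illustration of that point, using Python's standard encoders: the safe encoding for a URL component and for HTML text are different operations, so code that applies one everywhere leaves the other context exposed.

```python
# Sketch: the same payload needs different escaping per output context.
# URL-encoding does not neutralize HTML, and HTML-escaping does not
# produce a valid URL component.
import html
import urllib.parse

payload = '<script>alert(1)</script>'

url_safe = urllib.parse.quote(payload)   # fine inside a URL, NOT inside HTML
html_safe = html.escape(payload)         # fine inside HTML text, NOT in a URL

print(url_safe)
print(html_safe)
```

    Attribute values, JavaScript strings, and SQL each need yet other escaping rules; a site that funnels everything through one "sanitize" function is worth probing at the mismatched context.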

    If you have accounts with different levels of privilege, try doing all operations as the high-privilege user, then log out, log in as low-privilege user, and replay all the operations (changing session ID or CSRF token to new value).
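
    A minimal sketch of that replay, using plain HTTP calls instead of a proxy; the URL and the "session" cookie name are placeholders for whatever the app actually uses:

```python
# Sketch: re-send a request that worked for the high-privilege session,
# but with the low-privilege session cookie, and compare status codes.
# The cookie name "session" is a hypothetical placeholder.
import urllib.error
import urllib.request

def status_with_session(url: str, session_id: str) -> int:
    """GET the URL with the given session cookie; return the HTTP status."""
    req = urllib.request.Request(
        url, headers={"Cookie": "session=" + session_id})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

def authz_missing(url: str, admin_session: str, user_session: str) -> bool:
    """True if an admin-only URL answers 200 for both sessions."""
    return (status_with_session(url, admin_session) == 200
            and status_with_session(url, user_session) == 200)
```

    Matching status codes alone can mislead (some apps return 200 with an error page), so also diff the bodies before claiming broken access control.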

    See "Penetration Testing and Bug-Bounty Hunting Attacks" page.

    OWASP's Xenotix: XSS tester.

    Netsparker ($5K per year)
    Acunetix ($9K) (free for VERY limited version, about $500/year for "Starter" version)
    HTTPCS (about $650/year for "Basic" version)
    IronWASP (free; essentially Windows-based; latest release in 2015)

    Try to find the biggest scope for the bug. Multiple browsers, multiple OS's, desktop and mobile, multiple versions, multiple countries, multiple users, etc.

    Tools for specific targets


[Cleanup and Reporting]

  1. Cleanup of the target system(s):
    Keep good notes, so you can clean up at the end of the testing, or tell the target what was modified.

    If there's something you can't clean up, notify the client/target so they can clean it up.

  2. Reporting:
    • Report in some standard file format, probably Markdown.

    • Start building your report as you test, don't leave it all until the end.

    • Explain the severity and effects, for both developer and non-technical audiences. Can the attacker steal money or PII ? Create fraudulent orders ? Send messages to other users, to get them to transfer money or give up credentials or PII ? Delete or corrupt or ransomware the database ?

      This is critical; don't report a bug without it. You can't just say "well, I did XSS, your code let me pop up an alert". You have to say "I was able to grab THIS private information THIS way".

    • Don't report scan results that you don't quite understand, in the hope that some of them win a bounty. The company probably has done scans already. Scanners are fairly unreliable. You don't want to flood the company with false positives or incoherent reports. You need to drill down manually on each item and get a clear understanding of it.

    • Don't report some picky error or weakness, such as HTTP or CSP headers not as tight as they could be.

    • Re-read the allowed scope and known (excluded) vulnerabilities, to make sure your bug is okay.
      Vickie Li's "Out of Scope"

    • Double-check the bug, run it again from a clean state. If possible, run it in a clean browser with no add-ons and no intercepting proxy. If it's a mobile bug you found through an emulator, re-check using a real device. If you found it on a rooted device, retry on a non-rooted device.

    • Target may have a standard form for reporting bugs.

    • Document clearly, with exact URLs and with pictures and video, for both vulnerability and exploit (if separate). Assume that your report will go to some triage person who isn't familiar with the app, then maybe to some junior programmer. Don't rely on technical bug-bounty jargon or assume the developers know it.

    • Document browser, OS, country, language, app version, etc if relevant. Make sure you're on latest browser and OS, no browser add-ons are interfering, if these are relevant.

    • If the bug is proven by exfiltrating user data, don't exfiltrate real data. Create a new independent account containing dummy data and exfiltrate that data.

    • Note the range of the bug. Are all web pages of the app vulnerable in the same way ? Does it affect multiple users ? Does it affect admins ?

    • Maybe refer to standard classifications, such as Bugcrowd's Vulnerability Rating Taxonomy. Some people say CVSS is not a good system to use. OWASP Risk Assessment Calculator

    • Maybe note any possible regulatory or legal impacts, but be careful, this is not your area of expertise.

    • Maybe suggest a fix, but be careful, you may not know enough about the app.

    • Don't editorialize or be harsh or advocate an urgent fix; let the facts speak for themselves.

    • You're reporting to busy professionals in a business, who will decide whether to give money to you. Write concisely and professionally, with correct grammar and spelling. Format the report in some reasonable way, with headings and lists as appropriate. Don't waste their time, or use hacker slang, or try to come across as a tough-guy hacker-wizard.

    • It would be nice to have a second person proofread your report and see if they understand it, but maybe that would violate confidentiality.

    • Make sure your name and contact information are on the report. Copyright ? Statement that this report is your work and opinion, not that of any company you might work for.

    • If you've done anything to a production server that you were unable to clean up afterward, explain and give details so the company can clean it up.

    Be especially rigorous in your first few reports, when you're unsure of the process and trying to build a reputation.

    If your report is rejected as a duplicate, in some programs you can ask to be added as a collaborator, to see the prior report and verify that yours really is a duplicate ?

    Even if your report is rejected as a duplicate, or not serious, or out of scope, generally you are NOT free to disclose the issue publicly. If you really want to publish it, first get permission of both the target and the company running the bug-bounty program.

    John Stauffacher's "Advice for Writing a Great Vulnerability Report"
    Ryan Satterfield's "How To Write a Proof Of Concept For Security Holes"
    Gwendal Le Coguic's "How to write a report"
    Vickie Li's "How to Write a Better Vulnerability Report"
    Google Bughunter University's "Improving your reports"
    ZephrFish / BugBountyTemplates
    SSD Secure Disclosure's "Report Template" (more intended for binaries ?)
    Nicholas Handy's "Bug Reporting for Bug Bounties"
    tolo7010's "Writing a good and detailed vulnerability report"
    Bugcrowd University - How to Make a Good Bug Submission (video)
    Melisa Wachs' "DOs and DON'Ts of Pentest Report Writing"
    Brian B. King's "Your Reporting Matters: How to Improve Pen Test Reporting"
    Pentester Land's "List of bug bounty writeups" (very uneven, more articles than reports, but ...)

  3. Post-Reporting:
    Do you have a lot of the target's data saved on your systems ? That is a legal liability to you; you are responsible for protecting it, perhaps to standards dictated by GDPR or some other regulations. Probably best to delete all of it.

    At some point, after ALL is done, you may even want to delete your report, or at least redact it to remove the target's sensitive data from it. What could happen if someone steals it from your system ? What could happen if the data is published (not because of a breach of your system), and there is an investigation of everyone (including you) who possessed that information ?

  4. Trying Again Later:
    It's possible the target may want to make a fix and then have you re-test.

    The same approach may work on previous targets you've attacked. So don't throw away info about your previous work, even unsuccessful work.

    And as you go along, you're developing your own techniques and payloads. Maybe you can go back and use them against targets you previously tried.

    clirimemini / Keye (tool to detect changes in pages)
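
    The idea behind a change-detection tool like Keye can be sketched in a few lines: record a fingerprint per URL on one pass, then report what changed on the next:

```python
# Sketch: detect changed pages between two passes by comparing content
# hashes. The dicts map URL -> fingerprint; persistence (e.g. to JSON)
# is left out for brevity.
import hashlib

def fingerprint(body: bytes) -> str:
    """Stable fingerprint of a page body."""
    return hashlib.sha256(body).hexdigest()

def changed_urls(old: dict, new: dict) -> list:
    """URLs whose fingerprint differs from last time, or that are new."""
    return sorted(url for url, fp in new.items() if old.get(url) != fp)
```

    A changed page is a hint that code was touched recently, which is a good place to re-test.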

SecTools.Org (a bit stale)
Pentesting Tutorials' "Pentesting Methodology Tutorial"
EdOverflow / bugbounty-cheatsheet /
Janidu Jayasanka's "Penetration Testing & Hacking Tools List for Hackers"

OnlineHashCrack (hash identifier)
TunnelsUp's "Hash Analyzer"
psypanda / hashID
Code Beautify (many converters, decryptors, validators)
MD5 conversion and MD5 reverse lookup (MD5 = 32 hex digits)
CrackStation (hash cracker)
Browserling's "Web Developer Tools"
Web Toolkit Online URL-shortener that supports any protocol.
HTTPie: command-line HTTP client.


Questions / issues

  • How is my home ISP going to react if they see me doing intensive scans of some web site on the public internet ?
    TokyoNeon's "The White Hat's Guide to Choosing a Virtual Private Server"

  • May have to do port-forwarding in my home router to allow incoming connections.

  • Would be best to have 3 machines:
    • Daily desktop machine (stable, no incoming services, no open ports).
    • Hacking machine (running Kali, or loaded up with tools).
    • Target machine (running web server, web app, other targets).
    Or if you have a LOT of RAM in your machine, run Hacking and Target as VMs ?