80,443 - Pentesting Web Methodology
Basic Information
The web service is the most common and extensive service and a lot of different types of vulnerabilities exist.
Default ports: 80 (HTTP), 443 (HTTPS)
PORT STATE SERVICE
80/tcp open http
443/tcp open ssl/https
nc -v domain.com 80 # GET / HTTP/1.0
openssl s_client -connect domain.com:443 # GET / HTTP/1.0
Web API Guidance
Methodology summary
In this methodology we are going to suppose that you are going to attack a domain (or subdomain) and only that one. So, you should apply this methodology to each discovered domain, subdomain or IP with an undetermined web server inside the scope.
- Start by identifying the technologies used by the web server. Look for tricks to keep in mind during the rest of the test if you can successfully identify the technology.
- Any known vulnerability of the version of the technology?
- Using any well known tech? Any useful trick to extract more information?
- Any specialised scanner to run (like wpscan)?
- Launch general purpose scanners. You never know if they are going to find something or some interesting information.
- Start with the initial checks: robots, sitemap, 404 error and SSL/TLS scan (if HTTPS).
- Start spidering the web page: it's time to find all the possible files, folders and parameters being used. Also, check for special findings.
- Note that anytime a new directory is discovered during brute-forcing or spidering, it should be spidered.
- Directory Brute-Forcing: Try to brute force all the discovered folders searching for new files and directories.
- Note that anytime a new directory is discovered during brute-forcing or spidering, it should be Brute-Forced.
- Backups checking: Test if you can find backups of discovered files appending common backup extensions.
- Brute-Force parameters: Try to find hidden parameters.
- Once you have identified all the possible endpoints accepting user input, check for all kinds of vulnerabilities related to them.
- Follow this checklist
Server Version (Vulnerable?)
Identification
Check if there are known vulnerabilities for the server version that is running.
The HTTP headers and cookies of the response could be very useful to identify the technologies and/or version being used. An Nmap scan can identify the server version, but the tools whatweb, webtech or https://builtwith.com/ could also be useful:
whatweb -a 1 <URL> #Stealthy
whatweb -a 3 <URL> #Aggressive
webtech -u <URL>
webanalyze -host https://google.com -crawl 2
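Beyond these scanners, a quick manual look at the response headers often reveals the stack. A minimal sketch (the URL is a placeholder):

```bash
# Inspect headers and cookies for tech hints (Server, X-Powered-By, framework cookies)
curl -skI https://domain.com | grep -iE 'server|x-powered-by|x-aspnet-version|set-cookie'
```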
Search for vulnerabilities of the web application version
Check if there is a WAF
- https://github.com/EnableSecurity/wafw00f
- https://github.com/Ekultek/WhatWaf.git
- https://nmap.org/nsedoc/scripts/http-waf-detect.html
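A quick sketch using two of the tools above (the target is a placeholder):

```bash
# Fingerprint a possible WAF in front of the target
wafw00f https://domain.com
# Nmap NSE alternative
nmap -p 80,443 --script http-waf-detect domain.com
```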
Web tech tricks
Some tricks for finding vulnerabilities in different well known technologies being used:
- AEM - Adobe Experience Cloud
- Apache
- Artifactory
- Buckets
- CGI
- Drupal
- Flask
- Git
- Golang
- GraphQL
- H2 - Java SQL database
- ISPConfig
- IIS tricks
- Microsoft SharePoint
- JBOSS
- Jenkins
- Jira
- Joomla
- JSP
- Laravel
- Moodle
- Nginx
- PHP (php has a lot of interesting tricks that could be exploited)
- Python
- Spring Actuators
- Symphony
- Tomcat
- VMWare
- Web API Pentesting
- WebDav
- Werkzeug
- Wordpress
- Electron Desktop (XSS to RCE)
- Sitecore
- Zabbix
Take into account that the same domain can be using different technologies in different ports, folders and subdomains.
If the web application is using any well known tech/platform listed before or any other, don't forget to search on the Internet for new tricks (and let me know!).
Source Code Review
If the source code of the application is available on github, apart from performing your own White box test of the application, there is some information that could be useful for the current Black-Box testing:
- Is there a Change-log, Readme or Version file or anything with version info accessible via web?
- How and where are the credentials saved? Is there any (accessible?) file with credentials (usernames or passwords)?
- Are passwords in plain text, encrypted, or which hashing algorithm is used?
- Is it using any master key for encrypting something? Which algorithm is used?
- Can you access any of these files by exploiting some vulnerability?
- Is there any interesting information in the github issues (solved and not solved)? Or in the commit history (maybe some password introduced inside an old commit)?
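If you have cloned the repo locally, a rough sketch of hunting for secrets in the commit history (the keywords and repo URL are placeholders; the trufflehog invocation assumes v3):

```bash
# Grep the whole history, including deleted lines, for credential-looking strings
git log -p | grep -iE 'password|passwd|secret|api[_-]?key' | head -n 50

# Or scan the full history with a dedicated secret scanner
trufflehog git https://github.com/org/repo.git
```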
Source code Review / SAST Tools
Automatic scanners
General purpose automatic scanners
nikto -h <URL>
whatweb -a 4 <URL>
wapiti -u <URL>
W3af
zaproxy #You can use an API
nuclei -ut && nuclei -target <URL>
# https://github.com/ignis-sec/puff (client side vulns fuzzer)
node puff.js -w ./wordlist-examples/xss.txt -u "http://www.xssgame.com/f/m4KKGHi2rVUN/?query=FUZZ"
CMS scanners
If a CMS is used, don't forget to run a scanner, maybe something juicy is found:
Clusterd: JBoss, ColdFusion, WebLogic, Tomcat, Railo, Axis2, Glassfish
CMSScan: scans WordPress, Drupal, Joomla and vBulletin websites for security issues. (GUI)
VulnX: Joomla, Wordpress, Drupal, PrestaShop, Opencart
CMSMap: (W)ordpress, (J)oomla, (D)rupal ili (M)oodle
droopescan: Drupal, Joomla, Moodle, Silverstripe, Wordpress
cmsmap [-f W] -F -d <URL>
wpscan --force update -e --url <URL>
joomscan --ec -u <URL>
joomlavs.rb #https://github.com/rastating/joomlavs
At this point you should already have some information about the web server being used by the client (if any data was given) and some tricks to keep in mind during the test. If you are lucky you have even found a CMS and run some scanner.
Step-by-step Web Application Discovery
From this point we are going to start interacting with the web application.
Initial checks
Default pages with interesting info:
- /robots.txt
- /sitemap.xml
- /crossdomain.xml
- /clientaccesspolicy.xml
- /.well-known/
- Check also comments in the main and secondary pages.
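A quick sketch to pull these default pages in one go (domain.com is a placeholder; security.txt is one common example under /.well-known/):

```bash
for p in robots.txt sitemap.xml crossdomain.xml clientaccesspolicy.xml .well-known/security.txt; do
  echo "== /$p =="
  curl -sk "https://domain.com/$p"
done
```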
Forcing errors
Web servers may behave unexpectedly when weird data is sent to them. This may open vulnerabilities or disclose sensitive information.
- Access fake pages like /whatever_fake.php (.aspx, .html, etc.)
- Add "[]", "]]", and "[[" in cookie values and parameter values to create errors
- Generate an error by giving input such as /~randomthing/%s at the end of the URL
- Try different HTTP verbs like PATCH, DEBUG or a wrong one like FAKE
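A sketch of these error-forcing probes with curl (the target URL is a placeholder); watch for verbose error pages or stack traces:

```bash
curl -sk -o /dev/null -w '%{http_code}\n' https://domain.com/whatever_fake.php
curl -sk -o /dev/null -w '%{http_code}\n' -X FAKE https://domain.com/
curl -sk -o /dev/null -w '%{http_code}\n' 'https://domain.com/~randomthing/%25s'
```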
Check if you can upload files (PUT verb, WebDav)
If you find that WebDav is enabled but you don't have enough permissions for uploading files in the root folder try to:
- Brute Force credentials
- Upload files via WebDav to the rest of found folders inside the web page. You may have permissions to upload files in other folders.
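A minimal sketch for testing uploads (the path and content are placeholders; davtest automates the extension checks):

```bash
# Try a simple PUT upload and check the response code
curl -sk -X PUT -d 'poc' https://domain.com/somefolder/poc.txt -o /dev/null -w '%{http_code}\n'
# davtest tries to upload (and execute) many extensions via WebDav
davtest -url https://domain.com/somefolder/
```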
SSL/TLS vulnerabilities
- If the application isn't forcing the use of HTTPS in any part, then it's vulnerable to MitM.
- If the application is sending sensitive data (passwords) using HTTP, then it's a high vulnerability.
Use testssl.sh to check for vulnerabilities (in Bug Bounty programs these kinds of vulnerabilities probably won't be accepted) and use a2sv to recheck the vulnerabilities:
./testssl.sh [--htmlfile] 10.10.10.10:443
#Use the --htmlfile to save the output inside an htmlfile also
# You can also use other tools, but testssl.sh at this moment is the best one (I think)
sslscan <host:port>
sslyze --regular <ip:port>
Information about SSL/TLS vulnerabilities:
- https://www.gracefulsecurity.com/tls-ssl-vulnerabilities/
- https://www.acunetix.com/blog/articles/tls-vulnerabilities-attacks-final-part/
Spidering
Launch some kind of spider inside the web. The goal of the spider is to find as many paths as possible from the tested application. Therefore, web crawling and external sources should be used to find as many valid paths as possible.
- gospider (go): HTML spider, LinkFinder in JS files and external sources (Archive.org, CommonCrawl.org, VirusTotal.com).
- hakrawler (go): HTML spider, with LinkFinder for JS files and Archive.org as external source.
- dirhunt (python): HTML spider, also indicates "juicy files".
- evine (go): Interactive CLI HTML spider. It also searches in Archive.org.
- meg (go): This tool isn't a spider but it can be useful. You can indicate a file with hosts and a file with paths and meg will fetch each path on each host and save the response.
- urlgrab (go): HTML spider with JS rendering capabilities. However, it looks unmaintained, the precompiled version is old and the current code does not compile.
- gau (go): HTML spider that uses external providers (wayback, otx, commoncrawl).
- ParamSpider: This script will find URLs with parameters and list them.
- galer (go): HTML spider with JS rendering capabilities.
- LinkFinder (python): HTML spider, with JS beautify capabilities, capable of searching for new paths in JS files. It could also be worth taking a look at JSScanner, which is a wrapper of LinkFinder.
- goLinkFinder (go): To extract endpoints both in HTML source and embedded javascript files. Useful for bug hunters, red teamers, infosec ninjas.
- JSParser (python2.7): A python 2.7 script using Tornado and JSBeautifier to parse relative URLs from JavaScript files. Useful for easily discovering AJAX requests. Looks unmaintained.
- relative-url-extractor (ruby): Given a file (HTML) it will extract URLs from it using nifty regular expressions to find and extract relative URLs from ugly (minified) files.
- JSFScan (bash, several tools): Gathers interesting information from JS files using several tools.
- subjs (go): Finds JS files.
- page-fetch (go): Loads a page in a headless browser and prints out all the URLs loaded to render the page.
- Feroxbuster (rust): Content discovery tool mixing several options of the previous tools.
- Javascript Parsing: A Burp extension to find paths and parameters in JS files.
- Sourcemapper: A tool that, given the .js.map URL, fetches the beautified JS code.
- xnLinkFinder: A tool to discover endpoints for a given target.
- waymore: Discover links from the wayback machine (also downloading the responses in the wayback and looking for more links).
- HTTPLoot (go): Crawls (even by filling in forms) and also finds sensitive info using specific regexes.
- SpiderSuite: Spider Suite is an advanced multi-feature GUI web security Crawler/Spider designed for cyber security professionals.
- jsluice (go): A Go package and command-line tool for extracting URLs, paths, secrets, and other interesting data from JavaScript source code.
- ParaForge: ParaForge is a simple Burp Suite extension to extract the parameters and endpoints from requests to create a custom wordlist for fuzzing and enumeration.
- katana (go): Awesome tool for this.
- Crawley (go): Prints every link it's able to find.
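As a sketch of combining an active crawler with passive sources (the target is a placeholder):

```bash
# Active crawl with katana, depth 3
katana -u https://domain.com -d 3 -o urls.txt
# Add passive URLs from wayback/otx/commoncrawl with gau
echo domain.com | gau >> urls.txt
sort -u urls.txt -o urls.txt
```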
Brute Force directories and files
Start brute-forcing from the root folder and be sure to brute-force all the directories found using this method and all the directories discovered by the Spidering (you can do this brute-forcing recursively and appending at the beginning of the used wordlist the names of the found directories).
Tools:
- Dirb / Dirbuster - Included in Kali, old (and slow) but functional. Allows self-signed certificates and recursive search. Too slow compared with the other options.
- Dirsearch (python): It doesn't allow self-signed certificates but allows recursive search.
- Gobuster (go): It allows self-signed certificates, it doesn't have recursive search.
- Feroxbuster - Fast, supports recursive search.
- wfuzz
wfuzz -w /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt https://domain.com/api/FUZZ
- ffuf - Fast:
ffuf -c -w /usr/share/wordlists/dirb/big.txt -u http://10.10.10.10/FUZZ
- uro (python): This isn't a spider but a tool that, given the list of found URLs, will delete "duplicated" URLs.
- Scavenger: Burp Extension to create a list of directories from the burp history of different pages
- TrashCompactor: Remove URLs with duplicated functionalities (based on js imports)
- Chamaleon: It uses wappalyzer to detect used technologies and select the wordlists to use.
Recommended dictionaries:
- https://github.com/carlospolop/Auto_Wordlists/blob/main/wordlists/bf_directories.txt
- Dirsearch included dictionary
- http://gist.github.com/jhaddix/b80ea67d85c13206125806f0828f4d10
- Assetnote wordlists
- https://github.com/danielmiessler/SecLists/tree/master/Discovery/Web-Content
- raft-large-directories-lowercase.txt
- directory-list-2.3-medium.txt
- RobotsDisallowed/top10000.txt
- https://github.com/random-robbie/bruteforce-lists
- https://github.com/google/fuzzing/tree/master/dictionaries
- https://github.com/six2dez/OneListForAll
- https://github.com/random-robbie/bruteforce-lists
- https://github.com/ayoubfathi/leaky-paths
- /usr/share/wordlists/dirb/common.txt
- /usr/share/wordlists/dirb/big.txt
- /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt
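A sketch of a recursive run with one of the dictionaries above (the target and extensions are placeholders; -k accepts self-signed certificates). Feroxbuster recurses into discovered directories by default, which matches the note below:

```bash
feroxbuster -u https://domain.com -w /usr/share/wordlists/dirb/big.txt -x php,bak -k
```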
Note that anytime a new directory is discovered during brute-forcing or spidering, it should be Brute-Forced.
What to check on each file found
- Broken link checker: Find broken links inside HTMLs that may be prone to takeovers
- File Backups: Once you have found all the files, look for backups of all the executable files (".php", ".aspx"...). Common variations for naming a backup are: file.ext~, #file.ext#, ~file.ext, file.ext.bak, file.ext.tmp, file.ext.old, file.bak, file.tmp and file.old. You can also use the tool bfac or backup-gen (a quick check loop is sketched after this list).
- Discover new parameters: You can use tools like Arjun, parameth, x8 and Param Miner to discover hidden parameters. If you can, you could try to search hidden parameters on each executable web file.
- Arjun all default wordlists: https://github.com/s0md3v/Arjun/tree/master/arjun/db
- Param-miner “params” : https://github.com/PortSwigger/param-miner/blob/master/resources/params
- Assetnote “parameters_top_1m”: https://wordlists.assetnote.io/
- nullenc0de “params.txt”: https://gist.github.com/nullenc0de/9cb36260207924f8e1787279a05eb773
- Comments: Check the comments of all the files, you can find credentials or hidden functionality.
- If you are playing CTF, a "common" trick is to hide information inside comments at the right of the page (using hundreds of spaces so you don't see the data if you open the source code with the browser). Another possibility is to use several new lines and hide information in a comment at the bottom of the web page.
- API keys: If you find any API key there is a guide that indicates how to use API keys of different platforms: keyhacks, zile, truffleHog, SecretFinder, RegHex, DumpsterDive, EarlyBird
- Google API keys: If you find any API key looking like AIzaSyA-qLheq6xjDiEIRisP_ujUseYLQCHUjik you can use the project gmapapiscanner to check which apis the key can access.
- S3 Buckets: While spidering look if any subdomain or any link is related with some S3 bucket. In that case, check the permissions of the bucket.
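A rough sketch of the backup-extension check mentioned above (the file name and URL are placeholders):

```bash
f="index.php"
for b in "$f~" "#$f#" "~$f" "$f.bak" "$f.tmp" "$f.old" "${f%.*}.bak" "${f%.*}.tmp" "${f%.*}.old"; do
  code=$(curl -sk -o /dev/null -w '%{http_code}' "https://domain.com/$b")
  [ "$code" = "200" ] && echo "Possible backup: /$b"
done
```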
Special findings
While performing the spidering and brute-forcing you could find interesting things that you have to notice.
Interesting files
- Look for links to other files inside the CSS files.
- If you find a .git file some information can be extracted
- If you find a .env file, information such as api keys, db passwords and other information can be found.
- If you find API endpoints you should also test them. These aren't files, but will probably "look like" them.
- JS files: In the spidering section several tools that can extract paths from JS files were mentioned. Also, it would be interesting to monitor each JS file found, as on some occasions a change may indicate that a potential vulnerability was introduced in the code. You could use for example JSMon.
- You should also check discovered JS files with RetireJS or JSHole to find if it's vulnerable.
- Javascript Deobfuscator and Unpacker: https://lelinhtinh.github.io/de4js/, https://www.dcode.fr/javascript-unobfuscator
- Javascript Beautifier: http://jsbeautifier.org/, http://jsnice.org/
- JsFuck deobfuscation (javascript with chars:"[]!+" https://enkhee-osiris.github.io/Decoder-JSFuck/)
- TrainFuck (https://github.com/taco-c/trainfuck): +72.+29.+7..+3.-67.-12.+55.+24.+3.-6.-8.-67.-23.
- On several occasions you will need to understand the regular expressions used. This will be useful: https://regex101.com/ or https://pythonium.net/regex
- You could also monitor the files where forms were detected, as a change in a parameter or the appearance of a new form may indicate a potential new vulnerable functionality.
403 Forbidden/Basic Authentication/401 Unauthorized (bypass)
502 Proxy Error
If any page responds with that code, it's probably a badly configured proxy. If you send an HTTP request like GET https://google.com HTTP/1.1 (with the Host header and other common headers), the proxy will try to access google.com and you will have found an SSRF.
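A sketch of that probe with a raw request (the proxy host/port are placeholders):

```bash
printf 'GET https://google.com/ HTTP/1.1\r\nHost: google.com\r\nConnection: close\r\n\r\n' | nc domain.com 80
```

If the response contains google.com content, the proxy forwarded the request and you have an SSRF primitive.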
NTLM Authentication - Info disclosure
If the running server asking for authentication is Windows or you find a login asking for your credentials (and asking for a domain name), you can provoke an information disclosure.
Send the header: “Authorization: NTLM TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA=” and due to how the NTLM authentication works, the server will respond with internal info (IIS version, Windows version...) inside the header "WWW-Authenticate".
You can automate this using the nmap plugin "http-ntlm-info.nse".
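A manual sketch of the same check (the target is a placeholder):

```bash
# Send the NTLM negotiate blob and read the challenge in WWW-Authenticate
curl -sI -H 'Authorization: NTLM TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA=' http://domain.com/ | grep -i www-authenticate
# Or with nmap
nmap -p 80 --script http-ntlm-info domain.com
```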
HTTP Redirect (CTF)
It is possible to put content inside a Redirection. This content won't be shown to the user (as the browser will execute the redirection) but something could be hidden in there.
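Since curl doesn't follow redirects by default, a quick sketch to read the hidden body (the URL is a placeholder):

```bash
curl -sk -D - https://domain.com/redirecting-page   # prints the 30x headers plus any body content
```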
Web Vulnerabilities Checking
Now that a comprehensive enumeration of the web application has been performed it's time to check for a lot of possible vulnerabilities. You can find the checklist here:
Web Vulnerabilities Methodology
Find more info about web vulns in:
- https://six2dez.gitbook.io/pentest-book/others/web-checklist
- https://kennel209.gitbooks.io/owasp-testing-guide-v4/content/en/web_application_security_testing/configuration_and_deployment_management_testing.html
- https://owasp-skf.gitbook.io/asvs-write-ups/kbid-111-client-side-template-injection
Monitor Pages for changes
You can use tools such as https://github.com/dgtlmoon/changedetection.io to monitor pages for modifications that might insert vulnerabilities.
HackTricks Automatic Commands
Protocol_Name: Web #Protocol Abbreviation if there is one.
Port_Number: 80,443 #Comma separated if there is more than one.
Protocol_Description: Web #Protocol Abbreviation Spelled out
Entry_1:
Name: Notes
Description: Notes for Web
Note: |
https://book.hacktricks.wiki/en/network-services-pentesting/pentesting-web/index.html
Entry_2:
Name: Quick Web Scan
Description: Nikto and GoBuster
Command: nikto -host {Web_Proto}://{IP}:{Web_Port} && gobuster dir -w {Small_Dirlist} -u {Web_Proto}://{IP}:{Web_Port} && gobuster dir -w {Big_Dirlist} -u {Web_Proto}://{IP}:{Web_Port}
Entry_3:
Name: Nikto
Description: Basic Site Info via Nikto
Command: nikto -host {Web_Proto}://{IP}:{Web_Port}
Entry_4:
Name: WhatWeb
Description: General purpose auto scanner
Command: whatweb -a 4 {IP}
Entry_5:
Name: Directory Brute Force Non-Recursive
Description: Non-Recursive Directory Brute Force
Command: gobuster dir -w {Big_Dirlist} -u {Web_Proto}://{IP}:{Web_Port}
Entry_6:
Name: Directory Brute Force Recursive
Description: Recursive Directory Brute Force
Command: python3 {Tool_Dir}dirsearch/dirsearch.py -w {Small_Dirlist} -e php,exe,sh,py,html,pl -f -t 20 -u {Web_Proto}://{IP}:{Web_Port} -r 10
Entry_7:
Name: Directory Brute Force CGI
Description: Common Gateway Interface Brute Force
Command: gobuster dir -u {Web_Proto}://{IP}:{Web_Port}/ -w /usr/share/seclists/Discovery/Web-Content/CGIs.txt -s 200
Entry_8:
Name: Nmap Web Vuln Scan
Description: Tailored Nmap Scan for web Vulnerabilities
Command: nmap -vv --reason -Pn -sV -p {Web_Port} --script=`banner,(http* or ssl*) and not (brute or broadcast or dos or external or http-slowloris* or fuzzer)` {IP}
Entry_9:
Name: Drupal
Description: Drupal Enumeration Notes
Note: |
git clone https://github.com/immunIT/drupwn.git for low hanging fruit and git clone https://github.com/droope/droopescan.git for deeper enumeration
Entry_10:
Name: WordPress
Description: WordPress Enumeration with WPScan
Command: |
?What is the location of the wp-login.php? Example: /Yeet/cannon/wp-login.php
wpscan --url {Web_Proto}://{IP}{1} --enumerate ap,at,cb,dbe && wpscan --url {Web_Proto}://{IP}{1} --enumerate u,tt,t,vp --passwords {Big_Passwordlist} -e
Entry_11:
Name: WordPress Hydra Brute Force
Description: Need User (admin is default)
Command: hydra -l admin -P {Big_Passwordlist} {IP} -V http-form-post '/wp-login.php:log=^USER^&pwd=^PASS^&wp-submit=Log In&testcookie=1:S=Location'
Entry_12:
Name: Ffuf Vhost
Description: Simple Scan with Ffuf for discovering additional vhosts
Command: ffuf -w {Subdomain_List}:FUZZ -u {Web_Proto}://{Domain_Name} -H "Host:FUZZ.{Domain_Name}" -c -mc all {Ffuf_Filters}