# 2. Data Sampling
## Data Sampling
Data sampling is a crucial step in preparing data for training large language models (LLMs) like GPT. It involves organizing text data into the input and target sequences the model uses to learn how to predict the next word (or token) based on the preceding words. Proper data sampling ensures that the model effectively captures language patterns and dependencies.
> [!TIP]
> The goal of this second phase is very simple: **sample the input data and prepare it for the training phase, usually by separating the dataset into sentences of a specific length and also generating the expected responses.**
## Why Data Sampling Matters

LLMs such as GPT are trained to generate or predict text by understanding the context provided by previous words. To achieve this, the training data must be structured so that the model can learn the relationship between sequences of words and the words that follow them. This structured approach allows the model to generalize and produce coherent, contextually appropriate text.
## Key Concepts in Data Sampling

- **Tokenization:** Breaking text down into smaller units called tokens (e.g., words, subwords, or characters).
- **Sequence length (max_length):** The number of tokens in each input sequence.
- **Sliding window:** A method for creating overlapping input sequences by moving a window over the tokenized text.
- **Stride:** The number of tokens the sliding window moves forward to create the next sequence.
## Step-by-Step Example

Let's walk through an example to illustrate data sampling.

**Example text**

```
"Lorem ipsum dolor sit amet, consectetur adipiscing elit."
```
**Tokenization**

Assume we use a basic tokenizer that splits the text into words and punctuation marks:

```
Tokens: ["Lorem", "ipsum", "dolor", "sit", "amet,", "consectetur", "adipiscing", "elit."]
```
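For illustration, a minimal whitespace splitter produces exactly these tokens; this pattern is an assumption for the example, not the tokenizer used in the code later (which is GPT-2 BPE):

```python
import re

text = "Lorem ipsum dolor sit amet, consectetur adipiscing elit."
# Split on whitespace, keeping punctuation attached to the preceding word
tokens = re.findall(r"\S+", text)
print(tokens)
# ['Lorem', 'ipsum', 'dolor', 'sit', 'amet,', 'consectetur', 'adipiscing', 'elit.']
```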
**Parameters**

- **Maximum sequence length (max_length):** 4 tokens
- **Sliding window stride:** 1 token
**Creating Input and Target Sequences**

- **Sliding window approach:**
  - **Input sequences:** each input sequence consists of `max_length` tokens.
  - **Target sequences:** each target sequence consists of the tokens that immediately follow the corresponding input sequence.
- **Sequence generation:**
| Window Position | Input Sequence | Target Sequence |
|---|---|---|
| 1 | ["Lorem", "ipsum", "dolor", "sit"] | ["ipsum", "dolor", "sit", "amet,"] |
| 2 | ["ipsum", "dolor", "sit", "amet,"] | ["dolor", "sit", "amet,", "consectetur"] |
| 3 | ["dolor", "sit", "amet,", "consectetur"] | ["sit", "amet,", "consectetur", "adipiscing"] |
| 4 | ["sit", "amet,", "consectetur", "adipiscing"] | ["amet,", "consectetur", "adipiscing", "elit."] |
- **Resulting input and target arrays:**
  - **Inputs:**

```python
[
  ["Lorem", "ipsum", "dolor", "sit"],
  ["ipsum", "dolor", "sit", "amet,"],
  ["dolor", "sit", "amet,", "consectetur"],
  ["sit", "amet,", "consectetur", "adipiscing"],
]
```

  - **Targets:**

```python
[
  ["ipsum", "dolor", "sit", "amet,"],
  ["dolor", "sit", "amet,", "consectetur"],
  ["sit", "amet,", "consectetur", "adipiscing"],
  ["amet,", "consectetur", "adipiscing", "elit."],
]
```
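A few lines of plain Python reproduce the table above; the `stride` parameter (discussed in the next subsection) controls how far the window advances each step. The function name is illustrative:

```python
def sliding_windows(tokens, max_length, stride=1):
    """Generate (input, target) pairs; each target is its input shifted one token ahead."""
    pairs = []
    for i in range(0, len(tokens) - max_length, stride):
        pairs.append((tokens[i:i + max_length], tokens[i + 1:i + max_length + 1]))
    return pairs

tokens = ["Lorem", "ipsum", "dolor", "sit", "amet,", "consectetur", "adipiscing", "elit."]
for inp, tgt in sliding_windows(tokens, max_length=4, stride=1):
    print(inp, "->", tgt)  # prints the four rows of the table above
```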
**Visual Representation**

| Token Position | Token |
|---|---|
| 1 | Lorem |
| 2 | ipsum |
| 3 | dolor |
| 4 | sit |
| 5 | amet, |
| 6 | consectetur |
| 7 | adipiscing |
| 8 | elit. |
**Sliding window with stride 1:**

- **First window (positions 1-4):** ["Lorem", "ipsum", "dolor", "sit"] → Target: ["ipsum", "dolor", "sit", "amet,"]
- **Second window (positions 2-5):** ["ipsum", "dolor", "sit", "amet,"] → Target: ["dolor", "sit", "amet,", "consectetur"]
- **Third window (positions 3-6):** ["dolor", "sit", "amet,", "consectetur"] → Target: ["sit", "amet,", "consectetur", "adipiscing"]
- **Fourth window (positions 4-7):** ["sit", "amet,", "consectetur", "adipiscing"] → Target: ["amet,", "consectetur", "adipiscing", "elit."]
**Understanding Stride**

- **Stride 1:** The window moves forward one token each time, producing heavily overlapping sequences. This can lead to better learning of contextual relationships, but it increases the risk of overfitting because similar data points are repeated.
- **Stride 2:** The window moves forward two tokens each time, reducing the overlap. This decreases redundancy and computational load, but some contextual nuances may be missed.
- **Stride equal to max_length:** The window moves forward by the entire window size, producing non-overlapping sequences. This minimizes data redundancy but may limit the model's ability to learn dependencies across sequences.
**Example with stride 2:**

Using the same tokenized text and a `max_length` of 4:

- **First window (positions 1-4):** ["Lorem", "ipsum", "dolor", "sit"] → Target: ["ipsum", "dolor", "sit", "amet,"]
- **Second window (positions 3-6):** ["dolor", "sit", "amet,", "consectetur"] → Target: ["sit", "amet,", "consectetur", "adipiscing"]
- **Third window (positions 5-8):** ["amet,", "consectetur", "adipiscing", "elit."] → Target: ["consectetur", "adipiscing", "elit.", "sed"] (assuming the text continues)
## Code Example

Let's understand this better with a code example from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb:
```python
# Download the text to pre-train the LLM
import urllib.request
url = ("https://raw.githubusercontent.com/rasbt/LLMs-from-scratch/main/ch02/01_main-chapter-code/the-verdict.txt")
file_path = "the-verdict.txt"
urllib.request.urlretrieve(url, file_path)

with open("the-verdict.txt", "r", encoding="utf-8") as f:
    raw_text = f.read()

"""
Create a class that will receive some params like tokenizer and text
and will prepare the input chunks and the target chunks to prepare
the LLM to learn which next token to generate
"""
import torch
from torch.utils.data import Dataset, DataLoader

class GPTDatasetV1(Dataset):
    def __init__(self, txt, tokenizer, max_length, stride):
        self.input_ids = []
        self.target_ids = []

        # Tokenize the entire text
        token_ids = tokenizer.encode(txt, allowed_special={"<|endoftext|>"})

        # Use a sliding window to chunk the book into overlapping sequences of max_length
        for i in range(0, len(token_ids) - max_length, stride):
            input_chunk = token_ids[i:i + max_length]
            target_chunk = token_ids[i + 1: i + max_length + 1]
            self.input_ids.append(torch.tensor(input_chunk))
            self.target_ids.append(torch.tensor(target_chunk))

    def __len__(self):
        return len(self.input_ids)

    def __getitem__(self, idx):
        return self.input_ids[idx], self.target_ids[idx]


"""
Create a data loader which given the text and some params will
prepare the inputs and targets with the previous class and
then create a torch DataLoader with the info
"""
import tiktoken

def create_dataloader_v1(txt, batch_size=4, max_length=256,
                         stride=128, shuffle=True, drop_last=True,
                         num_workers=0):
    # Initialize the tokenizer
    tokenizer = tiktoken.get_encoding("gpt2")

    # Create dataset
    dataset = GPTDatasetV1(txt, tokenizer, max_length, stride)

    # Create dataloader
    dataloader = DataLoader(
        dataset,
        batch_size=batch_size,
        shuffle=shuffle,
        drop_last=drop_last,
        num_workers=num_workers
    )

    return dataloader


"""
Finally, create the data loader with the params we want:
- The used text for training
- batch_size: The size of each batch
- max_length: The size of each entry on each batch
- stride: The sliding window (how many tokens should the next entry advance compared to the previous one). The smaller, the more overfitting; usually this equals max_length so the same tokens aren't repeated.
- shuffle: Re-order randomly
"""
dataloader = create_dataloader_v1(
    raw_text, batch_size=8, max_length=4, stride=1, shuffle=False
)

data_iter = iter(dataloader)
first_batch = next(data_iter)
print(first_batch)

# Note the batch_size of 8, the max_length of 4 and the stride of 1
[
# Input
tensor([[   40,   367,  2885,  1464],
        [  367,  2885,  1464,  1807],
        [ 2885,  1464,  1807,  3619],
        [ 1464,  1807,  3619,   402],
        [ 1807,  3619,   402,   271],
        [ 3619,   402,   271, 10899],
        [  402,   271, 10899,  2138],
        [  271, 10899,  2138,   257]]),
# Target
tensor([[  367,  2885,  1464,  1807],
        [ 2885,  1464,  1807,  3619],
        [ 1464,  1807,  3619,   402],
        [ 1807,  3619,   402,   271],
        [ 3619,   402,   271, 10899],
        [  402,   271, 10899,  2138],
        [  271, 10899,  2138,   257],
        [10899,  2138,   257,  7026]])
]

# With stride=4 this will be the result:
[
# Input
tensor([[   40,   367,  2885,  1464],
        [ 1807,  3619,   402,   271],
        [10899,  2138,   257,  7026],
        [15632,   438,  2016,   257],
        [  922,  5891,  1576,   438],
        [  568,   340,   373,   645],
        [ 1049,  5975,   284,   502],
        [  284,  3285,   326,    11]]),
# Target
tensor([[  367,  2885,  1464,  1807],
        [ 3619,   402,   271, 10899],
        [ 2138,   257,  7026, 15632],
        [  438,  2016,   257,   922],
        [ 5891,  1576,   438,   568],
        [  340,   373,   645,  1049],
        [ 5975,   284,   502,   284],
        [ 3285,   326,    11,   287]])
]
```
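The number of windows produced follows directly from the loop in `GPTDatasetV1`: `range(0, len(token_ids) - max_length, stride)` yields about `(len(token_ids) - max_length) / stride` samples, so raising the stride from 1 to 4 cuts the number of (overlapping) samples roughly fourfold. A quick sanity check, reusing `raw_text` from the block above:

```python
import math
import tiktoken

def num_windows(n_tokens, max_length, stride):
    # Mirrors range(0, n_tokens - max_length, stride) in GPTDatasetV1
    return math.ceil((n_tokens - max_length) / stride)

n = len(tiktoken.get_encoding("gpt2").encode(raw_text))
print(num_windows(n, max_length=4, stride=1))  # one window per starting position
print(num_windows(n, max_length=4, stride=4))  # ~4x fewer, non-overlapping windows
```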
## Advanced Sampling Strategies (2023-2025)

### 1. Temperature-Based Mixture Weighting

State-of-the-art LLMs are rarely trained on a single corpus. Instead, they sample from several heterogeneous data sources (code, web, academic papers, forums, etc.), and the relative proportion of each source can strongly affect downstream performance. Recent open-source models such as Llama 2 adopted a temperature-based sampling scheme in which the probability of drawing a document from corpus *i* becomes:
$$p(i) = \frac{w_i^{\alpha}}{\sum_j w_j^{\alpha}}$$
* *w_i* – the raw token proportion of corpus *i*
* *α* ("temperature") – a value in (0, 1]; α < 1 flattens the distribution, giving more weight to smaller high-quality corpora.

Llama 2 used α = 0.7 and showed that decreasing α raises evaluation scores on knowledge-heavy tasks while keeping the training mix stable. The same trick was adopted by Mistral (2023) and Claude 3.
```python
import random
from collections import Counter

def temperature_sample(corpus_ids, alpha=0.7):
    counts = Counter(corpus_ids)                        # number of tokens seen per corpus
    probs = {c: n ** alpha for c, n in counts.items()}  # apply the temperature exponent
    Z = sum(probs.values())
    probs = {c: p / Z for c, p in probs.items()}        # normalize into a distribution
    # Draw one corpus according to probs (repeat to fill every batch)
    corpora, weights = zip(*probs.items())
    return random.choices(corpora, weights=weights, k=1)[0]
```
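To see the flattening effect numerically, consider a hypothetical two-corpus mix (90 % web tokens, 10 % academic tokens); α = 0.7 lifts the small corpus from 10 % to roughly 18 % of the sampling probability:

```python
web, academic = 9_000_000, 1_000_000  # hypothetical token counts per corpus
for alpha in (1.0, 0.7):
    pw, pa = web ** alpha, academic ** alpha
    Z = pw + pa
    print(f"alpha={alpha}: web={pw / Z:.3f}, academic={pa / Z:.3f}")
# alpha=1.0: web=0.900, academic=0.100
# alpha=0.7: web=0.823, academic=0.177
```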
### 2. Sequence Packing / Dynamic Batching
GPU memory is wasted when every sequence in a batch is padded to the longest example. "Packing" concatenates multiple shorter sequences until the **exact** `max_length` is reached and builds a parallel `attention_mask` so that tokens do not attend across segment boundaries. Packing can improve throughput by 20–40 % with no change to the gradients and is supported out of the box in, for example:

* Hugging Face TRL's `SFTTrainer(..., packing=True)`, backed by its `ConstantLengthDataset`
* Hugging Face's `DataCollatorForLanguageModeling(pad_to_multiple_of=…)` for padding-efficient batching

Dynamic batching systems (e.g. vLLM's continuous batching, 2024, combined with FlashAttention-2 kernels) pair sequence packing with just-in-time kernel selection, enabling long-context training at 400+ K tokens/s on A100-80G.
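A minimal packing sketch, assuming a `tiktoken` GPT-2 tokenizer with `<|endoftext|>` as the segment separator (a production version would also emit a block-diagonal attention mask so tokens cannot attend across the EOS boundaries):

```python
import tiktoken

def pack_sequences(texts, max_length):
    """Concatenate tokenized texts, EOS-separated, then cut into fixed-size blocks."""
    enc = tiktoken.get_encoding("gpt2")
    stream = []
    for t in texts:
        stream.extend(enc.encode(t))
        stream.append(enc.eot_token)  # <|endoftext|> marks the segment boundary
    # Split the flat token stream into exact max_length blocks (remainder dropped)
    n_blocks = len(stream) // max_length
    return [stream[i * max_length:(i + 1) * max_length] for i in range(n_blocks)]

blocks = pack_sequences(["short example one", "another somewhat longer example"], max_length=8)
print(blocks)  # every block holds exactly 8 token ids, with no padding
```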
### 3. Deduplication & Quality Filtering
Repeated passages cause memorization and provide an easy channel for data poisoning. Modern pipelines therefore:

1. Run MinHash/FAISS near-duplicate detection at the **document** and **128-gram** level.
2. Filter documents whose perplexity under a small reference model is > μ + 3σ (noisy OCR, garbled HTML).
3. Block-list documents that contain PII or CWE keywords using regex & spaCy NER.

The Llama 2 team deduplicated with 8-gram MinHash and removed ~15 % of CommonCrawl before sampling. OpenAI's 2024 "Deduplicate Everything" paper demonstrates that a duplicate ratio ≤ 0.04 reduces over-fitting and speeds convergence.
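A near-duplicate detection sketch using the `datasketch` library (an assumed choice for illustration; production pipelines typically run custom MinHash over n-gram shingles at much larger scale):

```python
from datasketch import MinHash, MinHashLSH

def minhash(doc: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for shingle in {doc[i:i + 8] for i in range(len(doc) - 7)}:  # 8-gram character shingles
        m.update(shingle.encode("utf-8"))
    return m

lsh = MinHashLSH(threshold=0.8, num_perm=128)  # Jaccard similarity cutoff
docs = {
    "d1": "the quick brown fox jumps over the lazy dog",
    "d2": "the quick brown fox jumped over the lazy dog",  # near-duplicate of d1
    "d3": "completely unrelated text about data sampling",
}
for key, text in docs.items():
    lsh.insert(key, minhash(text))
print(lsh.query(minhash(docs["d1"])))  # expected to return both "d1" and "d2"
```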
## Security & Privacy Considerations During Sampling
### Data-Poisoning / Backdoor Attacks
Researchers showed that inserting <1 % backdoored sentences can make a model obey a hidden trigger ("PoisonGPT", 2023). Recommended mitigations:

* **Shuffled mixing** – make sure adjacent training examples originate from different sources; this dilutes gradient alignment of malicious spans.
* **Gradient similarity scoring** – compute the cosine similarity of an example's gradient to the batch average; outliers are candidates for removal.
* **Dataset versioning & hashes** – freeze immutable tarballs and verify their SHA-256 before each training run (see the sketch below).
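A minimal sketch of the hash-verification step; the tarball name and pinned digest below are hypothetical placeholders:

```python
import hashlib

EXPECTED_SHA256 = "0123abcd..."  # hypothetical digest recorded when the dataset was frozen

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)  # stream the file so large tarballs fit in memory
    return h.hexdigest()

digest = sha256_of("corpus-v1.tar.gz")  # hypothetical frozen tarball
if digest != EXPECTED_SHA256:
    raise RuntimeError(f"Dataset hash mismatch - possible tampering: {digest}")
```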
### Membership-Inference & Memorization
Long overlap between sliding-window samples increases the chance that rare strings (telephone numbers, secret keys) are memorized. OpenAI's 2024 study on ChatGPT memorization reports that raising the stride from 1 × `max_length` to 4 × reduces verbatim leakage by ≈50 % with negligible loss in perplexity.
Practical recommendations:

* Use **stride ≥ max_length** except for <1B-parameter models where data volume is scarce.
* Add random masking of 1-3 tokens per window during training; this lowers memorization while preserving utility (a sketch follows this list).
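A minimal sketch of per-window random masking, assuming token windows are PyTorch tensors and that a dedicated `mask_token_id` exists in the vocabulary (both assumptions are illustrative):

```python
import torch

def mask_random_tokens(window: torch.Tensor, mask_token_id: int) -> torch.Tensor:
    """Replace 1-3 random positions in a token window with a mask token."""
    window = window.clone()                              # keep the original window intact
    n_mask = int(torch.randint(1, 4, (1,)).item())       # choose 1-3 positions to mask
    positions = torch.randperm(window.size(0))[:n_mask]  # distinct random positions
    window[positions] = mask_token_id
    return window

window = torch.tensor([40, 367, 2885, 1464])  # an input window from the code example above
print(mask_random_tokens(window, mask_token_id=50256))  # e.g. tensor([   40, 50256,  2885,  1464])
```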
---
## References
- [Build a Large Language Model from Scratch (Manning, 2024)](https://www.manning.com/books/build-a-large-language-model-from-scratch)
- [Llama 2: Open Foundation and Fine-Tuned Chat Models (2023)](https://arxiv.org/abs/2307.09288)
- [PoisonGPT: Assessing Backdoor Vulnerabilities in Large Language Models (BlackHat EU 2023)](https://arxiv.org/abs/2308.12364)