6. Pre-training & Loading models

Text Generation

In order to train a model, that model needs to be able to generate new tokens. Then we compare the generated tokens with the expected ones in order to train the model into learning the tokens it needs to generate.

As in the previous examples we already predicted some tokens, it's possible to reuse that function for this purpose.

tip

The goal of this sixth phase is very simple: Train the model from scratch. For this, the previous LLM architecture will be used with some loops going over the data sets, using the defined loss functions and optimizer to train all the parameters of the model.

Text Evaluation

In order to perform a correct training it's needed to measure the predictions obtained for the expected token. The goal of the training is to maximize the likelihood of the correct token, which involves increasing its probability relative to the other tokens.

In order to maximize the probability of the correct token, the weights of the model must be modified so that this probability is maximised. The weight updates are done via backpropagation. This requires a loss function to minimize; maximizing the probability of the correct token is equivalent to minimizing its negative log-probability, i.e. the gap between the performed prediction and the desired one.

However, instead of working with the raw probabilities, we work with their natural logarithm (base e). So if the current probability assigned to the expected token was 7.4541e-05, its natural logarithm is approximately -9.5042.
Then, for each entry with a context length of 5 tokens for example, the model will need to predict 5 tokens, being the first 4 tokens the last ones of the input and the 5th the predicted one. Therefore, for each entry we will have 5 predictions in that case (even if the first 4 were in the input, the model doesn't know this) with 5 expected tokens and therefore 5 probabilities to maximize.

Therefore, after applying the natural logarithm to each prediction, the average is computed and the minus sign is removed (this is called the cross entropy loss); that is the number to reduce as close to 0 as possible, because the natural logarithm of 1 is 0:

https://camo.githubusercontent.com/3c0ab9c55cefa10b667f1014b6c42df901fa330bb2bc9cea88885e784daec8ba/68747470733a2f2f73656261737469616e72617363686b612e636f6d2f696d616765732f4c4c4d732d66726f6d2d736372617463682d696d616765732f636830355f636f6d707265737365642f63726f73732d656e74726f70792e776562703f313233
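
As a minimal sketch (the probability values below are made up, not taken from the model above), this is how the cross entropy corresponds to the mean negative natural log of the probabilities assigned to the expected tokens:

python
import torch

# Made-up probabilities the model assigned to three expected tokens
probas_of_expected = torch.tensor([7.4541e-05, 3.1061e-05, 1.1563e-03])
log_probas = torch.log(probas_of_expected)   # first value is approx. -9.5042
avg_log_proba = torch.mean(log_probas)       # average of the (negative) logs
cross_entropy = -avg_log_proba               # remove the minus sign
print(cross_entropy)                         # value to push as close to 0 as possible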

Another way to measure the quality of the model is called perplexity. Perplexity is a metric used to evaluate how well a probability model predicts a sample. In language modelling, it represents the model's uncertainty when predicting the next token in a sequence.
For example, a perplexity value of 48725 means that, when it needs to predict a token, it's unsure about which among 48,725 tokens in the vocabulary is the right one.
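
Perplexity is just the exponential of the cross entropy loss, so it can be derived directly from it; a small sketch with an example loss value:

python
import torch

loss = torch.tensor(10.7940)   # example cross entropy value
perplexity = torch.exp(loss)
print(perplexity)              # ~48725 -> the model hesitates among ~48,725 vocabulary tokens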

Pre-Training example

This is the initial code proposed in https://github.com/rasbt/LLMs-from-scratch/blob/main/ch05/01_main-chapter-code/ch05.ipynb, sometimes slightly modified.

Previous code used here, but already explained in the previous sections
python
"""
This is code explained before so it won't be exaplained
"""

import tiktoken
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader


class GPTDatasetV1(Dataset):
    def __init__(self, txt, tokenizer, max_length, stride):
        self.input_ids = []
        self.target_ids = []

        # Tokenize the entire text
        token_ids = tokenizer.encode(txt, allowed_special={"<|endoftext|>"})

        # Use a sliding window to chunk the book into overlapping sequences of max_length
        for i in range(0, len(token_ids) - max_length, stride):
            input_chunk = token_ids[i:i + max_length]
            target_chunk = token_ids[i + 1: i + max_length + 1]
            self.input_ids.append(torch.tensor(input_chunk))
            self.target_ids.append(torch.tensor(target_chunk))

    def __len__(self):
        return len(self.input_ids)

    def __getitem__(self, idx):
        return self.input_ids[idx], self.target_ids[idx]


def create_dataloader_v1(txt, batch_size=4, max_length=256,
                         stride=128, shuffle=True, drop_last=True, num_workers=0):
    # Initialize the tokenizer
    tokenizer = tiktoken.get_encoding("gpt2")

    # Create dataset
    dataset = GPTDatasetV1(txt, tokenizer, max_length, stride)

    # Create dataloader
    dataloader = DataLoader(
        dataset, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last, num_workers=num_workers)

    return dataloader


class MultiHeadAttention(nn.Module):
    def __init__(self, d_in, d_out, context_length, dropout, num_heads, qkv_bias=False):
        super().__init__()
        assert d_out % num_heads == 0, "d_out must be divisible by n_heads"

        self.d_out = d_out
        self.num_heads = num_heads
        self.head_dim = d_out // num_heads  # Reduce the projection dim to match desired output dim

        self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.out_proj = nn.Linear(d_out, d_out)  # Linear layer to combine head outputs
        self.dropout = nn.Dropout(dropout)
        self.register_buffer('mask', torch.triu(torch.ones(context_length, context_length), diagonal=1))

    def forward(self, x):
        b, num_tokens, d_in = x.shape

        keys = self.W_key(x)  # Shape: (b, num_tokens, d_out)
        queries = self.W_query(x)
        values = self.W_value(x)

        # We implicitly split the matrix by adding a `num_heads` dimension
        # Unroll last dim: (b, num_tokens, d_out) -> (b, num_tokens, num_heads, head_dim)
        keys = keys.view(b, num_tokens, self.num_heads, self.head_dim)
        values = values.view(b, num_tokens, self.num_heads, self.head_dim)
        queries = queries.view(b, num_tokens, self.num_heads, self.head_dim)

        # Transpose: (b, num_tokens, num_heads, head_dim) -> (b, num_heads, num_tokens, head_dim)
        keys = keys.transpose(1, 2)
        queries = queries.transpose(1, 2)
        values = values.transpose(1, 2)

        # Compute scaled dot-product attention (aka self-attention) with a causal mask
        attn_scores = queries @ keys.transpose(2, 3)  # Dot product for each head

        # Original mask truncated to the number of tokens and converted to boolean
        mask_bool = self.mask.bool()[:num_tokens, :num_tokens]

        # Use the mask to fill attention scores
        attn_scores.masked_fill_(mask_bool, -torch.inf)

        attn_weights = torch.softmax(attn_scores / keys.shape[-1]**0.5, dim=-1)
        attn_weights = self.dropout(attn_weights)

        # Shape: (b, num_tokens, num_heads, head_dim)
        context_vec = (attn_weights @ values).transpose(1, 2)

        # Combine heads, where self.d_out = self.num_heads * self.head_dim
        context_vec = context_vec.reshape(b, num_tokens, self.d_out)
        context_vec = self.out_proj(context_vec)  # optional projection

        return context_vec


class LayerNorm(nn.Module):
    def __init__(self, emb_dim):
        super().__init__()
        self.eps = 1e-5
        self.scale = nn.Parameter(torch.ones(emb_dim))
        self.shift = nn.Parameter(torch.zeros(emb_dim))

    def forward(self, x):
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        norm_x = (x - mean) / torch.sqrt(var + self.eps)
        return self.scale * norm_x + self.shift


class GELU(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return 0.5 * x * (1 + torch.tanh(
            torch.sqrt(torch.tensor(2.0 / torch.pi)) *
            (x + 0.044715 * torch.pow(x, 3))
        ))


class FeedForward(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(cfg["emb_dim"], 4 * cfg["emb_dim"]),
            GELU(),
            nn.Linear(4 * cfg["emb_dim"], cfg["emb_dim"]),
        )

    def forward(self, x):
        return self.layers(x)


class TransformerBlock(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.att = MultiHeadAttention(
            d_in=cfg["emb_dim"],
            d_out=cfg["emb_dim"],
            context_length=cfg["context_length"],
            num_heads=cfg["n_heads"],
            dropout=cfg["drop_rate"],
            qkv_bias=cfg["qkv_bias"])
        self.ff = FeedForward(cfg)
        self.norm1 = LayerNorm(cfg["emb_dim"])
        self.norm2 = LayerNorm(cfg["emb_dim"])
        self.drop_shortcut = nn.Dropout(cfg["drop_rate"])

    def forward(self, x):
        # Shortcut connection for attention block
        shortcut = x
        x = self.norm1(x)
        x = self.att(x)   # Shape [batch_size, num_tokens, emb_size]
        x = self.drop_shortcut(x)
        x = x + shortcut  # Add the original input back

        # Shortcut connection for feed-forward block
        shortcut = x
        x = self.norm2(x)
        x = self.ff(x)
        x = self.drop_shortcut(x)
        x = x + shortcut  # Add the original input back

        return x


class GPTModel(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.tok_emb = nn.Embedding(cfg["vocab_size"], cfg["emb_dim"])
        self.pos_emb = nn.Embedding(cfg["context_length"], cfg["emb_dim"])
        self.drop_emb = nn.Dropout(cfg["drop_rate"])

        self.trf_blocks = nn.Sequential(
            *[TransformerBlock(cfg) for _ in range(cfg["n_layers"])])

        self.final_norm = LayerNorm(cfg["emb_dim"])
        self.out_head = nn.Linear(cfg["emb_dim"], cfg["vocab_size"], bias=False)

    def forward(self, in_idx):
        batch_size, seq_len = in_idx.shape
        tok_embeds = self.tok_emb(in_idx)
        pos_embeds = self.pos_emb(torch.arange(seq_len, device=in_idx.device))
        x = tok_embeds + pos_embeds  # Shape [batch_size, num_tokens, emb_size]
        x = self.drop_emb(x)
        x = self.trf_blocks(x)
        x = self.final_norm(x)
        logits = self.out_head(x)
        return logits
python
# Download contents to train the data with
import os
import urllib.request

file_path = "the-verdict.txt"
url = "https://raw.githubusercontent.com/rasbt/LLMs-from-scratch/main/ch02/01_main-chapter-code/the-verdict.txt"

if not os.path.exists(file_path):
    with urllib.request.urlopen(url) as response:
        text_data = response.read().decode('utf-8')
    with open(file_path, "w", encoding="utf-8") as file:
        file.write(text_data)
else:
    with open(file_path, "r", encoding="utf-8") as file:
        text_data = file.read()

total_characters = len(text_data)
tokenizer = tiktoken.get_encoding("gpt2")
total_tokens = len(tokenizer.encode(text_data))

print("Data downloaded")
print("Characters:", total_characters)
print("Tokens:", total_tokens)

# Model initialization
GPT_CONFIG_124M = {
"vocab_size": 50257,   # Vocabulary size
"context_length": 256, # Shortened context length (orig: 1024)
"emb_dim": 768,        # Embedding dimension
"n_heads": 12,         # Number of attention heads
"n_layers": 12,        # Number of layers
"drop_rate": 0.1,      # Dropout rate
"qkv_bias": False      # Query-key-value bias
}

torch.manual_seed(123)
model = GPTModel(GPT_CONFIG_124M)
model.eval()
print ("Model initialized")


# Functions to transform from tokens to ids and from to ids to tokens
def text_to_token_ids(text, tokenizer):
    encoded = tokenizer.encode(text, allowed_special={'<|endoftext|>'})
    encoded_tensor = torch.tensor(encoded).unsqueeze(0) # add batch dimension
    return encoded_tensor

def token_ids_to_text(token_ids, tokenizer):
    flat = token_ids.squeeze(0) # remove batch dimension
    return tokenizer.decode(flat.tolist())



# Define loss functions
def calc_loss_batch(input_batch, target_batch, model, device):
    input_batch, target_batch = input_batch.to(device), target_batch.to(device)
    logits = model(input_batch)
    loss = torch.nn.functional.cross_entropy(logits.flatten(0, 1), target_batch.flatten())
    return loss


def calc_loss_loader(data_loader, model, device, num_batches=None):
    total_loss = 0.
    if len(data_loader) == 0:
        return float("nan")
    elif num_batches is None:
        num_batches = len(data_loader)
    else:
        # Reduce the number of batches to match the total number of batches in the data loader
        # if num_batches exceeds the number of batches in the data loader
        num_batches = min(num_batches, len(data_loader))
    for i, (input_batch, target_batch) in enumerate(data_loader):
        if i < num_batches:
            loss = calc_loss_batch(input_batch, target_batch, model, device)
            total_loss += loss.item()
        else:
            break
    return total_loss / num_batches


# Apply Train/validation ratio and create dataloaders
train_ratio = 0.90
split_idx = int(train_ratio * len(text_data))
train_data = text_data[:split_idx]
val_data = text_data[split_idx:]

torch.manual_seed(123)

train_loader = create_dataloader_v1(
train_data,
batch_size=2,
max_length=GPT_CONFIG_124M["context_length"],
stride=GPT_CONFIG_124M["context_length"],
drop_last=True,
shuffle=True,
num_workers=0
)

val_loader = create_dataloader_v1(
val_data,
batch_size=2,
max_length=GPT_CONFIG_124M["context_length"],
stride=GPT_CONFIG_124M["context_length"],
drop_last=False,
shuffle=False,
num_workers=0
)


# Sanity checks
if total_tokens * (train_ratio) < GPT_CONFIG_124M["context_length"]:
    print("Not enough tokens for the training loader. "
          "Try to lower the `GPT_CONFIG_124M['context_length']` or "
          "increase the `training_ratio`")

if total_tokens * (1-train_ratio) < GPT_CONFIG_124M["context_length"]:
    print("Not enough tokens for the validation loader. "
          "Try to lower the `GPT_CONFIG_124M['context_length']` or "
          "decrease the `training_ratio`")

print("Train loader:")
for x, y in train_loader:
    print(x.shape, y.shape)

print("\nValidation loader:")
for x, y in val_loader:
    print(x.shape, y.shape)

train_tokens = 0
for input_batch, target_batch in train_loader:
    train_tokens += input_batch.numel()

val_tokens = 0
for input_batch, target_batch in val_loader:
    val_tokens += input_batch.numel()

print("Training tokens:", train_tokens)
print("Validation tokens:", val_tokens)
print("All tokens:", train_tokens + val_tokens)


# Indicate the device to use
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"Using {device} device.")

model.to(device) # no assignment model = model.to(device) necessary for nn.Module classes



# Pre-calculate losses without starting yet
torch.manual_seed(123) # For reproducibility due to the shuffling in the data loader

with torch.no_grad(): # Disable gradient tracking for efficiency because we are not training, yet
    train_loss = calc_loss_loader(train_loader, model, device)
    val_loss = calc_loss_loader(val_loader, model, device)

print("Training loss:", train_loss)
print("Validation loss:", val_loss)


# Functions to train the data
def train_model_simple(model, train_loader, val_loader, optimizer, device, num_epochs,
                       eval_freq, eval_iter, start_context, tokenizer):
    # Initialize lists to track losses and tokens seen
    train_losses, val_losses, track_tokens_seen = [], [], []
    tokens_seen, global_step = 0, -1

    # Main training loop
    for epoch in range(num_epochs):
        model.train()  # Set model to training mode

        for input_batch, target_batch in train_loader:
            optimizer.zero_grad() # Reset loss gradients from previous batch iteration
            loss = calc_loss_batch(input_batch, target_batch, model, device)
            loss.backward() # Calculate loss gradients
            optimizer.step() # Update model weights using loss gradients
            tokens_seen += input_batch.numel()
            global_step += 1

            # Optional evaluation step
            if global_step % eval_freq == 0:
                train_loss, val_loss = evaluate_model(
                    model, train_loader, val_loader, device, eval_iter)
                train_losses.append(train_loss)
                val_losses.append(val_loss)
                track_tokens_seen.append(tokens_seen)
                print(f"Ep {epoch+1} (Step {global_step:06d}): "
                      f"Train loss {train_loss:.3f}, Val loss {val_loss:.3f}")

        # Print a sample text after each epoch
        generate_and_print_sample(
            model, tokenizer, device, start_context
        )

    return train_losses, val_losses, track_tokens_seen


def evaluate_model(model, train_loader, val_loader, device, eval_iter):
    model.eval()
    with torch.no_grad():
        train_loss = calc_loss_loader(train_loader, model, device, num_batches=eval_iter)
        val_loss = calc_loss_loader(val_loader, model, device, num_batches=eval_iter)
    model.train()
    return train_loss, val_loss


def generate_and_print_sample(model, tokenizer, device, start_context):
    model.eval()
    context_size = model.pos_emb.weight.shape[0]
    encoded = text_to_token_ids(start_context, tokenizer).to(device)
    with torch.no_grad():
        token_ids = generate_text(
            model=model, idx=encoded,
            max_new_tokens=50, context_size=context_size
        )
    decoded_text = token_ids_to_text(token_ids, tokenizer)
    print(decoded_text.replace("\n", " "))  # Compact print format
    model.train()


# Start training!
import time
start_time = time.time()

torch.manual_seed(123)
model = GPTModel(GPT_CONFIG_124M)
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=0.0004, weight_decay=0.1)

num_epochs = 10
train_losses, val_losses, tokens_seen = train_model_simple(
model, train_loader, val_loader, optimizer, device,
num_epochs=num_epochs, eval_freq=5, eval_iter=5,
start_context="Every effort moves you", tokenizer=tokenizer
)

end_time = time.time()
execution_time_minutes = (end_time - start_time) / 60
print(f"Training completed in {execution_time_minutes:.2f} minutes.")



# Show graphics with the training process
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import math
def plot_losses(epochs_seen, tokens_seen, train_losses, val_losses):
    fig, ax1 = plt.subplots(figsize=(5, 3))
    ax1.plot(epochs_seen, train_losses, label="Training loss")
    ax1.plot(
        epochs_seen, val_losses, linestyle="-.", label="Validation loss"
    )
    ax1.set_xlabel("Epochs")
    ax1.set_ylabel("Loss")
    ax1.legend(loc="upper right")
    ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
    ax2 = ax1.twiny()
    ax2.plot(tokens_seen, train_losses, alpha=0)
    ax2.set_xlabel("Tokens seen")
    fig.tight_layout()
    plt.show()

# Compute perplexity from the loss values
train_ppls = [math.exp(loss) for loss in train_losses]
val_ppls = [math.exp(loss) for loss in val_losses]
# Plot perplexity over tokens seen
plt.figure()
plt.plot(tokens_seen, train_ppls, label='Training Perplexity')
plt.plot(tokens_seen, val_ppls, label='Validation Perplexity')
plt.xlabel('Tokens Seen')
plt.ylabel('Perplexity')
plt.title('Perplexity over Training')
plt.legend()
plt.show()

epochs_tensor = torch.linspace(0, num_epochs, len(train_losses))
plot_losses(epochs_tensor, tokens_seen, train_losses, val_losses)


torch.save({
"model_state_dict": model.state_dict(),
"optimizer_state_dict": optimizer.state_dict(),
},
"/tmp/model_and_optimizer.pth"
)

Let's see an explanation step by step

Functions to transform text <--> ids

These are some simple functions that can be used to transform texts from the vocabulary into ids and vice versa. This is needed at the beginning of the handling of the text and at the end of the predictions:

python
# Functions to transform from tokens to ids and from to ids to tokens
def text_to_token_ids(text, tokenizer):
    encoded = tokenizer.encode(text, allowed_special={'<|endoftext|>'})
    encoded_tensor = torch.tensor(encoded).unsqueeze(0) # add batch dimension
    return encoded_tensor

def token_ids_to_text(token_ids, tokenizer):
    flat = token_ids.squeeze(0) # remove batch dimension
    return tokenizer.decode(flat.tolist())

Generate text functions

In a previous section, a function just took the most probable token after obtaining the logits. However, this means that for each input the same output is always generated, which makes it very deterministic.

The following generate_text function applies the concepts of top-k, temperature and multinomial (a toy sketch of these three steps follows this list).

  • top-k means that we start reducing to -inf the logits of all tokens except the top k tokens. So, if k=3, before making a decision only the 3 most probable tokens will have a logit different from -inf.
  • temperature means that every logit will be divided by the temperature value. A value of 0.1 will boost the highest probability compared with the lowest one, while a temperature of 5, for example, will make the distribution flatter. This helps to add the variation in responses we would like the LLM to have.
  • After applying the temperature, a softmax function is applied again to make all the remaining tokens have a total probability of 1.
  • Finally, instead of choosing the token with the biggest probability, the function multinomial is applied to sample the next token according to the final probabilities. So if token 1 had 70% probability, token 2 20% and token 3 10%, then 70% of the time token 1 will be selected, 20% of the time token 2, and 10% of the time token 3.
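
A small standalone sketch of those three steps on made-up logits (the values are illustrative only, not part of the notebook code):

python
import torch

logits = torch.tensor([[4.0, 2.0, 1.0, 0.5, 0.1]])   # one batch entry, 5-token vocabulary

# top-k: keep only the 3 largest logits, push the rest to -inf
top_logits, _ = torch.topk(logits, k=3)
logits = torch.where(logits < top_logits[:, -1], torch.tensor(float("-inf")), logits)

# temperature: divide the logits before softmax (values < 1 sharpen, > 1 flatten)
temperature = 0.7
probs = torch.softmax(logits / temperature, dim=-1)   # only the top 3 tokens keep probability > 0

# multinomial: sample the next token according to the final probabilities
next_token = torch.multinomial(probs, num_samples=1)
print(probs, next_token)
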
python
# Generate text function
def generate_text(model, idx, max_new_tokens, context_size, temperature=0.0, top_k=None, eos_id=None):

    # For-loop is the same as before: Get logits, and only focus on last time step
    for _ in range(max_new_tokens):
        idx_cond = idx[:, -context_size:]
        with torch.no_grad():
            logits = model(idx_cond)
        logits = logits[:, -1, :]

        # New: Filter logits with top_k sampling
        if top_k is not None:
            # Keep only top_k values
            top_logits, _ = torch.topk(logits, top_k)
            min_val = top_logits[:, -1]
            logits = torch.where(logits < min_val, torch.tensor(float("-inf")).to(logits.device), logits)

        # New: Apply temperature scaling
        if temperature > 0.0:
            logits = logits / temperature

            # Apply softmax to get probabilities
            probs = torch.softmax(logits, dim=-1)  # (batch_size, context_len)

            # Sample from the distribution
            idx_next = torch.multinomial(probs, num_samples=1)  # (batch_size, 1)

        # Otherwise same as before: get idx of the vocab entry with the highest logits value
        else:
            idx_next = torch.argmax(logits, dim=-1, keepdim=True)  # (batch_size, 1)

        if idx_next == eos_id:  # Stop generating early if end-of-sequence token is encountered and eos_id is specified
            break

        # Same as before: append sampled index to the running sequence
        idx = torch.cat((idx, idx_next), dim=1)  # (batch_size, num_tokens+1)

    return idx

tip

There is a common alternative to top-k called top-p, also known as nucleus sampling, which, instead of taking the k samples with the highest probability, sorts the whole resulting vocabulary by probability and sums the probabilities from highest to lowest until a threshold is reached.

Then, only those words of the vocabulary will be considered, according to their relative probabilities.

This removes the need to pick a number k of samples (the optimal k might be different in each case); only a threshold is needed.

Note that this improvement isn't included in the previous code.
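
A minimal sketch of how such a top-p filter could look; top_p_filter is a hypothetical helper (not part of the code above) that would be applied to the logits before the softmax:

python
import torch

def top_p_filter(logits, top_p=0.9):
    sorted_logits, sorted_idx = torch.sort(logits, descending=True, dim=-1)
    probs = torch.softmax(sorted_logits, dim=-1)
    cumulative = torch.cumsum(probs, dim=-1)
    # Drop tokens once the cumulative probability (excluding the token itself)
    # already exceeds the threshold, so the most likely token is always kept
    remove = cumulative - probs > top_p
    sorted_logits[remove] = float("-inf")
    # Scatter the filtered logits back to their original vocabulary positions
    filtered = torch.full_like(logits, float("-inf"))
    filtered.scatter_(-1, sorted_idx, sorted_logits)
    return filtered

# Example: with a 0.9 threshold only the most likely tokens survive
print(top_p_filter(torch.tensor([[2.0, 1.0, 0.5, -1.0]]), top_p=0.9))
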

tip

Another way to improve the generated text is by using Beam search instead of the greedy search used in this example.
Unlike greedy search, which selects the most probable next word at each step and builds a single sequence, beam search keeps track of the top k highest-scoring partial sequences (called "beams") at each step. By exploring multiple possibilities simultaneously, it balances efficiency and quality, increasing the chances of finding a better overall sequence that might be missed by the greedy approach due to early, suboptimal choices.

Note that this improvement isn't included in the previous code.
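
A hedged sketch of what a minimal beam search could look like for the GPTModel above (generate_beam_search is a hypothetical helper, written for a batch size of 1):

python
import torch

def generate_beam_search(model, idx, max_new_tokens, context_size, beam_width=3):
    # Each beam is a (sequence, cumulative log-probability) pair; start from the prompt
    beams = [(idx, 0.0)]
    for _ in range(max_new_tokens):
        candidates = []
        for seq, score in beams:
            idx_cond = seq[:, -context_size:]
            with torch.no_grad():
                logits = model(idx_cond)[:, -1, :]
            log_probs = torch.log_softmax(logits, dim=-1)
            top_log_probs, top_ids = torch.topk(log_probs, beam_width)
            for lp, tok in zip(top_log_probs[0], top_ids[0]):
                new_seq = torch.cat((seq, tok.view(1, 1)), dim=1)
                candidates.append((new_seq, score + lp.item()))
        # Keep only the beam_width best-scoring partial sequences
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]  # highest-scoring full sequence
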

Loss functions

The calc_loss_batch function calculates the cross entropy of the prediction of a single batch.
The calc_loss_loader function gets the cross entropy of all the batches and calculates the average cross entropy.

python
# Define loss functions
def calc_loss_batch(input_batch, target_batch, model, device):
    input_batch, target_batch = input_batch.to(device), target_batch.to(device)
    logits = model(input_batch)
    loss = torch.nn.functional.cross_entropy(logits.flatten(0, 1), target_batch.flatten())
    return loss

def calc_loss_loader(data_loader, model, device, num_batches=None):
    total_loss = 0.
    if len(data_loader) == 0:
        return float("nan")
    elif num_batches is None:
        num_batches = len(data_loader)
    else:
        # Reduce the number of batches to match the total number of batches in the data loader
        # if num_batches exceeds the number of batches in the data loader
        num_batches = min(num_batches, len(data_loader))
    for i, (input_batch, target_batch) in enumerate(data_loader):
        if i < num_batches:
            loss = calc_loss_batch(input_batch, target_batch, model, device)
            total_loss += loss.item()
        else:
            break
    return total_loss / num_batches

tip

Gradient clipping is a technique used to enhance training stability in large neural networks by setting a maximum threshold for the gradient magnitudes. When gradients exceed this predefined max_norm, they are scaled down proportionally to ensure that the updates to the model's parameters stay within a manageable range, preventing issues like exploding gradients and ensuring a more controlled and stable training.

Note that this improvement isn't included in the previous code.

Check the following example:
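
(A sketch of how clipping could be slotted into the inner loop of train_model_simple, using PyTorch's built-in clip_grad_norm_; this is not part of the code above.)

python
# Inside the inner loop of train_model_simple, between loss.backward() and optimizer.step():
optimizer.zero_grad()
loss = calc_loss_batch(input_batch, target_batch, model, device)
loss.backward()
# Rescale gradients so their global L2 norm never exceeds max_norm=1.0
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()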

Loading Data

The functions GPTDatasetV1 and create_dataloader_v1 were already discussed in a previous section.

From here, note how it's defined that 90% of the text will be used for training while 10% will be used for validation, and both sets are stored in 2 different data loaders.
Note that sometimes part of the data set is also left as a test set in order to better evaluate the performance of the model.

Both data loaders use the same batch size, maximum length, stride and number of workers (0 in this case).
The main differences are the data used by each one, and the fact that the validation loader doesn't drop the last batch nor shuffles the data, as that isn't needed for validation purposes.

Also, the fact that the stride is as big as the context length means that there won't be any overlap between the contexts used to train the data (this reduces overfitting, but also shrinks the training data set); see the tiny illustration below.

Moreover, note that the batch size in this case is 2 in order to split the data into 2 batches, the main goal of this being to allow parallel processing and to reduce the consumption per batch.
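
A tiny sketch with toy token ids (not from the notebook) showing the effect of the stride on the sliding windows:

python
# Same sliding-window logic as GPTDatasetV1, on 10 fake token ids
token_ids = list(range(10))
max_length, stride = 4, 4
print([token_ids[i:i + max_length] for i in range(0, len(token_ids) - max_length, stride)])
# [[0, 1, 2, 3], [4, 5, 6, 7]]                              <- stride == max_length: no overlap
stride = 2
print([token_ids[i:i + max_length] for i in range(0, len(token_ids) - max_length, stride)])
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7]]                <- smaller stride: overlapping contexts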

python
train_ratio = 0.90
split_idx = int(train_ratio * len(text_data))
train_data = text_data[:split_idx]
val_data = text_data[split_idx:]

torch.manual_seed(123)

train_loader = create_dataloader_v1(
train_data,
batch_size=2,
max_length=GPT_CONFIG_124M["context_length"],
stride=GPT_CONFIG_124M["context_length"],
drop_last=True,
shuffle=True,
num_workers=0
)

val_loader = create_dataloader_v1(
val_data,
batch_size=2,
max_length=GPT_CONFIG_124M["context_length"],
stride=GPT_CONFIG_124M["context_length"],
drop_last=False,
shuffle=False,
num_workers=0
)

Sanity checks

The goal is to check that there are enough tokens for training, that the shapes are the expected ones, and to get some info about the number of tokens used for training and for validation:

python
# Sanity checks
if total_tokens * (train_ratio) < GPT_CONFIG_124M["context_length"]:
    print("Not enough tokens for the training loader. "
          "Try to lower the `GPT_CONFIG_124M['context_length']` or "
          "increase the `training_ratio`")

if total_tokens * (1-train_ratio) < GPT_CONFIG_124M["context_length"]:
    print("Not enough tokens for the validation loader. "
          "Try to lower the `GPT_CONFIG_124M['context_length']` or "
          "decrease the `training_ratio`")

print("Train loader:")
for x, y in train_loader:
    print(x.shape, y.shape)

print("\nValidation loader:")
for x, y in val_loader:
    print(x.shape, y.shape)

train_tokens = 0
for input_batch, target_batch in train_loader:
    train_tokens += input_batch.numel()

val_tokens = 0
for input_batch, target_batch in val_loader:
    val_tokens += input_batch.numel()

print("Training tokens:", train_tokens)
print("Validation tokens:", val_tokens)
print("All tokens:", train_tokens + val_tokens)

Select device for training & pre-calculations

The following code just selects the device to use and calculates the training loss and the validation loss (without having trained anything yet) as a starting point.

python
# Indicate the device to use

if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"Using {device} device.")

model.to(device) # no assignment model = model.to(device) necessary for nn.Module classes

# Pre-calculate losses without starting yet
torch.manual_seed(123) # For reproducibility due to the shuffling in the data loader

with torch.no_grad(): # Disable gradient tracking for efficiency because we are not training, yet
    train_loss = calc_loss_loader(train_loader, model, device)
    val_loss = calc_loss_loader(val_loader, model, device)

print("Training loss:", train_loss)
print("Validation loss:", val_loss)

Training functions

The function generate_and_print_sample will just take a context and generate some tokens in order to get a feeling of how good the model is at that point. This is called by train_model_simple at each step.

The function evaluate_model is called as frequently as indicated to the training function and it's used to measure the train loss and the validation loss at that point of the model training.

Then, the big function train_model_simple is the one that actually trains the model. It expects:

  • The train data loader (with the data already separated and prepared for training)
  • The validation loader
  • The optimizer to use during training: this is the function that will use the gradients and update the parameters to reduce the loss. In this case, as you will see, AdamW is used, but there are many more.
  • optimizer.zero_grad() is called to reset the gradients on each round so they don't accumulate.
  • The lr param is the learning rate, which determines the size of the steps taken during the optimization process when updating the model's parameters. A smaller learning rate means the optimizer makes smaller updates to the weights, which can lead to more precise convergence but might slow down training. A larger learning rate can speed up training but risks overshooting the minimum of the loss function (jumping over the point where the loss function is minimized).
  • Weight Decay modifies the loss calculation step by adding an extra term that penalizes large weights. This encourages the optimizer to find solutions with smaller weights, balancing between fitting the data well and keeping the model simple, and prevents overfitting in machine learning models by discouraging the model from assigning too much importance to any single feature.
  • Traditional optimizers like SGD with L2 regularization couple the weight decay with the gradient of the loss function. However, AdamW (a variant of the Adam optimizer) decouples the weight decay from the gradient update, leading to more effective regularization (see the short sketch after this list).
  • The device to use for training
  • The number of epochs: the number of times to go over the training data
  • The evaluation frequency: how often to call evaluate_model
  • The evaluation iterations: the number of batches to use when evaluating the current state of the model in evaluate_model
  • The start context: which sentence to use when calling generate_and_print_sample
  • The tokenizer
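
A short sketch contrasting the two regularization styles mentioned above (the learning-rate values are illustrative, and model refers to the GPTModel instance defined earlier):

python
# L2 regularization coupled to the gradient (classic SGD behaviour)
sgd_l2 = torch.optim.SGD(model.parameters(), lr=1e-3, weight_decay=0.1)

# Decoupled weight decay: the decay is applied directly to the weights,
# independently of the adaptive gradient update (AdamW)
adamw = torch.optim.AdamW(model.parameters(), lr=4e-4, weight_decay=0.1)
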
python
# Functions to train the data
def train_model_simple(model, train_loader, val_loader, optimizer, device, num_epochs,
                       eval_freq, eval_iter, start_context, tokenizer):
    # Initialize lists to track losses and tokens seen
    train_losses, val_losses, track_tokens_seen = [], [], []
    tokens_seen, global_step = 0, -1

    # Main training loop
    for epoch in range(num_epochs):
        model.train()  # Set model to training mode

        for input_batch, target_batch in train_loader:
            optimizer.zero_grad() # Reset loss gradients from previous batch iteration
            loss = calc_loss_batch(input_batch, target_batch, model, device)
            loss.backward() # Calculate loss gradients
            optimizer.step() # Update model weights using loss gradients
            tokens_seen += input_batch.numel()
            global_step += 1

            # Optional evaluation step
            if global_step % eval_freq == 0:
                train_loss, val_loss = evaluate_model(
                    model, train_loader, val_loader, device, eval_iter)
                train_losses.append(train_loss)
                val_losses.append(val_loss)
                track_tokens_seen.append(tokens_seen)
                print(f"Ep {epoch+1} (Step {global_step:06d}): "
                      f"Train loss {train_loss:.3f}, Val loss {val_loss:.3f}")

        # Print a sample text after each epoch
        generate_and_print_sample(
            model, tokenizer, device, start_context
        )

    return train_losses, val_losses, track_tokens_seen


def evaluate_model(model, train_loader, val_loader, device, eval_iter):
    model.eval() # Set in eval mode to avoid dropout
    with torch.no_grad():
        train_loss = calc_loss_loader(train_loader, model, device, num_batches=eval_iter)
        val_loss = calc_loss_loader(val_loader, model, device, num_batches=eval_iter)
    model.train() # Back to training model applying all the configurations
    return train_loss, val_loss


def generate_and_print_sample(model, tokenizer, device, start_context):
    model.eval() # Set in eval mode to avoid dropout
    context_size = model.pos_emb.weight.shape[0]
    encoded = text_to_token_ids(start_context, tokenizer).to(device)
    with torch.no_grad():
        token_ids = generate_text(
            model=model, idx=encoded,
            max_new_tokens=50, context_size=context_size
        )
    decoded_text = token_ids_to_text(token_ids, tokenizer)
    print(decoded_text.replace("\n", " "))  # Compact print format
    model.train() # Back to training model applying all the configurations

tip

To improve the learning rate there are a couple of relevant techniques called linear warmup and cosine decay.

Linear warmup consists of defining an initial learning rate and a maximum one, and consistently increasing the rate on each update step until the maximum is reached. This is because starting the training with smaller weight updates decreases the risk of the model encountering large, destabilizing updates during its training phase.
Cosine decay is a technique that gradually reduces the learning rate following a half-cosine curve after the warmup phase, slowing the weight updates to minimize the risk of overshooting the loss minima and to ensure training stability in the later phases.

Note that these improvements aren't included in the previous code.
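
A hedged sketch of such a schedule (the step counts and learning-rate values are made up; inside the training loop one would overwrite the optimizer's lr with the returned value):

python
import math

peak_lr, initial_lr, min_lr = 4e-4, 1e-5, 1e-6
total_steps, warmup_steps = 1000, 100

def lr_at(step):
    if step < warmup_steps:
        # Linear warmup: ramp from initial_lr up to peak_lr
        return initial_lr + (peak_lr - initial_lr) * step / warmup_steps
    # Cosine decay from peak_lr down to min_lr over the remaining steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + (peak_lr - min_lr) * 0.5 * (1 + math.cos(math.pi * progress))

# Inside the training loop, before optimizer.step(), one would do for example:
# for param_group in optimizer.param_groups:
#     param_group["lr"] = lr_at(global_step)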

Start training

python
import time
start_time = time.time()

torch.manual_seed(123)
model = GPTModel(GPT_CONFIG_124M)
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=0.0004, weight_decay=0.1)

num_epochs = 10
train_losses, val_losses, tokens_seen = train_model_simple(
model, train_loader, val_loader, optimizer, device,
num_epochs=num_epochs, eval_freq=5, eval_iter=5,
start_context="Every effort moves you", tokenizer=tokenizer
)

end_time = time.time()
execution_time_minutes = (end_time - start_time) / 60
print(f"Training completed in {execution_time_minutes:.2f} minutes.")

With the following function it's possible to plot the evolution of the model while it was being trained.

python
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import math
def plot_losses(epochs_seen, tokens_seen, train_losses, val_losses):
    fig, ax1 = plt.subplots(figsize=(5, 3))
    ax1.plot(epochs_seen, train_losses, label="Training loss")
    ax1.plot(
        epochs_seen, val_losses, linestyle="-.", label="Validation loss"
    )
    ax1.set_xlabel("Epochs")
    ax1.set_ylabel("Loss")
    ax1.legend(loc="upper right")
    ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
    ax2 = ax1.twiny()
    ax2.plot(tokens_seen, train_losses, alpha=0)
    ax2.set_xlabel("Tokens seen")
    fig.tight_layout()
    plt.show()

# Compute perplexity from the loss values
train_ppls = [math.exp(loss) for loss in train_losses]
val_ppls = [math.exp(loss) for loss in val_losses]
# Plot perplexity over tokens seen
plt.figure()
plt.plot(tokens_seen, train_ppls, label='Training Perplexity')
plt.plot(tokens_seen, val_ppls, label='Validation Perplexity')
plt.xlabel('Tokens Seen')
plt.ylabel('Perplexity')
plt.title('Perplexity over Training')
plt.legend()
plt.show()

epochs_tensor = torch.linspace(0, num_epochs, len(train_losses))
plot_losses(epochs_tensor, tokens_seen, train_losses, val_losses)

Save the model

It's possible to save the model + optimizer if you want to continue training it later:

python
# Save the model and the optimizer for later training
torch.save({
"model_state_dict": model.state_dict(),
"optimizer_state_dict": optimizer.state_dict(),
},
"/tmp/model_and_optimizer.pth"
)
# Note that this model with the optimizer occupied close to 2GB

# Restore model and optimizer for training
checkpoint = torch.load("/tmp/model_and_optimizer.pth", map_location=device)

model = GPTModel(GPT_CONFIG_124M)
model.load_state_dict(checkpoint["model_state_dict"])
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.1)
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
model.train(); # Put in training mode

Or just the model if you are only planning on using it:

python
# Save the model
torch.save(model.state_dict(), "model.pth")

# Load it
model = GPTModel(GPT_CONFIG_124M)

model.load_state_dict(torch.load("model.pth", map_location=device))

model.eval() # Put in eval mode

Loading GPT2 weights

There are 2 quick scripts to load the GPT2 weights locally. For both you can clone the repository https://github.com/rasbt/LLMs-from-scratch locally, then:
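
(The exact commands are not reproduced here; as an assumption based on that repository's chapter 5 code, the helper gpt_download.py and its download_and_load_gpt2 function can be used roughly like this:)

python
# Assumed helper from the cloned repo (ch05/01_main-chapter-code/gpt_download.py)
from gpt_download import download_and_load_gpt2

# Downloads the original OpenAI GPT-2 checkpoint and returns its hyperparameters
# and raw weights, which can then be mapped into the GPTModel defined above
settings, params = download_and_load_gpt2(model_size="124M", models_dir="gpt2")
print("Settings:", settings)
print("Params dictionary keys:", params.keys())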
