2. Data Sampling
Data Sampling
Data sampling is a crucial step in preparing data for training large language models (LLMs) such as GPT. It involves organizing text data into input and target sequences that the model uses to learn how to predict the next word (or token) from the words that precede it. Proper data sampling ensures that the model effectively captures language patterns and dependencies.
tip
The goal of this second phase is very simple: sample the input data and prepare it for the training phase, usually by splitting the dataset into sequences of a specific length and also generating the expected response.
Why Data Sampling Matters
LLMs such as GPT are trained to generate or predict text by understanding the context provided by the preceding words. To achieve this, the training data must be structured so that the model can learn the relationship between sequences of words and the words that follow them. This structured approach allows the model to generalize and produce coherent, contextually appropriate text.
Key Concepts in Data Sampling
- Tokenization: Splitting text into smaller units called tokens (e.g., words, subwords, or characters).
- Sequence Length (max_length): The number of tokens in each input sequence.
- Sliding Window: A method of creating overlapping input sequences by moving a window over the tokenized text.
- Stride: The number of tokens the sliding window advances to create the next sequence.
Step-by-Step Example
Let's walk through an example to illustrate data sampling.
Example Text
"Lorem ipsum dolor sit amet, consectetur adipiscing elit."
Tokenization
Assume we use a basic tokenizer that splits the text into words and punctuation marks:
Tokens: ["Lorem", "ipsum", "dolor", "sit", "amet,", "consectetur", "adipiscing", "elit."]
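For this toy example, a plain whitespace split already reproduces that token list (a minimal sketch; real LLM tokenizers such as BPE work on subwords instead):

```python
text = "Lorem ipsum dolor sit amet, consectetur adipiscing elit."

# Naive whitespace tokenizer: punctuation stays attached to the preceding word
tokens = text.split()
print(tokens)
# ['Lorem', 'ipsum', 'dolor', 'sit', 'amet,', 'consectetur', 'adipiscing', 'elit.']
```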
Parameters
- Max Sequence Length (max_length): 4 tokens
- Sliding Window Stride: 1 token
Creating Input and Target Sequences
- Sliding Window Approach:
  - Input Sequences: Each input sequence consists of max_length tokens.
  - Target Sequences: Each target sequence consists of the tokens that immediately follow the corresponding input sequence.
- Generating Sequences:
| Window Position | Input Sequence | Target Sequence |
|---|---|---|
| 1 | ["Lorem", "ipsum", "dolor", "sit"] | ["ipsum", "dolor", "sit", "amet,"] |
| 2 | ["ipsum", "dolor", "sit", "amet,"] | ["dolor", "sit", "amet,", "consectetur"] |
| 3 | ["dolor", "sit", "amet,", "consectetur"] | ["sit", "amet,", "consectetur", "adipiscing"] |
| 4 | ["sit", "amet,", "consectetur", "adipiscing"] | ["amet,", "consectetur", "adipiscing", "elit."] |
- Resulting Input and Target Arrays:
- Input:
[
["Lorem", "ipsum", "dolor", "sit"],
["ipsum", "dolor", "sit", "amet,"],
["dolor", "sit", "amet,", "consectetur"],
["sit", "amet,", "consectetur", "adipiscing"],
]
- Target:
[
["ipsum", "dolor", "sit", "amet,"],
["dolor", "sit", "amet,", "consectetur"],
["sit", "amet,", "consectetur", "adipiscing"],
["amet,", "consectetur", "adipiscing", "elit."],
]
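The input and target arrays above can be reproduced with a few lines of Python (a sketch using the token list from this example):

```python
tokens = ["Lorem", "ipsum", "dolor", "sit", "amet,", "consectetur", "adipiscing", "elit."]
max_length, stride = 4, 1

inputs, targets = [], []
# Slide a window of max_length tokens over the text; the target is the
# same window shifted one token to the right
for i in range(0, len(tokens) - max_length, stride):
    inputs.append(tokens[i:i + max_length])
    targets.append(tokens[i + 1:i + 1 + max_length])

print(inputs[0])   # ['Lorem', 'ipsum', 'dolor', 'sit']
print(targets[0])  # ['ipsum', 'dolor', 'sit', 'amet,']
```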
Visual Representation
| Token Position | Token |
|---|---|
| 1 | Lorem |
| 2 | ipsum |
| 3 | dolor |
| 4 | sit |
| 5 | amet, |
| 6 | consectetur |
| 7 | adipiscing |
| 8 | elit. |
Sliding Window with Stride 1:
- First Window (Positions 1-4): ["Lorem", "ipsum", "dolor", "sit"] → Target: ["ipsum", "dolor", "sit", "amet,"]
- Second Window (Positions 2-5): ["ipsum", "dolor", "sit", "amet,"] → Target: ["dolor", "sit", "amet,", "consectetur"]
- Third Window (Positions 3-6): ["dolor", "sit", "amet,", "consectetur"] → Target: ["sit", "amet,", "consectetur", "adipiscing"]
- Fourth Window (Positions 4-7): ["sit", "amet,", "consectetur", "adipiscing"] → Target: ["amet,", "consectetur", "adipiscing", "elit."]
Understanding Stride
- Stride of 1: The window moves forward by one token each time, producing heavily overlapping sequences. This can improve learning of contextual relationships but may increase the risk of overfitting, since near-duplicate data is repeated.
- Stride of 2: The window moves forward by two tokens each time, reducing overlap. This reduces redundancy and computational load but may miss some contextual nuances.
- Stride Equal to max_length: The window moves forward by the full window size, producing non-overlapping sequences. This minimizes data repetition but may limit the model's ability to learn dependencies across sequences.
Example with Stride of 2:
Using the same tokenized text and max_length of 4:
- First Window (Positions 1-4): ["Lorem", "ipsum", "dolor", "sit"] → Target: ["ipsum", "dolor", "sit", "amet,"]
- Second Window (Positions 3-6): ["dolor", "sit", "amet,", "consectetur"] → Target: ["sit", "amet,", "consectetur", "adipiscing"]
- Third Window (Positions 5-8): ["amet,", "consectetur", "adipiscing", "elit."] → Target: ["consectetur", "adipiscing", "elit.", "sed"] (Assuming continuation)
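The trade-off can be quantified: consecutive inputs share max(0, max_length − stride) tokens, and larger strides yield fewer sequences. A small sketch (the helper names are made up for illustration; note that the stride-2 example above only lists a third window by assuming the text continues past "elit."):

```python
def overlap(max_length, stride):
    # Tokens shared by two consecutive input sequences
    return max(0, max_length - stride)

def num_sequences(n_tokens, max_length, stride):
    # Windows for which both the input and the one-token-shifted target fit in the text
    usable = n_tokens - max_length
    return max(0, (usable - 1) // stride + 1)

for s in (1, 2, 4):
    print(f"stride={s}: overlap={overlap(4, s)}, sequences={num_sequences(8, 4, s)}")
```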
Code Example
Let's understand this better from a code example from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb:
```python
# Download the text to pre-train the LLM
import urllib.request

url = ("https://raw.githubusercontent.com/rasbt/LLMs-from-scratch/main/ch02/01_main-chapter-code/the-verdict.txt")
file_path = "the-verdict.txt"
urllib.request.urlretrieve(url, file_path)

with open("the-verdict.txt", "r", encoding="utf-8") as f:
    raw_text = f.read()

"""
Create a class that will receive some params like tokenizer and text
and will prepare the input chunks and the target chunks to prepare
the LLM to learn which next token to generate
"""
import torch
from torch.utils.data import Dataset, DataLoader

class GPTDatasetV1(Dataset):
    def __init__(self, txt, tokenizer, max_length, stride):
        self.input_ids = []
        self.target_ids = []

        # Tokenize the entire text
        token_ids = tokenizer.encode(txt, allowed_special={"<|endoftext|>"})

        # Use a sliding window to chunk the book into overlapping sequences of max_length
        for i in range(0, len(token_ids) - max_length, stride):
            input_chunk = token_ids[i:i + max_length]
            target_chunk = token_ids[i + 1: i + max_length + 1]
            self.input_ids.append(torch.tensor(input_chunk))
            self.target_ids.append(torch.tensor(target_chunk))

    def __len__(self):
        return len(self.input_ids)

    def __getitem__(self, idx):
        return self.input_ids[idx], self.target_ids[idx]

"""
Create a data loader which given the text and some params will
prepare the inputs and targets with the previous class and
then create a torch DataLoader with the info
"""
import tiktoken

def create_dataloader_v1(txt, batch_size=4, max_length=256,
                         stride=128, shuffle=True, drop_last=True,
                         num_workers=0):

    # Initialize the tokenizer
    tokenizer = tiktoken.get_encoding("gpt2")

    # Create dataset
    dataset = GPTDatasetV1(txt, tokenizer, max_length, stride)

    # Create dataloader
    dataloader = DataLoader(
        dataset,
        batch_size=batch_size,
        shuffle=shuffle,
        drop_last=drop_last,
        num_workers=num_workers
    )

    return dataloader

"""
Finally, create the data loader with the params we want:
- The used text for training
- batch_size: The size of each batch
- max_length: The size of each entry on each batch
- stride: The sliding window (how many tokens should the next entry advance compared to the previous one). The smaller, the more overfitting; usually this is equal to max_length so the same tokens aren't repeated.
- shuffle: Re-order randomly
"""
dataloader = create_dataloader_v1(
    raw_text, batch_size=8, max_length=4, stride=1, shuffle=False
)

data_iter = iter(dataloader)
first_batch = next(data_iter)
print(first_batch)

# Note the batch_size of 8, the max_length of 4 and the stride of 1
[
    # Input
    tensor([[   40,   367,  2885,  1464],
            [  367,  2885,  1464,  1807],
            [ 2885,  1464,  1807,  3619],
            [ 1464,  1807,  3619,   402],
            [ 1807,  3619,   402,   271],
            [ 3619,   402,   271, 10899],
            [  402,   271, 10899,  2138],
            [  271, 10899,  2138,   257]]),
    # Target
    tensor([[  367,  2885,  1464,  1807],
            [ 2885,  1464,  1807,  3619],
            [ 1464,  1807,  3619,   402],
            [ 1807,  3619,   402,   271],
            [ 3619,   402,   271, 10899],
            [  402,   271, 10899,  2138],
            [  271, 10899,  2138,   257],
            [10899,  2138,   257,  7026]])
]

# With stride=4 this will be the result:
[
    # Input
    tensor([[   40,   367,  2885,  1464],
            [ 1807,  3619,   402,   271],
            [10899,  2138,   257,  7026],
            [15632,   438,  2016,   257],
            [  922,  5891,  1576,   438],
            [  568,   340,   373,   645],
            [ 1049,  5975,   284,   502],
            [  284,  3285,   326,    11]]),
    # Target
    tensor([[  367,  2885,  1464,  1807],
            [ 3619,   402,   271, 10899],
            [ 2138,   257,  7026, 15632],
            [  438,  2016,   257,   922],
            [ 5891,  1576,   438,   568],
            [  340,   373,   645,  1049],
            [ 5975,   284,   502,   284],
            [ 3285,   326,    11,   287]])
]
```
Advanced Sampling Strategies (2023-2025)
1. Temperature-Based Mixture Weighting
Modern LLMs are rarely trained on a single corpus. Instead, they sample from several heterogeneous sources (code, web, academic papers, forums…). The relative share of each source can strongly affect downstream performance. Recent open-source models such as Llama 2 introduced a temperature-based sampling scheme where the probability of drawing a document from corpus i becomes
p(i) = \frac{w_i^{\alpha}}{\sum_j w_j^{\alpha}}
• w_i – raw token proportion of corpus i
• α ("temperature") – a value in (0,1]. α < 1 flattens the distribution, giving more weight to small, high-quality corpora.
Llama 2 used α = 0.7 and showed that lowering α increased evaluation scores on knowledge-heavy tasks while keeping the training mix stable. The same trick is adopted by Mistral (2023) and Claude 3.
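As a worked illustration of the formula (the numbers here are chosen for illustration, not taken from the Llama 2 paper): with two corpora holding 90% and 10% of the raw tokens, α = 0.7 lifts the small corpus's sampling share from 10% to roughly 18%:

```python
w = [0.9, 0.1]          # raw token proportions w_i
alpha = 0.7             # temperature

# p(i) = w_i^alpha / sum_j w_j^alpha
unnorm = [x**alpha for x in w]
Z = sum(unnorm)
p = [x / Z for x in unnorm]

print([round(x, 3) for x in p])  # [0.823, 0.177]
```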
```python
import random
from collections import Counter

def temperature_sample(corpus_ids, alpha=0.7):
    counts = Counter(corpus_ids)  # number of tokens seen per corpus
    probs = {c: c_count**alpha for c, c_count in counts.items()}
    Z = sum(probs.values())
    probs = {c: p / Z for c, p in probs.items()}
    # Now draw according to probs to fill every batch
    corpora, weights = zip(*probs.items())
    return random.choices(corpora, weights=weights, k=1)[0]
```