2. Data Sampling
Data Sampling
Data Sampling is an important process for preparing data to train large language models (LLMs) such as GPT. It involves organizing text data into input and target sequences that the model uses to learn how to predict the next word (or token) based on the preceding words. Proper data sampling ensures that the model effectively captures language patterns and dependencies.
tip
The goal of this second phase is very simple: sample the input data and prepare it for the training phase, usually by splitting the dataset into sequences of a specific length and also generating the expected response.
Why Data Sampling Matters
LLMs such as GPT are trained to generate or predict text by understanding the context provided by previous words. To achieve this, the training data must be structured so that the model can learn the relationship between sequences of words and the words that follow them. This structured approach allows the model to generalize and generate coherent, contextually relevant text.
Key Concepts in Data Sampling
- Tokenization: Breaking text down into smaller units called tokens (e.g., words, subwords, or characters).
- Sequence Length (max_length): The number of tokens in each input sequence.
- Sliding Window: A method for creating overlapping input sequences by moving a window over the tokenized text.
- Stride: The number of tokens the sliding window moves forward to create the next sequence.
Step-by-Step Example
Let's walk through an example to illustrate data sampling.
Example Text
"Lorem ipsum dolor sit amet, consectetur adipiscing elit."
Tokenization
Assume we use a basic tokenizer that splits the text into words and punctuation:
Tokens: ["Lorem", "ipsum", "dolor", "sit", "amet,", "consectetur", "adipiscing", "elit."]
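Because the example keeps punctuation attached to the preceding word ("amet,", "elit."), a plain whitespace split is enough to reproduce this token list. A minimal sketch for illustration only (real LLM pipelines use subword tokenizers such as BPE, as shown with tiktoken later):

```python
# Naive word-level tokenizer sketch: split on whitespace only.
# Real pipelines use subword tokenizers (e.g. BPE via tiktoken); this just mirrors the example above.
text = "Lorem ipsum dolor sit amet, consectetur adipiscing elit."
tokens = text.split()
print(tokens)
# ['Lorem', 'ipsum', 'dolor', 'sit', 'amet,', 'consectetur', 'adipiscing', 'elit.']
```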
Parameters
- Max Sequence Length (max_length): 4 tokens
- Sliding Window Stride: 1 token
Creating Input and Target Sequences
- Sliding Window Approach:
  - Input Sequences: Each input sequence consists of max_length tokens.
  - Target Sequences: Each target sequence consists of the tokens that immediately follow the corresponding input sequence.
- Generating Sequences:
| Window Position | Input Sequence | Target Sequence |
|---|---|---|
| 1 | ["Lorem", "ipsum", "dolor", "sit"] | ["ipsum", "dolor", "sit", "amet,"] |
| 2 | ["ipsum", "dolor", "sit", "amet,"] | ["dolor", "sit", "amet,", "consectetur"] |
| 3 | ["dolor", "sit", "amet,", "consectetur"] | ["sit", "amet,", "consectetur", "adipiscing"] |
| 4 | ["sit", "amet,", "consectetur", "adipiscing"] | ["amet,", "consectetur", "adipiscing", "elit."] |
- Resulting Input and Target Arrays:

Input:
```python
[
    ["Lorem", "ipsum", "dolor", "sit"],
    ["ipsum", "dolor", "sit", "amet,"],
    ["dolor", "sit", "amet,", "consectetur"],
    ["sit", "amet,", "consectetur", "adipiscing"],
]
```

Target:
```python
[
    ["ipsum", "dolor", "sit", "amet,"],
    ["dolor", "sit", "amet,", "consectetur"],
    ["sit", "amet,", "consectetur", "adipiscing"],
    ["amet,", "consectetur", "adipiscing", "elit."],
]
```
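These pairs can be reproduced with a small sliding-window loop. The following is a minimal plain-Python sketch (variable names are illustrative); the GPTDatasetV1 class in the code example below implements the same idea with token IDs and tensors:

```python
# Sliding-window sketch: max_length = 4, stride = 1 (the values from the example above).
tokens = ["Lorem", "ipsum", "dolor", "sit", "amet,", "consectetur", "adipiscing", "elit."]
max_length = 4  # tokens per input sequence
stride = 1      # how far the window advances each step

inputs, targets = [], []
for i in range(0, len(tokens) - max_length, stride):
    inputs.append(tokens[i : i + max_length])           # window starting at position i
    targets.append(tokens[i + 1 : i + max_length + 1])  # same window shifted right by one token

for inp, tgt in zip(inputs, targets):
    print(inp, "->", tgt)
```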
Visual Representation
| Token Position | Token |
|---|---|
| 1 | Lorem |
| 2 | ipsum |
| 3 | dolor |
| 4 | sit |
| 5 | amet, |
| 6 | consectetur |
| 7 | adipiscing |
| 8 | elit. |
Sliding Window with Stride 1:
- First Window (Positions 1-4): ["Lorem", "ipsum", "dolor", "sit"] β Target: ["ipsum", "dolor", "sit", "amet,"]
- Second Window (Positions 2-5): ["ipsum", "dolor", "sit", "amet,"] β Target: ["dolor", "sit", "amet,", "consectetur"]
- Third Window (Positions 3-6): ["dolor", "sit", "amet,", "consectetur"] β Target: ["sit", "amet,", "consectetur", "adipiscing"]
- Fourth Window (Positions 4-7): ["sit", "amet,", "consectetur", "adipiscing"] β Target: ["amet,", "consectetur", "adipiscing", "elit."]
Understanding Stride
- Stride of 1: The window moves forward by one token each time, producing highly overlapping sequences. This can lead to better learning of contextual relationships but may increase the risk of overfitting, since similar data points are repeated.
- Stride of 2: The window moves forward by two tokens each time, reducing overlap. This decreases redundancy and computational load but may miss some contextual nuances.
- Stride Equal to max_length: The window moves forward by the entire window size, producing non-overlapping sequences. This minimizes data redundancy but may limit the model's ability to learn dependencies across sequences.
Example with Stride of 2:
Using the same tokenized text and a max_length of 4:
- First Window (Positions 1-4): ["Lorem", "ipsum", "dolor", "sit"] β Target: ["ipsum", "dolor", "sit", "amet,"]
- Second Window (Positions 3-6): ["dolor", "sit", "amet,", "consectetur"] β Target: ["sit", "amet,", "consectetur", "adipiscing"]
- Third Window (Positions 5-8): ["amet,", "consectetur", "adipiscing", "elit."] β Target: ["consectetur", "adipiscing", "elit.", "sed"] (Assuming continuation)
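To see the trade-off in numbers, you can count how many sequences each stride produces from the same token stream. A small sketch (the make_pairs helper is hypothetical, just the sliding-window loop from the earlier example wrapped in a function):

```python
def make_pairs(token_ids, max_length, stride):
    """Return (input, target) pairs built with a sliding window (illustrative helper)."""
    return [
        (token_ids[i : i + max_length], token_ids[i + 1 : i + max_length + 1])
        for i in range(0, len(token_ids) - max_length, stride)
    ]

token_ids = list(range(100))  # stand-in for 100 tokenized IDs
for stride in (1, 2, 4):      # stride 4 == max_length -> non-overlapping windows
    pairs = make_pairs(token_ids, max_length=4, stride=stride)
    print(f"stride={stride}: {len(pairs)} sequences")
# stride=1 gives 96 heavily overlapping sequences, stride=2 gives 48, stride=4 gives 24 disjoint ones
```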
Code Example
Let's understand this better through a code example from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb:
```python
# Download the text to pre-train the LLM
import urllib.request
url = ("https://raw.githubusercontent.com/rasbt/LLMs-from-scratch/main/ch02/01_main-chapter-code/the-verdict.txt")
file_path = "the-verdict.txt"
urllib.request.urlretrieve(url, file_path)

with open("the-verdict.txt", "r", encoding="utf-8") as f:
    raw_text = f.read()

"""
Create a class that will receive some params like tokenizer and text
and will prepare the input chunks and the target chunks to prepare
the LLM to learn which next token to generate
"""
import torch
from torch.utils.data import Dataset, DataLoader

class GPTDatasetV1(Dataset):
    def __init__(self, txt, tokenizer, max_length, stride):
        self.input_ids = []
        self.target_ids = []

        # Tokenize the entire text
        token_ids = tokenizer.encode(txt, allowed_special={"<|endoftext|>"})

        # Use a sliding window to chunk the book into overlapping sequences of max_length
        for i in range(0, len(token_ids) - max_length, stride):
            input_chunk = token_ids[i:i + max_length]
            target_chunk = token_ids[i + 1: i + max_length + 1]
            self.input_ids.append(torch.tensor(input_chunk))
            self.target_ids.append(torch.tensor(target_chunk))

    def __len__(self):
        return len(self.input_ids)

    def __getitem__(self, idx):
        return self.input_ids[idx], self.target_ids[idx]


"""
Create a data loader which given the text and some params will
prepare the inputs and targets with the previous class and
then create a torch DataLoader with the info
"""
import tiktoken

def create_dataloader_v1(txt, batch_size=4, max_length=256,
                         stride=128, shuffle=True, drop_last=True,
                         num_workers=0):

    # Initialize the tokenizer
    tokenizer = tiktoken.get_encoding("gpt2")

    # Create dataset
    dataset = GPTDatasetV1(txt, tokenizer, max_length, stride)

    # Create dataloader
    dataloader = DataLoader(
        dataset,
        batch_size=batch_size,
        shuffle=shuffle,
        drop_last=drop_last,
        num_workers=num_workers
    )

    return dataloader


"""
Finally, create the data loader with the params we want:
- The used text for training
- batch_size: The size of each batch
- max_length: The size of each entry on each batch
- stride: The sliding window (how many tokens the next entry advances compared to the previous one). The smaller it is, the more overlap and the higher the risk of overfitting; usually this equals max_length so the same tokens aren't repeated.
- shuffle: Re-order randomly
"""
dataloader = create_dataloader_v1(
    raw_text, batch_size=8, max_length=4, stride=1, shuffle=False
)

data_iter = iter(dataloader)
first_batch = next(data_iter)
print(first_batch)

# Note the batch_size of 8, the max_length of 4 and the stride of 1
[
# Input
tensor([[   40,   367,  2885,  1464],
        [  367,  2885,  1464,  1807],
        [ 2885,  1464,  1807,  3619],
        [ 1464,  1807,  3619,   402],
        [ 1807,  3619,   402,   271],
        [ 3619,   402,   271, 10899],
        [  402,   271, 10899,  2138],
        [  271, 10899,  2138,   257]]),
# Target
tensor([[  367,  2885,  1464,  1807],
        [ 2885,  1464,  1807,  3619],
        [ 1464,  1807,  3619,   402],
        [ 1807,  3619,   402,   271],
        [ 3619,   402,   271, 10899],
        [  402,   271, 10899,  2138],
        [  271, 10899,  2138,   257],
        [10899,  2138,   257,  7026]])
]

# With stride=4 this will be the result:
[
# Input
tensor([[   40,   367,  2885,  1464],
        [ 1807,  3619,   402,   271],
        [10899,  2138,   257,  7026],
        [15632,   438,  2016,   257],
        [  922,  5891,  1576,   438],
        [  568,   340,   373,   645],
        [ 1049,  5975,   284,   502],
        [  284,  3285,   326,    11]]),
# Target
tensor([[  367,  2885,  1464,  1807],
        [ 3619,   402,   271, 10899],
        [ 2138,   257,  7026, 15632],
        [  438,  2016,   257,   922],
        [ 5891,  1576,   438,   568],
        [  340,   373,   645,  1049],
        [ 5975,   284,   502,   284],
        [ 3285,   326,    11,   287]])
]
```
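To sanity-check that the targets really are the inputs shifted by one token, you can decode the first batch back to text with the same GPT-2 encoder (a small verification sketch, not part of the original notebook):

```python
import tiktoken

tokenizer = tiktoken.get_encoding("gpt2")
inputs, targets = first_batch  # two tensors of shape (batch_size, max_length)

print(tokenizer.decode(inputs[0].tolist()))   # first input window decoded back to text
print(tokenizer.decode(targets[0].tolist()))  # the same text shifted one token to the right
```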