Booting AArch64 Linux
=====================

Author: Will Deacon <will.deacon@arm.com>
Date  : 07 September 2012

This document is based on the ARM booting document by Russell King and
is relevant to all public releases of the AArch64 Linux kernel.

The AArch64 exception model is made up of a number of exception levels
(EL0 - EL3), with EL0 and EL1 having a secure and a non-secure
counterpart. EL2 is the hypervisor level and exists only in non-secure
mode. EL3 is the highest priority level and exists only in secure mode.
For the purposes of this document, we will use the term `boot loader'
simply to define all software that executes on the CPU(s) before control
is passed to the Linux kernel. This may include secure monitor and
hypervisor code, or it may just be a handful of instructions for
preparing a minimal boot environment.
Essentially, the boot loader should provide (as a minimum) the
following:
1. Setup and initialise the RAM
2. Setup the device tree
3. Decompress the kernel image
4. Call the kernel image

1. Setup and initialise RAM
---------------------------

Requirement: MANDATORY

The boot loader is expected to find and initialise all RAM that the
kernel will use for volatile data storage in the system. It performs
this in a machine dependent manner. (It may use internal algorithms
to automatically locate and size all RAM, or it may use knowledge of
the RAM in the machine, or any other method the boot loader designer
sees fit.)

2. Setup the device tree
------------------------

Requirement: MANDATORY

The device tree blob (dtb) must be placed on an 8-byte boundary within
the first 512 megabytes from the start of the kernel image and must not
cross a 2-megabyte boundary. This is to allow the kernel to map the
blob using a single section mapping in the initial page tables.
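
For illustration, these placement constraints reduce to a few address
checks. The following is a sketch only, assuming a C boot loader that
already knows the physical addresses involved; dtb_placement_ok() and the
SZ_* constants are names local to this example, not kernel interfaces.

  #include <stdbool.h>
  #include <stdint.h>

  #define SZ_2M   (2UL * 1024 * 1024)
  #define SZ_512M (512UL * 1024 * 1024)

  static bool dtb_placement_ok(uint64_t image_start, uint64_t dtb_start,
                               uint64_t dtb_size)
  {
          /* must sit on an 8-byte boundary */
          if (dtb_start & 7)
                  return false;
          /* within the first 512MB from the start of the kernel image */
          if (dtb_start < image_start || dtb_start >= image_start + SZ_512M)
                  return false;
          /* must not cross a 2MB boundary (single section mapping) */
          if (dtb_start / SZ_2M != (dtb_start + dtb_size - 1) / SZ_2M)
                  return false;
          return true;
  }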

3. Decompress the kernel image
------------------------------

Requirement: OPTIONAL

The AArch64 kernel does not currently provide a decompressor and
therefore requires decompression (gzip etc.) to be performed by the boot
loader if a compressed Image target (e.g. Image.gz) is used. For
bootloaders that do not implement this requirement, the uncompressed
Image target is available instead.

4. Call the kernel image
------------------------

Requirement: MANDATORY

The decompressed kernel image contains a 64-byte header as follows:
  u32 code0;                    /* Executable code */
  u32 code1;                    /* Executable code */
  u64 text_offset;              /* Image load offset */
  u64 res0      = 0;            /* reserved */
  u64 res1      = 0;            /* reserved */
  u64 res2      = 0;            /* reserved */
  u64 res3      = 0;            /* reserved */
  u64 res4      = 0;            /* reserved */
  u32 magic     = 0x644d5241;   /* Magic number, little endian, "ARM\x64" */
  u32 res5      = 0;            /* reserved */
Header notes:
- code0/code1 are responsible for branching to stext.
The image must be placed at the specified offset (currently 0x80000)
from the start of the system RAM and called there. The start of the
system RAM must be aligned to 2MB.
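
For illustration, a boot loader might mirror the header with a local
structure and use text_offset and magic as follows. This is a sketch only,
assuming a little-endian loader reading the raw header fields;
struct arm64_image_header and arm64_load_address() are names local to this
example, not types or functions provided by the kernel.

  #include <stdint.h>

  struct arm64_image_header {
          uint32_t code0;         /* executable code */
          uint32_t code1;         /* executable code */
          uint64_t text_offset;   /* image load offset */
          uint64_t res0, res1, res2, res3, res4;
          uint32_t magic;         /* 0x644d5241, "ARM\x64" */
          uint32_t res5;
  };

  #define ARM64_IMAGE_MAGIC 0x644d5241U

  /* dram_base is the 2MB-aligned start of system RAM */
  static uint64_t arm64_load_address(const struct arm64_image_header *hdr,
                                     uint64_t dram_base)
  {
          if (hdr->magic != ARM64_IMAGE_MAGIC)
                  return 0;       /* not an AArch64 kernel image */

          /* place the image text_offset bytes (currently 0x80000) above
           * the start of system RAM; it is also entered at this address */
          return dram_base + hdr->text_offset;
  }
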
Before jumping into the kernel, the following conditions must be met:
- Quiesce all DMA capable devices so that memory does not get
corrupted by bogus network packets or disk data. This will save
you many hours of debug.
- Primary CPU general-purpose register settings
x0 = physical address of device tree blob (dtb) in system RAM.
x1 = 0 (reserved for future use)
x2 = 0 (reserved for future use)
x3 = 0 (reserved for future use)
- CPU mode
All forms of interrupts must be masked in PSTATE.DAIF (Debug, SError,
IRQ and FIQ).
The CPU must be in either EL2 (RECOMMENDED in order to have access to
the virtualisation extensions) or non-secure EL1.
- Caches, MMUs
The MMU must be off.
Instruction cache may be on or off.
The address range corresponding to the loaded kernel image must be
cleaned to the PoC. In the presence of a system cache or other
coherent masters with caches enabled, this will typically require
cache maintenance by VA rather than set/way operations.
System caches which respect the architected cache maintenance by VA
operations must be configured and may be enabled.
System caches which do not respect architected cache maintenance by VA
operations (not recommended) must be configured and disabled.
- Architected timers
CNTFRQ must be programmed with the timer frequency and CNTVOFF must
be programmed with a consistent value on all CPUs. If entering the
kernel at EL1, CNTHCTL_EL2 must have EL1PCTEN (bit 0) set where
available.
- Coherency
All CPUs to be booted by the kernel must be part of the same coherency
domain on entry to the kernel. This may require IMPLEMENTATION DEFINED
initialisation to enable the receiving of maintenance operations on
each CPU.
- System registers
All writable architected system registers at the exception level where
the kernel image will be entered must be initialised by software at a
higher exception level to prevent execution in an UNKNOWN state.
The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs. All CPUs must
enter the kernel in the same exception level.
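
Taken together, the cache and register requirements above amount to a short
sequence on the primary CPU. The following is a minimal sketch, assuming a
boot loader built with a GCC/Clang AArch64 toolchain and already running
with the MMU off; clean_to_poc() and enter_kernel() are illustrative names,
not interfaces defined by this document.

  #include <stdint.h>

  /* Clean a virtual address range to the Point of Coherency (DC CVAC). */
  static void clean_to_poc(uint64_t start, uint64_t end)
  {
          uint64_t ctr, line, addr;

          asm volatile("mrs %0, ctr_el0" : "=r"(ctr));
          line = 4UL << ((ctr >> 16) & 0xf);      /* DminLine: log2(words) */

          for (addr = start & ~(line - 1); addr < end; addr += line)
                  asm volatile("dc cvac, %0" : : "r"(addr) : "memory");
          asm volatile("dsb sy" : : : "memory");
  }

  /* Mask all interrupts, set x0-x3 as required and branch to the image. */
  static void __attribute__((noreturn))
  enter_kernel(uint64_t entry, uint64_t dtb_phys)
  {
          register uint64_t x0 asm("x0") = dtb_phys;      /* dtb address */
          register uint64_t x1 asm("x1") = 0;             /* reserved */
          register uint64_t x2 asm("x2") = 0;             /* reserved */
          register uint64_t x3 asm("x3") = 0;             /* reserved */

          asm volatile("msr daifset, #0xf\n\t"    /* mask D, A, I and F */
                       "isb\n\t"
                       "br %4"
                       : : "r"(x0), "r"(x1), "r"(x2), "r"(x3), "r"(entry)
                       : "memory");
          __builtin_unreachable();
  }

clean_to_poc() would be applied to the loaded kernel image range before
entry; maintenance is by VA, as required when a system cache or other
coherent masters are present.
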
The boot loader is expected to enter the kernel on each CPU in the
following manner:
- The primary CPU must jump directly to the first instruction of the
kernel image. The device tree blob passed by this CPU must contain
an 'enable-method' property for each cpu node. The supported
enable-methods are described below.
It is expected that the bootloader will generate these device tree
properties and insert them into the blob prior to kernel entry.
- CPUs with a "spin-table" enable-method must have a 'cpu-release-addr'
property in their cpu node. This property identifies a
naturally-aligned 64-bit zero-initialised memory location.
These CPUs should spin outside of the kernel in a reserved area of
memory (communicated to the kernel by a /memreserve/ region in the
device tree) polling their cpu-release-addr location, which must be
contained in the reserved region. A wfe instruction may be inserted
to reduce the overhead of the busy-loop and a sev will be issued by
the primary CPU. When a read of the location pointed to by the
cpu-release-addr returns a non-zero value, the CPU must jump to this
value. The value will be written as a single 64-bit little-endian
value, so CPUs must convert the read value to their native endianness
before jumping to it.
- CPUs with a "psci" enable method should remain outside of
the kernel (i.e. outside of the regions of memory described to the
kernel in the memory node, or in a reserved area of memory described
to the kernel by a /memreserve/ region in the device tree). The
kernel will issue CPU_ON calls as described in ARM document number ARM
DEN 0022A ("Power State Coordination Interface System Software on ARM
processors") to bring CPUs into the kernel.
The device tree should contain a 'psci' node, as described in
Documentation/devicetree/bindings/arm/psci.txt.
- Secondary CPU general-purpose register settings
x0 = 0 (reserved for future use)
x1 = 0 (reserved for future use)
x2 = 0 (reserved for future use)
x3 = 0 (reserved for future use)
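
For the "spin-table" method described above, each secondary CPU parks on
its cpu-release-addr roughly as in the sketch below. This is illustrative
only; secondary_spin() is a local name and the endianness conversion relies
on the compiler's predefined byte-order macros.

  #include <stdint.h>

  static void __attribute__((noreturn))
  secondary_spin(volatile uint64_t *release_addr)
  {
          uint64_t entry;

          /* cpu-release-addr starts out zero; sleep until the kernel's sev */
          while ((entry = *release_addr) == 0)
                  asm volatile("wfe");

          /* the value is written as 64-bit little-endian, so convert it
           * when running big-endian (no-op on a little-endian CPU) */
  #if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
          entry = __builtin_bswap64(entry);
  #endif

          /* jump to the published address (a real loader must also ensure
           * x0-x3 are zeroed, which this plain C call does not guarantee) */
          ((void (*)(void))entry)();
          __builtin_unreachable();
  }

The kernel writes the 64-bit entry address to the cpu-release-addr location
and then issues the sev that wakes the wfe above.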