arm64: efi: correctly map runtime regions

The kernel may use a page granularity of 4K, 16K, or 64K depending on
configuration.

When mapping EFI runtime regions, we use memrange_efi_to_native to round
the physical base address of a region down to a kernel page boundary,
and round the size up to a kernel page boundary, adding the residue left
over from rounding down the physical base address. However, we do not
similarly round down the virtual base address.
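
As a rough illustration of that rounding (a standalone sketch with illustrative macros, not the kernel's actual memrange_efi_to_native, and assuming a 64K kernel granule purely for the example):

#include <stdint.h>

#define EFI_PAGE_SHIFT  12                      /* EFI descriptors count 4K pages */
#define PAGE_SHIFT      16                      /* e.g. a 64K kernel granule */
#define PAGE_SIZE       (1ULL << PAGE_SHIFT)
#define PAGE_MASK       (~(PAGE_SIZE - 1))

/* Round a physical range out to kernel-page granularity, as described above. */
static void sketch_memrange_efi_to_native(uint64_t *paddr, uint64_t *npages)
{
        uint64_t residue = *paddr & ~PAGE_MASK;  /* offset of the base within a kernel page */
        uint64_t size = (*npages << EFI_PAGE_SHIFT) + residue;

        *paddr &= PAGE_MASK;                             /* round the physical base down */
        *npages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;  /* size rounded up, now in kernel pages */
}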

In __create_mapping we account for the offset of the virtual base from a
granule boundary, adding the residue to the size before rounding the
base down to said granule boundary.
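
The adjustment referred to here is roughly the following (a simplified sketch reusing the example PAGE_* macros above, not the kernel's verbatim code; the real function goes on to walk and populate the page tables):

static void sketch_create_mapping(uint64_t phys, uint64_t virt, uint64_t size)
{
        /* Grow the size by the virtual base's offset from a granule boundary... */
        uint64_t length = (size + (virt & ~PAGE_MASK) + PAGE_SIZE - 1) & PAGE_MASK;
        /* ...then round the virtual base down to that boundary. */
        uint64_t addr = virt & PAGE_MASK;

        /* ... map [addr, addr + length) onto phys (page-table walk elided) ... */
}

Because the offset is folded into the length here, callers may pass base addresses that are not page aligned.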

Thus we account for the residue twice, and when the residue is non-zero
this causes __create_mapping to map an additional page at the end of the
region. Depending on the memory map, this page may be in a region we are
not intended/permitted to map, or may clash with a different region that
we wish to map. In typical cases, mapping the next item in the memory
map will overwrite the erroneously created entry, as we sort the memory
map in the stub.
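
To make the double-counting concrete (hypothetical numbers, 64K kernel pages): a runtime region at physical address 0x20001000 covering 16 EFI pages (0x10000 bytes) comes out of memrange_efi_to_native as base 0x20000000 and size 0x20000, with the 0x1000 residue already folded in. Assuming the stub gave the region a virtual address with the same 0x1000 offset within a 64K page (as it must for a mapping at that granule to be possible), __create_mapping then computes a length of PAGE_ALIGN(0x20000 + 0x1000) = 0x30000, i.e. three 64K pages where two would do, spilling one page past the region.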

As __create_mapping can cope with base addresses which are not page
aligned, we can instead rely on it to map the region appropriately, and
simplify efi_virtmap_init by removing the unnecessary code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Mark Rutland authored 2015-11-23 11:09:11 +00:00; committed by Catalin Marinas
parent c03784ee8a
commit 3b12acf4c9

@@ -227,7 +227,6 @@ static bool __init efi_virtmap_init(void)
 	init_new_context(NULL, &efi_mm);
 
 	for_each_efi_memory_desc(&memmap, md) {
-		u64 paddr, npages, size;
 		pgprot_t prot;
 
 		if (!(md->attribute & EFI_MEMORY_RUNTIME))
@@ -235,11 +234,6 @@ static bool __init efi_virtmap_init(void)
 		if (md->virt_addr == 0)
 			return false;
 
-		paddr = md->phys_addr;
-		npages = md->num_pages;
-		memrange_efi_to_native(&paddr, &npages);
-		size = npages << PAGE_SHIFT;
-
 		pr_info(" EFI remap 0x%016llx => %p\n",
 			md->phys_addr, (void *)md->virt_addr);
 
@@ -256,7 +250,8 @@ static bool __init efi_virtmap_init(void)
 		else
 			prot = PAGE_KERNEL;
 
-		create_pgd_mapping(&efi_mm, paddr, md->virt_addr, size,
+		create_pgd_mapping(&efi_mm, md->phys_addr, md->virt_addr,
+				   md->num_pages << EFI_PAGE_SHIFT,
 				   __pgprot(pgprot_val(prot) | PTE_NG));
 	}
 	return true;