Commit Graph

662343 Commits

AKASHI Takahiro
78fd584cde arm64: kdump: implement machine_crash_shutdown()
The primary kernel calls machine_crash_shutdown() to shut down non-boot
CPUs and save register state in per-cpu ELF notes before starting the
crash dump kernel. See kernel_kexec().
Even if not all secondary CPUs have shut down, kdump proceeds anyway.

As we don't have to take the non-boot (crashed) CPUs offline before
shutting down (leaving them online preserves their state for the crash
dump), this patch also adds a variant of smp_send_stop().
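
For illustration, a heavily simplified sketch of the shutdown path
described above (not the literal diff; the smp_send_crash_stop() name
for the smp_send_stop() variant is an assumption based on the text):

    void machine_crash_shutdown(struct pt_regs *regs)
    {
        local_irq_disable();

        /* Stop the non-crashing CPUs via a dedicated IPI; unlike a normal
         * hot-unplug they are not taken offline. */
        smp_send_crash_stop();

        /* Save the crashing CPU's registers into its per-cpu ELF note. */
        crash_save_cpu(regs, smp_processor_id());

        pr_info("Starting crashdump kernel...\n");
    }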

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-05 18:29:15 +01:00
AKASHI Takahiro
254a41c0ba arm64: hibernate: preserve kdump image around hibernation
Since arch_kexec_protect_crashkres() removes the mapping for the crash
dump kernel image, the loaded data won't be preserved around hibernation.

In this patch, the helper functions crash_prepare_suspend()/
crash_post_resume() are additionally called before/after hibernation so
that the relevant memory segments are mapped again and preserved just
as the others are.

In addition, to minimize the size of the hibernation image,
crash_is_nosave() is added to pfn_is_nosave() so that only the pages
holding the loaded crash dump kernel image are treated as saveable.
Hibernation excludes any pages that are marked as Reserved and yet
"nosave."
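
As an illustration of the pfn_is_nosave() hook mentioned above, a
simplified sketch (sym_to_pfn() and the __nosave_* symbols are the
existing arm64 hibernate pieces; treat the exact shape as approximate):

    int pfn_is_nosave(unsigned long pfn)
    {
        unsigned long nosave_begin_pfn = sym_to_pfn(&__nosave_begin);
        unsigned long nosave_end_pfn   = sym_to_pfn(&__nosave_end - 1);

        return ((pfn >= nosave_begin_pfn) && (pfn <= nosave_end_pfn)) ||
                crash_is_nosave(pfn);
    }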

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-05 18:28:50 +01:00
Takahiro Akashi
98d2e1539b arm64: kdump: protect crash dump kernel memory
arch_kexec_protect_crashkres() and arch_kexec_unprotect_crashkres()
are meant to be called by kexec_load() in order to protect the memory
allocated for the crash dump kernel once the image is loaded.

The protection is implemented by unmapping the relevant segments of
crash dump kernel memory, rather than making them read-only as other
architectures do, to prevent coherency issues caused by potential cache
aliasing (with mismatched attributes).

Page-level mappings are used consistently here so that the attributes
of segments can be changed at page granularity, and so that the region
can also be shrunk at page granularity through
/sys/kernel/kexec_crash_size, returning the freed memory to the buddy
system.
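
Roughly, the protect/unprotect hooks boil down to the sketch below
(simplified; set_memory_valid() is added by a companion patch in this
series and kexec_crash_image is the already-loaded crash image):

    void arch_kexec_protect_crashkres(void)
    {
        int i;

        /* Unmap (invalidate) every loaded segment of the crash kernel. */
        for (i = 0; i < kexec_crash_image->nr_segments; i++)
            set_memory_valid(
                __phys_to_virt(kexec_crash_image->segment[i].mem),
                kexec_crash_image->segment[i].memsz >> PAGE_SHIFT, 0);
    }

    void arch_kexec_unprotect_crashkres(void)
    {
        int i;

        /* Map the segments again just before they are actually needed. */
        for (i = 0; i < kexec_crash_image->nr_segments; i++)
            set_memory_valid(
                __phys_to_virt(kexec_crash_image->segment[i].mem),
                kexec_crash_image->segment[i].memsz >> PAGE_SHIFT, 1);
    }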

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-05 18:28:35 +01:00
AKASHI Takahiro
9b0aa14e31 arm64: mm: add set_memory_valid()
This function validates and invalidates PTE entries; it will be used by
kdump to protect the loaded crash dump kernel image.
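
The interface is small; an illustrative prototype and a hypothetical
helper showing the intended use (toggle_page() is made up for the
example):

    int set_memory_valid(unsigned long addr, int numpages, int enable);

    static void toggle_page(unsigned long vaddr)
    {
        set_memory_valid(vaddr, 1, 0);  /* clear PTE_VALID: accesses now fault */
        set_memory_valid(vaddr, 1, 1);  /* set PTE_VALID again                 */
    }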

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-05 18:27:53 +01:00
AKASHI Takahiro
764b51ead1 arm64: kdump: reserve memory for crash dump kernel
"crashkernel=" kernel parameter specifies the size (and optionally
the start address) of the system ram to be used by crash dump kernel.
reserve_crashkernel() will allocate and reserve that memory at boot time
of primary kernel.

The memory range will be exposed to userspace as a resource named
"Crash kernel" in /proc/iomem.

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Pratyush Anand <panand@redhat.com>
Reviewed-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-05 18:26:57 +01:00
AKASHI Takahiro
8f579b1c4e arm64: limit memory regions based on DT property, usable-memory-range
The crash dump kernel uses only a limited range of the available memory
as System RAM. On arm64 kdump, this memory range is advertised to the
crash dump kernel via a device-tree property under /chosen,
   linux,usable-memory-range = <BASE SIZE>

The crash dump kernel reads this property at boot time and calls
memblock_cap_memory_range() to limit the usable memory, which would
otherwise be whatever is listed in the UEFI memory map table or in the
"memory" nodes of the device-tree blob.
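
For illustration, a condensed sketch of how the property can be picked
up from the flattened device tree early at boot (the helper names mirror
the arm64 code but the details are approximate):

    static int __init early_init_dt_scan_usablemem(unsigned long node,
            const char *uname, int depth, void *data)
    {
        struct memblock_region *usablemem = data;
        const __be32 *reg;
        int len;

        if (depth != 1 || strcmp(uname, "chosen") != 0)
            return 0;

        reg = of_get_flat_dt_prop(node, "linux,usable-memory-range", &len);
        if (!reg || len < (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32))
            return 1;

        usablemem->base = dt_mem_next_cell(dt_root_addr_cells, &reg);
        usablemem->size = dt_mem_next_cell(dt_root_size_cells, &reg);
        return 1;
    }

    static void __init fdt_enforce_memory_region(void)
    {
        struct memblock_region reg = { .size = 0 };

        of_scan_flat_dt(early_init_dt_scan_usablemem, &reg);
        if (reg.size)
            memblock_cap_memory_range(reg.base, reg.size);
    }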

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Geoff Levand <geoff@infradead.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-05 18:26:54 +01:00
AKASHI Takahiro
c9ca9b4e21 memblock: add memblock_cap_memory_range()
Add memblock_cap_memory_range(), which removes all memblock regions
except the memory range specified in the arguments. In addition,
memblock_mem_limit_remove_map() is reworked so that it is re-implemented
using memblock_cap_memory_range().

This function, like memblock_mem_limit_remove_map(), will not remove
memblocks with the MEMBLOCK_NOMAP attribute, as they may be mapped and
accessed later as "device memory."
See the commit a571d4eb55 ("mm/memblock.c: add new infrastructure to
address the mem limit issue").

This function is used, in a succeeding patch in the arm64 kdump series,
to limit the range of usable memory, or System RAM, in the crash dump
kernel.
(Please note that the "mem=" parameter is of little use for this
purpose.)
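
A simplified sketch of the rework mentioned above; __find_max_addr() is
the internal helper introduced by commit a571d4eb55, and the sentinel
value follows the memblock conventions of that time:

    void __init memblock_mem_limit_remove_map(phys_addr_t limit)
    {
        phys_addr_t max_addr;

        if (!limit)
            return;

        max_addr = __find_max_addr(limit);

        /* @limit exceeds the total size of the memory, do nothing. */
        if (max_addr == (phys_addr_t)ULLONG_MAX)
            return;

        /* Keep [0, max_addr); NOMAP regions outside it are left alone. */
        memblock_cap_memory_range(0, max_addr);
    }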

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Dennis Chen <dennis.chen@arm.com>
Cc: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-05 18:26:50 +01:00
AKASHI Takahiro
4c546b8a34 memblock: add memblock_clear_nomap()
This function, in combination with memblock_mark_nomap(), will be used
in a later arm64 kdump patch to temporarily isolate a range of memory
from the other memory blocks in order to create a specific kernel
mapping at boot time.
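
A hypothetical boot-time caller illustrating the intended pairing (the
function name is made up; the arm64 kdump code uses the same pattern):

    static void __init map_range_specially(phys_addr_t base, phys_addr_t size)
    {
        memblock_mark_nomap(base, size);   /* hide the range from the default mapping */
        /* ... create the special-purpose kernel mapping for [base, base + size) ... */
        memblock_clear_nomap(base, size);  /* return it to normal memblock handling   */
    }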

Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-05 18:26:46 +01:00
Catalin Marinas
dffb0113d5 Merge branch 'arm64/common-sysreg' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux into for-next/core
* 'arm64/common-sysreg' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux:
  arm64: sysreg: add Set/Way sys encodings
  arm64: sysreg: add register encodings used by KVM
  arm64: sysreg: add physical timer registers
  arm64: sysreg: subsume GICv3 sysreg definitions
  arm64: sysreg: add performance monitor registers
  arm64: sysreg: add debug system registers
  arm64: sysreg: sort by encoding
2017-04-04 18:08:47 +01:00
Catalin Marinas
9349e81e38 ACPI ARM64 specific changes for v4.12.

Merge tag 'acpi-arm64-for-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/linux into for-next/core

ACPI ARM64 specific changes for v4.12.

Patches contain:

- IORT kernel interface misc clean-ups
- IORT id mapping interface refactoring in preparation for platform
  MSI (IORT named components -> GIC ITS mappings) devid mapping code
- IORT id mapping implementation for named components nodes to ITS nodes,
  in order to provide the kernel with a firmware interface to map
  platform devices devids to GIC ITS components

* tag 'acpi-arm64-for-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/linux:
  ACPI: platform: setup MSI domain for ACPI based platform device
  ACPI: platform-msi: retrieve devid from IORT
  ACPI/IORT: Introduce iort_node_map_platform_id() to retrieve dev id
  ACPI/IORT: Rename iort_node_map_rid() to make it generic
  ACPI/IORT: Rework iort_match_node_callback() return value handling
  ACPI/IORT: Add missing comment for iort_dev_find_its_id()
  ACPI/IORT: Fix the indentation in iort_scan_node()
2017-04-04 18:01:55 +01:00
Ard Biesheuvel
cad27ef27e arm64: efi: split Image code and data into separate PE/COFF sections
To prevent unintended modifications to the kernel text (malicious or
otherwise) while running the EFI stub, describe the kernel image as
two separate sections: a .text section with read-execute permissions,
covering .text, .rodata and .init.text, and a .data section with
read-write permissions, covering .init.data, .data and .bss.

This relies on the firmware to actually take the section permission
flags into account, but this is something that is currently being
implemented in EDK2, which means we will likely start seeing it in
the wild between one and two years from now.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-04 17:50:59 +01:00
Ard Biesheuvel
f1eb542f39 arm64: efi: replace open coded constants with symbolic ones
Replace open coded constants with symbolic ones throughout the
Image and the EFI headers. No binary level changes are intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-04 17:50:50 +01:00
Ard Biesheuvel
effc7b027a arm64: efi: remove pointless dummy .reloc section
The kernel's EFI PE/COFF header contains a dummy .reloc section, and
an explanatory comment that claims that this is required for the EFI
application loader to accept the Image as a relocatable image (i.e.,
one that can be loaded at any offset and fixed up in place).

This was inherited from the x86 implementation, which has elaborate host
tooling to mangle the PE/COFF header post-link time, and which populates
the .reloc section with a single dummy base relocation. On ARM, no such
tooling exists, and the .reloc section remains empty, and is never even
exposed via the BaseRelocationTable directory entry, which is where the
PE/COFF loader looks for it.

The PE/COFF spec is unclear about relocatable images that do not require
any fixups, but the EDK2 implementation, which is the de facto reference
for PE/COFF in the UEFI space, clearly does not care, and explicitly
mentions (in a comment) that relocatable images with no base relocations
are perfectly fine, as long as they don't have the RELOCS_STRIPPED
attribute set (which is not the case for our PE/COFF image).

So simply remove the .reloc section altogether.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Jones <pjones@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-04 17:50:41 +01:00
Ard Biesheuvel
f328ba470c arm64: efi: remove forbidden values from the PE/COFF header
Bring the PE/COFF header in line with the PE/COFF spec, by setting
NumberOfSymbols to 0, and removing the section alignment flags.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-04 17:50:34 +01:00
Ard Biesheuvel
99922257cf arm64: efi: clean up Image header after PE header has been split off
After having split off the PE header, clean up the bits that remain:
use .long consistently, merge two adjacent #ifdef CONFIG_EFI blocks,
fix the offset of the PE header pointer and remove the redundant .align
that follows it.

Also, since we will be eliminating all open coded constants from the
EFI header in subsequent patches, let's replace the open coded "ARM\x64"
magic number with its .ascii equivalent.

No changes to the resulting binary image are intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-04 17:50:28 +01:00
Ard Biesheuvel
b5f4a214b8 arm64: efi: move EFI header and related data to a separate .S file
In preparation of yet another round of modifications to the PE/COFF
header, macroize it and move the definition into a separate source
file.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-04 17:50:09 +01:00
Mark Rutland
6f5541ba0e include: pe.h: add some missing definitions
Add the missing IMAGE_FILE_MACHINE_ARM64 and IMAGE_DEBUG_TYPE_CODEVIEW
definitions.

We'll need them for the arm64 EFI stub...

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[ardb: add IMAGE_DEBUG_TYPE_CODEVIEW as well]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-04 17:49:49 +01:00
Mark Rutland
65c2e69b3c include: pe.h: allow for use in assembly
Some of the definitions in include/linux/pe.h would be useful for the
EFI stub headers, where values are currently open-coded. Unfortunately
they cannot be used as some structures are also defined in pe.h without
!__ASSEMBLY__ guards.

This patch moves the structure definitions into an #ifndef __ASSEMBLY__
block, so that the common value definitions can be used from assembly.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-04 17:49:26 +01:00
Ard Biesheuvel
214fad5507 arm64: relocation testing module
This module tests the module loader's ELF relocation processing
routines. When loaded, it logs output like below.

    Relocation test:
    -------------------------------------------------------
    R_AARCH64_ABS64                 0xffff880000cccccc pass
    R_AARCH64_ABS32                 0x00000000f800cccc pass
    R_AARCH64_ABS16                 0x000000000000f8cc pass
    R_AARCH64_MOVW_SABS_Gn          0xffff880000cccccc pass
    R_AARCH64_MOVW_UABS_Gn          0xffff880000cccccc pass
    R_AARCH64_ADR_PREL_LO21         0xffffff9cf4d1a400 pass
    R_AARCH64_PREL64                0xffffff9cf4d1a400 pass
    R_AARCH64_PREL32                0xffffff9cf4d1a400 pass
    R_AARCH64_PREL16                0xffffff9cf4d1a400 pass

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-04 17:03:32 +01:00
Dave Martin
46823dd17c arm64: cpufeature: Make ID reg accessor naming less counterintuitive
read_system_reg() can readily be confused with read_sysreg(),
whereas these are really quite different in their meaning.

This patch attempts to reduce the ambiguity by reserving "sysreg"
for the actual system register accessors.

read_system_reg() is instead renamed to read_sanitised_ftr_reg(),
to make it more obvious that the Linux-defined sanitised feature
register cache is being accessed here, not the underlying
architectural system registers.

cpufeature.c's internal __raw_read_system_reg() function is renamed
in line with its actual purpose: a form of read_sysreg() that
indexes on a (non-compile-time-constant) encoding rather than a symbolic
register name.
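
A small usage contrast, assuming SYS_ID_AA64MMFR0_EL1 as the example
register (the wrapper function is made up for illustration):

    static void example(void)
    {
        /* Linux's sanitised view of the feature register (previously read_system_reg()): */
        u64 sanitised = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);

        /* The live architectural register; read_sysreg() keeps this meaning: */
        u64 raw = read_sysreg(id_aa64mmfr0_el1);
    }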

Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-04-04 16:55:41 +01:00
Hanjun Guo
d4f54a1866 ACPI: platform: setup MSI domain for ACPI based platform device
By allowing the platform MSI domain to be created on ACPI platforms,
a platform device MSI domain can be set up when the device is probed.

In order to do that, the MSI domain the platform device connects
to should be retrieved, so iort_get_platform_device_domain() is
introduced to retrieve the domain from the IORT kernel layer.

With the domain retrieved, we need a proper way to set the
domain for the platform device.

Given that some platform devices (irqchips) require the MSI irqdomain
to be their interrupt parent domain, the MSI irqdomain should be
determined before the platform device is probed, but after the platform
device is allocated. This means that the code setting up the MSI
irqdomain, i.e. acpi_configure_pmsi_domain(), should be called in
acpi_platform_notify() (which is triggered after adding a device but
before the respective driver is probed) for the platform MSI domain
set-up path to work properly.

Acked-by: Rafael J. Wysocki <rafael@kernel.org> [for glue.c]
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
[lorenzo.pieralisi@arm.com: rewrote commit log]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Ming Lei <ming.lei@canonical.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Tested-by: Sinan Kaya <okaya@codeaurora.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Tomasz Nowicki <tn@semihalf.com>
2017-03-30 10:20:01 +01:00
Hanjun Guo
ae7c183804 ACPI: platform-msi: retrieve devid from IORT
Devices connecting to an ITS need to identify themselves through a
devid; for platform devices this devid is represented in the IORT table
in the named component node [1], so this patch adds code that scans the
IORT table to retrieve the devices' devid.

Add an IORT interface to collect ITS devices' devids to carry out
platform device MSI mappings with IORT tables.

[1]: https://static.docs.arm.com/den0049/b/DEN0049B_IO_Remapping_Table.pdf

Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
[lorenzo.pieralisi@arm.com: rewrote commit log/dropped ITS changes]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Ming Lei <ming.lei@canonical.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Tested-by: Sinan Kaya <okaya@codeaurora.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Tomasz Nowicki <tn@semihalf.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
2017-03-30 10:12:40 +01:00
Hanjun Guo
8ca4f1d3fb ACPI/IORT: Introduce iort_node_map_platform_id() to retrieve dev id
To retrieve dev id for IORT named components nodes there are
two steps involved (second is optional):

(1) Retrieve the initial id (this may well provide the final mapping)
(2) Map the id (optional if (1) represents the map type we need), this
    is needed for use cases such as NC (named component) -> SMMU -> ITS
    mappings.

The iort_node_get_id() function was created for step (1) above and
iort_node_map_rid() for step (2).

Create a wrapper, named iort_node_map_platform_id(), that encompasses
the two steps at once to retrieve the dev id to provide steps (1)-(2)
functionality.

Since iort_node_map_platform_id() handles the parent type, the type
handling in iort_node_get_id() becomes duplicated; remove it and move
the current iort_node_get_id() users over to iort_node_map_platform_id().

Suggested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Suggested-by: Tomasz Nowicki <tn@semihalf.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
[lorenzo.pieralisi@arm.com: rewrote commit log]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Ming Lei <ming.lei@canonical.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Tested-by: Sinan Kaya <okaya@codeaurora.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Tomasz Nowicki <tn@semihalf.com>
2017-03-29 10:47:10 +01:00
Hanjun Guo
697f609358 ACPI/IORT: Rename iort_node_map_rid() to make it generic
iort_node_map_rid() was designed to take an input id (that is not
necessarily a PCI requester id) and map it to an output id (eg an SMMU
streamid or an ITS deviceid) according to the mappings provided by an
IORT node's mapping entries. This means that the iort_node_map_rid() input
id is not always a PCI requester id as its name, parameters and local
variables suggest, which is misleading.

Apply the s/rid/id substitution to the iort_node_map_rid() mapping
function and its users to make sure its intended usage is clearer.

Suggested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Ming Lei <ming.lei@canonical.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Tested-by: Sinan Kaya <okaya@codeaurora.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Tomasz Nowicki <tn@semihalf.com>
2017-03-29 10:47:09 +01:00
Kefeng Wang
29d981217a arm64: drop unnecessary newlines in show_regs()
There are two unnecessary newlines, one in show_regs() and another
in __show_regs(); drop them.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 14:20:41 +00:00
Ard Biesheuvel
d27cfa1fc8 arm64: mm: set the contiguous bit for kernel mappings where appropriate
This is the third attempt at enabling the use of contiguous hints for
kernel mappings. The most recent attempt 0bfc445dec was reverted after
it turned out that updating permission attributes on live contiguous ranges
may result in TLB conflicts. So this time, the contiguous hint is not set
for .rodata or for the linear alias of .text/.rodata, both of which are
mapped read-write initially, and remapped read-only at a later stage.
(Note that the latter region could also be unmapped and remapped again
with updated permission attributes, given that the region, while live, is
only mapped for the convenience of the hibernation code, but that also
means the TLB footprint is negligible anyway, so why bother)

This enables the following contiguous range sizes for the virtual mapping
of the kernel image, and for the linear mapping:

          granule size |  cont PTE  |  cont PMD  |
          -------------+------------+------------+
               4 KB    |    64 KB   |   32 MB    |
              16 KB    |     2 MB   |    1 GB*   |
              64 KB    |     2 MB   |   16 GB*   |

* Only when built for 3 or more levels of translation. This is due to the
  fact that a 2 level configuration only consists of PGDs and PTEs, and the
  added complexity of dealing with folded PMDs is not justified considering
  that 16 GB contiguous ranges are likely to be ignored by the hardware (and
  16k/2 levels is a niche configuration)

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 14:09:23 +00:00
Ard Biesheuvel
5bd4669471 arm64/mm: remove pointless map/unmap sequences when creating page tables
The routines __pud_populate and __pmd_populate only create a table
entry at their respective level which refers to the next level page
by its physical address, so there is no reason to map this page and
then unmap it immediately after.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 14:09:12 +00:00
Ard Biesheuvel
c0951366d4 arm64/mmu: replace 'page_mappings_only' parameter with flags argument
In preparation of extending the policy for manipulating kernel mappings
with whether or not contiguous hints may be used in the page tables,
replace the bool 'page_mappings_only' with a flags field and a flag
NO_BLOCK_MAPPINGS.
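
Illustrative shape of the change (the flag value and the helper are
paraphrased, not the exact diff):

    #define NO_BLOCK_MAPPINGS   BIT(0)

    /* Callers that used to pass 'page_mappings_only = true' now build a flag word: */
    static int __init mapping_flags(void)
    {
        return debug_pagealloc_enabled() ? NO_BLOCK_MAPPINGS : 0;
    }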

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 13:55:51 +00:00
Ard Biesheuvel
141d1497aa arm64/mmu: add contiguous bit to sanity bug check
A mapping with the contiguous bit cannot be safely manipulated while
live, regardless of whether the bit changes between the old and new
mapping. So take this into account when deciding whether the change
is safe.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 13:55:35 +00:00
Ard Biesheuvel
eccc1bff1b arm64/mmu: ignore debug_pagealloc for kernel segments
The debug_pagealloc facility manipulates kernel mappings in the linear
region at page granularity to detect out of bounds or use-after-free
accesses. Since the kernel segments are not allocated dynamically,
there is no point in taking the debug_pagealloc_enabled flag into
account for them, and we can use block mappings unconditionally.

Note that this applies equally to the linear alias of text/rodata:
we will never have dynamic allocations there given that the same
memory is statically in use by the kernel image.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 13:55:21 +00:00
Ard Biesheuvel
e393cf40ae arm64/mmu: align alloc_init_pte prototype with pmd/pud versions
Align the function prototype of alloc_init_pte() with its pmd and pud
counterparts by replacing the pfn parameter with the equivalent physical
address.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 13:55:09 +00:00
Ard Biesheuvel
2ebe088b73 arm64: mmu: apply strict permissions to .init.text and .init.data
To avoid having mappings that are writable and executable at the same
time, split the init region into a .init.text region that is mapped
read-only, and a .init.data region that is mapped non-executable.

This is possible now that the alternative patching occurs via the linear
mapping, and the linear alias of the init region is always mapped writable
(but never executable).

Since the alternatives descriptions themselves are read-only data, move
those into the .init.text region.

Reviewed-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 13:54:50 +00:00
Ard Biesheuvel
28b066da69 arm64: mmu: map .text as read-only from the outset
Now that alternatives patching code no longer relies on the primary
mapping of .text being writable, we can remove the code that removes
the writable permissions post-init time, and map it read-only from
the outset.

To preserve the existing behavior under rodata=off, which is relied
upon by external debuggers to manage software breakpoints (as pointed
out by Mark), add an early_param() check for rodata=, and use RWX
permissions if it is set to 'off'.
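
A sketch of the rodata= handling described above (close to the actual
patch, but illustrative; kernel_text_prot() is a made-up helper):

    static bool rodata_enabled = true;

    static int __init parse_rodata(char *arg)
    {
        return strtobool(arg, &rodata_enabled);
    }
    early_param("rodata", parse_rodata);

    static pgprot_t __init kernel_text_prot(void)
    {
        /* rodata=off keeps the old RWX behaviour for external debuggers. */
        return rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
    }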

Reviewed-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 13:54:33 +00:00
Ard Biesheuvel
5ea5306c32 arm64: alternatives: apply boot time fixups via the linear mapping
One important rule of thumb when designing a secure software system is
that memory should never be writable and executable at the same time.
We mostly adhere to this rule in the kernel, except at boot time, when
regions may be mapped RWX until after we are done applying alternatives
or making other one-off changes.

For the alternative patching, we can improve the situation by applying
the fixups via the linear mapping, which is never mapped with executable
permissions. So map the linear alias of .text with RW- permissions
initially, and remove the write permissions as soon as alternative
patching has completed.
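
Conceptually, each patched instruction is written through the linear
alias, as in the hypothetical helper below (lm_alias() converts a
kernel-image address into its linear-map alias, which is mapped RW but
never executable):

    static void patch_alternative_insn(__le32 *origptr, u32 new_insn)
    {
        /* origptr points into .text (read-only, executable); the write
         * goes through the writable, non-executable linear alias.       */
        __le32 *writable = lm_alias(origptr);

        *writable = cpu_to_le32(new_insn);
        /* Once all alternatives are applied, the linear alias of .text is
         * switched to read-only as well.                                  */
    }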

Reviewed-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 13:54:19 +00:00
Ard Biesheuvel
aa8c09be7a arm64: mmu: move TLB maintenance from callers to create_mapping_late()
In preparation of refactoring the kernel mapping logic so that text regions
are never mapped writable, which would require adding explicit TLB
maintenance to new call sites of create_mapping_late() (which is currently
invoked twice from the same function), move the TLB maintenance from the
call site into create_mapping_late() itself, and change it from a full
TLB flush into a flush by VA, which is more appropriate here.

Also, given that create_mapping_late() has evolved into a routine that only
updates protection bits on existing mappings, rename it to
update_mapping_prot().

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 13:54:13 +00:00
Ard Biesheuvel
63d7c6afc5 arm: kvm: move kvm_vgic_global_state out of .text section
The kvm_vgic_global_state struct contains a static key which is
written to by jump_label_init() at boot time. So in preparation of
making .text regions truly (well, almost truly) read-only, mark
kvm_vgic_global_state __ro_after_init so it moves to the .rodata
section instead.
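
The change itself is essentially a one-annotation diff; schematically
(the initialiser is shown for context and may differ in detail):

    struct vgic_global kvm_vgic_global_state __ro_after_init = {
        .gicv3_cpuif = STATIC_KEY_FALSE_INIT,
    };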

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 13:53:46 +00:00
Ard Biesheuvel
3b3c6c24de arm64: Revert "arm64: kaslr: fix breakage with CONFIG_MODVERSIONS=y"
This reverts commit 9c0e83c371, which
is no longer needed now that the modversions code plays nice with
relocatable PIE kernels.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-23 13:15:20 +00:00
Mark Rutland
d61c97a777 arm64: move !VHE work to end of el2_setup
We only need to initialise sctlr_el1 if we're installing an EL2 stub, so
we may as well defer this until we're doing so. Similarly, we can defer
initialising CPTR_EL2 until then, as we do not access any trapped
functionality as part of el2_setup.

This patch modifies el2_setup accordingly, allowing us to remove a
branch and simplify the code flow.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-22 17:21:38 +00:00
Mark Rutland
3ad47d055a arm64: reduce el2_setup branching
The early el2_setup code is a little convoluted, with two branches where
one would do. This makes the code more painful to read than is
necessary.

We can remove a branch and simplify the logic by moving the early return
in the booted-at-EL1 case earlier in the function. This separates it
from all the setup logic that only makes sense for EL2.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-22 17:21:38 +00:00
Chris Redmon
fda89d9efc arm64: struct debug_info: Check CONFIG_HAVE_HW_BREAKPOINT
Check if CONFIG_HAVE_HW_BREAKPOINT is enabled before compiling in extra
data required for hardware breakpoints. Compiling out this code when hw
breakpoints are disabled saves about 272 bytes per struct task_struct.
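
Schematically, the struct ends up looking like this (fields abridged
from arch/arm64/include/asm/processor.h):

    struct debug_info {
    #ifdef CONFIG_HAVE_HW_BREAKPOINT
        /* Have we suspended stepping by a debugger? */
        int suspended_step;
        /* Allow breakpoints and watchpoints to be disabled for this thread. */
        int bps_disabled;
        int wps_disabled;
        /* Hardware breakpoints pinned to this task. */
        struct perf_event *hbp_break[ARM_MAX_BRP];
        struct perf_event *hbp_watch[ARM_MAX_WRP];
    #endif
    };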

Signed-off-by: Chris Redmon <credmonster@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-22 17:21:38 +00:00
Geert Uytterhoeven
44176bb38f arm64: Add support for DMA_ATTR_FORCE_CONTIGUOUS to IOMMU
Add support for allocating physically contiguous DMA buffers on arm64
systems with an IOMMU.  This can be useful when two or more devices
with different memory requirements are involved in buffer sharing.

Note that as this uses the CMA allocator, setting the
DMA_ATTR_FORCE_CONTIGUOUS attribute has a runtime-dependency on
CONFIG_DMA_CMA, just like on arm32.

For arm64 systems using swiotlb, no changes are needed to support the
allocation of physically contiguous DMA buffers:
  - swiotlb always uses physically contiguous buffers (up to
    IO_TLB_SEGSIZE = 128 pages),
  - arm64's __dma_alloc_coherent() already calls
    dma_alloc_from_contiguous() when CMA is available.
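
A hypothetical driver-side use of the attribute (dev and size come from
the caller; CONFIG_DMA_CMA must be enabled for the attribute to take
effect behind an IOMMU):

    static void *alloc_contig_buffer(struct device *dev, size_t size,
                                     dma_addr_t *dma)
    {
        /* Ask for a physically contiguous buffer even on the IOMMU path. */
        return dma_alloc_attrs(dev, size, dma, GFP_KERNEL,
                               DMA_ATTR_FORCE_CONTIGUOUS);
    }

    static void free_contig_buffer(struct device *dev, size_t size,
                                   void *cpu_addr, dma_addr_t dma)
    {
        dma_free_attrs(dev, size, cpu_addr, dma, DMA_ATTR_FORCE_CONTIGUOUS);
    }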

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-22 17:21:37 +00:00
Arnd Bergmann
f13d52cb3f arm64: define BUG() instruction without CONFIG_BUG
This mirrors commit e9c38ceba8 ("ARM: 8455/1: define __BUG as
asm(BUG_INSTR) without CONFIG_BUG") to make the behavior of
arm64 consistent with arm and x86, and avoids lots of warnings in
randconfig builds, such as:

kernel/seccomp.c: In function '__seccomp_filter':
kernel/seccomp.c:666:1: error: no return statement in function returning non-void [-Werror=return-type]
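
Conceptually (not the literal arm64 diff), the CONFIG_BUG=n definition
keeps the trapping instruction and the unreachable() hint, which is what
silences warnings like the one above:

    /* Sketch only: the real header also emits a __bug_table entry when
     * CONFIG_GENERIC_BUG is enabled. */
    #define BUG()                                                          \
        do {                                                               \
            asm volatile("brk %[imm]" :: [imm] "i" (BUG_BRK_IMM));         \
            unreachable();                                                 \
        } while (0)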

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-22 17:21:37 +00:00
Hanjun Guo
c92bdfe8e7 ACPI/IORT: Rework iort_match_node_callback() return value handling
The return value handling in iort_match_node_callback() is
too convoluted; update the iort_match_node_callback() return
value handling to make the code easier to read.

Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
[lorenzo.pieralisi@arm.com: rewrote commit log]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Ming Lei <ming.lei@canonical.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Tested-by: Sinan Kaya <okaya@codeaurora.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Tomasz Nowicki <tn@semihalf.com>
2017-03-21 18:09:51 +00:00
Hanjun Guo
6cb6bf56dc ACPI/IORT: Add missing comment for iort_dev_find_its_id()
Add missing req_id parameter to the iort_dev_find_its_id() function
kernel-doc comment.

Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Ming Lei <ming.lei@canonical.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Tested-by: Sinan Kaya <okaya@codeaurora.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Tomasz Nowicki <tn@semihalf.com>
2017-03-21 18:09:51 +00:00
Hanjun Guo
d89cf2e418 ACPI/IORT: Fix the indentation in iort_scan_node()
The indentation in the iort_scan_node() function is wrong, fix it.

Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
[lorenzo.pieralisi@arm.com: massaged commit log]
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Ming Lei <ming.lei@canonical.com>
Tested-by: Wei Xu <xuwei5@hisilicon.com>
Tested-by: Sinan Kaya <okaya@codeaurora.org>
2017-03-21 18:09:51 +00:00
Suzuki K Poulose
c651aae5a7 arm64: v8.3: Support for weaker release consistency
ARMv8.3 adds new instructions to support the Release Consistent
processor consistent (RCpc) model, which is weaker than the
RCsc model.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20 16:30:22 +00:00
Suzuki K Poulose
cb567e79fa arm64: v8.3: Support for complex number instructions
ARMv8.3 adds support for new instructions to aid floating-point
multiplication and addition of complex numbers. Expose the support
via HWCAP and MRS emulation.

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20 16:30:08 +00:00
Suzuki K Poulose
c8c3798d23 arm64: v8.3: Support for Javascript conversion instruction
ARMv8.3 adds support for a new instruction to perform conversion
from double precision floating point to integer to match the
architected behaviour of the equivalent Javascript conversion.
Expose the availability via HWCAP and MRS emulation.
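
A hypothetical userspace probe for the three ARMv8.3 hwcaps added in
this series (HWCAP_JSCVT here, plus HWCAP_FCMA and HWCAP_LRCPC from the
two commits above):

    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>      /* HWCAP_JSCVT, HWCAP_FCMA, HWCAP_LRCPC */

    int main(void)
    {
        unsigned long hwcaps = getauxval(AT_HWCAP);

        printf("jscvt: %d  fcma: %d  lrcpc: %d\n",
               !!(hwcaps & HWCAP_JSCVT),
               !!(hwcaps & HWCAP_FCMA),
               !!(hwcaps & HWCAP_LRCPC));
        return 0;
    }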

Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20 16:29:28 +00:00
Will Deacon
87da236ebc arm64: KVM: Add support for VPIPT I-caches
A VPIPT I-cache has two main properties:

1. Lines allocated into the cache are tagged by VMID and a lookup can
   only hit lines that were allocated with the current VMID.

2. I-cache invalidation from EL1/0 only invalidates lines that match the
   current VMID of the CPU doing the invalidation.

This can cause issues with non-VHE configurations, where the host runs
at EL1 and wants to invalidate I-cache entries for a guest running with
a different VMID. VHE is not affected, because the host runs at EL2 and
I-cache invalidation applies as expected.

This patch solves the problem by invalidating the I-cache when unmapping
a page at stage 2 on a system with a VPIPT I-cache but not running with
VHE enabled. Hopefully this is an obscure enough configuration that the
overhead isn't anything to worry about, although it does mean that the
by-range I-cache invalidation currently performed when mapping at stage
2 can be elided on such systems, because the I-cache will be clean for
the guest VMID following a rollover event.
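
The gist of the fix, as a sketch of the stage-2 unmap path (the wrapper
name is made up; icache_is_vpipt() comes from the companion
cache-detection patch below and has_vhe() is the existing helper):

    static void stage2_unmap_icache_fixup(void)
    {
        /* A host running at EL1 cannot invalidate I-cache lines tagged
         * with the guest's VMID, so invalidate the whole I-cache on a
         * VPIPT system instead. */
        if (icache_is_vpipt() && !has_vhe())
            __flush_icache_all();
    }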

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20 16:25:45 +00:00
Will Deacon
dda288d7e4 arm64: cache: Identify VPIPT I-caches
Add support for detecting VPIPT I-caches, as introduced by ARMv8.2.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-20 16:17:02 +00:00