Commit Graph

1906 Commits

Author SHA1 Message Date
Suzuki K. Poulose
686e783869 arm64: Introduce helpers for page table levels
Introduce helpers for finding the number of page table
levels required for a given VA width, and the shift for a
particular page table level.

Convert the existing users to the new helpers. More users
to follow.
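
A rough sketch of the arithmetic such helpers encapsulate (the names
below are hypothetical and the real macros in asm/kernel-pgtable.h may
differ in detail); each translation level resolves (PAGE_SHIFT - 3)
bits of the virtual address:

  #define PAGE_SHIFT 12                 /* assume a 4K granule for this sketch */

  /* Number of levels needed to map va_bits of VA space (39 -> 3, 48 -> 4). */
  #define PGTABLE_LEVELS(va_bits)       (((va_bits) - 4) / (PAGE_SHIFT - 3))

  /* Address shift handled by page table level n (level 3 == PAGE_SHIFT). */
  #define PGTABLE_LEVEL_SHIFT(n)        ((PAGE_SHIFT - 3) * (4 - (n)) + 3)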

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19 17:53:08 +01:00
Suzuki K. Poulose
b433dce056 arm64: Handle section maps for swapper/idmap
We use section maps with 4K page size to create the swapper/idmaps.
So far we have used !64K or 4K checks to handle the case where we
use the section maps.
This patch adds a new symbol, ARM64_SWAPPER_USES_SECTION_MAPS, to
handle cases where we use section maps, instead of using the page size
symbols.
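
A minimal sketch of such a selector, assuming it is keyed off the
64K-pages config (the real definition in asm/kernel-pgtable.h may
differ in placement and form):

  /* Section maps only make sense with the 4K granule, where a level 2
   * block entry covers 2MB; with 64K pages the swapper uses page maps. */
  #ifdef CONFIG_ARM64_64K_PAGES
  #define ARM64_SWAPPER_USES_SECTION_MAPS 0
  #else
  #define ARM64_SWAPPER_USES_SECTION_MAPS 1
  #endif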

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19 17:52:36 +01:00
Suzuki K. Poulose
87d1587bef arm64: Move swapper pagetable definitions
Move the kernel pagetable (both swapper and idmap) definitions
from the generic asm/page.h to a new file, asm/kernel-pgtable.h.

This is mostly a cosmetic change, cleaning up asm/page.h to
get rid of the arch-specific details which are not needed by the
generic code.

Also rename the symbols to prevent conflicts, e.g.:
 	BLOCK_SHIFT => SWAPPER_BLOCK_SHIFT

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-19 17:52:14 +01:00
Yang Shi
fbc61a26e6 arm64: debug: Fix typo in debug-monitors.c
Fix handers to handlers.

Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-16 18:21:12 +01:00
Mark Salyzyn
77f3228f77 arm64: AArch32 user space PC alignment exception
ARMv7 does not have a PC alignment exception. ARMv8 AArch32
user space, however, can produce a PC alignment exception. Add
a handler so that we do not dump an unexpected stack trace in
the logs.

Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-16 14:55:49 +01:00
Catalin Marinas
7db743c6d8 arm64: Minor coding style fixes for kc_offset_to_vaddr and kc_vaddr_to_offset
These were introduced by commit 03875ad52f (arm64: add
kc_offset_to_vaddr and kc_vaddr_to_offset macro).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-16 14:34:53 +01:00
Catalin Marinas
52fa1927e9 Revert "arm64: ioremap: add ioremap_cache macro"
This reverts commit 1b6d7f8742.

This patch would conflict with Dan Williams' "tree-wide convert to
memremap()" series (ioremap_cache replaced by arch_memremap)

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-13 16:18:17 +01:00
yalin wang
03875ad52f arm64: add kc_offset_to_vaddr and kc_vaddr_to_offset macro
This patch adds kc_offset_to_vaddr() and kc_vaddr_to_offset().
The default versions don't work on arm64 because some arm64 kernel
addresses lie below PAGE_OFFSET: module and vmemmap addresses, for
example, are both below PAGE_OFFSET.
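
A hedged sketch of what the arm64 overrides look like, expressed in
terms of VA_START (treat the exact definitions as assumptions rather
than a copy of the patch):

  /* Fold the whole kernel half of the address space (everything above
   * VA_START, including module and vmemmap addresses below PAGE_OFFSET)
   * into kcore offsets and back. */
  #define kc_vaddr_to_offset(v)   ((v) & ~VA_START)
  #define kc_offset_to_vaddr(o)   ((o) | VA_START)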

Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-13 15:15:24 +01:00
yalin wang
1b6d7f8742 arm64: ioremap: add ioremap_cache macro
Add an ioremap_cache macro, because some code tests whether this
macro is defined and generates a generic version if it is not;
memremap.c, for example, does this.
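
The pattern being relied on, roughly (an illustrative sketch; the
generic fallback shown here is an assumption about how memremap.c
reacted at the time, not quoted code):

  /* arm64: provide a cacheable ioremap so generic code can detect it. */
  #define ioremap_cache(addr, size) \
          __ioremap((addr), (size), __pgprot(PROT_NORMAL))

  /* Generic code only supplies a fallback when the architecture has
   * not defined the macro above. */
  #ifndef ioremap_cache
  #define ioremap_cache ioremap
  #endif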

Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-13 15:10:25 +01:00
Will Deacon
83040123fd arm64: kasan: fix issues reported by sparse
Sparse reports some new issues introduced by the kasan patches:

  arch/arm64/mm/kasan_init.c:91:13: warning: no previous prototype for
  'kasan_early_init' [-Wmissing-prototypes] void __init kasan_early_init(void)
             ^
  arch/arm64/mm/kasan_init.c:91:13: warning: symbol 'kasan_early_init'
  was not declared. Should it be static? [sparse]

This patch resolves the problem by adding a prototype for
kasan_early_init and marking the function as asmlinkage, since it's only
called from head.S.
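
The shape of the fix, sketched (the prototype presumably lands in a
header such as asm/kasan.h; treat the exact location as an assumption):

  /* Give the tools a prototype for the entry point called from head.S,
   * and mark it asmlinkage since it has no C callers. */
  asmlinkage void __init kasan_early_init(void);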

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-13 14:54:42 +01:00
Linus Walleij
ee7f881b59 ARM64: kasan: print memory assignment
This prints out the virtual memory assigned to KASan in the
boot crawl along with other memory assignments, if and only
if KASan is activated.

Example dmesg from the Juno Development board:

Memory: 1691156K/2080768K available (5465K kernel code, 444K rwdata,
2160K rodata, 340K init, 217K bss, 373228K reserved, 16384K cma-reserved)
Virtual kernel memory layout:
    kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
    vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
    vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
              0xffffffbdc2000000 - 0xffffffbdc3fc0000   (    31 MB actual)
    fixed   : 0xffffffbffabfd000 - 0xffffffbffac00000   (    12 KB)
    PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000   (    16 MB)
    modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
    memory  : 0xffffffc000000000 - 0xffffffc07f000000   (  2032 MB)
      .init : 0xffffffc0007f5000 - 0xffffffc00084a000   (   340 KB)
      .text : 0xffffffc000080000 - 0xffffffc0007f45b4   (  7634 KB)
      .data : 0xffffffc000850000 - 0xffffffc0008bf200   (   445 KB)

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12 17:46:43 +01:00
Andrey Ryabinin
39d114ddc6 arm64: add KASAN support
This patch adds arch specific code for kernel address sanitizer
(see Documentation/kasan.txt).

One eighth of the kernel address space is reserved for shadow memory.
There was no hole big enough for this, so the virtual addresses for
the shadow were stolen from the vmalloc area.

At the early boot stage the whole shadow region is populated with just
one physical page (kasan_zero_page). Later, this page is reused as a
read-only zero shadow for memory that KASan does not currently track
(vmalloc).
After the physical memory has been mapped, pages for the shadow memory
are allocated and mapped.

Functions like memset/memmove/memcpy perform a lot of memory accesses.
If a bad pointer is passed to one of these functions, it is important
to catch this. The compiler's instrumentation cannot do so, since
these functions are written in assembly.
KASan replaces the memory functions with manually instrumented variants.
The original functions are declared as weak symbols so that the strong
definitions in mm/kasan/kasan.c can replace them. The original functions
also have aliases with a '__' prefix, so the non-instrumented variant
can be called when needed.
Some files are built without KASan instrumentation (e.g. mm/slub.c).
For such files the original mem* functions are replaced (via #define)
with the prefixed variants to disable memory access checks.
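
A simplified sketch of the interception pattern described above
(hypothetical code, not copied from the patch): the strong C definition
replaces the weak assembly symbol and calls the '__'-prefixed,
uninstrumented variant after checking both buffers.

  void *__memcpy(void *dst, const void *src, size_t len); /* uninstrumented */

  void *memcpy(void *dst, const void *src, size_t len)
  {
          /* Explicit shadow checks, since the compiler cannot instrument asm. */
          check_memory_region((unsigned long)src, len, false);
          check_memory_region((unsigned long)dst, len, true);

          return __memcpy(dst, src, len);
  }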

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12 17:46:36 +01:00
Andrey Ryabinin
fd2203dd35 arm64: move PGD_SIZE definition to pgalloc.h
This will be used by KASAN later.

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12 17:46:30 +01:00
Will Deacon
305d454aaa arm64: atomics: implement native {relaxed, acquire, release} atomics
Commit 654672d4ba ("locking/atomics: Add _{acquire|release|relaxed}()
variants of some atomic operations") introduced a relaxed atomic API to
Linux that maps nicely onto the arm64 memory model, including the new
ARMv8.1 atomic instructions.

This patch hooks up the API to our relaxed atomic instructions, rather
than having them all expand to the full-barrier variants as they do
currently.
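
A hedged usage example (illustrative only): after this patch a call
like the one below compiles to a plain exclusive-load/store (or LSE)
sequence with no trailing full barrier.

  /* A statistics counter needs atomicity but no ordering, so the
   * relaxed variant is sufficient. */
  static inline int stat_bump(atomic_t *counter)
  {
          return atomic_add_return_relaxed(1, counter);
  }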

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12 17:36:58 +01:00
Ard Biesheuvel
e8f3010f73 arm64/efi: isolate EFI stub from the kernel proper
Since arm64 does not use a builtin decompressor, the EFI stub is built
into the kernel proper. So far, this has been working fine, but actually,
since the stub is in fact a PE/COFF relocatable binary that is executed
at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we
should not be seamlessly sharing code with the kernel proper, which is a
position dependent executable linked at a high virtual offset.

So instead, separate the contents of libstub and its dependencies by
putting them into their own namespace, prefixing all of their symbols
with __efistub. This way, we have tight control over what parts of the
kernel proper are referenced by the stub.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12 16:20:12 +01:00
Ard Biesheuvel
207918461e arm64: use ENDPIPROC() to annotate position independent assembler routines
For more control over which functions are called with the MMU off or
with the UEFI 1:1 mapping active, annotate some assembler routines as
position independent. This is done by introducing ENDPIPROC(), which
replaces the ENDPROC() declaration of those routines.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12 16:19:45 +01:00
Catalin Marinas
cb50ce324e arm64: Fix missing #include in hw_breakpoint.c
A prior commit, which detects the hw breakpoint ABI behaviour based on
the target state, missed the asm/compat.h include, so the build fails
with !CONFIG_COMPAT.

Fixes: 8f48c06290 ("arm64: hw_breakpoint: use target state to determine ABI behaviour")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-12 12:10:53 +01:00
Yang Yingliang
217d453d47 arm64: fix a migrating irq bug when hotplug cpu
When a CPU is disabled, all of its IRQs will be migrated to another CPU.
In some cases the new affinity differs and the old affinity needs to be
updated, but if irq_set_affinity's return value is IRQ_SET_MASK_OK_DONE,
the old affinity cannot be updated. Fix it by using irq_do_set_affinity.

Migrating interrupts is a core code matter, so use the generic
function irq_migrate_all_off_this_cpu() from kernel/irq/migration.c
to migrate the interrupts.

Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-09 17:40:35 +01:00
Jeremy Linton
348a65cdcb arm64: Mark kernel page ranges contiguous
With 64k pages, the next larger segment size is 512M. The Linux
kernel also uses different protection flags to cover its code and data.
Because of this requirement, the vast majority of the kernel code and
data structures end up being mapped with 64k pages instead of the larger
pages common with a 4k page kernel.

Recent ARM processors support a contiguous bit in the
page tables which allows a single TLB entry to cover a range larger
than a single PTE if that range is mapped into physically contiguous
RAM.

So, for the kernel it is a good idea to set this flag. Some basic
micro-benchmarks show it can significantly reduce the number of
L1 dTLB refills.

Add boot option to enable/disable CONT marking, as well as fix a
bug found by Steve Capper.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
[catalin.marinas@arm.com: remove CONFIG_ARM64_CONT_PTE altogether]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-08 18:44:14 +01:00
Jeremy Linton
202e41a1c2 arm64: Make the kernel page dump utility aware of the CONT bit
The kernel page dump utility needs to be aware of the CONT bit before
it will break up page ranges for display.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-08 18:39:57 +01:00
Jeremy Linton
06f90d2527 arm64: Default kernel pages should be contiguous
The default page attributes for a PMD being broken should have the CONT bit
set. Create a new definition for an early boot range of PTEs that are
contiguous.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-08 18:39:46 +01:00
Jeremy Linton
93ef666a09 arm64: Macros to check/set/unset the contiguous bit
Add the supporting macros to check if the contiguous bit
is set, set the bit, or clear it in a PTE entry.
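
The macros amount to testing, setting and clearing PTE_CONT on a pte
value; a sketch (the exact names and helpers in pgtable.h may differ):

  #define pte_cont(pte)           (!!(pte_val(pte) & PTE_CONT))
  #define pte_mkcont(pte)         set_pte_bit(pte, __pgprot(PTE_CONT))
  #define pte_mknoncont(pte)      clear_pte_bit(pte, __pgprot(PTE_CONT))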

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-08 18:39:19 +01:00
Jeremy Linton
ecf35a237a arm64: PTE/PMD contiguous bit definition
Define the bit positions in the PTE and PMD for the
contiguous bit.
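
For reference, the contiguous hint sits in bit [52] of the stage 1
descriptors; a sketch of the definitions (the exact form used in
pgtable-hwdef.h is an assumption):

  #define PTE_CONT                (_AT(pteval_t, 1) << 52)  /* contiguous range */
  #define PMD_SECT_CONT           (_AT(pmdval_t, 1) << 52)  /* contiguous range */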

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-08 18:39:10 +01:00
Jeremy Linton
2ff4439bb4 arm64: Add contiguous page flag shifts and constants
Add the number of pages required to form a contiguous range,
as well as some supporting constants.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-08 18:39:03 +01:00
Mark Rutland
01a507a371 arm64: dts: juno: describe PMUs separately
The A57 and A53 PMUs in Juno support different events, so describe them
separately in both the Juno and Juno R1 DTs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Liviu Dudau <liviu.dudau@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 14:25:41 +01:00
Mark Rutland
62a4dda9d6 arm64: perf: add Cortex-A57 support
The Cortex-A57 PMU supports a few events outside of the required PMUv3
set that are rather useful.

This patch adds the event map data for said events.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 14:25:24 +01:00
Mark Rutland
ac82d12772 arm64: perf: add Cortex-A53 support
The Cortex-A53 PMU supports a few events outside of the required PMUv3
set that are rather useful.

This patch adds the event map data for said events.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 14:25:07 +01:00
Mark Rutland
6475b2d846 arm64: perf: move to shared arm_pmu framework
Now that the arm_pmu framework has been factored out to drivers/perf we
can make use of it for arm64, gaining support for heterogeneous PMUs
and unifying the two codebases before they diverge further.

The as yet unused PMU name for PMUv3 is changed to armv8_pmuv3, matching
the style previously applied to the 32-bit PMUs.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 14:24:48 +01:00
Will Deacon
8f48c06290 arm64: hw_breakpoint: use target state to determine ABI behaviour
The arm64 hw_breakpoint interface is slightly less flexible than its
32-bit counterpart, thanks to some changes in the architecture rendering
unaligned watchpoint addresses obsolete for AArch64.

However, in a multi-arch environment (i.e. debugging a 32-bit target
with a 64-bit GDB under a 64-bit kernel), we need to provide a feature
compatible interface to GDB in order for debugging to function correctly.

This patch adds a new helper, is_compat_bp, to our hw_breakpoint
implementation which changes the interface behaviour based on the
architecture of the debug target as opposed to the debugger itself.
This allows debugging to function as expected for multi-arch
configurations without relying on deprecated architectural behaviours
when debugging native applications.
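
A sketch of what such a helper might look like, keyed off the debug
target rather than the current task (assumed shape, not the verbatim
patch):

  static int is_compat_bp(struct perf_event *bp)
  {
          struct task_struct *tsk = bp->hw.target;

          /* Decide the ABI from the task being debugged, not the debugger. */
          return tsk && is_compat_thread(task_thread_info(tsk));
  }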

Cc: Yao Qi <yao.qi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 14:19:10 +01:00
Will Deacon
120798d2e7 arm64: mm: remove dsb from update_mmu_cache
update_mmu_cache() consists of a dsb(ishst) instruction so that new user
mappings are guaranteed to be visible to the page table walker on
exception return.

In reality this can be a very expensive operation which is rarely needed.
Removing this barrier shows a modest improvement in hackbench scores and,
in the worst case, we simply re-take the user fault and establish that
there was nothing to do.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:56:37 +01:00
Will Deacon
28c6fbc3b4 arm64: tlb: remove redundant barrier from __flush_tlb_pgtable
__flush_tlb_pgtable is used to invalidate intermediate page table
entries after they have been cleared and are about to be freed. Since
the pXd_clear operations imply memory barriers, we don't need the extra
one here.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:56:33 +01:00
Will Deacon
38d9628750 arm64: mm: kill mm_cpumask usage
mm_cpumask isn't actually used for anything on arm64, so remove all the
code trying to keep it up-to-date.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:56:29 +01:00
Will Deacon
c2775b2ee5 arm64: switch_mm: simplify mm and CPU checks
switch_mm performs some checks to try and avoid entering the ASID
allocator:

  (1) If we're switching to the init_mm (no user mappings), then simply
      set a reserved TTBR0 value with no page table (the zero page)

  (2) If prev == next *and* the mm_cpumask indicates that we've run on
      this CPU before, then we can skip the allocator.

However, there is plenty of redundancy here. With the new ASID allocator,
if prev == next, then we know that our ASID is valid and do not need to
worry about re-allocation. Consequently, we can drop the mm_cpumask check
in (2) and move the prev == next check before the init_mm check, since
if prev == next == init_mm then there's nothing to do.
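
Roughly, the resulting check order looks like this (a hedged sketch,
not the verbatim patch):

  static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
                               struct task_struct *tsk)
  {
          unsigned int cpu = smp_processor_id();

          if (prev == next)
                  return;                 /* ASID is already valid */

          if (next == &init_mm) {         /* no user mappings */
                  cpu_set_reserved_ttbr0();
                  return;
          }

          check_and_switch_context(next, cpu);
  }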

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:56:25 +01:00
Will Deacon
5a7862e830 arm64: tlbflush: avoid flushing when fullmm == 1
The TLB gather code sets fullmm=1 when tearing down the entire address
space for an mm_struct on exit or execve. Given that the ASID allocator
will never re-allocate a dirty ASID, this flushing is not needed and can
simply be avoided in the flushing code.
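
In practice this reduces to an early return in the gather flush path,
along the lines of (a simplified sketch; the non-fullmm path is
abbreviated here):

  static inline void tlb_flush(struct mmu_gather *tlb)
  {
          /* On exit/execve the ASID simply goes stale; the allocator will
           * never hand it out again without a rollover flush, so skip the
           * broadcast invalidation entirely. */
          if (tlb->fullmm)
                  return;

          flush_tlb_mm(tlb->mm);
  }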

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:56:21 +01:00
Will Deacon
f3e002c24e arm64: tlbflush: remove redundant ASID casts to (unsigned long)
The ASID macro returns a 64-bit (long long) value, so there is no need
to cast to (unsigned long) before shifting prior to a TLBI operation.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:56:06 +01:00
Will Deacon
5aec715d7d arm64: mm: rewrite ASID allocator and MM context-switching code
Our current switch_mm implementation suffers from a number of problems:

  (1) The ASID allocator relies on IPIs to synchronise the CPUs on a
      rollover event

  (2) Because of (1), we cannot allocate ASIDs with interrupts disabled
      and therefore make use of a TIF_SWITCH_MM flag to postpone the
      actual switch to finish_arch_post_lock_switch

  (3) We run context switch with a reserved (invalid) TTBR0 value, even
      though the ASID and pgd are updated atomically

  (4) We take a global spinlock (cpu_asid_lock) during context-switch

  (5) We use h/w broadcast TLB operations when they are not required
      (e.g. in flush_context)

This patch addresses these problems by rewriting the ASID algorithm to
match the bitmap-based arch/arm/ implementation more closely. This in
turn allows us to remove many of the complications surrounding switch_mm,
including the ugly thread flag.
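
A heavily simplified sketch of the bitmap-based scheme (hypothetical
names; the real allocator also tracks per-CPU active/reserved ASIDs and
folds a generation count into the context id):

  #define ASID_BITS       16
  #define NUM_ASIDS       (1UL << ASID_BITS)
  #define ASID_MASK       (NUM_ASIDS - 1)

  static DECLARE_BITMAP(asid_map, NUM_ASIDS);
  static u64 generation = NUM_ASIDS;      /* high bits; bumped on rollover */

  static u64 new_context(struct mm_struct *mm)
  {
          u64 asid = mm->context.id;

          /* Fast path: the mm already holds an ASID from this generation. */
          if (asid && (asid & ~ASID_MASK) == generation)
                  return asid;

          asid = find_next_zero_bit(asid_map, NUM_ASIDS, 1);
          if (asid == NUM_ASIDS) {
                  /* Rollover: new generation, wipe the map, flush stale TLBs. */
                  generation += NUM_ASIDS;
                  bitmap_clear(asid_map, 0, NUM_ASIDS);
                  flush_context();
                  asid = find_next_zero_bit(asid_map, NUM_ASIDS, 1);
          }
          __set_bit(asid, asid_map);
          return generation | asid;
  }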

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:55:41 +01:00
Will Deacon
8e63d38876 arm64: flush: use local TLB and I-cache invalidation
There are a number of places where a single CPU is running with a
private page-table and we need to perform maintenance on the TLB and
I-cache in order to ensure correctness, but do not require the operation
to be broadcast to other CPUs.

This patch adds local variants of tlb_flush_all and __flush_icache_all
to support these use-cases and updates the callers respectively.
__local_flush_icache_all also implies an isb, since it is intended to be
used synchronously.
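
Sketches of the local variants described above (close to what one would
expect, but treat the exact bodies as assumptions): they use
non-shareable barriers and the non-broadcast TLBI/IC forms.

  static inline void local_flush_tlb_all(void)
  {
          dsb(nshst);                     /* order prior page table updates */
          asm("tlbi vmalle1");            /* local, not inner-shareable */
          dsb(nsh);
          isb();
  }

  static inline void __local_flush_icache_all(void)
  {
          asm("ic iallu");                /* invalidate I-cache to PoU, locally */
          dsb(nsh);
          isb();                          /* implied isb, for synchronous use */
  }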

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Daney <david.daney@cavium.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:45:27 +01:00
Will Deacon
fa7aae8a42 arm64: proc: de-scope TLBI operation during cold boot
When cold-booting a CPU, we must invalidate any junk entries from the
local TLB prior to enabling the MMU. This doesn't require broadcasting
within the inner-shareable domain, so de-scope the operation to apply
only to the local CPU.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:44:25 +01:00
Will Deacon
c51e97d89e arm64: mm: remove unused cpu_set_idmap_tcr_t0sz function
With commit b08d4640a3 ("arm64: remove dead code"),
cpu_set_idmap_tcr_t0sz is no longer called and can therefore be removed
from the kernel.

This patch removes the function and effectively inlines the helper
function __cpu_set_tcr_t0sz into cpu_set_default_tcr_t0sz.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:44:05 +01:00
Andrey Ryabinin
127db024a7 arm64: introduce VA_START macro - the first kernel virtual address.
In order to not use lengthy (UL(0xffffffffffffffff) << VA_BITS) everywhere,
replace it with VA_START.
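
The macro is just that expression given a name (per the description
above):

  /* First virtual address of the kernel half of the address space. */
  #define VA_START        (UL(0xffffffffffffffff) << VA_BITS)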

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:41:32 +01:00
Feng Kan
404268828c arm64: copy_to-from-in_user optimization using copy template
This patch optimizes copy_to/from/in_user for the ARM 64-bit architecture.
The copy template is used as a template file for all the copy*.S files.
A minor change was made to it to accommodate the copy to/from/in user files.

Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Balamurugan Shanmugam <bshanmugam@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:34:44 +01:00
Feng Kan
e5c88e3f2f arm64: Change memcpy in kernel to use the copy template file
This converts memcpy.S to use the copy template file. The copy
template file was originally based on memcpy.S.

Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Balamurugan Shanmugam <bshanmugam@apm.com>
[catalin.marinas@arm.com: removed tmp3(w) .req statements as they are not used]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:34:43 +01:00
Alim Akhtar
efa773fe91 arm64: defconfig: Enable samsung serial and mmc
This patch updates the defconfig, adding the Samsung serial and
Synopsys DesignWare MMC configs related to the Exynos SoC.

Signed-off-by: Alim Akhtar <alim.akhtar@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07 11:34:24 +01:00
Linus Torvalds
a758379b03 Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull EFI fixes from Ingo Molnar:
 "Two EFI fixes: one for x86, one for ARM, fixing a boot crash bug that
  can trigger under newer EFI firmware"

* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  arm64/efi: Fix boot crash by not padding between EFI_MEMORY_RUNTIME regions
  x86/efi: Fix boot crash by mapping EFI memmap entries bottom-up at runtime, instead of top-down
2015-10-03 10:46:41 -04:00
Li Bin
ee556d00cf arm64: ftrace: fix function_graph tracer panic
When the function graph tracer is enabled, the following operations
will trigger a panic:

mount -t debugfs nodev /sys/kernel
echo next_tgid > /sys/kernel/tracing/set_ftrace_filter
echo function_graph > /sys/kernel/tracing/current_tracer
ls /proc/

------------[ cut here ]------------
[  198.501417] Unable to handle kernel paging request at virtual address cb88537fdc8ba316
[  198.506126] pgd = ffffffc008f79000
[  198.509363] [cb88537fdc8ba316] *pgd=00000000488c6003, *pud=00000000488c6003, *pmd=0000000000000000
[  198.517726] Internal error: Oops: 94000005 [#1] SMP
[  198.518798] Modules linked in:
[  198.520582] CPU: 1 PID: 1388 Comm: ls Tainted: G
[  198.521800] Hardware name: linux,dummy-virt (DT)
[  198.522852] task: ffffffc0fa9e8000 ti: ffffffc0f9ab0000 task.ti: ffffffc0f9ab0000
[  198.524306] PC is at next_tgid+0x30/0x100
[  198.525205] LR is at return_to_handler+0x0/0x20
[  198.526090] pc : [<ffffffc0002a1070>] lr : [<ffffffc0000907c0>] pstate: 60000145
[  198.527392] sp : ffffffc0f9ab3d40
[  198.528084] x29: ffffffc0f9ab3d40 x28: ffffffc0f9ab0000
[  198.529406] x27: ffffffc000d6a000 x26: ffffffc000b786e8
[  198.530659] x25: ffffffc0002a1900 x24: ffffffc0faf16c00
[  198.531942] x23: ffffffc0f9ab3ea0 x22: 0000000000000002
[  198.533202] x21: ffffffc000d85050 x20: 0000000000000002
[  198.534446] x19: 0000000000000002 x18: 0000000000000000
[  198.535719] x17: 000000000049fa08 x16: ffffffc000242efc
[  198.537030] x15: 0000007fa472b54c x14: ffffffffff000000
[  198.538347] x13: ffffffc0fada84a0 x12: 0000000000000001
[  198.539634] x11: ffffffc0f9ab3d70 x10: ffffffc0f9ab3d70
[  198.540915] x9 : ffffffc0000907c0 x8 : ffffffc0f9ab3d40
[  198.542215] x7 : 0000002e330f08f0 x6 : 0000000000000015
[  198.543508] x5 : 0000000000000f08 x4 : ffffffc0f9835ec0
[  198.544792] x3 : cb88537fdc8ba316 x2 : cb88537fdc8ba306
[  198.546108] x1 : 0000000000000002 x0 : ffffffc000d85050
[  198.547432]
[  198.547920] Process ls (pid: 1388, stack limit = 0xffffffc0f9ab0020)
[  198.549170] Stack: (0xffffffc0f9ab3d40 to 0xffffffc0f9ab4000)
[  198.582568] Call trace:
[  198.583313] [<ffffffc0002a1070>] next_tgid+0x30/0x100
[  198.584359] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
[  198.585503] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
[  198.586574] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
[  198.587660] [<ffffffc0000907bc>] ftrace_graph_caller+0x6c/0x70
[  198.588896] Code: aa0003f5 2a0103f4 b4000102 91004043 (885f7c60)
[  198.591092] ---[ end trace 6a346f8f20949ac8 ]---

This is because, when the function graph tracer is used, if the traced
function's return value is held in multiple registers ([x0-x7]),
return_to_handler may corrupt them. So in return_to_handler the
parameter registers should be protected properly.

Cc: <stable@vger.kernel.org> # 3.18+
Signed-off-by: Li Bin <huawei.libin@huawei.com>
Acked-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-02 11:12:56 +01:00
Steve Capper
1a541b4e3c arm64: Fix THP protection change logic
6910fa1 ("arm64: enable PTE type bit in the mask for pte_modify") fixes
a problem whereby a large block of PROT_NONE mapped memory is
incorrectly mapped as block descriptors when mprotect is called.

Unfortunately, a subtle bug was introduced by this fix to the THP logic.

If one mmaps a large block of memory, then faults it such that it is
collapsed into THPs; resulting calls to mprotect on this area of memory
will lead to incorrect table descriptors being written instead of block
descriptors. This is because pmd_modify calls pte_modify which is now
allowed to modify the type of the page table entry.

This patch reverts commit 6910fa16db, and
fixes the problem it was trying to address by adjusting PAGE_NONE to
represent a table entry. Thus no change in pte type is required when
moving from PROT_NONE to a different protection.

Fixes: 6910fa16db ("arm64: enable PTE type bit in the mask for pte_modify")
Cc: <stable@vger.kernel.org> # 4.0+
Cc: Feng Kan <fkan@apm.com>
Reported-by: Ganapatrao Kulkarni <Ganapatrao.Kulkarni@caviumnetworks.com>
Tested-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-01 18:02:21 +01:00
Ard Biesheuvel
0ce3cc008e arm64/efi: Fix boot crash by not padding between EFI_MEMORY_RUNTIME regions
The new Properties Table feature introduced in UEFIv2.5 may
split memory regions that cover PE/COFF memory images into
separate code and data regions. Since these regions only differ
in the type (runtime code vs runtime data) and the permission
bits, but not in the memory type attributes (UC/WC/WT/WB), the
spec does not require them to be aligned to 64 KB.

Since the relative offset of PE/COFF .text and .data segments
cannot be changed on the fly, this means that we can no longer
pad out those regions to be mappable using 64 KB pages.
Unfortunately, there is no annotation in the UEFI memory map
that identifies data regions that were split off from a code
region, so we must apply this logic to all adjacent runtime
regions whose attributes only differ in the permission bits.

So instead of rounding each memory region to 64 KB alignment at
both ends, only round down regions that are not directly
preceded by another runtime region with the same type
attributes. Since the UEFI spec does not mandate that the memory
map be sorted, this means we also need to sort it first.

Note that this change will result in all EFI_MEMORY_RUNTIME
regions whose start addresses are not aligned to the OS page
size to be mapped with executable permissions (i.e., on kernels
compiled with 64 KB pages). However, since these mappings are
only active during the time that UEFI Runtime Services are being
invoked, the window for abuse is rather small.

Tested-by: Mark Salter <msalter@redhat.com>
Tested-by: Mark Rutland <mark.rutland@arm.com> [UEFI 2.4 only]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Reviewed-by: Mark Salter <msalter@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Cc: <stable@vger.kernel.org> # v4.0+
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Leif Lindholm <leif.lindholm@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1443218539-7610-3-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-10-01 12:51:28 +02:00
Linus Torvalds
b6d980f493 AMD fixes for bugs introduced in the 4.2 merge window,
and a few PPC bug fixes too.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQEcBAABAgAGBQJWBSn7AAoJEL/70l94x66Dxd4H/RT6kWWj9x4grEYUkcJUDyK2
 AXm7XcKQm04auwAic8Otr+ts/Qix/50kWmBe/TU0QLgqb8rj5Dj3yGFK6Z1y6mAz
 KvaxqMJd4tZGTqN0DDvC2ItEdzjfAdeJZo/FHXqPHVspG0G14T7STLna02LTBBEJ
 tNzY9qor8nFhg2fT2szqKaudUNgTqkCTpo57o2BrHE96SHG+m0WdpQCV1F5hPVpg
 Te0Pb7qX9xng5n3sQ7IV/t3QYbrza1ACwNQS9XJa0Yu6iEz7JdmVmzHQASK9ynn6
 hUHhsNYGx4IsPjPtfJk2GroNaRDZL+VMzw07tfcOvPx8xkS9hS63pwzmSBqfLrM=
 =Ywqn
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM fixes from Paolo Bonzini:
 "AMD fixes for bugs introduced in the 4.2 merge window, and a few PPC
  bug fixes too"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: disable halt_poll_ns as default for s390x
  KVM: x86: fix off-by-one in reserved bits check
  KVM: x86: use correct page table format to check nested page table reserved bits
  KVM: svm: do not call kvm_set_cr0 from init_vmcb
  KVM: x86: trap AMD MSRs for the TSeg base and mask
  KVM: PPC: Book3S: Take the kvm->srcu lock in kvmppc_h_logical_ci_load/store()
  KVM: PPC: Book3S HV: Pass the correct trap argument to kvmhv_commence_exit
  KVM: PPC: Book3S HV: Fix handling of interrupted VCPUs
  kvm: svm: reset mmu on VCPU reset
2015-09-25 10:51:40 -07:00
David Hildenbrand
920552b213 KVM: disable halt_poll_ns as default for s390x
We observed some performance degradation on s390x with dynamic
halt polling. Until we can provide a proper fix, let's enable
halt_poll_ns as default only for supported architectures.

Architectures are now free to set their own halt_poll_ns
default value.

Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-09-25 10:31:30 +02:00
Linus Torvalds
4401555a98 DeviceTree fixes for 4.3:
- Silence bogus warning for of_irq_parse_pci
 - Fix typo in ARM idle-states binding doc and dts files
 - Various minor binding documentation updates
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJWBI2/AAoJEMhvYp4jgsXi08cH/11S/92w1PLNmWCv4y7TXHQw
 XTJi7/Hk+q6otV/FigXukdYClR17h/3mFyCKKXHrkAgy/AoSrvN/ABe0bLLoT2AQ
 xh0Rx8F6vZwa6ro1MZcrn3ZxQkoJlNUhoIXtY84oSPWd1ernLOar6HonFiynCQQc
 bDZo5zoLj6DBbSO+UpVvEQN57ogwPgFZ1hGDjJeyyH8c1755z2OVA+k8O0dwjmqW
 Xav/7TO8bFEIbUnfWeKnVyK45qmJxHcTbn3nxUgYFQj3DMI2Hn86WEY4cg8QvTo+
 SpdO1Aio6b9NSTNnpiSvPnc2MFNPFaWjiqm1w86w+PTm8oUT+p1V0OG/EE27fcY=
 =VUfk
 -----END PGP SIGNATURE-----

Merge tag 'devicetree-fixes-for-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux

Pull DeviceTree fixes from Rob Herring:
 - Silence bogus warning for of_irq_parse_pci
 - Fix typo in ARM idle-states binding doc and dts files
 - Various minor binding documentation updates

* tag 'devicetree-fixes-for-4.3' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
  Documentation: arm: Fix typo in the idle-states bindings examples
  gpio: mention in DT binding doc that <name>-gpio is deprecated
  of_pci_irq: Silence bogus "of_irq_parse_pci() failed ..." messages.
  devicetree: bindings: Extend the bma180 bindings with bma250 info
  of: thermal: Mark cooling-*-level properties optional
  of: thermal: Fix inconsitency between cooling-*-state and cooling-*-level
  Docs: dt: add #msi-cells to GICv3 ITS binding
  of: add vendor prefix for Socionext Inc.
2015-09-24 17:46:38 -07:00