Commit Graph

1046 Commits

Author SHA1 Message Date
Zi Shen Lim
5e6e15a2c4 arm64: introduce aarch64_insn_gen_logical_shifted_reg()
Introduce function to generate logical (shifted register)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:21 +01:00
Zi Shen Lim
27f95ba59b arm64: introduce aarch64_insn_gen_data3()
Introduce function to generate data-processing (3 source) instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim
6481063989 arm64: introduce aarch64_insn_gen_data2()
Introduce function to generate data-processing (2 source) instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim
546dd36b44 arm64: introduce aarch64_insn_gen_data1()
Introduce function to generate data-processing (1 source) instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim
5fdc639a7a arm64: introduce aarch64_insn_gen_add_sub_shifted_reg()
Introduce function to generate add/subtract (shifted register)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim
6098f2d5c7 arm64: introduce aarch64_insn_gen_movewide()
Introduce function to generate move wide (immediate) instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim
4a89d2c98e arm64: introduce aarch64_insn_gen_bitfield()
Introduce function to generate bitfield instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim
9951a157fa arm64: introduce aarch64_insn_gen_add_sub_imm()
Introduce function to generate add/subtract (immediate) instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
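
For illustration, a standalone encoder for one instruction in this family, A64 ADD
(immediate, 64-bit), packed per the ARMv8 ARM field layout. This is only a sketch of
the technique; it is not the kernel's aarch64_insn_gen_add_sub_imm(), whose actual
signature is not shown here.

#include <stdint.h>

/* Pack "ADD Xd, Xn, #imm12" (sf=1, op=0, S=0, shift=0).
 * encode_add_imm64(0, 0, 1) yields 0x91000400, i.e. add x0, x0, #1. */
static uint32_t encode_add_imm64(unsigned int rd, unsigned int rn,
                                 unsigned int imm12)
{
        return 0x91000000u |
               ((imm12 & 0xfffu) << 10) |
               ((rn & 0x1fu) << 5) |
               (rd & 0x1fu);
}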
Zi Shen Lim
1bba567d0f arm64: introduce aarch64_insn_gen_load_store_pair()
Introduce function to generate load/store pair instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:20 +01:00
Zi Shen Lim
17cac17988 arm64: introduce aarch64_insn_gen_load_store_reg()
Introduce function to generate load/store (register offset)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Zi Shen Lim
345e0d35ec arm64: introduce aarch64_insn_gen_cond_branch_imm()
Introduce function to generate conditional branch (immediate)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Zi Shen Lim
c0cafbae20 arm64: introduce aarch64_insn_gen_branch_reg()
Introduce function to generate unconditional branch (register)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Zi Shen Lim
617d2fbc45 arm64: introduce aarch64_insn_gen_comp_branch_imm()
Introduce function to generate compare & branch (immediate)
instructions.

Signed-off-by: Zi Shen Lim <zlim.lnx@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Behan Webster
a4ceab1adb arm64: LLVMLinux: Use global stack pointer in return_address()
The global register current_stack_pointer holds the current stack pointer.
This change supports being able to compile the kernel with both gcc and clang.

Author: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Mark Charlebois
34ccf8f455 arm64: LLVMLinux: Use global stack register variable for aarch64
To support both Clang and GCC, use the global stack register variable vs
a local register variable.

Author: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Behan Webster
2128df143d arm64: LLVMLinux: Use current_stack_pointer in kernel/traps.c
Use the global current_stack_pointer to get the value of the stack pointer.
This change supports being able to compile the kernel with both gcc and clang.

Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Behan Webster
786248705e arm64: LLVMLinux: Calculate current_thread_info from current_stack_pointer
Use the global current_stack_pointer to get the value of the stack pointer.
This change supports being able to compile the kernel with both gcc and clang.

Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Behan Webster
bb28cec4ea arm64: LLVMLinux: Use current_stack_pointer in save_stack_trace_tsk
Use the global current_stack_pointer to get the value of the stack pointer.
This change supports being able to compile the kernel with both gcc and clang.

Signed-off-by: Behan Webster <behanw@converseincode.com>
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
Behan Webster
3337a10e0d arm64: LLVMLinux: Add current_stack_pointer() for arm64
Define a global named register for current_stack_pointer. The use of this new
variable guarantees that both gcc and clang can access this register in C code.

Signed-off-by: Behan Webster <behanw@converseincode.com>
Reviewed-by: Jan-Simon Möller <dl9pf@gmx.de>
Reviewed-by: Mark Charlebois <charlebm@gmail.com>
Reviewed-by: Olof Johansson <olof@lixom.net>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:19 +01:00
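
The definition being added is tiny; a minimal sketch of the idea (the exact header it
lives in is assumed here): a global named register variable that both gcc and clang
resolve to direct uses of sp.

/* Global named register variable: reads compile to a direct use of sp. */
register unsigned long current_stack_pointer asm ("sp");

static inline unsigned long read_sp_example(void)
{
        return current_stack_pointer;
}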
Laura Abbott
11d91a770f arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support
In a similar fashion to other architectures, add the infrastructure
and Kconfig to enable DEBUG_SET_MODULE_RONX support. When
enabled, module ranges will be marked read-only/no-execute as
appropriate.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[will: fixed off-by-one in module end check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
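
A hedged sketch of how the new infrastructure is typically used once the module
ranges are known; set_memory_ro()/set_memory_nx() take a page-aligned address and a
page count, though the exact call sites and helpers in this patch are assumed.

/* Sketch only: mark module text read-only and module data non-executable. */
static void protect_module_sketch(unsigned long text, int text_pages,
                                  unsigned long data, int data_pages)
{
        set_memory_ro(text, text_pages);
        set_memory_nx(data, data_pages);
}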
Laura Abbott
b6d4f2800b arm64: Introduce {set,clear}_pte_bit
It's useful to be able to change individual bits in ptes at times.
Introduce functions for this and update existing pte_mk* functions
to use these primitives.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[will: added missing inline keyword for new header functions]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
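
The primitives themselves are one-liners; a self-contained sketch with simplified
stand-in types (the kernel versions operate on pte_t/pgprot_t via
pte_val()/pgprot_val()).

#include <stdint.h>

typedef uint64_t pte_sketch_t;     /* stand-in for pte_t    */
typedef uint64_t prot_sketch_t;    /* stand-in for pgprot_t */

static inline pte_sketch_t set_pte_bit(pte_sketch_t pte, prot_sketch_t prot)
{
        return pte | prot;          /* set the requested bits   */
}

static inline pte_sketch_t clear_pte_bit(pte_sketch_t pte, prot_sketch_t prot)
{
        return pte & ~prot;         /* clear the requested bits */
}

pte_wrprotect() and friends can then be expressed in terms of these two helpers
rather than open-coding the bit operations.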
Arun Chandran
5e05153144 arm64: convert part of soft_restart() to assembly
The current soft_restart() and setup_restart implementations incorrectly
assume that the compiler will not spill/fill values to/from the stack.
However, this assumption turns out to be wrong, as revealed by the
disassembly of the existing code (v3.16) built with Linaro GCC 4.9-2014.05.

ffffffc000085224 <soft_restart>:
ffffffc000085224:  a9be7bfd  stp    x29, x30, [sp,#-32]!
ffffffc000085228:  910003fd  mov    x29, sp
ffffffc00008522c:  f9000fa0  str    x0, [x29,#24]
ffffffc000085230:  94003d21  bl     ffffffc0000946b4 <setup_mm_for_reboot>
ffffffc000085234:  94003b33  bl     ffffffc000093f00 <flush_cache_all>
ffffffc000085238:  94003dfa  bl     ffffffc000094a20 <cpu_cache_off>
ffffffc00008523c:  94003b31  bl     ffffffc000093f00 <flush_cache_all>
ffffffc000085240:  b0003321  adrp   x1, ffffffc0006ea000 <reset_devices>

ffffffc000085244:  f9400fa0  ldr    x0, [x29,#24] ----> spilled addr
ffffffc000085248:  f942fc22  ldr    x2, [x1,#1528] ----> global memstart_addr

ffffffc00008524c:  f0000061  adrp   x1, ffffffc000094000 <__inval_cache_range+0x40>
ffffffc000085250:  91290021  add    x1, x1, #0xa40
ffffffc000085254:  8b010041  add    x1, x2, x1
ffffffc000085258:  d2c00802  mov    x2, #0x4000000000           // #274877906944
ffffffc00008525c:  8b020021  add    x1, x1, x2
ffffffc000085260:  d63f0020  blr    x1
...

Here the compiler generates memory accesses after the cache is disabled,
loading stale values for the spilled value and the global variable. As we
cannot control when the compiler will access memory, we must rewrite the
functions in assembly to stash the values we need in registers prior to
disabling the cache, avoiding the use of memory.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Ard Biesheuvel
58015ec6b8 arm64/efi: efistub: don't abort if base of DRAM is occupied
If we cannot relocate the kernel Image to its preferred offset of base of DRAM
plus TEXT_OFFSET, instead relocate it to the lowest available 2 MB boundary plus
TEXT_OFFSET. We may lose a bit of memory at the low end, but we can still
proceed normally otherwise.

Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Ard Biesheuvel
c16173fa56 arm64/efi: efistub: cover entire static mem footprint in PE/COFF .text
The static memory footprint of a kernel Image at boot is larger than the
Image file itself. Things like .bss data and initial page tables are allocated
statically but populated dynamically so their content is not contained in the
Image file.

However, if EFI (or GRUB) has loaded the Image at precisely the desired offset
of base of DRAM + TEXT_OFFSET, the Image will be booted in place, and we have
to make sure that the allocation done by the PE/COFF loader is large enough.

Fix this by growing the PE/COFF .text section to cover the entire static
memory footprint. The part of the section that is not covered by the payload
will be zero initialised by the PE/COFF loader.

Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Mark Rutland
113954c646 arm64: spin-table: handle unmapped cpu-release-addrs
In certain cases the cpu-release-addr of a CPU may not fall in the
linear mapping (e.g. when the kernel is loaded above this address due to
the presence of other images in memory). This is problematic for the
spin-table code as it assumes that it can trivially convert a
cpu-release-addr to a valid VA in the linear map.

This patch modifies the spin-table code to use a temporary cached
mapping to write to a given cpu-release-addr, enabling us to support
addresses regardless of whether they are covered by the linear mapping.

Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[ardb: added (__force void *) cast]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Ard Biesheuvel
169c018de7 arm64: don't flag non-aliasing VIPT I-caches as aliasing
VIPT caches are non-aliasing if the index is derived from address bits that
are always equal between VA and PA. Classifying these as aliasing results in
unnecessary flushing which may hurt performance.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
Ard Biesheuvel
80c517b0ff arm64: add helper functions to read I-cache attributes
This adds helper functions and #defines to <asm/cachetype.h> to read the
line size and the number of sets from the level 1 instruction cache.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-08 14:39:18 +01:00
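
For reference, the minimum I-cache line size is derived from CTR_EL0, whose IminLine
field (bits [3:0]) is log2 of the line size in 4-byte words. A sketch of the
derivation; the helper names are illustrative, not necessarily the ones added to
<asm/cachetype.h>.

#include <stdint.h>

static inline uint64_t read_ctr_el0(void)
{
        uint64_t ctr;

        asm volatile("mrs %0, ctr_el0" : "=r" (ctr));
        return ctr;
}

/* IminLine is log2 of the number of 4-byte words in the smallest line. */
static inline unsigned int icache_min_line_size(void)
{
        return 4u << (read_ctr_el0() & 0xf);
}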
Linus Torvalds
2b12164b55 A smattering of bug fixes across most architectures.

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm fixes from Paolo Bonzini:
 "A smattering of bug fixes across most architectures"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  powerpc/kvm/cma: Fix panic introduces by signed shift operation
  KVM: s390/mm: Fix guest storage key corruption in ptep_set_access_flags
  KVM: s390/mm: Fix storage key corruption during swapping
  arm/arm64: KVM: Complete WFI/WFE instructions
  ARM/ARM64: KVM: Nuke Hyp-mode tlbs before enabling MMU
  KVM: s390/mm: try a cow on read only pages for key ops
  KVM: s390: Fix user triggerable bug in dead code
2014-09-06 16:42:12 -07:00
Sudeep Holla
3d8afe3099 arm64: use irq_set_affinity with force=false when migrating irqs
The arm64 interrupt migration code on cpu offline calls
irqchip.irq_set_affinity() with the argument force=true. Originally
this argument had no effect because it was not used by any interrupt
chip driver and no semantics were defined.

This changed with commit 01f8fa4f01 ("genirq: Allow forcing cpu
affinity of interrupts") which made the force argument useful to route
interrupts to not yet online cpus without checking the target cpu
against the cpu online mask. The following commit ffde1de640
("irqchip: gic: Support forced affinity setting") implemented this for
the GIC interrupt controller.

As a consequence the cpu offline irq migration fails if CPU0 is
offlined, because CPU0 is still set in the affinity mask and the
validation against the cpu online mask is skipped due to the force
argument being true. The following first_cpu(mask) selection always
selects CPU0 as the target.

Commit 601c942176d8("arm64: use cpu_online_mask when using forced
irq_set_affinity") intended to fix the above mentioned issue but
introduced another issue where affinity can be migrated to a wrong
CPU due to unconditional copy of cpu_online_mask.

As with arm, solve the issue by calling irq_set_affinity() with
force=false from the CPU offline irq migration code so that the GIC
driver validates the affinity mask against the CPU online mask and
therefore removes CPU0 from the possible target candidates. Also revert
the changes done in commit 601c942176 as they are no longer needed.

Tested on Juno platform.

Fixes: 601c942176d8("arm64: use cpu_online_mask when using forced
	irq_set_affinity")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.10.x
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-03 19:24:38 +01:00
Marc Zyngier
c59e1ef874 arm64: Get rid of handle_IRQ
All the arm64 irqchip drivers have been converted to handle_domain_irq,
making it possible to remove the handle_IRQ stub entirely.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lkml.kernel.org/r/1409047421-27649-26-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
2014-09-03 13:11:00 +00:00
Marc Zyngier
a1ddc74a23 arm64: Convert handle_IRQ to use __handle_domain_irq
In order to limit code duplication, convert the architecture specific
handle_IRQ to use the generic __handle_domain_irq function.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lkml.kernel.org/r/1409047421-27649-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
2014-09-03 12:58:13 +00:00
Will Deacon
5e39977edf Revert "arm64: cpuinfo: print info for all CPUs"
It turns out that vendors are relying on the format of /proc/cpuinfo,
and we've even spotted out-of-tree hacks attempting to make it look
identical to the format used by arch/arm/. That means we can't afford to
churn this interface in mainline, so revert the recent reformatting of
the file for arm64 pending discussions on the list to find out what
people actually want.

This reverts commit d7a49086f2.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-01 15:55:22 +01:00
Leo Yan
7c68a9cc04 arm64: fix bug for reloading FPSIMD state after cpu power off
arm64 now defers reloading FPSIMD state, but this optimization also
introduces a bug after a cpu resumes from low power mode.

After the cpu has been powered off, software needs to set the cpu's
fpsimd_last_state to NULL so that the FPSIMD state is forcibly reloaded
for the thread. Otherwise it is possible that the task's
fpsimd_state.cpu field contains the id of the current cpu while the
cpu's fpsimd_last_state per-cpu variable still points to the task's
fpsimd_state, in which case the kernel will skip reloading the context
on return to userland.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leo Yan <leoy@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-01 12:55:21 +01:00
Will Deacon
3168a74346 arm64: report correct stack pointer in KSTK_ESP for compat tasks
The KSTK_ESP macro is used to determine the user stack pointer for a
given task. In particular, this is used to report the '[stack]' VMA
in /proc/self/maps, which is used by Android to determine the stack
location for children of the main thread.

This patch fixes the macro to use user_stack_pointer instead of directly
returning sp. This means that we report w13 instead of sp, since the
former is used as the stack pointer when executing in AArch32 state.

Cc: <stable@vger.kernel.org>
Reported-by: Serban Constantinescu <Serban.Constantinescu@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-29 16:11:10 +01:00
Catalin Marinas
2520d03972 arm64: Add brackets around user_stack_pointer()
Commit 5f888a1d33 (ARM64: perf: support dwarf unwinding in compat mode)
changes user_stack_pointer() to return the compat SP for 32-bit tasks
but without brackets around the whole definition, with possible issues
on the call sites (noticed with a subsequent fix for KSTK_ESP).

Fixes: 5f888a1d33 (ARM64: perf: support dwarf unwinding in compat mode)
Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-29 16:11:00 +01:00
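
The hazard being fixed is plain operator precedence: a macro that expands to a bare
ternary can bind to neighbouring operators at the call site. A standalone
illustration (the macros below are stand-ins, not the kernel's
user_stack_pointer()):

#include <stdio.h>

#define SP_UNBRACKETED(compat)  (compat) ? 0x1000 : 0x2000
#define SP_BRACKETED(compat)    ((compat) ? 0x1000 : 0x2000)

int main(void)
{
        int compat = 1;

        /* The "+ 8" binds to the else-arm of the ternary, not to the result. */
        printf("0x%x\n", SP_UNBRACKETED(compat) + 8);   /* prints 0x1000 */
        printf("0x%x\n", SP_BRACKETED(compat) + 8);     /* prints 0x1008 */
        return 0;
}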
Radim Krčmář
13a34e067e KVM: remove garbage arg to *hardware_{en,dis}able
In the beginning was on_each_cpu(), which required an unused argument to
kvm_arch_ops.hardware_{en,dis}able, but this was soon forgotten.

Remove unnecessary arguments that stem from this.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-29 16:35:55 +02:00
Radim Krčmář
0865e636ae KVM: static inline empty kvm_arch functions
Using static inline is going to save a few bytes and cycles.
For example on powerpc, the difference is 700 B after stripping.
(5 kB before)

This patch also deals with two overlooked empty functions:
kvm_arch_flush_shadow was not removed from arch/mips/kvm/mips.c
  2df72e9bc KVM: split kvm_arch_flush_shadow
and kvm_arch_sched_in never made it into arch/ia64/kvm/kvm-ia64.c.
  e790d9ef6 KVM: add kvm_arch_sched_in

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-29 16:35:55 +02:00
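
The transformation itself is mechanical: an empty out-of-line definition repeated in
each arch's .c file becomes a single static inline stub in the shared header, so no
call or function body is emitted at all. Sketch with an illustrative name:

/* In the shared header, replacing per-arch empty definitions. */
static inline void kvm_arch_sketch_hook(void)
{
        /* intentionally empty */
}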
Paolo Bonzini
656473003b KVM: forward declare structs in kvm_types.h
Opaque KVM structs are useful for prototypes in asm/kvm_host.h, to avoid
"'struct foo' declared inside parameter list" warnings (and consequent
breakage due to conflicting types).

Move them from individual files to a generic place in linux/kvm_types.h.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-08-29 16:35:53 +02:00
Feng Kan
ab81873974 arm64: dts: add random number generator dts node to APM X-Gene platform.
This adds a random number generator dts node to the APM X-Gene platform.

Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-08-29 21:47:12 +08:00
Christoffer Dall
05e0127f9e arm/arm64: KVM: Complete WFI/WFE instructions
The architecture specifies that when the processor wakes up from a WFE
or WFI instruction, the instruction is considered complete; however, we
currently return to EL1 (or EL0) at the WFI/WFE instruction itself.

While most guests may not be affected by this, because their local
exception handler performs an exception return with the event bit set
or with an interrupt pending, some guests like UEFI will get wedged due
to this little mishap.

Simply skip the instruction when we have completed the emulation.

Cc: <stable@vger.kernel.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-08-29 11:53:53 +02:00
Pranavkumar Sawargaonkar
f6edbbf36d ARM/ARM64: KVM: Nuke Hyp-mode tlbs before enabling MMU
X-Gene u-boot runs in EL2 mode with the MMU enabled, hence we might
have stale EL2 TLB entries when we enable the EL2 MMU on each host CPU.

This can happen on any ARM/ARM64 board running bootloader in
Hyp-mode (or EL2-mode) with MMU enabled.

This patch ensures that we flush all Hyp-mode (or EL2-mode) TLBs
on each host CPU before enabling Hyp-mode (or EL2-mode) MMU.

Cc: <stable@vger.kernel.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-08-29 11:53:26 +02:00
Will Deacon
5b75a6af11 arm64: perf: don't rely on layout of pt_regs when grabbing sp or pc
The current perf_regs code relies on sp and pc sitting just off the end
of the pt_regs->regs array. This is ugly and fragile, so this patch
checks for these registers explicitly and returns the appropriate field.

Acked-by: Jean Pihet <jean.pihet@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-28 20:01:50 +01:00
Will Deacon
85487edd25 arm64: ptrace: fix compat reg getter/setter return values
copy_{to,from}_user return the number of bytes remaining on failure, not
an error code.

This patch returns -EFAULT when the copy operation didn't complete,
rather than expose the number of bytes not copied directly to userspace.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-28 20:01:42 +01:00
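
The fix follows the usual idiom: copy_{to,from}_user() return the number of bytes
left uncopied, so any non-zero result is converted to -EFAULT instead of being handed
back to userspace. A sketch of the shape of the fix (function and buffer names are
illustrative, not the actual ptrace code):

static int compat_reg_put_sketch(void __user *ubuf, const void *kbuf, size_t n)
{
        if (copy_to_user(ubuf, kbuf, n))
                return -EFAULT;     /* not the residual byte count */
        return 0;
}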
Will Deacon
27d7ff273c arm64: ptrace: fix compat hardware watchpoint reporting
I'm not sure what I was on when I wrote this, but when iterating over
the hardware watchpoint array (hbp_watch_array), our index is off by
ARM_MAX_BRP, so we walk off the end of our thread_struct...

... except, a dodgy condition in the loop means that it never executes
at all (bp cannot be NULL).

This patch fixes the code so that we remove the bp check and use the
correct index for accessing the watchpoint structures.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-28 20:01:36 +01:00
Will Deacon
bd218bce92 KVM: ARM/arm64: return -EFAULT if copy_from_user fails in set_timer_reg
We currently return the number of bytes not copied if set_timer_reg
fails, which is almost certainly not what userspace would like.

This patch returns -EFAULT instead.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-08-27 22:49:45 +02:00
Will Deacon
18d457661f KVM: ARM/arm64: avoid returning negative error code as bool
is_valid_cache returns true if the specified cache is valid.
Unfortunately, if the parameter passed is out of range, we return
-ENOENT, which ends up as true, leading to potential hilarity.

This patch returns false on the failure path instead.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-08-27 22:49:45 +02:00
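
The underlying C behaviour: a function declared bool converts any non-zero return
value, including a negative errno, to true. A standalone illustration with made-up
names:

#include <stdbool.h>
#include <stdio.h>

static bool is_valid_index_buggy(unsigned int idx, unsigned int max)
{
        if (idx >= max)
                return -2;      /* intended as an error code... */
        return true;
}

int main(void)
{
        /* ...but bool conversion turns -2 into true (prints 1). */
        printf("%d\n", is_valid_index_buggy(100, 4));
        return 0;
}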
Will Deacon
4000be423c KVM: ARM/arm64: fix broken __percpu annotation
Running sparse results in a bunch of noisy address space mismatches
thanks to the broken __percpu annotation on kvm_get_running_vcpus.

This function returns a pcpu pointer to a pointer, not a pointer to a
pcpu pointer. This patch fixes the annotation, which kills the warnings
from sparse.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-08-27 22:49:45 +02:00
Will Deacon
6951e48bff KVM: ARM/arm64: fix non-const declaration of function returning const
Sparse kicks up about a type mismatch for kvm_target_cpu:

arch/arm64/kvm/guest.c:271:25: error: symbol 'kvm_target_cpu' redeclared with different type (originally declared at ./arch/arm64/include/asm/kvm_host.h:45) - different modifiers

so fix this by adding the missing const attribute to the function
declaration.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-08-27 22:49:45 +02:00
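
The fix is simply making the prototype carry the same const as the definition;
otherwise the two declarations have conflicting types, which is what sparse reports.
Sketch with illustrative names:

struct target_desc;

/* Header: the const must match the definition below. */
const struct target_desc *lookup_target_sketch(unsigned int id);

/* C file */
const struct target_desc *lookup_target_sketch(unsigned int id)
{
        (void)id;
        return 0;
}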
Christoffer Dall
98047888bb arm/arm64: KVM: Support KVM_CAP_READONLY_MEM
When userspace loads code and data into read-only memory regions, KVM
needs to be able to handle this on arm and arm64.  Specifically this is
used when running code directly from a read-only flash device; the
common scenario is a UEFI blob loaded with the -bios option in QEMU.

Note that the MMIO exit on writes to read-only memory is ABI and can
be used to emulate block-erase style flash devices.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-08-27 22:46:09 +02:00
Geoff Levand
5843be2279 arm64: Remove unused variable in head.S
Remove an unused local variable from head.S.  It seems this was never
used, even in the initial commit
9703d9d7f7 (arm64: Kernel booting and
initialisation), and is a leftover from a previous implementation
of __calc_phys_offset.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-26 19:24:00 +01:00
Colin Ian King
031cb42838 arm64/crypto: remove redundant update of data
Originally found by cppcheck:

[arch/arm64/crypto/sha2-ce-glue.c:153]: (warning) Assignment of
  function parameter has no effect outside the function. Did you
  forget dereferencing it?

Updating data by blocks * SHA256_BLOCK_SIZE at the end of
sha2_finup is redundant code and can be removed.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-26 11:42:22 +01:00
Linus Torvalds
7be141d055 Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Ingo Molnar:
 "A couple of EFI fixes, plus misc fixes all around the map"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  efi/arm64: Store Runtime Services revision
  firmware: Do not use WARN_ON(!spin_is_locked())
  x86_32, entry: Clean up sysenter_badsys declaration
  x86/doc: Fix the 'tlb_single_page_flush_ceiling' sysconfig path
  x86/mm: Fix sparse 'tlb_single_page_flush_ceiling' warning and make the variable read-mostly
  x86/mm: Fix RCU splat from new TLB tracepoints
2014-08-24 16:17:41 -07:00
Semen Protsenko
6a7519e813 efi/arm64: Store Runtime Services revision
"efi" global data structure contains "runtime_version" field which must
be assigned in order to use it later in Runtime Services virtual calls
(virt_efi_* functions).

Before this patch "runtime_version" was unassigned (0), so each
Runtime Service virtual call that checks revision would fail.

Signed-off-by: Semen Protsenko <semen.protsenko@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-08-22 08:45:41 +01:00
Will Deacon
44b375070f Revert "arm64: Do not invoke audit_syscall_* functions if !CONFIG_AUDIT_SYSCALL"
For some reason, the audit patches didn't make it out of -next this
merge window, so revert our temporary hack and let the audit guys deal
with fixing up -next.

This reverts commit 2a8f45b040.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-19 22:05:45 +01:00
Ganapatrao Kulkarni
07a15dd55a arm64: mm: update max pa bits to 48
Now that we support 48-bit physical addressing, update MAX_PHYSMEM_BITS
accordingly.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ganapatrao Kulkarni <ganapatrao.kulkarni@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-19 20:23:02 +01:00
Leif Lindholm
86c8b27a01 arm64: ignore DT memreserve entries when booting in UEFI mode
UEFI provides its own method for marking regions to reserve, via the
memory map which is also used to initialise memblock. So when using the
UEFI memory map, ignore any memreserve entries present in the DT.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-19 20:22:03 +01:00
Mark Brown
49d947face arm64: configs: Enable X-Gene SATA and ethernet in defconfig
Currently, when run on an APM platform, the ARMv8 defconfig has no viable
options for rootfs other than a ramdisk, which is rather limiting. Since
we already have both SATA and the bits needed for NFS root enabled, we just
need to enable the relevant drivers, so do that, helping enable direct
testing of upstream.

If the configuration ends up becoming too big, we can consider modularising
some of the drivers and asking people to use an initramfs, but for now this
is not an issue.

Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-19 19:26:09 +01:00
Ard Biesheuvel
4190312beb arm64: align randomized TEXT_OFFSET on 4 kB boundary
When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and
the embedded EFI stub is executed in place. The EFI stub relocates the
Image to reside TEXT_OFFSET bytes above a 2 MB boundary, and jumps into
the kernel proper.

In AArch64, PC relative symbol references are emitted using adrp/add or
adrp/ldr pairs, where the offset into a 4 kB page is resolved using a
separate :lo12: relocation. This implicitly assumes that the code will
always be executed at the same relative offset with respect to a 4 kB
boundary, or the references will point to the wrong address.

This means we should link the kernel at a 4 kB aligned base address in
order to remain compatible with the base address the UEFI loader uses
when doing the initial load of Image. So update the code that generates
TEXT_OFFSET to choose a multiple of 4 kB.

At the same time, update the code so it chooses from the interval [0..2MB)
as the author originally intended.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-19 19:26:09 +01:00
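
The constraint boils down to picking a random multiple of 4 kB in [0, 2 MB), i.e. one
of 512 candidate offsets. The kernel does this in the build system; the snippet below
is only a C model of the interval and granularity, not the actual Makefile rule.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
        unsigned long text_offset;

        srand((unsigned int)time(NULL));
        /* 512 slots of 4 kB cover [0, 2 MB); every candidate is 4 kB aligned. */
        text_offset = (unsigned long)(rand() % 512) * 0x1000UL;
        printf("TEXT_OFFSET = 0x%06lx\n", text_offset);
        return 0;
}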
Will Deacon
a97a42c476 arm64: compat: wire up memfd_create and getrandom syscalls for aarch32
arch/arm/ just grew support for the new memfd_create and getrandom
syscalls, so add them to our compat layer too.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-18 19:47:04 +01:00
Ard Biesheuvel
a3a80544ac arm64: fix typo in I-cache policy detection
This removes an unfortunately placed semi-colon resulting in all instruction
caches being classified as AIVIVT.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-18 19:47:03 +01:00
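
The bug class is the classic stray semicolon: a ';' directly after the if-condition
terminates the statement, so the intended body runs unconditionally. A standalone
illustration, not the kernel's detection code:

#include <stdio.h>

int main(void)
{
        int aliasing = 0;

        if (aliasing == 1);                        /* stray semicolon */
                printf("flagged as aliasing\n");   /* always runs     */
        return 0;
}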
Peter Zijlstra
92ba1f530b locking,arch,arm64: Fold atomic_ops
Many of the atomic op implementations are the same except for one
instruction; fold the lot into a few CPP macros and reduce LoC.

This also prepares for easy addition of new ops.

Requires the asm_op due to eor.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chen Gang <gang.chen@asianux.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/20140508135851.995123148@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-08-14 12:48:04 +02:00
Linus Torvalds
f0094b28f3 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:
 "Several networking final fixes and tidies for the merge window:

   1) Changes during the merge window unintentionally took away the
      ability to build bluetooth modular, fix from Geert Uytterhoeven.

   2) Several phy_node reference count bug fixes from Uwe Kleine-König.

   3) Fix ucc_geth build failures, also from Uwe Kleine-König.

   4) Fix klog false positives when netlink messages go to network
      taps, by properly resetting the network header.  Fix from Daniel
      Borkmann.

   5) Sizing estimate of VF netlink messages is too small, from Jiri
      Benc.

   6) New APM X-Gene SoC ethernet driver, from Iyappan Subramanian.

   7) VLAN untagging is erroneously dependent upon whether the VLAN
      module is loaded or not, but there are generic dependencies that
      matter wrt what can be expected as the SKB enters the stack.
      Make the basic untagging generic code, and do it unconditionally.
      From Vlad Yasevich.

   8) xen-netfront only has so many slots in its transmit queue so
      linearize packets that have too many frags.  From Zoltan Kiss.

   9) Fix suspend/resume PHY handling in bcmgenet driver, from Florian
      Fainelli"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (55 commits)
  net: bcmgenet: correctly resume adapter from Wake-on-LAN
  net: bcmgenet: update UMAC_CMD only when link is detected
  net: bcmgenet: correctly suspend and resume PHY device
  net: bcmgenet: request and enable main clock earlier
  net: ethernet: myricom: myri10ge: myri10ge.c: Cleaning up missing null-terminate after strncpy call
  xen-netfront: Fix handling packets on compound pages with skb_linearize
  net: fec: Support phys probed from devicetree and fixed-link
  smsc: replace WARN_ON() with WARN_ON_SMP()
  xen-netback: Don't deschedule NAPI when carrier off
  net: ethernet: qlogic: qlcnic: Remove duplicate object file from Makefile
  wan: wanxl: Remove typedefs from struct names
  m68k/atari: EtherNEC - ethernet support (ne)
  net: ethernet: ti: cpmac.c: Cleaning up missing null-terminate after strncpy call
  hdlc: Remove typedefs from struct names
  airo_cs: Remove typedef local_info_t
  atmel: Remove typedef atmel_priv_ioctl
  com20020_cs: Remove typedef com20020_dev_t
  ethernet: amd: Remove typedef local_info_t
  net: Always untag vlan-tagged traffic on input.
  drivers: net: Add APM X-Gene SoC ethernet driver support.
  ...
2014-08-13 18:27:40 -06:00
Iyappan Subramanian
3d390425a6 dts: Add bindings for APM X-Gene SoC ethernet driver
This patch adds bindings for APM X-Gene SoC ethernet driver.

Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Ravi Patel <rapatel@apm.com>
Signed-off-by: Keyur Chudgar <kchudgar@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2014-08-11 11:50:33 -07:00
Linus Torvalds
c23190c0bf Nicolas Pitre added generic tracepoints for tracing IPIs and updated the
arm and arm64 architectures. It required some minor updates to the generic
 tracepoint system, so it had to wait for me to implement them.

Merge tag 'trace-ipi-tracepoints' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull IPI tracepoints for ARM from Steven Rostedt:
 "Nicolas Pitre added generic tracepoints for tracing IPIs and updated
  the arm and arm64 architectures.  It required some minor updates to
  the generic tracepoint system, so it had to wait for me to implement
  them"

* tag 'trace-ipi-tracepoints' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  ARM64: add IPI tracepoints
  ARM: add IPI tracepoints
  tracepoint: add generic tracepoint definitions for IPI tracing
  tracing: Do not do anything special with tracepoint_string when tracing is disabled
2014-08-09 17:33:44 -07:00
Linus Torvalds
63b12bdb0d Merge branch 'signal-cleanup' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/misc
Pull arch signal handling cleanup from Richard Weinberger:
 "This patch series moves all remaining archs to the get_signal(),
  signal_setup_done() and sigsp() functions.

  Currently these archs use open coded variants of the said functions.
  Further, unused parameters get removed from get_signal_to_deliver(),
  tracehook_signal_handler() and signal_delivered().

  At the end of the day we save around 500 lines of code."

* 'signal-cleanup' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/misc: (43 commits)
  powerpc: Use sigsp()
  openrisc: Use sigsp()
  mn10300: Use sigsp()
  mips: Use sigsp()
  microblaze: Use sigsp()
  metag: Use sigsp()
  m68k: Use sigsp()
  m32r: Use sigsp()
  hexagon: Use sigsp()
  frv: Use sigsp()
  cris: Use sigsp()
  c6x: Use sigsp()
  blackfin: Use sigsp()
  avr32: Use sigsp()
  arm64: Use sigsp()
  arc: Use sigsp()
  sas_ss_flags: Remove nested ternary if
  Rip out get_signal_to_deliver()
  Clean up signal_delivered()
  tracehook_signal_handler: Remove sig, info, ka and regs
  ...
2014-08-09 09:58:12 -07:00
Linus Torvalds
8065be8d03 Merge branch 'akpm' (second patchbomb from Andrew Morton)
Merge more incoming from Andrew Morton:
 "Two new syscalls:

     memfd_create in "shm: add memfd_create() syscall"
     kexec_file_load in "kexec: implementation of new syscall kexec_file_load"

  And:

   - Most (all?) of the rest of MM

   - Lots of the usual misc bits

   - fs/autofs4

   - drivers/rtc

   - fs/nilfs

   - procfs

   - fork.c, exec.c

   - more in lib/

   - rapidio

   - Janitorial work in filesystems: fs/ufs, fs/reiserfs, fs/adfs,
     fs/cramfs, fs/romfs, fs/qnx6.

   - initrd/initramfs work

   - "file sealing" and the memfd_create() syscall, in tmpfs

   - add pci_zalloc_consistent, use it in lots of places

   - MAINTAINERS maintenance

   - kexec feature work"

* emailed patches from Andrew Morton <akpm@linux-foundation.org: (193 commits)
  MAINTAINERS: update nomadik patterns
  MAINTAINERS: update usb/gadget patterns
  MAINTAINERS: update DMA BUFFER SHARING patterns
  kexec: verify the signature of signed PE bzImage
  kexec: support kexec/kdump on EFI systems
  kexec: support for kexec on panic using new system call
  kexec-bzImage64: support for loading bzImage using 64bit entry
  kexec: load and relocate purgatory at kernel load time
  purgatory: core purgatory functionality
  purgatory/sha256: provide implementation of sha256 in purgaotory context
  kexec: implementation of new syscall kexec_file_load
  kexec: new syscall kexec_file_load() declaration
  kexec: make kexec_segment user buffer pointer a union
  resource: provide new functions to walk through resources
  kexec: use common function for kimage_normal_alloc() and kimage_crash_alloc()
  kexec: move segment verification code in a separate function
  kexec: rename unusebale_pages to unusable_pages
  kernel: build bin2c based on config option CONFIG_BUILD_BIN2C
  bin2c: move bin2c in scripts/basic
  shm: wait for pins to be released when sealing
  ...
2014-08-08 15:57:47 -07:00
Andy Lutomirski
a6c19dfe39 arm64,ia64,ppc,s390,sh,tile,um,x86,mm: remove default gate area
The core mm code will provide a default gate area based on
FIXADDR_USER_START and FIXADDR_USER_END if
!defined(__HAVE_ARCH_GATE_AREA) && defined(AT_SYSINFO_EHDR).

This default is only useful for ia64.  arm64, ppc, s390, sh, tile, 64-bit
UML, and x86_32 have their own code just to disable it.  arm, 32-bit UML,
and x86_64 have gate areas, but they have their own implementations.

This gets rid of the default and moves the code into ia64.

This should save some code on architectures without a gate area: it's now
possible to inline the gate_area functions in the default case.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Acked-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [in principle]
Acked-by: Richard Weinberger <richard@nod.at> [for um]
Acked-by: Will Deacon <will.deacon@arm.com> [for arm64]
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Nathan Lynch <Nathan_Lynch@mentor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-08 15:57:27 -07:00
Laura Abbott
308c09f17d lib/scatterlist: make ARCH_HAS_SG_CHAIN an actual Kconfig
Rather than have architectures #define ARCH_HAS_SG_CHAIN in an
architecture specific scatterlist.h, make it a proper Kconfig option and
use that instead.  At the same time, remove the header files that are now
mostly useless and just include asm-generic/scatterlist.h.

[sfr@canb.auug.org.au: powerpc files now need asm/dma.h]
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>			[x86]
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>	[powerpc]
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-08 15:57:26 -07:00
Nicolas Pitre
45ed695ac1 ARM64: add IPI tracepoints
The strings used to list IPIs in /proc/interrupts are reused for tracing
purposes.

While at it, the code is slightly cleaned up so the ipi_types array
indices are no longer offset by IPI_RESCHEDULE whose value is 0 anyway.

Link: http://lkml.kernel.org/p/1406318733-26754-5-git-send-email-nicolas.pitre@linaro.org

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-08-07 20:40:42 -04:00
Linus Torvalds
66bb0aa077 Here are the PPC and ARM changes for KVM, which I separated because
they had small conflicts (respectively within KVM documentation,
 and with 3.16-rc changes).  Since they were all within the subsystem,
 I took care of them.
 
 Stephen Rothwell reported some snags in PPC builds, but they are all
 fixed now; the latest linux-next report was clean.
 
 New features for ARM include:
 - KVM VGIC v2 emulation on GICv3 hardware
 - Big-Endian support for arm/arm64 (guest and host)
 - Debug Architecture support for arm64 (arm32 is on Christoffer's todo list)
 
 And for PPC:
 - Book3S: Good number of LE host fixes, enable HV on LE
 - Book3S HV: Add in-guest debug support
 
 This release drops support for KVM on the PPC440.  As a result, the
 PPC merge removes more lines than it adds. :)
 
 I also included an x86 change, since Davidlohr tied it to an independent
 bug report and the reporter quickly provided a Tested-by; there was no
 reason to wait for -rc2.

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull second round of KVM changes from Paolo Bonzini:
 "Here are the PPC and ARM changes for KVM, which I separated because
  they had small conflicts (respectively within KVM documentation, and
  with 3.16-rc changes).  Since they were all within the subsystem, I
  took care of them.

  Stephen Rothwell reported some snags in PPC builds, but they are all
  fixed now; the latest linux-next report was clean.

  New features for ARM include:
   - KVM VGIC v2 emulation on GICv3 hardware
   - Big-Endian support for arm/arm64 (guest and host)
   - Debug Architecture support for arm64 (arm32 is on Christoffer's todo list)

  And for PPC:
   - Book3S: Good number of LE host fixes, enable HV on LE
   - Book3S HV: Add in-guest debug support

  This release drops support for KVM on the PPC440.  As a result, the
  PPC merge removes more lines than it adds.  :)

  I also included an x86 change, since Davidlohr tied it to an
  independent bug report and the reporter quickly provided a Tested-by;
  there was no reason to wait for -rc2"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (122 commits)
  KVM: Move more code under CONFIG_HAVE_KVM_IRQFD
  KVM: nVMX: fix "acknowledge interrupt on exit" when APICv is in use
  KVM: nVMX: Fix nested vmexit ack intr before load vmcs01
  KVM: PPC: Enable IRQFD support for the XICS interrupt controller
  KVM: Give IRQFD its own separate enabling Kconfig option
  KVM: Move irq notifier implementation into eventfd.c
  KVM: Move all accesses to kvm::irq_routing into irqchip.c
  KVM: irqchip: Provide and use accessors for irq routing table
  KVM: Don't keep reference to irq routing table in irqfd struct
  KVM: PPC: drop duplicate tracepoint
  arm64: KVM: fix 64bit CP15 VM access for 32bit guests
  KVM: arm64: GICv3: mandate page-aligned GICV region
  arm64: KVM: GICv3: move system register access to msr_s/mrs_s
  KVM: PPC: PR: Handle FSCR feature deselects
  KVM: PPC: HV: Remove generic instruction emulation
  KVM: PPC: BOOKEHV: rename e500hv_spr to bookehv_spr
  KVM: PPC: Remove DCR handling
  KVM: PPC: Expose helper functions for data/inst faults
  KVM: PPC: Separate loadstore emulation from priv emulation
  KVM: PPC: Handle magic page in kvmppc_ld/st
  ...
2014-08-07 11:35:30 -07:00
Richard Weinberger
38a7be3c28 arm64: Use sigsp()
Use sigsp() instead of the open coded variant.

Signed-off-by: Richard Weinberger <richard@nod.at>
2014-08-06 13:03:45 +02:00
Richard Weinberger
00554fa4f8 arm64: Use get_signal() signal_setup_done()
Use the more generic functions get_signal() signal_setup_done()
for signal delivery.

Signed-off-by: Richard Weinberger <richard@nod.at>
2014-08-06 12:56:16 +02:00
Linus Torvalds
e7fda6c4c3 Merge branch 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer and time updates from Thomas Gleixner:
 "A rather large update of timers, timekeeping & co

   - Core timekeeping code is year-2038 safe now for 32bit machines.
     Now we just need to fix all in kernel users and the gazillion of
     user space interfaces which rely on timespec/timeval :)

   - Better cache layout for the timekeeping internal data structures.

   - Proper nanosecond based interfaces for in kernel users.

   - Tree wide cleanup of code which wants nanoseconds but does hoops
     and loops to convert back and forth from timespecs.  Some of it
     definitely belongs into the ugly code museum.

   - Consolidation of the timekeeping interface zoo.

   - A fast NMI safe accessor to clock monotonic for tracing.  This is a
     long standing request to support correlated user/kernel space
     traces.  With proper NTP frequency correction it's also suitable
     for correlation of traces across separate machines.

   - Checkpoint/restart support for timerfd.

   - A few NOHZ[_FULL] improvements in the [hr]timer code.

   - Code move from kernel to kernel/time of all time* related code.

   - New clocksource/event drivers from the ARM universe.  I'm really
     impressed that despite an architected timer in the newer chips SoC
     manufacturers insist on inventing new and differently broken SoC
     specific timers.

[ Ed. "Impressed"? I don't think that word means what you think it means ]

   - Another round of code move from arch to drivers.  Looks like most
     of the legacy mess in ARM regarding timers is sorted out except for
     a few obnoxious strongholds.

   - The usual updates and fixlets all over the place"

* 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (114 commits)
  timekeeping: Fixup typo in update_vsyscall_old definition
  clocksource: document some basic timekeeping concepts
  timekeeping: Use cached ntp_tick_length when accumulating error
  timekeeping: Rework frequency adjustments to work better w/ nohz
  timekeeping: Minor fixup for timespec64->timespec assignment
  ftrace: Provide trace clocks monotonic
  timekeeping: Provide fast and NMI safe access to CLOCK_MONOTONIC
  seqcount: Add raw_write_seqcount_latch()
  seqcount: Provide raw_read_seqcount()
  timekeeping: Use tk_read_base as argument for timekeeping_get_ns()
  timekeeping: Create struct tk_read_base and use it in struct timekeeper
  timekeeping: Restructure the timekeeper some more
  clocksource: Get rid of cycle_last
  clocksource: Move cycle_last validation to core code
  clocksource: Make delta calculation a function
  wireless: ath9k: Get rid of timespec conversions
  drm: vmwgfx: Use nsec based interfaces
  drm: i915: Use nsec based interfaces
  timekeeping: Provide ktime_get_raw()
  hangcheck-timer: Use ktime_get_ns()
  ...
2014-08-05 17:46:42 -07:00
Paolo Bonzini
5d57686605 KVM/ARM New features for 3.17 include:
- Fixes and code refactoring for stage2 kvm MMU unmap_range
  - Support unmapping IPAs on deleting memslots for arm and arm64
  - Support MMIO mappings in stage2 faults
  - KVM VGIC v2 emulation on GICv3 hardware
  - Big-Endian support for arm/arm64 (guest and host)
  - Debug Architecture support for arm64 (arm32 is on Christoffer's todo list)
  - Detect non page-aligned GICV regions and bail out (plugs guest-can-crash host bug)

Merge tag 'kvm-arm-for-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm

KVM/ARM New features for 3.17 include:
 - Fixes and code refactoring for stage2 kvm MMU unmap_range
 - Support unmapping IPAs on deleting memslots for arm and arm64
 - Support MMIO mappings in stage2 faults
 - KVM VGIC v2 emulation on GICv3 hardware
 - Big-Endian support for arm/arm64 (guest and host)
 - Debug Architecture support for arm64 (arm32 is on Christoffer's todo list)

Conflicts:
	virt/kvm/arm/vgic.c [last minute cherry-pick from 3.17 to 3.16]
2014-08-05 09:47:45 +02:00
Linus Torvalds
76f09aa464 Merge branch 'x86-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull EFI changes from Ingo Molnar:
 "Main changes in this cycle are:

   - arm64 efi stub fixes, preservation of FP/SIMD registers across
     firmware calls, and conversion of the EFI stub code into a static
     library - Ard Biesheuvel

   - Xen EFI support - Daniel Kiper

   - Support for autoloading the efivars driver - Lee, Chun-Yi

   - Use the PE/COFF headers in the x86 EFI boot stub to request that
     the stub be loaded with CONFIG_PHYSICAL_ALIGN alignment - Michael
     Brown

   - Consolidate all the x86 EFI quirks into one file - Saurabh Tangri

   - Additional error logging in x86 EFI boot stub - Ulf Winkelvos

   - Support loading initrd above 4G in EFI boot stub - Yinghai Lu

   - EFI reboot patches for ACPI hardware reduced platforms"

* 'x86-efi-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (31 commits)
  efi/arm64: Handle missing virtual mapping for UEFI System Table
  arch/x86/xen: Silence compiler warnings
  xen: Silence compiler warnings
  x86/efi: Request desired alignment via the PE/COFF headers
  x86/efi: Add better error logging to EFI boot stub
  efi: Autoload efivars
  efi: Update stale locking comment for struct efivars
  arch/x86: Remove efi_set_rtc_mmss()
  arch/x86: Replace plain strings with constants
  xen: Put EFI machinery in place
  xen: Define EFI related stuff
  arch/x86: Remove redundant set_bit(EFI_MEMMAP) call
  arch/x86: Remove redundant set_bit(EFI_SYSTEM_TABLES) call
  efi: Introduce EFI_PARAVIRT flag
  arch/x86: Do not access EFI memory map if it is not available
  efi: Use early_mem*() instead of early_io*()
  arch/ia64: Define early_memunmap()
  x86/reboot: Add EFI reboot quirk for ACPI Hardware Reduced flag
  efi/reboot: Allow powering off machines using EFI
  efi/reboot: Add generic wrapper around EfiResetSystem()
  ...
2014-08-04 17:13:50 -07:00
Linus Torvalds
8efb90cf1e Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking updates from Ingo Molnar:
 "The main changes in this cycle are:

   - big rtmutex and futex cleanup and robustification from Thomas
     Gleixner
   - mutex optimizations and refinements from Jason Low
   - arch_mutex_cpu_relax() removal and related cleanups
   - smaller lockdep tweaks"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (23 commits)
  arch, locking: Ciao arch_mutex_cpu_relax()
  locking/lockdep: Only ask for /proc/lock_stat output when available
  locking/mutexes: Optimize mutex trylock slowpath
  locking/mutexes: Try to acquire mutex only if it is unlocked
  locking/mutexes: Delete the MUTEX_SHOW_NO_WAITER macro
  locking/mutexes: Correct documentation on mutex optimistic spinning
  rtmutex: Make the rtmutex tester depend on BROKEN
  futex: Simplify futex_lock_pi_atomic() and make it more robust
  futex: Split out the first waiter attachment from lookup_pi_state()
  futex: Split out the waiter check from lookup_pi_state()
  futex: Use futex_top_waiter() in lookup_pi_state()
  futex: Make unlock_pi more robust
  rtmutex: Avoid pointless requeueing in the deadlock detection chain walk
  rtmutex: Cleanup deadlock detector debug logic
  rtmutex: Confine deadlock logic to futex
  rtmutex: Simplify remove_waiter()
  rtmutex: Document pi chain walk
  rtmutex: Clarify the boost/deboost part
  rtmutex: No need to keep task ref for lock owner check
  rtmutex: Simplify and document try_to_take_rtmutex()
  ...
2014-08-04 16:09:06 -07:00
Linus Torvalds
5167d09ffa arm64 updates for 3.17
Changes include:
  - Context tracking support (NO_HZ_FULL) which narrowly missed 3.16
  - vDSO layout rework following Andy's work on x86
  - TEXT_OFFSET fuzzing for bootloader testing
  - /proc/cpuinfo tidy-up
  - Preliminary work to support 48-bit virtual addresses, but this is
    currently disabled until KVM has been ported to use it (the patches
    do, however, bring some nice clean-up)
  - Boot-time CPU sanity checks (especially useful on heterogeneous
    systems)
  - Support for syscall auditing
  - Support for CC_STACKPROTECTOR
  - defconfig updates
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABCgAGBQJT3qkzAAoJEC379FI+VC/ZxwEP/3uYs9glDLTd1hmVFr1cRutg
 j4m1Kc7RCO+zpbYCXJLAQLPjwjOaUWPZUeZPQZib6bO+4sTqFYe9vsaqRyvn/bxM
 BaQhytpyxymfG8m3rmXaI97TzBwnRB2oQ0k36rsjMwG/VQMLf9kVuEwURoAHF07l
 RyMK2sAwE0/8XIJZQFNo5SAbkO52EiHlehdlTzCXGWWOWdHDyVfks/k6YhIS991r
 0W9Y0ghHaMz+mAumTSq7jzPQa3aF3GjTp0W7gJjk/PRBDHfPisphEO36zsA0yHtE
 3uvEH0kUQK/ve4ZUQiNvuEZCSqalPFag6j5Z8BnFtafa66J5h414CGPAfER6Kz7+
 KGpoEve+7Rpvvb1S4T0tTMg7HoGrvqc5wKS3uFxfoGooGUcUOchSkYiVTBMDJSKn
 QlJbb1QSvuNFGhcKntTOe1QMT+x0w9urq/e+QfnQrZ/m5Er7J3qCZzeOfA2JFTjQ
 sB24yjzAz5a5VwbKbuB2b4gDILY9oYNe94HFP08o/rJfANnL0dpP1Oyl0b12ILsI
 a69EMdpaeEQo8703KLIlzfW6u92PqYs6UkYvya8o27FAvmNvDfB/PffjgVsOAHFi
 Qc+dpYbnzNfwJgG9w0qhJ+MR8g5fiBYHqNpfGOY+g5M50j0hZUX9comoWw1xkl0X
 HlvG7xzrTF7/VbWEtZ2o
 =6XMc
 -----END PGP SIGNATURE-----

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Will Deacon:
 "Once again, Catalin's off on holiday and I'm looking after the arm64
  tree.  Please can you pull the following arm64 updates for 3.17?

  Note that this branch also includes the new GICv3 driver (merged via a
  stable tag from Jason's irqchip tree), since there is a fix for older
  binutils on top.

  Changes include:
   - context tracking support (NO_HZ_FULL) which narrowly missed 3.16
   - vDSO layout rework following Andy's work on x86
   - TEXT_OFFSET fuzzing for bootloader testing
   - /proc/cpuinfo tidy-up
   - preliminary work to support 48-bit virtual addresses, but this is
     currently disabled until KVM has been ported to use it (the patches
     do, however, bring some nice clean-up)
   - boot-time CPU sanity checks (especially useful on heterogeneous
     systems)
   - support for syscall auditing
   - support for CC_STACKPROTECTOR
   - defconfig updates"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (55 commits)
  arm64: add newline to I-cache policy string
  Revert "arm64: dmi: Add SMBIOS/DMI support"
  arm64: fpsimd: fix a typo in fpsimd_save_partial_state ENDPROC
  arm64: don't call break hooks for BRK exceptions from EL0
  arm64: defconfig: enable devtmpfs mount option
  arm64: vdso: fix build error when switching from LE to BE
  arm64: defconfig: add virtio support for running as a kvm guest
  arm64: gicv3: Allow GICv3 compilation with older binutils
  arm64: fix soft lockup due to large tlb flush range
  arm64/crypto: fix makefile rule for aes-glue-%.o
  arm64: Do not invoke audit_syscall_* functions if !CONFIG_AUDIT_SYSCALL
  arm64: Fix barriers used for page table modifications
  arm64: Add support for 48-bit VA space with 64KB page configuration
  arm64: asm/pgtable.h pmd/pud definitions clean-up
  arm64: Determine the vmalloc/vmemmap space at build time based on VA_BITS
  arm64: Clean up the initial page table creation in head.S
  arm64: Remove asm/pgtable-*level-types.h files
  arm64: Remove asm/pgtable-*level-hwdef.h files
  arm64: Convert bool ARM64_x_LEVELS to int ARM64_PGTABLE_LEVELS
  arm64: mm: Implement 4 levels of translation tables
  ...
2014-08-04 12:31:53 -07:00
Linus Torvalds
b8c0aa46b3 This pull request has a lot of work done. The main thing is the changes
to the ftrace function callback infrastructure. It's introducing a
 way to allow different functions to call directly different trampolines
 instead of all calling the same "mcount" one.
 
 The only user of this for now is the function graph tracer, which always
 had a different trampoline, but the function tracer trampoline was called
 and did basically nothing, and then the function graph tracer trampoline
 was called. The difference now, is that the function graph tracer
 trampoline can be called directly if a function is only being traced by
 the function graph trampoline. If function tracing is also happening on
 the same function, the old way is still done.
 
 The accounting for this takes up more memory when function graph tracing
 is activated, as it needs to keep track of which functions it uses.
 I have a new way that won't take as much memory, but it's not ready yet
 for this merge window, and will have to wait for the next one.
 
 Another big change was the removal of the ftrace_start/stop() calls that
 were used by the suspend/resume code that stopped function tracing when
 entering into suspend and resume paths. The stop of ftrace was done
 because there was some function that would crash the system if one called
 smp_processor_id()! The stop/start was a big hammer to solve the issue
 at the time, which was when ftrace was first introduced into Linux.
 Now ftrace has better infrastructure to debug such issues, and I found
 the problem function and labeled it with "notrace" and function tracing
 can now safely be activated all the way down into the guts of suspend
 and resume.
 
 Other changes include clean ups of uprobe code.
 Clean up of the trace_seq() code.
 And other various small fixes and clean ups to ftrace and tracing.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJT35zXAAoJEKQekfcNnQGuOz0H/38zqM0nLFhrgvz3EPk2UOjn
 xqpX8qyb2V7TJZL+IqeXU2a5cQZl5ba0D4WtBGpxbTae3CJYiuQ87iKUNFoH0om5
 FDpn80igb368k8V3qRdRsziKVCCf0XBd/NkHJXc0ZkfXGyzB2Ga4bBxALxp2gj9y
 bnO+vKo6+tWYKG4hyQb4P3LRXUrK8/LWEsPr39cH2QH1Rdj69Lx9CgrCdUVJmwcb
 Bj8hEiLXL/RYCFNn79A3wNTUvW0rG/AOIf4SLqXtasSRZ0ToaU0ZyDnrNv+0Ol47
 rX8tSk+LfXchL9hpIvjCf1vlAYq3pO02favteR/jip3lx/dTjEDE4RJ9qtJzZ4Q=
 =fwQY
 -----END PGP SIGNATURE-----

Merge tag 'trace-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
 "This pull request has a lot of work done.  The main thing is the
  changes to the ftrace function callback infrastructure.  It's
  introducing a way to allow different functions to call directly
  different trampolines instead of all calling the same "mcount" one.

  The only user of this for now is the function graph tracer, which
  always had a different trampoline, but the function tracer trampoline
  was called and did basically nothing, and then the function graph
  tracer trampoline was called.  The difference now, is that the
  function graph tracer trampoline can be called directly if a function
  is only being traced by the function graph trampoline.  If function
  tracing is also happening on the same function, the old way is still
  done.

  The accounting for this takes up more memory when function graph
  tracing is activated, as it needs to keep track of which functions it
  uses.  I have a new way that won't take as much memory, but it's not
  ready yet for this merge window, and will have to wait for the next
  one.

  Another big change was the removal of the ftrace_start/stop() calls
  that were used by the suspend/resume code that stopped function
  tracing when entering into suspend and resume paths.  The stop of
  ftrace was done because there was some function that would crash the
  system if one called smp_processor_id()! The stop/start was a big
  hammer to solve the issue at the time, which was when ftrace was first
  introduced into Linux.  Now ftrace has better infrastructure to debug
  such issues, and I found the problem function and labeled it with
  "notrace" and function tracing can now safely be activated all the way
  down into the guts of suspend and resume.

  Other changes include clean ups of uprobe code, clean up of the
  trace_seq() code, and other various small fixes and clean ups to
  ftrace and tracing"

* tag 'trace-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (57 commits)
  ftrace: Add warning if tramp hash does not match nr_trampolines
  ftrace: Fix trampoline hash update check on rec->flags
  ring-buffer: Use rb_page_size() instead of open coded head_page size
  ftrace: Rename ftrace_ops field from trampolines to nr_trampolines
  tracing: Convert local function_graph functions to static
  ftrace: Do not copy old hash when resetting
  tracing: let user specify tracing_thresh after selecting function_graph
  ring-buffer: Always run per-cpu ring buffer resize with schedule_work_on()
  tracing: Remove function_trace_stop and HAVE_FUNCTION_TRACE_MCOUNT_TEST
  s390/ftrace: remove check of obsolete variable function_trace_stop
  arm64, ftrace: Remove check of obsolete variable function_trace_stop
  Blackfin: ftrace: Remove check of obsolete variable function_trace_stop
  metag: ftrace: Remove check of obsolete variable function_trace_stop
  microblaze: ftrace: Remove check of obsolete variable function_trace_stop
  MIPS: ftrace: Remove check of obsolete variable function_trace_stop
  parisc: ftrace: Remove check of obsolete variable function_trace_stop
  sh: ftrace: Remove check of obsolete variable function_trace_stop
  sparc64,ftrace: Remove check of obsolete variable function_trace_stop
  tile: ftrace: Remove check of obsolete variable function_trace_stop
  ftrace: x86: Remove check of obsolete variable function_trace_stop
  ...
2014-08-04 11:50:00 -07:00
Mark Rutland
ea1719672f arm64: add newline to I-cache policy string
Due to a missing newline in the I-cache policy detection log output,
it's possible to get some rather unfortunate output at boot time:

CPU1: Booted secondary processor
Detected VIPT I-cache on CPU1CPU2: Booted secondary processor
Detected VIPT I-cache on CPU2CPU3: Booted secondary processor
Detected VIPT I-cache on CPU3CPU4: Booted secondary processor
Detected PIPT I-cache on CPU4CPU5: Booted secondary processor
Detected PIPT I-cache on CPU5Brought up 6 CPUs
SMP: Total of 6 processors activated.

This patch adds the missing newline to the format string, cleaning up
the output.

Fixes: 59ccc0d41b ("arm64: cachetype: report weakest cache policy")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-08-01 14:00:06 +01:00
Marc Zyngier
dedf97e8ff arm64: KVM: fix 64bit CP15 VM access for 32bit guests
Commit f0a3eaff71 (ARM64: KVM: fix big endian issue in
access_vm_reg for 32bit guest) changed the way we handle CP15
VM accesses, so that all 64bit accesses are done via vcpu_sys_reg.

This looks like a good idea as it solves endianness issues in an
elegant way, except for one small detail: the register index
doesn't refer to the same array! We end up corrupting some random
data structure instead.

Fix this by reverting to the original code, except for the introduction
of a vcpu_cp15_64_high macro that deals with the endianness thing.

Tested on Juno with 32bit SMP guests.

Cc: Victor Kamensky <victor.kamensky@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-08-01 14:05:06 +02:00
Marc Zyngier
f4c321eb26 arm64: KVM: GICv3: move system register access to msr_s/mrs_s
Commit 72c5839515 (arm64: gicv3: Allow GICv3 compilation with
older binutils) changed the way we express the GICv3 system registers,
but couldn't change the occurrences used by KVM as the code wasn't
merged yet.

Just fix the accessors.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-07-31 15:52:14 +02:00
Will Deacon
9415667584 Revert "arm64: dmi: Add SMBIOS/DMI support"
This reverts commit a28e3f4b90.

Ard and Yi Li report that this patch is broken by design, so revert it
and let them sort it out for 3.18 instead.

Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-31 14:00:03 +01:00
byungchul.park
e4aa297a49 arm64: fpsimd: fix a typo in fpsimd_save_partial_state ENDPROC
Commit 190f1ca85d ("arm64: add support for kernel mode NEON in interrupt
context") introduced a typing error in fpsimd_save_partial_state ENDPROC.

This patch fixes the typing error.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: byungchul.park <byungchul.park@lge.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-31 11:42:42 +01:00
Will Deacon
c878e0cff5 arm64: don't call break hooks for BRK exceptions from EL0
Our break hooks are used to handle brk exceptions from kgdb (and potentially
kprobes if that code ever resurfaces), so don't bother calling them if
the BRK exception comes from userspace.

This prevents userspace from trapping to a kdb shell on systems where
kgdb is enabled and active.

Cc: <stable@vger.kernel.org>
Reported-by: Omar Sandoval <osandov@osandov.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-31 11:36:08 +01:00
Robert Richter
3666f88010 arm64: defconfig: enable devtmpfs mount option
This matches x86 and makes it more convenient to run the arm64 default
kernel, as distros like Ubuntu need this option.

Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-30 16:54:21 +01:00
Arun Chandran
1915e2ad1c arm64: vdso: fix build error when switching from LE to BE
Building a kernel with CPU_BIG_ENDIAN fails if there are stale objects
from a !CPU_BIG_ENDIAN build. Due to a missing FORCE prerequisite on an
if_changed rule in the VDSO Makefile, we attempt to link a stale LE
object into the new BE kernel.

According to Documentation/kbuild/makefiles.txt, FORCE is required for
if_changed rules and forgetting it is a common mistake, so fix it by
'Forcing' the build of vdso. This patch fixes build errors like these:

arch/arm64/kernel/vdso/note.o: compiled for a little endian system and target is big endian
failed to merge target specific data of file arch/arm64/kernel/vdso/note.o

arch/arm64/kernel/vdso/sigreturn.o: compiled for a little endian system and target is big endian
failed to merge target specific data of file arch/arm64/kernel/vdso/sigreturn.o

Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arun Chandran <achandran@mvista.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-30 15:06:35 +01:00
Will Deacon
af9b99647c arm64: defconfig: add virtio support for running as a kvm guest
When running as a kvm guest on a para-virtualised platform, it is useful
to have virtio implementations of console, 9pfs and network.

This adds these options to the arm64 defconfig, so we can easily run a
defconfig kernel build as both host and as a kvm guest.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-07-29 16:20:02 +01:00
Linus Torvalds
31dab719fa Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull ARM AES crypto fixes from Herbert Xu:
 "This push fixes a regression on ARM where odd-sized blocks supplied to
  AES may cause crashes"

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: arm-aes - fix encryption of unaligned data
  crypto: arm64-aes - fix encryption of unaligned data
2014-07-28 11:35:30 -07:00
Mikulas Patocka
f960d2093f crypto: arm64-aes - fix encryption of unaligned data
cryptsetup fails on arm64 when using kernel encryption via AF_ALG socket.
See https://bugzilla.redhat.com/show_bug.cgi?id=1122937

The bug is caused by incorrect handling of unaligned data in
arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
on 8 bytes, but not on 16 bytes. It opens an AF_ALG socket and uses the
socket to encrypt data in the buffer. The arm64 crypto accelerator causes
data corruption or crashes in scatterwalk_pagedone.

This patch fixes the bug by passing the residue bytes that were not
processed as the last parameter to blkcipher_walk_done.
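
For illustration, the shape of such a fix in a blkcipher walk loop is to
report the unprocessed tail back to the walk (a sketch, not the exact hunk):

  /* before (sketch): assumed every iteration consumed walk.nbytes fully */
  err = blkcipher_walk_done(desc, &walk, 0);

  /* after (sketch): hand back the residue that was not a full block */
  err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);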

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2014-07-28 22:01:02 +08:00
Catalin Marinas
72c5839515 arm64: gicv3: Allow GICv3 compilation with older binutils
GICv3 introduces new system registers accessible with the full msr/mrs
syntax (e.g. mrs x0, Sop0_op1_CRm_CRn_op2). However, only recent
binutils understand the new syntax. This patch introduces msr_s/mrs_s
assembly macros which generate the equivalent instructions above and
converts the existing GICv3 code (both drivers/irqchip/ and
arch/arm64/kernel/).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Olof Johansson <olof@lixom.net>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
2014-07-25 13:12:15 +01:00
Catalin Marinas
ecb3c2bbf2 Merge tag 'deps-irqchip-gic-3.17' of git://git.infradead.org/users/jcooper/linux
* tag 'deps-irqchip-gic-3.17' of git://git.infradead.org/users/jcooper/linux:
  irqchip: gic-v3: Initial support for GICv3
  irqchip: gic: Move some bits of GICv2 to a library-type file

Conflicts:
	arch/arm64/Kconfig
2014-07-25 13:03:22 +01:00
Mark Salter
05ac653054 arm64: fix soft lockup due to large tlb flush range
Under certain loads, this soft lockup has been observed:

   BUG: soft lockup - CPU#2 stuck for 22s! [ip6tables:1016]
   Modules linked in: ip6t_rpfilter ip6t_REJECT cfg80211 rfkill xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw vfat fat efivarfs xfs libcrc32c

   CPU: 2 PID: 1016 Comm: ip6tables Not tainted 3.13.0-0.rc7.30.sa2.aarch64 #1
   task: fffffe03e81d1400 ti: fffffe03f01f8000 task.ti: fffffe03f01f8000
   PC is at __cpu_flush_kern_tlb_range+0xc/0x40
   LR is at __purge_vmap_area_lazy+0x28c/0x3ac
   pc : [<fffffe000009c5cc>] lr : [<fffffe0000182710>] pstate: 80000145
   sp : fffffe03f01fbb70
   x29: fffffe03f01fbb70 x28: fffffe03f01f8000
   x27: fffffe0000b19000 x26: 00000000000000d0
   x25: 000000000000001c x24: fffffe03f01fbc50
   x23: fffffe03f01fbc58 x22: fffffe03f01fbc10
   x21: fffffe0000b2a3f8 x20: 0000000000000802
   x19: fffffe0000b2a3c8 x18: 000003fffdf52710
   x17: 000003ff9d8bb910 x16: fffffe000050fbfc
   x15: 0000000000005735 x14: 000003ff9d7e1a5c
   x13: 0000000000000000 x12: 000003ff9d7e1a5c
   x11: 0000000000000007 x10: fffffe0000c09af0
   x9 : fffffe0000ad1000 x8 : 000000000000005c
   x7 : fffffe03e8624000 x6 : 0000000000000000
   x5 : 0000000000000000 x4 : 0000000000000000
   x3 : fffffe0000c09cc8 x2 : 0000000000000000
   x1 : 000fffffdfffca80 x0 : 000fffffcd742150

The __cpu_flush_kern_tlb_range() function looks like:

  ENTRY(__cpu_flush_kern_tlb_range)
	dsb	sy
	lsr	x0, x0, #12
	lsr	x1, x1, #12
  1:	tlbi	vaae1is, x0
	add	x0, x0, #1
	cmp	x0, x1
	b.lo	1b
	dsb	sy
	isb
	ret
  ENDPROC(__cpu_flush_kern_tlb_range)

The above soft lockup shows the PC at tlbi insn with:

  x0 = 0x000fffffcd742150
  x1 = 0x000fffffdfffca80

So __cpu_flush_kern_tlb_range has 0x128ba930 tlbi flushes left
after it has already been looping for 23 seconds!

Looking up one frame at __purge_vmap_area_lazy(), there is:

	...
	list_for_each_entry_rcu(va, &vmap_area_list, list) {
		if (va->flags & VM_LAZY_FREE) {
			if (va->va_start < *start)
				*start = va->va_start;
			if (va->va_end > *end)
				*end = va->va_end;
			nr += (va->va_end - va->va_start) >> PAGE_SHIFT;
			list_add_tail(&va->purge_list, &valist);
			va->flags |= VM_LAZY_FREEING;
			va->flags &= ~VM_LAZY_FREE;
		}
	}
	...
	if (nr || force_flush)
		flush_tlb_kernel_range(*start, *end);

So if two areas are being freed, the range passed to
flush_tlb_kernel_range() may be as large as the vmalloc
space. For arm64, this is ~240GB for 4k pagesize and ~2TB
for 64kpage size.

This patch works around this problem by adding a loop limit.
If the range is larger than the limit, use flush_tlb_all()
rather than flushing based on individual pages. The limit
chosen is arbitrary as the TLB size is implementation
specific and not accessible in an architected way. The aim
of the arbitrary limit is to avoid soft lockup.
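
For illustration, a minimal sketch of this approach (the limit name, its
value and the per-page fallback helper are assumptions here):

  /* Sketch: fall back to a full TLB flush for very large ranges. */
  #define MAX_TLB_RANGE   (1024UL << PAGE_SHIFT)   /* arbitrary threshold */

  static inline void flush_tlb_kernel_range(unsigned long start,
                                            unsigned long end)
  {
          if ((end - start) > MAX_TLB_RANGE)
                  flush_tlb_all();        /* cheaper than a huge TLBI loop */
          else
                  __flush_tlb_kernel_range(start, end);   /* per-page TLBIs */
  }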

Signed-off-by: Mark Salter <msalter@redhat.com>
[catalin.marinas@arm.com: commit log update]
[catalin.marinas@arm.com: marginal optimisation]
[catalin.marinas@arm.com: changed to MAX_TLB_RANGE and added comment]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-24 18:41:13 +01:00
Andreas Schwab
7c2105fbe9 arm64/crypto: fix makefile rule for aes-glue-%.o
This fixes the following build failure when building with CONFIG_MODVERSIONS
enabled:

  CC [M]  arch/arm64/crypto/aes-glue-ce.o
ld: cannot find arch/arm64/crypto/aes-glue-ce.o: No such file or directory
make[1]: *** [arch/arm64/crypto/aes-ce-blk.o] Error 1
make: *** [arch/arm64/crypto] Error 2

The $(obj)/aes-glue-%.o rule only creates $(obj)/.tmp_aes-glue-ce.o, so it
should use if_changed_rule instead of if_changed_dep.

Signed-off-by: Andreas Schwab <schwab@suse.de>
[ardb: mention CONFIG_MODVERSIONS in commit log]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-24 17:46:50 +01:00
Catalin Marinas
2a8f45b040 arm64: Do not invoke audit_syscall_* functions if !CONFIG_AUDIT_SYSCALL
This is a temporary patch to be able to compile the kernel in linux-next
where the audit_syscall_* API has been changed. To be reverted once the
proper arm64 fix can be applied.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-24 16:01:17 +01:00
Catalin Marinas
7f0b1bf045 arm64: Fix barriers used for page table modifications
The architecture specification states that both DSB and ISB are required
between page table modifications and subsequent memory accesses using the
corresponding virtual address. When TLB invalidation takes place, the
tlb_flush_* functions already have the necessary barriers. However, there are
other functions like create_mapping() for which this is not the case.

The patch adds the DSB+ISB instructions in the set_pte() function for
valid kernel mappings. The invalid pte case is handled by tlb_flush_*
and the user mappings in general have a corresponding update_mmu_cache()
call containing a DSB. Even when update_mmu_cache() isn't called, the
kernel can still cope with an unlikely spurious page fault by
re-executing the instruction.

In addition, the set_pmd() and set_pud() functions gain an ISB for
architecture compliance when block mappings are created.
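
A sketch of the resulting set_pte() (the pte_valid_not_user() helper name
is an assumption here):

  static inline void set_pte(pte_t *ptep, pte_t pte)
  {
          *ptep = pte;
          /*
           * Only for valid kernel mappings; invalid ptes are covered by
           * tlb_flush_* and user mappings by update_mmu_cache().
           */
          if (pte_valid_not_user(pte)) {
                  dsb(ishst);     /* make the update visible to the walker */
                  isb();          /* ...and to subsequent instruction fetches */
          }
  }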

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Steve Capper <steve.capper@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
2014-07-24 10:25:42 +01:00
Linus Torvalds
98de5ab713 Fix arm64 regression introduced by limiting the CMA buffer to ZONE_DMA
on platforms where RAM starts above 4GB (and ZONE_DMA becoming 0).
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.9 (GNU/Linux)
 
 iQIcBAABAgAGBQJTz+NOAAoJEGvWsS0AyF7xmxQP+wYU76kz2mAyroFdVSCAidqc
 k8DgpYDkDRpxWmkoTYYAuLlMbMk6po+HyX+Gz0ZZw2A8c7bnf3+/ljVk7Y88Shs5
 B+uRLrvIb6BCuroM9oGMUxYF5s+jPmBSLvh+jefypSLrffYoGBAbXN2w9M0CP4Uh
 NiUFSNczs/F6Se7uP1vd8PiN6pJxZfWsb2nFKW3Gzi5Oo2x5JR/kLhJXM/9k6ahE
 XAskRjiHE52pfoyiC66ZLvT1PeZ/smz6quaWquoQ/9obwISWPqIRkF5UoMwpuJOY
 n9R4G7BjJH/dD7yYpgMYzANxOYMwKLtSbwJ6YO1iazWXy3t6+aF1ZsfFb7f42Ysw
 Rnpa4OGUCgWqqge+B8j2Zm7nX/TW/HpX/5YKHhTYH4dU/xsHcM4/IR+jkZ91HyJ5
 RVa9SC3wH6OEASxmN4T3pP24pT9IHKR4ckviBrrAskby7QZ1Ud+HPp2lrxdt8R3N
 m/Ee8iUPOA+uZMXYihx2Bb7E5IaMr2Dzk040nzOc+9lOOpgv2iVqgHQ5gme2DRko
 6QX6575IvqTgl4gqiQmpII4Pqu9r0NR2EWYlsLqXsiqnoWBy4ltRdvsZqy4VpP0s
 FIIC1cOXDhiNWHU5Q+QAC4CEnsnMlnb9hfbAfrLOVpyotY7ry27zGx67EdF4h8d0
 4U+HGdmgxDJf9WtAbR0X
 =NrXJ
 -----END PGP SIGNATURE-----

Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 fix from Catalin Marinas:
 "Fix arm64 regression introduced by limiting the CMA buffer to ZONE_DMA
  on platforms where RAM starts above 4GB (and ZONE_DMA becoming 0)"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: Create non-empty ZONE_DMA when DRAM starts above 4GB
2014-07-23 17:47:36 -07:00
Thomas Gleixner
d28ede8379 timekeeping: Create struct tk_read_base and use it in struct timekeeper
The members of the new struct are the required ones for the new NMI
safe accessor to clock monotonic. In order to reuse the existing
timekeeping code and to make the update of the fast NMI safe
timekeepers a simple memcpy, use the struct for the timekeeper as well
and convert all users.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
2014-07-23 15:01:53 -07:00
Thomas Gleixner
4a0e637738 clocksource: Get rid of cycle_last
cycle_last was added to the clocksource to support the TSC
validation. We moved that to the core code, so we can get rid of the
extra copy.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <john.stultz@linaro.org>
2014-07-23 15:01:52 -07:00
Catalin Marinas
383c279911 arm64: Add support for 48-bit VA space with 64KB page configuration
This patch allows support for 3 levels of page tables with 64KB page
configuration allowing 48-bit VA space. The pgd is no longer a full
PAGE_SIZE (PTRS_PER_PGD is 64) and (swapper|idmap)_pg_dir are not fully
populated (pgd_alloc falls back to kzalloc).
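
A sketch of what the pgd allocation fallback looks like with a sub-page pgd
(macro and flag choices are assumptions):

  #define PGD_SIZE        (PTRS_PER_PGD * sizeof(pgd_t))

  pgd_t *pgd_alloc(struct mm_struct *mm)
  {
          if (PGD_SIZE == PAGE_SIZE)
                  return (pgd_t *)get_zeroed_page(GFP_KERNEL);
          else
                  return kzalloc(PGD_SIZE, GFP_KERNEL);   /* sub-page pgd */
  }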

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:28:15 +01:00
Catalin Marinas
7078db4621 arm64: asm/pgtable.h pmd/pud definitions clean-up
Non-functional change to group together the pmd/pud definitions and
reduce the amount of #if CONFIG_ARM64_PGTABLE_LEVELS.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:28:10 +01:00
Catalin Marinas
08375198b0 arm64: Determine the vmalloc/vmemmap space at build time based on VA_BITS
Rather than guessing what the maximum vmemmap space should be, this
patch allows the calculation based on VA_BITS and sizeof(struct
page). The vmalloc space extends to the beginning of the vmemmap space.

Since the virtual kernel memory layout now depends on the build
configuration, this patch removes the detailed description in
Documentation/arm64/memory.txt in favour of information printed during
kernel booting.
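
A sketch of the build-time calculation being described (the exact alignment
and guard-region choices here are assumptions):

  /* Sketch: size the vmemmap region from VA_BITS and sizeof(struct page). */
  #define VMEMMAP_SIZE    ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * \
                                sizeof(struct page), PUD_SIZE)
  #define VMALLOC_START   (UL(0xffffffffffffffff) << VA_BITS)
  #define VMALLOC_END     (PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)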

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:28:05 +01:00
Catalin Marinas
b4a0d8b377 arm64: Clean up the initial page table creation in head.S
This patch adds a create_table_entry macro which is used to populate pgd
and pud entries, also reducing the number of arguments for
create_pgd_entry.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:28:01 +01:00
Catalin Marinas
0f1740252b arm64: Remove asm/pgtable-*level-types.h files
The macros and typedefs in these files are already duplicated, so just
use a single pgtable-types.h file with the corresponding #ifdefs.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:27:56 +01:00
Catalin Marinas
6b4fee241d arm64: Remove asm/pgtable-*level-hwdef.h files
The macros in these files can easily be computed based on PAGE_SHIFT and
VA_BITS, so just remove them and add the corresponding macros to
asm/pgtable-hwdef.h

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:27:51 +01:00
Catalin Marinas
abe669d7e1 arm64: Convert bool ARM64_x_LEVELS to int ARM64_PGTABLE_LEVELS
Rather than having several Kconfig options, define int
ARM64_PGTABLE_LEVELS which will be also useful in converting some of the
pgtable macros.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:27:46 +01:00
Jungseok Lee
c79b954bf6 arm64: mm: Implement 4 levels of translation tables
This patch implements 4 levels of translation tables since 3 levels
of page tables with 4KB pages cannot support the 40-bit physical address
space described in [1] due to the following issue.

The restriction is that the kernel logical memory map with 4KB + 3 levels
(0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from
544GB to 1024GB in [1]. Specifically, the ARM64 kernel fails to create a
mapping for this region in the map_mem function since __phys_to_virt for
this region overflows.

If an SoC design follows the document [1], RAM beyond 32GB would be placed
from 544GB upwards. Even a 64GB system is supposed to use the region from
544GB to 576GB for just 32GB of RAM. Naturally, this leads to enabling 4
levels of page tables to avoid hacking __virt_to_phys and __phys_to_virt.

However, it is recommended that 4 levels of page tables only be enabled
if the memory map is too sparse or there is about 512GB of RAM.

References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
[catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
[catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:27:40 +01:00
Jungseok Lee
57e0139041 arm64: Add 4 levels of page tables definition with 4KB pages
This patch adds hardware definition and types for 4 levels of
translation tables with 4KB pages.

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:27:34 +01:00
Jungseok Lee
e41ceed035 arm64: Introduce VA_BITS and translation level options
This patch adds virtual address space size and a level of translation
tables to the kernel configuration. It facilitates the introduction of
different MMU options, such as 4KB + 4 levels, 16KB + 4 levels and
64KB + 3 levels.

The idea is based on the discussion with Catalin Marinas:
http://www.spinics.net/linux/lists/arm-kernel/msg319552.html

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:27:21 +01:00
Catalin Marinas
7edd88ad7e arm64: Do not initialise the fixmap page tables in head.S
The early_ioremap_init() function already handles fixmap pte
initialisation, so upgrade this to cover all of pud/pmd/pte and remove
one page from swapper_pg_dir.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
2014-07-23 15:27:00 +01:00
Catalin Marinas
d50314a6b0 arm64: Create non-empty ZONE_DMA when DRAM starts above 4GB
ZONE_DMA is created to allow 32-bit only devices to access memory in the
absence of an IOMMU. On systems where the memory starts above 4GB, it is
expected that some devices have a DMA offset hardwired to be able to
access the bottom of the memory. Linux currently supports DT bindings
for the DMA offsets but they are not (easily) available early during
boot.

This patch tries to guess a DMA offset and assumes that ZONE_DMA
corresponds to the 32-bit mask above the start of DRAM.
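
A sketch of that guess (the helper name is an assumption):

  /* Sketch: ZONE_DMA covers the 32-bit window above the start of DRAM. */
  static phys_addr_t __init max_zone_dma_phys(void)
  {
          phys_addr_t offset = memblock_start_of_DRAM() & GENMASK_ULL(63, 32);

          return min(offset + (1ULL << 32), memblock_end_of_DRAM());
  }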

Fixes: 2d5a5612bc (arm64: Limit the CMA buffer to 32-bit if ZONE_DMA)
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Mark Salter <msalter@redhat.com>
Tested-by: Mark Salter <msalter@redhat.com>
Tested-by: Anup Patel <anup.patel@linaro.org>
2014-07-23 11:23:58 +01:00
Mark Brown
affeafbb84 arm64: Remove stray ARCH_HAS_OPP reference
A reference to ARCH_HAS_OPP was added in commit 333d17e56 (arm64: add
ARCH_HAS_OPP to allow enabling OPP library); however, this symbol is no
longer needed after commit 049d595a4d (PM / OPP: Make OPP invisible
to users in Kconfig).

Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-23 09:40:49 +01:00
Yi Li
a28e3f4b90 arm64: dmi: Add SMBIOS/DMI support
SMBIOS is important for server hardware vendors. It implements a spec for
providing descriptive information about the platform: things like serial
numbers, physical layout of the ports, build configuration data, and the like.

This has been tested by dmidecode and lshw tools.

Signed-off-by: Yi Li <yi.li@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-21 10:22:21 +01:00
Linus Torvalds
d057190925 Merge branch 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking fixes from Thomas Gleixner:
 "The locking department delivers:

   - A rather large and intrusive bundle of fixes to address serious
     performance regressions introduced by the new rwsem / mcs
     technology.  Simpler solutions have been discussed, but they would
     have been ugly bandaids with more risk than doing the right thing.

   - Make the rwsem spin on owner technology opt-in for architectures
     and enable it only on the known to work ones.

   - A few fixes to the lockdep userspace library"

* 'locking-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/rwsem: Add CONFIG_RWSEM_SPIN_ON_OWNER
  locking/mutex: Disable optimistic spinning on some architectures
  locking/rwsem: Reduce the size of struct rw_semaphore
  locking/rwsem: Rename 'activity' to 'count'
  locking/spinlocks/mcs: Micro-optimize osq_unlock()
  locking/spinlocks/mcs: Introduce and use init macro and function for osq locks
  locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead
  locking/spinlocks/mcs: Rename optimistic_spin_queue() to optimistic_spin_node()
  locking/rwsem: Allow conservative optimistic spinning when readers have lock
  tools/liblockdep: Account for bitfield changes in lockdeps lock_acquire
  tools/liblockdep: Remove debug print left over from development
  tools/liblockdep: Fix comparison of a boolean value with a value of 2
2014-07-19 06:27:55 -10:00
Ard Biesheuvel
99a5603e2a efi/arm64: Handle missing virtual mapping for UEFI System Table
If we cannot resolve the virtual address of the UEFI System Table, its
physical offset must be missing from the virtual memory map, and there
is really no point in proceeding with installing the virtual memory map
and the runtime services dispatch table. So back out gracefully.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:24:04 +01:00
Ard Biesheuvel
f4f75ad574 efi: efistub: Convert into static library
This patch changes both x86 and arm64 efistub implementations
from #including shared .c files under drivers/firmware/efi to
building shared code as a static library.

The x86 code uses a stub built into the boot executable which
uncompresses the kernel at boot time. In this case, the library is
linked into the decompressor.

In the arm64 case, the stub is part of the kernel proper so the library
is linked into the kernel proper as well.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-07-18 21:22:19 +01:00
Steven Rostedt (Red Hat)
ac694fda32 arm64, ftrace: Remove check of obsolete variable function_trace_stop
Nothing sets function_trace_stop to disable function tracing anymore.
Remove the check for it in the arch code.

arm64 was broken anyway, as it had an ifdef testing
 CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST which is only set if
the arch supports the code (which it obviously did not), and
it was testing a non-existent ftrace_trace_stop instead of
function_trace_stop.

Link: http://lkml.kernel.org/r/20140627124421.GP26276@arm.com

Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18 13:58:10 -04:00
Mark Rutland
d7a49086f2 arm64: cpuinfo: print info for all CPUs
Currently reading /proc/cpuinfo will result in information being read
out of the MIDR_EL1 of the current CPU, and the information is not
associated with any particular logical CPU number.

This is problematic for systems with heterogeneous CPUs (i.e.
big.LITTLE) where MIDR fields will vary across CPUs, and the output will
differ depending on the executing CPU.

This patch reorganises the code responsible for /proc/cpuinfo to print
information per-cpu. In the process, we perform several cleanups:

* Property names are coerced to lower-case (to match "processor" as per
  glibc's expectations).
* Property names are simplified and made to match the MIDR field names.
* Revision is changed to hex as with every other field.
* The meaningless Architecture property is removed.
* The ripe-for-abuse Machine field is removed.

The features field (a human-readable representation of the hwcaps)
remains printed once, as this is expected to remain in use as the
globally supported CPU features. To enable the possibility of the addition
of per-cpu HW feature information later, this is printed before any
CPU-specific information.

Comments are added to guide userspace developers in the right direction
(using the hwcaps provided in auxval). Hopefully where userspace
applications parse /proc/cpuinfo rather than using the readily available
hwcaps, they limit themselves to reading said first line.

If CPU features differ from each other, the previously installed sanity
checks will give us some advance notice with warnings and
TAINT_CPU_OUT_OF_SPEC. If we are lucky, we will never see such systems.
Rework will be required in many places to support such systems anyway.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marcus Shawcroft <marcus.shawcroft@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: remove machine_name as it is no longer reported]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 18:33:23 +01:00
Mark Rutland
127161aaf0 arm64: add runtime system sanity checks
Unexpected variation in certain system register values across CPUs is an
indicator of potential problems with a system. The kernel expects CPUs
to be mostly identical in terms of supported features, even in systems
with heterogeneous CPUs, with uniform instruction set support being
critical for the correct operation of userspace.

To help detect issues early where hardware violates the expectations of
the kernel, this patch adds simple runtime sanity checks on important ID
registers in the bring up path of each CPU.

Where CPUs are fundamentally mismatched, set TAINT_CPU_OUT_OF_SPEC.
Given that the kernel assumes CPUs are identical feature wise, let's not
pretend that we expect such configurations to work. Supporting such
configurations would require massive rework, and hopefully they will
never exist.
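
The shape of such a check, as a sketch (function name and message wording
are assumptions):

  /* Sketch: compare a secondary CPU's ID register against the boot CPU. */
  static void check_id_reg(const char *name, u64 boot, u64 cur, int cpu)
  {
          if (boot == cur)
                  return;

          pr_warn("SANITY CHECK: unexpected variation in %s: boot CPU %#llx, CPU%d %#llx\n",
                  name, boot, cpu, cur);
          add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
  }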

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:24:11 +01:00
Mark Rutland
59ccc0d41b arm64: cachetype: report weakest cache policy
In big.LITTLE systems, the I-cache policy may differ across CPUs, and
thus we must always meet the most stringent maintenance requirements of
any I-cache in the system when performing maintenance to ensure
correctness. Unfortunately this requirement is not met as we always look
at the current CPU's cache type register to determine the maintenance
requirements.

This patch causes the I-cache policy of all CPUs to be taken into
account for icache_is_aliasing and icache_is_aivivt. If any I-cache in
the system is aliasing or AIVIVT, the respective function will return
true. At boot each CPU may set flags to identify that at least one
I-cache in the system is aliasing and/or AIVIVT.

The now unused and potentially misleading icache_policy function is
removed.
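
A sketch of the system-wide flag approach (flag and variable names are
assumptions):

  /* Sketch: I-cache policy flags, set by any CPU during boot. */
  #define ICACHEF_ALIASING        0
  #define ICACHEF_AIVIVT          1

  extern unsigned long __icache_flags;

  static inline int icache_is_aliasing(void)
  {
          return test_bit(ICACHEF_ALIASING, &__icache_flags);
  }

  static inline int icache_is_aivivt(void)
  {
          return test_bit(ICACHEF_AIVIVT, &__icache_flags);
  }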

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:24:10 +01:00
Mark Rutland
df857416a1 arm64: cpuinfo: record cpu system register values
Several kernel subsystems need to know details about CPU system register
values, sometimes for CPUs other than the one they are executing on. Rather
than hard-coding system register accesses and cross-calls for these
cases, this patch adds logic to record various system register values at
boot-time. This may be used for feature reporting, firmware bug
detection, etc.

Separate hooks are added for the boot and hotplug paths to enable
one-time initialisation and cold/warm boot value mismatch detection in
later patches.
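
A sketch of the per-cpu record being described (the exact set of registers
captured is an assumption):

  /* Sketch: per-cpu snapshot of ID registers, filled at boot/hotplug time. */
  struct cpuinfo_arm64 {
          u32     reg_ctr;
          u32     reg_midr;
          u64     reg_id_aa64isar0;
          u64     reg_id_aa64mmfr0;
          u64     reg_id_aa64pfr0;
  };

  DECLARE_PER_CPU(struct cpuinfo_arm64, cpu_data);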

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:24:09 +01:00
Mark Rutland
89c4a306e7 arm64: add MIDR_EL1 field accessors
The MIDR_EL1 register is composed of a number of bitfields, and use of
the fields has so far involved open-coding of the shifts and masks
required.

This patch adds shifts and masks for each of the MIDR_EL1 subfields, and
also provides accessors built atop of these. Existing uses within
cputype.h are updated to use these accessors.

The read_cpuid_part_number macro is modified to return the extracted
bitfield rather than returning the value in-place with all other fields
(including revision) masked out, to better match the other accessors.
As the value is only used in comparison with the *_CPU_PART_* macros
which are similarly updated, and these values are never exposed to
userspace, this change should not affect any functionality.
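
For example, accessors of this style for the revision, part number and
implementer fields (a sketch based on the architectural MIDR_EL1 layout):

  #define MIDR_REVISION_MASK      0xf
  #define MIDR_REVISION(midr)     ((midr) & MIDR_REVISION_MASK)

  #define MIDR_PARTNUM_SHIFT      4
  #define MIDR_PARTNUM_MASK       (0xfff << MIDR_PARTNUM_SHIFT)
  #define MIDR_PARTNUM(midr) \
          (((midr) & MIDR_PARTNUM_MASK) >> MIDR_PARTNUM_SHIFT)

  #define MIDR_IMPLEMENTOR_SHIFT  24
  #define MIDR_IMPLEMENTOR_MASK   (0xff << MIDR_IMPLEMENTOR_SHIFT)
  #define MIDR_IMPLEMENTOR(midr) \
          (((midr) & MIDR_IMPLEMENTOR_MASK) >> MIDR_IMPLEMENTOR_SHIFT)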

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:24:08 +01:00
Lorenzo Pieralisi
18ab7db6b7 arm64: kernel: add missing __init section marker to cpu_suspend_init
Suspend init function must be marked as __init, since it is not needed
after the kernel has booted. This patch moves the cpu_suspend_init()
function to the __init section.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:23:59 +01:00
Lorenzo Pieralisi
b9e97ef93c arm64: kernel: add __init marker to PSCI init functions
PSCI init functions must be marked as __init so that they are freed
by the kernel upon boot.

This patch marks the PSCI init functions as such since they need not
be persistent in the kernel address space after the kernel has booted.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:23:45 +01:00
Lorenzo Pieralisi
756854d9b9 arm64: kernel: enable PSCI cpu operations on UP systems
PSCI CPU operations have to be enabled on UP kernels so that calls
like cpu_suspend can be made functional on UP too.

This patch reworks the PSCI CPU operations so that they can be
enabled on UP systems.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 15:23:25 +01:00
Will Deacon
5959e25729 arm64: fpsimd: avoid restoring fpcr if the contents haven't changed
Writing to the FPCR is commonly implemented as a self-synchronising
operation in the CPU, so avoid writing to the register when the saved
value matches that in the hardware already.

Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-18 10:21:17 +01:00
Ian Campbell
ad789ba5f7 arm64: Align the kbuild output for VDSOL and VDSOA
Signed-off-by: Ian Campbell <ijc@hellion.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kbuild@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-17 16:19:29 +01:00
Will Deacon
601255ae3c arm64: vdso: move data page before code pages
Andy pointed out that binutils generates additional sections in the vdso
image (e.g. section string table) which, if our .text section gets big
enough, could cross a page boundary and end up screwing up the location
where the kernel expects to put the data page.

This patch solves the issue in the same manner as x86_32, by moving the
data page before the code pages.

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-17 16:18:57 +01:00
Will Deacon
2fea7f6c98 arm64: vdso: move to _install_special_mapping and remove arch_vma_name
_install_special_mapping replaces install_special_mapping and removes
the need to detect special VMA in arch_vma_name.

This patch moves the vdso and compat vectors page code over to the new
API.

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-17 16:18:46 +01:00
Will Deacon
8715493852 arm64: vdso: put vdso datapage in a separate vma
The VDSO datapage doesn't need to be executable (no code there) or
CoW-able (the kernel writes the page, so a private copy is totally
useless).

This patch moves the datapage into its own VMA, identified as "[vvar]"
in /proc/<pid>/maps.

Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-17 16:18:36 +01:00
Catalin Marinas
b2f8c07bcb arm64: Remove duplicate (SWAPPER|IDMAP)_DIR_SIZE definitions
Just keep the asm/page.h definition as this is included in vmlinux.lds.S
as well.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
2014-07-17 16:02:42 +01:00
Jungseok Lee
ac7b406c1a arm64: Use pr_* instead of printk
This patch fixes the following checkpatch complaint by using pr_*
instead of printk.

WARNING: printk() should include KERN_ facility level

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-07-17 16:02:32 +01:00
Davidlohr Bueso
3a6bfbc91d arch, locking: Ciao arch_mutex_cpu_relax()
The arch_mutex_cpu_relax() function, introduced by 34b133f, is
hacky and ugly. It was added a few years ago to address the fact
that common cpu_relax() calls include yielding on s390, and thus
impact the optimistic spinning functionality of mutexes. Nowadays
we use this function well beyond mutexes: rwsem, qrwlock, mcs and
lockref. Since the macro that defines the call is in the mutex header,
any users must include mutex.h and the naming is misleading as well.

This patch (i) renames the call to cpu_relax_lowlatency ("relax, but
only if you can do it with very low latency") and (ii) defines it in
each arch's asm/processor.h local header, just like for regular cpu_relax
functions. On all archs, except s390, cpu_relax_lowlatency is simply cpu_relax,
and thus we can take it out of mutex.h. While this can seem redundant,
I believe it is a good choice as it allows us to move out arch specific
logic from generic locking primitives and enables future(?) archs to
transparently define it, similarly to System Z.
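
In other words, a sketch of the per-arch definitions (the exact s390 choice
is an assumption):

  /* Most architectures (sketch): the low-latency variant is just cpu_relax(). */
  #define cpu_relax_lowlatency()  cpu_relax()

  /* s390 (sketch): avoid the yielding cpu_relax() and just compiler-barrier. */
  #define cpu_relax_lowlatency()  barrier()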

Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Bharat Bhushan <r65777@freescale.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: David Howells <dhowells@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Joe Perches <joe@perches.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Joseph Myers <joseph@codesourcery.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Nicolas Pitre <nico@linaro.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Qais Yousef <qais.yousef@imgtec.com>
Cc: Qiaowei Ren <qiaowei.ren@intel.com>
Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: Stratos Karafotis <stratosk@semaphore.gr>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vasily Kulikov <segoon@openwall.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Vineet Gupta <Vineet.Gupta1@synopsys.com>
Cc: Waiman Long <Waiman.Long@hp.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Wolfram Sang <wsa@the-dreams.de>
Cc: adi-buildroot-devel@lists.sourceforge.net
Cc: linux390@de.ibm.com
Cc: linux-alpha@vger.kernel.org
Cc: linux-am33-list@redhat.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-c6x-dev@linux-c6x.org
Cc: linux-cris-kernel@axis.com
Cc: linux-hexagon@vger.kernel.org
Cc: linux-ia64@vger.kernel.org
Cc: linux@lists.openrisc.net
Cc: linux-m32r-ja@ml.linux-m32r.org
Cc: linux-m32r@ml.linux-m32r.org
Cc: linux-m68k@lists.linux-m68k.org
Cc: linux-metag@vger.kernel.org
Cc: linux-mips@linux-mips.org
Cc: linux-parisc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s390@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: linux-xtensa@linux-xtensa.org
Cc: sparclinux@vger.kernel.org
Link: http://lkml.kernel.org/r/1404079773.2619.4.camel@buesod1.americas.hpqcorp.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-17 12:32:47 +02:00
Peter Zijlstra
4badad352a locking/mutex: Disable optimistic spinning on some architectures
The optimistic spin code assumes regular stores and cmpxchg() play nice;
this is found to not be true for at least: parisc, sparc32, tile32,
metag-lock1, arc-!llsc and hexagon.

There is further wreckage, but this in particular seemed easy to
trigger, so blacklist this.

Opt in for known good archs.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Jason Low <jason.low2@hp.com>
Cc: Waiman Long <waiman.long@hp.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: John David Anglin <dave.anglin@bell.net>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Davidlohr Bueso <davidlohr@hp.com>
Cc: stable@vger.kernel.org
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: sparclinux@vger.kernel.org
Link: http://lkml.kernel.org/r/20140606175316.GV13930@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-07-16 14:57:07 +02:00
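
The assumption dropped by the commit above can be illustrated with a small
standalone C sketch (hypothetical, not the kernel's mutex code): the
lock/steal side uses a compare-and-swap on a word that the unlock side
overwrites with a plain store, which is only safe where cmpxchg() is a
native atomic operation rather than being emulated with a hashed spinlock.

  /*
   * Hypothetical, simplified sketch -- not the kernel's mutex code.
   * __sync_bool_compare_and_swap() stands in for the kernel's cmpxchg().
   */
  #include <stdio.h>

  static long owner;                      /* 0 == unlocked */

  static int try_take(long me)
  {
      /* Atomic where cmpxchg() is a native CAS/LL-SC instruction; on the
       * architectures listed above the kernel primitive is (or was)
       * emulated with a hashed spinlock instead. */
      return __sync_bool_compare_and_swap(&owner, 0, me);
  }

  static void release(void)
  {
      /* Plain store: never takes the emulation spinlock, so it can race
       * with a concurrent try_take() on those architectures. */
      owner = 0;
  }

  int main(void)
  {
      if (try_take(42))
          release();
      printf("owner=%ld\n", owner);
      return 0;
  }
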
H. Peter Anvin
e0463e42d7 * Remove a duplicate copy of linux_banner from the arm64 EFI stub
which, apart from reducing code duplication, also stops the arm64 stub
    being rebuilt every time make is invoked - Ard Biesheuvel
 
  * Fix the EFI fdt code to not report a boot error if UEFI is
    unavailable since booting without UEFI parameters is a valid use case
    for non-UEFI platforms - Catalin Marinas
 
  * Include a .bss section in the EFI boot stub PE/COFF headers to fix a
    memory corruption bug - Michael Brown

Merge tag 'efi-urgent' into x86/urgent

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2014-07-14 13:45:44 -07:00
Marc Zyngier
d329de0933 arm64: KVM: enable trapping of all debug registers
Enable trapping of the debug registers, preventing the guests from
messing with the host state (while still allowing guests to use the
debug infrastructure as well).

Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:46 -07:00
Marc Zyngier
b0e626b380 arm64: KVM: implement lazy world switch for debug registers
Implement switching of the debug registers. While the number
of registers is massive, CPUs usually don't implement them all
(A57 has 6 breakpoints and 4 watchpoints, which gives us a total
of 22 registers "only").

Also, we only save/restore them when MDSCR_EL1 has debug enabled,
or when we've flagged the debug registers as dirty. It means that
most of the time, we only save/restore MDSCR_EL1.

Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:46 -07:00
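
The lazy policy described in the commit above roughly amounts to the
following decision, shown as a hypothetical standalone C sketch rather
than the actual world-switch code; the struct layout and helper names are
illustrative.

  /* Hypothetical sketch of the lazy save/restore decision -- not the
   * actual KVM world-switch code. */
  #include <stdbool.h>

  struct debug_state {
      unsigned long mdscr_el1;    /* always saved/restored */
      bool dirty;                 /* set by the debug trap handlers */
      /* breakpoint/watchpoint value+control registers would live here */
  };

  static bool need_full_switch(const struct debug_state *d, bool mdscr_debug_on)
  {
      /* Save/restore the whole register file only when the guest has
       * debug enabled in MDSCR_EL1 or a trap has dirtied the state;
       * otherwise the world switch touches MDSCR_EL1 only. */
      return mdscr_debug_on || d->dirty;
  }
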
Marc Zyngier
bdfb4b389c arm64: KVM: add trap handlers for AArch32 debug registers
Add handlers for all the AArch32 debug registers that are accessible
from EL0 or EL1. The code follows the same strategy as the AArch64
counterpart with regard to tracking the dirty state of the debug
registers.

Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:45 -07:00
Marc Zyngier
e6a9551760 arm64: KVM: check ordering of all system register tables
We now have multiple tables for the various system registers
we trap. Make sure we check the order of all of them, as it is
critical that we get the order right (been there, done that...).

Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:45 -07:00
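
Conceptually, the check in the commit above is a single pass over each
table verifying that entries are strictly ordered by their encoding. A
hypothetical sketch, where struct sys_reg_desc and cmp_sys_reg() are
simplified stand-ins rather than the kernel's definitions:

  /* Hypothetical sketch of an ordering check over a trap table -- struct
   * sys_reg_desc and cmp_sys_reg() are simplified stand-ins. */
  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  struct sys_reg_desc {
      unsigned int op0, op1, crn, crm, op2;
  };

  static int cmp_sys_reg(const struct sys_reg_desc *a,
                         const struct sys_reg_desc *b)
  {
      if (a->op0 != b->op0) return (int)a->op0 - (int)b->op0;
      if (a->op1 != b->op1) return (int)a->op1 - (int)b->op1;
      if (a->crn != b->crn) return (int)a->crn - (int)b->crn;
      if (a->crm != b->crm) return (int)a->crm - (int)b->crm;
      return (int)a->op2 - (int)b->op2;
  }

  /* Lookup and duplicate detection only work if every table is strictly
   * sorted, hence the >= check. */
  static bool check_table(const struct sys_reg_desc *tbl, size_t n)
  {
      for (size_t i = 1; i < n; i++) {
          if (cmp_sys_reg(&tbl[i - 1], &tbl[i]) >= 0) {
              fprintf(stderr, "table entry %zu out of order\n", i);
              return false;
          }
      }
      return true;
  }
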
Marc Zyngier
a9866ba0cd arm64: KVM: use separate tables for AArch32 32 and 64bit traps
An interesting "feature" of the CP14 encoding is that there is
an overlap between 32 and 64bit registers, meaning they cannot all
live in a single table as we did for CP15.

Create separate tables for 64bit CP14 and CP15 registers, and
let the top level handler use the right one.

Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:45 -07:00
Marc Zyngier
72564016aa arm64: KVM: common infrastructure for handling AArch32 CP14/CP15
As we're about to trap a bunch of CP14 registers, let's rework
the CP15 handling so it can be generalized and work with multiple
tables.

Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:44 -07:00
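
Taken together with the previous commit, the reworked handling described
above boils down to dispatching a trapped access against an ordered set
of tables (64bit CP14, 32bit CP14, CP15, ...). A hypothetical C sketch
with illustrative names only:

  /* Hypothetical sketch of multi-table trap dispatch -- the names and
   * table layout are illustrative, not the kernel's. */
  #include <stdbool.h>
  #include <stddef.h>

  struct reg_desc {
      unsigned int encoding;      /* simplified lookup key */
      bool (*handler)(unsigned long *val, bool is_write);
  };

  struct trap_table {
      const struct reg_desc *regs;
      size_t num;
  };

  static const struct reg_desc *find_reg(const struct trap_table *t,
                                         unsigned int enc)
  {
      for (size_t i = 0; i < t->num; i++)
          if (t->regs[i].encoding == enc)
              return &t->regs[i];
      return NULL;
  }

  /* The top-level handler walks the tables in order (e.g. 64bit CP14,
   * then 32bit CP14, then CP15) and uses the first match. */
  static bool handle_trap(const struct trap_table *tables, size_t ntables,
                          unsigned int enc, unsigned long *val, bool is_write)
  {
      for (size_t i = 0; i < ntables; i++) {
          const struct reg_desc *r = find_reg(&tables[i], enc);

          if (r)
              return r->handler(val, is_write);
      }
      return false;               /* undecoded access */
  }
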
Marc Zyngier
0c557ed498 arm64: KVM: add trap handlers for AArch64 debug registers
Add handlers for all the AArch64 debug registers that are accessible
from EL0 or EL1. The trapping code keeps track of the state of the
debug registers, allowing the switch code to implement a lazy
switching strategy.

Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:44 -07:00
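
The interplay between these handlers and the lazy world switch can be
sketched as follows (hypothetical names, not the kernel code): every
trapped access is serviced against an in-memory copy and flags the state
dirty so that the next world switch saves/restores the full register file.

  /* Hypothetical sketch of a debug-register trap handler -- illustrative
   * only, not the kernel code. */
  #include <stdbool.h>

  struct dbg_state {
      unsigned long bvr[16];      /* in-memory copy of breakpoint values */
      bool dirty;                 /* consumed by the lazy world switch */
  };

  static bool trap_bvr(struct dbg_state *d, int idx,
                       unsigned long *val, bool is_write)
  {
      if (is_write)
          d->bvr[idx] = *val;     /* guest write lands in the copy */
      else
          *val = d->bvr[idx];     /* guest read served from the copy */

      d->dirty = true;            /* force a full save/restore next switch */
      return true;                /* access handled */
  }
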
Marc Zyngier
51ba248164 arm64: move DBG_MDSCR_* to asm/debug-monitors.h
In order to be able to use the DBG_MDSCR_* macros from the KVM code,
move the relevant definitions to the obvious include file.

Also move the debug_el enum to a portion of the file that is guarded
by #ifndef __ASSEMBLY__, so that the file can also be included from
assembly code.

Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:43 -07:00
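
For readers unfamiliar with the pattern mentioned above, a header shared
between C and assembly typically looks like this generic sketch (not the
actual asm/debug-monitors.h contents): preprocessor constants stay
visible to .S files, while C-only constructs sit behind the
#ifndef __ASSEMBLY__ guard.

  /* Generic sketch of a header shared between C and assembly. */
  #ifndef __EXAMPLE_DEBUG_MONITORS_H
  #define __EXAMPLE_DEBUG_MONITORS_H

  /* Plain preprocessor constants remain usable from .S files. */
  #define DBG_EXAMPLE_ENABLE      (1 << 15)

  #ifndef __ASSEMBLY__
  /* Anything the assembler cannot parse goes behind this guard. */
  enum dbg_active_el {
      DBG_ACTIVE_EL0 = 0,
      DBG_ACTIVE_EL1,
  };
  #endif /* __ASSEMBLY__ */

  #endif /* __EXAMPLE_DEBUG_MONITORS_H */
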
Marc Zyngier
7609c1251f arm64: KVM: rename pm_fake handler to trap_raz_wi
pm_fake doesn't quite describe what the handler does (ignoring writes
and returning 0 for reads).

As we're about to use it (a lot) in a different context, rename it
with an (admittedly cryptic) name that makes sense for all users.

Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:43 -07:00
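
The behaviour behind the new name ("read as zero, writes ignored") is
tiny; a hypothetical sketch of such a handler:

  /* Hypothetical RAZ/WI handler sketch: reads return zero, writes are
   * silently discarded, and the access is reported as handled. */
  #include <stdbool.h>

  static bool trap_raz_wi(unsigned long *val, bool is_write)
  {
      if (!is_write)
          *val = 0;
      return true;
  }
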
Victor Kamensky
f0a3eaff71 ARM64: KVM: fix big endian issue in access_vm_reg for 32bit guest
Fix an issue with 32bit guests running on top of a BE KVM host.
The indexes of the high and low words of a 64bit cp15 register are
swapped in the big endian case, since the 64bit cp15 state is saved
and restored with a double-word write or read instruction.

Define a helper macro to access the low word of a 64bit cp15 register.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:43 -07:00
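
The helper described above boils down to picking the correct array index
for the low 32-bit half depending on byte order, given that the two
halves are saved/restored as one double word. A hypothetical standalone
sketch; the macro name and union layout are illustrative:

  /* Hypothetical sketch: index of the low 32-bit word of a 64bit cp15
   * register whose two halves are saved/restored as one double word. */
  #include <stdint.h>
  #include <stdio.h>

  #if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
  #define LOW_WORD_IDX(r)     ((r) + 1)   /* BE: low half is the second slot */
  #else
  #define LOW_WORD_IDX(r)     (r)         /* LE: low half is the first slot */
  #endif

  int main(void)
  {
      union {
          uint32_t w[2];      /* per-word view used by the 32bit code */
          uint64_t d;         /* double-word view used on save/restore */
      } reg;

      reg.d = 0x1122334455667788ULL;
      printf("low word: 0x%08x\n", (unsigned int)reg.w[LOW_WORD_IDX(0)]);
      return 0;
  }
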
Victor Kamensky
26c99af101 ARM64: KVM: set and get of sys registers in BE case
Since the size of all sys registers is always 8 bytes, the current
code is actually endian agnostic; just clean it up a bit. Remove the
comment about little endian and change the type of the pointer from
'void *' to 'u64 *' to enforce stronger type checking.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:42 -07:00
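
The type-checking point is small but concrete: with a u64 * parameter the
compiler diagnoses size mismatches that a void * would silently accept.
A hypothetical sketch, not the kernel's get/set code:

  /* Hypothetical sketch of the stronger typing -- not the kernel's
   * register get/set code.  With 'void *' any buffer would be accepted
   * silently; with 'uint64_t *' the compiler diagnoses mismatched
   * pointer types. */
  #include <stdint.h>

  static void get_sys_reg(uint64_t *dest, const uint64_t *reg)
  {
      *dest = *reg;               /* size fixed by the type: 8 bytes */
  }

  static void set_sys_reg(uint64_t *reg, const uint64_t *src)
  {
      *reg = *src;
  }
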
Victor Kamensky
ba083d20d8 ARM64: KVM: store kvm_vcpu_fault_info esr_el2 as word
The esr_el2 field of struct kvm_vcpu_fault_info has type u32, so it
should be stored as a word. The current code works in the LE case
because the existing store puts the least significant word of x1 into
esr_el2 and the most significant word of x1 into the next field, which
happens to be OK only because that field is updated again by the next
instruction. The existing code breaks in the BE case.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:41 -07:00
Victor Kamensky
b30070862e ARM64: KVM: MMIO support BE host running LE code
When the guest CPU runs in LE mode and the host runs in BE mode, we
need to byteswap the data so that reads and writes are emulated
correctly.

Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:41 -07:00
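
The fix described above reduces to a conditional byteswap of the MMIO
data whenever guest and host byte orders differ. A hypothetical
standalone sketch; the helper names are illustrative, not the kernel's
mmio code:

  /* Hypothetical sketch of endianness handling for emulated MMIO data --
   * helper names are illustrative, not the kernel's. */
  #include <stdint.h>

  static uint64_t bswap_len(uint64_t v, unsigned int len)
  {
      switch (len) {
      case 1:  return v;
      case 2:  return __builtin_bswap16((uint16_t)v);
      case 4:  return __builtin_bswap32((uint32_t)v);
      default: return __builtin_bswap64(v);
      }
  }

  /* Swap only when the guest's data endianness differs from the host's,
   * so the emulator sees the value the guest intended (and vice versa on
   * the way back). */
  static uint64_t mmio_data_to_host(uint64_t v, unsigned int len,
                                    int guest_is_be, int host_is_be)
  {
      return (guest_is_be != host_is_be) ? bswap_len(v, len) : v;
  }
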
Marc Zyngier
67b2abfedb arm64: KVM: vgic: enable GICv2 emulation on top of GICv3 hardware
Add the last missing bits that enable GICv2 emulation on top of
GICv3 hardware.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:37 -07:00
Marc Zyngier
754d377260 arm64: KVM: vgic: add GICv3 world switch
Introduce the GICv3 world switch code used to save/restore the
GICv3 context.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:37 -07:00
Marc Zyngier
b2fb1c0d37 KVM: ARM: vgic: add the GICv3 backend
Introduce the support code for emulating a GICv2 on top of GICv3
hardware.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-07-11 04:57:36 -07:00