Merge tag 'trace-ipi-tracepoints' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull IPI tracepoints for ARM from Steven Rostedt:
"Nicolas Pitre added generic tracepoints for tracing IPIs and updated
the arm and arm64 architectures. It required some minor updates to
the generic tracepoint system, so it had to wait for me to implement
them"
* tag 'trace-ipi-tracepoints' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
ARM64: add IPI tracepoints
ARM: add IPI tracepoints
tracepoint: add generic tracepoint definitions for IPI tracing
tracing: Do not do anything special with tracepoint_string when tracing is disabled
The strings used to list IPIs in /proc/interrupts are reused for tracing
purposes.
While at it, prevent a negative ipinr from escaping the range check
in handle_IPI().
Link: http://lkml.kernel.org/p/1406318733-26754-4-git-send-email-nicolas.pitre@linaro.org
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
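As a rough sketch of the range-check hardening described above (not the exact mainline code; NR_IPI, ipi_types and the function shape are taken as given from the arm port), an unsigned comparison rejects negative values as well as too-large ones:

void handle_IPI(int ipinr, struct pt_regs *regs)
{
	/*
	 * Casting to unsigned makes any negative ipinr compare as huge,
	 * so it fails the same bounds check as an out-of-range value.
	 */
	if ((unsigned int)ipinr < NR_IPI) {
		/* safe to index ipi_types[ipinr], trace and dispatch the IPI */
	}
}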
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull second round of KVM changes from Paolo Bonzini:
"Here are the PPC and ARM changes for KVM, which I separated because
they had small conflicts (respectively within KVM documentation, and
with 3.16-rc changes). Since they were all within the subsystem, I
took care of them.
Stephen Rothwell reported some snags in PPC builds, but they are all
fixed now; the latest linux-next report was clean.
New features for ARM include:
- KVM VGIC v2 emulation on GICv3 hardware
- Big-Endian support for arm/arm64 (guest and host)
- Debug Architecture support for arm64 (arm32 is on Christoffer's todo list)
And for PPC:
- Book3S: Good number of LE host fixes, enable HV on LE
- Book3S HV: Add in-guest debug support
This release drops support for KVM on the PPC440. As a result, the
PPC merge removes more lines than it adds. :)
I also included an x86 change, since Davidlohr tied it to an
independent bug report and the reporter quickly provided a Tested-by;
there was no reason to wait for -rc2"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (122 commits)
KVM: Move more code under CONFIG_HAVE_KVM_IRQFD
KVM: nVMX: fix "acknowledge interrupt on exit" when APICv is in use
KVM: nVMX: Fix nested vmexit ack intr before load vmcs01
KVM: PPC: Enable IRQFD support for the XICS interrupt controller
KVM: Give IRQFD its own separate enabling Kconfig option
KVM: Move irq notifier implementation into eventfd.c
KVM: Move all accesses to kvm::irq_routing into irqchip.c
KVM: irqchip: Provide and use accessors for irq routing table
KVM: Don't keep reference to irq routing table in irqfd struct
KVM: PPC: drop duplicate tracepoint
arm64: KVM: fix 64bit CP15 VM access for 32bit guests
KVM: arm64: GICv3: mandate page-aligned GICV region
arm64: KVM: GICv3: move system register access to msr_s/mrs_s
KVM: PPC: PR: Handle FSCR feature deselects
KVM: PPC: HV: Remove generic instruction emulation
KVM: PPC: BOOKEHV: rename e500hv_spr to bookehv_spr
KVM: PPC: Remove DCR handling
KVM: PPC: Expose helper functions for data/inst faults
KVM: PPC: Separate loadstore emulation from priv emulation
KVM: PPC: Handle magic page in kvmppc_ld/st
...
Pull security subsystem updates from James Morris:
"In this release:
- PKCS#7 parser for the key management subsystem from David Howells
- appoint Kees Cook as seccomp maintainer
- bugfixes and general maintenance across the subsystem"
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (94 commits)
X.509: Need to export x509_request_asymmetric_key()
netlabel: shorter names for the NetLabel catmap funcs/structs
netlabel: fix the catmap walking functions
netlabel: fix the horribly broken catmap functions
netlabel: fix a problem when setting bits below the previously lowest bit
PKCS#7: X.509 certificate issuer and subject are mandatory fields in the ASN.1
tpm: simplify code by using %*phN specifier
tpm: Provide a generic means to override the chip returned timeouts
tpm: missing tpm_chip_put in tpm_get_random()
tpm: Properly clean sysfs entries in error path
tpm: Add missing tpm_do_selftest to ST33 I2C driver
PKCS#7: Use x509_request_asymmetric_key()
Revert "selinux: fix the default socket labeling in sock_graft()"
X.509: x509_request_asymmetric_keys() doesn't need string length arguments
PKCS#7: fix sparse non static symbol warning
KEYS: revert encrypted key change
ima: add support for measuring and appraising firmware
firmware_class: perform new LSM checks
security: introduce kernel_fw_from_file hook
PKCS#7: Missing inclusion of linux/err.h
...
Pull ARM updates from Russell King:
"Included in this update:
- perf updates from Will Deacon:
The main changes are callchain stability fixes from Jean Pihet and
event mapping and PMU name rework from Mark Rutland
The latter is preparatory work for enabling some code re-use with
arm64 in the future.
- updates for nommu from Uwe Kleine-König:
Two different fixes for the same problem making some ARM nommu
configurations not boot since 3.6-rc1. The problem is that
user_addr_max returned the biggest available RAM address which
makes some copy_from_user variants fail to read from XIP memory.
- deprecate legacy OMAP DMA API, in preparation for its removal.
The popular drivers have been converted over, leaving a very small
number of rarely used drivers, which hopefully can be converted
during the next cycle with a bit more visibility (and hopefully
people popping out of the woodwork to help test)
- more tweaks for BE systems, particularly with the kernel image
format. In connection with this, I've cleaned up the way we
generate the linker script for the decompressor.
- removal of hard-coded assumptions of the kernel stack size, making
everywhere depend on the value of THREAD_SIZE_ORDER.
- MCPM updates from Nicolas Pitre.
- Make it easier to do proper CPU part number checks (which should
always include the vendor field).
- Assembly code optimisation - use the "bx" instruction when
returning from a function on ARMv6+ rather than "mov pc, reg".
- Save the last kernel misaligned fault location and report it via
the procfs alignment file.
- Clean up the way we create the initial stack frame, which is a
repeated pattern in several different locations.
- Support for 8-byte get_user(), needed for some DRM implementations.
- mcs locking from Will Deacon.
- Save and restore a few more Cortex-A9 registers (for errata
workarounds)
- Fix various aspects of the SWP emulation, and the ELF hwcap for the
SWP instruction.
- Update LPAE logic for pte_write and pmd_write to make it more
correct.
- Support for Broadcom Brahma15 CPU cores.
- ARM assembly crypto updates from Ard Biesheuvel"
* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (53 commits)
ARM: add comments to the early page table remap code
ARM: 8122/1: smp_scu: enable SCU standby support
ARM: 8121/1: smp_scu: use macro for SCU enable bit
ARM: 8120/1: crypto: sha512: add ARM NEON implementation
ARM: 8119/1: crypto: sha1: add ARM NEON implementation
ARM: 8118/1: crypto: sha1/make use of common SHA-1 structures
ARM: 8113/1: remove remaining definitions of PLAT_PHYS_OFFSET from <mach/memory.h>
ARM: 8111/1: Enable erratum 798181 for Broadcom Brahma-B15
ARM: 8110/1: do CPU-specific init for Broadcom Brahma15 cores
ARM: 8109/1: mm: Modify pte_write and pmd_write logic for LPAE
ARM: 8108/1: mm: Introduce {pte,pmd}_isset and {pte,pmd}_isclear
ARM: hwcap: disable HWCAP_SWP if the CPU advertises it has exclusives
ARM: SWP emulation: only initialise on ARMv7 CPUs
ARM: SWP emulation: always enable when SMP is enabled
ARM: 8103/1: save/restore Cortex-A9 CP15 registers on suspend/resume
ARM: 8098/1: mcs lock: implement wfe-based polling for MCS locking
ARM: 8091/2: add get_user() support for 8 byte types
ARM: 8097/1: unistd.h: relocate comments back to place
ARM: 8096/1: Describe required sort order for textofs-y (TEXT_OFFSET)
ARM: 8090/1: add revision info for PL310 errata 588369 and 727915
...
Merge tag 'kvm-arm-for-3.17' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm
KVM/ARM New features for 3.17 include:
- Fixes and code refactoring for stage2 kvm MMU unmap_range
- Support unmapping IPAs on deleting memslots for arm and arm64
- Support MMIO mappings in stage2 faults
- KVM VGIC v2 emulation on GICv3 hardware
- Big-Endian support for arm/arm64 (guest and host)
- Debug Architecture support for arm64 (arm32 is on Christoffer's todo list)
- Detect non page-aligned GICV regions and bail out (plugs guest-can-crash host bug)
Conflicts:
virt/kvm/arm/vgic.c [last minute cherry-pick from 3.17 to 3.16]
Pull ARM fixes from Russell King:
"A few fixes for ARM. Some of these are correctness issues:
- TLBs must be flushed after the old mappings are removed by the DMA
mapping code, but before the new mappings are established.
- An off-by-one entry error in the Keystone LPAE setup code.
Fixes include:
- ensuring that the identity mapping for LPAE does not remove the
kernel image from the identity map.
- preventing userspace from trapping into kgdb.
- fixing a preemption issue in the Intel iwmmxt code.
- fixing a build error with nommu.
Other changes include:
- Adding a note about which areas of memory are expected to be
accessible while the identity mapping tables are in place"
* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
ARM: 8124/1: don't enter kgdb when userspace executes a kgdb break instruction
ARM: idmap: add identity mapping usage note
ARM: 8115/1: LPAE: reduce damage caused by idmap to virtual memory layout
ARM: fix alignment of keystone page table fixup
ARM: 8112/1: only select ARM_PATCH_PHYS_VIRT if MMU is enabled
ARM: 8100/1: Fix preemption disable in iwmmxt_task_enable()
ARM: DMA: ensure that old section mappings are flushed from the TLB
The kgdb breakpoint hooks (kgdb_brk_fn and kgdb_compiled_brk_fn)
should only be entered when a kgdb break instruction is executed
from the kernel. Otherwise, if kgdb is enabled, a userspace program
can cause the kernel to drop into the debugger by executing either
KGDB_BREAKINST or KGDB_COMPILED_BREAK.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
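A minimal sketch of the guard described above (illustrative; the kgdb_handle_exception() arguments shown are assumptions, not copied from the patch):

static int kgdb_brk_fn(struct pt_regs *regs, unsigned int instr)
{
	if (user_mode(regs))
		return -1;	/* breakpoint came from userspace: don't enter kgdb */

	kgdb_handle_exception(1, SIGTRAP, 0, regs);
	return 0;
}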
With SCU standby enabled, SCU CLK will be turned off when all processors
are in WFI mode, and turned back on when any processor leaves WFI mode.
This behavior is preferable for the power efficiency of system idle, so
let's set the SCU standby bit in scu_enable() to enable the support.
Cortex-A9 earlier than r2p0 has no standby bit in SCU, so we need to
skip setting the bit for those.
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Acked-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Use macro instead of magic number for SCU enable bit.
Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Acked-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
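Taking the two SCU patches together, a sketch of the resulting scu_enable() logic might look like the following; the register offset and bit positions are stated here as assumptions for illustration:

#define SCU_CTRL		0x00	/* assumed offset of the SCU control register */
#define SCU_ENABLE		(1 << 0)
#define SCU_STANDBY_ENABLE	(1 << 5)	/* absent on Cortex-A9 earlier than r2p0 */

void scu_enable(void __iomem *scu_base)
{
	u32 scu_ctrl = readl_relaxed(scu_base + SCU_CTRL);

	/* A real implementation would skip the standby bit on Cortex-A9 < r2p0. */
	scu_ctrl |= SCU_ENABLE | SCU_STANDBY_ENABLE;
	writel_relaxed(scu_ctrl, scu_base + SCU_CTRL);
}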
Commit 1c2f87c225
(ARM: 8025/1: Get rid of meminfo) dropped the upper bound on
the number of memory banks that can be added as there was no
technical need in the kernel. It turns out, though, that some bootloaders
(specifically those on the arndale-octa exynos boards) may pass invalid memory
information and rely on the kernel to not parse this data. This is a
bug in the bootloader but we still need to work around this.
Work around this by introducing a dt_fixup function. This function
gets called before the flattened devicetree is scanned for memory
and the like. In this fixup function for exynos, limit the maximum
number of memory regions in the devicetree.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Tested-by: Andreas Färber <afaerber@suse.de>
[glikely: Added a comment and fixed up function name]
Signed-off-by: Grant Likely <grant.likely@linaro.org>
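A sketch of the mechanism described above, assuming the of_fdt_limit_memory() helper and an illustrative region limit:

static void __init exynos_dt_fixup(void)
{
	/*
	 * Some bootloaders pass bogus extra memory banks; cap how many
	 * regions are parsed from the flattened devicetree (the limit
	 * shown here is illustrative).
	 */
	of_fdt_limit_memory(8);
}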
Broadcom Brahma-B15 (r0p0..r0p2) is also affected by Cortex-A15
erratum 798181, so enable the workaround for Brahma-B15.
Signed-off-by: Gregory Fong <gregory.0xf0@gmail.com>
Acked-by: Marc Carino <marc.ceeeee@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
commit 431a84b1a4
("ARM: 8034/1: Disable preemption in iwmmxt_task_enable()")
introduced macros {inc,dec}_preempt_count to iwmmxt_task_enable
to make it run with preemption disabled.
Unfortunately, other functions in iwmmxt.S also use concan_{save,dump,load}
sections located in iwmmxt_task_enable() to deal with the iWMMXt coprocessor.
This causes an unbalanced preempt_count due to excessive dec_preempt_count
and destroyed return addresses in callers of concan_ labels due to a register
collision:
Linux version 3.16.0-rc3-00062-gd92a333-dirty (jef@armhf) (gcc version 4.8.3 (Debian 4.8.3-4) ) #5 PREEMPT Thu Jul 3 19:46:39 CEST 2014
CPU: ARMv7 Processor [560f5815] revision 5 (ARMv7), cr=10c5387d
CPU: PIPT / VIPT nonaliasing data cache, PIPT instruction cache
Machine model: SolidRun CuBox
...
PJ4 iWMMXt v2 coprocessor enabled.
...
Unable to handle kernel paging request at virtual address fffffffe
pgd = bb25c000
[fffffffe] *pgd=3bfde821, *pte=00000000, *ppte=00000000
Internal error: Oops: 80000007 [#1] PREEMPT ARM
Modules linked in:
CPU: 0 PID: 62 Comm: startpar Not tainted 3.16.0-rc3-00062-gd92a333-dirty #5
task: bb230b80 ti: bb256000 task.ti: bb256000
PC is at 0xfffffffe
LR is at iwmmxt_task_copy+0x44/0x4c
pc : [<fffffffe>] lr : [<800130ac>] psr: 40000033
sp : bb257de8 ip : 00000013 fp : bb257ea4
r10: bb256000 r9 : fffffdfe r8 : 76e898e6
r7 : bb257ec8 r6 : bb256000 r5 : 7ea12760 r4 : 000000a0
r3 : ffffffff r2 : 00000003 r1 : bb257df8 r0 : 00000000
Flags: nZcv IRQs on FIQs on Mode SVC_32 ISA Thumb Segment user
Control: 10c5387d Table: 3b25c019 DAC: 00000015
Process startpar (pid: 62, stack limit = 0xbb256248)
This patch fixes the issue by moving concan_{save,dump,load} into separate
code sections and making iwmmxt_task_enable() call them in the same way the
other functions use concan_ symbols. The test for valid ownership is moved
to concan_save and is safe for the other user of it, iwmmxt_task_disable().
The register collision is also resolved by moving concan_ symbols as
{inc,dec}_preempt_count are now local to iwmmxt_task_enable().
Fixes: 431a84b1a4 ("ARM: 8034/1: Disable preemption in iwmmxt_task_enable()")
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
When the CPU has support for the byte and word exclusive operations,
userspace should use them in preference to the SWP instructions.
Detect the presence of these instructions by reading the ISAR CPU ID
registers and adjust the ELF HWCAP mask appropriately.
Note that ARM1136 < r1p0 has no ISAR4, so this is explicitly detected
and the test disabled, leaving the current situation where HWCAP_SWP
is set.
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
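A heavily hedged sketch of the idea; the particular ISAR register and field checked below are assumptions (the commit text only says the ISAR CPU ID registers are consulted):

static void __init elf_hwcap_fixup(void)
{
	/* SynchPrim field non-zero: byte/word exclusives are implemented. */
	if (((read_cpuid_ext(CPUID_EXT_ISAR3) >> 12) & 0xf) > 0)
		elf_hwcap &= ~HWCAP_SWP;	/* prefer exclusives over SWP */
}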
CPUs prior to ARMv7 do not have the ability to trap SWP instructions, so
it's pointless to initialise this code on them.
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 78d7530ac3 ("ARM: Clean up linker script using new linker script
macros.") modified the arm kernel linker script to use the STABS_DEBUG
macro, but left a .comment section definition. As STABS_DEBUG defines
the .comment section in an identical way, the second section definition
is redundant and can be removed.
This patch removes the redundant .comment section definition.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Use the newly-introduced frame_pointer macro to extract
the correct FP based on whether we are in THUMB2 mode or not.
Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Make the unwind code use the correct API so that the frame pointer
is extracted from the correct register.
Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Make use of the arm_get_current_stackframe API so that
the frame pointer is correctly referenced in THUMB2 mode.
Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Make the perf backend use the API so that it correctly references the FP
when in THUMB2 mode.
Signed-off-by: Nikolay Borisov <Nikolay.Borisov@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARMv6 and greater introduced a new instruction ("bx") which can be used
to return from function calls. Recent CPUs perform better when the
"bx lr" instruction is used rather than the "mov pc, lr" instruction,
and this sequence is strongly recommended to be used by the ARM
architecture manual (section A.4.1.1).
We provide a new macro "ret" with all its variants for the condition
code which will resolve to the appropriate instruction.
Rather than doing this piecemeal, and missing some instances, change all
the "mov pc" instances to use the new macro, with the exception of
the "movs" instruction and the kprobes code. This allows us to detect
the "mov pc, lr" case and fix it up - and also gives us the possibility
of deploying this for other registers depending on the CPU selection.
Reported-by: Will Deacon <will.deacon@arm.com>
Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
Tested-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Ensure that platform maintainers check the CPU part number in the right
manner: the CPU part number is meaningless without also checking the
CPU implement(e|o)r (choose your preferred spelling!). Provide an
interface which returns both the implementer and part number together,
and update the definitions to include the implementer.
Mark the old function as being deprecated... indeed, using the old
function with the definitions will now always evaluate as false, so
people must update their un-merged code to the new function. While
this could be avoided by adding new definitions, we'd also have to
create new names for them which would be awkward.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
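A sketch of the new-style check; read_cpuid_part() is the interface the commit describes, and the Cortex-A9 value shown (implementer 0x41 plus part 0xC09) is given as an assumption for illustration:

#define ARM_CPU_PART_CORTEX_A9	0x4100c090	/* implementer + part number (assumed) */

static bool cpu_is_cortex_a9(void)
{
	return read_cpuid_part() == ARM_CPU_PART_CORTEX_A9;
}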
HSCTLR.EE is defined as bit[25]; see ARM manual DDI0606C.b (p1590).
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Li Liu <john.liuli@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
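In code form, the bit the commit refers to would be expressed as the following illustrative define (not quoted from the patch):

#define HSCTLR_EE	(1U << 25)	/* Hyp System Control Register: Exception Endianness */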
In order to make way for the GICv3 registers, move the v2-specific
registers to their own structure.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Under perf, the fp unwinding scheme requires access to user space memory
and can provoke a pagefault via a call to __copy_from_user_inatomic from
user_backtrace. This unwinding can take place in response to an interrupt
(__perf_event_overflow). This is undesirable as we may already have
mmap_sem held for write. One example is a process that calls mprotect
just as the PMU counters overflow.
An example that can provoke this behaviour:
perf record -e event:tocapture --call-graph fp ./application_to_test
This patch addresses this issue by disabling pagefaults briefly in
user_backtrace (as is done in the other architectures: ARM64, x86, Sparc etc.).
Without the patch a deadlock occurs when __perf_event_overflow is called
while reading the data from the user space:
[ INFO: possible recursive locking detected ]
3.16.0-rc2-00038-g0ed7ff6 #46 Not tainted
---------------------------------------------
stress/1634 is trying to acquire lock:
(&mm->mmap_sem){++++++}, at: [<c001dc04>] do_page_fault+0xa8/0x428
but task is already holding lock:
(&mm->mmap_sem){++++++}, at: [<c00f4098>] SyS_mprotect+0xa8/0x1c8
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&mm->mmap_sem);
lock(&mm->mmap_sem);
*** DEADLOCK ***
May be due to missing lock nesting notation
2 locks held by stress/1634:
#0: (&mm->mmap_sem){++++++}, at: [<c00f4098>] SyS_mprotect+0xa8/0x1c8
#1: (rcu_read_lock){......}, at: [<c00c29dc>] __perf_event_overflow+0x120/0x294
stack backtrace:
CPU: 1 PID: 1634 Comm: stress Not tainted 3.16.0-rc2-00038-g0ed7ff6 #46
[<c0017c8c>] (unwind_backtrace) from [<c0012eec>] (show_stack+0x20/0x24)
[<c0012eec>] (show_stack) from [<c04de914>] (dump_stack+0x7c/0x98)
[<c04de914>] (dump_stack) from [<c006a360>] (__lock_acquire+0x1484/0x1cf0)
[<c006a360>] (__lock_acquire) from [<c006b14c>] (lock_acquire+0xa4/0x11c)
[<c006b14c>] (lock_acquire) from [<c04e3880>] (down_read+0x40/0x7c)
[<c04e3880>] (down_read) from [<c001dc04>] (do_page_fault+0xa8/0x428)
[<c001dc04>] (do_page_fault) from [<c00084ec>] (do_DataAbort+0x44/0xa8)
[<c00084ec>] (do_DataAbort) from [<c0013a1c>] (__dabt_svc+0x3c/0x60)
Exception stack(0xed7c5ae0 to 0xed7c5b28)
5ae0: ed7c5b5c b6dadff4 ffffffec 00000000 b6dadff4 ebc08000 00000000 ebc08000
5b00: 0000007e 00000000 ed7c4000 ed7c5b94 00000014 ed7c5b2c c001a438 c0236c60
5b20: 00000013 ffffffff
[<c0013a1c>] (__dabt_svc) from [<c0236c60>] (__copy_from_user+0xa4/0x3a4)
Acked-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
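A minimal sketch of the fix described above, assuming a small helper around the atomic user copy in the arm perf backend (the helper name is illustrative):

static int copy_frame_tail(void *buf, const void __user *tail, size_t size)
{
	int err;

	/*
	 * With page faults disabled, a faulting copy returns an error
	 * instead of calling do_page_fault() and taking mmap_sem.
	 */
	pagefault_disable();
	err = __copy_from_user_inatomic(buf, tail, size);
	pagefault_enable();

	return err;
}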
An event may occur when an mm is already released.
As per commit 20afc60f89
'x86, perf: Check that current->mm is alive before getting user callchain'
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
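In code, the check amounts to something like the following at the top of the callchain walker (sketch only):

	if (!current->mm)
		return;		/* mm already released; nothing to walk */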
Currently the krait_pmu_{enable,disable}_event functions use the global
cpu_pmu variable while all the other pmu enable/disable functions
derive this from the event argument.
This patch brings the Krait functions into line with the rest of the PMU
backends by deriving the address of the pmu from the event argument.
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
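A sketch of the pattern being adopted; the function body is elided, and to_arm_pmu() is the existing container_of-style accessor:

static void krait_pmu_enable_event(struct perf_event *event)
{
	/* Derive the PMU from the event instead of the global cpu_pmu. */
	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
	struct hw_perf_event *hwc = &event->hw;

	/* ... program the counter selected by hwc->idx via cpu_pmu ... */
}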
When described in DT, PMUs are given specific compatible strings
(e.g. "arm,cortex-a15-pmu") which makes it very easy to reorganise the
way individual PMUs are handled (i.e. we can easily split them into
separate drivers). The same is not true of PMUs described in board
files, which all use the platform_device_id "arm-pmu" and must all
be handled by the same driver.
To enable splitting the ARMv6, ARMv7, and XScale PMU drivers we need
board files to identify which variant they provide. As a first step,
this patch adds new platform_device_id values: "armv6-pmu", "armv7-pmu",
and "xscale-pmu".
Once board files are moved over and all existing uses of "arm-pmu" are
gone, we can split the existing driver apart.
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
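A sketch of the kind of id-table entries involved; the table name is illustrative, while the strings are the ones named in the commit:

static const struct platform_device_id cpu_pmu_plat_device_ids[] = {
	{ .name = "arm-pmu" },		/* legacy catch-all */
	{ .name = "armv6-pmu" },
	{ .name = "armv7-pmu" },
	{ .name = "xscale-pmu" },
	{ },
};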
The perf userspace tools can't handle dashes or spaces in PMU names,
which conflicts with the current naming scheme in the arm perf backend.
This prevents these PMUs from being accessed by name from the perf
tools. Additionally the ARMv6 pmus are named "v6", which does not fully
distinguish them in the sys/bus/event_source namespace.
This patch renames the PMUs consistently to a lower case form with
underscores, e.g. "armv6_1176", "armv7_cortex_a9". This is both readily
accepted by today's perf tool, and far easier to type than the
(apparently unused) convention in use previously. The OProfile name
conversion code is updated to handle this.
Due to a copy-paste error involving two "xscale1" entries, "xscale2" has
never been matched by the OProfile name mapping. While we're
updating names, this is corrected.
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[sachin: fixed missing semicolons in armv6 backend]
Signed-off-by: Sachin Kamat <sachin.kamat@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that we have macros for declaring fully invalid event maps, put them
to work for the XScale PMU event maps. While this necessitates repeating
common indices, we no longer need to refer to *_UNSUPPORTED events at
all, and it makes it possible for the event maps to fit on a single page
on a reasonably sized monitor.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
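This and the two following event-map patches use the same pattern, roughly sketched below; the macro and counter names are illustrative of the approach, not quoted from the patches:

#define PERF_MAP_ALL_UNSUPPORTED \
	[0 ... PERF_COUNT_HW_MAX - 1] = HW_OP_UNSUPPORTED

static const unsigned xscale_perf_map[PERF_COUNT_HW_MAX] = {
	PERF_MAP_ALL_UNSUPPORTED,
	/* Supported events override individual slots afterwards. */
	[PERF_COUNT_HW_CPU_CYCLES]	= XSCALE_PERFCTR_CCNT,
	[PERF_COUNT_HW_INSTRUCTIONS]	= XSCALE_PERFCTR_INSTRUCTION,
};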
Now that we have macros for declaring fully invalid event maps, put them
to work for all the ARMv6 PMU event maps. While this necessitates
repeating common indices, we no longer need to refer to *_UNSUPPORTED
events at all, and it makes it possible for the event maps to fit on a
single page on a reasonably sized monitor.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Now that we have macros for declaring fully invalid event maps, put them
to work for all the ARMv7 PMU event maps. While this necessitates
repeating common indices, we no longer need to refer to *_UNSUPPORTED
events at all, and it makes it possible for the event maps to fit on a
single page on a reasonably sized monitor.
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Conditionally compile kprobes test cases for ARMv5 instructions to avoid
compilation errors with ARMv4 targets like:
/tmp/cc7Tx8ST.s:16740: Error: selected processor does not support ARM mode `clz r0,r0'
Signed-off-by: Jon Medhurst <tixy@linaro.org>
ARM data processing instructions which have a register specified shift
are defined as UNPREDICTABLE if PC is used for any register, not just
the shift value as the code was previously assuming. This issue manifests
on A15 devices as either test case failures or undefined instructions
aborts.
Reported-by: David Long <dave.long@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Due to a long-standing issue with Thumb symbol lookup [1] the jprobes
tests fail when built into a kernel compiled in Thumb mode. (They work
fine for ARM mode kernels or for Thumb when built as a loadable module.)
Rather than have this problem terminate testing prematurely, let's instead
emit an error message and carry on with the main kprobes tests, delaying
the final failure report until the end.
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2011-August/063026.html
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Commit 143e1e28cb (sched: Rework sched_domain topology definition)
introduced a number of functions with a return value of 'const int'.
gcc doesn't know what to do with that and, if the kernel is compiled
with W=1, complains with the following warnings whenever sched.h
is included.
include/linux/sched.h:875:25: warning: type qualifiers ignored on function return type
include/linux/sched.h:882:25: warning: type qualifiers ignored on function return type
include/linux/sched.h:889:25: warning: type qualifiers ignored on function return type
include/linux/sched.h:1002:21: warning: type qualifiers ignored on function return type
Commits fb2aa855 (sched, ARM: Create a dedicated scheduler topology table)
and 607b45e9a (sched, powerpc: Create a dedicated topology table) introduce
the same warning in the arm and powerpc code.
Drop 'const' from the function declarations to fix the problem.
The fixes for all three offending commits have to be applied together to
avoid compilation failures for the affected architectures.
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1403658329-13196-1-git-send-email-linux@roeck-us.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
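A before/after sketch of the kind of declaration involved (the function body shown is illustrative):

/* Before: gcc with W=1 warns "type qualifiers ignored on function return type". */
static inline const int cpu_smt_flags(void)
{
	return SD_SHARE_CPUCAPACITY | SD_SHARE_POWERDOMAIN;
}

/* After: drop the meaningless qualifier on the return type. */
static inline int cpu_smt_flags(void)
{
	return SD_SHARE_CPUCAPACITY | SD_SHARE_POWERDOMAIN;
}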
On the syscall tracing path, we call out to secure_computing() to allow
seccomp to check the syscall number being attempted. As part of this, a
SIGTRAP may be sent to the tracer and the syscall could be re-written by
a subsequent SET_SYSCALL ptrace request. Unfortunately, this new syscall
is ignored by the current code unless TIF_SYSCALL_TRACE is also set on
the current thread.
This patch slightly reworks the enter path of the syscall tracing code
so that we always reload the syscall number from
current_thread_info()->syscall after the potential ptrace traps.
Acked-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
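A sketch of the reworked enter path; hook order and helpers are condensed from the description above rather than copied from the patch:

asmlinkage int syscall_trace_enter(struct pt_regs *regs, int scno)
{
	current_thread_info()->syscall = scno;

	/*
	 * secure_computing() may stop the task, and a tracer may rewrite
	 * the syscall number with a SET_SYSCALL request while we are stopped.
	 */
	if (secure_computing(scno) == -1)
		return -1;

	if (test_thread_flag(TIF_SYSCALL_TRACE) &&
	    tracehook_report_syscall_entry(regs))
		return -1;

	/* Reload rather than returning scno: it may have been rewritten. */
	return current_thread_info()->syscall;
}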
GCC 4.6.4 spits out the following warning when building perf_event_v7.c:
arch/arm/kernel/perf_event_v7.c: In function 'krait_pmu_get_event_idx':
arch/arm/kernel/perf_event_v7.c:1927:6: warning: 'bit' may be used uninitialized in this function
While upgrading the version of gcc may solve this, the code can also be
organised to be more efficient by not carrying more local variables than
is necessary across the armv7pmu_get_event_idx function call. If we set
'bit' to -1 (which is invalid for clear_bit) we can use that as an
indication of whether we need to clear a bit after this function.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
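In outline, the reorganisation described above amounts to the following sketch; krait_event_to_bit() is named here purely for illustration:

	int idx, bit = -1;	/* -1: no Krait region bit was claimed */

	if (krait_event) {
		bit = krait_event_to_bit(event, region, group);
		if (test_and_set_bit(bit, cpuc->used_mask))
			return -EAGAIN;
	}

	idx = armv7pmu_get_event_idx(cpuc, event);
	if (idx < 0 && bit >= 0)
		clear_bit(bit, cpuc->used_mask);

	return idx;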
Pull more scheduler updates from Ingo Molnar:
"Second round of scheduler changes:
- try-to-wakeup and IPI reduction speedups, from Andy Lutomirski
- continued power scheduling cleanups and refactorings, from Nicolas
Pitre
- misc fixes and enhancements"
* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
sched/deadline: Delete extraneous extern for to_ratio()
sched/idle: Optimize try-to-wake-up IPI
sched/idle: Simplify wake_up_idle_cpu()
sched/idle: Clear polling before descheduling the idle thread
sched, trace: Add a tracepoint for IPI-less remote wakeups
cpuidle: Set polling in poll_idle
sched: Remove redundant assignment to "rt_rq" in update_curr_rt(...)
sched: Rename capacity related flags
sched: Final power vs. capacity cleanups
sched: Remove remaining dubious usage of "power"
sched: Let 'struct sched_group_power' care about CPU capacity
sched/fair: Disambiguate existing/remaining "capacity" usage
sched/fair: Change "has_capacity" to "has_free_capacity"
sched/fair: Remove "power" from 'struct numa_stats'
sched: Fix signedness bug in yield_to()
sched/fair: Use time_after() in record_wakee()
sched/balancing: Reduce the rate of needless idle load balancing
sched/fair: Fix unlocked reads of some cfs_b->quota/period
Pull more perf updates from Ingo Molnar:
"A second round of perf updates:
- wide reaching kprobes sanitization and robustization, with the hope
of fixing all 'probe this function crashes the kernel' bugs, by
Masami Hiramatsu.
- uprobes updates from Oleg Nesterov: tmpfs support, corner case
fixes and robustization work.
- perf tooling updates and fixes from Jiri Olsa, Namhyung Kim, Arnaldo
et al:
* Add support to accumulate hist periods (Namhyung Kim)
* various fixes, refactorings and enhancements"
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (101 commits)
perf: Differentiate exec() and non-exec() comm events
perf: Fix perf_event_comm() vs. exec() assumption
uprobes/x86: Rename arch_uprobe->def to ->defparam, minor comment updates
perf/documentation: Add description for conditional branch filter
perf/x86: Add conditional branch filtering support
perf/tool: Add conditional branch filter 'cond' to perf record
perf: Add new conditional branch filter 'PERF_SAMPLE_BRANCH_COND'
uprobes: Teach copy_insn() to support tmpfs
uprobes: Shift ->readpage check from __copy_insn() to uprobe_register()
perf/x86: Use common PMU interrupt disabled code
perf/ARM: Use common PMU interrupt disabled code
perf: Disable sampled events if no PMU interrupt
perf: Fix use after free in perf_remove_from_context()
perf tools: Fix 'make help' message error
perf record: Fix poll return value propagation
perf tools: Move elide bool into perf_hpp_fmt struct
perf tools: Remove elide setup for SORT_MODE__MEMORY mode
perf tools: Fix "==" into "=" in ui_browser__warning assignment
perf tools: Allow overriding sysfs and proc finding with env var
perf tools: Consider header files outside perf directory in tags target
...
Merge tag 'llvmlinux-for-v3.16' of git://git.linuxfoundation.org/llvmlinux/kernel
Pull LLVM patches from Behan Webster:
"Next set of patches to support compiling the kernel with clang.
They've been soaking in linux-next since the last merge window.
More still in the works for the next merge window..."
* tag 'llvmlinux-for-v3.16' of git://git.linuxfoundation.org/llvmlinux/kernel:
arm, unwind, LLVMLinux: Enable clang to be used for unwinding the stack
ARM: LLVMLinux: Change "extern inline" to "static inline" in glue-cache.h
all: LLVMLinux: Change DWARF flag to support gcc and clang
net: netfilter: LLVMLinux: vlais-netfilter
crypto: LLVMLinux: aligned-attribute.patch
Patch to prevent a buggy-compiler warning when using clang with the
ARM_UNWIND option.
Clang defines (at least on the current trunk) __GNUC__, __GNUC_MINOR__, and
__GNUC_PATCHLEVEL__ as 4, 2, and 1 respectively.
That GCC version is flagged as buggy, but the issue does not affect clang,
so the check behaves as it did before unless __clang__ is defined, in which
case the GCC version is not reported as a problem.
Signed-off-by: Mark Charlebois <charlebm@gmail.com>
Signed-off-by: Behan Webster <behanw@converseincode.com>
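A sketch of the guard being described; the version bounds and warning text are illustrative, not quoted from the patch:

#if (__GNUC__ == 4 && __GNUC_MINOR__ <= 2) && !defined(__clang__)
#warning Your compiler is known to miscompile the unwind code; please upgrade.
#endif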