Some newer machines do not advertise legacy timers. The kernel can handle
that situation if the TSC and the CPU frequency are enumerated by CPUID or
MSRs and the CPU supports TSC deadline timer. If the CPU does not support
TSC deadline timer the local APIC timer frequency has to be known as well.
Some Ryzen machines do not advertise legacy timers, but there is no
reliable way to determine the bus frequency which feeds the local APIC
timer when the machine allows overclocking of that frequency.
As there is no legacy timer, the local APIC timer calibration crashes due to
a NULL pointer dereference when accessing the not-installed global clock
event device.
Switch the calibration loop to a non-interrupt-based one, which polls
either the TSC (if the frequency is known) or jiffies. The latter requires a
global clockevent. As the machines which do not have a global clockevent
installed have a known TSC frequency, this is a non-issue. For older machines
where the TSC frequency is not known, there is no known case where the legacy
timers do not exist, as that would have been reported long ago.
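A rough sketch of the polling idea (illustrative only; the function and
variable names below are not the actual kernel ones):

#include <asm/apic.h>           /* apic_read(), APIC_TMCCT */
#include <asm/msr.h>            /* rdtsc() */
#include <asm/processor.h>      /* cpu_relax() */

/* Poll the TSC for roughly 100ms and measure how far the local APIC timer
 * counted down in that window, instead of waiting for PIT/HPET interrupts
 * which may not exist on these machines. */
static unsigned long apic_ticks_per_second(unsigned long tsc_khz)
{
        unsigned long long deadline = rdtsc() + (unsigned long long)tsc_khz * 100;
        unsigned long apic_start = apic_read(APIC_TMCCT);

        while (rdtsc() < deadline)
                cpu_relax();

        /* The APIC timer counts down; scale the ~100ms delta to one second. */
        return (apic_start - apic_read(APIC_TMCCT)) * 10;
}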
Reported-by: Daniel Drake <drake@endlessm.com>
Reported-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Daniel Drake <drake@endlessm.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1908091443030.21433@nanos.tec.linutronix.de
Link: http://bugzilla.opensuse.org/show_bug.cgi?id=1142926#c12
Dave Hansen spelled out the rules in an e-mail:
https://lkml.kernel.org/r/91eefbe4-e32b-d762-be4d-672ff915db47@intel.com
Copy those right into the <asm/intel-family.h> file to make it easy for
people to find them.
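As a reminder of how those defines get consumed, a typical (purely
illustrative) check looks like this:

#include <linux/types.h>
#include <asm/intel-family.h>
#include <asm/processor.h>

/* Illustrative use of one of the model defines; the header simply maps
 * model numbers to readable names following the documented naming rules. */
static bool is_skylake_server(struct cpuinfo_x86 *c)
{
        return c->x86_vendor == X86_VENDOR_INTEL &&
               c->x86 == 6 &&
               c->x86_model == INTEL_FAM6_SKYLAKE_X;
}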
Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190815224704.GA10025@agluck-desk2.amr.corp.intel.com
Recent gcc compilers (gcc 9.1) generate warnings about an out-of-bounds
memset if the memset goes across several fields of a struct. This
generated a couple of warnings on x86_64 builds in sanitize_boot_params().
Fix this by explicitly saving the fields in struct boot_params
that are intended to be preserved, and zeroing all the rest.
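The pattern looks roughly like this (a sketch only; the real
sanitize_boot_params() preserves a longer, documented list of fields):

#include <linux/string.h>
#include <asm/bootparam.h>

/* Save the fields that must survive, wipe the whole struct, restore. */
static void sanitize_sketch(struct boot_params *bp)
{
        struct setup_header hdr = bp->hdr;
        __u8 e820_entries = bp->e820_entries;

        memset(bp, 0, sizeof(*bp));

        bp->hdr = hdr;
        bp->e820_entries = e820_entries;
}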
[ tglx: Tagged for stable as it breaks the warning free build there as well ]
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20190731054627.5627-2-jhubbard@nvidia.com
/home/tglx/work/kernel/linus/linux/arch/x86/math-emu/errors.c: In function ‘FPU_printall’:
/home/tglx/work/kernel/linus/linux/arch/x86/math-emu/errors.c:187:9: warning: this statement may fall through [-Wimplicit-fallthrough=]
tagi = FPU_Special(r);
~~~~~^~~~~~~~~~~~~~~~
/home/tglx/work/kernel/linus/linux/arch/x86/math-emu/errors.c:188:3: note: here
case TAG_Valid:
^~~~
/home/tglx/work/kernel/linus/linux/arch/x86/math-emu/fpu_trig.c: In function ‘fyl2xp1’:
/home/tglx/work/kernel/linus/linux/arch/x86/math-emu/fpu_trig.c:1353:7: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (denormal_operand() < 0)
^
/home/tglx/work/kernel/linus/linux/arch/x86/math-emu/fpu_trig.c:1356:3: note: here
case TAG_Zero:
Remove the pointless 'break;' after 'continue;' while at it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Fix
arch/x86/kernel/apic/probe_32.c: In function ‘default_setup_apic_routing’:
arch/x86/kernel/apic/probe_32.c:146:7: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (!APIC_XAPIC(version)) {
^
arch/x86/kernel/apic/probe_32.c:151:3: note: here
case X86_VENDOR_HYGON:
^~~~
for 32-bit builds.
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190811154036.29805-1-bp@alien8.de
Currently, failure of cpuhp_setup_state() is ignored and the syscore ops
and the control interfaces can still be added even after the failure. But
this error handling causes a few issues:
1. The CPUs may have different values in the IA32_UMWAIT_CONTROL
MSR because there is no way to roll back the control MSR on
the CPUs which already set the MSR before the failure.
2. If the sysfs interface is added successfully, there will be a mismatch
between the global control value and the control MSR:
- The interface shows the default global control value. But,
the control MSR is not set to the value because the CPU online
function, which is supposed to set the MSR to the value,
is not installed.
- If the sysadmin changes the global control value through
the interface, the control MSR on all current online CPUs is
set to the new value. But, the control MSR on newly onlined CPUs
after the value change will not be set to the new value due to
lack of the CPU online function.
3. On resume from suspend/hibernation, the boot CPU restores the control
MSR to the global control value through the syscore ops. But, the
control MSR on all APs is not set due to lack of the CPU online
function.
To solve the issues and enforce consistent behavior on the failure
of the CPU hotplug setup, make the following changes:
1. Cache the original control MSR value which is configured by
hardware or BIOS before kernel boot. This value is likely to
be 0. But it could be a different number as well. Cache the
control MSR only once before the MSR is changed.
2. Add the CPU offline function so that the MSR is restored to the
original control value on all CPUs on the failure.
3. On the failure, exit from umwait_init() so that the syscore ops
and the control interfaces are not added.
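A condensed sketch of the resulting flow (helper names are illustrative,
not necessarily the ones used in the patch):

#include <linux/cpuhotplug.h>
#include <linux/init.h>
#include <asm/msr.h>

static u64 orig_umwait_control;         /* hardware/BIOS value, cached once */
static u64 umwait_control_val;          /* global control value */

static int umwait_cpu_online(unsigned int cpu)
{
        wrmsrl(MSR_IA32_UMWAIT_CONTROL, umwait_control_val);
        return 0;
}

static int umwait_cpu_offline(unsigned int cpu)
{
        /* Roll back to the original value when the state is torn down. */
        wrmsrl(MSR_IA32_UMWAIT_CONTROL, orig_umwait_control);
        return 0;
}

static int __init umwait_init_sketch(void)
{
        int ret;

        rdmsrl(MSR_IA32_UMWAIT_CONTROL, orig_umwait_control);

        ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "umwait:online",
                                umwait_cpu_online, umwait_cpu_offline);
        if (ret < 0)
                return ret;     /* bail out: no syscore ops, no sysfs files */

        /* register_syscore_ops() and the sysfs interface would follow here. */
        return 0;
}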
Reported-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/1565401237-60936-1-git-send-email-fenghua.yu@intel.com
Merge tag 'riscv/for-v5.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux
Pull RISC-V updates from Paul Walmsley:
"A few minor RISC-V updates for v5.3-rc4:
- Remove __udivdi3() from the 32-bit Linux port, converting the only
upstream user to use do_div(), per Linux policy
- Convert the RISC-V standard clocksource away from per-cpu data
structures, since only one is used by Linux, even on a multi-CPU
system
- A set of DT binding updates that remove an obsolete text binding in
favor of a YAML binding, fix a bogus compatible string in the
schema (thus fixing a "make dtbs_check" warning), and clarifies the
future values expected in one of the RISC-V CPU properties"
* tag 'riscv/for-v5.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
dt-bindings: riscv: fix the schema compatible string for the HiFive Unleashed board
dt-bindings: riscv: remove obsolete cpus.txt
RISC-V: Remove udivdi3
riscv: delay: use do_div() instead of __udivdi3()
dt-bindings: Update the riscv,isa string description
RISC-V: Remove per cpu clocksource
Pull x86 fixes from Thomas Gleixner:
"A few fixes for x86:
- Don't reset the carefully adjusted build flags for the purgatory
and remove the unwanted flags instead. The 'reset all' approach led
to build fails under certain circumstances.
- Unbreak CLANG build of the purgatory by avoiding the builtin
memcpy/memset implementations.
- Address missing prototype warnings by including the proper header
- Fix yet more fall-through issues"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/lib/cpu: Address missing prototypes warning
x86/purgatory: Use CFLAGS_REMOVE rather than reset KBUILD_CFLAGS
x86/purgatory: Do not use __builtin_memcpy and __builtin_memset
x86: mtrr: cyrix: Mark expected switch fall-through
x86/ptrace: Mark expected switch fall-through
A compilation -Wimplicit-fallthrough warning was enabled by commit
a035d552a9 ("Makefile: Globally enable fall-through warning").
Even though clang 10.0.0 does not currently support this warning without
a patch, clang does not support a value for this option at all.
Link: https://bugs.llvm.org/show_bug.cgi?id=39382
The gcc default for this warning is 3 so removing the =3 has no effect
for gcc and enables the warning for patched versions of clang.
Also remove the =3 from an existing use in a parisc Makefile:
arch/parisc/math-emu/Makefile
Signed-off-by: Joe Perches <joe@perches.com>
Reviewed-and-tested-by: Nathan Chancellor <natechancellor@gmail.com>
Cc: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'powerpc-5.3-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fix from Michael Ellerman:
"Just one fix, a revert of a commit that was meant to be a minor
improvement to some inline asm, but ended up having no real benefit
with GCC and broke booting 32-bit machines when using Clang.
Thanks to: Arnd Bergmann, Christophe Leroy, Nathan Chancellor, Nick
Desaulniers, Segher Boessenkool"
* tag 'powerpc-5.3-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
Revert "powerpc: slightly improve cache helpers"
Merge tag 'Wimplicit-fallthrough-5.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux
Pull fall-through fixes from Gustavo A. R. Silva:
"Mark more switch cases where we are expecting to fall through, fixing
fall-through warnings in arm, sparc64, mips, i386 and s390"
* tag 'Wimplicit-fallthrough-5.3-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux:
ARM: ep93xx: Mark expected switch fall-through
scsi: fas216: Mark expected switch fall-throughs
pcmcia: db1xxx_ss: Mark expected switch fall-throughs
video: fbdev: omapfb_main: Mark expected switch fall-throughs
watchdog: riowd: Mark expected switch fall-through
s390/net: Mark expected switch fall-throughs
crypto: ux500/crypt: Mark expected switch fall-throughs
watchdog: wdt977: Mark expected switch fall-through
watchdog: scx200_wdt: Mark expected switch fall-through
watchdog: Mark expected switch fall-throughs
ARM: signal: Mark expected switch fall-through
mfd: omap-usb-host: Mark expected switch fall-throughs
mfd: db8500-prcmu: Mark expected switch fall-throughs
ARM: OMAP: dma: Mark expected switch fall-throughs
ARM: alignment: Mark expected switch fall-throughs
ARM: tegra: Mark expected switch fall-through
ARM/hw_breakpoint: Mark expected switch fall-throughs
Mark switch cases where we are expecting to fall through.
Fix the following warnings (Building: arm-ep93xx_defconfig arm):
arch/arm/mach-ep93xx/crunch.c: In function 'crunch_do':
arch/arm/mach-ep93xx/crunch.c:46:3: warning: this statement may
fall through [-Wimplicit-fallthrough=]
memset(crunch_state, 0, sizeof(*crunch_state));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/arm/mach-ep93xx/crunch.c:53:2: note: here
case THREAD_NOTIFY_EXIT:
^~~~
Notice that, in this particular case, the code comment is
modified in accordance with what GCC is expecting to find.
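For reference, GCC's -Wimplicit-fallthrough heuristic is satisfied by a
comment placed right before the next label, along these lines (hypothetical
example, not the crunch.c code itself):

static int classify(int n)
{
        int score = 0;

        switch (n) {
        case 2:
                score += 10;
                /* fall through */
        case 1:
                score += 1;
                break;
        default:
                score = -1;
                break;
        }
        return score;
}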
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Mark switch cases where we are expecting to fall through.
This patch fixes the following warning:
arch/arm/kernel/signal.c: In function 'do_signal':
arch/arm/kernel/signal.c:598:12: warning: this statement may fall through [-Wimplicit-fallthrough=]
restart -= 2;
~~~~~~~~^~~~
arch/arm/kernel/signal.c:599:3: note: here
case -ERESTARTNOHAND:
^~~~
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Mark switch cases where we are expecting to fall through.
This patch fixes the following warnings:
arch/arm/plat-omap/dma.c: In function 'omap_set_dma_src_burst_mode':
arch/arm/plat-omap/dma.c:384:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (dma_omap2plus()) {
^
arch/arm/plat-omap/dma.c:393:2: note: here
case OMAP_DMA_DATA_BURST_16:
^~~~
arch/arm/plat-omap/dma.c:394:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (dma_omap2plus()) {
^
arch/arm/plat-omap/dma.c:402:2: note: here
default:
^~~~~~~
arch/arm/plat-omap/dma.c: In function 'omap_set_dma_dest_burst_mode':
arch/arm/plat-omap/dma.c:473:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (dma_omap2plus()) {
^
arch/arm/plat-omap/dma.c:481:2: note: here
default:
^~~~~~~
Notice that, in this particular case, the code comment is
modified in accordance with what GCC is expecting to find.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Mark switch cases where we are expecting to fall through.
This patch fixes the following warnings:
arch/arm/mm/alignment.c: In function 'thumb2arm':
arch/arm/mm/alignment.c:688:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
if ((tinstr & (3 << 9)) == 0x0400) {
^
arch/arm/mm/alignment.c:700:2: note: here
default:
^~~~~~~
arch/arm/mm/alignment.c: In function 'do_alignment_t32_to_handler':
arch/arm/mm/alignment.c:753:15: warning: this statement may fall through [-Wimplicit-fallthrough=]
poffset->un = (tinst2 & 0xff) << 2;
~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~
arch/arm/mm/alignment.c:754:2: note: here
case 0xe940:
^~~~
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Mark switch cases where we are expecting to fall through.
This patch fixes the following warning:
arch/arm/mach-tegra/reset.c: In function 'tegra_cpu_reset_handler_enable':
arch/arm/mach-tegra/reset.c:72:3: warning: this statement may fall through [-Wimplicit-fallthrough=]
tegra_cpu_reset_handler_set(reset_address);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/arm/mach-tegra/reset.c:74:2: note: here
case 0:
^~~~
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Mark switch cases where we are expecting to fall through.
This patch fixes the following warnings:
arch/arm/kernel/hw_breakpoint.c: In function 'hw_breakpoint_arch_parse':
arch/arm/kernel/hw_breakpoint.c:609:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (hw->ctrl.len == ARM_BREAKPOINT_LEN_2)
^
arch/arm/kernel/hw_breakpoint.c:611:2: note: here
case 3:
^~~~
arch/arm/kernel/hw_breakpoint.c:613:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (hw->ctrl.len == ARM_BREAKPOINT_LEN_1)
^
arch/arm/kernel/hw_breakpoint.c:615:2: note: here
default:
^~~~~~~
arch/arm/kernel/hw_breakpoint.c: In function 'arch_build_bp_info':
arch/arm/kernel/hw_breakpoint.c:544:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
if ((hw->ctrl.type != ARM_BREAKPOINT_EXECUTE)
^
arch/arm/kernel/hw_breakpoint.c:547:2: note: here
default:
^~~~~~~
In file included from include/linux/kernel.h:11,
from include/linux/list.h:9,
from include/linux/preempt.h:11,
from include/linux/hardirq.h:5,
from arch/arm/kernel/hw_breakpoint.c:16:
arch/arm/kernel/hw_breakpoint.c: In function 'hw_breakpoint_pending':
include/linux/compiler.h:78:22: warning: this statement may fall through [-Wimplicit-fallthrough=]
# define unlikely(x) __builtin_expect(!!(x), 0)
^~~~~~~~~~~~~~~~~~~~~~~~~~
include/asm-generic/bug.h:136:2: note: in expansion of macro 'unlikely'
unlikely(__ret_warn_on); \
^~~~~~~~
arch/arm/kernel/hw_breakpoint.c:863:3: note: in expansion of macro 'WARN'
WARN(1, "Asynchronous watchpoint exception taken. Debugging results may be unreliable\n");
^~~~
arch/arm/kernel/hw_breakpoint.c:864:2: note: here
case ARM_ENTRY_SYNC_WATCHPOINT:
^~~~
arch/arm/kernel/hw_breakpoint.c: In function 'core_has_os_save_restore':
arch/arm/kernel/hw_breakpoint.c:910:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (oslsr & ARM_OSLSR_OSLM0)
^
arch/arm/kernel/hw_breakpoint.c:912:2: note: here
default:
^~~~~~~
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm fixes from Paolo Bonzini:
"Bugfixes (arm and x86) and cleanups"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
selftests: kvm: Adding config fragments
KVM: selftests: Update gitignore file for latest changes
kvm: remove unnecessary PageReserved check
KVM: arm/arm64: vgic: Reevaluate level sensitive interrupts on enable
KVM: arm: Don't write junk to CP15 registers on reset
KVM: arm64: Don't write junk to sysregs on reset
KVM: arm/arm64: Sync ICH_VMCR_EL2 back when about to block
x86: kvm: remove useless calls to kvm_para_available
KVM: no need to check return value of debugfs_create functions
KVM: remove kvm_arch_has_vcpu_debugfs()
KVM: Fix leak vCPU's VMCS value into other pCPU
KVM: Check preempted_in_kernel for involuntary preemption
KVM: LAPIC: Don't need to wakeup vCPU twice afer timer fire
arm64: KVM: hyp: debug-sr: Mark expected switch fall-through
KVM: arm64: Update kvm_arm_exception_class and esr_class_str for new EC
KVM: arm: vgic-v3: Mark expected switch fall-through
arm64: KVM: regmap: Fix unexpected switch fall-through
KVM: arm/arm64: Introduce kvm_pmu_vcpu_init() to setup PMU counter index
Merge tag 's390-5.3-5' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 fixes from Vasily Gorbik:
- Map vdso also for statically linked binaries like all other
architectures.
- Fix no .bss usage compile-time check to account common objects with
the help of binutils size tool. Top level Makefile change acked-by
Masahiro.
- A fix to make perf happy with _etext symbol type.
- Fix dump_pagetables which is broken since p*d_offset implementation
change to comply with mm/gup.c expectations.
- Revert memory sharing for diag calls in protected virtualization,
since this is not required after all.
- Couple of other minor code cleanups.
* tag 's390-5.3-5' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/vdso: map vdso also for statically linked binaries
s390/build: use size command to perform empty .bss check
kbuild: add OBJSIZE variable for the size tool
s390: put _stext and _etext into .text section
s390/head64: cleanup unused labels
s390/unwind: remove stack recursion warning
s390/setup: adjust start_code of init_mm to _text
s390/mm: fix dump_pagetables top level page table walking
s390/protvirt: avoid memory sharing for diag 308 set/store
Merge tag 'kvmarm-fixes-for-5.3-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD
KVM/arm fixes for 5.3, take #2
- Fix our system register reset so that we stop writing
nonsensical values to them, and track which registers
get reset instead.
- Sync VMCR back from the GIC on WFI so that KVM has an
exact view of PMR.
- Reevaluate state of HW-mapped, level-triggered interrupts
on enable.
s390 does not map the vdso for statically linked binaries, assuming
that this doesn't make sense. See commit fc5243d98a ("[S390]
arch_setup_additional_pages arguments").
However with glibc commit d665367f596d ("linux: Enable vDSO for static
linking as default (BZ#19767)") and commit 5e855c895401 ("s390: Enable
VDSO for static linking") the vdso is also used for statically linked
binaries - if the kernel makes it available.
Therefore map the vdso always, just like all other architectures.
Reported-by: Stefan Liebler <stli@linux.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
At the moment, the way we reset CP15 registers is mildly insane:
We write junk to them, call the reset functions, and then check that
we have something else in them.
The "fun" thing is that this can happen while the guest is running
(PSCI, for example). If anything in KVM has to evaluate the state
of a CP15 register while junk is in there, bad things may happen.
Let's stop doing that. Instead, we track that we have called a
reset function for that register, and assume that the reset
function has done something.
In the end, the very need for this reset check is pretty dubious,
as it doesn't check everything (a lot of the CP15 regs live outside
of the cp15_regs[] array). It may well be axed in the near future.
Signed-off-by: Marc Zyngier <maz@kernel.org>
At the moment, the way we reset system registers is mildly insane:
We write junk to them, call the reset functions, and then check that
we have something else in them.
The "fun" thing is that this can happen while the guest is running
(PSCI, for example). If anything in KVM has to evaluate the state
of a system register while junk is in there, bad things may happen.
Let's stop doing that. Instead, we track that we have called a
reset function for that register, and assume that the reset
function has done something. This requires fixing a couple of
sysreg definitions in the trap table.
In the end, the very need for this reset check is pretty dubious,
as it doesn't check everything (a lot of the sysregs live outside of
the sys_regs[] array). It may well be axed in the near future.
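The tracking idea, very roughly (an illustrative sketch, not the actual
arm64 KVM code; struct sys_reg_desc lives in arch/arm64/kvm/sys_regs.h):

#include <linux/bitmap.h>
#include <linux/bug.h>
#include <linux/kvm_host.h>

/* Remember which vcpu registers were written by a reset hook, instead of
 * pre-filling them with junk and checking that the junk went away. */
static void reset_regs_sketch(struct kvm_vcpu *vcpu,
                              const struct sys_reg_desc *table,
                              unsigned int num)
{
        DECLARE_BITMAP(was_reset, NR_SYS_REGS) = { 0 };
        unsigned int i;

        for (i = 0; i < num; i++) {
                if (table[i].reset) {
                        table[i].reset(vcpu, &table[i]);
                        set_bit(table[i].reg, was_reset);
                }
        }

        for (i = 0; i < NR_SYS_REGS; i++)
                WARN(!test_bit(i, was_reset),
                     "Didn't reset vcpu system register %u\n", i);
}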
Tested-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
This should never have landed in the first place: it was added as part
of 64-bit divide support for 32-bit systems, but the kernel doesn't
allow this sort of division. I must have forgotten to remove it.
This patch removes the support. Since this routine only worked on
64-bit platforms but was only built on 32-bit platforms, it's
essentially just nonsense anyway.
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Link: https://lore.kernel.org/linux-riscv/nycvar.YSQ.7.76.1908061413360.19480@knanqh.ubzr/T/#t
Reported-by: Eric Lin <tesheng@andestech.com>
Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
In preparation for removing __udivdi3() from the RISC-V
architecture-specific files, convert its one user to use do_div().
This avoids breaking the RV32 build after __udivdi3() is removed.
This second version removes the assignment of the remainder to an
unused temporary variable. Thanks to Nicolas Pitre <nico@fluxnic.net>
for the suggestion.
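For reference, the do_div() pattern looks roughly like this (a sketch, not
the exact delay.c hunk; the timebase parameter is illustrative):

#include <linux/types.h>
#include <asm/div64.h>          /* do_div() */

static unsigned long usecs_to_cycles(unsigned long usecs, unsigned long timebase)
{
        u64 cycles = (u64)usecs * timebase;

        /* do_div() divides the u64 in place by a 32-bit divisor and returns
         * the remainder, so no __udivdi3 libcall is emitted on RV32. */
        do_div(cycles, 1000000);

        return cycles;
}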
Signed-off-by: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Nicolas Pitre <nico@fluxnic.net>
Currently the empty .bss checks do not pay attention to "common objects"
in object files, which eventually end up in the .bss section.
The "size" tool is part of binutils and since version 2.18 provides the
"--common" command line option, which allows accounting for "common objects"
sizes in the .bss section size. Utilize "size --common" to perform an
accurate check that the .bss section is unused. Besides that, the size tool
handles object files without a .bss section gracefully and doesn't require
an additional objdump run.
The linux kernel requires binutils 2.20 since 4.13.
Kbuild exports OBJSIZE to reference the right size tool.
Link: http://lkml.kernel.org/r/patch-2.thread-2257a1.git-2257a1c53d4a.your-ad-here.call-01565088755-ext-5120@work.hours
Reported-and-tested-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
When building with W=1, warnings about missing prototypes are emitted:
CC arch/x86/lib/cpu.o
arch/x86/lib/cpu.c:5:14: warning: no previous prototype for 'x86_family' [-Wmissing-prototypes]
5 | unsigned int x86_family(unsigned int sig)
| ^~~~~~~~~~
arch/x86/lib/cpu.c:18:14: warning: no previous prototype for 'x86_model' [-Wmissing-prototypes]
18 | unsigned int x86_model(unsigned int sig)
| ^~~~~~~~~
arch/x86/lib/cpu.c:33:14: warning: no previous prototype for 'x86_stepping' [-Wmissing-prototypes]
33 | unsigned int x86_stepping(unsigned int sig)
| ^~~~~~~~~~~~
Add the proper include file so the prototypes are there.
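For illustration, the shape of the fix (assuming the header carrying those
prototypes is <asm/cpu.h>):

/* arch/x86/lib/cpu.c */
#include <asm/cpu.h>            /* x86_family(), x86_model(), x86_stepping() */

unsigned int x86_family(unsigned int sig)
{
        unsigned int x86 = (sig >> 8) & 0xf;

        if (x86 == 0xf)
                x86 += (sig >> 20) & 0xff;

        return x86;
}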
Signed-off-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/42513.1565234837@turing-police
KBUILD_CFLAGS is very carefully built up in the top level Makefile,
particularly when cross compiling or using different build tools.
Resetting KBUILD_CFLAGS via := assignment is an antipattern.
The comment above the reset mentions that -pg is problematic. Other
Makefiles use `CFLAGS_REMOVE_file.o = $(CC_FLAGS_FTRACE)` when
CONFIG_FUNCTION_TRACER is set. Prefer that pattern to wiping out all of
the important KBUILD_CFLAGS and then manually having to re-add them. It also
seems that __stack_chk_fail references are generated when using
CONFIG_STACKPROTECTOR or CONFIG_STACKPROTECTOR_STRONG.
Fixes: 8fc5b4d412 ("purgatory: core purgatory functionality")
Reported-by: Vaibhav Rustagi <vaibhavrustagi@google.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Vaibhav Rustagi <vaibhavrustagi@google.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20190807221539.94583-2-ndesaulniers@google.com
Implementing memcpy and memset in terms of __builtin_memcpy and
__builtin_memset is problematic.
GCC at -O2 will replace calls to the builtins with calls to memcpy and
memset (but will generate an inline implementation at -Os). Clang will
replace the builtins with these calls regardless of optimization level.
$ llvm-objdump -dr arch/x86/purgatory/string.o | tail
0000000000000339 memcpy:
339: 48 b8 00 00 00 00 00 00 00 00 movabsq $0, %rax
000000000000033b: R_X86_64_64 memcpy
343: ff e0 jmpq *%rax
0000000000000345 memset:
345: 48 b8 00 00 00 00 00 00 00 00 movabsq $0, %rax
0000000000000347: R_X86_64_64 memset
34f: ff e0
Such code results in infinite recursion at runtime. This is observed
when doing kexec.
Instead, reuse an implementation from arch/x86/boot/compressed/string.c.
This requires implementing a stub function for warn(). Also, Clang may
lower memcmp calls that compare against 0 to bcmp calls, so add a small
definition of bcmp, too. See also: commit 5f074f3e19 ("lib/string.c: implement a basic bcmp")
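For a sense of what the freestanding routines look like (sketch only; the
actual patch reuses arch/x86/boot/compressed/string.c instead of
open-coding them):

#include <linux/types.h>        /* size_t */

/* Plain byte-wise implementations with no reliance on compiler builtins,
 * so the compiler cannot turn them back into calls to themselves. */
void *memcpy(void *dst, const void *src, size_t len)
{
        char *d = dst;
        const char *s = src;

        while (len--)
                *d++ = *s++;
        return dst;
}

void *memset(void *dst, int c, size_t len)
{
        char *d = dst;

        while (len--)
                *d++ = c;
        return dst;
}

/* Clang may turn memcmp-against-zero into bcmp; only zero/non-zero matters. */
int bcmp(const void *a, const void *b, size_t len)
{
        const unsigned char *pa = a, *pb = b;

        while (len--)
                if (*pa++ != *pb++)
                        return 1;
        return 0;
}

/* Stub for the warn() referenced by the shared code; purgatory stays silent. */
void warn(const char *msg)
{
}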
Fixes: 8fc5b4d412 ("purgatory: core purgatory functionality")
Reported-by: Vaibhav Rustagi <vaibhavrustagi@google.com>
Debugged-by: Vaibhav Rustagi <vaibhavrustagi@google.com>
Debugged-by: Manoj Gupta <manojgupta@google.com>
Suggested-by: Alistair Delva <adelva@google.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Vaibhav Rustagi <vaibhavrustagi@google.com>
Cc: stable@vger.kernel.org
Link: https://bugs.chromium.org/p/chromium/issues/detail?id=984056
Link: https://lkml.kernel.org/r/20190807221539.94583-1-ndesaulniers@google.com
Mark switch cases where we are expecting to fall through.
Fix the following warning (Building: i386_defconfig i386):
arch/x86/kernel/cpu/mtrr/cyrix.c:99:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20190805201712.GA19927@embeddedor
Mark switch cases where we are expecting to fall through.
Fix the following warning (Building: allnoconfig i386):
arch/x86/kernel/ptrace.c:202:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (unlikely(value == 0))
^
arch/x86/kernel/ptrace.c:206:2: note: here
default:
^~~~~~~
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lkml.kernel.org/r/20190805195654.GA17831@embeddedor
Merge tag 'mips_fixes_5.3_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
Pull MIPS fixes from Paul Burton:
"A few MIPS fixes for 5.3:
- Various switch fall through annotations to fixup warnings & errors
resulting from -Wimplicit-fallthrough.
- A fix for systems (at least jazz) using an i8253 PIT as clocksource
when it's not suitably configured.
- Set struct cacheinfo's cpu_map_populated field to true, indicating
that we filled in cache info detected from cop0 registers &
avoiding complaints about that info being (intentionally) missing
in devicetree"
* tag 'mips_fixes_5.3_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
MIPS: BCM63XX: Mark expected switch fall-through
MIPS: OProfile: Mark expected switch fall-throughs
MIPS: Annotate fall-through in Cavium Octeon code
MIPS: Annotate fall-through in kvm/emulate.c
mips: fix cacheinfo
MIPS: kernel: only use i8253 clocksource with periodic clockevent
Pull pti updates from Thomas Gleixner:
"The performance deterioration departement is not proud at all to
present yet another set of speculation fences to mitigate the next
chapter in the 'what could possibly go wrong' story.
The new vulnerability belongs to the Spectre class and affects GS
based data accesses and has therefore been dubbed 'Grand Schemozzle'
for secret communication purposes. It's officially listed as
CVE-2019-1125.
Conditional branches in the entry paths which contain a SWAPGS
instruction (interrupts and exceptions) can be mis-speculated which
results in speculative accesses with a wrong GS base.
This can happen on entry from user mode through a mis-speculated
branch which takes the entry from kernel mode path and therefore does
not execute the SWAPGS instruction. The following speculative accesses
are done with user GS base.
On entry from kernel mode the mis-speculated branch executes the
SWAPGS instruction in the entry from user mode path which has the same
effect that the following GS based accesses are done with user GS
base.
If there is a disclosure gadget available in these code paths the
mis-speculated data access can be leaked through the usual side
channels.
The entry from user mode issue affects all CPUs which have speculative
execution. The entry from kernel mode issue affects only Intel CPUs
which can speculate through SWAPGS. On CPUs from other vendors SWAPGS
has semantics which prevent that.
SMAP mitigates both problems but only when the CPU is not affected by
the Meltdown vulnerability.
The mitigation is to issue LFENCE instructions in the entry from
kernel mode path for all affected CPUs and on the affected Intel CPUs
also in the entry from user mode path unless PTI is enabled because
the CR3 write is serializing.
The fences are as usual enabled conditionally and can be completely
disabled on the kernel command line. The Spectre V1 documentation is
updated accordingly.
A big "Thank You!" goes to Josh for doing the heavy lifting for this
round of hardware misfeature 'repair'. Of course also "Thank You!" to
everybody else who contributed in one way or the other"
* 'x86/grand-schemozzle' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
Documentation: Add swapgs description to the Spectre v1 documentation
x86/speculation/swapgs: Exclude ATOMs from speculation through SWAPGS
x86/entry/64: Use JMP instead of JMPQ
x86/speculation: Enable Spectre v1 swapgs mitigations
x86/speculation: Prepare entry code for Spectre v1 swapgs mitigations
Perf relies on _etext and _stext symbols being one of 't', 'T', 'v' or
'V'. Put them into the .text section to guarantee that.
This also moves the padding to the page boundary inside .text, with the effect
that the .text section is now padded with nops rather than 0's, which
apparently was the initial intention of specifying the 0x0700 fill expression.
Reported-by: Thomas Richter <tmricht@linux.ibm.com>
Tested-by: Thomas Richter <tmricht@linux.ibm.com>
Suggested-by: Andreas Krebbel <krebbel@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Clean up labels in head64, some of which have not been used since the
beginning of git-recorded history.
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Remove the pointless 'stack recursion on stack type ...' warning, which
only confuses people. There is no way to make the backchain unwinder 100%
reliable. When a task is interrupted in between the stack frame allocation
and the backchain write instructions, the new stack frame's backchain pointer
is left uninitialized (there are also sometimes additional instructions
in between the stack frame allocation and backchain write instructions due
to gcc shrink-wrapping). In an attempt to unwind such a stack the unwinder
would still try to use that invalid backchain value and perform all kinds
of sanity checks on it to make sure we are not pointed out of the stack. In
some cases that invalid backchain value would be 0 and we would falsely
treat the next stack frame as pt_regs, and gprs[15] in those pt_regs
might again happen to point at some address within the task's stack.
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
After some investigation it doesn't look like init_mm fields
start_code/end_code are used anywhere besides potentially in dump_mm for
debugging purposes. Originally the value of 0 for start_code reflected
the presence of lowcore and early boot code. But with kaslr in place the
start_code/end_code range should not span over memory unoccupied by the
code segment. So, adjust init_mm's start_code to point at the beginning
of the code segment, like other architectures do.
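Concretely, this boils down to something like the following during early
setup (a sketch; the surrounding function name is illustrative):

#include <linux/init.h>
#include <linux/mm_types.h>
#include <asm/sections.h>

static void __init adjust_init_mm(void)
{
        /* start_code was 0 (lowcore plus early boot code); point it at the
         * real beginning of the text segment like other architectures do. */
        init_mm.start_code = (unsigned long)_text;
}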
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Since commit d1874a0c28 ("s390/mm: make the pxd_offset functions more
robust") the behaviour of p4d_offset, pud_offset and pmd_offset has been
changed so that they cannot be used to iterate through the top level page
table, because the index for the top level page table is now calculated
in pgd_offset. To avoid dumping the very first region/segment top level
table entry 2048 times, simply iterate the entry pointer like it is already
done in other page walking cases.
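The general shape of the fix is to step the entry pointer itself rather
than recomputing an index for the top level table (hedged sketch; the
helper below is hypothetical):

#include <asm/pgtable.h>

/* Hypothetical helper: decode and print one top level table entry. */
static void note_top_level_entry(pgd_t *pgd, unsigned long addr)
{
}

static void walk_top_level(unsigned long addr, unsigned long max_addr)
{
        pgd_t *pgd = pgd_offset_k(addr);        /* index computed once, here */
        int i;

        for (i = 0; i < PTRS_PER_PGD && addr < max_addr; i++, pgd++) {
                note_top_level_entry(pgd, addr);
                addr += PGDIR_SIZE;
        }
}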
Fixes: d1874a0c28 ("s390/mm: make the pxd_offset functions more robust")
Reported-by: Ilya Leoshkevich <iii@linux.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
This reverts commit db9492cef4 ("s390/protvirt: add memory sharing for
diag 308 set/store") which, due to an ultravisor implementation change,
is not needed after all.
Fixes: db9492cef4 ("s390/protvirt: add memory sharing for diag 308 set/store")
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Mark switch cases where we are expecting to fall through.
This patch fixes the following warning (Building: bcm63xx_defconfig mips):
arch/mips/pci/ops-bcm63xx.c: In function ‘bcm63xx_pcie_can_access’:
arch/mips/pci/ops-bcm63xx.c:474:6: warning: this statement may fall through [-Wimplicit-fallthrough=]
if (PCI_SLOT(devfn) == 0)
^
arch/mips/pci/ops-bcm63xx.c:477:2: note: here
default:
^~~~~~~
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: bcm-kernel-feedback-list@broadcom.com
Cc: linux-mips@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Most code in arch/x86/kernel/kvm.c is called through x86_hyper_kvm, and thus only
runs if KVM has been detected. There is no need to check again for the CPUID
base.
Cc: Sergio Lopez <slp@redhat.com>
Cc: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.
Also, when doing this, change kvm_arch_create_vcpu_debugfs() to return
void instead of an integer, as we should not care at all about if this
function actually does anything or not.
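In practice that means debugfs calls are simply issued and their results
ignored, along these lines (names here are illustrative):

#include <linux/debugfs.h>
#include <linux/kvm_host.h>

static const struct file_operations example_stat_fops;  /* hypothetical fops */

/* No return-value checks and no error paths: if debugfs is unavailable the
 * call is a harmless no-op and the rest of the code behaves the same. */
static void create_vcpu_debugfs_sketch(struct kvm_vcpu *vcpu, struct dentry *dir)
{
        debugfs_create_file("example-stat", 0444, dir, vcpu,
                            &example_stat_fops);
}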
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <x86@kernel.org>
Cc: <kvm@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There is no need for this function as all arches have to implement
kvm_arch_create_vcpu_debugfs() no matter what. A #define symbol
lets us actually simplify the code.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
After commit d73eb57b80 (KVM: Boost vCPUs that are delivering interrupts), a
five-year-old bug is exposed. Running the ebizzy benchmark in three 80-vCPU
VMs on one 80-pCPU Skylake server results in a lot of rcu_sched stall
warnings splatting in the VMs after stress testing:
INFO: rcu_sched detected stalls on CPUs/tasks: { 4 41 57 62 77} (detected by 15, t=60004 jiffies, g=899, c=898, q=15073)
Call Trace:
flush_tlb_mm_range+0x68/0x140
tlb_flush_mmu.part.75+0x37/0xe0
tlb_finish_mmu+0x55/0x60
zap_page_range+0x142/0x190
SyS_madvise+0x3cd/0x9c0
system_call_fastpath+0x1c/0x21
swait_active() remains true before finish_swait() is called in
kvm_vcpu_block(). Since voluntarily preempted vCPUs are taken into account
by the kvm_vcpu_on_spin() loop, this greatly increases the probability that
the condition kvm_arch_vcpu_runnable(vcpu) is checked and found true. When
APICv is enabled, the yield-candidate vCPU's VMCS RVI field then leaks (via
vmx_sync_pir_to_irr()) into the spinning-on-a-taken-lock vCPU's current
VMCS.
This patch fixes it by conservatively checking a subset of events.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Marc Zyngier <Marc.Zyngier@arm.com>
Cc: stable@vger.kernel.org
Fixes: 98f4a1467 (KVM: add kvm_arch_vcpu_runnable() test to kvm_vcpu_on_spin() loop)
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
kvm_set_pending_timer() takes care of waking up the sleeping vCPU which
has a pending timer, so there is no need to check this in apic_timer_expired() again.
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'powerpc-5.3-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
"Some more powerpc fixes for 5.3:
- Wire up the new clone3 syscall.
- A fix for the PAPR SCM nvdimm driver, to fix a crash when firmware
gives us a device that's attached to a non-online NUMA node.
- A fix for a boot failure on 32-bit with KASAN enabled.
- Three fixes for implicit fall through warnings, some of which are
errors for us due to -Werror.
Thanks to: Aneesh Kumar K.V, Christophe Leroy, Kees Cook, Santosh
Sivaraj, Stephen Rothwell"
* tag 'powerpc-5.3-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/kasan: fix early boot failure on PPC32
drivers/macintosh/smu.c: Mark expected switch fall-through
powerpc/spe: Mark expected switch fall-throughs
powerpc/nvdimm: Pick nearby online node if the device node is not online
powerpc/kvm: Fall through switch case explicitly
powerpc: Wire up clone3 syscall
Merge tag 'xtensa-20190803' of git://github.com/jcmvbkbc/linux-xtensa
Pull Xtensa fix from Max Filippov:
"Fix build for xtensa cores with coprocessors that was broken by
entry/return abstraction patch"
* tag 'xtensa-20190803' of git://github.com/jcmvbkbc/linux-xtensa:
xtensa: fix build for cores with coprocessors