Commit Graph

Waiman Long
0fa809ca7f locking/pvqspinlock: Extend node size when pvqspinlock is configured
The qspinlock code supports up to 4 levels of slowpath nesting using
four per-CPU mcs_spinlock structures. For 64-bit architectures, they
fit nicely in one 64-byte cacheline.

The para-virtualized (PV) qspinlock code, however, needs to store more
information in the per-CPU node structure than there is space for. It
uses a trick: a second cacheline holds the extra information. So the PV
qspinlock code needs to access two extra cachelines for its information,
whereas the native qspinlock code only needs one extra cacheline.

Freshly added counter profiling of the qspinlock code, however, revealed
that it was very rare to use more than two levels of slowpath nesting.
So it doesn't make sense to penalize PV qspinlock code in order to have
four mcs_spinlock structures in the same cacheline to optimize for a case
in the native qspinlock code that rarely happens.

Extend the per-CPU node structure with two more long words to hold the
extra data when PV qspinlocks are configured.

As a result, the PV qspinlock code will enjoy the same benefit of using
just one extra cacheline like the native counterpart, for most cases.
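
The per-CPU node then looks roughly like this (a sketch; names follow
the qspinlock code, the exact patch may differ in detail):

  struct qnode {
  	struct mcs_spinlock mcs;
  #ifdef CONFIG_PARAVIRT_SPINLOCKS
  	long reserved[2];	/* room for the extra PV node state */
  #endif
  };

  /* 4 nesting levels per CPU, as before */
  static DEFINE_PER_CPU_ALIGNED(struct qnode, mcs_nodes[MAX_NODES]);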

[ mingo: Minor changelog edits. ]

Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1539697507-28084-2-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-17 08:37:32 +02:00
Waiman Long
1222109a53 locking/qspinlock_stat: Count instances of nested lock slowpaths
The queued spinlock code supports up to 4 levels of lock slowpath nesting -
user context, soft IRQ, hard IRQ and NMI. However, we are not sure how
often such nesting happens.

So add 3 more per-CPU stat counters to track the number of instances where
the nesting index goes to 1, 2 and 3 respectively.

On a dual-socket 64-core 128-thread Zen server, the following were the
new stat counter values under different circumstances:

         State                         slowpath   index1   index2   index3
         -----                         --------   ------   ------   ------
  After bootup                         1,012,150       82        0        0
  After parallel build + perf-top    125,195,009       82        0        0

So the chance of having more than 2 levels of nesting is extremely low.
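
The counting itself is cheap; once the nesting index has been computed in
the slowpath, a single conditional per-CPU increment suffices, roughly of
this shape (a sketch following the qspinlock stat helpers):

  	idx = node->count++;

  	/*
  	 * Keep counts of non-zero index values: idx == 1, 2 or 3
  	 * maps to qstat_lock_idx1..qstat_lock_idx3.
  	 */
  	qstat_inc(qstat_lock_idx1 + idx - 1, idx);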

[ mingo: Minor changelog edits. ]

Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1539697507-28084-1-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-17 08:37:31 +02:00
Peter Zijlstra
7aa54be297 locking/qspinlock, x86: Provide liveness guarantee
On x86 we cannot do fetch_or() with a single instruction and thus end up
using a cmpxchg loop, which reduces determinism. Replace the fetch_or()
with a composite operation: tas-pending + load.

Using two instructions of course opens a window we previously did not
have. Consider the scenario:

	CPU0		CPU1		CPU2

 1)	lock
	  trylock -> (0,0,1)

 2)			lock
			  trylock /* fail */

 3)	unlock -> (0,0,0)

 4)					lock
					  trylock -> (0,0,1)

 5)			  tas-pending -> (0,1,1)
			  load-val <- (0,1,0) from 3

 6)			  clear-pending-set-locked -> (0,0,1)

			  FAIL: _2_ owners

where 5) is our new composite operation. When we consider each part of
the qspinlock state as a separate variable (as we can when
_Q_PENDING_BITS == 8) then the above is entirely possible, because
tas-pending will only RmW the pending byte, so the later load is able
to observe prior tail and lock state (but not earlier than its own
trylock, which operates on the whole word, due to coherence).

To avoid this we need 2 things:

 - the load must come after the tas-pending (obviously, otherwise it
   can trivially observe prior state).

 - the tas-pending must be a full-word RmW instruction (it cannot be an
   XCHGB, for example), such that we cannot observe other state prior to
   setting pending.

On x86 we can realize this by using "LOCK BTS m32, r32" for
tas-pending followed by a regular load.
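
The x86 composite ends up roughly as below (a sketch of the shape of the
patch; the real version is built on the GEN_BINARY_RMWcc() machinery):

  static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lock)
  {
  	u32 val = 0;

  	/* full-word RmW: LOCK BTS on &lock->val, test-and-set pending */
  	if (GEN_BINARY_RMWcc(LOCK_PREFIX "btsl", lock->val.counter, c,
  			     "I", _Q_PENDING_OFFSET))
  		val |= _Q_PENDING_VAL;

  	/* regular load: cannot observe state older than our own BTS */
  	val |= atomic_read(&lock->val) & ~_Q_PENDING_MASK;

  	return val;
  }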

Note that observing later state is not a problem:

 - if we fail to observe a later unlock, we'll simply spin-wait for
   that store to become visible.

 - if we observe a later xchg_tail(), there is no difference from that
   xchg_tail() having taken place before the tas-pending.

Suggested-by: Will Deacon <will.deacon@arm.com>
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: andrea.parri@amarulasolutions.com
Cc: longman@redhat.com
Fixes: 59fb586b4a ("locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath")
Link: https://lkml.kernel.org/r/20181003130957.183726335@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16 17:33:54 +02:00
Peter Zijlstra
288e4521f0 x86/asm: 'Simplify' GEN_*_RMWcc() macros
Currently the GEN_*_RMWcc() macros include a return statement, which
pretty much mandates we directly wrap them in a (inline) function.

Macros with return statements are tricky and, as per the above, limit
use, so remove the return statement and make them
statement-expressions. This allows them to be used more widely.

Also, shuffle the arguments a bit. Place the @cc argument as 3rd, this
makes it consistent between UNARY and BINARY, but more importantly, it
makes the @arg0 argument last.

Since the @arg0 argument is now last, we can do CPP trickery and make
it an optional argument, simplifying the users; 17 out of 18
occurrences do not need this argument.

Finally, change to asm symbolic names, instead of the numeric ordering
of operands, which allows us to get rid of __BINARY_RMWcc_ARG and get
cleaner code overall.
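
As a toy illustration (not the kernel's actual RMWcc bodies), the change
is from a return-statement macro to a GNU C statement expression:

  /* old style: embeds 'return', so it can only form a function body */
  #define OLD_DEC_AND_TEST(var)						\
  do {									\
  	bool c;								\
  	asm volatile ("lock decl %0"					\
  		      : "+m" (var), "=@ccz" (c) : : "memory");		\
  	return c;							\
  } while (0)

  /* new style: a statement expression is a plain expression and can
   * be used anywhere, e.g. directly inside an if () condition */
  #define DEC_AND_TEST(var) ({						\
  	bool c;								\
  	asm volatile ("lock decl %0"					\
  		      : "+m" (var), "=@ccz" (c) : : "memory");		\
  	c;								\
  })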

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: JBeulich@suse.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20181003130957.108960094@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16 17:33:54 +02:00
Peter Zijlstra
756b1df4c2 locking/qspinlock: Rework some comments
While working my way through the code again, I felt the comments could
use some help.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: andrea.parri@amarulasolutions.com
Cc: longman@redhat.com
Link: https://lkml.kernel.org/r/20181003130257.156322446@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16 17:33:54 +02:00
Peter Zijlstra
53bf57fab7 locking/qspinlock: Re-order code
Flip the branch condition after atomic_fetch_or_acquire(_Q_PENDING_VAL)
such that we lose the indent. This also results in a more natural code
flow IMO.
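
In outline (simplified, not the verbatim hunk):

  	/* before: the common case sits inside an indented block */
  	if (!(val & ~_Q_LOCKED_MASK)) {
  		/* we're pending; wait for the owner to go away */
  		if (val & _Q_LOCKED_MASK)
  			smp_cond_load_acquire(&lock->val.counter,
  					      !(VAL & _Q_LOCKED_MASK));
  		clear_pending_set_locked(lock);
  		return;
  	}

  	/* after: the uncommon case bails out early, common path unindented */
  	if (val & ~_Q_LOCKED_MASK)
  		goto queue;

  	if (val & _Q_LOCKED_MASK)
  		smp_cond_load_acquire(&lock->val.counter,
  				      !(VAL & _Q_LOCKED_MASK));
  	clear_pending_set_locked(lock);
  	return;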

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: andrea.parri@amarulasolutions.com
Cc: longman@redhat.com
Link: https://lkml.kernel.org/r/20181003130257.156322446@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16 17:33:53 +02:00
Ingo Molnar
ec57e2f0ac Merge branch 'x86/build' into locking/core, to pick up dependent patches and unify jump-label work
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16 17:30:11 +02:00
Waiman Long
4766ab5677 locking/lockdep: Remove duplicated 'lock_class_ops' percpu array
Remove the duplicated 'lock_class_ops' percpu array that is not used
anywhere.

Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: 8ca2b56cd7 ("locking/lockdep: Make class->ops a percpu counter and move it under CONFIG_DEBUG_LOCKDEP=y")
Link: http://lkml.kernel.org/r/1539380547-16726-1-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16 08:21:10 +02:00
Adam Borowski
72a9c67363 x86/defconfig: Enable CONFIG_USB_XHCI_HCD=y
A spanking new machine I just got has all but one USB port wired as 3.0.
Booting defconfig resulted in no keyboard or mouse, which was pretty
uncool.  Let's enable that -- USB3 is ubiquitous rather than an oddity.
As 'y' not 'm' -- recovering from initrd problems needs a keyboard.

Also add it to the 32-bit defconfig.
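
The change is a single line in each of arch/x86/configs/x86_64_defconfig
and i386_defconfig:

  CONFIG_USB_XHCI_HCD=y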

Signed-off-by: Adam Borowski <kilobyte@angband.pl>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-usb@vger.kernel.org
Link: http://lkml.kernel.org/r/20181009062803.4332-1-kilobyte@angband.pl
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-10 08:29:51 +02:00
Lance Roy
4de1a293a0 futex: Replace spin_is_locked() with lockdep
lockdep_assert_held() is better suited for checking locking requirements,
since it won't get confused when the lock is held by some other task. This
is also a step towards possibly removing spin_is_locked().
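
Illustratively (lock name generic, not the exact futex hunk):

  	/* before: passes whenever *somebody* holds the lock */
  	WARN_ON(!spin_is_locked(lock));

  	/* after: checks (under lockdep) that *we* hold the lock */
  	lockdep_assert_held(lock);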

Signed-off-by: Lance Roy <ldr709@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "Paul E. McKenney" <paulmck@linux.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Darren Hart <dvhart@infradead.org>
Link: https://lkml.kernel.org/r/20181003053902.6910-12-ldr709@gmail.com
2018-10-09 13:19:28 +02:00
Waiman Long
8ca2b56cd7 locking/lockdep: Make class->ops a percpu counter and move it under CONFIG_DEBUG_LOCKDEP=y
A sizable portion of the CPU cycles spent in __lock_acquire() is used
up by the atomic increment of the class->ops stat counter. By taking it out
from the lock_class structure and changing it to a per-cpu per-lock-class
counter, we can reduce the amount of cacheline contention on the class
structure when multiple CPUs are trying to acquire locks of the same
class simultaneously.

To limit the increase in memory consumption because of the percpu nature
of that counter, it is now put back under the CONFIG_DEBUG_LOCKDEP
config option. So the memory consumption increase will only occur if
CONFIG_DEBUG_LOCKDEP is defined. The lock_class structure, however,
is reduced in size by 16 bytes on 64-bit archs after ops removal and
a minor restructuring of the fields.

This patch also fixes a bug in the increment code as the counter is of
the 'unsigned long' type, but atomic_inc() was used to increment it.
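
The counter's shape, simplified (the real definitions live in the
lockdep internals, and the names here are approximate):

  #ifdef CONFIG_DEBUG_LOCKDEP
  DECLARE_PER_CPU(unsigned long, lock_class_ops[MAX_LOCKDEP_KEYS]);

  static inline void debug_class_ops_inc(struct lock_class *class)
  {
  	/* percpu increment: no cross-CPU cacheline contention */
  	this_cpu_inc(lock_class_ops[class - lock_classes]);
  }
  #else
  static inline void debug_class_ops_inc(struct lock_class *class) { }
  #endif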

Signed-off-by: Waiman Long <longman@redhat.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/d66681f3-8781-9793-1dcf-2436a284550b@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-09 09:56:33 +02:00
Nadav Amit
5bdcd510c2 x86/jump-labels: Macrofy inline assembly code to work around GCC inlining bugs
As described in:

  77b0bf55bc: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")

GCC's inlining heuristics are broken with common asm() patterns used in
kernel code, resulting in the effective disabling of inlining.

The workaround is to set an assembly macro and call it from the inline
assembly block - which is also a minor cleanup for the jump-label code.

As a result the code size is slightly increased, but inlining decisions
are better:

      text     data     bss      dec     hex  filename
  18163528 10226300 2957312 31347140 1de51c4  ./vmlinux before
  18163608 10227348 2957312 31348268 1de562c  ./vmlinux after (+1128)

And functions such as intel_pstate_adjust_policy_max(),
kvm_cpu_accept_dm_intr(), kvm_register_readl() are inlined.

Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20181005202718.229565-4-namit@vmware.com
Link: https://lore.kernel.org/lkml/20181003213100.189959-11-namit@vmware.com/T/#u
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-06 15:52:17 +02:00
Nadav Amit
d5a581d84a x86/cpufeature: Macrofy inline assembly code to work around GCC inlining bugs
As described in:

  77b0bf55bc: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")

GCC's inlining heuristics are broken with common asm() patterns used in
kernel code, resulting in the effective disabling of inlining.

The workaround is to set an assembly macro and call it from the inline
assembly block - which is pretty pointless indirection in the static_cpu_has()
case, but is worth it to improve overall inlining quality.

The patch slightly increases the kernel size:

      text     data     bss      dec     hex  filename
  18162879 10226256 2957312 31346447 1de4f0f  ./vmlinux before
  18163528 10226300 2957312 31347140 1de51c4  ./vmlinux after (+693)

And enables the inlining of functions such as free_ldt_pgtables().

Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20181005202718.229565-3-namit@vmware.com
Link: https://lore.kernel.org/lkml/20181003213100.189959-10-namit@vmware.com/T/#u
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-06 15:52:16 +02:00
Nadav Amit
0474d5d9d2 x86/extable: Macrofy inline assembly code to work around GCC inlining bugs
As described in:

  77b0bf55bc: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")

GCC's inlining heuristics are broken with common asm() patterns used in
kernel code, resulting in the effective disabling of inlining.

The workaround is to set an assembly macro and call it from the inline
assembly block - which is also a minor cleanup for the exception table
code.

Text size goes up a bit:

      text     data     bss      dec     hex  filename
  18162555 10226288 2957312 31346155 1de4deb  ./vmlinux before
  18162879 10226256 2957312 31346447 1de4f0f  ./vmlinux after (+292)

But this allows the inlining of functions such as nested_vmx_exit_reflected(),
set_segment_reg(), __copy_xstate_to_user() which is a net benefit.

Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20181005202718.229565-2-namit@vmware.com
Link: https://lore.kernel.org/lkml/20181003213100.189959-9-namit@vmware.com/T/#u
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-06 15:52:15 +02:00
Ingo Molnar
02678a5823 Merge branch 'core/core' into x86/build, to prevent conflicts
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-06 15:51:56 +02:00
Ingo Molnar
bce6824cc8 Merge branch 'x86/core' into x86/build, to avoid conflicts
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-05 11:27:23 +02:00
Nadav Amit
494b5168f2 x86/paravirt: Work around GCC inlining bugs when compiling paravirt ops
As described in:

  77b0bf55bc: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")

GCC's inlining heuristics are broken with common asm() patterns used in
kernel code, resulting in the effective disabling of inlining.

The workaround is to set an assembly macro and call it from the inline
assembly block. As a result GCC considers the inline assembly block as
a single instruction. (Which it isn't, but that's the best we can get.)

In this patch we wrap the paravirt call section tricks in a macro,
to hide it from GCC.

The effect of the patch is more aggressive inlining, which also
causes a size increase of the kernel.

      text     data     bss      dec     hex  filename
  18147336 10226688 2957312 31331336 1de1408  ./vmlinux before
  18162555 10226288 2957312 31346155 1de4deb  ./vmlinux after (+14819)

The number of static text symbols (non-inlined functions) goes down:

  Before: 40053
  After:  39942 (-111)

[ mingo: Rewrote the changelog. ]

Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: virtualization@lists.linux-foundation.org
Link: http://lkml.kernel.org/r/20181003213100.189959-8-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04 11:25:00 +02:00
Nadav Amit
f81f8ad56f x86/bug: Macrofy the BUG table section handling, to work around GCC inlining bugs
As described in:

  77b0bf55bc: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")

GCC's inlining heuristics are broken with common asm() patterns used in
kernel code, resulting in the effective disabling of inlining.

The workaround is to set an assembly macro and call it from the inline
assembly block. As a result GCC considers the inline assembly block as
a single instruction. (Which it isn't, but that's the best we can get.)

This patch increases the kernel size:

      text     data     bss      dec     hex  filename
  18146889 10225380 2957312 31329581 1de0d2d  ./vmlinux before
  18147336 10226688 2957312 31331336 1de1408  ./vmlinux after (+1755)

But enables more aggressive inlining (and probably better branch decisions).

The number of static text symbols in vmlinux is much lower:

 Before: 40218
 After:  40053 (-165)

The assembly code gets harder to read due to the extra macro layer.

[ mingo: Rewrote the changelog. ]

Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20181003213100.189959-7-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04 11:25:00 +02:00
Nadav Amit
77f48ec28e x86/alternatives: Macrofy lock prefixes to work around GCC inlining bugs
As described in:

  77b0bf55bc: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")

GCC's inlining heuristics are broken with common asm() patterns used in
kernel code, resulting in the effective disabling of inlining.

The workaround is to set an assembly macro and call it from the inline
assembly block - i.e. to macrify the affected block.

As a result GCC considers the inline assembly block as a single instruction.

This patch handles the LOCK prefix, allowing more aggressive inlining:

      text     data     bss      dec     hex  filename
  18140140 10225284 2957312 31322736 1ddf270  ./vmlinux before
  18146889 10225380 2957312 31329581 1de0d2d  ./vmlinux after (+6845)

This is the reduction in non-inlined functions:

  Before: 40286
  After:  40218 (-68)
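
For this case the macro pair looks roughly like this (the real version
guards the .smp_locks bookkeeping with CONFIG_SMP):

  .macro LOCK_PREFIX_HERE
  	.pushsection .smp_locks,"a"
  	.balign 4
  	.long 671f - .		# offset, for runtime UP patching
  	.popsection
  671:
  .endm

  .macro LOCK_PREFIX insn:vararg
  	LOCK_PREFIX_HERE
  	lock \insn
  .endm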

Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20181003213100.189959-6-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04 11:24:59 +02:00
Nadav Amit
9e1725b410 x86/refcount: Work around GCC inlining bug
As described in:

  77b0bf55bc: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")

GCC's inlining heuristics are broken with common asm() patterns used in
kernel code, resulting in the effective disabling of inlining.

The workaround is to set an assembly macro and call it from the inline
assembly block. As a result GCC considers the inline assembly block as
a single instruction. (Which it isn't, but that's the best we can get.)

This patch allows GCC to inline simple functions such as __get_seccomp_filter().

To no-one's surprise the result is that GCC performs more aggressive (read: correct)
inlining decisions in these scenarios, which reduces the kernel size and presumably
also speeds it up:

      text     data     bss      dec     hex  filename
  18140970 10225412 2957312 31323694 1ddf62e  ./vmlinux before
  18140140 10225284 2957312 31322736 1ddf270  ./vmlinux after (-958)

16 fewer static text symbols:

   Before: 40302
   After:  40286 (-16)

These got inlined instead.

Functions such as kref_get(), free_user(), fuse_file_get() now get inlined. Hurray!

[ mingo: Rewrote the changelog. ]

Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20181003213100.189959-5-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04 11:24:59 +02:00
Nadav Amit
c06c4d8090 x86/objtool: Use asm macros to work around GCC inlining bugs
As described in:

  77b0bf55bc: ("kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs")

GCC's inlining heuristics are broken with common asm() patterns used in
kernel code, resulting in the effective disabling of inlining.

In the case of objtool the resulting borkage can be significant, since all the
annotations of objtool are discarded during linkage and never inlined,
yet GCC bogusly considers most functions affected by objtool annotations
as 'too large'.

The workaround is to set an assembly macro and call it from the inline
assembly block. As a result GCC considers the inline assembly block as
a single instruction. (Which it isn't, but that's the best we can get.)

This increases the kernel size slightly:

      text     data     bss      dec     hex filename
  18140829 10224724 2957312 31322865 1ddf2f1 ./vmlinux before
  18140970 10225412 2957312 31323694 1ddf62e ./vmlinux after (+829)

The number of static text symbols (i.e. non-inlined functions) is reduced:

  Before:  40321
  After:   40302 (-19)

[ mingo: Rewrote the changelog. ]

Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Christopher Li <sparse@chrisli.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-sparse@vger.kernel.org
Link: http://lkml.kernel.org/r/20181003213100.189959-4-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04 11:24:58 +02:00
Nadav Amit
77b0bf55bc kbuild/Makefile: Prepare for using macros in inline assembly code to work around asm() related GCC inlining bugs
Using macros in inline assembly allows us to work around bugs
in GCC's inlining decisions.

Compile macros.S and use it to assemble all C files.
Currently only x86 will use it.

Background:

The inlining pass of GCC doesn't include an assembler, so it's not aware
of basic properties of the generated code, such as its size in bytes,
or that there are such things as discontinuous blocks of code and data
due to the newfangled linker feature called 'sections' ...

Instead GCC uses a lazy and fragile heuristic: it does a linear count of
certain syntactic and whitespace elements in inlined assembly block source
code, such as a count of new-lines and semicolons (!), as a poor substitute
for "code size and complexity".

Unsurprisingly this heuristic falls over and breaks its neck with certain
common types of kernel code that use inline assembly, such as the frequent
practice of putting useful information into alternative sections.

As a result of this fresh, 20+ year old GCC bug, GCC's inlining decisions
are effectively disabled for inlined functions that make use of such asm()
blocks, because GCC thinks those sections of code are "large" - when in
reality they often result in just a very low number of machine
instructions.

This absolute lack of inlining prowess when GCC comes across such asm()
blocks both increases generated kernel code size and causes performance
overhead, which is particularly noticeable on paravirt kernels, which make
frequent use of these inlining facilities in an attempt to stay out of the
way when running on baremetal hardware.

Instead of fixing the compiler we use a workaround: we set an assembly macro
and call it from the inlined assembly block. As a result GCC considers the
inline assembly block as a single instruction. (Which it often isn't but I digress.)
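
Schematically, with made-up names, the pattern is (the macro file is fed
to the assembler for every C file, via something like
-Wa,arch/x86/kernel/macros.s in KBUILD_CFLAGS):

  # arch/x86/kernel/macros.S: in scope for the asm() of every C file
  .macro ANNOTATE_THING from:req
  	.pushsection .discard.thing
  	.long \from
  	.popsection
  .endm

  /* C side: GCC's newline/semicolon counting now "sees" one line */
  #define annotate_thing(v) \
  	asm volatile ("ANNOTATE_THING from=%c0" : : "i" (v))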

This uglifies and bloats the source code - for example just the refcount
related changes have this impact:

 Makefile                 |    9 +++++++--
 arch/x86/Makefile        |    7 +++++++
 arch/x86/kernel/macros.S |    7 +++++++
 scripts/Kbuild.include   |    4 +++-
 scripts/mod/Makefile     |    2 ++
 5 files changed, 26 insertions(+), 3 deletions(-)

Yay readability and maintainability, it's not like assembly code is hard to read
and maintain ...

We also hope that GCC will eventually get fixed, but we are not holding
our breath for that. Yet we are optimistic, it might still happen, any decade now.

[ mingo: Wrote new changelog describing the background. ]

Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michal Marek <michal.lkml@markovi.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kbuild@vger.kernel.org
Link: http://lkml.kernel.org/r/20181003213100.189959-3-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04 10:57:09 +02:00
Nadav Amit
35e76b99dd kbuild/arch/xtensa: Define LINKER_SCRIPT for the linker script
Define LINKER_SCRIPT when building the linker script, as is done in
other architectures. This is required because upcoming Makefile changes
would otherwise break things.

Signed-off-by: Nadav Amit <namit@vmware.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Chris Zankel <chris@zankel.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Michal Marek <michal.lkml@markovi.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-xtensa@linux-xtensa.org
Link: http://lkml.kernel.org/r/20181003213100.189959-2-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04 10:05:38 +02:00
Ingo Molnar
c0554d2d3d Merge branch 'linus' into x86/core, to pick up fixes
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-04 08:23:03 +02:00
Waiman Long
ce52a18db4 locking/lockdep: Add a faster path in __lock_release()
When __lock_release() is called, the most likely unlock scenario is
on the innermost lock in the chain.  In this case, we can skip some of
the checks and provide a faster path to completion.
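
A sketch of the idea (not the verbatim hunk):

  	depth = curr->lockdep_depth;
  	/* the innermost held lock is the most likely match: check it
  	 * first and skip the backwards scan over held_locks[] */
  	hlock = curr->held_locks + depth - 1;
  	if (match_held_lock(hlock, lock))
  		goto found_it;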

Signed-off-by: Waiman Long <longman@redhat.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1538511560-10090-4-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-03 08:46:03 +02:00
Waiman Long
8ee1086247 locking/lockdep: Eliminate redundant IRQs check in __lock_acquire()
The static __lock_acquire() function has only two callers:

 1) lock_acquire()
 2) reacquire_held_locks()

In lock_acquire(), raw_local_irq_save() is called beforehand. So
IRQs must have been disabled. So the check:

	DEBUG_LOCKS_WARN_ON(!irqs_disabled())

is kind of redundant in this case. So move the above check
to reacquire_held_locks() to eliminate redundant code in the
lock_acquire() path.
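
So the check ends up at the top of reacquire_held_locks(), roughly:

  	/* callers other than lock_acquire() must have disabled
  	 * IRQs on their own */
  	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
  		return 0;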

Signed-off-by: Waiman Long <longman@redhat.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1538511560-10090-3-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-03 08:46:02 +02:00
Waiman Long
44318d5b07 locking/lockdep: Remove add_chain_cache_classes()
The inline function add_chain_cache_classes() is defined, but has no
caller. Just remove it.

Signed-off-by: Waiman Long <longman@redhat.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: http://lkml.kernel.org/r/1538511560-10090-2-git-send-email-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-03 08:46:02 +02:00
Greg Kroah-Hartman
1d2ba7fee2 fbdev fixes for v4.19-rc7:

Merge tag 'fbdev-v4.19-rc7' of https://github.com/bzolnier/linux

Bartlomiej writes:
  "fbdev fixes for v4.19-rc7:

   - fix OMAPFB_MEMORY_READ ioctl to not leak kernel memory in omapfb driver
     (Tomi Valkeinen)

   - add missing prepare/unprepare clock operations in pxa168fb driver
     (Lubomir Rintel)

   - add nobgrt option in efifb driver to disable ACPI BGRT logo restore
     (Hans de Goede)

   - fix spelling mistake in fall-through annotation in stifb driver
     (Gustavo A. R. Silva)

   - fix URL for uvesafb repository in the documentation (Adam Jackson)"

* tag 'fbdev-v4.19-rc7' of https://github.com/bzolnier/linux:
  video/fbdev/stifb: Fix spelling mistake in fall-through annotation
  uvesafb: Fix URLs in the documentation
  efifb: BGRT: Add nobgrt option
  fbdev/omapfb: fix omapfb_memory_read infoleak
  pxa168fb: prepare the clock
2018-10-02 05:19:43 -07:00
Greg Kroah-Hartman
5e0b19ac33 MMC core:

Merge tag 'mmc-v4.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc

Ulf writes:
  "MMC core:
    - Fixup conversion of debounce time to/from ms/us

   MMC host:
    - sdhi: Fixup whitelisting for Gen3 types"

* tag 'mmc-v4.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc:
  mmc: slot-gpio: Fix debounce time to use miliseconds again
  mmc: core: Fix debounce time to use microseconds
  mmc: sdhi: sys_dmac: check for all Gen3 types when whitelisting
2018-10-02 05:19:04 -07:00
Andrew Murray
bccb484b9a Documentation/lockstat: Fix trivial typo
Fix an incorrect line number in the example output.

Signed-off-by: Andrew Murray <andrew.murray@arm.com>
Cc: Jiri Kosina <trivial@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-doc@vger.kernel.org
Link: http://lkml.kernel.org/r/1538391663-54524-1-git-send-email-andrew.murray@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02 10:59:25 +02:00
Andrea Parri
2f359c7ea5 locking/memory-barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()
Amend the changes in commit:

  1f03e8d291 ("locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()")

... by updating the documentation accordingly.

Also remove some obsolete information related to the implementation.
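
For reference, the replacement primitive spins on an arbitrary
expression, with VAL as the placeholder for the loaded value:

  	/* wait until *ptr becomes non-zero; the load that observes the
  	 * new value is an acquire, ordering all later accesses */
  	val = smp_cond_load_acquire(ptr, VAL != 0);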

Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Cc: Akira Yokosawa <akiyks@gmail.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Daniel Lustig <dlustig@nvidia.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jade Alglave <j.alglave@ucl.ac.uk>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luc Maranget <luc.maranget@inria.fr>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: linux-arch@vger.kernel.org
Cc: parri.andrea@gmail.com
Link: http://lkml.kernel.org/r/20180926182920.27644-5-paulmck@linux.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02 10:28:05 +02:00
Paul E. McKenney
d8fa25c4ef tools/memory-model: Add more LKMM limitations
This commit adds more detail about compiler optimizations and
not-yet-modeled Linux-kernel APIs.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Cc: stern@rowland.harvard.edu
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/20180926182920.27644-4-paulmck@linux.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02 10:28:04 +02:00
SeongJae Park
3d2046a6fa tools/memory-model: Fix a README typo
This commit fixes a duplicate-"the" typo in README.

Signed-off-by: SeongJae Park <sj38.park@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/20180926182920.27644-3-paulmck@linux.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02 10:28:03 +02:00
Alan Stern
6e89e831a9 tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire
More than one kernel developer has expressed the opinion that the LKMM
should enforce ordering of writes by locking.  In other words, given
the following code:

	WRITE_ONCE(x, 1);
	spin_unlock(&s):
	spin_lock(&s);
	WRITE_ONCE(y, 1);

the stores to x and y should be propagated in order to all other CPUs,
even though those other CPUs might not access the lock s.  In terms of
the memory model, this means expanding the cumul-fence relation.
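
In LKMM litmus form (a sketch; the litmus-tests suite has tests of this
shape), the requirement forbids the following outcome:

	C unlock-lock-write-ordering

	{}

	P0(int *x, int *y, spinlock_t *s)
	{
		spin_lock(s);
		WRITE_ONCE(*x, 1);
		spin_unlock(s);
		spin_lock(s);
		WRITE_ONCE(*y, 1);
		spin_unlock(s);
	}

	P1(int *x, int *y)
	{
		int r0;
		int r1;

		r0 = READ_ONCE(*y);
		smp_rmb();
		r1 = READ_ONCE(*x);
	}

	exists (1:r0=1 /\ 1:r1=0)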

Locks should also provide read-read (and read-write) ordering in a
similar way.  Given:

	READ_ONCE(x);
	spin_unlock(&s);
	spin_lock(&s);
	READ_ONCE(y);		// or WRITE_ONCE(y, 1);

the load of x should be executed before the load of (or store to) y.
The LKMM already provides this ordering, but it provides it even in
the case where the two accesses are separated by a release/acquire
pair of fences rather than unlock/lock.  This would prevent
architectures from using weakly ordered implementations of release and
acquire, which seems like an unnecessary restriction.  The patch
therefore removes the ordering requirement from the LKMM for that
case.

There are several arguments both for and against this change.  Let us
refer to these enhanced ordering properties by saying that the LKMM
would require locks to be RCtso (a bit of a misnomer, but analogous to
RCpc and RCsc) and it would require ordinary acquire/release only to
be RCpc.  (Note: In the following, the phrase "all supported
architectures" is meant not to include RISC-V.  Although RISC-V is
indeed supported by the kernel, the implementation is still somewhat
in a state of flux and therefore statements about it would be
premature.)

Pros:

	The kernel already provides RCtso ordering for locks on all
	supported architectures, even though this is not stated
	explicitly anywhere.  Therefore the LKMM should formalize it.

	In theory, guaranteeing RCtso ordering would reduce the need
	for additional barrier-like constructs meant to increase the
	ordering strength of locks.

	Will Deacon and Peter Zijlstra are strongly in favor of
	formalizing the RCtso requirement.  Linus Torvalds and Will
	would like to go even further, requiring locks to have RCsc
	behavior (ordering preceding writes against later reads), but
	they recognize that this would incur a noticeable performance
	degradation on the POWER architecture.  Linus also points out
	that people have made the mistake, in the past, of assuming
	that locking has stronger ordering properties than is
	currently guaranteed, and this change would reduce the
	likelihood of such mistakes.

	Not requiring ordinary acquire/release to be any stronger than
	RCpc may prove advantageous for future architectures, allowing
	them to implement smp_load_acquire() and smp_store_release()
	with more efficient machine instructions than would be
	possible if the operations had to be RCtso.  Will and Linus
	approve this rationale, hypothetical though it is at the
	moment (it may end up affecting the RISC-V implementation).
	The same argument may or may not apply to RMW-acquire/release;
	see also the second Con entry below.

	Linus feels that locks should be easy for people to use
	without worrying about memory consistency issues, since they
	are so pervasive in the kernel, whereas acquire/release is
	much more of an "experts only" tool.  Requiring locks to be
	RCtso is a step in this direction.

Cons:

	Andrea Parri and Luc Maranget think that locks should have the
	same ordering properties as ordinary acquire/release (indeed,
	Luc points out that the names "acquire" and "release" derive
	from the usage of locks).  Andrea points out that having
	different ordering properties for different forms of acquires
	and releases is not only unnecessary, it would also be
	confusing and unmaintainable.

	Locks are constructed from lower-level primitives, typically
	RMW-acquire (for locking) and ordinary release (for unlock).
	It is illogical to require stronger ordering properties from
	the high-level operations than from the low-level operations
	they comprise.  Thus, this change would make

		while (cmpxchg_acquire(&s, 0, 1) != 0)
			cpu_relax();

	an incorrect implementation of spin_lock(&s) as far as the
	LKMM is concerned.  In theory this weakness can be ameliorated
	by changing the LKMM even further, requiring
	RMW-acquire/release also to be RCtso (which it already is on
	all supported architectures).

	As far as I know, nobody has singled out any examples of code
	in the kernel that actually relies on locks being RCtso.
	(People mumble about RCU and the scheduler, but nobody has
	pointed to any actual code.  If there are any real cases,
	their number is likely quite small.)  If RCtso ordering is not
	needed, why require it?

	A handful of locking constructs (qspinlocks, qrwlocks, and
	mcs_spinlocks) are built on top of smp_cond_load_acquire()
	instead of an RMW-acquire instruction.  It currently provides
	only the ordinary acquire semantics, not the stronger ordering
	this patch would require of locks.  In theory this could be
	ameliorated by requiring smp_cond_load_acquire() in
	combination with ordinary release also to be RCtso (which is
	currently true on all supported architectures).

	On future weakly ordered architectures, people may be able to
	implement locks in a non-RCtso fashion with significant
	performance improvement.  Meeting the RCtso requirement would
	necessarily add run-time overhead.

Overall, the technical aspects of these arguments seem relatively
minor, and it appears mostly to boil down to a matter of opinion.
Since the opinions of senior kernel maintainers such as Linus,
Peter, and Will carry more weight than those of Luc and Andrea, this
patch changes the model in accordance with the maintainers' wishes.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Link: http://lkml.kernel.org/r/20180926182920.27644-2-paulmck@linux.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02 10:28:01 +02:00
Paul E. McKenney
c4f790f244 tools/memory-model: Add litmus-test naming scheme
This commit documents the scheme used to generate the names for the
litmus tests.

[ paulmck: Apply feedback from Andrea Parri and Will Deacon. ]
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Cc: stern@rowland.harvard.edu
Link: http://lkml.kernel.org/r/20180926182920.27644-1-paulmck@linux.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02 10:28:00 +02:00
Matthew Wilcox
27df89689e locking/spinlocks: Remove an instruction from spin and write locks
Both spin locks and write locks currently do:

 f0 0f b1 17             lock cmpxchg %edx,(%rdi)
 85 c0                   test   %eax,%eax
 75 05                   jne    [slowpath]

This 'test' insn is superfluous; the cmpxchg insn sets the Z flag
appropriately.  Peter pointed out that using atomic_try_cmpxchg_acquire()
will let the compiler know this is true.  Comparing before/after
disassemblies shows the only effect is to remove this insn.

Take this opportunity to make the spin & write lock code resemble each
other more closely and have similar likely() hints.
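
After the change the spin-lock fast path reads roughly as follows (the
write-lock variant is analogous):

  static __always_inline void queued_spin_lock(struct qspinlock *lock)
  {
  	u32 val = 0;

  	/* the cmpxchg result feeds the branch directly: no 'test' insn */
  	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
  		return;

  	queued_spin_lock_slowpath(lock, val);
  }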

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Link: http://lkml.kernel.org/r/20180820162639.GC25153@bombadil.infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-02 09:49:42 +02:00
Ard Biesheuvel
77ac1c02d9 jump_label: Fix NULL dereference bug in __jump_label_mod_update()
Commit 1948367768 ("jump_label: Annotate entries that operate on
__init code earlier") refactored the code that manages runtime
patching of jump labels in modules that are tied to static keys
defined in other modules or in the core kernel.

In the latter case, we may iterate over the static_key_mod linked
list until we hit the entry for the core kernel, whose 'mod' field
will be NULL, and attempt to dereference it to get at its 'state'
member.

So let's add a non-NULL check: this forces the 'init' argument of
__jump_label_update() to false for static keys that are defined in
the core kernel, which is appropriate given that __init annotated
jump_label entries in the core kernel should no longer be active
at this point (i.e., when loading modules).
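
The fix amounts to a guarded dereference, roughly:

  	/* the core kernel's static_key_mod entry has mod->mod == NULL */
  	struct module *m = mod->mod;

  	__jump_label_update(key, mod->entries, stop,
  			    m && m->state == MODULE_STATE_COMING);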

Fixes: 1948367768 ("jump_label: Annotate entries that operate on ...")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20181001081324.11553-1-ard.biesheuvel@linaro.org
2018-10-02 08:08:18 +02:00
Ard Biesheuvel
57d1587703 s390/vmlinux.lds: Move JUMP_TABLE_DATA into output section
Commit e872267b8b ("jump_table: move entries into ro_after_init
region") moved the __jump_table input section into the __ro_after_init
output section, but inadvertently put the macro in the wrong place in
the s390 linker script. Let's fix that.

Fixes: e872267b8b ("jump_table: move entries into ro_after_init region")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Cc: linux-s390@vger.kernel.org
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20180930164950.3841-1-ard.biesheuvel@linaro.org
2018-10-02 08:08:08 +02:00
Greg Kroah-Hartman
385afbf8c3 Late arm64 fixes

Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Will writes:
  "Late arm64 fixes

   - Fix handling of young contiguous ptes for hugetlb mappings

   - Fix livelock when taking access faults on contiguous hugetlb mappings

   - Tighten up register accesses via KVM SET_ONE_REG ioctl()s"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: KVM: Sanitize PSTATE.M when being set from userspace
  arm64: KVM: Tighten guest core register access from userspace
  arm64: hugetlb: Avoid unnecessary clearing in huge_ptep_set_access_flags
  arm64: hugetlb: Fix handling of young ptes
2018-10-01 17:24:20 -07:00
Greg Kroah-Hartman
b62e425593 ARM: SoC fixes

Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Olof writes:
  "ARM: SoC fixes

   A handful of fixes that have been coming in the last couple of weeks:

   - Freescale fixes for on-chip accelerators
   - A DT fix for stm32 to avoid fallback to non-DMA SPI mode
   - Fixes for badly specified interrupts on BCM63xx SoCs
   - Allwinner A64 HDMI was incorrectly specified as fully compatible with R40
   - Drive strength fix for SAMA5D2 NAND pins on one board"

* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  ARM: dts: stm32: update SPI6 dmas property on stm32mp157c
  soc: fsl: qe: Fix copy/paste bug in ucc_get_tdm_sync_shift()
  soc: fsl: qbman: qman: avoid allocating from non existing gen_pool
  ARM: dts: BCM63xx: Fix incorrect interrupt specifiers
  MAINTAINERS: update the Annapurna Labs maintainer email
  ARM: dts: sun8i: drop A64 HDMI PHY fallback compatible from R40 DT
  ARM: dts: at91: sama5d2_ptc_ek: fix nand pinctrl
2018-10-01 17:23:27 -07:00
Greg Kroah-Hartman
ef0f2584c2 Fixes for v4.19-rc7

Merge tag 'pstore-v4.19-rc7' of https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Kees writes:
  "Pstore fixes for v4.19-rc7

   - Fix failure-path memory leak in ramoops_init (nixiaoming)"

* tag 'pstore-v4.19-rc7' of https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  pstore/ram: Fix failure-path memory leak in ramoops_init
2018-10-01 17:22:36 -07:00
Marc Zyngier
2a3f93459d arm64: KVM: Sanitize PSTATE.M when being set from userspace
Not all execution modes are valid for a guest, and some of them
depend on what the HW actually supports. Let's verify that what
userspace provides is compatible with both the VM settings and
the HW capabilities.
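
As a rough illustration of the kind of check this implies, here is a
standalone sketch; the PSR_MODE_* values are defined locally to mirror
arm64's encoding, and guest_mode_is_valid() is an invented name for
illustration, not the actual patch:

  #include <stdbool.h>
  #include <stdint.h>

  #define PSR_MODE_EL0t 0x0
  #define PSR_MODE_EL1t 0x4
  #define PSR_MODE_EL1h 0x5
  #define PSR_MODE_MASK 0xf

  /* Whitelist the modes a guest vcpu may legitimately run in;
   * anything else (EL2/EL3, or modes the VM settings and HW do
   * not support) is rejected before it reaches guest PSTATE. */
  static bool guest_mode_is_valid(uint64_t pstate)
  {
      switch (pstate & PSR_MODE_MASK) {
      case PSR_MODE_EL0t:
      case PSR_MODE_EL1t:
      case PSR_MODE_EL1h:
          return true;
      default:
          return false;
      }
  }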

Cc: <stable@vger.kernel.org>
Fixes: 0d854a60b1 ("arm64: KVM: enable initialization of a 32bit vcpu")
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-10-01 14:38:26 +01:00
Dave Martin
d26c25a9d1 arm64: KVM: Tighten guest core register access from userspace
We currently allow userspace to access the core register file
in just about any possible way, including straddling multiple
registers and doing unaligned accesses.

This is not the expected use of the ABI, and nobody is actually
using it that way. Let's tighten it by explicitly checking
the size and alignment for each field of the register file.
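
The shape of that check, as a self-contained sketch (the struct layout
and the validate_core_access() helper are invented for illustration;
this is not the kernel's kvm_regs ABI):

  #include <stddef.h>
  #include <stdint.h>

  struct core_regs {
      uint64_t regs[31];
      uint64_t sp;
      uint64_t pc;
      uint64_t pstate;
  };

  /* Reject any access that does not cover exactly one whole,
   * naturally aligned field: no straddling, no partial accesses. */
  static int validate_core_access(size_t off, size_t size)
  {
      if (size != sizeof(uint64_t))
          return -1;
      if (off % sizeof(uint64_t) != 0)
          return -1;
      if (off + size > sizeof(struct core_regs))
          return -1;
      return 0;
  }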

Cc: <stable@vger.kernel.org>
Fixes: 2f4a07c5f9 ("arm64: KVM: guest one-reg interface")
Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
[maz: rewrote Dave's initial patch to be more easily backported]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-10-01 14:38:05 +01:00
Masahiro Yamada
ac0d656795 x86/build: Remove unused CONFIG_AS_CRC32
CONFIG_AS_CRC32 is not used anywhere. Its last user was removed by

  0cb6c969ed ("net, lib: kill arch_fast_hash library bits")

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lkml.kernel.org/r/1538389443-28514-1-git-send-email-yamada.masahiro@socionext.com
2018-10-01 14:58:09 +02:00
Kees Cook
bac6f6cda2 pstore/ram: Fix failure-path memory leak in ramoops_init
As reported by nixiaoming, with some minor clarifications:

1) memory leak in ramoops_register_dummy():
   dummy_data = kzalloc(sizeof(*dummy_data), GFP_KERNEL);
   but no kfree() if platform_device_register_data() fails.

2) memory leak in ramoops_init():
   Missing platform_device_unregister(dummy) and kfree(dummy_data)
   if platform_driver_register(&ramoops_driver) fails.

I've clarified the purpose of ramoops_register_dummy(), and added a
common cleanup routine for all three failure paths to call.
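
The pattern looks roughly like this (a self-contained sketch;
register_device(), register_driver() and struct dummy_data are
stand-ins, not the actual pstore/platform API):

  #include <stdlib.h>

  struct dummy_data { int field; };
  static struct dummy_data *dummy_data;
  static int device_registered;

  static int register_device(void) { device_registered = 1; return 0; }
  static int register_driver(void) { return 0; }

  /* One cleanup routine shared by every failure path, so no exit
   * from init can leak the allocation or the registration. */
  static void unregister_dummy(void)
  {
      if (device_registered) {
          /* platform_device_unregister() would go here */
          device_registered = 0;
      }
      free(dummy_data);          /* free(NULL) is a no-op */
      dummy_data = NULL;
  }

  static int init_example(void)
  {
      dummy_data = calloc(1, sizeof(*dummy_data));
      if (!dummy_data)
          return -1;
      if (register_device() < 0)
          goto fail;
      if (register_driver() < 0)
          goto fail;
      return 0;
  fail:
      unregister_dummy();
      return -1;
  }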

Reported-by: nixiaoming <nixiaoming@huawei.com>
Cc: stable@vger.kernel.org
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
2018-09-30 10:15:41 -07:00
Greg Kroah-Hartman
17b57b1883 Linux 4.19-rc6 2018-09-30 07:15:35 -07:00
Greg Kroah-Hartman
9a10b06375 A trivial fix for auxdisplay
- MAINTAINERS reference fix for moved file
     Reported by Joe Perches

Merge tag 'auxdisplay-for-greg-v4.19-rc6' of https://github.com/ojeda/linux

Miguel writes:
  "A trivial fix for auxdisplay

    - MAINTAINERS reference fix for moved file
      Reported by Joe Perches"

* tag 'auxdisplay-for-greg-v4.19-rc6' of https://github.com/ojeda/linux:
  MAINTAINERS: fix reference to moved drivers/{misc => auxdisplay}/panel.c
2018-09-30 06:20:33 -07:00
Greg Kroah-Hartman
9ba6873e16 filesystem-dax for 4.19-rc6
Fix a deadlock in the new-for-4.19 dax_lock_mapping_entry() routine.

Merge tag 'libnvdimm-fixes2-4.19-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm

Dan writes:
  "filesystem-dax for 4.19-rc6

   Fix a deadlock in the new-for-4.19 dax_lock_mapping_entry() routine."

* tag 'libnvdimm-fixes2-4.19-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
  dax: Fix deadlock in dax_lock_mapping_entry()
2018-09-30 06:19:38 -07:00
Miguel Ojeda
03d179a840 MAINTAINERS: fix reference to moved drivers/{misc => auxdisplay}/panel.c
Commit 51c1e9b554 ("auxdisplay: Move panel.c to drivers/auxdisplay folder")
moved the file, but the MAINTAINERS reference was not updated.

Link: https://lore.kernel.org/lkml/20180928220131.31075-1-joe@perches.com/
Reported-by: Joe Perches <joe@perches.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2018-09-30 13:50:05 +02:00
Greg Kroah-Hartman
291d0e5d81 for-linus-20180929

Merge tag 'for-linus-20180929' of git://git.kernel.dk/linux-block

Jens writes:
  "Block fixes for 4.19-rc6

   A set of fixes that should go into this release. This pull request
   contains:

   - A fix (hopefully) for the persistent grants for xen-blkfront. A
     previous fix from this series wasn't complete, hence reverted, and
     this one should hopefully be it. (Boris Ostrovsky)

   - Fix for an elevator drain warning with SMR devices, which is
     triggered when you switch schedulers (Damien)

   - bcache deadlock fix (Guoju Fang)

   - Fix for the block unplug tracepoint, which has had the
     timer/explicit flag reverted since 4.11 (Ilya)

   - Fix a regression in this series where the blk-mq timeout hook is
     invoked with the RCU read lock held, hence preventing it from
     blocking (Keith)

   - NVMe pull from Christoph, with a single multipath fix (Susobhan Dey)"

* tag 'for-linus-20180929' of git://git.kernel.dk/linux-block:
  xen/blkfront: correct purging of persistent grants
  Revert "xen/blkfront: When purging persistent grants, keep them in the buffer"
  blk-mq: I/O and timer unplugs are inverted in blktrace
  bcache: add separate workqueue for journal_write to avoid deadlock
  xen/blkfront: When purging persistent grants, keep them in the buffer
  block: fix deadline elevator drain for zoned block devices
  blk-mq: Allow blocking queue tag iter callbacks
  nvme: properly propagate errors in nvme_mpath_init
2018-09-29 14:52:14 -07:00