linux_dsm_epyc7002/kernel/locking
Thomas Gleixner 8208498438 rtmutex: Detect changes in the pi lock chain
When we walk the lock chain, we drop all locks after each step. So the
lock chain can change under us before we reacquire the locks. That's
harmless in principle as we just follow the wrong lock path. But it
can lead to a false positive in the deadlock detection logic:

T0 holds L0
T0 blocks on L1 held by T1
T1 blocks on L2 held by T2
T2 blocks on L3 held by T3
T3 blocks on L4 held by T4

Now we walk the chain

lock T1 -> lock L2 -> adjust L2 -> unlock T1 ->
     lock T2 -> adjust T2 -> drop locks

T2 times out and blocks on L0

Now we continue:

lock T2 -> lock L0 -> deadlock detected, but it's not a deadlock at all.
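
To make the false positive concrete, here is a minimal userspace model
of that resumed walk step. The structures and the deadlock check are
simplified stand-ins, not the real rtmutex code: the walk comes back
with a task pointer saved earlier, blindly follows whatever that task
blocks on now, and declares a deadlock as soon as it reaches a lock
held by the task the walk started from.

    #include <stdio.h>

    /* Simplified stand-ins for the rtmutex internals; illustrative only. */
    struct lock;

    struct task {
            const char *name;
            struct lock *pi_blocked_on;     /* lock this task waits on */
    };

    struct lock {
            const char *name;
            struct task *owner;
    };

    /* One resumed step of the chain walk: declare a deadlock when the
     * walk reaches a lock held by @top_task, the task it started from.
     * With a stale @task pointer this mixes old and new chain state. */
    static int walk_step(struct task *task, struct task *top_task)
    {
            struct lock *lock = task->pi_blocked_on;

            if (lock && lock->owner == top_task) {
                    printf("deadlock: %s -> %s held by %s\n",
                           task->name, lock->name, top_task->name);
                    return -1;      /* -EDEADLK in the kernel */
            }
            return 0;
    }

    int main(void)
    {
            struct task t0 = { "T0", NULL }, t2 = { "T2", NULL };
            struct lock l0 = { "L0", &t0 }, l3 = { "L3", NULL };

            t2.pi_blocked_on = &l3; /* chain state when the locks were dropped */
            t2.pi_blocked_on = &l0; /* meanwhile: T2 timed out, blocked on L0 */

            return walk_step(&t2, &t0) ? 1 : 0;    /* bogus deadlock report */
    }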

Brad tried to work around that in the deadlock detection logic itself,
but the more I looked at it the less I liked it, because it's crystal
ball magic after the fact.

We can actually detect a chain change quite simply:

lock T1 -> lock L2 -> adjust L2 -> unlock T1 -> lock T2 -> adjust T2 ->

     next_lock = T2->pi_blocked_on->lock;

drop locks

T2 times out and blocks on L0

Now we continue:

lock T2 -> 

     if (next_lock != T2->pi_blocked_on->lock)
             return;

So if we detect that T2 is now blocked on a different lock, we stop the
chain walk. That's also correct in the following scenario:

lock T1 -> lock L2 -> adjust L2 -> unlock T1 -> lock T2 -> adjust T2 ->

     next_lock = T2->pi_blocked_on->lock;

drop locks

T3 times out and drops L3
T2 acquires L3 and blocks on L4 now

Now we continue:

lock T2 -> 

     if (next_lock != T2->pi_blocked_on->lock)
             return;

We don't have to follow the chain any further at that point, because T2
has already propagated our priority up to T4.
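
For illustration, here is a minimal userspace sketch of that check.
Again the structures and names are simplified stand-ins, not the actual
rtmutex.c code: snapshot the lock the next task blocks on before
dropping the locks, compare the snapshot after reacquiring them, and
give up the walk on a mismatch.

    #include <stdio.h>

    /* Simplified stand-ins for the rtmutex internals; illustrative only. */
    struct lock;

    struct task {
            const char *name;
            struct lock *pi_blocked_on;     /* lock this task waits on */
    };

    struct lock {
            const char *name;
            struct task *owner;
    };

    /* Resume the walk at @task with the @next_lock snapshot taken
     * before the locks were dropped. If @task is no longer blocked on
     * that lock, the chain changed under us: stop instead of walking
     * a stale or already fully adjusted chain. */
    static void resume_walk(struct task *task, struct lock *next_lock)
    {
            if (!task->pi_blocked_on || task->pi_blocked_on != next_lock) {
                    printf("%s moved on, abort the walk\n", task->name);
                    return;
            }
            printf("continue: lock %s\n", next_lock->name);
    }

    int main(void)
    {
            struct task t0 = { "T0", NULL }, t2 = { "T2", NULL };
            struct lock l0 = { "L0", &t0 }, l3 = { "L3", NULL };
            struct lock *next_lock;

            t2.pi_blocked_on = &l3;         /* T2 blocks on L3 held by T3 */
            next_lock = t2.pi_blocked_on;   /* snapshot, then drop all locks */

            t2.pi_blocked_on = &l0;         /* meanwhile: T2 blocked on L0 */

            resume_walk(&t2, next_lock);    /* change detected, walk stops */
            return 0;
    }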

[ Folded a cleanup patch from peterz ]

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reported-by: Brad Mouring <bmouring@ni.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140605152801.930031935@linutronix.de
Cc: stable@vger.kernel.org
2014-06-07 14:55:40 +02:00
lglock.c locking: Move the lglocks code to kernel/locking/ 2013-11-06 09:24:20 +01:00
lockdep_internals.h
lockdep_proc.c lockdep/proc: Fix lock-time avg computation 2013-11-11 12:41:34 +01:00
lockdep_states.h
lockdep.c asmlinkage: Add explicit __visible to drivers/*, lib/*, kernel/* 2014-05-05 16:07:46 -07:00
locktorture.c rcutorture: Add a lock_busted to test the test 2014-02-23 09:04:43 -08:00
Makefile lglock: map to spinlock when !CONFIG_SMP 2014-04-07 16:36:14 -07:00
mcs_spinlock.c locking/mutexes: Introduce cancelable MCS lock for adaptive spinning 2014-03-11 12:14:56 +01:00
mcs_spinlock.h locking/mutexes: Introduce cancelable MCS lock for adaptive spinning 2014-03-11 12:14:56 +01:00
mutex-debug.c locking/mutex: Fix debug_mutexes 2014-04-11 10:40:35 +02:00
mutex-debug.h
mutex.c Merge branch 'x86-asmlinkage-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip 2014-03-31 14:13:25 -07:00
mutex.h
percpu-rwsem.c locking: Move the percpu-rwsem code to kernel/locking/ 2013-11-06 09:24:22 +01:00
rtmutex_common.h sched/deadline: Add SCHED_DEADLINE inheritance logic 2014-01-13 13:42:56 +01:00
rtmutex-debug.c rtmutex: Turn the plist into an rb-tree 2014-01-13 13:41:50 +01:00
rtmutex-debug.h rtmutex: Handle deadlock detection smarter 2014-06-07 14:55:40 +02:00
rtmutex-tester.c locking: Move the rtmutex code to kernel/locking/ 2013-11-06 09:23:59 +01:00
rtmutex.c rtmutex: Detect changes in the pi lock chain 2014-06-07 14:55:40 +02:00
rtmutex.h rtmutex: Handle deadlock detection smarter 2014-06-07 14:55:40 +02:00
rwsem-spinlock.c locking: Move the rwsem code to kernel/locking/ 2013-11-06 09:24:18 +01:00
rwsem-xadd.c asmlinkage: Mark rwsem functions that can be called from assembler asmlinkage 2014-02-13 18:13:37 -08:00
rwsem.c locking: Move the rwsem code to kernel/locking/ 2013-11-06 09:24:18 +01:00
semaphore.c locking: Move the semaphore core to kernel/locking/ 2013-11-06 07:55:22 +01:00
spinlock_debug.c locking: Move the spinlock code to kernel/locking/ 2013-11-06 07:55:21 +01:00
spinlock.c locking: Move the spinlock code to kernel/locking/ 2013-11-06 07:55:21 +01:00