linux_dsm_epyc7002/kernel/locking
Frederic Weisbecker 948f83768a locking/lockdep: Test all incompatible scenarios at once in check_irq_usage()
check_prev_add_irq() tests all incompatible scenarios one after the
other while adding a lock (@next) to the dependency tree of a lock
(@prev):

	LOCK_USED_IN_HARDIRQ          vs         LOCK_ENABLED_HARDIRQ
	LOCK_USED_IN_HARDIRQ_READ     vs         LOCK_ENABLED_HARDIRQ
	LOCK_USED_IN_SOFTIRQ          vs         LOCK_ENABLED_SOFTIRQ
	LOCK_USED_IN_SOFTIRQ_READ     vs         LOCK_ENABLED_SOFTIRQ

For each of these four scenarios, we must at least iterate through the
@prev backward dependencies. Then, if a lock matching the relevant
LOCK_USED_IN_* bit is found there, we must also iterate through the
@next forward dependencies.

Therefore in the best case we iterate 4 times, in the worst case 8 times.
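
A minimal compile-only sketch of this pairwise scheme is shown below. The
types and walkers (dep_node, walk_backwards(), walk_forwards()) are
hypothetical stand-ins rather than the lockdep implementation; only the
usage bit names are taken from the scenarios above.

  /* Toy subset of the usage bits listed above. */
  enum lock_usage_bit {
      LOCK_USED_IN_HARDIRQ, LOCK_USED_IN_HARDIRQ_READ,
      LOCK_USED_IN_SOFTIRQ, LOCK_USED_IN_SOFTIRQ_READ,
      LOCK_ENABLED_HARDIRQ, LOCK_ENABLED_SOFTIRQ,
  };

  struct dep_node;    /* stand-in for a lock class in the dependency graph */

  /* Hypothetical walkers: return non-zero when a lock with @bit is reachable. */
  int walk_backwards(struct dep_node *prev, enum lock_usage_bit bit);
  int walk_forwards(struct dep_node *next, enum lock_usage_bit bit);

  /* One scenario: a backward walk, plus a forward walk only on a hit. */
  static int check_scenario(struct dep_node *prev, struct dep_node *next,
                            enum lock_usage_bit used_in,
                            enum lock_usage_bit enabled)
  {
      if (!walk_backwards(prev, used_in))
          return 1;                         /* one walk, nothing to check forward */
      return !walk_forwards(next, enabled); /* second walk, 0 means incompatible */
  }

  /* Four scenarios: 4 walks in the best case, 8 in the worst case. */
  static int check_all_scenarios(struct dep_node *prev, struct dep_node *next)
  {
      return check_scenario(prev, next, LOCK_USED_IN_HARDIRQ,      LOCK_ENABLED_HARDIRQ) &&
             check_scenario(prev, next, LOCK_USED_IN_HARDIRQ_READ, LOCK_ENABLED_HARDIRQ) &&
             check_scenario(prev, next, LOCK_USED_IN_SOFTIRQ,      LOCK_ENABLED_SOFTIRQ) &&
             check_scenario(prev, next, LOCK_USED_IN_SOFTIRQ_READ, LOCK_ENABLED_SOFTIRQ);
  }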

A different approach lets us divide the number of branch iterations
by 4; it is sketched in code after the steps below:

1) Iterate through the @prev backward dependencies and accumulate all of
   their IRQ usages in a single mask. In the best case, where none of
   those locks has been used in IRQ context, we stop here.

2) Iterate through the @next forward dependencies and try to find a lock
   whose usage is exclusive to the accumulated usages gathered in the
   previous step. If we find one (call it @lockA), we have found an
   incompatible use; otherwise we stop here. Only bad locking scenarios
   go further, so a sane verification stops at this point.

3) Iterate again through the @prev backward dependencies and find the
   lock whose usage matches @lockA in terms of incompatibility. Call
   that lock @lockB.

4) Report the incompatible usages of @lockA and @lockB.

If no incompatible use is found, the verification never goes beyond
step 2, which means at most two iterations.
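
To illustrate the idea, here is a small self-contained model of the
accumulated-mask approach. It is only a sketch: the toy_* types, bits
and helpers are invented for the example and do not reflect the actual
check_irq_usage() code.

  #include <stdio.h>

  /* Toy usage bits; the USED_IN <-> ENABLED pairing mirrors the scenarios above. */
  #define TOY_USED_IN_HARDIRQ  0x1
  #define TOY_USED_IN_SOFTIRQ  0x2
  #define TOY_ENABLED_HARDIRQ  0x4
  #define TOY_ENABLED_SOFTIRQ  0x8
  #define TOY_USED_IN_IRQ_ALL  (TOY_USED_IN_HARDIRQ | TOY_USED_IN_SOFTIRQ)

  struct toy_lock {
      const char *name;
      unsigned int usage_mask;
  };

  /* Bits that are incompatible with the USED_IN bits present in @mask. */
  static unsigned int toy_exclusive_mask(unsigned int mask)
  {
      unsigned int excl = 0;

      if (mask & TOY_USED_IN_HARDIRQ)
          excl |= TOY_ENABLED_HARDIRQ;
      if (mask & TOY_USED_IN_SOFTIRQ)
          excl |= TOY_ENABLED_SOFTIRQ;
      return excl;
  }

  /* @prevs: locks reachable backward from @prev; @nexts: forward from @next. */
  static void toy_check_irq_usage(struct toy_lock **prevs, int nr_prevs,
                                  struct toy_lock **nexts, int nr_nexts)
  {
      unsigned int usage_mask = 0, forward_mask;
      struct toy_lock *lockA = NULL, *lockB = NULL;
      int i;

      /* Step 1: accumulate every IRQ usage found behind @prev. */
      for (i = 0; i < nr_prevs; i++)
          usage_mask |= prevs[i]->usage_mask;
      usage_mask &= TOY_USED_IN_IRQ_ALL;
      if (!usage_mask)
          return;    /* nothing used in IRQ: done after one pass */

      /* Step 2: look ahead of @next for a usage exclusive to that mask. */
      forward_mask = toy_exclusive_mask(usage_mask);
      for (i = 0; i < nr_nexts; i++) {
          if (nexts[i]->usage_mask & forward_mask) {
              lockA = nexts[i];
              break;
          }
      }
      if (!lockA)
          return;    /* sane scenario: done after two passes */

      /* Step 3: walk behind @prev again for the lock conflicting with @lockA. */
      for (i = 0; i < nr_prevs; i++) {
          if (toy_exclusive_mask(prevs[i]->usage_mask) & lockA->usage_mask) {
              lockB = prevs[i];
              break;
          }
      }

      /* Step 4: report the incompatible pair. */
      if (lockB)
          printf("incompatible usage: %s vs %s\n", lockB->name, lockA->name);
  }

  int main(void)
  {
      /* A is used in hardirq behind @prev, B enables hardirqs ahead of @next. */
      struct toy_lock A = { "A", TOY_USED_IN_HARDIRQ };
      struct toy_lock B = { "B", TOY_ENABLED_HARDIRQ };
      struct toy_lock *prevs[] = { &A };
      struct toy_lock *nexts[] = { &B };

      toy_check_irq_usage(prevs, 1, nexts, 1);
      return 0;
  }

In this model a sane dependency graph returns after step 1 or step 2,
which is where the "at most two iterations" above comes from.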

The following compares the execution measurements of the function
check_prev_add_irq():

                 | Number of calls | Avg (ns) | Stdev (ns) | Total time (ns)
  ---------------------------------------------------------------------------
  Mainline       |      8452       |   2652   |    11962   |     22415143
  This patch     |      8452       |   1518   |     7090   |     12835602

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: https://lkml.kernel.org/r/20190402160244.32434-5-frederic@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-29 08:29:20 +02:00
lock_events_list.h locking/rwsem: Enable lock event counting 2019-04-10 10:56:06 +02:00
lock_events.c locking/lock_events: Don't show pvqspinlock events on bare metal 2019-04-10 10:56:05 +02:00
lock_events.h locking/lock_events: Make lock_events available for all archs & other locks 2019-04-10 10:56:04 +02:00
lockdep_internals.h locking/lockdep: Test all incompatible scenarios at once in check_irq_usage() 2019-04-29 08:29:20 +02:00
lockdep_proc.c locking/lockdep: Introduce lockdep_next_lockchain() and lock_chain_count() 2019-02-28 07:55:44 +01:00
lockdep_states.h locking/lockdep: Rework FS_RECLAIM annotation 2017-08-10 12:29:03 +02:00
lockdep.c locking/lockdep: Test all incompatible scenarios at once in check_irq_usage() 2019-04-29 08:29:20 +02:00
locktorture.c Merge branches 'doc.2019.01.26a', 'fixes.2019.01.26a', 'sil.2019.01.26a', 'spdx.2019.02.09a', 'srcu.2019.01.26a' and 'torture.2019.01.26a' into HEAD 2019-02-09 08:47:52 -08:00
Makefile locking/lock_events: Make lock_events available for all archs & other locks 2019-04-10 10:56:04 +02:00
mcs_spinlock.h locking/mcs: Use smp_cond_load_acquire() in MCS spin loop 2018-04-27 09:48:49 +02:00
mutex-debug.c locking/mutex: Replace spin_is_locked() with lockdep 2018-11-12 09:06:22 -08:00
mutex-debug.h License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
mutex.c kernel/locking/mutex.c: remove caller signal_pending branch predictions 2019-01-04 13:13:48 -08:00
mutex.h License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
osq_lock.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
percpu-rwsem.c locking/rwsem: Remove arch specific rwsem files 2019-04-03 14:50:50 +02:00
qrwlock.c locking/qrwlock: Prevent slowpath writers getting held up by fastpath 2017-10-25 10:57:25 +02:00
qspinlock_paravirt.h locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs 2019-04-10 10:56:03 +02:00
qspinlock_stat.h locking/lock_events: Make lock_events available for all archs & other locks 2019-04-10 10:56:04 +02:00
qspinlock.c locking/qspinlock_stat: Introduce generic lockevent_*() counting APIs 2019-04-10 10:56:03 +02:00
rtmutex_common.h locking/rtmutex: Handle non enqueued waiters gracefully in remove_waiter() 2018-03-28 23:01:30 +02:00
rtmutex-debug.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
rtmutex-debug.h License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
rtmutex.c futex: Handle early deadlock return correctly 2019-02-08 13:00:36 +01:00
rtmutex.h License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
rwsem-xadd.c locking/rwsem: Enable lock event counting 2019-04-10 10:56:06 +02:00
rwsem.c locking/rwsem: Enhance DEBUG_RWSEMS_WARN_ON() macro 2019-04-10 10:56:03 +02:00
rwsem.h locking/rwsem: Prevent unneeded warning during locking selftest 2019-04-14 11:09:35 +02:00
semaphore.c sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h> 2017-03-02 08:42:34 +01:00
spinlock_debug.c locking/spinlock/debug: Remove spinlock lockup detection code 2017-02-10 09:09:49 +01:00
spinlock.c locking/core: Remove break_lock field when CONFIG_GENERIC_LOCKBREAK=y 2017-12-12 11:24:01 +01:00
test-ww_mutex.c locking/ww_mutex: Fix runtime warning in the WW mutex selftest 2018-10-03 08:56:31 +02:00