linux_dsm_epyc7002/kernel/locking
Peter Zijlstra 5293c2efda futex,rt_mutex: Provide futex specific rt_mutex API
Part of what makes futex_unlock_pi() intricate is that
rt_mutex_futex_unlock() -> rt_mutex_slowunlock() can drop
rt_mutex::wait_lock.

This means it cannot rely on the atomicity of wait_lock, which would be
preferred in order to reduce reliance on hb->lock.
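
For context, a rough sketch of the lock nesting involved (a simplified
call chain; names follow kernel/futex.c and kernel/locking/rtmutex.c of
this era, details elided):

    /*
     * futex_unlock_pi()
     *   spin_lock(&hb->lock);               // futex hash-bucket lock
     *   ...
     *   rt_mutex_futex_unlock(&pi_state->pi_mutex);
     *     rt_mutex_slowunlock()
     *       raw_spin_lock(&lock->wait_lock);
     *       ...                             // may drop + retake wait_lock!
     *       raw_spin_unlock(&lock->wait_lock);
     *   spin_unlock(&hb->lock);
     *
     * Because wait_lock can be dropped mid-unlock, the rt_mutex state
     * can change underneath the unlock path; hb->lock ends up providing
     * the serialization that wait_lock alone should.
     */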

rt_mutex_slowunlock() needs to drop wait_lock because it can race with
the rt_mutex fastpath; futexes, however, have their own fastpath.
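
The fastpath in question is a lockless cmpxchg on rt_mutex::owner, done
without wait_lock. A minimal sketch of both sides of the race,
simplified from kernel/locking/rtmutex.c of this era (the cmpxchg
helper and unlock_rt_mutex_safe() are real, surrounding details
elided):

    /* Fast path: an uncontended acquire is one cmpxchg on ->owner. */
    if (rt_mutex_cmpxchg_acquire(lock, NULL, current))
        return;    /* acquired; wait_lock never taken */

    /*
     * Slow unlock: ->owner must not be cleared while wait_lock is
     * held, because a fast-path acquire/release/kfree() of the lock
     * could then race with the final wait_lock release. Hence the
     * drop-and-retry loop:
     */
    while (!rt_mutex_has_waiters(lock)) {
        /* Drops lock->wait_lock, then cmpxchg()s ->owner to NULL */
        if (unlock_rt_mutex_safe(lock, flags) == true)
            return false;
        /* A waiter slipped in: relock and try again */
        raw_spin_lock_irqsave(&lock->wait_lock, flags);
    }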

Since futexes already have a bunch of separate rt_mutex accessors,
complete that set and implement an rt_mutex variant without the
fastpath for them.
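
As a minimal sketch, the no-fastpath variants have roughly the
following shape (simplified: the wake-queue/deboost plumbing is
abridged, see rtmutex.c and rtmutex_common.h for the real
implementations):

    /* Futex variants: must not use the fastpath. */
    int __sched rt_mutex_futex_trylock(struct rt_mutex *lock)
    {
        /* Straight to the slow path; always takes wait_lock. */
        return rt_mutex_slowtrylock(lock);
    }

    void __sched rt_mutex_futex_unlock(struct rt_mutex *lock)
    {
        DEFINE_WAKE_Q(wake_q);
        bool deboost = false;

        raw_spin_lock_irq(&lock->wait_lock);

        if (!rt_mutex_has_waiters(lock)) {
            /* No fastpath to race with: safe under wait_lock. */
            lock->owner = NULL;
        } else {
            mark_wakeup_next_waiter(&wake_q, lock);
            deboost = true;
        }

        /* wait_lock stays held across the whole state transition. */
        raw_spin_unlock_irq(&lock->wait_lock);

        if (deboost) {
            wake_up_q(&wake_q);
            rt_mutex_adjust_prio(current);
        }
    }

With wait_lock held from state inspection to owner hand-off, the futex
unlock path no longer has to cope with wait_lock being dropped
underneath it, which is what lets it lean on wait_lock rather than
hb->lock.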

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: juri.lelli@arm.com
Cc: bigeasy@linutronix.de
Cc: xlpang@redhat.com
Cc: rostedt@goodmis.org
Cc: mathieu.desnoyers@efficios.com
Cc: jdesfossez@efficios.com
Cc: dvhart@infradead.org
Cc: bristot@redhat.com
Link: http://lkml.kernel.org/r/20170322104151.702962446@infradead.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2017-03-23 19:10:07 +01:00
lockdep_internals.h lockdep: Limit static allocations if PROVE_LOCKING_SMALL is defined 2016-11-18 11:33:19 -08:00
lockdep_proc.c Replace <asm/uaccess.h> with <linux/uaccess.h> globally 2016-12-24 11:46:01 -08:00
lockdep_states.h
lockdep.c locking/lockdep: Handle statically initialized PER_CPU locks properly 2017-03-16 09:57:08 +01:00
locktorture.c sched/headers: Prepare for the removal of <linux/rtmutex.h> from <linux/sched.h> 2017-03-02 08:42:32 +01:00
Makefile locking/ww_mutex: Begin kselftests for ww_mutex 2017-01-14 11:37:14 +01:00
mcs_spinlock.h locking/core: Remove cpu_relax_lowlatency() users 2016-11-16 10:15:10 +01:00
mutex-debug.c locking/mutex: Rework mutex::owner 2016-10-25 11:31:50 +02:00
mutex-debug.h locking/mutex: Fix lockdep_assert_held() fail 2017-01-30 11:42:59 +01:00
mutex.c sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h> 2017-03-02 08:42:34 +01:00
mutex.h locking/mutex: Fix lockdep_assert_held() fail 2017-01-30 11:42:59 +01:00
osq_lock.c locking/osq: Break out of spin-wait busy waiting loop for a preempted vCPU in osq_lock() 2016-11-22 12:48:10 +01:00
percpu-rwsem.c locking/percpu-rwsem: Replace waitqueue with rcuwait 2017-01-14 11:14:35 +01:00
qrwlock.c locking/core: Remove cpu_relax_lowlatency() users 2016-11-16 10:15:10 +01:00
qspinlock_paravirt.h locking/pvqspinlock: Don't wait if vCPU is preempted 2017-01-12 09:35:57 +01:00
qspinlock_stat.h sched/headers: Prepare for new header dependencies before moving code to <linux/sched/clock.h> 2017-03-02 08:42:27 +01:00
qspinlock.c locking/qspinlock: Use __this_cpu_dec() instead of full-blown this_cpu_dec() 2016-06-27 11:37:41 +02:00
rtmutex_common.h futex,rt_mutex: Provide futex specific rt_mutex API 2017-03-23 19:10:07 +01:00
rtmutex-debug.c futex: Remove rt_mutex_deadlock_account_*() 2017-03-23 19:10:07 +01:00
rtmutex-debug.h futex: Remove rt_mutex_deadlock_account_*() 2017-03-23 19:10:07 +01:00
rtmutex.c futex,rt_mutex: Provide futex specific rt_mutex API 2017-03-23 19:10:07 +01:00
rtmutex.h futex: Remove rt_mutex_deadlock_account_*() 2017-03-23 19:10:07 +01:00
rwsem-spinlock.c locking/rwsem: Fix down_write_killable() for CONFIG_RWSEM_GENERIC_SPINLOCK=y 2017-03-16 09:28:30 +01:00
rwsem-xadd.c sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h> 2017-03-02 08:42:34 +01:00
rwsem.c locking/lockdep: Add new check to lock_downgrade() 2017-03-16 09:57:07 +01:00
rwsem.h locking/rwsem: Protect all writes to owner by WRITE_ONCE() 2016-06-08 15:16:59 +02:00
semaphore.c sched/headers: Prepare for new header dependencies before moving code to <linux/sched/debug.h> 2017-03-02 08:42:34 +01:00
spinlock_debug.c locking/spinlock/debug: Remove spinlock lockup detection code 2017-02-10 09:09:49 +01:00
spinlock.c locking/spinlocks: Remove the unused spin_lock_bh_nested() API 2017-01-12 09:33:39 +01:00
test-ww_mutex.c locking/ww_mutex: Improve test to cover acquire context changes 2017-03-16 09:57:09 +01:00