linux_dsm_epyc7002/include/linux/lockref.h

#ifndef __LINUX_LOCKREF_H
#define __LINUX_LOCKREF_H
/*
 * Locked reference counts.
 *
 * These are different from just plain atomic refcounts in that they
 * are atomic with respect to the spinlock that goes with them. In
 * particular, there can be implementations that don't actually get
 * the spinlock for the common decrement/increment operations, but they
 * still have to check that the operation is done semantically as if
 * the spinlock had been taken (using a cmpxchg operation that covers
 * both the lock and the count word, or using memory transactions, for
 * example).
 */
#include <linux/spinlock.h>
struct lockref {
	union {
#ifdef CONFIG_CMPXCHG_LOCKREF
		aligned_u64 lock_count;
#endif
		struct {
			spinlock_t lock;
			unsigned int count;
		};
	};
};
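
/*
 * Illustrative sketch only, not part of the real interface: roughly what
 * the lockless fast path described in the comment above looks like.  It
 * snapshots the combined lock+count word, checks that the embedded
 * spinlock is unlocked, and then tries to cmpxchg() the whole 64-bit
 * word.  The real implementation is out of line (see lib/lockref.c) and
 * handles retries more carefully; the helper name below is made up.
 */
#ifdef CONFIG_CMPXCHG_LOCKREF
static inline int __lockref_example_get(struct lockref *lockref)
{
	struct lockref old, new;

	old.lock_count = READ_ONCE(lockref->lock_count);
	while (arch_spin_value_unlocked(old.lock.rlock.raw_lock)) {
		new.lock_count = old.lock_count;
		new.count++;
		/* Fails if either the lock or the count changed under us. */
		if (cmpxchg64(&lockref->lock_count, old.lock_count,
			      new.lock_count) == old.lock_count)
			return 1;	/* got the reference without the lock */
		old.lock_count = READ_ONCE(lockref->lock_count);
	}
	return 0;	/* lock is held: caller must fall back to spin_lock() */
}
#endif
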
extern void lockref_get(struct lockref *);
extern int lockref_get_not_zero(struct lockref *);
extern int lockref_get_or_lock(struct lockref *);
extern int lockref_put_or_lock(struct lockref *);
extern void lockref_mark_dead(struct lockref *);
extern int lockref_get_not_dead(struct lockref *);
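
/*
 * Usage sketch only: how a refcounted object might drive this API,
 * loosely modelled on the way the dcache uses lockrefs.  The
 * "struct lockref_example_obj" type and the helpers below are
 * hypothetical and not part of the kernel.
 */
struct lockref_example_obj {
	struct lockref ref;
	/* ... payload ... */
};

/* Lookup path: take a reference without touching ref.lock if possible. */
static inline struct lockref_example_obj *
lockref_example_obj_get(struct lockref_example_obj *obj)
{
	/* Fails once a teardown path has called lockref_mark_dead(). */
	if (!lockref_get_not_dead(&obj->ref))
		return NULL;
	return obj;
}

/* Release path: only the final put falls back to taking the spinlock. */
static inline void
lockref_example_obj_put(struct lockref_example_obj *obj,
			void (*free_fn)(struct lockref_example_obj *))
{
	if (lockref_put_or_lock(&obj->ref))
		return;		/* lockless decrement, other references remain */

	/* Final reference: ref.lock is now held, ours is still counted. */
	lockref_mark_dead(&obj->ref);	/* make concurrent lockless gets fail */
	spin_unlock(&obj->ref.lock);
	free_fn(obj);
}
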
#endif /* __LINUX_LOCKREF_H */