Mirror of https://github.com/AuxXxilium/linux_dsm_epyc7002.git (synced 2024-12-05 09:46:43 +07:00)

commit e7d33bb5ea
The only actual current lockref user (dcache) uses zero reference counts even for perfectly live dentries, because it's a cache: there may not be any users, but that doesn't mean that we want to throw away the dentry.

At the same time, the dentry cache does have a notion of a truly "dead" dentry that we must not even increment the reference count of, because we have pruned it and it is not valid.

Currently that distinction is not visible in the lockref itself, and the dentry cache validation uses "lockref_get_or_lock()" to either get a new reference to a dentry that already had existing references (and thus cannot be dead), or get the dentry lock so that we can then verify the dentry and increment the reference count under the lock if that verification was successful.

That's all somewhat complicated.

This adds the concept of being "dead" to the lockref itself, by simply using a count that is negative. This allows a usage scenario where we can increment the refcount of a dentry without having to validate it, pushing the special "we killed it" case into the lockref code.

The dentry code itself doesn't actually use this yet, and it's probably too late in the merge window to do that code (the dentry_kill() code with its "should I decrement the count" logic really is pretty complex code), but let's introduce the concept at the lockref level now.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
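To make the intended usage pattern concrete, here is a minimal sketch of hypothetical caller code (not part of this commit; struct my_cached_obj and both helpers are illustrative names only, since, as noted above, the dentry code does not use this yet):

#include <linux/lockref.h>

struct my_cached_obj {			/* hypothetical cached object */
	struct lockref ref;
	/* ... payload ... */
};

/* Teardown side: with ref.lock held, poison the count so the object
 * can never be resurrected by a later reference-count increment. */
static void my_obj_kill(struct my_cached_obj *obj)
{
	spin_lock(&obj->ref.lock);
	lockref_mark_dead(&obj->ref);	/* makes the count negative */
	spin_unlock(&obj->ref.lock);
}

/* Lookup side: bump the refcount with no separate validation step;
 * the "we killed it" case is handled inside the lockref itself. */
static struct my_cached_obj *my_obj_grab(struct my_cached_obj *obj)
{
	if (!lockref_get_not_dead(&obj->ref))
		return NULL;		/* already pruned, do not touch */
	return obj;
}

The lookup side thus avoids the lockref_get_or_lock() dance described above: it never has to take the lock just to find out whether the object is still valid.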
40 lines
1.1 KiB
C
#ifndef __LINUX_LOCKREF_H
#define __LINUX_LOCKREF_H

/*
 * Locked reference counts.
 *
 * These are different from just plain atomic refcounts in that they
 * are atomic with respect to the spinlock that goes with them.  In
 * particular, there can be implementations that don't actually get
 * the spinlock for the common decrement/increment operations, but they
 * still have to check that the operation is done semantically as if
 * the spinlock had been taken (using a cmpxchg operation that covers
 * both the lock and the count word, or using memory transactions, for
 * example).
 */

#include <linux/spinlock.h>

struct lockref {
	union {
#ifdef CONFIG_CMPXCHG_LOCKREF
		aligned_u64 lock_count;		/* lock and count in one word */
#endif
		struct {
			spinlock_t lock;
			unsigned int count;
		};
	};
};

extern void lockref_get(struct lockref *);		/* unconditionally bump the count */
extern int lockref_get_not_zero(struct lockref *);	/* bump only if the count was non-zero */
extern int lockref_get_or_lock(struct lockref *);	/* bump, or return 0 with the lock held */
extern int lockref_put_or_lock(struct lockref *);	/* drop, or return 0 with the lock held */

extern void lockref_mark_dead(struct lockref *);	/* caller must hold the lock */
extern int lockref_get_not_dead(struct lockref *);	/* bump only if not marked dead */

#endif /* __LINUX_LOCKREF_H */
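The fast path that the comment block above describes lives in lib/lockref.c rather than in this header. As a simplified illustration only (assuming CONFIG_CMPXCHG_LOCKREF; the real code wraps this pattern in a retry-bounded CMPXCHG_LOOP macro, and try_cmpxchg64() here stands in for whatever cmpxchg primitive the kernel version at hand provides):

/* Sketch: bump the count without taking the spinlock, but only while
 * the lock half of the combined 64-bit word is observed unlocked, so
 * the update is semantically "as if" the lock had been taken. */
static int lockref_get_fast(struct lockref *lockref)
{
	struct lockref old, new;

	old.lock_count = READ_ONCE(lockref->lock_count);
	while (arch_spin_value_unlocked(old.lock.rlock.raw_lock)) {
		new.lock_count = old.lock_count;
		new.count++;
		/* On failure, try_cmpxchg64() refreshes old.lock_count. */
		if (try_cmpxchg64(&lockref->lock_count,
				  &old.lock_count, new.lock_count))
			return 1;	/* got a reference, lock untouched */
		cpu_relax();
	}
	return 0;	/* lock is held by someone: use the slow path */
}

Packing the lock and the count into one aligned_u64 is exactly what makes this single-cmpxchg update possible.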