diff --git a/Documentation/locking/lockdep-design.rst b/Documentation/locking/lockdep-design.rst index 23fcbc4d3fc0..cec03bd1294a 100644 --- a/Documentation/locking/lockdep-design.rst +++ b/Documentation/locking/lockdep-design.rst @@ -392,3 +392,261 @@ Run the command and save the output, then compare against the output from a
 later run of this command to identify the leakers. This same output can also
 help you find situations where runtime lock initialization has been omitted.
+
+Recursive read locks:
+---------------------
+The rest of this document tries to prove that a certain type of cycle is
+equivalent to deadlock possibility.
+
+There are three types of lockers: writers (i.e. exclusive lockers, like
+spin_lock() or write_lock()), non-recursive readers (i.e. shared lockers, like
+down_read()) and recursive readers (recursive shared lockers, like rcu_read_lock()).
+We use the following notations for these lockers in the rest of this document:
+
+    W or E: stands for writers (exclusive lockers).
+    r:      stands for non-recursive readers.
+    R:      stands for recursive readers.
+    S:      stands for all readers (non-recursive + recursive), as both are shared lockers.
+    N:      stands for writers and non-recursive readers, as both are not recursive.
+
+Obviously, N is "r or W" and S is "r or R".
+
+Recursive readers, as their name indicates, are the lockers allowed to acquire
+even inside the critical section of another reader of the same lock instance;
+in other words, they allow nested read-side critical sections of one lock
+instance.
+
+Non-recursive readers, on the other hand, will cause a self deadlock if they
+try to acquire inside the critical section of another reader of the same lock
+instance.
+
+The difference between recursive readers and non-recursive readers is that:
+recursive readers get blocked only by a write lock *holder*, while non-recursive
+readers could get blocked by a write lock *waiter*. Consider the following example:
+
+    TASK A:                 TASK B:
+
+    read_lock(X);
+                            write_lock(X);
+    read_lock_2(X);
+
+Task A gets the reader (no matter whether recursive or non-recursive) on X via
+read_lock() first. When task B tries to acquire the writer on X, it will block
+and become a waiter for the writer on X. Now if read_lock_2() is a recursive
+reader, task A will make progress, because writer waiters don't block recursive
+readers, and there is no deadlock. However, if read_lock_2() is a non-recursive
+reader, it will get blocked by the writer waiter B, causing a self deadlock.
+
+Block conditions on readers/writers of the same lock instance:
+--------------------------------------------------------------
+There are simply four block conditions:
+
+1. Writers block other writers.
+2. Readers block writers.
+3. Writers block both recursive readers and non-recursive readers.
+4. Readers (recursive or not) don't block other recursive readers but
+   may block non-recursive readers (because of the potential co-existing
+   writer waiters).
+
+Block condition matrix, Y means the row blocks the column, and N means otherwise.
+
+        | E | r | R |
+    +---+---+---+---+
+      E | Y | Y | Y |
+    +---+---+---+---+
+      r | Y | Y | N |
+    +---+---+---+---+
+      R | Y | Y | N |
+    +---+---+---+---+
+
+    (E: writers, r: non-recursive readers, R: recursive readers)
+
+Recursive read locks, as their name indicates, are the read locks that can be
+acquired recursively.
+Unlike non-recursive read locks, recursive read locks only get blocked by
+current write lock *holders*, not by write lock *waiters*, for example:
+
+    TASK A:                 TASK B:
+
+    read_lock(X);
+
+                            write_lock(X);
+
+    read_lock(X);
+
+is not a deadlock for recursive read locks, as while task B is waiting for the
+lock X, the second read_lock() doesn't need to wait because it's a recursive
+read lock. However, if the read_lock() is a non-recursive read lock, then the
+above case is a deadlock, because even though the write_lock() in TASK B cannot
+get the lock, it can still block the second read_lock() in TASK A.
+
+Note that a lock can be a write lock (exclusive lock), a non-recursive read
+lock (non-recursive shared lock) or a recursive read lock (recursive shared
+lock), depending on the lock operations used to acquire it (more specifically,
+the value of the 'read' parameter for lock_acquire()). In other words, a single
+lock instance has three types of acquisition depending on the acquisition
+functions: exclusive, non-recursive read, and recursive read.
+
+To be concise, we call write locks and non-recursive read locks
+"non-recursive" locks, and recursive read locks "recursive" locks.
+
+Recursive locks don't block each other, while non-recursive locks do (this is
+even true for two non-recursive read locks). A non-recursive lock can block the
+corresponding recursive lock, and vice versa.
+
+A deadlock case with recursive locks involved is as follows:
+
+    TASK A:                 TASK B:
+
+    read_lock(X);
+                            read_lock(Y);
+    write_lock(Y);
+                            write_lock(X);
+
+Task A is waiting for task B to read_unlock() Y and task B is waiting for task
+A to read_unlock() X.
+
+Dependency types and strong dependency paths:
+---------------------------------------------
+Lock dependencies record the orders of the acquisitions of a pair of locks, and
+because there are 3 types of lockers, there are, in theory, 9 types of lock
+dependencies, but we can show that 4 types of lock dependencies are enough for
+deadlock detection.
+
+For each lock dependency:
+
+    L1 -> L2
+
+, which means lockdep has seen L1 held before L2 in the same context at runtime.
+In deadlock detection, we care whether we could get blocked on L2 with L1 held,
+IOW, whether there is a locker L3 such that L1 blocks L3 and L2 gets blocked by
+L3. So we only care about 1) what L1 blocks and 2) what blocks L2. As a result,
+we can combine recursive readers and non-recursive readers for L1 (as they
+block the same types) and we can combine writers and non-recursive readers for
+L2 (as they get blocked by the same types).
+
+With the above combination for simplification, there are 4 types of dependency
+edges in the lockdep graph:
+
+1) -(ER)->: exclusive writer to recursive reader dependency, "X -(ER)-> Y" means
+   X -> Y and X is a writer and Y is a recursive reader.
+
+2) -(EN)->: exclusive writer to non-recursive locker dependency, "X -(EN)-> Y" means
+   X -> Y and X is a writer and Y is either a writer or a non-recursive reader.
+
+3) -(SR)->: shared reader to recursive reader dependency, "X -(SR)-> Y" means
+   X -> Y and X is a reader (recursive or not) and Y is a recursive reader.
+
+4) -(SN)->: shared reader to non-recursive locker dependency, "X -(SN)-> Y" means
+   X -> Y and X is a reader (recursive or not) and Y is either a writer or a
+   non-recursive reader.
+
+Note that given two locks, they may have multiple dependencies between them, for
+example:
+
+    TASK A:
+
+    read_lock(X);
+    write_lock(Y);
+    ...
+
+    TASK B:
+
+    write_lock(X);
+    write_lock(Y);
+
+, we have both X -(SN)-> Y and X -(EN)-> Y in the dependency graph.
+
+We use -(xN)-> to represent edges that are either -(EN)-> or -(SN)->, and
+similarly for -(Ex)->, -(xR)-> and -(Sx)->.
+
+A "path" is a series of conjunct dependency edges in the graph. We define a
+"strong" path, which indicates a strong dependency throughout the path, as a
+path that doesn't have two conjunct edges (dependencies) of the form -(xR)->
+and -(Sx)->. In other words, a "strong" path is a walk from one lock to another
+through the lock dependencies, and if X -> Y -> Z is in the path (where X, Y, Z
+are locks), and the walk from X to Y is through a -(SR)-> or -(ER)-> dependency,
+then the walk from Y to Z must not be through a -(SN)-> or -(SR)-> dependency.
+
+We will see why such a path is called "strong" in the next section.
+
+Recursive Read Deadlock Detection:
+----------------------------------
+
+We now prove two things:
+
+Lemma 1:
+
+If there is a closed strong path (i.e. a strong circle), then there is a
+combination of locking sequences that causes deadlock. I.e. a strong circle is
+sufficient for deadlock detection.
+
+Lemma 2:
+
+If there is no closed strong path (i.e. no strong circle), then there is no
+combination of locking sequences that could cause deadlock. I.e. strong
+circles are necessary for deadlock detection.
+
+With these two Lemmas, we can easily say that a closed strong path is both
+sufficient and necessary for deadlocks, therefore a closed strong path is
+equivalent to deadlock possibility. Because a closed strong path stands for a
+dependency chain that could cause deadlocks, we call it "strong", considering
+there are dependency circles that won't cause deadlocks.
+
+Proof for sufficiency (Lemma 1):
+
+Let's say we have a strong circle:
+
+    L1 -> L2 ... -> Ln -> L1
+
+, which means we have dependencies:
+
+    L1 -> L2
+    L2 -> L3
+    ...
+    Ln-1 -> Ln
+    Ln -> L1
+
+We can now construct a combination of locking sequences that causes deadlock:
+
+First, let one CPU/task get the L1 in L1 -> L2, then let another get the L2 in
+L2 -> L3, and so on. After this, all of the Lx in Lx -> Lx+1 are held by
+different CPU/tasks.
+
+Then, because we have L1 -> L2, the holder of L1 is going to acquire the L2 in
+L1 -> L2. However, L2 is already held by another CPU/task, and besides, L1 -> L2
+and L2 -> L3 are not an -(xR)-> and -(Sx)-> pair (the definition of strong),
+which means either the L2 in L1 -> L2 is a non-recursive locker (blocked by
+anyone) or the L2 in L2 -> L3 is a writer (blocking anyone). Therefore the
+holder of L1 cannot get L2; it has to wait for L2's holder to release.
+
+Moreover, we can draw a similar conclusion for L2's holder: it has to wait for
+L3's holder to release, and so on. We can now show that Lx's holder has to wait
+for Lx+1's holder to release, and note that Ln+1 is L1, so we have a circular
+waiting scenario where nobody can make progress, therefore a deadlock.
+
+Proof for necessity (Lemma 2):
+
+Lemma 2 is equivalent to: if there is a deadlock scenario, then there must be a
+strong circle in the dependency graph.
+
+According to Wikipedia[1], if there is a deadlock, then there must be a circular
+waiting scenario, which means there are N CPU/tasks, where CPU/task P1 is
+waiting for a lock held by P2, P2 is waiting for a lock held by P3, ... and Pn
+is waiting for a lock held by P1. Let's name the lock Px is waiting for Lx, so
+since P1 is waiting for L1 while holding Ln, we have Ln -> L1 in the dependency
+graph.
+Similarly, we have L1 -> L2, L2 -> L3, ..., Ln-1 -> Ln in the dependency graph,
+which means we have a circle:
+
+    Ln -> L1 -> L2 -> ... -> Ln
+
+, and now let's prove that the circle is strong:
+
+For a lock Lx, Px contributes the dependency Lx-1 -> Lx and Px+1 contributes
+the dependency Lx -> Lx+1. Since Px is waiting for Px+1 to release Lx, it's
+impossible that the Lx on Px+1 is a reader while the Lx on Px is a recursive
+reader, because readers (no matter recursive or not) don't block recursive
+readers. Therefore Lx-1 -> Lx and Lx -> Lx+1 cannot be a -(xR)-> and -(Sx)->
+pair, and this is true for any lock in the circle, therefore the circle is
+strong.
+
+References:
+-----------
+[1]: https://en.wikipedia.org/wiki/Deadlock
+[2]: Shibu, K. (2009). Intro To Embedded Systems (1st ed.). Tata McGraw-Hill
diff --git a/Documentation/locking/seqlock.rst b/Documentation/locking/seqlock.rst index 62c5ad98c11c..a334b584f2b3 100644 --- a/Documentation/locking/seqlock.rst +++ b/Documentation/locking/seqlock.rst @@ -139,6 +139,24 @@ with the associated LOCKTYPE lock acquired.
 
 Read path: same as in :ref:`seqcount_t`.
 
+
+.. _seqcount_latch_t:
+
+Latch sequence counters (``seqcount_latch_t``)
+----------------------------------------------
+
+Latch sequence counters are a multiversion concurrency control mechanism
+where the embedded seqcount_t counter even/odd value is used to switch
+between two copies of protected data. This allows the sequence counter
+read path to safely interrupt its own write side critical section.
+
+Use seqcount_latch_t when the write side sections cannot be protected
+from interruption by readers. This is typically the case when the read
+side can be invoked from NMI handlers.
+
+Check `raw_write_seqcount_latch()` for more information.
+
+
 ..
_seqlock_t: Sequential locks (``seqlock_t``) diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c index 49d925043171..f70dffc2771f 100644 --- a/arch/x86/kernel/tsc.c +++ b/arch/x86/kernel/tsc.c @@ -54,7 +54,7 @@ struct clocksource *art_related_clocksource; struct cyc2ns { struct cyc2ns_data data[2]; /* 0 + 2*16 = 32 */ - seqcount_t seq; /* 32 + 4 = 36 */ + seqcount_latch_t seq; /* 32 + 4 = 36 */ }; /* fits one cacheline */ @@ -73,14 +73,14 @@ __always_inline void cyc2ns_read_begin(struct cyc2ns_data *data) preempt_disable_notrace(); do { - seq = this_cpu_read(cyc2ns.seq.sequence); + seq = this_cpu_read(cyc2ns.seq.seqcount.sequence); idx = seq & 1; data->cyc2ns_offset = this_cpu_read(cyc2ns.data[idx].cyc2ns_offset); data->cyc2ns_mul = this_cpu_read(cyc2ns.data[idx].cyc2ns_mul); data->cyc2ns_shift = this_cpu_read(cyc2ns.data[idx].cyc2ns_shift); - } while (unlikely(seq != this_cpu_read(cyc2ns.seq.sequence))); + } while (unlikely(seq != this_cpu_read(cyc2ns.seq.seqcount.sequence))); } __always_inline void cyc2ns_read_end(void) @@ -186,7 +186,7 @@ static void __init cyc2ns_init_boot_cpu(void) { struct cyc2ns *c2n = this_cpu_ptr(&cyc2ns); - seqcount_init(&c2n->seq); + seqcount_latch_init(&c2n->seq); __set_cyc2ns_scale(tsc_khz, smp_processor_id(), rdtsc()); } @@ -203,7 +203,7 @@ static void __init cyc2ns_init_secondary_cpus(void) for_each_possible_cpu(cpu) { if (cpu != this_cpu) { - seqcount_init(&c2n->seq); + seqcount_latch_init(&c2n->seq); c2n = per_cpu_ptr(&cyc2ns, cpu); c2n->data[0] = data[0]; c2n->data[1] = data[1]; diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h index 379986e40159..cd223b68b69d 100644 --- a/include/asm-generic/atomic-instrumented.h +++ b/include/asm-generic/atomic-instrumented.h @@ -60,7 +60,7 @@ atomic_set_release(atomic_t *v, int i) static __always_inline void atomic_add(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic_add(i, v); } #define atomic_add atomic_add @@ -69,7 +69,7 @@ atomic_add(int i, atomic_t *v) static __always_inline int atomic_add_return(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_return(i, v); } #define atomic_add_return atomic_add_return @@ -79,7 +79,7 @@ atomic_add_return(int i, atomic_t *v) static __always_inline int atomic_add_return_acquire(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_return_acquire(i, v); } #define atomic_add_return_acquire atomic_add_return_acquire @@ -89,7 +89,7 @@ atomic_add_return_acquire(int i, atomic_t *v) static __always_inline int atomic_add_return_release(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_return_release(i, v); } #define atomic_add_return_release atomic_add_return_release @@ -99,7 +99,7 @@ atomic_add_return_release(int i, atomic_t *v) static __always_inline int atomic_add_return_relaxed(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_return_relaxed(i, v); } #define atomic_add_return_relaxed atomic_add_return_relaxed @@ -109,7 +109,7 @@ atomic_add_return_relaxed(int i, atomic_t *v) static __always_inline int atomic_fetch_add(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); 
return arch_atomic_fetch_add(i, v); } #define atomic_fetch_add atomic_fetch_add @@ -119,7 +119,7 @@ atomic_fetch_add(int i, atomic_t *v) static __always_inline int atomic_fetch_add_acquire(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add_acquire(i, v); } #define atomic_fetch_add_acquire atomic_fetch_add_acquire @@ -129,7 +129,7 @@ atomic_fetch_add_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_add_release(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add_release(i, v); } #define atomic_fetch_add_release atomic_fetch_add_release @@ -139,7 +139,7 @@ atomic_fetch_add_release(int i, atomic_t *v) static __always_inline int atomic_fetch_add_relaxed(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add_relaxed(i, v); } #define atomic_fetch_add_relaxed atomic_fetch_add_relaxed @@ -148,7 +148,7 @@ atomic_fetch_add_relaxed(int i, atomic_t *v) static __always_inline void atomic_sub(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic_sub(i, v); } #define atomic_sub atomic_sub @@ -157,7 +157,7 @@ atomic_sub(int i, atomic_t *v) static __always_inline int atomic_sub_return(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_return(i, v); } #define atomic_sub_return atomic_sub_return @@ -167,7 +167,7 @@ atomic_sub_return(int i, atomic_t *v) static __always_inline int atomic_sub_return_acquire(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_return_acquire(i, v); } #define atomic_sub_return_acquire atomic_sub_return_acquire @@ -177,7 +177,7 @@ atomic_sub_return_acquire(int i, atomic_t *v) static __always_inline int atomic_sub_return_release(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_return_release(i, v); } #define atomic_sub_return_release atomic_sub_return_release @@ -187,7 +187,7 @@ atomic_sub_return_release(int i, atomic_t *v) static __always_inline int atomic_sub_return_relaxed(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_return_relaxed(i, v); } #define atomic_sub_return_relaxed atomic_sub_return_relaxed @@ -197,7 +197,7 @@ atomic_sub_return_relaxed(int i, atomic_t *v) static __always_inline int atomic_fetch_sub(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_sub(i, v); } #define atomic_fetch_sub atomic_fetch_sub @@ -207,7 +207,7 @@ atomic_fetch_sub(int i, atomic_t *v) static __always_inline int atomic_fetch_sub_acquire(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_sub_acquire(i, v); } #define atomic_fetch_sub_acquire atomic_fetch_sub_acquire @@ -217,7 +217,7 @@ atomic_fetch_sub_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_sub_release(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_sub_release(i, v); } #define 
atomic_fetch_sub_release atomic_fetch_sub_release @@ -227,7 +227,7 @@ atomic_fetch_sub_release(int i, atomic_t *v) static __always_inline int atomic_fetch_sub_relaxed(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_sub_relaxed(i, v); } #define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed @@ -237,7 +237,7 @@ atomic_fetch_sub_relaxed(int i, atomic_t *v) static __always_inline void atomic_inc(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic_inc(v); } #define atomic_inc atomic_inc @@ -247,7 +247,7 @@ atomic_inc(atomic_t *v) static __always_inline int atomic_inc_return(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_return(v); } #define atomic_inc_return atomic_inc_return @@ -257,7 +257,7 @@ atomic_inc_return(atomic_t *v) static __always_inline int atomic_inc_return_acquire(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_return_acquire(v); } #define atomic_inc_return_acquire atomic_inc_return_acquire @@ -267,7 +267,7 @@ atomic_inc_return_acquire(atomic_t *v) static __always_inline int atomic_inc_return_release(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_return_release(v); } #define atomic_inc_return_release atomic_inc_return_release @@ -277,7 +277,7 @@ atomic_inc_return_release(atomic_t *v) static __always_inline int atomic_inc_return_relaxed(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_return_relaxed(v); } #define atomic_inc_return_relaxed atomic_inc_return_relaxed @@ -287,7 +287,7 @@ atomic_inc_return_relaxed(atomic_t *v) static __always_inline int atomic_fetch_inc(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_inc(v); } #define atomic_fetch_inc atomic_fetch_inc @@ -297,7 +297,7 @@ atomic_fetch_inc(atomic_t *v) static __always_inline int atomic_fetch_inc_acquire(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_inc_acquire(v); } #define atomic_fetch_inc_acquire atomic_fetch_inc_acquire @@ -307,7 +307,7 @@ atomic_fetch_inc_acquire(atomic_t *v) static __always_inline int atomic_fetch_inc_release(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_inc_release(v); } #define atomic_fetch_inc_release atomic_fetch_inc_release @@ -317,7 +317,7 @@ atomic_fetch_inc_release(atomic_t *v) static __always_inline int atomic_fetch_inc_relaxed(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_inc_relaxed(v); } #define atomic_fetch_inc_relaxed atomic_fetch_inc_relaxed @@ -327,7 +327,7 @@ atomic_fetch_inc_relaxed(atomic_t *v) static __always_inline void atomic_dec(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic_dec(v); } #define atomic_dec atomic_dec @@ -337,7 +337,7 @@ atomic_dec(atomic_t *v) static __always_inline int atomic_dec_return(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return 
arch_atomic_dec_return(v); } #define atomic_dec_return atomic_dec_return @@ -347,7 +347,7 @@ atomic_dec_return(atomic_t *v) static __always_inline int atomic_dec_return_acquire(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_return_acquire(v); } #define atomic_dec_return_acquire atomic_dec_return_acquire @@ -357,7 +357,7 @@ atomic_dec_return_acquire(atomic_t *v) static __always_inline int atomic_dec_return_release(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_return_release(v); } #define atomic_dec_return_release atomic_dec_return_release @@ -367,7 +367,7 @@ atomic_dec_return_release(atomic_t *v) static __always_inline int atomic_dec_return_relaxed(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_return_relaxed(v); } #define atomic_dec_return_relaxed atomic_dec_return_relaxed @@ -377,7 +377,7 @@ atomic_dec_return_relaxed(atomic_t *v) static __always_inline int atomic_fetch_dec(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_dec(v); } #define atomic_fetch_dec atomic_fetch_dec @@ -387,7 +387,7 @@ atomic_fetch_dec(atomic_t *v) static __always_inline int atomic_fetch_dec_acquire(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_dec_acquire(v); } #define atomic_fetch_dec_acquire atomic_fetch_dec_acquire @@ -397,7 +397,7 @@ atomic_fetch_dec_acquire(atomic_t *v) static __always_inline int atomic_fetch_dec_release(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_dec_release(v); } #define atomic_fetch_dec_release atomic_fetch_dec_release @@ -407,7 +407,7 @@ atomic_fetch_dec_release(atomic_t *v) static __always_inline int atomic_fetch_dec_relaxed(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_dec_relaxed(v); } #define atomic_fetch_dec_relaxed atomic_fetch_dec_relaxed @@ -416,7 +416,7 @@ atomic_fetch_dec_relaxed(atomic_t *v) static __always_inline void atomic_and(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic_and(i, v); } #define atomic_and atomic_and @@ -425,7 +425,7 @@ atomic_and(int i, atomic_t *v) static __always_inline int atomic_fetch_and(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_and(i, v); } #define atomic_fetch_and atomic_fetch_and @@ -435,7 +435,7 @@ atomic_fetch_and(int i, atomic_t *v) static __always_inline int atomic_fetch_and_acquire(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_and_acquire(i, v); } #define atomic_fetch_and_acquire atomic_fetch_and_acquire @@ -445,7 +445,7 @@ atomic_fetch_and_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_and_release(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_and_release(i, v); } #define atomic_fetch_and_release atomic_fetch_and_release @@ -455,7 +455,7 @@ atomic_fetch_and_release(int i, atomic_t *v) static __always_inline int 
atomic_fetch_and_relaxed(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_and_relaxed(i, v); } #define atomic_fetch_and_relaxed atomic_fetch_and_relaxed @@ -465,7 +465,7 @@ atomic_fetch_and_relaxed(int i, atomic_t *v) static __always_inline void atomic_andnot(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic_andnot(i, v); } #define atomic_andnot atomic_andnot @@ -475,7 +475,7 @@ atomic_andnot(int i, atomic_t *v) static __always_inline int atomic_fetch_andnot(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_andnot(i, v); } #define atomic_fetch_andnot atomic_fetch_andnot @@ -485,7 +485,7 @@ atomic_fetch_andnot(int i, atomic_t *v) static __always_inline int atomic_fetch_andnot_acquire(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_andnot_acquire(i, v); } #define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire @@ -495,7 +495,7 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_andnot_release(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_andnot_release(i, v); } #define atomic_fetch_andnot_release atomic_fetch_andnot_release @@ -505,7 +505,7 @@ atomic_fetch_andnot_release(int i, atomic_t *v) static __always_inline int atomic_fetch_andnot_relaxed(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_andnot_relaxed(i, v); } #define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed @@ -514,7 +514,7 @@ atomic_fetch_andnot_relaxed(int i, atomic_t *v) static __always_inline void atomic_or(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic_or(i, v); } #define atomic_or atomic_or @@ -523,7 +523,7 @@ atomic_or(int i, atomic_t *v) static __always_inline int atomic_fetch_or(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_or(i, v); } #define atomic_fetch_or atomic_fetch_or @@ -533,7 +533,7 @@ atomic_fetch_or(int i, atomic_t *v) static __always_inline int atomic_fetch_or_acquire(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_or_acquire(i, v); } #define atomic_fetch_or_acquire atomic_fetch_or_acquire @@ -543,7 +543,7 @@ atomic_fetch_or_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_or_release(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_or_release(i, v); } #define atomic_fetch_or_release atomic_fetch_or_release @@ -553,7 +553,7 @@ atomic_fetch_or_release(int i, atomic_t *v) static __always_inline int atomic_fetch_or_relaxed(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_or_relaxed(i, v); } #define atomic_fetch_or_relaxed atomic_fetch_or_relaxed @@ -562,7 +562,7 @@ atomic_fetch_or_relaxed(int i, atomic_t *v) static __always_inline void atomic_xor(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + 
instrument_atomic_read_write(v, sizeof(*v)); arch_atomic_xor(i, v); } #define atomic_xor atomic_xor @@ -571,7 +571,7 @@ atomic_xor(int i, atomic_t *v) static __always_inline int atomic_fetch_xor(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_xor(i, v); } #define atomic_fetch_xor atomic_fetch_xor @@ -581,7 +581,7 @@ atomic_fetch_xor(int i, atomic_t *v) static __always_inline int atomic_fetch_xor_acquire(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_xor_acquire(i, v); } #define atomic_fetch_xor_acquire atomic_fetch_xor_acquire @@ -591,7 +591,7 @@ atomic_fetch_xor_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_xor_release(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_xor_release(i, v); } #define atomic_fetch_xor_release atomic_fetch_xor_release @@ -601,7 +601,7 @@ atomic_fetch_xor_release(int i, atomic_t *v) static __always_inline int atomic_fetch_xor_relaxed(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_xor_relaxed(i, v); } #define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed @@ -611,7 +611,7 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v) static __always_inline int atomic_xchg(atomic_t *v, int i) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_xchg(v, i); } #define atomic_xchg atomic_xchg @@ -621,7 +621,7 @@ atomic_xchg(atomic_t *v, int i) static __always_inline int atomic_xchg_acquire(atomic_t *v, int i) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_xchg_acquire(v, i); } #define atomic_xchg_acquire atomic_xchg_acquire @@ -631,7 +631,7 @@ atomic_xchg_acquire(atomic_t *v, int i) static __always_inline int atomic_xchg_release(atomic_t *v, int i) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_xchg_release(v, i); } #define atomic_xchg_release atomic_xchg_release @@ -641,7 +641,7 @@ atomic_xchg_release(atomic_t *v, int i) static __always_inline int atomic_xchg_relaxed(atomic_t *v, int i) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_xchg_relaxed(v, i); } #define atomic_xchg_relaxed atomic_xchg_relaxed @@ -651,7 +651,7 @@ atomic_xchg_relaxed(atomic_t *v, int i) static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_cmpxchg(v, old, new); } #define atomic_cmpxchg atomic_cmpxchg @@ -661,7 +661,7 @@ atomic_cmpxchg(atomic_t *v, int old, int new) static __always_inline int atomic_cmpxchg_acquire(atomic_t *v, int old, int new) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_cmpxchg_acquire(v, old, new); } #define atomic_cmpxchg_acquire atomic_cmpxchg_acquire @@ -671,7 +671,7 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new) static __always_inline int atomic_cmpxchg_release(atomic_t *v, int old, int new) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_cmpxchg_release(v, old, new); } #define atomic_cmpxchg_release 
atomic_cmpxchg_release @@ -681,7 +681,7 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new) static __always_inline int atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_cmpxchg_relaxed(v, old, new); } #define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed @@ -691,8 +691,8 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new) { - instrument_atomic_write(v, sizeof(*v)); - instrument_atomic_write(old, sizeof(*old)); + instrument_atomic_read_write(v, sizeof(*v)); + instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_try_cmpxchg(v, old, new); } #define atomic_try_cmpxchg atomic_try_cmpxchg @@ -702,8 +702,8 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new) static __always_inline bool atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) { - instrument_atomic_write(v, sizeof(*v)); - instrument_atomic_write(old, sizeof(*old)); + instrument_atomic_read_write(v, sizeof(*v)); + instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_try_cmpxchg_acquire(v, old, new); } #define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire @@ -713,8 +713,8 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) static __always_inline bool atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) { - instrument_atomic_write(v, sizeof(*v)); - instrument_atomic_write(old, sizeof(*old)); + instrument_atomic_read_write(v, sizeof(*v)); + instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_try_cmpxchg_release(v, old, new); } #define atomic_try_cmpxchg_release atomic_try_cmpxchg_release @@ -724,8 +724,8 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) static __always_inline bool atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) { - instrument_atomic_write(v, sizeof(*v)); - instrument_atomic_write(old, sizeof(*old)); + instrument_atomic_read_write(v, sizeof(*v)); + instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_try_cmpxchg_relaxed(v, old, new); } #define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg_relaxed @@ -735,7 +735,7 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) static __always_inline bool atomic_sub_and_test(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_and_test(i, v); } #define atomic_sub_and_test atomic_sub_and_test @@ -745,7 +745,7 @@ atomic_sub_and_test(int i, atomic_t *v) static __always_inline bool atomic_dec_and_test(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_and_test(v); } #define atomic_dec_and_test atomic_dec_and_test @@ -755,7 +755,7 @@ atomic_dec_and_test(atomic_t *v) static __always_inline bool atomic_inc_and_test(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_and_test(v); } #define atomic_inc_and_test atomic_inc_and_test @@ -765,7 +765,7 @@ atomic_inc_and_test(atomic_t *v) static __always_inline bool atomic_add_negative(int i, atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_negative(i, v); } #define atomic_add_negative atomic_add_negative @@ -775,7 +775,7 @@ atomic_add_negative(int i, atomic_t *v) static __always_inline int atomic_fetch_add_unless(atomic_t *v, 
int a, int u) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add_unless(v, a, u); } #define atomic_fetch_add_unless atomic_fetch_add_unless @@ -785,7 +785,7 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u) static __always_inline bool atomic_add_unless(atomic_t *v, int a, int u) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_unless(v, a, u); } #define atomic_add_unless atomic_add_unless @@ -795,7 +795,7 @@ atomic_add_unless(atomic_t *v, int a, int u) static __always_inline bool atomic_inc_not_zero(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_not_zero(v); } #define atomic_inc_not_zero atomic_inc_not_zero @@ -805,7 +805,7 @@ atomic_inc_not_zero(atomic_t *v) static __always_inline bool atomic_inc_unless_negative(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_unless_negative(v); } #define atomic_inc_unless_negative atomic_inc_unless_negative @@ -815,7 +815,7 @@ atomic_inc_unless_negative(atomic_t *v) static __always_inline bool atomic_dec_unless_positive(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_unless_positive(v); } #define atomic_dec_unless_positive atomic_dec_unless_positive @@ -825,7 +825,7 @@ atomic_dec_unless_positive(atomic_t *v) static __always_inline int atomic_dec_if_positive(atomic_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_if_positive(v); } #define atomic_dec_if_positive atomic_dec_if_positive @@ -870,7 +870,7 @@ atomic64_set_release(atomic64_t *v, s64 i) static __always_inline void atomic64_add(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic64_add(i, v); } #define atomic64_add atomic64_add @@ -879,7 +879,7 @@ atomic64_add(s64 i, atomic64_t *v) static __always_inline s64 atomic64_add_return(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_return(i, v); } #define atomic64_add_return atomic64_add_return @@ -889,7 +889,7 @@ atomic64_add_return(s64 i, atomic64_t *v) static __always_inline s64 atomic64_add_return_acquire(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_return_acquire(i, v); } #define atomic64_add_return_acquire atomic64_add_return_acquire @@ -899,7 +899,7 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_add_return_release(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_return_release(i, v); } #define atomic64_add_return_release atomic64_add_return_release @@ -909,7 +909,7 @@ atomic64_add_return_release(s64 i, atomic64_t *v) static __always_inline s64 atomic64_add_return_relaxed(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_return_relaxed(i, v); } #define atomic64_add_return_relaxed atomic64_add_return_relaxed @@ -919,7 +919,7 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add(s64 i, atomic64_t *v) { 
- instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add(i, v); } #define atomic64_fetch_add atomic64_fetch_add @@ -929,7 +929,7 @@ atomic64_fetch_add(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add_acquire(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add_acquire(i, v); } #define atomic64_fetch_add_acquire atomic64_fetch_add_acquire @@ -939,7 +939,7 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add_release(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add_release(i, v); } #define atomic64_fetch_add_release atomic64_fetch_add_release @@ -949,7 +949,7 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add_relaxed(i, v); } #define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed @@ -958,7 +958,7 @@ atomic64_fetch_add_relaxed(s64 i, atomic64_t *v) static __always_inline void atomic64_sub(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic64_sub(i, v); } #define atomic64_sub atomic64_sub @@ -967,7 +967,7 @@ atomic64_sub(s64 i, atomic64_t *v) static __always_inline s64 atomic64_sub_return(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_return(i, v); } #define atomic64_sub_return atomic64_sub_return @@ -977,7 +977,7 @@ atomic64_sub_return(s64 i, atomic64_t *v) static __always_inline s64 atomic64_sub_return_acquire(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_return_acquire(i, v); } #define atomic64_sub_return_acquire atomic64_sub_return_acquire @@ -987,7 +987,7 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_sub_return_release(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_return_release(i, v); } #define atomic64_sub_return_release atomic64_sub_return_release @@ -997,7 +997,7 @@ atomic64_sub_return_release(s64 i, atomic64_t *v) static __always_inline s64 atomic64_sub_return_relaxed(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_return_relaxed(i, v); } #define atomic64_sub_return_relaxed atomic64_sub_return_relaxed @@ -1007,7 +1007,7 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_sub(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_sub(i, v); } #define atomic64_fetch_sub atomic64_fetch_sub @@ -1017,7 +1017,7 @@ atomic64_fetch_sub(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_sub_acquire(i, v); } #define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire @@ -1027,7 +1027,7 @@ atomic64_fetch_sub_acquire(s64 i, 
atomic64_t *v) static __always_inline s64 atomic64_fetch_sub_release(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_sub_release(i, v); } #define atomic64_fetch_sub_release atomic64_fetch_sub_release @@ -1037,7 +1037,7 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_sub_relaxed(i, v); } #define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed @@ -1047,7 +1047,7 @@ atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v) static __always_inline void atomic64_inc(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic64_inc(v); } #define atomic64_inc atomic64_inc @@ -1057,7 +1057,7 @@ atomic64_inc(atomic64_t *v) static __always_inline s64 atomic64_inc_return(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_return(v); } #define atomic64_inc_return atomic64_inc_return @@ -1067,7 +1067,7 @@ atomic64_inc_return(atomic64_t *v) static __always_inline s64 atomic64_inc_return_acquire(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_return_acquire(v); } #define atomic64_inc_return_acquire atomic64_inc_return_acquire @@ -1077,7 +1077,7 @@ atomic64_inc_return_acquire(atomic64_t *v) static __always_inline s64 atomic64_inc_return_release(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_return_release(v); } #define atomic64_inc_return_release atomic64_inc_return_release @@ -1087,7 +1087,7 @@ atomic64_inc_return_release(atomic64_t *v) static __always_inline s64 atomic64_inc_return_relaxed(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_return_relaxed(v); } #define atomic64_inc_return_relaxed atomic64_inc_return_relaxed @@ -1097,7 +1097,7 @@ atomic64_inc_return_relaxed(atomic64_t *v) static __always_inline s64 atomic64_fetch_inc(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_inc(v); } #define atomic64_fetch_inc atomic64_fetch_inc @@ -1107,7 +1107,7 @@ atomic64_fetch_inc(atomic64_t *v) static __always_inline s64 atomic64_fetch_inc_acquire(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_inc_acquire(v); } #define atomic64_fetch_inc_acquire atomic64_fetch_inc_acquire @@ -1117,7 +1117,7 @@ atomic64_fetch_inc_acquire(atomic64_t *v) static __always_inline s64 atomic64_fetch_inc_release(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_inc_release(v); } #define atomic64_fetch_inc_release atomic64_fetch_inc_release @@ -1127,7 +1127,7 @@ atomic64_fetch_inc_release(atomic64_t *v) static __always_inline s64 atomic64_fetch_inc_relaxed(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_inc_relaxed(v); } #define atomic64_fetch_inc_relaxed atomic64_fetch_inc_relaxed @@ -1137,7 +1137,7 @@ atomic64_fetch_inc_relaxed(atomic64_t *v) 
static __always_inline void atomic64_dec(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic64_dec(v); } #define atomic64_dec atomic64_dec @@ -1147,7 +1147,7 @@ atomic64_dec(atomic64_t *v) static __always_inline s64 atomic64_dec_return(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_return(v); } #define atomic64_dec_return atomic64_dec_return @@ -1157,7 +1157,7 @@ atomic64_dec_return(atomic64_t *v) static __always_inline s64 atomic64_dec_return_acquire(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_return_acquire(v); } #define atomic64_dec_return_acquire atomic64_dec_return_acquire @@ -1167,7 +1167,7 @@ atomic64_dec_return_acquire(atomic64_t *v) static __always_inline s64 atomic64_dec_return_release(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_return_release(v); } #define atomic64_dec_return_release atomic64_dec_return_release @@ -1177,7 +1177,7 @@ atomic64_dec_return_release(atomic64_t *v) static __always_inline s64 atomic64_dec_return_relaxed(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_return_relaxed(v); } #define atomic64_dec_return_relaxed atomic64_dec_return_relaxed @@ -1187,7 +1187,7 @@ atomic64_dec_return_relaxed(atomic64_t *v) static __always_inline s64 atomic64_fetch_dec(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_dec(v); } #define atomic64_fetch_dec atomic64_fetch_dec @@ -1197,7 +1197,7 @@ atomic64_fetch_dec(atomic64_t *v) static __always_inline s64 atomic64_fetch_dec_acquire(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_dec_acquire(v); } #define atomic64_fetch_dec_acquire atomic64_fetch_dec_acquire @@ -1207,7 +1207,7 @@ atomic64_fetch_dec_acquire(atomic64_t *v) static __always_inline s64 atomic64_fetch_dec_release(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_dec_release(v); } #define atomic64_fetch_dec_release atomic64_fetch_dec_release @@ -1217,7 +1217,7 @@ atomic64_fetch_dec_release(atomic64_t *v) static __always_inline s64 atomic64_fetch_dec_relaxed(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_dec_relaxed(v); } #define atomic64_fetch_dec_relaxed atomic64_fetch_dec_relaxed @@ -1226,7 +1226,7 @@ atomic64_fetch_dec_relaxed(atomic64_t *v) static __always_inline void atomic64_and(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic64_and(i, v); } #define atomic64_and atomic64_and @@ -1235,7 +1235,7 @@ atomic64_and(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_and(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_and(i, v); } #define atomic64_fetch_and atomic64_fetch_and @@ -1245,7 +1245,7 @@ atomic64_fetch_and(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_and_acquire(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + 
instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_and_acquire(i, v); } #define atomic64_fetch_and_acquire atomic64_fetch_and_acquire @@ -1255,7 +1255,7 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_and_release(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_and_release(i, v); } #define atomic64_fetch_and_release atomic64_fetch_and_release @@ -1265,7 +1265,7 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_and_relaxed(i, v); } #define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed @@ -1275,7 +1275,7 @@ atomic64_fetch_and_relaxed(s64 i, atomic64_t *v) static __always_inline void atomic64_andnot(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic64_andnot(i, v); } #define atomic64_andnot atomic64_andnot @@ -1285,7 +1285,7 @@ atomic64_andnot(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_andnot(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_andnot(i, v); } #define atomic64_fetch_andnot atomic64_fetch_andnot @@ -1295,7 +1295,7 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_andnot_acquire(i, v); } #define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire @@ -1305,7 +1305,7 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_andnot_release(i, v); } #define atomic64_fetch_andnot_release atomic64_fetch_andnot_release @@ -1315,7 +1315,7 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_andnot_relaxed(i, v); } #define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed @@ -1324,7 +1324,7 @@ atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v) static __always_inline void atomic64_or(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic64_or(i, v); } #define atomic64_or atomic64_or @@ -1333,7 +1333,7 @@ atomic64_or(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_or(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_or(i, v); } #define atomic64_fetch_or atomic64_fetch_or @@ -1343,7 +1343,7 @@ atomic64_fetch_or(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_or_acquire(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_or_acquire(i, v); } #define atomic64_fetch_or_acquire atomic64_fetch_or_acquire @@ -1353,7 +1353,7 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v) static 
__always_inline s64 atomic64_fetch_or_release(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_or_release(i, v); } #define atomic64_fetch_or_release atomic64_fetch_or_release @@ -1363,7 +1363,7 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_or_relaxed(i, v); } #define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed @@ -1372,7 +1372,7 @@ atomic64_fetch_or_relaxed(s64 i, atomic64_t *v) static __always_inline void atomic64_xor(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); arch_atomic64_xor(i, v); } #define atomic64_xor atomic64_xor @@ -1381,7 +1381,7 @@ atomic64_xor(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_xor(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_xor(i, v); } #define atomic64_fetch_xor atomic64_fetch_xor @@ -1391,7 +1391,7 @@ atomic64_fetch_xor(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_xor_acquire(i, v); } #define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire @@ -1401,7 +1401,7 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_xor_release(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_xor_release(i, v); } #define atomic64_fetch_xor_release atomic64_fetch_xor_release @@ -1411,7 +1411,7 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_xor_relaxed(i, v); } #define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed @@ -1421,7 +1421,7 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) static __always_inline s64 atomic64_xchg(atomic64_t *v, s64 i) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_xchg(v, i); } #define atomic64_xchg atomic64_xchg @@ -1431,7 +1431,7 @@ atomic64_xchg(atomic64_t *v, s64 i) static __always_inline s64 atomic64_xchg_acquire(atomic64_t *v, s64 i) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_xchg_acquire(v, i); } #define atomic64_xchg_acquire atomic64_xchg_acquire @@ -1441,7 +1441,7 @@ atomic64_xchg_acquire(atomic64_t *v, s64 i) static __always_inline s64 atomic64_xchg_release(atomic64_t *v, s64 i) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_xchg_release(v, i); } #define atomic64_xchg_release atomic64_xchg_release @@ -1451,7 +1451,7 @@ atomic64_xchg_release(atomic64_t *v, s64 i) static __always_inline s64 atomic64_xchg_relaxed(atomic64_t *v, s64 i) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_xchg_relaxed(v, i); } #define atomic64_xchg_relaxed atomic64_xchg_relaxed @@ -1461,7 +1461,7 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 i) 
static __always_inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_cmpxchg(v, old, new); } #define atomic64_cmpxchg atomic64_cmpxchg @@ -1471,7 +1471,7 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) static __always_inline s64 atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_cmpxchg_acquire(v, old, new); } #define atomic64_cmpxchg_acquire atomic64_cmpxchg_acquire @@ -1481,7 +1481,7 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) static __always_inline s64 atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_cmpxchg_release(v, old, new); } #define atomic64_cmpxchg_release atomic64_cmpxchg_release @@ -1491,7 +1491,7 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) static __always_inline s64 atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_cmpxchg_relaxed(v, old, new); } #define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed @@ -1501,8 +1501,8 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { - instrument_atomic_write(v, sizeof(*v)); - instrument_atomic_write(old, sizeof(*old)); + instrument_atomic_read_write(v, sizeof(*v)); + instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic64_try_cmpxchg(v, old, new); } #define atomic64_try_cmpxchg atomic64_try_cmpxchg @@ -1512,8 +1512,8 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) static __always_inline bool atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) { - instrument_atomic_write(v, sizeof(*v)); - instrument_atomic_write(old, sizeof(*old)); + instrument_atomic_read_write(v, sizeof(*v)); + instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic64_try_cmpxchg_acquire(v, old, new); } #define atomic64_try_cmpxchg_acquire atomic64_try_cmpxchg_acquire @@ -1523,8 +1523,8 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) static __always_inline bool atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) { - instrument_atomic_write(v, sizeof(*v)); - instrument_atomic_write(old, sizeof(*old)); + instrument_atomic_read_write(v, sizeof(*v)); + instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic64_try_cmpxchg_release(v, old, new); } #define atomic64_try_cmpxchg_release atomic64_try_cmpxchg_release @@ -1534,8 +1534,8 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) static __always_inline bool atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) { - instrument_atomic_write(v, sizeof(*v)); - instrument_atomic_write(old, sizeof(*old)); + instrument_atomic_read_write(v, sizeof(*v)); + instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic64_try_cmpxchg_relaxed(v, old, new); } #define atomic64_try_cmpxchg_relaxed atomic64_try_cmpxchg_relaxed @@ -1545,7 +1545,7 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) static __always_inline bool atomic64_sub_and_test(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_and_test(i, v); } #define 
atomic64_sub_and_test atomic64_sub_and_test @@ -1555,7 +1555,7 @@ atomic64_sub_and_test(s64 i, atomic64_t *v) static __always_inline bool atomic64_dec_and_test(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_and_test(v); } #define atomic64_dec_and_test atomic64_dec_and_test @@ -1565,7 +1565,7 @@ atomic64_dec_and_test(atomic64_t *v) static __always_inline bool atomic64_inc_and_test(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_and_test(v); } #define atomic64_inc_and_test atomic64_inc_and_test @@ -1575,7 +1575,7 @@ atomic64_inc_and_test(atomic64_t *v) static __always_inline bool atomic64_add_negative(s64 i, atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_negative(i, v); } #define atomic64_add_negative atomic64_add_negative @@ -1585,7 +1585,7 @@ atomic64_add_negative(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add_unless(v, a, u); } #define atomic64_fetch_add_unless atomic64_fetch_add_unless @@ -1595,7 +1595,7 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_unless(v, a, u); } #define atomic64_add_unless atomic64_add_unless @@ -1605,7 +1605,7 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u) static __always_inline bool atomic64_inc_not_zero(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_not_zero(v); } #define atomic64_inc_not_zero atomic64_inc_not_zero @@ -1615,7 +1615,7 @@ atomic64_inc_not_zero(atomic64_t *v) static __always_inline bool atomic64_inc_unless_negative(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_unless_negative(v); } #define atomic64_inc_unless_negative atomic64_inc_unless_negative @@ -1625,7 +1625,7 @@ atomic64_inc_unless_negative(atomic64_t *v) static __always_inline bool atomic64_dec_unless_positive(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_unless_positive(v); } #define atomic64_dec_unless_positive atomic64_dec_unless_positive @@ -1635,7 +1635,7 @@ atomic64_dec_unless_positive(atomic64_t *v) static __always_inline s64 atomic64_dec_if_positive(atomic64_t *v) { - instrument_atomic_write(v, sizeof(*v)); + instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_if_positive(v); } #define atomic64_dec_if_positive atomic64_dec_if_positive @@ -1786,4 +1786,4 @@ atomic64_dec_if_positive(atomic64_t *v) }) #endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_H */ -// 89bf97f3a7509b740845e51ddf31055b48a81f40 +// 9d5e6a315fb1335d02f0ccd3655a91c3dafcc63e diff --git a/include/asm-generic/bitops/instrumented-atomic.h b/include/asm-generic/bitops/instrumented-atomic.h index fb2cb33a4013..81915dcd4b4e 100644 --- a/include/asm-generic/bitops/instrumented-atomic.h +++ b/include/asm-generic/bitops/instrumented-atomic.h @@ -67,7 +67,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr) */ static 
inline bool test_and_set_bit(long nr, volatile unsigned long *addr) { - instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long)); + instrument_atomic_read_write(addr + BIT_WORD(nr), sizeof(long)); return arch_test_and_set_bit(nr, addr); } @@ -80,7 +80,7 @@ static inline bool test_and_set_bit(long nr, volatile unsigned long *addr) */ static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr) { - instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long)); + instrument_atomic_read_write(addr + BIT_WORD(nr), sizeof(long)); return arch_test_and_clear_bit(nr, addr); } @@ -93,7 +93,7 @@ static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr) */ static inline bool test_and_change_bit(long nr, volatile unsigned long *addr) { - instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long)); + instrument_atomic_read_write(addr + BIT_WORD(nr), sizeof(long)); return arch_test_and_change_bit(nr, addr); } diff --git a/include/asm-generic/bitops/instrumented-lock.h b/include/asm-generic/bitops/instrumented-lock.h index b9bec468ae03..75ef606f7145 100644 --- a/include/asm-generic/bitops/instrumented-lock.h +++ b/include/asm-generic/bitops/instrumented-lock.h @@ -52,7 +52,7 @@ static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr) */ static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr) { - instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long)); + instrument_atomic_read_write(addr + BIT_WORD(nr), sizeof(long)); return arch_test_and_set_bit_lock(nr, addr); } diff --git a/include/asm-generic/bitops/instrumented-non-atomic.h b/include/asm-generic/bitops/instrumented-non-atomic.h index 20f788a25ef9..37363d570b9b 100644 --- a/include/asm-generic/bitops/instrumented-non-atomic.h +++ b/include/asm-generic/bitops/instrumented-non-atomic.h @@ -58,6 +58,30 @@ static inline void __change_bit(long nr, volatile unsigned long *addr) arch___change_bit(nr, addr); } +static inline void __instrument_read_write_bitop(long nr, volatile unsigned long *addr) +{ + if (IS_ENABLED(CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC)) { + /* + * We treat non-atomic read-write bitops a little more special. + * Given the operations here only modify a single bit, assuming + * non-atomicity of the writer is sufficient may be reasonable + * for certain usage (and follows the permissible nature of the + * assume-plain-writes-atomic rule): + * 1. report read-modify-write races -> check read; + * 2. do not report races with marked readers, but do report + * races with unmarked readers -> check "atomic" write. + */ + kcsan_check_read(addr + BIT_WORD(nr), sizeof(long)); + /* + * Use generic write instrumentation, in case other sanitizers + * or tools are enabled alongside KCSAN. 
+ */ + instrument_write(addr + BIT_WORD(nr), sizeof(long)); + } else { + instrument_read_write(addr + BIT_WORD(nr), sizeof(long)); + } +} + /** * __test_and_set_bit - Set a bit and return its old value * @nr: Bit to set @@ -68,7 +92,7 @@ static inline void __change_bit(long nr, volatile unsigned long *addr) */ static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr) { - instrument_write(addr + BIT_WORD(nr), sizeof(long)); + __instrument_read_write_bitop(nr, addr); return arch___test_and_set_bit(nr, addr); } @@ -82,7 +106,7 @@ static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr) */ static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr) { - instrument_write(addr + BIT_WORD(nr), sizeof(long)); + __instrument_read_write_bitop(nr, addr); return arch___test_and_clear_bit(nr, addr); } @@ -96,7 +120,7 @@ static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr) */ static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr) { - instrument_write(addr + BIT_WORD(nr), sizeof(long)); + __instrument_read_write_bitop(nr, addr); return arch___test_and_change_bit(nr, addr); } diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h index 43e6ea591975..42faebbaa202 100644 --- a/include/linux/instrumented.h +++ b/include/linux/instrumented.h @@ -42,6 +42,21 @@ static __always_inline void instrument_write(const volatile void *v, size_t size kcsan_check_write(v, size); } +/** + * instrument_read_write - instrument regular read-write access + * + * Instrument a regular write access. The instrumentation should be inserted + * before the actual write happens. + * + * @ptr address of access + * @size size of access + */ +static __always_inline void instrument_read_write(const volatile void *v, size_t size) +{ + kasan_check_write(v, size); + kcsan_check_read_write(v, size); +} + /** * instrument_atomic_read - instrument atomic read access * @@ -72,6 +87,21 @@ static __always_inline void instrument_atomic_write(const volatile void *v, size kcsan_check_atomic_write(v, size); } +/** + * instrument_atomic_read_write - instrument atomic read-write access + * + * Instrument an atomic read-write access. The instrumentation should be + * inserted before the actual write happens. + * + * @ptr address of access + * @size size of access + */ +static __always_inline void instrument_atomic_read_write(const volatile void *v, size_t size) +{ + kasan_check_write(v, size); + kcsan_check_atomic_read_write(v, size); +} + /** * instrument_copy_to_user - instrument reads of copy_to_user * diff --git a/include/linux/kcsan-checks.h b/include/linux/kcsan-checks.h index c5f6c1dcf7e3..cf14840609ce 100644 --- a/include/linux/kcsan-checks.h +++ b/include/linux/kcsan-checks.h @@ -7,19 +7,13 @@ #include #include -/* - * ACCESS TYPE MODIFIERS - * - * : normal read access; - * WRITE : write access; - * ATOMIC: access is atomic; - * ASSERT: access is not a regular access, but an assertion; - * SCOPED: access is a scoped access; - */ -#define KCSAN_ACCESS_WRITE 0x1 -#define KCSAN_ACCESS_ATOMIC 0x2 -#define KCSAN_ACCESS_ASSERT 0x4 -#define KCSAN_ACCESS_SCOPED 0x8 +/* Access types -- if KCSAN_ACCESS_WRITE is not set, the access is a read. */ +#define KCSAN_ACCESS_WRITE (1 << 0) /* Access is a write. */ +#define KCSAN_ACCESS_COMPOUND (1 << 1) /* Compounded read-write instrumentation. */ +#define KCSAN_ACCESS_ATOMIC (1 << 2) /* Access is atomic. */ +/* The following are special, and never due to compiler instrumentation. 
*/ +#define KCSAN_ACCESS_ASSERT (1 << 3) /* Access is an assertion. */ +#define KCSAN_ACCESS_SCOPED (1 << 4) /* Access is a scoped access. */ /* * __kcsan_*: Always calls into the runtime when KCSAN is enabled. This may be used @@ -204,6 +198,15 @@ static inline void __kcsan_disable_current(void) { } #define __kcsan_check_write(ptr, size) \ __kcsan_check_access(ptr, size, KCSAN_ACCESS_WRITE) +/** + * __kcsan_check_read_write - check regular read-write access for races + * + * @ptr: address of access + * @size: size of access + */ +#define __kcsan_check_read_write(ptr, size) \ + __kcsan_check_access(ptr, size, KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE) + /** * kcsan_check_read - check regular read access for races * @@ -221,18 +224,30 @@ static inline void __kcsan_disable_current(void) { } #define kcsan_check_write(ptr, size) \ kcsan_check_access(ptr, size, KCSAN_ACCESS_WRITE) +/** + * kcsan_check_read_write - check regular read-write access for races + * + * @ptr: address of access + * @size: size of access + */ +#define kcsan_check_read_write(ptr, size) \ + kcsan_check_access(ptr, size, KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE) + /* * Check for atomic accesses: if atomic accesses are not ignored, this simply * aliases to kcsan_check_access(), otherwise becomes a no-op. */ #ifdef CONFIG_KCSAN_IGNORE_ATOMICS -#define kcsan_check_atomic_read(...) do { } while (0) -#define kcsan_check_atomic_write(...) do { } while (0) +#define kcsan_check_atomic_read(...) do { } while (0) +#define kcsan_check_atomic_write(...) do { } while (0) +#define kcsan_check_atomic_read_write(...) do { } while (0) #else #define kcsan_check_atomic_read(ptr, size) \ kcsan_check_access(ptr, size, KCSAN_ACCESS_ATOMIC) #define kcsan_check_atomic_write(ptr, size) \ kcsan_check_access(ptr, size, KCSAN_ACCESS_ATOMIC | KCSAN_ACCESS_WRITE) +#define kcsan_check_atomic_read_write(ptr, size) \ + kcsan_check_access(ptr, size, KCSAN_ACCESS_ATOMIC | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_COMPOUND) #endif /** diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h index 6a584b3e5c74..f5594879175a 100644 --- a/include/linux/lockdep.h +++ b/include/linux/lockdep.h @@ -54,7 +54,11 @@ struct lock_list { struct lock_class *class; struct lock_class *links_to; const struct lock_trace *trace; - int distance; + u16 distance; + /* bitmap of different dependencies from head to this */ + u8 dep; + /* used by BFS to record whether "prev -> this" only has -(*R)-> */ + u8 only_xr; /* * The parent field is used to implement breadth-first search, and the @@ -469,6 +473,20 @@ static inline void print_irqtrace_events(struct task_struct *curr) } #endif +/* Variable used to make lockdep treat read_lock() as recursive in selftests */ +#ifdef CONFIG_DEBUG_LOCKING_API_SELFTESTS +extern unsigned int force_read_lock_recursive; +#else /* CONFIG_DEBUG_LOCKING_API_SELFTESTS */ +#define force_read_lock_recursive 0 +#endif /* CONFIG_DEBUG_LOCKING_API_SELFTESTS */ + +#ifdef CONFIG_LOCKDEP +extern bool read_lock_is_recursive(void); +#else /* CONFIG_LOCKDEP */ +/* If !LOCKDEP, the value is meaningless */ +#define read_lock_is_recursive() 0 +#endif + /* * For trivial one-depth nesting of a lock-class, the following * global define can be used. 
(Subsystems with multiple levels @@ -490,7 +508,14 @@ static inline void print_irqtrace_events(struct task_struct *curr) #define spin_release(l, i) lock_release(l, i) #define rwlock_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) -#define rwlock_acquire_read(l, s, t, i) lock_acquire_shared_recursive(l, s, t, NULL, i) +#define rwlock_acquire_read(l, s, t, i) \ +do { \ + if (read_lock_is_recursive()) \ + lock_acquire_shared_recursive(l, s, t, NULL, i); \ + else \ + lock_acquire_shared(l, s, t, NULL, i); \ +} while (0) + #define rwlock_release(l, i) lock_release(l, i) #define seqcount_acquire(l, s, t, i) lock_acquire_exclusive(l, s, t, NULL, i) @@ -512,19 +537,19 @@ static inline void print_irqtrace_events(struct task_struct *curr) #define lock_map_release(l) lock_release(l, _THIS_IP_) #ifdef CONFIG_PROVE_LOCKING -# define might_lock(lock) \ +# define might_lock(lock) \ do { \ typecheck(struct lockdep_map *, &(lock)->dep_map); \ lock_acquire(&(lock)->dep_map, 0, 0, 0, 1, NULL, _THIS_IP_); \ lock_release(&(lock)->dep_map, _THIS_IP_); \ } while (0) -# define might_lock_read(lock) \ +# define might_lock_read(lock) \ do { \ typecheck(struct lockdep_map *, &(lock)->dep_map); \ lock_acquire(&(lock)->dep_map, 0, 0, 1, 1, NULL, _THIS_IP_); \ lock_release(&(lock)->dep_map, _THIS_IP_); \ } while (0) -# define might_lock_nested(lock, subclass) \ +# define might_lock_nested(lock, subclass) \ do { \ typecheck(struct lockdep_map *, &(lock)->dep_map); \ lock_acquire(&(lock)->dep_map, subclass, 0, 1, 1, NULL, \ @@ -534,44 +559,39 @@ do { \ DECLARE_PER_CPU(int, hardirqs_enabled); DECLARE_PER_CPU(int, hardirq_context); +DECLARE_PER_CPU(unsigned int, lockdep_recursion); -/* - * The below lockdep_assert_*() macros use raw_cpu_read() to access the above - * per-cpu variables. This is required because this_cpu_read() will potentially - * call into preempt/irq-disable and that obviously isn't right. This is also - * correct because when IRQs are enabled, it doesn't matter if we accidentally - * read the value from our previous CPU. 
- */ +#define __lockdep_enabled (debug_locks && !this_cpu_read(lockdep_recursion)) #define lockdep_assert_irqs_enabled() \ do { \ - WARN_ON_ONCE(debug_locks && !raw_cpu_read(hardirqs_enabled)); \ + WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirqs_enabled)); \ } while (0) #define lockdep_assert_irqs_disabled() \ do { \ - WARN_ON_ONCE(debug_locks && raw_cpu_read(hardirqs_enabled)); \ + WARN_ON_ONCE(__lockdep_enabled && this_cpu_read(hardirqs_enabled)); \ } while (0) #define lockdep_assert_in_irq() \ do { \ - WARN_ON_ONCE(debug_locks && !raw_cpu_read(hardirq_context)); \ + WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirq_context)); \ } while (0) #define lockdep_assert_preemption_enabled() \ do { \ WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT) && \ - debug_locks && \ + __lockdep_enabled && \ (preempt_count() != 0 || \ - !raw_cpu_read(hardirqs_enabled))); \ + !this_cpu_read(hardirqs_enabled))); \ } while (0) #define lockdep_assert_preemption_disabled() \ do { \ WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT) && \ - debug_locks && \ + __lockdep_enabled && \ (preempt_count() == 0 && \ - raw_cpu_read(hardirqs_enabled))); \ + this_cpu_read(hardirqs_enabled))); \ } while (0) #else diff --git a/include/linux/lockdep_types.h b/include/linux/lockdep_types.h index bb35b449f533..9a1fd49df17f 100644 --- a/include/linux/lockdep_types.h +++ b/include/linux/lockdep_types.h @@ -35,8 +35,12 @@ enum lockdep_wait_type { /* * We'd rather not expose kernel/lockdep_states.h this wide, but we do need * the total number of states... :-( + * + * XXX_LOCK_USAGE_STATES is the number of lines in lockdep_states.h, for each + * of those we generates 4 states, Additionally we report on USED and USED_READ. */ -#define XXX_LOCK_USAGE_STATES (1+2*4) +#define XXX_LOCK_USAGE_STATES 2 +#define LOCK_TRACE_STATES (XXX_LOCK_USAGE_STATES*4 + 2) /* * NR_LOCKDEP_CACHING_CLASSES ... 
Number of classes @@ -106,7 +110,7 @@ struct lock_class { * IRQ/softirq usage tracking bits: */ unsigned long usage_mask; - const struct lock_trace *usage_traces[XXX_LOCK_USAGE_STATES]; + const struct lock_trace *usage_traces[LOCK_TRACE_STATES]; /* * Generation counter, when doing certain classes of graph walking, diff --git a/include/linux/rbtree_latch.h b/include/linux/rbtree_latch.h index 7d012faa509a..3d1a9e716b80 100644 --- a/include/linux/rbtree_latch.h +++ b/include/linux/rbtree_latch.h @@ -42,8 +42,8 @@ struct latch_tree_node { }; struct latch_tree_root { - seqcount_t seq; - struct rb_root tree[2]; + seqcount_latch_t seq; + struct rb_root tree[2]; }; /** @@ -206,7 +206,7 @@ latch_tree_find(void *key, struct latch_tree_root *root, do { seq = raw_read_seqcount_latch(&root->seq); node = __lt_find(key, root, seq & 1, ops->comp); - } while (read_seqcount_retry(&root->seq, seq)); + } while (read_seqcount_latch_retry(&root->seq, seq)); return node; } diff --git a/include/linux/refcount.h b/include/linux/refcount.h index 0e3ee25eb156..7fabb1af18e0 100644 --- a/include/linux/refcount.h +++ b/include/linux/refcount.h @@ -165,7 +165,7 @@ static inline unsigned int refcount_read(const refcount_t *r) * * Return: false if the passed refcount is 0, true otherwise */ -static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r) +static inline __must_check bool __refcount_add_not_zero(int i, refcount_t *r, int *oldp) { int old = refcount_read(r); @@ -174,12 +174,20 @@ static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r) break; } while (!atomic_try_cmpxchg_relaxed(&r->refs, &old, old + i)); + if (oldp) + *oldp = old; + if (unlikely(old < 0 || old + i < 0)) refcount_warn_saturate(r, REFCOUNT_ADD_NOT_ZERO_OVF); return old; } +static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r) +{ + return __refcount_add_not_zero(i, r, NULL); +} + /** * refcount_add - add a value to a refcount * @i: the value to add to the refcount @@ -196,16 +204,24 @@ static inline __must_check bool refcount_add_not_zero(int i, refcount_t *r) * cases, refcount_inc(), or one of its variants, should instead be used to * increment a reference count. */ -static inline void refcount_add(int i, refcount_t *r) +static inline void __refcount_add(int i, refcount_t *r, int *oldp) { int old = atomic_fetch_add_relaxed(i, &r->refs); + if (oldp) + *oldp = old; + if (unlikely(!old)) refcount_warn_saturate(r, REFCOUNT_ADD_UAF); else if (unlikely(old < 0 || old + i < 0)) refcount_warn_saturate(r, REFCOUNT_ADD_OVF); } +static inline void refcount_add(int i, refcount_t *r) +{ + __refcount_add(i, r, NULL); +} + /** * refcount_inc_not_zero - increment a refcount unless it is 0 * @r: the refcount to increment @@ -219,9 +235,14 @@ static inline void refcount_add(int i, refcount_t *r) * * Return: true if the increment was successful, false otherwise */ +static inline __must_check bool __refcount_inc_not_zero(refcount_t *r, int *oldp) +{ + return __refcount_add_not_zero(1, r, oldp); +} + static inline __must_check bool refcount_inc_not_zero(refcount_t *r) { - return refcount_add_not_zero(1, r); + return __refcount_inc_not_zero(r, NULL); } /** @@ -236,9 +257,14 @@ static inline __must_check bool refcount_inc_not_zero(refcount_t *r) * Will WARN if the refcount is 0, as this represents a possible use-after-free * condition. 
*/ +static inline void __refcount_inc(refcount_t *r, int *oldp) +{ + __refcount_add(1, r, oldp); +} + static inline void refcount_inc(refcount_t *r) { - refcount_add(1, r); + __refcount_inc(r, NULL); } /** @@ -261,10 +287,13 @@ static inline void refcount_inc(refcount_t *r) * * Return: true if the resulting refcount is 0, false otherwise */ -static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r) +static inline __must_check bool __refcount_sub_and_test(int i, refcount_t *r, int *oldp) { int old = atomic_fetch_sub_release(i, &r->refs); + if (oldp) + *oldp = old; + if (old == i) { smp_acquire__after_ctrl_dep(); return true; @@ -276,6 +305,11 @@ static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r) return false; } +static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r) +{ + return __refcount_sub_and_test(i, r, NULL); +} + /** * refcount_dec_and_test - decrement a refcount and test if it is 0 * @r: the refcount @@ -289,9 +323,14 @@ static inline __must_check bool refcount_sub_and_test(int i, refcount_t *r) * * Return: true if the resulting refcount is 0, false otherwise */ +static inline __must_check bool __refcount_dec_and_test(refcount_t *r, int *oldp) +{ + return __refcount_sub_and_test(1, r, oldp); +} + static inline __must_check bool refcount_dec_and_test(refcount_t *r) { - return refcount_sub_and_test(1, r); + return __refcount_dec_and_test(r, NULL); } /** @@ -304,10 +343,20 @@ static inline __must_check bool refcount_dec_and_test(refcount_t *r) * Provides release memory ordering, such that prior loads and stores are done * before. */ +static inline void __refcount_dec(refcount_t *r, int *oldp) +{ + int old = atomic_fetch_sub_release(1, &r->refs); + + if (oldp) + *oldp = old; + + if (unlikely(old <= 1)) + refcount_warn_saturate(r, REFCOUNT_DEC_LEAK); +} + static inline void refcount_dec(refcount_t *r) { - if (unlikely(atomic_fetch_sub_release(1, &r->refs) <= 1)) - refcount_warn_saturate(r, REFCOUNT_DEC_LEAK); + __refcount_dec(r, NULL); } extern __must_check bool refcount_dec_if_one(refcount_t *r); diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h index 962d9768945f..ac5b07f558b0 100644 --- a/include/linux/seqlock.h +++ b/include/linux/seqlock.h @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -53,7 +54,7 @@ * * If the write serialization mechanism is one of the common kernel * locking primitives, use a sequence counter with associated lock - * (seqcount_LOCKTYPE_t) instead. + * (seqcount_LOCKNAME_t) instead. * * If it's desired to automatically handle the sequence counter writer * serialization and non-preemptibility requirements, use a sequential @@ -117,7 +118,7 @@ static inline void seqcount_lockdep_reader_access(const seqcount_t *s) #define SEQCNT_ZERO(name) { .sequence = 0, SEQCOUNT_DEP_MAP_INIT(name) } /* - * Sequence counters with associated locks (seqcount_LOCKTYPE_t) + * Sequence counters with associated locks (seqcount_LOCKNAME_t) * * A sequence counter which associates the lock used for writer * serialization at initialization time. This enables lockdep to validate @@ -131,63 +132,117 @@ static inline void seqcount_lockdep_reader_access(const seqcount_t *s) * See Documentation/locking/seqlock.rst */ -#ifdef CONFIG_LOCKDEP +/* + * For PREEMPT_RT, seqcount_LOCKNAME_t write side critical sections cannot + * disable preemption. It can lead to higher latencies, and the write side + * sections will not be able to acquire locks which become sleeping locks + * (e.g. spinlock_t). 
+ * + * To remain preemptible while avoiding a possible livelock caused by the + * reader preempting the writer, use a different technique: let the reader + * detect if a seqcount_LOCKNAME_t writer is in progress. If that is the + * case, acquire then release the associated LOCKNAME writer serialization + * lock. This will allow any possibly-preempted writer to make progress + * until the end of its writer serialization lock critical section. + * + * This lock-unlock technique must be implemented for all of PREEMPT_RT + * sleeping locks. See Documentation/locking/locktypes.rst + */ +#if defined(CONFIG_LOCKDEP) || defined(CONFIG_PREEMPT_RT) #define __SEQ_LOCK(expr) expr #else #define __SEQ_LOCK(expr) #endif /** - * typedef seqcount_LOCKNAME_t - sequence counter with LOCKTYPR associated + * typedef seqcount_LOCKNAME_t - sequence counter with LOCKNAME associated * @seqcount: The real sequence counter - * @lock: Pointer to the associated spinlock + * @lock: Pointer to the associated lock * - * A plain sequence counter with external writer synchronization by a - * spinlock. The spinlock is associated to the sequence count in the + * A plain sequence counter with external writer synchronization by + * LOCKNAME @lock. The lock is associated to the sequence counter in the * static initializer or init function. This enables lockdep to validate * that the write side critical section is properly serialized. - */ - -/** - * seqcount_LOCKNAME_init() - runtime initializer for seqcount_LOCKNAME_t - * @s: Pointer to the seqcount_LOCKNAME_t instance - * @lock: Pointer to the associated LOCKTYPE + * + * LOCKNAME: raw_spinlock, spinlock, rwlock, mutex, or ww_mutex. */ /* - * SEQCOUNT_LOCKTYPE() - Instantiate seqcount_LOCKNAME_t and helpers - * @locktype: actual typename - * @lockname: name + * seqcount_LOCKNAME_init() - runtime initializer for seqcount_LOCKNAME_t + * @s: Pointer to the seqcount_LOCKNAME_t instance + * @lock: Pointer to the associated lock + */ + +#define seqcount_LOCKNAME_init(s, _lock, lockname) \ + do { \ + seqcount_##lockname##_t *____s = (s); \ + seqcount_init(&____s->seqcount); \ + __SEQ_LOCK(____s->lock = (_lock)); \ + } while (0) + +#define seqcount_raw_spinlock_init(s, lock) seqcount_LOCKNAME_init(s, lock, raw_spinlock) +#define seqcount_spinlock_init(s, lock) seqcount_LOCKNAME_init(s, lock, spinlock) +#define seqcount_rwlock_init(s, lock) seqcount_LOCKNAME_init(s, lock, rwlock); +#define seqcount_mutex_init(s, lock) seqcount_LOCKNAME_init(s, lock, mutex); +#define seqcount_ww_mutex_init(s, lock) seqcount_LOCKNAME_init(s, lock, ww_mutex); + +/* + * SEQCOUNT_LOCKNAME() - Instantiate seqcount_LOCKNAME_t and helpers + * seqprop_LOCKNAME_*() - Property accessors for seqcount_LOCKNAME_t + * + * @lockname: "LOCKNAME" part of seqcount_LOCKNAME_t + * @locktype: LOCKNAME canonical C data type * @preemptible: preemptibility of above locktype * @lockmember: argument for lockdep_assert_held() + * @lockbase: associated lock release function (prefix only) + * @lock_acquire: associated lock acquisition function (full call) */ -#define SEQCOUNT_LOCKTYPE(locktype, lockname, preemptible, lockmember) \ +#define SEQCOUNT_LOCKNAME(lockname, locktype, preemptible, lockmember, lockbase, lock_acquire) \ typedef struct seqcount_##lockname { \ seqcount_t seqcount; \ __SEQ_LOCK(locktype *lock); \ } seqcount_##lockname##_t; \ \ -static __always_inline void \ -seqcount_##lockname##_init(seqcount_##lockname##_t *s, locktype *lock) \ -{ \ - seqcount_init(&s->seqcount); \ - __SEQ_LOCK(s->lock = lock); \ -} \ - \ 
static __always_inline seqcount_t * \ -__seqcount_##lockname##_ptr(seqcount_##lockname##_t *s) \ +__seqprop_##lockname##_ptr(seqcount_##lockname##_t *s) \ { \ return &s->seqcount; \ } \ \ -static __always_inline bool \ -__seqcount_##lockname##_preemptible(seqcount_##lockname##_t *s) \ +static __always_inline unsigned \ +__seqprop_##lockname##_sequence(const seqcount_##lockname##_t *s) \ { \ - return preemptible; \ + unsigned seq = READ_ONCE(s->seqcount.sequence); \ + \ + if (!IS_ENABLED(CONFIG_PREEMPT_RT)) \ + return seq; \ + \ + if (preemptible && unlikely(seq & 1)) { \ + __SEQ_LOCK(lock_acquire); \ + __SEQ_LOCK(lockbase##_unlock(s->lock)); \ + \ + /* \ + * Re-read the sequence counter since the (possibly \ + * preempted) writer made progress. \ + */ \ + seq = READ_ONCE(s->seqcount.sequence); \ + } \ + \ + return seq; \ +} \ + \ +static __always_inline bool \ +__seqprop_##lockname##_preemptible(const seqcount_##lockname##_t *s) \ +{ \ + if (!IS_ENABLED(CONFIG_PREEMPT_RT)) \ + return preemptible; \ + \ + /* PREEMPT_RT relies on the above LOCK+UNLOCK */ \ + return false; \ } \ \ static __always_inline void \ -__seqcount_##lockname##_assert(seqcount_##lockname##_t *s) \ +__seqprop_##lockname##_assert(const seqcount_##lockname##_t *s) \ { \ __SEQ_LOCK(lockdep_assert_held(lockmember)); \ } @@ -196,50 +251,56 @@ __seqcount_##lockname##_assert(seqcount_##lockname##_t *s) \ * __seqprop() for seqcount_t */ -static inline seqcount_t *__seqcount_ptr(seqcount_t *s) +static inline seqcount_t *__seqprop_ptr(seqcount_t *s) { return s; } -static inline bool __seqcount_preemptible(seqcount_t *s) +static inline unsigned __seqprop_sequence(const seqcount_t *s) +{ + return READ_ONCE(s->sequence); +} + +static inline bool __seqprop_preemptible(const seqcount_t *s) { return false; } -static inline void __seqcount_assert(seqcount_t *s) +static inline void __seqprop_assert(const seqcount_t *s) { lockdep_assert_preemption_disabled(); } -SEQCOUNT_LOCKTYPE(raw_spinlock_t, raw_spinlock, false, s->lock) -SEQCOUNT_LOCKTYPE(spinlock_t, spinlock, false, s->lock) -SEQCOUNT_LOCKTYPE(rwlock_t, rwlock, false, s->lock) -SEQCOUNT_LOCKTYPE(struct mutex, mutex, true, s->lock) -SEQCOUNT_LOCKTYPE(struct ww_mutex, ww_mutex, true, &s->lock->base) +#define __SEQ_RT IS_ENABLED(CONFIG_PREEMPT_RT) -/** +SEQCOUNT_LOCKNAME(raw_spinlock, raw_spinlock_t, false, s->lock, raw_spin, raw_spin_lock(s->lock)) +SEQCOUNT_LOCKNAME(spinlock, spinlock_t, __SEQ_RT, s->lock, spin, spin_lock(s->lock)) +SEQCOUNT_LOCKNAME(rwlock, rwlock_t, __SEQ_RT, s->lock, read, read_lock(s->lock)) +SEQCOUNT_LOCKNAME(mutex, struct mutex, true, s->lock, mutex, mutex_lock(s->lock)) +SEQCOUNT_LOCKNAME(ww_mutex, struct ww_mutex, true, &s->lock->base, ww_mutex, ww_mutex_lock(s->lock, NULL)) + +/* * SEQCNT_LOCKNAME_ZERO - static initializer for seqcount_LOCKNAME_t * @name: Name of the seqcount_LOCKNAME_t instance - * @lock: Pointer to the associated LOCKTYPE + * @lock: Pointer to the associated LOCKNAME */ -#define SEQCOUNT_LOCKTYPE_ZERO(seq_name, assoc_lock) { \ +#define SEQCOUNT_LOCKNAME_ZERO(seq_name, assoc_lock) { \ .seqcount = SEQCNT_ZERO(seq_name.seqcount), \ __SEQ_LOCK(.lock = (assoc_lock)) \ } -#define SEQCNT_SPINLOCK_ZERO(name, lock) SEQCOUNT_LOCKTYPE_ZERO(name, lock) -#define SEQCNT_RAW_SPINLOCK_ZERO(name, lock) SEQCOUNT_LOCKTYPE_ZERO(name, lock) -#define SEQCNT_RWLOCK_ZERO(name, lock) SEQCOUNT_LOCKTYPE_ZERO(name, lock) -#define SEQCNT_MUTEX_ZERO(name, lock) SEQCOUNT_LOCKTYPE_ZERO(name, lock) -#define SEQCNT_WW_MUTEX_ZERO(name, lock) SEQCOUNT_LOCKTYPE_ZERO(name, 
lock) - +#define SEQCNT_RAW_SPINLOCK_ZERO(name, lock) SEQCOUNT_LOCKNAME_ZERO(name, lock) +#define SEQCNT_SPINLOCK_ZERO(name, lock) SEQCOUNT_LOCKNAME_ZERO(name, lock) +#define SEQCNT_RWLOCK_ZERO(name, lock) SEQCOUNT_LOCKNAME_ZERO(name, lock) +#define SEQCNT_MUTEX_ZERO(name, lock) SEQCOUNT_LOCKNAME_ZERO(name, lock) +#define SEQCNT_WW_MUTEX_ZERO(name, lock) SEQCOUNT_LOCKNAME_ZERO(name, lock) #define __seqprop_case(s, lockname, prop) \ - seqcount_##lockname##_t: __seqcount_##lockname##_##prop((void *)(s)) + seqcount_##lockname##_t: __seqprop_##lockname##_##prop((void *)(s)) #define __seqprop(s, prop) _Generic(*(s), \ - seqcount_t: __seqcount_##prop((void *)(s)), \ + seqcount_t: __seqprop_##prop((void *)(s)), \ __seqprop_case((s), raw_spinlock, prop), \ __seqprop_case((s), spinlock, prop), \ __seqprop_case((s), rwlock, prop), \ @@ -247,12 +308,13 @@ SEQCOUNT_LOCKTYPE(struct ww_mutex, ww_mutex, true, &s->lock->base) __seqprop_case((s), ww_mutex, prop)) #define __seqcount_ptr(s) __seqprop(s, ptr) +#define __seqcount_sequence(s) __seqprop(s, sequence) #define __seqcount_lock_preemptible(s) __seqprop(s, preemptible) #define __seqcount_assert_lock_held(s) __seqprop(s, assert) /** * __read_seqcount_begin() - begin a seqcount_t read section w/o barrier - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants * * __read_seqcount_begin is like read_seqcount_begin, but has no smp_rmb() * barrier. Callers should ensure that smp_rmb() or equivalent ordering is @@ -265,56 +327,45 @@ SEQCOUNT_LOCKTYPE(struct ww_mutex, ww_mutex, true, &s->lock->base) * Return: count to be passed to read_seqcount_retry() */ #define __read_seqcount_begin(s) \ - __read_seqcount_t_begin(__seqcount_ptr(s)) - -static inline unsigned __read_seqcount_t_begin(const seqcount_t *s) -{ - unsigned ret; - -repeat: - ret = READ_ONCE(s->sequence); - if (unlikely(ret & 1)) { - cpu_relax(); - goto repeat; - } - kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); - return ret; -} +({ \ + unsigned seq; \ + \ + while ((seq = __seqcount_sequence(s)) & 1) \ + cpu_relax(); \ + \ + kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); \ + seq; \ +}) /** * raw_read_seqcount_begin() - begin a seqcount_t read section w/o lockdep - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants * * Return: count to be passed to read_seqcount_retry() */ #define raw_read_seqcount_begin(s) \ - raw_read_seqcount_t_begin(__seqcount_ptr(s)) - -static inline unsigned raw_read_seqcount_t_begin(const seqcount_t *s) -{ - unsigned ret = __read_seqcount_t_begin(s); - smp_rmb(); - return ret; -} +({ \ + unsigned seq = __read_seqcount_begin(s); \ + \ + smp_rmb(); \ + seq; \ +}) /** * read_seqcount_begin() - begin a seqcount_t read critical section - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants * * Return: count to be passed to read_seqcount_retry() */ #define read_seqcount_begin(s) \ - read_seqcount_t_begin(__seqcount_ptr(s)) - -static inline unsigned read_seqcount_t_begin(const seqcount_t *s) -{ - seqcount_lockdep_reader_access(s); - return raw_read_seqcount_t_begin(s); -} +({ \ + seqcount_lockdep_reader_access(__seqcount_ptr(s)); \ + raw_read_seqcount_begin(s); \ +}) /** * raw_read_seqcount() - read the raw seqcount_t counter value - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer 
to seqcount_t or any of the seqcount_LOCKNAME_t variants * * raw_read_seqcount opens a read critical section of the given * seqcount_t, without any lockdep checking, and without checking or @@ -324,20 +375,18 @@ static inline unsigned read_seqcount_t_begin(const seqcount_t *s) * Return: count to be passed to read_seqcount_retry() */ #define raw_read_seqcount(s) \ - raw_read_seqcount_t(__seqcount_ptr(s)) - -static inline unsigned raw_read_seqcount_t(const seqcount_t *s) -{ - unsigned ret = READ_ONCE(s->sequence); - smp_rmb(); - kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); - return ret; -} +({ \ + unsigned seq = __seqcount_sequence(s); \ + \ + smp_rmb(); \ + kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX); \ + seq; \ +}) /** * raw_seqcount_begin() - begin a seqcount_t read critical section w/o * lockdep and w/o counter stabilization - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants * * raw_seqcount_begin opens a read critical section of the given * seqcount_t. Unlike read_seqcount_begin(), this function will not wait @@ -352,20 +401,17 @@ static inline unsigned raw_read_seqcount_t(const seqcount_t *s) * Return: count to be passed to read_seqcount_retry() */ #define raw_seqcount_begin(s) \ - raw_seqcount_t_begin(__seqcount_ptr(s)) - -static inline unsigned raw_seqcount_t_begin(const seqcount_t *s) -{ - /* - * If the counter is odd, let read_seqcount_retry() fail - * by decrementing the counter. - */ - return raw_read_seqcount_t(s) & ~1; -} +({ \ + /* \ + * If the counter is odd, let read_seqcount_retry() fail \ + * by decrementing the counter. \ + */ \ + raw_read_seqcount(s) & ~1; \ +}) /** * __read_seqcount_retry() - end a seqcount_t read section w/o barrier - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants * @start: count, from read_seqcount_begin() * * __read_seqcount_retry is like read_seqcount_retry, but has no smp_rmb() @@ -389,7 +435,7 @@ static inline int __read_seqcount_t_retry(const seqcount_t *s, unsigned start) /** * read_seqcount_retry() - end a seqcount_t read critical section - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants * @start: count, from read_seqcount_begin() * * read_seqcount_retry closes the read critical section of given @@ -409,7 +455,7 @@ static inline int read_seqcount_t_retry(const seqcount_t *s, unsigned start) /** * raw_write_seqcount_begin() - start a seqcount_t write section w/o lockdep - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants */ #define raw_write_seqcount_begin(s) \ do { \ @@ -428,7 +474,7 @@ static inline void raw_write_seqcount_t_begin(seqcount_t *s) /** * raw_write_seqcount_end() - end a seqcount_t write section w/o lockdep - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants */ #define raw_write_seqcount_end(s) \ do { \ @@ -448,7 +494,7 @@ static inline void raw_write_seqcount_t_end(seqcount_t *s) /** * write_seqcount_begin_nested() - start a seqcount_t write section with * custom lockdep nesting level - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants * @subclass: lockdep nesting level 
* * See Documentation/locking/lockdep-design.rst @@ -471,7 +517,7 @@ static inline void write_seqcount_t_begin_nested(seqcount_t *s, int subclass) /** * write_seqcount_begin() - start a seqcount_t write side critical section - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants * * write_seqcount_begin opens a write side critical section of the given * seqcount_t. @@ -497,7 +543,7 @@ static inline void write_seqcount_t_begin(seqcount_t *s) /** * write_seqcount_end() - end a seqcount_t write side critical section - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants * * The write section must've been opened with write_seqcount_begin(). */ @@ -517,7 +563,7 @@ static inline void write_seqcount_t_end(seqcount_t *s) /** * raw_write_seqcount_barrier() - do a seqcount_t write barrier - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants * * This can be used to provide an ordering guarantee instead of the usual * consistency guarantee. It is one wmb cheaper, because it can collapse @@ -571,7 +617,7 @@ static inline void raw_write_seqcount_t_barrier(seqcount_t *s) /** * write_seqcount_invalidate() - invalidate in-progress seqcount_t read * side operations - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * @s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants * * After write_seqcount_invalidate, no seqcount_t read side operations * will complete successfully and see data older than this. @@ -587,34 +633,73 @@ static inline void write_seqcount_t_invalidate(seqcount_t *s) kcsan_nestable_atomic_end(); } -/** - * raw_read_seqcount_latch() - pick even/odd seqcount_t latch data copy - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants +/* + * Latch sequence counters (seqcount_latch_t) * - * Use seqcount_t latching to switch between two storage places protected - * by a sequence counter. Doing so allows having interruptible, preemptible, - * seqcount_t write side critical sections. + * A sequence counter variant where the counter even/odd value is used to + * switch between two copies of protected data. This allows the read path, + * typically NMIs, to safely interrupt the write side critical section. * - * Check raw_write_seqcount_latch() for more details and a full reader and - * writer usage example. - * - * Return: sequence counter raw value. Use the lowest bit as an index for - * picking which data copy to read. The full counter value must then be - * checked with read_seqcount_retry(). + * As the write sections are fully preemptible, no special handling for + * PREEMPT_RT is needed. 
*/ -#define raw_read_seqcount_latch(s) \ - raw_read_seqcount_t_latch(__seqcount_ptr(s)) +typedef struct { + seqcount_t seqcount; +} seqcount_latch_t; -static inline int raw_read_seqcount_t_latch(seqcount_t *s) -{ - /* Pairs with the first smp_wmb() in raw_write_seqcount_latch() */ - int seq = READ_ONCE(s->sequence); /* ^^^ */ - return seq; +/** + * SEQCNT_LATCH_ZERO() - static initializer for seqcount_latch_t + * @seq_name: Name of the seqcount_latch_t instance + */ +#define SEQCNT_LATCH_ZERO(seq_name) { \ + .seqcount = SEQCNT_ZERO(seq_name.seqcount), \ } /** - * raw_write_seqcount_latch() - redirect readers to even/odd copy - * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants + * seqcount_latch_init() - runtime initializer for seqcount_latch_t + * @s: Pointer to the seqcount_latch_t instance + */ +static inline void seqcount_latch_init(seqcount_latch_t *s) +{ + seqcount_init(&s->seqcount); +} + +/** + * raw_read_seqcount_latch() - pick even/odd latch data copy + * @s: Pointer to seqcount_latch_t + * + * See raw_write_seqcount_latch() for details and a full reader/writer + * usage example. + * + * Return: sequence counter raw value. Use the lowest bit as an index for + * picking which data copy to read. The full counter must then be checked + * with read_seqcount_latch_retry(). + */ +static inline unsigned raw_read_seqcount_latch(const seqcount_latch_t *s) +{ + /* + * Pairs with the first smp_wmb() in raw_write_seqcount_latch(). + * Due to the dependent load, a full smp_rmb() is not needed. + */ + return READ_ONCE(s->seqcount.sequence); +} + +/** + * read_seqcount_latch_retry() - end a seqcount_latch_t read section + * @s: Pointer to seqcount_latch_t + * @start: count, from raw_read_seqcount_latch() + * + * Return: true if a read section retry is required, else false + */ +static inline int +read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start) +{ + return read_seqcount_retry(&s->seqcount, start); +} + +/** + * raw_write_seqcount_latch() - redirect latch readers to even/odd copy + * @s: Pointer to seqcount_latch_t * * The latch technique is a multiversion concurrency control method that allows * queries during non-atomic modifications. If you can guarantee queries never @@ -633,7 +718,7 @@ static inline int raw_read_seqcount_t_latch(seqcount_t *s) * The basic form is a data structure like:: * * struct latch_struct { - * seqcount_t seq; + * seqcount_latch_t seq; * struct data_struct data[2]; * }; * @@ -643,13 +728,13 @@ static inline int raw_read_seqcount_t_latch(seqcount_t *s) * void latch_modify(struct latch_struct *latch, ...) 
* { * smp_wmb(); // Ensure that the last data[1] update is visible - * latch->seq++; + * latch->seq.sequence++; * smp_wmb(); // Ensure that the seqcount update is visible * * modify(latch->data[0], ...); * * smp_wmb(); // Ensure that the data[0] update is visible - * latch->seq++; + * latch->seq.sequence++; * smp_wmb(); // Ensure that the seqcount update is visible * * modify(latch->data[1], ...); @@ -668,8 +753,8 @@ static inline int raw_read_seqcount_t_latch(seqcount_t *s) * idx = seq & 0x01; * entry = data_query(latch->data[idx], ...); * - * // read_seqcount_retry() includes needed smp_rmb() - * } while (read_seqcount_retry(&latch->seq, seq)); + * // This includes needed smp_rmb() + * } while (read_seqcount_latch_retry(&latch->seq, seq)); * * return entry; * } @@ -688,19 +773,16 @@ static inline int raw_read_seqcount_t_latch(seqcount_t *s) * to miss an entire modification sequence, once it resumes it might * observe the new entry. * - * NOTE: + * NOTE2: * * When data is a dynamic data structure; one should use regular RCU * patterns to manage the lifetimes of the objects within. */ -#define raw_write_seqcount_latch(s) \ - raw_write_seqcount_t_latch(__seqcount_ptr(s)) - -static inline void raw_write_seqcount_t_latch(seqcount_t *s) +static inline void raw_write_seqcount_latch(seqcount_latch_t *s) { - smp_wmb(); /* prior stores before incrementing "sequence" */ - s->sequence++; - smp_wmb(); /* increment "sequence" before following stores */ + smp_wmb(); /* prior stores before incrementing "sequence" */ + s->seqcount.sequence++; + smp_wmb(); /* increment "sequence" before following stores */ } /* @@ -714,13 +796,17 @@ static inline void raw_write_seqcount_t_latch(seqcount_t *s) * - Documentation/locking/seqlock.rst */ typedef struct { - struct seqcount seqcount; + /* + * Make sure that readers don't starve writers on PREEMPT_RT: use + * seqcount_spinlock_t instead of seqcount_t. Check __SEQ_LOCK(). + */ + seqcount_spinlock_t seqcount; spinlock_t lock; } seqlock_t; #define __SEQLOCK_UNLOCKED(lockname) \ { \ - .seqcount = SEQCNT_ZERO(lockname), \ + .seqcount = SEQCNT_SPINLOCK_ZERO(lockname, &(lockname).lock), \ .lock = __SPIN_LOCK_UNLOCKED(lockname) \ } @@ -730,12 +816,12 @@ typedef struct { */ #define seqlock_init(sl) \ do { \ - seqcount_init(&(sl)->seqcount); \ spin_lock_init(&(sl)->lock); \ + seqcount_spinlock_init(&(sl)->seqcount, &(sl)->lock); \ } while (0) /** - * DEFINE_SEQLOCK() - Define a statically allocated seqlock_t + * DEFINE_SEQLOCK(sl) - Define a statically allocated seqlock_t * @sl: Name of the seqlock_t instance */ #define DEFINE_SEQLOCK(sl) \ @@ -778,6 +864,12 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) return read_seqcount_retry(&sl->seqcount, start); } +/* + * For all seqlock_t write side functions, use write_seqcount_*t*_begin() + * instead of the generic write_seqcount_begin(). This way, no redundant + * lockdep_assert_held() checks are added. 
+ */ + /** * write_seqlock() - start a seqlock_t write side critical section * @sl: Pointer to seqlock_t @@ -794,7 +886,7 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) static inline void write_seqlock(seqlock_t *sl) { spin_lock(&sl->lock); - write_seqcount_t_begin(&sl->seqcount); + write_seqcount_t_begin(&sl->seqcount.seqcount); } /** @@ -806,7 +898,7 @@ static inline void write_seqlock(seqlock_t *sl) */ static inline void write_sequnlock(seqlock_t *sl) { - write_seqcount_t_end(&sl->seqcount); + write_seqcount_t_end(&sl->seqcount.seqcount); spin_unlock(&sl->lock); } @@ -820,7 +912,7 @@ static inline void write_sequnlock(seqlock_t *sl) static inline void write_seqlock_bh(seqlock_t *sl) { spin_lock_bh(&sl->lock); - write_seqcount_t_begin(&sl->seqcount); + write_seqcount_t_begin(&sl->seqcount.seqcount); } /** @@ -833,7 +925,7 @@ static inline void write_seqlock_bh(seqlock_t *sl) */ static inline void write_sequnlock_bh(seqlock_t *sl) { - write_seqcount_t_end(&sl->seqcount); + write_seqcount_t_end(&sl->seqcount.seqcount); spin_unlock_bh(&sl->lock); } @@ -847,7 +939,7 @@ static inline void write_sequnlock_bh(seqlock_t *sl) static inline void write_seqlock_irq(seqlock_t *sl) { spin_lock_irq(&sl->lock); - write_seqcount_t_begin(&sl->seqcount); + write_seqcount_t_begin(&sl->seqcount.seqcount); } /** @@ -859,7 +951,7 @@ static inline void write_seqlock_irq(seqlock_t *sl) */ static inline void write_sequnlock_irq(seqlock_t *sl) { - write_seqcount_t_end(&sl->seqcount); + write_seqcount_t_end(&sl->seqcount.seqcount); spin_unlock_irq(&sl->lock); } @@ -868,7 +960,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl) unsigned long flags; spin_lock_irqsave(&sl->lock, flags); - write_seqcount_t_begin(&sl->seqcount); + write_seqcount_t_begin(&sl->seqcount.seqcount); return flags; } @@ -897,7 +989,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl) static inline void write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags) { - write_seqcount_t_end(&sl->seqcount); + write_seqcount_t_end(&sl->seqcount.seqcount); spin_unlock_irqrestore(&sl->lock, flags); } diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c index 9147ff6a12e5..3994a217bde7 100644 --- a/kernel/kcsan/core.c +++ b/kernel/kcsan/core.c @@ -1,5 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 +#define pr_fmt(fmt) "kcsan: " fmt + #include #include #include @@ -98,6 +100,9 @@ static atomic_long_t watchpoints[CONFIG_KCSAN_NUM_WATCHPOINTS + NUM_SLOTS-1]; */ static DEFINE_PER_CPU(long, kcsan_skip); +/* For kcsan_prandom_u32_max(). */ +static DEFINE_PER_CPU(struct rnd_state, kcsan_rand_state); + static __always_inline atomic_long_t *find_watchpoint(unsigned long addr, size_t size, bool expect_write, @@ -223,7 +228,7 @@ is_atomic(const volatile void *ptr, size_t size, int type, struct kcsan_ctx *ctx if (IS_ENABLED(CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC) && (type & KCSAN_ACCESS_WRITE) && size <= sizeof(long) && - IS_ALIGNED((unsigned long)ptr, size)) + !(type & KCSAN_ACCESS_COMPOUND) && IS_ALIGNED((unsigned long)ptr, size)) return true; /* Assume aligned writes up to word size are atomic. */ if (ctx->atomic_next > 0) { @@ -269,11 +274,28 @@ should_watch(const volatile void *ptr, size_t size, int type, struct kcsan_ctx * return true; } +/* + * Returns a pseudo-random number in interval [0, ep_ro). See prandom_u32_max() + * for more details. + * + * The open-coded version here is using only safe primitives for all contexts + * where we can have KCSAN instrumentation. 
In particular, we cannot use + * prandom_u32() directly, as its tracepoint could cause recursion. + */ +static u32 kcsan_prandom_u32_max(u32 ep_ro) +{ + struct rnd_state *state = &get_cpu_var(kcsan_rand_state); + const u32 res = prandom_u32_state(state); + + put_cpu_var(kcsan_rand_state); + return (u32)(((u64) res * ep_ro) >> 32); +} + static inline void reset_kcsan_skip(void) { long skip_count = kcsan_skip_watch - (IS_ENABLED(CONFIG_KCSAN_SKIP_WATCH_RANDOMIZE) ? - prandom_u32_max(kcsan_skip_watch) : + kcsan_prandom_u32_max(kcsan_skip_watch) : 0); this_cpu_write(kcsan_skip, skip_count); } @@ -283,12 +305,18 @@ static __always_inline bool kcsan_is_enabled(void) return READ_ONCE(kcsan_enabled) && get_ctx()->disable_count == 0; } -static inline unsigned int get_delay(void) +/* Introduce delay depending on context and configuration. */ +static void delay_access(int type) { unsigned int delay = in_task() ? kcsan_udelay_task : kcsan_udelay_interrupt; - return delay - (IS_ENABLED(CONFIG_KCSAN_DELAY_RANDOMIZE) ? - prandom_u32_max(delay) : - 0); + /* For certain access types, skew the random delay to be longer. */ + unsigned int skew_delay_order = + (type & (KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_ASSERT)) ? 1 : 0; + + delay -= IS_ENABLED(CONFIG_KCSAN_DELAY_RANDOMIZE) ? + kcsan_prandom_u32_max(delay >> skew_delay_order) : + 0; + udelay(delay); } void kcsan_save_irqtrace(struct task_struct *task) @@ -361,13 +389,13 @@ static noinline void kcsan_found_watchpoint(const volatile void *ptr, * already removed the watchpoint, or another thread consumed * the watchpoint before this thread. */ - kcsan_counter_inc(KCSAN_COUNTER_REPORT_RACES); + atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_REPORT_RACES]); } if ((type & KCSAN_ACCESS_ASSERT) != 0) - kcsan_counter_inc(KCSAN_COUNTER_ASSERT_FAILURES); + atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_ASSERT_FAILURES]); else - kcsan_counter_inc(KCSAN_COUNTER_DATA_RACES); + atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_DATA_RACES]); user_access_restore(flags); } @@ -408,7 +436,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type) goto out; if (!check_encodable((unsigned long)ptr, size)) { - kcsan_counter_inc(KCSAN_COUNTER_UNENCODABLE_ACCESSES); + atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_UNENCODABLE_ACCESSES]); goto out; } @@ -428,12 +456,12 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type) * with which should_watch() returns true should be tweaked so * that this case happens very rarely. */ - kcsan_counter_inc(KCSAN_COUNTER_NO_CAPACITY); + atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_NO_CAPACITY]); goto out_unlock; } - kcsan_counter_inc(KCSAN_COUNTER_SETUP_WATCHPOINTS); - kcsan_counter_inc(KCSAN_COUNTER_USED_WATCHPOINTS); + atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_SETUP_WATCHPOINTS]); + atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_USED_WATCHPOINTS]); /* * Read the current value, to later check and infer a race if the data @@ -459,7 +487,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type) if (IS_ENABLED(CONFIG_KCSAN_DEBUG)) { kcsan_disable_current(); - pr_err("KCSAN: watching %s, size: %zu, addr: %px [slot: %d, encoded: %lx]\n", + pr_err("watching %s, size: %zu, addr: %px [slot: %d, encoded: %lx]\n", is_write ? 
"write" : "read", size, ptr, watchpoint_slot((unsigned long)ptr), encode_watchpoint((unsigned long)ptr, size, is_write)); @@ -470,7 +498,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type) * Delay this thread, to increase probability of observing a racy * conflicting access. */ - udelay(get_delay()); + delay_access(type); /* * Re-read value, and check if it is as expected; if not, we infer a @@ -535,16 +563,16 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type) * increment this counter. */ if (is_assert && value_change == KCSAN_VALUE_CHANGE_TRUE) - kcsan_counter_inc(KCSAN_COUNTER_ASSERT_FAILURES); + atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_ASSERT_FAILURES]); kcsan_report(ptr, size, type, value_change, KCSAN_REPORT_RACE_SIGNAL, watchpoint - watchpoints); } else if (value_change == KCSAN_VALUE_CHANGE_TRUE) { /* Inferring a race, since the value should not have changed. */ - kcsan_counter_inc(KCSAN_COUNTER_RACES_UNKNOWN_ORIGIN); + atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_RACES_UNKNOWN_ORIGIN]); if (is_assert) - kcsan_counter_inc(KCSAN_COUNTER_ASSERT_FAILURES); + atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_ASSERT_FAILURES]); if (IS_ENABLED(CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN) || is_assert) kcsan_report(ptr, size, type, KCSAN_VALUE_CHANGE_TRUE, @@ -557,7 +585,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type) * reused after this point. */ remove_watchpoint(watchpoint); - kcsan_counter_dec(KCSAN_COUNTER_USED_WATCHPOINTS); + atomic_long_dec(&kcsan_counters[KCSAN_COUNTER_USED_WATCHPOINTS]); out_unlock: if (!kcsan_interrupt_watcher) local_irq_restore(irq_flags); @@ -614,13 +642,16 @@ void __init kcsan_init(void) BUG_ON(!in_task()); kcsan_debugfs_init(); + prandom_seed_full_state(&kcsan_rand_state); /* * We are in the init task, and no other tasks should be running; * WRITE_ONCE without memory barrier is sufficient. */ - if (kcsan_early_enable) + if (kcsan_early_enable) { + pr_info("enabled early\n"); WRITE_ONCE(kcsan_enabled, true); + } } /* === Exported interface =================================================== */ @@ -793,7 +824,17 @@ EXPORT_SYMBOL(__kcsan_check_access); EXPORT_SYMBOL(__tsan_write##size); \ void __tsan_unaligned_write##size(void *ptr) \ __alias(__tsan_write##size); \ - EXPORT_SYMBOL(__tsan_unaligned_write##size) + EXPORT_SYMBOL(__tsan_unaligned_write##size); \ + void __tsan_read_write##size(void *ptr); \ + void __tsan_read_write##size(void *ptr) \ + { \ + check_access(ptr, size, \ + KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE); \ + } \ + EXPORT_SYMBOL(__tsan_read_write##size); \ + void __tsan_unaligned_read_write##size(void *ptr) \ + __alias(__tsan_read_write##size); \ + EXPORT_SYMBOL(__tsan_unaligned_read_write##size) DEFINE_TSAN_READ_WRITE(1); DEFINE_TSAN_READ_WRITE(2); @@ -879,3 +920,130 @@ void __tsan_init(void) { } EXPORT_SYMBOL(__tsan_init); + +/* + * Instrumentation for atomic builtins (__atomic_*, __sync_*). + * + * Normal kernel code _should not_ be using them directly, but some + * architectures may implement some or all atomics using the compilers' + * builtins. + * + * Note: If an architecture decides to fully implement atomics using the + * builtins, because they are implicitly instrumented by KCSAN (and KASAN, + * etc.), implementing the ARCH_ATOMIC interface (to get instrumentation via + * atomic-instrumented) is no longer necessary. 
+ * + * TSAN instrumentation replaces atomic accesses with calls to any of the below + * functions, whose job is to also execute the operation itself. + */ + +#define DEFINE_TSAN_ATOMIC_LOAD_STORE(bits) \ + u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder); \ + u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder) \ + { \ + if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) { \ + check_access(ptr, bits / BITS_PER_BYTE, KCSAN_ACCESS_ATOMIC); \ + } \ + return __atomic_load_n(ptr, memorder); \ + } \ + EXPORT_SYMBOL(__tsan_atomic##bits##_load); \ + void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder); \ + void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder) \ + { \ + if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) { \ + check_access(ptr, bits / BITS_PER_BYTE, \ + KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC); \ + } \ + __atomic_store_n(ptr, v, memorder); \ + } \ + EXPORT_SYMBOL(__tsan_atomic##bits##_store) + +#define DEFINE_TSAN_ATOMIC_RMW(op, bits, suffix) \ + u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder); \ + u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder) \ + { \ + if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) { \ + check_access(ptr, bits / BITS_PER_BYTE, \ + KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | \ + KCSAN_ACCESS_ATOMIC); \ + } \ + return __atomic_##op##suffix(ptr, v, memorder); \ + } \ + EXPORT_SYMBOL(__tsan_atomic##bits##_##op) + +/* + * Note: CAS operations are always classified as write, even in case they + * fail. We cannot perform check_access() after a write, as it might lead to + * false positives, in cases such as: + * + * T0: __atomic_compare_exchange_n(&p->flag, &old, 1, ...) + * + * T1: if (__atomic_load_n(&p->flag, ...)) { + * modify *p; + * p->flag = 0; + * } + * + * The only downside is that, if there are 3 threads, with one CAS that + * succeeds, another CAS that fails, and an unmarked racing operation, we may + * point at the wrong CAS as the source of the race. However, if we assume that + * all CAS can succeed in some other execution, the data race is still valid. 
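 *
 * For reference, DEFINE_TSAN_ATOMIC_CMPXCHG(32, strong, 0) defined just
 * below expands to roughly:
 *
 *	int __tsan_atomic32_compare_exchange_strong(u32 *ptr, u32 *exp,
 *						    u32 val, int mo, int fail_mo)
 *	{
 *		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS))
 *			check_access(ptr, 32 / BITS_PER_BYTE,
 *				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE |
 *				     KCSAN_ACCESS_ATOMIC);
 *		return __atomic_compare_exchange_n(ptr, exp, val, 0, mo, fail_mo);
 *	}
 *
 * i.e. the access is marked as a compound (read-write) atomic before the
 * builtin runs, whether or not the CAS succeeds.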
+ */ +#define DEFINE_TSAN_ATOMIC_CMPXCHG(bits, strength, weak) \ + int __tsan_atomic##bits##_compare_exchange_##strength(u##bits *ptr, u##bits *exp, \ + u##bits val, int mo, int fail_mo); \ + int __tsan_atomic##bits##_compare_exchange_##strength(u##bits *ptr, u##bits *exp, \ + u##bits val, int mo, int fail_mo) \ + { \ + if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) { \ + check_access(ptr, bits / BITS_PER_BYTE, \ + KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | \ + KCSAN_ACCESS_ATOMIC); \ + } \ + return __atomic_compare_exchange_n(ptr, exp, val, weak, mo, fail_mo); \ + } \ + EXPORT_SYMBOL(__tsan_atomic##bits##_compare_exchange_##strength) + +#define DEFINE_TSAN_ATOMIC_CMPXCHG_VAL(bits) \ + u##bits __tsan_atomic##bits##_compare_exchange_val(u##bits *ptr, u##bits exp, u##bits val, \ + int mo, int fail_mo); \ + u##bits __tsan_atomic##bits##_compare_exchange_val(u##bits *ptr, u##bits exp, u##bits val, \ + int mo, int fail_mo) \ + { \ + if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) { \ + check_access(ptr, bits / BITS_PER_BYTE, \ + KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | \ + KCSAN_ACCESS_ATOMIC); \ + } \ + __atomic_compare_exchange_n(ptr, &exp, val, 0, mo, fail_mo); \ + return exp; \ + } \ + EXPORT_SYMBOL(__tsan_atomic##bits##_compare_exchange_val) + +#define DEFINE_TSAN_ATOMIC_OPS(bits) \ + DEFINE_TSAN_ATOMIC_LOAD_STORE(bits); \ + DEFINE_TSAN_ATOMIC_RMW(exchange, bits, _n); \ + DEFINE_TSAN_ATOMIC_RMW(fetch_add, bits, ); \ + DEFINE_TSAN_ATOMIC_RMW(fetch_sub, bits, ); \ + DEFINE_TSAN_ATOMIC_RMW(fetch_and, bits, ); \ + DEFINE_TSAN_ATOMIC_RMW(fetch_or, bits, ); \ + DEFINE_TSAN_ATOMIC_RMW(fetch_xor, bits, ); \ + DEFINE_TSAN_ATOMIC_RMW(fetch_nand, bits, ); \ + DEFINE_TSAN_ATOMIC_CMPXCHG(bits, strong, 0); \ + DEFINE_TSAN_ATOMIC_CMPXCHG(bits, weak, 1); \ + DEFINE_TSAN_ATOMIC_CMPXCHG_VAL(bits) + +DEFINE_TSAN_ATOMIC_OPS(8); +DEFINE_TSAN_ATOMIC_OPS(16); +DEFINE_TSAN_ATOMIC_OPS(32); +DEFINE_TSAN_ATOMIC_OPS(64); + +void __tsan_atomic_thread_fence(int memorder); +void __tsan_atomic_thread_fence(int memorder) +{ + __atomic_thread_fence(memorder); +} +EXPORT_SYMBOL(__tsan_atomic_thread_fence); + +void __tsan_atomic_signal_fence(int memorder); +void __tsan_atomic_signal_fence(int memorder) { } +EXPORT_SYMBOL(__tsan_atomic_signal_fence); diff --git a/kernel/kcsan/debugfs.c b/kernel/kcsan/debugfs.c index 023e49c58d55..3c8093a371b1 100644 --- a/kernel/kcsan/debugfs.c +++ b/kernel/kcsan/debugfs.c @@ -1,5 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 +#define pr_fmt(fmt) "kcsan: " fmt + #include #include #include @@ -15,10 +17,19 @@ #include "kcsan.h" -/* - * Statistics counters. - */ -static atomic_long_t counters[KCSAN_COUNTER_COUNT]; +atomic_long_t kcsan_counters[KCSAN_COUNTER_COUNT]; +static const char *const counter_names[] = { + [KCSAN_COUNTER_USED_WATCHPOINTS] = "used_watchpoints", + [KCSAN_COUNTER_SETUP_WATCHPOINTS] = "setup_watchpoints", + [KCSAN_COUNTER_DATA_RACES] = "data_races", + [KCSAN_COUNTER_ASSERT_FAILURES] = "assert_failures", + [KCSAN_COUNTER_NO_CAPACITY] = "no_capacity", + [KCSAN_COUNTER_REPORT_RACES] = "report_races", + [KCSAN_COUNTER_RACES_UNKNOWN_ORIGIN] = "races_unknown_origin", + [KCSAN_COUNTER_UNENCODABLE_ACCESSES] = "unencodable_accesses", + [KCSAN_COUNTER_ENCODING_FALSE_POSITIVES] = "encoding_false_positives", +}; +static_assert(ARRAY_SIZE(counter_names) == KCSAN_COUNTER_COUNT); /* * Addresses for filtering functions from reporting. 
This list can be used as a @@ -39,34 +50,6 @@ static struct { }; static DEFINE_SPINLOCK(report_filterlist_lock); -static const char *counter_to_name(enum kcsan_counter_id id) -{ - switch (id) { - case KCSAN_COUNTER_USED_WATCHPOINTS: return "used_watchpoints"; - case KCSAN_COUNTER_SETUP_WATCHPOINTS: return "setup_watchpoints"; - case KCSAN_COUNTER_DATA_RACES: return "data_races"; - case KCSAN_COUNTER_ASSERT_FAILURES: return "assert_failures"; - case KCSAN_COUNTER_NO_CAPACITY: return "no_capacity"; - case KCSAN_COUNTER_REPORT_RACES: return "report_races"; - case KCSAN_COUNTER_RACES_UNKNOWN_ORIGIN: return "races_unknown_origin"; - case KCSAN_COUNTER_UNENCODABLE_ACCESSES: return "unencodable_accesses"; - case KCSAN_COUNTER_ENCODING_FALSE_POSITIVES: return "encoding_false_positives"; - case KCSAN_COUNTER_COUNT: - BUG(); - } - return NULL; -} - -void kcsan_counter_inc(enum kcsan_counter_id id) -{ - atomic_long_inc(&counters[id]); -} - -void kcsan_counter_dec(enum kcsan_counter_id id) -{ - atomic_long_dec(&counters[id]); -} - /* * The microbenchmark allows benchmarking KCSAN core runtime only. To run * multiple threads, pipe 'microbench=' from multiple tasks into the @@ -86,7 +69,7 @@ static noinline void microbenchmark(unsigned long iters) */ WRITE_ONCE(kcsan_enabled, false); - pr_info("KCSAN: %s begin | iters: %lu\n", __func__, iters); + pr_info("%s begin | iters: %lu\n", __func__, iters); cycles = get_cycles(); while (iters--) { @@ -97,73 +80,13 @@ static noinline void microbenchmark(unsigned long iters) } cycles = get_cycles() - cycles; - pr_info("KCSAN: %s end | cycles: %llu\n", __func__, cycles); + pr_info("%s end | cycles: %llu\n", __func__, cycles); WRITE_ONCE(kcsan_enabled, was_enabled); /* restore context */ current->kcsan_ctx = ctx_save; } -/* - * Simple test to create conflicting accesses. Write 'test=' to KCSAN's - * debugfs file from multiple tasks to generate real conflicts and show reports. - */ -static long test_dummy; -static long test_flags; -static long test_scoped; -static noinline void test_thread(unsigned long iters) -{ - const long CHANGE_BITS = 0xff00ff00ff00ff00L; - const struct kcsan_ctx ctx_save = current->kcsan_ctx; - cycles_t cycles; - - /* We may have been called from an atomic region; reset context. */ - memset(¤t->kcsan_ctx, 0, sizeof(current->kcsan_ctx)); - - pr_info("KCSAN: %s begin | iters: %lu\n", __func__, iters); - pr_info("test_dummy@%px, test_flags@%px, test_scoped@%px,\n", - &test_dummy, &test_flags, &test_scoped); - - cycles = get_cycles(); - while (iters--) { - /* These all should generate reports. */ - __kcsan_check_read(&test_dummy, sizeof(test_dummy)); - ASSERT_EXCLUSIVE_WRITER(test_dummy); - ASSERT_EXCLUSIVE_ACCESS(test_dummy); - - ASSERT_EXCLUSIVE_BITS(test_flags, ~CHANGE_BITS); /* no report */ - __kcsan_check_read(&test_flags, sizeof(test_flags)); /* no report */ - - ASSERT_EXCLUSIVE_BITS(test_flags, CHANGE_BITS); /* report */ - __kcsan_check_read(&test_flags, sizeof(test_flags)); /* no report */ - - /* not actually instrumented */ - WRITE_ONCE(test_dummy, iters); /* to observe value-change */ - __kcsan_check_write(&test_dummy, sizeof(test_dummy)); - - test_flags ^= CHANGE_BITS; /* generate value-change */ - __kcsan_check_write(&test_flags, sizeof(test_flags)); - - BUG_ON(current->kcsan_ctx.scoped_accesses.prev); - { - /* Should generate reports anywhere in this block. */ - ASSERT_EXCLUSIVE_WRITER_SCOPED(test_scoped); - ASSERT_EXCLUSIVE_ACCESS_SCOPED(test_scoped); - BUG_ON(!current->kcsan_ctx.scoped_accesses.prev); - /* Unrelated accesses. 
*/ - __kcsan_check_access(&cycles, sizeof(cycles), 0); - __kcsan_check_access(&cycles, sizeof(cycles), KCSAN_ACCESS_ATOMIC); - } - BUG_ON(current->kcsan_ctx.scoped_accesses.prev); - } - cycles = get_cycles() - cycles; - - pr_info("KCSAN: %s end | cycles: %llu\n", __func__, cycles); - - /* restore context */ - current->kcsan_ctx = ctx_save; -} - static int cmp_filterlist_addrs(const void *rhs, const void *lhs) { const unsigned long a = *(const unsigned long *)rhs; @@ -220,7 +143,7 @@ static ssize_t insert_report_filterlist(const char *func) ssize_t ret = 0; if (!addr) { - pr_err("KCSAN: could not find function: '%s'\n", func); + pr_err("could not find function: '%s'\n", func); return -ENOENT; } @@ -270,9 +193,10 @@ static int show_info(struct seq_file *file, void *v) /* show stats */ seq_printf(file, "enabled: %i\n", READ_ONCE(kcsan_enabled)); - for (i = 0; i < KCSAN_COUNTER_COUNT; ++i) - seq_printf(file, "%s: %ld\n", counter_to_name(i), - atomic_long_read(&counters[i])); + for (i = 0; i < KCSAN_COUNTER_COUNT; ++i) { + seq_printf(file, "%s: %ld\n", counter_names[i], + atomic_long_read(&kcsan_counters[i])); + } /* show filter functions, and filter type */ spin_lock_irqsave(&report_filterlist_lock, flags); @@ -307,18 +231,12 @@ debugfs_write(struct file *file, const char __user *buf, size_t count, loff_t *o WRITE_ONCE(kcsan_enabled, true); } else if (!strcmp(arg, "off")) { WRITE_ONCE(kcsan_enabled, false); - } else if (!strncmp(arg, "microbench=", sizeof("microbench=") - 1)) { + } else if (str_has_prefix(arg, "microbench=")) { unsigned long iters; - if (kstrtoul(&arg[sizeof("microbench=") - 1], 0, &iters)) + if (kstrtoul(&arg[strlen("microbench=")], 0, &iters)) return -EINVAL; microbenchmark(iters); - } else if (!strncmp(arg, "test=", sizeof("test=") - 1)) { - unsigned long iters; - - if (kstrtoul(&arg[sizeof("test=") - 1], 0, &iters)) - return -EINVAL; - test_thread(iters); } else if (!strcmp(arg, "whitelist")) { set_report_filterlist_whitelist(true); } else if (!strcmp(arg, "blacklist")) { diff --git a/kernel/kcsan/kcsan-test.c b/kernel/kcsan/kcsan-test.c index fed6fcb5768c..ebe7fd245104 100644 --- a/kernel/kcsan/kcsan-test.c +++ b/kernel/kcsan/kcsan-test.c @@ -27,6 +27,12 @@ #include #include +#ifdef CONFIG_CC_HAS_TSAN_COMPOUND_READ_BEFORE_WRITE +#define __KCSAN_ACCESS_RW(alt) (KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE) +#else +#define __KCSAN_ACCESS_RW(alt) (alt) +#endif + /* Points to current test-case memory access "kernels". */ static void (*access_kernels[2])(void); @@ -186,20 +192,21 @@ static bool report_matches(const struct expect_report *r) /* Access 1 & 2 */ for (i = 0; i < 2; ++i) { + const int ty = r->access[i].type; const char *const access_type = - (r->access[i].type & KCSAN_ACCESS_ASSERT) ? - ((r->access[i].type & KCSAN_ACCESS_WRITE) ? - "assert no accesses" : - "assert no writes") : - ((r->access[i].type & KCSAN_ACCESS_WRITE) ? - "write" : - "read"); + (ty & KCSAN_ACCESS_ASSERT) ? + ((ty & KCSAN_ACCESS_WRITE) ? + "assert no accesses" : + "assert no writes") : + ((ty & KCSAN_ACCESS_WRITE) ? + ((ty & KCSAN_ACCESS_COMPOUND) ? + "read-write" : + "write") : + "read"); const char *const access_type_aux = - (r->access[i].type & KCSAN_ACCESS_ATOMIC) ? - " (marked)" : - ((r->access[i].type & KCSAN_ACCESS_SCOPED) ? - " (scoped)" : - ""); + (ty & KCSAN_ACCESS_ATOMIC) ? + " (marked)" : + ((ty & KCSAN_ACCESS_SCOPED) ? 
" (scoped)" : ""); if (i == 1) { /* Access 2 */ @@ -277,6 +284,12 @@ static noinline void test_kernel_write_atomic(void) WRITE_ONCE(test_var, READ_ONCE_NOCHECK(test_sink) + 1); } +static noinline void test_kernel_atomic_rmw(void) +{ + /* Use builtin, so we can set up the "bad" atomic/non-atomic scenario. */ + __atomic_fetch_add(&test_var, 1, __ATOMIC_RELAXED); +} + __no_kcsan static noinline void test_kernel_write_uninstrumented(void) { test_var++; } @@ -390,6 +403,15 @@ static noinline void test_kernel_seqlock_writer(void) write_sequnlock_irqrestore(&test_seqlock, flags); } +static noinline void test_kernel_atomic_builtins(void) +{ + /* + * Generate concurrent accesses, expecting no reports, ensuring KCSAN + * treats builtin atomics as actually atomic. + */ + __atomic_load_n(&test_var, __ATOMIC_RELAXED); +} + /* ===== Test cases ===== */ /* Simple test with normal data race. */ @@ -430,8 +452,8 @@ static void test_concurrent_races(struct kunit *test) const struct expect_report expect = { .access = { /* NULL will match any address. */ - { test_kernel_rmw_array, NULL, 0, KCSAN_ACCESS_WRITE }, - { test_kernel_rmw_array, NULL, 0, 0 }, + { test_kernel_rmw_array, NULL, 0, __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_rmw_array, NULL, 0, __KCSAN_ACCESS_RW(0) }, }, }; static const struct expect_report never = { @@ -620,6 +642,29 @@ static void test_read_plain_atomic_write(struct kunit *test) KUNIT_EXPECT_TRUE(test, match_expect); } +/* Test that atomic RMWs generate correct report. */ +__no_kcsan +static void test_read_plain_atomic_rmw(struct kunit *test) +{ + const struct expect_report expect = { + .access = { + { test_kernel_read, &test_var, sizeof(test_var), 0 }, + { test_kernel_atomic_rmw, &test_var, sizeof(test_var), + KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC }, + }, + }; + bool match_expect = false; + + if (IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) + return; + + begin_test_checks(test_kernel_read, test_kernel_atomic_rmw); + do { + match_expect = report_matches(&expect); + } while (!end_test_checks(match_expect)); + KUNIT_EXPECT_TRUE(test, match_expect); +} + /* Zero-sized accesses should never cause data race reports. */ __no_kcsan static void test_zero_size_access(struct kunit *test) @@ -852,6 +897,59 @@ static void test_seqlock_noreport(struct kunit *test) KUNIT_EXPECT_FALSE(test, match_never); } +/* + * Test atomic builtins work and required instrumentation functions exist. We + * also test that KCSAN understands they're atomic by racing with them via + * test_kernel_atomic_builtins(), and expect no reports. + * + * The atomic builtins _SHOULD NOT_ be used in normal kernel code! 
+ */ +static void test_atomic_builtins(struct kunit *test) +{ + bool match_never = false; + + begin_test_checks(test_kernel_atomic_builtins, test_kernel_atomic_builtins); + do { + long tmp; + + kcsan_enable_current(); + + __atomic_store_n(&test_var, 42L, __ATOMIC_RELAXED); + KUNIT_EXPECT_EQ(test, 42L, __atomic_load_n(&test_var, __ATOMIC_RELAXED)); + + KUNIT_EXPECT_EQ(test, 42L, __atomic_exchange_n(&test_var, 20, __ATOMIC_RELAXED)); + KUNIT_EXPECT_EQ(test, 20L, test_var); + + tmp = 20L; + KUNIT_EXPECT_TRUE(test, __atomic_compare_exchange_n(&test_var, &tmp, 30L, + 0, __ATOMIC_RELAXED, + __ATOMIC_RELAXED)); + KUNIT_EXPECT_EQ(test, tmp, 20L); + KUNIT_EXPECT_EQ(test, test_var, 30L); + KUNIT_EXPECT_FALSE(test, __atomic_compare_exchange_n(&test_var, &tmp, 40L, + 1, __ATOMIC_RELAXED, + __ATOMIC_RELAXED)); + KUNIT_EXPECT_EQ(test, tmp, 30L); + KUNIT_EXPECT_EQ(test, test_var, 30L); + + KUNIT_EXPECT_EQ(test, 30L, __atomic_fetch_add(&test_var, 1, __ATOMIC_RELAXED)); + KUNIT_EXPECT_EQ(test, 31L, __atomic_fetch_sub(&test_var, 1, __ATOMIC_RELAXED)); + KUNIT_EXPECT_EQ(test, 30L, __atomic_fetch_and(&test_var, 0xf, __ATOMIC_RELAXED)); + KUNIT_EXPECT_EQ(test, 14L, __atomic_fetch_xor(&test_var, 0xf, __ATOMIC_RELAXED)); + KUNIT_EXPECT_EQ(test, 1L, __atomic_fetch_or(&test_var, 0xf0, __ATOMIC_RELAXED)); + KUNIT_EXPECT_EQ(test, 241L, __atomic_fetch_nand(&test_var, 0xf, __ATOMIC_RELAXED)); + KUNIT_EXPECT_EQ(test, -2L, test_var); + + __atomic_thread_fence(__ATOMIC_SEQ_CST); + __atomic_signal_fence(__ATOMIC_SEQ_CST); + + kcsan_disable_current(); + + match_never = report_available(); + } while (!end_test_checks(match_never)); + KUNIT_EXPECT_FALSE(test, match_never); +} + /* * Each test case is run with different numbers of threads. Until KUnit supports * passing arguments for each test case, we encode #threads in the test case @@ -880,6 +978,7 @@ static struct kunit_case kcsan_test_cases[] = { KCSAN_KUNIT_CASE(test_write_write_struct_part), KCSAN_KUNIT_CASE(test_read_atomic_write_atomic), KCSAN_KUNIT_CASE(test_read_plain_atomic_write), + KCSAN_KUNIT_CASE(test_read_plain_atomic_rmw), KCSAN_KUNIT_CASE(test_zero_size_access), KCSAN_KUNIT_CASE(test_data_race), KCSAN_KUNIT_CASE(test_assert_exclusive_writer), @@ -891,6 +990,7 @@ static struct kunit_case kcsan_test_cases[] = { KCSAN_KUNIT_CASE(test_assert_exclusive_access_scoped), KCSAN_KUNIT_CASE(test_jiffies_noreport), KCSAN_KUNIT_CASE(test_seqlock_noreport), + KCSAN_KUNIT_CASE(test_atomic_builtins), {}, }; diff --git a/kernel/kcsan/kcsan.h b/kernel/kcsan/kcsan.h index 29480010dc30..8d4bf3431b3c 100644 --- a/kernel/kcsan/kcsan.h +++ b/kernel/kcsan/kcsan.h @@ -8,6 +8,7 @@ #ifndef _KERNEL_KCSAN_KCSAN_H #define _KERNEL_KCSAN_KCSAN_H +#include #include #include @@ -34,6 +35,10 @@ void kcsan_restore_irqtrace(struct task_struct *task); */ void kcsan_debugfs_init(void); +/* + * Statistics counters displayed via debugfs; should only be modified in + * slow-paths. + */ enum kcsan_counter_id { /* * Number of watchpoints currently in use. @@ -86,12 +91,7 @@ enum kcsan_counter_id { KCSAN_COUNTER_COUNT, /* number of counters */ }; - -/* - * Increment/decrement counter with given id; avoid calling these in fast-path. 
- */ -extern void kcsan_counter_inc(enum kcsan_counter_id id); -extern void kcsan_counter_dec(enum kcsan_counter_id id); +extern atomic_long_t kcsan_counters[KCSAN_COUNTER_COUNT]; /* * Returns true if data races in the function symbol that maps to func_addr diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c index 9d07e175de0f..d3bf87e6007c 100644 --- a/kernel/kcsan/report.c +++ b/kernel/kcsan/report.c @@ -228,6 +228,10 @@ static const char *get_access_type(int type) return "write"; case KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC: return "write (marked)"; + case KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE: + return "read-write"; + case KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC: + return "read-write (marked)"; case KCSAN_ACCESS_SCOPED: return "read (scoped)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_ATOMIC: @@ -275,8 +279,8 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries cur = strnstr(buf, "kcsan_", len); if (cur) { - cur += sizeof("kcsan_") - 1; - if (strncmp(cur, "test", sizeof("test") - 1)) + cur += strlen("kcsan_"); + if (!str_has_prefix(cur, "test")) continue; /* KCSAN runtime function. */ /* KCSAN related test. */ } @@ -555,7 +559,7 @@ static bool prepare_report_consumer(unsigned long *flags, * If the actual accesses to not match, this was a false * positive due to watchpoint encoding. */ - kcsan_counter_inc(KCSAN_COUNTER_ENCODING_FALSE_POSITIVES); + atomic_long_inc(&kcsan_counters[KCSAN_COUNTER_ENCODING_FALSE_POSITIVES]); goto discard; } diff --git a/kernel/kcsan/selftest.c b/kernel/kcsan/selftest.c index d26a052d3383..d98bc208d06d 100644 --- a/kernel/kcsan/selftest.c +++ b/kernel/kcsan/selftest.c @@ -1,5 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 +#define pr_fmt(fmt) "kcsan: " fmt + #include #include #include @@ -116,16 +118,16 @@ static int __init kcsan_selftest(void) if (do_test()) \ ++passed; \ else \ - pr_err("KCSAN selftest: " #do_test " failed"); \ + pr_err("selftest: " #do_test " failed"); \ } while (0) RUN_TEST(test_requires); RUN_TEST(test_encode_decode); RUN_TEST(test_matching_access); - pr_info("KCSAN selftest: %d/%d tests passed\n", passed, total); + pr_info("selftest: %d/%d tests passed\n", passed, total); if (passed != total) - panic("KCSAN selftests failed"); + panic("selftests failed"); return 0; } postcore_initcall(kcsan_selftest); diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index 2facbbd146ec..3e99dfef8408 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -76,6 +76,23 @@ module_param(lock_stat, int, 0644); #define lock_stat 0 #endif +DEFINE_PER_CPU(unsigned int, lockdep_recursion); +EXPORT_PER_CPU_SYMBOL_GPL(lockdep_recursion); + +static inline bool lockdep_enabled(void) +{ + if (!debug_locks) + return false; + + if (raw_cpu_read(lockdep_recursion)) + return false; + + if (current->lockdep_recursion) + return false; + + return true; +} + /* * lockdep_lock: protects the lockdep graph, the hashes and the * class/list/hash allocators. 
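 *
 * A minimal usage sketch (illustrative only, not part of this hunk) of how
 * an entry point such as lock_acquire() might combine lockdep_enabled()
 * with the per-CPU recursion counter introduced in this patch:
 *
 *	if (unlikely(!lockdep_enabled()))
 *		return;
 *
 *	raw_local_irq_save(flags);
 *	lockdep_recursion_inc();
 *	__lock_acquire(...);
 *	lockdep_recursion_finish();
 *	raw_local_irq_restore(flags);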
@@ -93,7 +110,7 @@ static inline void lockdep_lock(void) arch_spin_lock(&__lock); __owner = current; - current->lockdep_recursion++; + __this_cpu_inc(lockdep_recursion); } static inline void lockdep_unlock(void) @@ -101,7 +118,7 @@ static inline void lockdep_unlock(void) if (debug_locks && DEBUG_LOCKS_WARN_ON(__owner != current)) return; - current->lockdep_recursion--; + __this_cpu_dec(lockdep_recursion); __owner = NULL; arch_spin_unlock(&__lock); } @@ -371,6 +388,21 @@ static struct hlist_head classhash_table[CLASSHASH_SIZE]; static struct hlist_head chainhash_table[CHAINHASH_SIZE]; +/* + * the id of held_lock + */ +static inline u16 hlock_id(struct held_lock *hlock) +{ + BUILD_BUG_ON(MAX_LOCKDEP_KEYS_BITS + 2 > 16); + + return (hlock->class_idx | (hlock->read << MAX_LOCKDEP_KEYS_BITS)); +} + +static inline unsigned int chain_hlock_class_idx(u16 hlock_id) +{ + return hlock_id & (MAX_LOCKDEP_KEYS - 1); +} + /* * The hash key of the lock dependency chains is a hash itself too: * it's a hash of all locks taken up to that lock, including that lock. @@ -393,10 +425,15 @@ void lockdep_init_task(struct task_struct *task) task->lockdep_recursion = 0; } +static __always_inline void lockdep_recursion_inc(void) +{ + __this_cpu_inc(lockdep_recursion); +} + static __always_inline void lockdep_recursion_finish(void) { - if (WARN_ON_ONCE((--current->lockdep_recursion) & LOCKDEP_RECURSION_MASK)) - current->lockdep_recursion = 0; + if (WARN_ON_ONCE(__this_cpu_dec_return(lockdep_recursion))) + __this_cpu_write(lockdep_recursion, 0); } void lockdep_set_selftest_task(struct task_struct *task) @@ -585,6 +622,8 @@ static const char *usage_str[] = #include "lockdep_states.h" #undef LOCKDEP_STATE [LOCK_USED] = "INITIAL USE", + [LOCK_USED_READ] = "INITIAL READ USE", + /* abused as string storage for verify_lock_unused() */ [LOCK_USAGE_STATES] = "IN-NMI", }; #endif @@ -1320,7 +1359,7 @@ static struct lock_list *alloc_list_entry(void) */ static int add_lock_to_list(struct lock_class *this, struct lock_class *links_to, struct list_head *head, - unsigned long ip, int distance, + unsigned long ip, u16 distance, u8 dep, const struct lock_trace *trace) { struct lock_list *entry; @@ -1334,6 +1373,7 @@ static int add_lock_to_list(struct lock_class *this, entry->class = this; entry->links_to = links_to; + entry->dep = dep; entry->distance = distance; entry->trace = trace; /* @@ -1421,23 +1461,19 @@ static inline unsigned int __cq_get_elem_count(struct circular_queue *cq) return (cq->rear - cq->front) & CQ_MASK; } -static inline void mark_lock_accessed(struct lock_list *lock, - struct lock_list *parent) +static inline void mark_lock_accessed(struct lock_list *lock) { - unsigned long nr; - - nr = lock - list_entries; - WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */ - lock->parent = parent; lock->class->dep_gen_id = lockdep_dependency_gen_id; } +static inline void visit_lock_entry(struct lock_list *lock, + struct lock_list *parent) +{ + lock->parent = parent; +} + static inline unsigned long lock_accessed(struct lock_list *lock) { - unsigned long nr; - - nr = lock - list_entries; - WARN_ON(nr >= ARRAY_SIZE(list_entries)); /* Out-of-bounds, input fail */ return lock->class->dep_gen_id == lockdep_dependency_gen_id; } @@ -1471,85 +1507,283 @@ static inline struct list_head *get_dep_list(struct lock_list *lock, int offset) return lock_class + offset; } +/* + * Return values of a bfs search: + * + * BFS_E* indicates an error + * BFS_R* indicates a result (match or not) + * + * BFS_EINVALIDNODE: Find a 
invalid node in the graph. + * + * BFS_EQUEUEFULL: The queue is full while doing the bfs. + * + * BFS_RMATCH: Find the matched node in the graph, and put that node into + * *@target_entry. + * + * BFS_RNOMATCH: Haven't found the matched node and keep *@target_entry + * _unchanged_. + */ +enum bfs_result { + BFS_EINVALIDNODE = -2, + BFS_EQUEUEFULL = -1, + BFS_RMATCH = 0, + BFS_RNOMATCH = 1, +}; /* - * Forward- or backward-dependency search, used for both circular dependency - * checking and hardirq-unsafe/softirq-unsafe checking. + * bfs_result < 0 means error */ -static int __bfs(struct lock_list *source_entry, - void *data, - int (*match)(struct lock_list *entry, void *data), - struct lock_list **target_entry, - int offset) +static inline bool bfs_error(enum bfs_result res) +{ + return res < 0; +} + +/* + * DEP_*_BIT in lock_list::dep + * + * For dependency @prev -> @next: + * + * SR: @prev is shared reader (->read != 0) and @next is recursive reader + * (->read == 2) + * ER: @prev is exclusive locker (->read == 0) and @next is recursive reader + * SN: @prev is shared reader and @next is non-recursive locker (->read != 2) + * EN: @prev is exclusive locker and @next is non-recursive locker + * + * Note that we define the value of DEP_*_BITs so that: + * bit0 is prev->read == 0 + * bit1 is next->read != 2 + */ +#define DEP_SR_BIT (0 + (0 << 1)) /* 0 */ +#define DEP_ER_BIT (1 + (0 << 1)) /* 1 */ +#define DEP_SN_BIT (0 + (1 << 1)) /* 2 */ +#define DEP_EN_BIT (1 + (1 << 1)) /* 3 */ + +#define DEP_SR_MASK (1U << (DEP_SR_BIT)) +#define DEP_ER_MASK (1U << (DEP_ER_BIT)) +#define DEP_SN_MASK (1U << (DEP_SN_BIT)) +#define DEP_EN_MASK (1U << (DEP_EN_BIT)) + +static inline unsigned int +__calc_dep_bit(struct held_lock *prev, struct held_lock *next) +{ + return (prev->read == 0) + ((next->read != 2) << 1); +} + +static inline u8 calc_dep(struct held_lock *prev, struct held_lock *next) +{ + return 1U << __calc_dep_bit(prev, next); +} + +/* + * calculate the dep_bit for backwards edges. We care about whether @prev is + * shared and whether @next is recursive. + */ +static inline unsigned int +__calc_dep_bitb(struct held_lock *prev, struct held_lock *next) +{ + return (next->read != 2) + ((prev->read == 0) << 1); +} + +static inline u8 calc_depb(struct held_lock *prev, struct held_lock *next) +{ + return 1U << __calc_dep_bitb(prev, next); +} + +/* + * Initialize a lock_list entry @lock belonging to @class as the root for a BFS + * search. + */ +static inline void __bfs_init_root(struct lock_list *lock, + struct lock_class *class) +{ + lock->class = class; + lock->parent = NULL; + lock->only_xr = 0; +} + +/* + * Initialize a lock_list entry @lock based on a lock acquisition @hlock as the + * root for a BFS search. + * + * ->only_xr of the initial lock node is set to @hlock->read == 2, to make sure + * that -> @hlock and @hlock -> is not -(*R)-> + * and -(S*)->. + */ +static inline void bfs_init_root(struct lock_list *lock, + struct held_lock *hlock) +{ + __bfs_init_root(lock, hlock_class(hlock)); + lock->only_xr = (hlock->read == 2); +} + +/* + * Similar to bfs_init_root() but initialize the root for backwards BFS. + * + * ->only_xr of the initial lock node is set to @hlock->read != 0, to make sure + * that -> @hlock and @hlock -> is not + * -(*S)-> and -(R*)-> (reverse order of -(*R)-> and -(S*)->). 
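 *
 * For reference, a worked example of the dependency-bit encoding above
 * (the values follow mechanically from __calc_dep_bit()/__calc_dep_bitb()):
 *
 *	prev acquired as a writer (read == 0),
 *	next acquired as a recursive reader (read == 2):
 *
 *	__calc_dep_bit()  = 1 + (0 << 1) = DEP_ER_BIT, calc_dep()  = DEP_ER_MASK
 *	__calc_dep_bitb() = 0 + (1 << 1) = DEP_SN_BIT, calc_depb() = DEP_SN_MASK
 *
 * i.e. the forward edge is recorded as -(ER)-> and the backward edge as
 * -(SN)->.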
+ */ +static inline void bfs_init_rootb(struct lock_list *lock, + struct held_lock *hlock) +{ + __bfs_init_root(lock, hlock_class(hlock)); + lock->only_xr = (hlock->read != 0); +} + +static inline struct lock_list *__bfs_next(struct lock_list *lock, int offset) +{ + if (!lock || !lock->parent) + return NULL; + + return list_next_or_null_rcu(get_dep_list(lock->parent, offset), + &lock->entry, struct lock_list, entry); +} + +/* + * Breadth-First Search to find a strong path in the dependency graph. + * + * @source_entry: the source of the path we are searching for. + * @data: data used for the second parameter of @match function + * @match: match function for the search + * @target_entry: pointer to the target of a matched path + * @offset: the offset to struct lock_class to determine whether it is + * locks_after or locks_before + * + * We may have multiple edges (considering different kinds of dependencies, + * e.g. ER and SN) between two nodes in the dependency graph. But + * only the strong dependency path in the graph is relevant to deadlocks. A + * strong dependency path is a dependency path that doesn't have two adjacent + * dependencies as -(*R)-> -(S*)->, please see: + * + * Documentation/locking/lockdep-design.rst + * + * for more explanation of the definition of strong dependency paths + * + * In __bfs(), we only traverse in the strong dependency path: + * + * In lock_list::only_xr, we record whether the previous dependency only + * has -(*R)-> in the search, and if it does (prev only has -(*R)->), we + * filter out any -(S*)-> in the current dependency and after that, the + * ->only_xr is set according to whether we only have -(*R)-> left. + */ +static enum bfs_result __bfs(struct lock_list *source_entry, + void *data, + bool (*match)(struct lock_list *entry, void *data), + struct lock_list **target_entry, + int offset) { - struct lock_list *entry; - struct lock_list *lock; - struct list_head *head; struct circular_queue *cq = &lock_cq; - int ret = 1; + struct lock_list *lock = NULL; + struct lock_list *entry; + struct list_head *head; + unsigned int cq_depth; + bool first; lockdep_assert_locked(); - if (match(source_entry, data)) { - *target_entry = source_entry; - ret = 0; - goto exit; - } - - head = get_dep_list(source_entry, offset); - if (list_empty(head)) - goto exit; - __cq_init(cq); __cq_enqueue(cq, source_entry); - while ((lock = __cq_dequeue(cq))) { + while ((lock = __bfs_next(lock, offset)) || (lock = __cq_dequeue(cq))) { + if (!lock->class) + return BFS_EINVALIDNODE; - if (!lock->class) { - ret = -2; - goto exit; + /* + * Step 1: check whether we already finish on this one. + * + * If we have visited all the dependencies from this @lock to + * others (iow, if we have visited all lock_list entries in + * @lock->class->locks_{after,before}) we skip, otherwise go + * and visit all the dependencies in the list and mark this + * list accessed. + */ + if (lock_accessed(lock)) + continue; + else + mark_lock_accessed(lock); + + /* + * Step 2: check whether prev dependency and this form a strong + * dependency path. + */ + if (lock->parent) { /* Parent exists, check prev dependency */ + u8 dep = lock->dep; + bool prev_only_xr = lock->parent->only_xr; + + /* + * Mask out all -(S*)-> if we only have *R in previous + * step, because -(*R)-> -(S*)-> don't make up a strong + * dependency. 
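 *
 * For example (hypothetical edge): if this node was reached only
 * via -(*R)-> (prev_only_xr == true) and its dep has
 * DEP_SR_MASK | DEP_EN_MASK set, the DEP_SR_MASK bit is cleared
 * below; the edge survives only as -(EN)-> and ->only_xr for the
 * next step therefore ends up false.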
+ */ + if (prev_only_xr) + dep &= ~(DEP_SR_MASK | DEP_SN_MASK); + + /* If nothing left, we skip */ + if (!dep) + continue; + + /* If there are only -(*R)-> left, set that for the next step */ + lock->only_xr = !(dep & (DEP_SN_MASK | DEP_EN_MASK)); } + /* + * Step 3: we haven't visited this and there is a strong + * dependency path to this, so check with @match. + */ + if (match(lock, data)) { + *target_entry = lock; + return BFS_RMATCH; + } + + /* + * Step 4: if not match, expand the path by adding the + * forward or backwards dependencis in the search + * + */ + first = true; head = get_dep_list(lock, offset); - list_for_each_entry_rcu(entry, head, entry) { - if (!lock_accessed(entry)) { - unsigned int cq_depth; - mark_lock_accessed(entry, lock); - if (match(entry, data)) { - *target_entry = entry; - ret = 0; - goto exit; - } + visit_lock_entry(entry, lock); - if (__cq_enqueue(cq, entry)) { - ret = -1; - goto exit; - } - cq_depth = __cq_get_elem_count(cq); - if (max_bfs_queue_depth < cq_depth) - max_bfs_queue_depth = cq_depth; - } + /* + * Note we only enqueue the first of the list into the + * queue, because we can always find a sibling + * dependency from one (see __bfs_next()), as a result + * the space of queue is saved. + */ + if (!first) + continue; + + first = false; + + if (__cq_enqueue(cq, entry)) + return BFS_EQUEUEFULL; + + cq_depth = __cq_get_elem_count(cq); + if (max_bfs_queue_depth < cq_depth) + max_bfs_queue_depth = cq_depth; } } -exit: - return ret; + + return BFS_RNOMATCH; } -static inline int __bfs_forwards(struct lock_list *src_entry, - void *data, - int (*match)(struct lock_list *entry, void *data), - struct lock_list **target_entry) +static inline enum bfs_result +__bfs_forwards(struct lock_list *src_entry, + void *data, + bool (*match)(struct lock_list *entry, void *data), + struct lock_list **target_entry) { return __bfs(src_entry, data, match, target_entry, offsetof(struct lock_class, locks_after)); } -static inline int __bfs_backwards(struct lock_list *src_entry, - void *data, - int (*match)(struct lock_list *entry, void *data), - struct lock_list **target_entry) +static inline enum bfs_result +__bfs_backwards(struct lock_list *src_entry, + void *data, + bool (*match)(struct lock_list *entry, void *data), + struct lock_list **target_entry) { return __bfs(src_entry, data, match, target_entry, offsetof(struct lock_class, locks_before)); @@ -1659,15 +1893,72 @@ print_circular_bug_header(struct lock_list *entry, unsigned int depth, print_circular_bug_entry(entry, depth); } -static inline int class_equal(struct lock_list *entry, void *data) +/* + * We are about to add A -> B into the dependency graph, and in __bfs() a + * strong dependency path A -> .. -> B is found: hlock_class equals + * entry->class. + * + * If A -> .. -> B can replace A -> B in any __bfs() search (means the former + * is _stronger_ than or equal to the latter), we consider A -> B as redundant. + * For example if A -> .. -> B is -(EN)-> (i.e. A -(E*)-> .. -(*N)-> B), and A + * -> B is -(ER)-> or -(EN)->, then we don't need to add A -> B into the + * dependency graph, as any strong path ..-> A -> B ->.. we can get with + * having dependency A -> B, we could already get a equivalent path ..-> A -> + * .. -> B -> .. with A -> .. -> B. Therefore A -> B is reduntant. + * + * We need to make sure both the start and the end of A -> .. -> B is not + * weaker than A -> B. For the start part, please see the comment in + * check_redundant(). 
For the end part, we need: + * + * Either + * + * a) A -> B is -(*R)-> (everything is not weaker than that) + * + * or + * + * b) A -> .. -> B is -(*N)-> (nothing is stronger than this) + * + */ +static inline bool hlock_equal(struct lock_list *entry, void *data) { - return entry->class == data; + struct held_lock *hlock = (struct held_lock *)data; + + return hlock_class(hlock) == entry->class && /* Found A -> .. -> B */ + (hlock->read == 2 || /* A -> B is -(*R)-> */ + !entry->only_xr); /* A -> .. -> B is -(*N)-> */ +} + +/* + * We are about to add B -> A into the dependency graph, and in __bfs() a + * strong dependency path A -> .. -> B is found: hlock_class equals + * entry->class. + * + * We will have a deadlock case (conflict) if A -> .. -> B -> A is a strong + * dependency cycle, that means: + * + * Either + * + * a) B -> A is -(E*)-> + * + * or + * + * b) A -> .. -> B is -(*N)-> (i.e. A -> .. -(*N)-> B) + * + * as then we don't have -(*R)-> -(S*)-> in the cycle. + */ +static inline bool hlock_conflict(struct lock_list *entry, void *data) +{ + struct held_lock *hlock = (struct held_lock *)data; + + return hlock_class(hlock) == entry->class && /* Found A -> .. -> B */ + (hlock->read == 0 || /* B -> A is -(E*)-> */ + !entry->only_xr); /* A -> .. -> B is -(*N)-> */ } static noinline void print_circular_bug(struct lock_list *this, - struct lock_list *target, - struct held_lock *check_src, - struct held_lock *check_tgt) + struct lock_list *target, + struct held_lock *check_src, + struct held_lock *check_tgt) { struct task_struct *curr = current; struct lock_list *parent; @@ -1714,10 +2005,10 @@ static noinline void print_bfs_bug(int ret) WARN(1, "lockdep bfs error:%d\n", ret); } -static int noop_count(struct lock_list *entry, void *data) +static bool noop_count(struct lock_list *entry, void *data) { (*(unsigned long *)data)++; - return 0; + return false; } static unsigned long __lockdep_count_forward_deps(struct lock_list *this) @@ -1734,8 +2025,7 @@ unsigned long lockdep_count_forward_deps(struct lock_class *class) unsigned long ret, flags; struct lock_list this; - this.parent = NULL; - this.class = class; + __bfs_init_root(&this, class); raw_local_irq_save(flags); lockdep_lock(); @@ -1761,8 +2051,7 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class) unsigned long ret, flags; struct lock_list this; - this.parent = NULL; - this.class = class; + __bfs_init_root(&this, class); raw_local_irq_save(flags); lockdep_lock(); @@ -1775,18 +2064,18 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class) /* * Check that the dependency graph starting at can lead to - * or not. Print an error and return 0 if it does. + * or not. */ -static noinline int -check_path(struct lock_class *target, struct lock_list *src_entry, +static noinline enum bfs_result +check_path(struct held_lock *target, struct lock_list *src_entry, + bool (*match)(struct lock_list *entry, void *data), struct lock_list **target_entry) { - int ret; + enum bfs_result ret; - ret = __bfs_forwards(src_entry, (void *)target, class_equal, - target_entry); + ret = __bfs_forwards(src_entry, target, match, target_entry); - if (unlikely(ret < 0)) + if (unlikely(bfs_error(ret))) print_bfs_bug(ret); return ret; @@ -1797,24 +2086,23 @@ check_path(struct lock_class *target, struct lock_list *src_entry, * lead to . If it can, there is a circle when adding * -> dependency. * - * Print an error and return 0 if it does. + * Print an error and return BFS_RMATCH if it does. 
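 *
 * For example: if <prev> is held as a recursive reader (read == 2) and every
 * path from <next> back to <prev> reaches <prev> only via -(*R)->
 * (entry->only_xr is set), hlock_conflict() never matches: closing such a
 * cycle with <prev> -> <next> would create the adjacent pair
 * -(*R)-> -(S*)->, so the cycle is not strong and no deadlock is reported.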
*/ -static noinline int +static noinline enum bfs_result check_noncircular(struct held_lock *src, struct held_lock *target, struct lock_trace **const trace) { - int ret; + enum bfs_result ret; struct lock_list *target_entry; - struct lock_list src_entry = { - .class = hlock_class(src), - .parent = NULL, - }; + struct lock_list src_entry; + + bfs_init_root(&src_entry, src); debug_atomic_inc(nr_cyclic_checks); - ret = check_path(hlock_class(target), &src_entry, &target_entry); + ret = check_path(target, &src_entry, hlock_conflict, &target_entry); - if (unlikely(!ret)) { + if (unlikely(ret == BFS_RMATCH)) { if (!*trace) { /* * If save_trace fails here, the printing might @@ -1836,27 +2124,35 @@ check_noncircular(struct held_lock *src, struct held_lock *target, * or not. If it can, -> dependency is already * in the graph. * - * Print an error and return 2 if it does or 1 if it does not. + * Return BFS_RMATCH if it does, or BFS_RMATCH if it does not, return BFS_E* if + * any error appears in the bfs search. */ -static noinline int +static noinline enum bfs_result check_redundant(struct held_lock *src, struct held_lock *target) { - int ret; + enum bfs_result ret; struct lock_list *target_entry; - struct lock_list src_entry = { - .class = hlock_class(src), - .parent = NULL, - }; + struct lock_list src_entry; + + bfs_init_root(&src_entry, src); + /* + * Special setup for check_redundant(). + * + * To report redundant, we need to find a strong dependency path that + * is equal to or stronger than -> . So if is E, + * we need to let __bfs() only search for a path starting at a -(E*)->, + * we achieve this by setting the initial node's ->only_xr to true in + * that case. And if is S, we set initial ->only_xr to false + * because both -(S*)-> (equal) and -(E*)-> (stronger) are redundant. + */ + src_entry.only_xr = src->read == 0; debug_atomic_inc(nr_redundant_checks); - ret = check_path(hlock_class(target), &src_entry, &target_entry); + ret = check_path(target, &src_entry, hlock_equal, &target_entry); - if (!ret) { + if (ret == BFS_RMATCH) debug_atomic_inc(nr_redundant); - ret = 2; - } else if (ret < 0) - ret = 0; return ret; } @@ -1864,39 +2160,86 @@ check_redundant(struct held_lock *src, struct held_lock *target) #ifdef CONFIG_TRACE_IRQFLAGS -static inline int usage_accumulate(struct lock_list *entry, void *mask) -{ - *(unsigned long *)mask |= entry->class->usage_mask; - - return 0; -} - /* * Forwards and backwards subgraph searching, for the purposes of * proving that two subgraphs can be connected by a new dependency * without creating any illegal irq-safe -> irq-unsafe lock dependency. + * + * A irq safe->unsafe deadlock happens with the following conditions: + * + * 1) We have a strong dependency path A -> ... -> B + * + * 2) and we have ENABLED_IRQ usage of B and USED_IN_IRQ usage of A, therefore + * irq can create a new dependency B -> A (consider the case that a holder + * of B gets interrupted by an irq whose handler will try to acquire A). + * + * 3) the dependency circle A -> ... -> B -> A we get from 1) and 2) is a + * strong circle: + * + * For the usage bits of B: + * a) if A -> B is -(*N)->, then B -> A could be any type, so any + * ENABLED_IRQ usage suffices. + * b) if A -> B is -(*R)->, then B -> A must be -(E*)->, so only + * ENABLED_IRQ_*_READ usage suffices. + * + * For the usage bits of A: + * c) if A -> B is -(E*)->, then B -> A could be any type, so any + * USED_IN_IRQ usage suffices. 
+ * d) if A -> B is -(S*)->, then B -> A must be -(*N)->, so only + * USED_IN_IRQ_*_READ usage suffices. */ -static inline int usage_match(struct lock_list *entry, void *mask) +/* + * There is a strong dependency path in the dependency graph: A -> B, and now + * we need to decide which usage bit of A should be accumulated to detect + * safe->unsafe bugs. + * + * Note that usage_accumulate() is used in backwards search, so ->only_xr + * stands for whether A -> B only has -(S*)-> (in this case ->only_xr is true). + * + * As above, if only_xr is false, which means A -> B has -(E*)-> dependency + * path, any usage of A should be considered. Otherwise, we should only + * consider _READ usage. + */ +static inline bool usage_accumulate(struct lock_list *entry, void *mask) { - return entry->class->usage_mask & *(unsigned long *)mask; + if (!entry->only_xr) + *(unsigned long *)mask |= entry->class->usage_mask; + else /* Mask out _READ usage bits */ + *(unsigned long *)mask |= (entry->class->usage_mask & LOCKF_IRQ); + + return false; +} + +/* + * There is a strong dependency path in the dependency graph: A -> B, and now + * we need to decide which usage bit of B conflicts with the usage bits of A, + * i.e. which usage bit of B may introduce safe->unsafe deadlocks. + * + * As above, if only_xr is false, which means A -> B has -(*N)-> dependency + * path, any usage of B should be considered. Otherwise, we should only + * consider _READ usage. + */ +static inline bool usage_match(struct lock_list *entry, void *mask) +{ + if (!entry->only_xr) + return !!(entry->class->usage_mask & *(unsigned long *)mask); + else /* Mask out _READ usage bits */ + return !!((entry->class->usage_mask & LOCKF_IRQ) & *(unsigned long *)mask); } /* * Find a node in the forwards-direction dependency sub-graph starting * at @root->class that matches @bit. * - * Return 0 if such a node exists in the subgraph, and put that node + * Return BFS_MATCH if such a node exists in the subgraph, and put that node * into *@target_entry. - * - * Return 1 otherwise and keep *@target_entry unchanged. - * Return <0 on error. */ -static int +static enum bfs_result find_usage_forwards(struct lock_list *root, unsigned long usage_mask, struct lock_list **target_entry) { - int result; + enum bfs_result result; debug_atomic_inc(nr_find_usage_forwards_checks); @@ -1908,18 +2251,12 @@ find_usage_forwards(struct lock_list *root, unsigned long usage_mask, /* * Find a node in the backwards-direction dependency sub-graph starting * at @root->class that matches @bit. - * - * Return 0 if such a node exists in the subgraph, and put that node - * into *@target_entry. - * - * Return 1 otherwise and keep *@target_entry unchanged. - * Return <0 on error. */ -static int +static enum bfs_result find_usage_backwards(struct lock_list *root, unsigned long usage_mask, struct lock_list **target_entry) { - int result; + enum bfs_result result; debug_atomic_inc(nr_find_usage_backwards_checks); @@ -1939,7 +2276,7 @@ static void print_lock_class_header(struct lock_class *class, int depth) #endif printk(KERN_CONT " {\n"); - for (bit = 0; bit < LOCK_USAGE_STATES; bit++) { + for (bit = 0; bit < LOCK_TRACE_STATES; bit++) { if (class->usage_mask & (1 << bit)) { int len = depth; @@ -2179,17 +2516,39 @@ static unsigned long invert_dir_mask(unsigned long mask) } /* - * As above, we clear bitnr0 (LOCK_*_READ off) with bitmask ops. First, for all - * bits with bitnr0 set (LOCK_*_READ), add those with bitnr0 cleared (LOCK_*). - * And then mask out all bitnr0. 
+ * Note that a LOCK_ENABLED_IRQ_*_READ usage and a LOCK_USED_IN_IRQ_*_READ + * usage may cause deadlock too, for example: + * + * P1 P2 + * + * write_lock(l1); + * read_lock(l2); + * write_lock(l2); + * + * read_lock(l1); + * + * , in above case, l1 will be marked as LOCK_USED_IN_IRQ_HARDIRQ_READ and l2 + * will marked as LOCK_ENABLE_IRQ_HARDIRQ_READ, and this is a possible + * deadlock. + * + * In fact, all of the following cases may cause deadlocks: + * + * LOCK_USED_IN_IRQ_* -> LOCK_ENABLED_IRQ_* + * LOCK_USED_IN_IRQ_*_READ -> LOCK_ENABLED_IRQ_* + * LOCK_USED_IN_IRQ_* -> LOCK_ENABLED_IRQ_*_READ + * LOCK_USED_IN_IRQ_*_READ -> LOCK_ENABLED_IRQ_*_READ + * + * As a result, to calculate the "exclusive mask", first we invert the + * direction (USED_IN/ENABLED) of the original mask, and 1) for all bits with + * bitnr0 set (LOCK_*_READ), add those with bitnr0 cleared (LOCK_*). 2) for all + * bits with bitnr0 cleared (LOCK_*_READ), add those with bitnr0 set (LOCK_*). */ static unsigned long exclusive_mask(unsigned long mask) { unsigned long excl = invert_dir_mask(mask); - /* Strip read */ excl |= (excl & LOCKF_IRQ_READ) >> LOCK_USAGE_READ_MASK; - excl &= ~LOCKF_IRQ_READ; + excl |= (excl & LOCKF_IRQ) << LOCK_USAGE_READ_MASK; return excl; } @@ -2206,6 +2565,7 @@ static unsigned long original_mask(unsigned long mask) unsigned long excl = invert_dir_mask(mask); /* Include read in existing usages */ + excl |= (excl & LOCKF_IRQ_READ) >> LOCK_USAGE_READ_MASK; excl |= (excl & LOCKF_IRQ) << LOCK_USAGE_READ_MASK; return excl; @@ -2220,14 +2580,24 @@ static int find_exclusive_match(unsigned long mask, enum lock_usage_bit *bitp, enum lock_usage_bit *excl_bitp) { - int bit, excl; + int bit, excl, excl_read; for_each_set_bit(bit, &mask, LOCK_USED) { + /* + * exclusive_bit() strips the read bit, however, + * LOCK_ENABLED_IRQ_*_READ may cause deadlocks too, so we need + * to search excl | LOCK_USAGE_READ_MASK as well. + */ excl = exclusive_bit(bit); + excl_read = excl | LOCK_USAGE_READ_MASK; if (excl_mask & lock_flag(excl)) { *bitp = bit; *excl_bitp = excl; return 0; + } else if (excl_mask & lock_flag(excl_read)) { + *bitp = bit; + *excl_bitp = excl_read; + return 0; } } return -1; @@ -2247,17 +2617,16 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev, struct lock_list *target_entry1; struct lock_list *target_entry; struct lock_list this, that; - int ret; + enum bfs_result ret; /* * Step 1: gather all hard/soft IRQs usages backward in an * accumulated usage mask. */ - this.parent = NULL; - this.class = hlock_class(prev); + bfs_init_rootb(&this, prev); ret = __bfs_backwards(&this, &usage_mask, usage_accumulate, NULL); - if (ret < 0) { + if (bfs_error(ret)) { print_bfs_bug(ret); return 0; } @@ -2272,16 +2641,15 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev, */ forward_mask = exclusive_mask(usage_mask); - that.parent = NULL; - that.class = hlock_class(next); + bfs_init_root(&that, next); ret = find_usage_forwards(&that, forward_mask, &target_entry1); - if (ret < 0) { + if (bfs_error(ret)) { print_bfs_bug(ret); return 0; } - if (ret == 1) - return ret; + if (ret == BFS_RNOMATCH) + return 1; /* * Step 3: we found a bad match! 
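 *
 * (As a concrete illustration of steps 1 and 2, assuming the usual bit
 * layout where each LOCKF_*_READ bit sits one above its non-read
 * counterpart: if usage_mask accumulated only LOCKF_USED_IN_HARDIRQ_READ,
 * then exclusive_mask() yields
 * LOCKF_ENABLED_HARDIRQ | LOCKF_ENABLED_HARDIRQ_READ, so the match found
 * here may be either a plain or a _READ irq-enabled usage.)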
Now retrieve a lock from the backward @@ -2291,11 +2659,11 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev, backward_mask = original_mask(target_entry1->class->usage_mask); ret = find_usage_backwards(&this, backward_mask, &target_entry); - if (ret < 0) { + if (bfs_error(ret)) { print_bfs_bug(ret); return 0; } - if (DEBUG_LOCKS_WARN_ON(ret == 1)) + if (DEBUG_LOCKS_WARN_ON(ret == BFS_RNOMATCH)) return 1; /* @@ -2459,11 +2827,11 @@ check_deadlock(struct task_struct *curr, struct held_lock *next) */ static int check_prev_add(struct task_struct *curr, struct held_lock *prev, - struct held_lock *next, int distance, + struct held_lock *next, u16 distance, struct lock_trace **const trace) { struct lock_list *entry; - int ret; + enum bfs_result ret; if (!hlock_class(prev)->key || !hlock_class(next)->key) { /* @@ -2494,22 +2862,12 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev, * in the graph whose neighbours are to be checked. */ ret = check_noncircular(next, prev, trace); - if (unlikely(ret <= 0)) + if (unlikely(bfs_error(ret) || ret == BFS_RMATCH)) return 0; if (!check_irq_usage(curr, prev, next)) return 0; - /* - * For recursive read-locks we do all the dependency checks, - * but we dont store read-triggered dependencies (only - * write-triggered dependencies). This ensures that only the - * write-side dependencies matter, and that if for example a - * write-lock never takes any other locks, then the reads are - * equivalent to a NOP. - */ - if (next->read == 2 || prev->read == 2) - return 1; /* * Is the -> dependency already present? * @@ -2522,7 +2880,35 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev, if (entry->class == hlock_class(next)) { if (distance == 1) entry->distance = 1; - return 1; + entry->dep |= calc_dep(prev, next); + + /* + * Also, update the reverse dependency in @next's + * ->locks_before list. + * + * Here we reuse @entry as the cursor, which is fine + * because we won't go to the next iteration of the + * outer loop: + * + * For normal cases, we return in the inner loop. + * + * If we fail to return, we have inconsistency, i.e. + * ::locks_after contains while + * ::locks_before doesn't contain . In + * that case, we return after the inner and indicate + * something is wrong. + */ + list_for_each_entry(entry, &hlock_class(next)->locks_before, entry) { + if (entry->class == hlock_class(prev)) { + if (distance == 1) + entry->distance = 1; + entry->dep |= calc_depb(prev, next); + return 1; + } + } + + /* is not found in ::locks_before */ + return 0; } } @@ -2531,8 +2917,10 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev, * Is the -> link redundant? 
*/ ret = check_redundant(prev, next); - if (ret != 1) - return ret; + if (bfs_error(ret)) + return 0; + else if (ret == BFS_RMATCH) + return 2; #endif if (!*trace) { @@ -2547,14 +2935,18 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev, */ ret = add_lock_to_list(hlock_class(next), hlock_class(prev), &hlock_class(prev)->locks_after, - next->acquire_ip, distance, *trace); + next->acquire_ip, distance, + calc_dep(prev, next), + *trace); if (!ret) return 0; ret = add_lock_to_list(hlock_class(prev), hlock_class(next), &hlock_class(next)->locks_before, - next->acquire_ip, distance, *trace); + next->acquire_ip, distance, + calc_depb(prev, next), + *trace); if (!ret) return 0; @@ -2590,16 +2982,11 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next) goto out_bug; for (;;) { - int distance = curr->lockdep_depth - depth + 1; + u16 distance = curr->lockdep_depth - depth + 1; hlock = curr->held_locks + depth - 1; - /* - * Only non-recursive-read entries get new dependencies - * added: - */ - if (hlock->read != 2 && hlock->check) { - int ret = check_prev_add(curr, hlock, next, distance, - &trace); + if (hlock->check) { + int ret = check_prev_add(curr, hlock, next, distance, &trace); if (!ret) return 0; @@ -2875,7 +3262,10 @@ static inline void free_chain_hlocks(int base, int size) struct lock_class *lock_chain_get_class(struct lock_chain *chain, int i) { - return lock_classes + chain_hlocks[chain->base + i]; + u16 chain_hlock = chain_hlocks[chain->base + i]; + unsigned int class_idx = chain_hlock_class_idx(chain_hlock); + + return lock_classes + class_idx - 1; } /* @@ -2901,12 +3291,12 @@ static inline int get_first_held_lock(struct task_struct *curr, /* * Returns the next chain_key iteration */ -static u64 print_chain_key_iteration(int class_idx, u64 chain_key) +static u64 print_chain_key_iteration(u16 hlock_id, u64 chain_key) { - u64 new_chain_key = iterate_chain_key(chain_key, class_idx); + u64 new_chain_key = iterate_chain_key(chain_key, hlock_id); - printk(" class_idx:%d -> chain_key:%016Lx", - class_idx, + printk(" hlock_id:%d -> chain_key:%016Lx", + (unsigned int)hlock_id, (unsigned long long)new_chain_key); return new_chain_key; } @@ -2923,12 +3313,12 @@ print_chain_keys_held_locks(struct task_struct *curr, struct held_lock *hlock_ne hlock_next->irq_context); for (; i < depth; i++) { hlock = curr->held_locks + i; - chain_key = print_chain_key_iteration(hlock->class_idx, chain_key); + chain_key = print_chain_key_iteration(hlock_id(hlock), chain_key); print_lock(hlock); } - print_chain_key_iteration(hlock_next->class_idx, chain_key); + print_chain_key_iteration(hlock_id(hlock_next), chain_key); print_lock(hlock_next); } @@ -2936,14 +3326,14 @@ static void print_chain_keys_chain(struct lock_chain *chain) { int i; u64 chain_key = INITIAL_CHAIN_KEY; - int class_id; + u16 hlock_id; printk("depth: %u\n", chain->depth); for (i = 0; i < chain->depth; i++) { - class_id = chain_hlocks[chain->base + i]; - chain_key = print_chain_key_iteration(class_id, chain_key); + hlock_id = chain_hlocks[chain->base + i]; + chain_key = print_chain_key_iteration(hlock_id, chain_key); - print_lock_name(lock_classes + class_id); + print_lock_name(lock_classes + chain_hlock_class_idx(hlock_id) - 1); printk("\n"); } } @@ -2992,7 +3382,7 @@ static int check_no_collision(struct task_struct *curr, } for (j = 0; j < chain->depth - 1; j++, i++) { - id = curr->held_locks[i].class_idx; + id = hlock_id(&curr->held_locks[i]); if (DEBUG_LOCKS_WARN_ON(chain_hlocks[chain->base + j] != id)) { 
print_collision(curr, hlock, chain); @@ -3041,7 +3431,6 @@ static inline int add_chain_cache(struct task_struct *curr, struct held_lock *hlock, u64 chain_key) { - struct lock_class *class = hlock_class(hlock); struct hlist_head *hash_head = chainhashentry(chain_key); struct lock_chain *chain; int i, j; @@ -3084,11 +3473,11 @@ static inline int add_chain_cache(struct task_struct *curr, chain->base = j; for (j = 0; j < chain->depth - 1; j++, i++) { - int lock_id = curr->held_locks[i].class_idx; + int lock_id = hlock_id(curr->held_locks + i); chain_hlocks[chain->base + j] = lock_id; } - chain_hlocks[chain->base + j] = class - lock_classes; + chain_hlocks[chain->base + j] = hlock_id(hlock); hlist_add_head_rcu(&chain->entry, hash_head); debug_atomic_inc(chain_lookup_misses); inc_chains(chain->irq_context); @@ -3275,7 +3664,7 @@ static void check_chain_key(struct task_struct *curr) if (prev_hlock && (prev_hlock->irq_context != hlock->irq_context)) chain_key = INITIAL_CHAIN_KEY; - chain_key = iterate_chain_key(chain_key, hlock->class_idx); + chain_key = iterate_chain_key(chain_key, hlock_id(hlock)); prev_hlock = hlock; } if (chain_key != curr->curr_chain_key) { @@ -3434,24 +3823,32 @@ print_irq_inversion_bug(struct task_struct *curr, */ static int check_usage_forwards(struct task_struct *curr, struct held_lock *this, - enum lock_usage_bit bit, const char *irqclass) + enum lock_usage_bit bit) { - int ret; + enum bfs_result ret; struct lock_list root; struct lock_list *target_entry; + enum lock_usage_bit read_bit = bit + LOCK_USAGE_READ_MASK; + unsigned usage_mask = lock_flag(bit) | lock_flag(read_bit); - root.parent = NULL; - root.class = hlock_class(this); - ret = find_usage_forwards(&root, lock_flag(bit), &target_entry); - if (ret < 0) { + bfs_init_root(&root, this); + ret = find_usage_forwards(&root, usage_mask, &target_entry); + if (bfs_error(ret)) { print_bfs_bug(ret); return 0; } - if (ret == 1) - return ret; + if (ret == BFS_RNOMATCH) + return 1; + + /* Check whether write or read usage is the match */ + if (target_entry->class->usage_mask & lock_flag(bit)) { + print_irq_inversion_bug(curr, &root, target_entry, + this, 1, state_name(bit)); + } else { + print_irq_inversion_bug(curr, &root, target_entry, + this, 1, state_name(read_bit)); + } - print_irq_inversion_bug(curr, &root, target_entry, - this, 1, irqclass); return 0; } @@ -3461,24 +3858,32 @@ check_usage_forwards(struct task_struct *curr, struct held_lock *this, */ static int check_usage_backwards(struct task_struct *curr, struct held_lock *this, - enum lock_usage_bit bit, const char *irqclass) + enum lock_usage_bit bit) { - int ret; + enum bfs_result ret; struct lock_list root; struct lock_list *target_entry; + enum lock_usage_bit read_bit = bit + LOCK_USAGE_READ_MASK; + unsigned usage_mask = lock_flag(bit) | lock_flag(read_bit); - root.parent = NULL; - root.class = hlock_class(this); - ret = find_usage_backwards(&root, lock_flag(bit), &target_entry); - if (ret < 0) { + bfs_init_rootb(&root, this); + ret = find_usage_backwards(&root, usage_mask, &target_entry); + if (bfs_error(ret)) { print_bfs_bug(ret); return 0; } - if (ret == 1) - return ret; + if (ret == BFS_RNOMATCH) + return 1; + + /* Check whether write or read usage is the match */ + if (target_entry->class->usage_mask & lock_flag(bit)) { + print_irq_inversion_bug(curr, &root, target_entry, + this, 0, state_name(bit)); + } else { + print_irq_inversion_bug(curr, &root, target_entry, + this, 0, state_name(read_bit)); + } - print_irq_inversion_bug(curr, &root, target_entry, - 
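
add_chain_cache() and check_chain_key() now store and hash hlock_id() values instead of bare class indexes, so two lock chains that differ only in read vs. write acquisition no longer collide in the chain cache (the rlock_chaincache_ABBA1 selftest further below pokes exactly this). A hypothetical sketch of such a packing; the field width and the helper names are illustrative assumptions:

    #define EX_CLASS_IDX_BITS  13                          /* assumed width */
    #define EX_CLASS_IDX_MASK  ((1U << EX_CLASS_IDX_BITS) - 1)

    /* Pack the class index and the acquisition type into one u16 entry. */
    static inline u16 example_hlock_id(unsigned int class_idx, unsigned int read)
    {
            return (u16)(class_idx | (read << EX_CLASS_IDX_BITS));
    }

    static inline unsigned int example_class_idx(u16 id)
    {
            return id & EX_CLASS_IDX_MASK;
    }

    static inline unsigned int example_read_type(u16 id)
    {
            return id >> EX_CLASS_IDX_BITS;
    }
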
this, 0, irqclass); return 0; } @@ -3517,8 +3922,6 @@ static int SOFTIRQ_verbose(struct lock_class *class) return 0; } -#define STRICT_READ_CHECKS 1 - static int (*state_verbose_f[])(struct lock_class *class) = { #define LOCKDEP_STATE(__STATE) \ __STATE##_verbose, @@ -3543,16 +3946,6 @@ mark_lock_irq(struct task_struct *curr, struct held_lock *this, int read = new_bit & LOCK_USAGE_READ_MASK; int dir = new_bit & LOCK_USAGE_DIR_MASK; - /* - * mark USED_IN has to look forwards -- to ensure no dependency - * has ENABLED state, which would allow recursion deadlocks. - * - * mark ENABLED has to look backwards -- to ensure no dependee - * has USED_IN state, which, again, would allow recursion deadlocks. - */ - check_usage_f usage = dir ? - check_usage_backwards : check_usage_forwards; - /* * Validate that this particular lock does not have conflicting * usage states. @@ -3560,24 +3953,31 @@ mark_lock_irq(struct task_struct *curr, struct held_lock *this, if (!valid_state(curr, this, new_bit, excl_bit)) return 0; + /* + * Check for read in write conflicts + */ + if (!read && !valid_state(curr, this, new_bit, + excl_bit + LOCK_USAGE_READ_MASK)) + return 0; + + /* * Validate that the lock dependencies don't have conflicting usage * states. */ - if ((!read || STRICT_READ_CHECKS) && - !usage(curr, this, excl_bit, state_name(new_bit & ~LOCK_USAGE_READ_MASK))) - return 0; - - /* - * Check for read in write conflicts - */ - if (!read) { - if (!valid_state(curr, this, new_bit, excl_bit + LOCK_USAGE_READ_MASK)) + if (dir) { + /* + * mark ENABLED has to look backwards -- to ensure no dependee + * has USED_IN state, which, again, would allow recursion deadlocks. + */ + if (!check_usage_backwards(curr, this, excl_bit)) return 0; - - if (STRICT_READ_CHECKS && - !usage(curr, this, excl_bit + LOCK_USAGE_READ_MASK, - state_name(new_bit + LOCK_USAGE_READ_MASK))) + } else { + /* + * mark USED_IN has to look forwards -- to ensure no dependency + * has ENABLED state, which would allow recursion deadlocks. 
+ */ + if (!check_usage_forwards(curr, this, excl_bit)) return 0; } @@ -3657,7 +4057,7 @@ void lockdep_hardirqs_on_prepare(unsigned long ip) if (unlikely(in_nmi())) return; - if (unlikely(current->lockdep_recursion & LOCKDEP_RECURSION_MASK)) + if (unlikely(__this_cpu_read(lockdep_recursion))) return; if (unlikely(lockdep_hardirqs_enabled())) { @@ -3693,7 +4093,7 @@ void lockdep_hardirqs_on_prepare(unsigned long ip) current->hardirq_chain_key = current->curr_chain_key; - current->lockdep_recursion++; + lockdep_recursion_inc(); __trace_hardirqs_on_caller(); lockdep_recursion_finish(); } @@ -3726,7 +4126,7 @@ void noinstr lockdep_hardirqs_on(unsigned long ip) goto skip_checks; } - if (unlikely(current->lockdep_recursion & LOCKDEP_RECURSION_MASK)) + if (unlikely(__this_cpu_read(lockdep_recursion))) return; if (lockdep_hardirqs_enabled()) { @@ -3779,7 +4179,7 @@ void noinstr lockdep_hardirqs_off(unsigned long ip) if (in_nmi()) { if (!IS_ENABLED(CONFIG_TRACE_IRQFLAGS_NMI)) return; - } else if (current->lockdep_recursion & LOCKDEP_RECURSION_MASK) + } else if (__this_cpu_read(lockdep_recursion)) return; /* @@ -3812,7 +4212,7 @@ void lockdep_softirqs_on(unsigned long ip) { struct irqtrace_events *trace = ¤t->irqtrace; - if (unlikely(!debug_locks || current->lockdep_recursion)) + if (unlikely(!lockdep_enabled())) return; /* @@ -3827,7 +4227,7 @@ void lockdep_softirqs_on(unsigned long ip) return; } - current->lockdep_recursion++; + lockdep_recursion_inc(); /* * We'll do an OFF -> ON transition: */ @@ -3850,7 +4250,7 @@ void lockdep_softirqs_on(unsigned long ip) */ void lockdep_softirqs_off(unsigned long ip) { - if (unlikely(!debug_locks || current->lockdep_recursion)) + if (unlikely(!lockdep_enabled())) return; /* @@ -3969,7 +4369,7 @@ static int separate_irq_context(struct task_struct *curr, static int mark_lock(struct task_struct *curr, struct held_lock *this, enum lock_usage_bit new_bit) { - unsigned int old_mask, new_mask, ret = 1; + unsigned int new_mask, ret = 1; if (new_bit >= LOCK_USAGE_STATES) { DEBUG_LOCKS_WARN_ON(1); @@ -3996,30 +4396,26 @@ static int mark_lock(struct task_struct *curr, struct held_lock *this, if (unlikely(hlock_class(this)->usage_mask & new_mask)) goto unlock; - old_mask = hlock_class(this)->usage_mask; hlock_class(this)->usage_mask |= new_mask; - /* - * Save one usage_traces[] entry and map both LOCK_USED and - * LOCK_USED_READ onto the same entry. - */ - if (new_bit == LOCK_USED || new_bit == LOCK_USED_READ) { - if (old_mask & (LOCKF_USED | LOCKF_USED_READ)) - goto unlock; - new_bit = LOCK_USED; + if (new_bit < LOCK_TRACE_STATES) { + if (!(hlock_class(this)->usage_traces[new_bit] = save_trace())) + return 0; } - if (!(hlock_class(this)->usage_traces[new_bit] = save_trace())) - return 0; - switch (new_bit) { - case LOCK_USED: - debug_atomic_dec(nr_unused_locks); - break; - default: + case 0 ... 
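
These hunks replace tests of current->lockdep_recursion with __this_cpu_read(lockdep_recursion), lockdep_recursion_inc() and lockdep_enabled(), none of which are defined in this part of the diff. Their assumed shapes, sketched here for readability (the exact bodies may differ):

    DEFINE_PER_CPU(unsigned int, lockdep_recursion);

    static __always_inline void lockdep_recursion_inc(void)
    {
            __this_cpu_inc(lockdep_recursion);
    }

    /* lockdep is usable only when debugging is on and we are not reentrant. */
    static __always_inline bool lockdep_enabled(void)
    {
            if (!debug_locks)
                    return false;

            if (this_cpu_read(lockdep_recursion))
                    return false;

            if (current->lockdep_recursion)
                    return false;

            return true;
    }

A per-CPU counter also lets NMI-time checks (see lockdep_nmi() below, which uses raw_cpu_read()) consult the recursion state without dereferencing current.
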
LOCK_USED-1: ret = mark_lock_irq(curr, this, new_bit); if (!ret) return 0; + break; + + case LOCK_USED: + debug_atomic_dec(nr_unused_locks); + break; + + default: + break; } unlock: @@ -4235,11 +4631,11 @@ void lockdep_init_map_waits(struct lockdep_map *lock, const char *name, if (subclass) { unsigned long flags; - if (DEBUG_LOCKS_WARN_ON(current->lockdep_recursion)) + if (DEBUG_LOCKS_WARN_ON(!lockdep_enabled())) return; raw_local_irq_save(flags); - current->lockdep_recursion++; + lockdep_recursion_inc(); register_lock_class(lock, subclass, 1); lockdep_recursion_finish(); raw_local_irq_restore(flags); @@ -4426,7 +4822,7 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass, chain_key = INITIAL_CHAIN_KEY; chain_head = 1; } - chain_key = iterate_chain_key(chain_key, class_idx); + chain_key = iterate_chain_key(chain_key, hlock_id(hlock)); if (nest_lock && !__lock_is_held(nest_lock, -1)) { print_lock_nested_lock_not_held(curr, hlock, ip); @@ -4922,11 +5318,11 @@ void lock_set_class(struct lockdep_map *lock, const char *name, { unsigned long flags; - if (unlikely(current->lockdep_recursion)) + if (unlikely(!lockdep_enabled())) return; raw_local_irq_save(flags); - current->lockdep_recursion++; + lockdep_recursion_inc(); check_flags(flags); if (__lock_set_class(lock, name, key, subclass, ip)) check_chain_key(current); @@ -4939,11 +5335,11 @@ void lock_downgrade(struct lockdep_map *lock, unsigned long ip) { unsigned long flags; - if (unlikely(current->lockdep_recursion)) + if (unlikely(!lockdep_enabled())) return; raw_local_irq_save(flags); - current->lockdep_recursion++; + lockdep_recursion_inc(); check_flags(flags); if (__lock_downgrade(lock, ip)) check_chain_key(current); @@ -4981,7 +5377,7 @@ static void verify_lock_unused(struct lockdep_map *lock, struct held_lock *hlock static bool lockdep_nmi(void) { - if (current->lockdep_recursion & LOCKDEP_RECURSION_MASK) + if (raw_cpu_read(lockdep_recursion)) return false; if (!in_nmi()) @@ -4990,6 +5386,20 @@ static bool lockdep_nmi(void) return true; } +/* + * read_lock() is recursive if: + * 1. We force lockdep think this way in selftests or + * 2. The implementation is not queued read/write lock or + * 3. The locker is at an in_interrupt() context. + */ +bool read_lock_is_recursive(void) +{ + return force_read_lock_recursive || + !IS_ENABLED(CONFIG_QUEUED_RWLOCKS) || + in_interrupt(); +} +EXPORT_SYMBOL_GPL(read_lock_is_recursive); + /* * We are not always called with irqs disabled - do that here, * and also avoid lockdep recursion: @@ -5002,7 +5412,10 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass, trace_lock_acquire(lock, subclass, trylock, read, check, nest_lock, ip); - if (unlikely(current->lockdep_recursion)) { + if (!debug_locks) + return; + + if (unlikely(!lockdep_enabled())) { /* XXX allow trylock from NMI ?!? 
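
read_lock_is_recursive() above only answers "should this read_lock() be treated as recursive?"; the answer is consumed at annotation time. A hypothetical caller, to make the intent concrete (the helper name and its placement are assumptions of this sketch; lock_acquire_shared() and lock_acquire_shared_recursive() are lockdep's existing shared-acquire wrappers):

    static inline void example_rwlock_acquire_read(struct lockdep_map *map,
                                                   unsigned long ip)
    {
            if (read_lock_is_recursive())
                    /* read == 2: recursive reader, only write *holders* block us */
                    lock_acquire_shared_recursive(map, 0, 0, NULL, ip);
            else
                    /* read == 1: non-recursive reader, write *waiters* block us too */
                    lock_acquire_shared(map, 0, 0, NULL, ip);
    }
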
*/ if (lockdep_nmi() && !trylock) { struct held_lock hlock; @@ -5025,7 +5438,7 @@ void lock_acquire(struct lockdep_map *lock, unsigned int subclass, raw_local_irq_save(flags); check_flags(flags); - current->lockdep_recursion++; + lockdep_recursion_inc(); __lock_acquire(lock, subclass, trylock, read, check, irqs_disabled_flags(flags), nest_lock, ip, 0, 0); lockdep_recursion_finish(); @@ -5039,13 +5452,13 @@ void lock_release(struct lockdep_map *lock, unsigned long ip) trace_lock_release(lock, ip); - if (unlikely(current->lockdep_recursion)) + if (unlikely(!lockdep_enabled())) return; raw_local_irq_save(flags); check_flags(flags); - current->lockdep_recursion++; + lockdep_recursion_inc(); if (__lock_release(lock, ip)) check_chain_key(current); lockdep_recursion_finish(); @@ -5058,13 +5471,13 @@ noinstr int lock_is_held_type(const struct lockdep_map *lock, int read) unsigned long flags; int ret = 0; - if (unlikely(current->lockdep_recursion)) + if (unlikely(!lockdep_enabled())) return 1; /* avoid false negative lockdep_assert_held() */ raw_local_irq_save(flags); check_flags(flags); - current->lockdep_recursion++; + lockdep_recursion_inc(); ret = __lock_is_held(lock, read); lockdep_recursion_finish(); raw_local_irq_restore(flags); @@ -5079,13 +5492,13 @@ struct pin_cookie lock_pin_lock(struct lockdep_map *lock) struct pin_cookie cookie = NIL_COOKIE; unsigned long flags; - if (unlikely(current->lockdep_recursion)) + if (unlikely(!lockdep_enabled())) return cookie; raw_local_irq_save(flags); check_flags(flags); - current->lockdep_recursion++; + lockdep_recursion_inc(); cookie = __lock_pin_lock(lock); lockdep_recursion_finish(); raw_local_irq_restore(flags); @@ -5098,13 +5511,13 @@ void lock_repin_lock(struct lockdep_map *lock, struct pin_cookie cookie) { unsigned long flags; - if (unlikely(current->lockdep_recursion)) + if (unlikely(!lockdep_enabled())) return; raw_local_irq_save(flags); check_flags(flags); - current->lockdep_recursion++; + lockdep_recursion_inc(); __lock_repin_lock(lock, cookie); lockdep_recursion_finish(); raw_local_irq_restore(flags); @@ -5115,13 +5528,13 @@ void lock_unpin_lock(struct lockdep_map *lock, struct pin_cookie cookie) { unsigned long flags; - if (unlikely(current->lockdep_recursion)) + if (unlikely(!lockdep_enabled())) return; raw_local_irq_save(flags); check_flags(flags); - current->lockdep_recursion++; + lockdep_recursion_inc(); __lock_unpin_lock(lock, cookie); lockdep_recursion_finish(); raw_local_irq_restore(flags); @@ -5251,15 +5664,12 @@ void lock_contended(struct lockdep_map *lock, unsigned long ip) trace_lock_acquired(lock, ip); - if (unlikely(!lock_stat || !debug_locks)) - return; - - if (unlikely(current->lockdep_recursion)) + if (unlikely(!lock_stat || !lockdep_enabled())) return; raw_local_irq_save(flags); check_flags(flags); - current->lockdep_recursion++; + lockdep_recursion_inc(); __lock_contended(lock, ip); lockdep_recursion_finish(); raw_local_irq_restore(flags); @@ -5272,15 +5682,12 @@ void lock_acquired(struct lockdep_map *lock, unsigned long ip) trace_lock_contended(lock, ip); - if (unlikely(!lock_stat || !debug_locks)) - return; - - if (unlikely(current->lockdep_recursion)) + if (unlikely(!lock_stat || !lockdep_enabled())) return; raw_local_irq_save(flags); check_flags(flags); - current->lockdep_recursion++; + lockdep_recursion_inc(); __lock_acquired(lock, ip); lockdep_recursion_finish(); raw_local_irq_restore(flags); @@ -5319,7 +5726,7 @@ static void remove_class_from_lock_chain(struct pending_free *pf, int i; for (i = chain->base; i < 
chain->base + chain->depth; i++) { - if (chain_hlocks[i] != class - lock_classes) + if (chain_hlock_class_idx(chain_hlocks[i]) != class - lock_classes) continue; /* * Each lock class occurs at most once in a lock chain so once diff --git a/kernel/locking/lockdep_internals.h b/kernel/locking/lockdep_internals.h index b0be1560ed17..de49f9e1c11b 100644 --- a/kernel/locking/lockdep_internals.h +++ b/kernel/locking/lockdep_internals.h @@ -20,9 +20,12 @@ enum lock_usage_bit { #undef LOCKDEP_STATE LOCK_USED, LOCK_USED_READ, - LOCK_USAGE_STATES + LOCK_USAGE_STATES, }; +/* states after LOCK_USED_READ are not traced and printed */ +static_assert(LOCK_TRACE_STATES == LOCK_USAGE_STATES); + #define LOCK_USAGE_READ_MASK 1 #define LOCK_USAGE_DIR_MASK 2 #define LOCK_USAGE_STATE_MASK (~(LOCK_USAGE_READ_MASK | LOCK_USAGE_DIR_MASK)) @@ -121,7 +124,7 @@ static const unsigned long LOCKF_USED_IN_IRQ_READ = extern struct list_head all_lock_classes; extern struct lock_chain lock_chains[]; -#define LOCK_USAGE_CHARS (1+LOCK_USAGE_STATES/2) +#define LOCK_USAGE_CHARS (2*XXX_LOCK_USAGE_STATES + 1) extern void get_usage_chars(struct lock_class *class, char usage[LOCK_USAGE_CHARS]); diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c index 1c03eec6ca9b..0642013dace4 100644 --- a/kernel/time/sched_clock.c +++ b/kernel/time/sched_clock.c @@ -35,7 +35,7 @@ * into a single 64-byte cache line. */ struct clock_data { - seqcount_t seq; + seqcount_latch_t seq; struct clock_read_data read_data[2]; ktime_t wrap_kt; unsigned long rate; @@ -76,7 +76,7 @@ struct clock_read_data *sched_clock_read_begin(unsigned int *seq) int sched_clock_read_retry(unsigned int seq) { - return read_seqcount_retry(&cd.seq, seq); + return read_seqcount_latch_retry(&cd.seq, seq); } unsigned long long notrace sched_clock(void) @@ -258,7 +258,7 @@ void __init generic_sched_clock_init(void) */ static u64 notrace suspended_sched_clock_read(void) { - unsigned int seq = raw_read_seqcount(&cd.seq); + unsigned int seq = raw_read_seqcount_latch(&cd.seq); return cd.read_data[seq & 1].epoch_cyc; } diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c index ba7657685e22..6858a31364b6 100644 --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -67,7 +67,7 @@ int __read_mostly timekeeping_suspended; * See @update_fast_timekeeper() below. 
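
sched_clock.c above and timekeeping.c in the next hunk switch from plain or raw-spinlock seqcounts to seqcount_latch_t, whose readers never spin: the low bit of the sequence selects one of two data copies, and the read is retried only if the writer flipped the latch meanwhile. A self-contained sketch of that pattern, with an illustrative data structure:

    struct latched_value {
            seqcount_latch_t seq;
            u64              copy[2];
    };

    static u64 latched_read(struct latched_value *lv)
    {
            unsigned int seq;
            u64 val;

            do {
                    seq = raw_read_seqcount_latch(&lv->seq);
                    val = READ_ONCE(lv->copy[seq & 1]);
            } while (read_seqcount_latch_retry(&lv->seq, seq));

            return val;
    }

    static void latched_write(struct latched_value *lv, u64 val)
    {
            /* Update the two copies in turn, flipping the latch in between. */
            raw_write_seqcount_latch(&lv->seq);
            lv->copy[0] = val;
            raw_write_seqcount_latch(&lv->seq);
            lv->copy[1] = val;
    }

That wait-free read side is what lets NMI-safe paths such as suspended_sched_clock_read() and __ktime_get_fast_ns() use the latch helpers directly.
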
*/ struct tk_fast { - seqcount_raw_spinlock_t seq; + seqcount_latch_t seq; struct tk_read_base base[2]; }; @@ -101,13 +101,13 @@ static struct clocksource dummy_clock = { } static struct tk_fast tk_fast_mono ____cacheline_aligned = { - .seq = SEQCNT_RAW_SPINLOCK_ZERO(tk_fast_mono.seq, &timekeeper_lock), + .seq = SEQCNT_LATCH_ZERO(tk_fast_mono.seq), .base[0] = FAST_TK_INIT, .base[1] = FAST_TK_INIT, }; static struct tk_fast tk_fast_raw ____cacheline_aligned = { - .seq = SEQCNT_RAW_SPINLOCK_ZERO(tk_fast_raw.seq, &timekeeper_lock), + .seq = SEQCNT_LATCH_ZERO(tk_fast_raw.seq), .base[0] = FAST_TK_INIT, .base[1] = FAST_TK_INIT, }; @@ -484,7 +484,7 @@ static __always_inline u64 __ktime_get_fast_ns(struct tk_fast *tkf) tk_clock_read(tkr), tkr->cycle_last, tkr->mask)); - } while (read_seqcount_retry(&tkf->seq, seq)); + } while (read_seqcount_latch_retry(&tkf->seq, seq)); return now; } @@ -548,7 +548,7 @@ static __always_inline u64 __ktime_get_real_fast(struct tk_fast *tkf, u64 *mono) delta = timekeeping_delta_to_ns(tkr, clocksource_delta(tk_clock_read(tkr), tkr->cycle_last, tkr->mask)); - } while (read_seqcount_retry(&tkf->seq, seq)); + } while (read_seqcount_latch_retry(&tkf->seq, seq)); if (mono) *mono = basem + delta; diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan index 3d282d51849b..f271ff5fbb5a 100644 --- a/lib/Kconfig.kcsan +++ b/lib/Kconfig.kcsan @@ -40,6 +40,11 @@ menuconfig KCSAN if KCSAN +# Compiler capabilities that should not fail the test if they are unavailable. +config CC_HAS_TSAN_COMPOUND_READ_BEFORE_WRITE + def_bool (CC_IS_CLANG && $(cc-option,-fsanitize=thread -mllvm -tsan-compound-read-before-write=1)) || \ + (CC_IS_GCC && $(cc-option,-fsanitize=thread --param tsan-compound-read-before-write=1)) + config KCSAN_VERBOSE bool "Show verbose reports with more information about system state" depends on PROVE_LOCKING diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c index 14f44f59e733..a899b3f0e2e5 100644 --- a/lib/locking-selftest.c +++ b/lib/locking-selftest.c @@ -28,6 +28,7 @@ * Change this to 1 if you want to see the failure printouts: */ static unsigned int debug_locks_verbose; +unsigned int force_read_lock_recursive; static DEFINE_WD_CLASS(ww_lockdep); @@ -395,6 +396,49 @@ static void rwsem_ABBA1(void) MU(Y1); // should fail } +/* + * read_lock(A) + * spin_lock(B) + * spin_lock(B) + * write_lock(A) + * + * This test case is aimed at poking whether the chain cache prevents us from + * detecting a read-lock/lock-write deadlock: if the chain cache doesn't differ + * read/write locks, the following case may happen + * + * { read_lock(A)->lock(B) dependency exists } + * + * P0: + * lock(B); + * read_lock(A); + * + * { Not a deadlock, B -> A is added in the chain cache } + * + * P1: + * lock(B); + * write_lock(A); + * + * { B->A found in chain cache, not reported as a deadlock } + * + */ +static void rlock_chaincache_ABBA1(void) +{ + RL(X1); + L(Y1); + U(Y1); + RU(X1); + + L(Y1); + RL(X1); + RU(X1); + U(Y1); + + L(Y1); + WL(X1); + WU(X1); + U(Y1); // should fail +} + /* * read_lock(A) * spin_lock(B) @@ -990,6 +1034,133 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_soft_wlock) #undef E2 #undef E3 +/* + * write-read / write-read / write-read deadlock even if read is recursive + */ + +#define E1() \ + \ + WL(X1); \ + RL(Y1); \ + RU(Y1); \ + WU(X1); + +#define E2() \ + \ + WL(Y1); \ + RL(Z1); \ + RU(Z1); \ + WU(Y1); + +#define E3() \ + \ + WL(Z1); \ + RL(X1); \ + RU(X1); \ + WU(Z1); + +#include "locking-selftest-rlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(W1R2_W2R3_W3R1) + 
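
Spelled out, the W1R2_W2R3_W3R1 family generated above corresponds to this three-task scenario, which deadlocks even though every read_lock() may be recursive, because each reader ends up waiting on a write lock *holder*:

    TASK A:                 TASK B:                 TASK C:

    write_lock(X1);
                            write_lock(Y1);
                                                    write_lock(Z1);
    read_lock(Y1);          read_lock(Z1);          read_lock(X1);

A waits on B, B waits on C and C waits on A; in dependency terms the cycle X1 -(ER)-> Y1 -(ER)-> Z1 -(ER)-> X1 contains no recursive-read edge followed by a shared edge, so it is a strong cycle and must be reported.
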
+#undef E1 +#undef E2 +#undef E3 + +/* + * write-write / read-read / write-read deadlock even if read is recursive + */ + +#define E1() \ + \ + WL(X1); \ + WL(Y1); \ + WU(Y1); \ + WU(X1); + +#define E2() \ + \ + RL(Y1); \ + RL(Z1); \ + RU(Z1); \ + RU(Y1); + +#define E3() \ + \ + WL(Z1); \ + RL(X1); \ + RU(X1); \ + WU(Z1); + +#include "locking-selftest-rlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(W1W2_R2R3_W3R1) + +#undef E1 +#undef E2 +#undef E3 + +/* + * write-write / read-read / read-write is not deadlock when read is recursive + */ + +#define E1() \ + \ + WL(X1); \ + WL(Y1); \ + WU(Y1); \ + WU(X1); + +#define E2() \ + \ + RL(Y1); \ + RL(Z1); \ + RU(Z1); \ + RU(Y1); + +#define E3() \ + \ + RL(Z1); \ + WL(X1); \ + WU(X1); \ + RU(Z1); + +#include "locking-selftest-rlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(W1R2_R2R3_W3W1) + +#undef E1 +#undef E2 +#undef E3 + +/* + * write-read / read-read / write-write is not deadlock when read is recursive + */ + +#define E1() \ + \ + WL(X1); \ + RL(Y1); \ + RU(Y1); \ + WU(X1); + +#define E2() \ + \ + RL(Y1); \ + RL(Z1); \ + RU(Z1); \ + RU(Y1); + +#define E3() \ + \ + WL(Z1); \ + WL(X1); \ + WU(X1); \ + WU(Z1); + +#include "locking-selftest-rlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(W1W2_R2R3_R3W1) + +#undef E1 +#undef E2 +#undef E3 /* * read-lock / write-lock recursion that is actually safe. */ @@ -1009,20 +1180,28 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_soft_wlock) #define E3() \ \ IRQ_ENTER(); \ - RL(A); \ + LOCK(A); \ L(B); \ U(B); \ - RU(A); \ + UNLOCK(A); \ IRQ_EXIT(); /* - * Generate 12 testcases: + * Generate 24 testcases: */ #include "locking-selftest-hardirq.h" -GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_hard) +#include "locking-selftest-rlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_hard_rlock) + +#include "locking-selftest-wlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_hard_wlock) #include "locking-selftest-softirq.h" -GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft) +#include "locking-selftest-rlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft_rlock) + +#include "locking-selftest-wlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft_wlock) #undef E1 #undef E2 @@ -1036,8 +1215,8 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft) \ IRQ_DISABLE(); \ L(B); \ - WL(A); \ - WU(A); \ + LOCK(A); \ + UNLOCK(A); \ U(B); \ IRQ_ENABLE(); @@ -1054,13 +1233,75 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft) IRQ_EXIT(); /* - * Generate 12 testcases: + * Generate 24 testcases: */ #include "locking-selftest-hardirq.h" -// GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_hard) +#include "locking-selftest-rlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_hard_rlock) + +#include "locking-selftest-wlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_hard_wlock) #include "locking-selftest-softirq.h" -// GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_soft) +#include "locking-selftest-rlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_soft_rlock) + +#include "locking-selftest-wlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion2_soft_wlock) + +#undef E1 +#undef E2 +#undef E3 +/* + * read-lock / write-lock recursion that is unsafe. 
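
The switch from RL()/RU() to LOCK()/UNLOCK() in E3, together with the extra locking-selftest-rlock.h / -wlock.h includes, is what turns 12 generated testcases into 24: each permutation is emitted once with LOCK() expanding to a read lock and once to a write lock. Paraphrased, not quoted, from the selftest headers:

    /* locking-selftest-rlock.h */
    #undef LOCK
    #define LOCK    RL
    #undef UNLOCK
    #define UNLOCK  RU

    /* locking-selftest-wlock.h */
    #undef LOCK
    #define LOCK    WL
    #undef UNLOCK
    #define UNLOCK  WU

    /* and in lib/locking-selftest.c the short-hands expand roughly to: */
    #define RL(x)   read_lock(&rwlock_##x)
    #define RU(x)   read_unlock(&rwlock_##x)
    #define WL(x)   write_lock(&rwlock_##x)
    #define WU(x)   write_unlock(&rwlock_##x)
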
+ * + * A is a ENABLED_*_READ lock + * B is a USED_IN_*_READ lock + * + * read_lock(A); + * write_lock(B); + * + * read_lock(B); + * write_lock(A); // if this one is read_lock(), no deadlock + */ + +#define E1() \ + \ + IRQ_DISABLE(); \ + WL(B); \ + LOCK(A); \ + UNLOCK(A); \ + WU(B); \ + IRQ_ENABLE(); + +#define E2() \ + \ + RL(A); \ + RU(A); \ + +#define E3() \ + \ + IRQ_ENTER(); \ + RL(B); \ + RU(B); \ + IRQ_EXIT(); + +/* + * Generate 24 testcases: + */ +#include "locking-selftest-hardirq.h" +#include "locking-selftest-rlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion3_hard_rlock) + +#include "locking-selftest-wlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion3_hard_wlock) + +#include "locking-selftest-softirq.h" +#include "locking-selftest-rlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion3_soft_rlock) + +#include "locking-selftest-wlock.h" +GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion3_soft_wlock) #ifdef CONFIG_DEBUG_LOCK_ALLOC # define I_SPINLOCK(x) lockdep_reset_lock(&lock_##x.dep_map) @@ -1199,6 +1440,19 @@ static inline void print_testname(const char *testname) dotest(name##_##nr, FAILURE, LOCKTYPE_RWLOCK); \ pr_cont("\n"); +#define DO_TESTCASE_1RR(desc, name, nr) \ + print_testname(desc"/"#nr); \ + pr_cont(" |"); \ + dotest(name##_##nr, SUCCESS, LOCKTYPE_RWLOCK); \ + pr_cont("\n"); + +#define DO_TESTCASE_1RRB(desc, name, nr) \ + print_testname(desc"/"#nr); \ + pr_cont(" |"); \ + dotest(name##_##nr, FAILURE, LOCKTYPE_RWLOCK); \ + pr_cont("\n"); + + #define DO_TESTCASE_3(desc, name, nr) \ print_testname(desc"/"#nr); \ dotest(name##_spin_##nr, FAILURE, LOCKTYPE_SPIN); \ @@ -1213,6 +1467,25 @@ static inline void print_testname(const char *testname) dotest(name##_rlock_##nr, SUCCESS, LOCKTYPE_RWLOCK); \ pr_cont("\n"); +#define DO_TESTCASE_2RW(desc, name, nr) \ + print_testname(desc"/"#nr); \ + pr_cont(" |"); \ + dotest(name##_wlock_##nr, FAILURE, LOCKTYPE_RWLOCK); \ + dotest(name##_rlock_##nr, SUCCESS, LOCKTYPE_RWLOCK); \ + pr_cont("\n"); + +#define DO_TESTCASE_2x2RW(desc, name, nr) \ + DO_TESTCASE_2RW("hard-"desc, name##_hard, nr) \ + DO_TESTCASE_2RW("soft-"desc, name##_soft, nr) \ + +#define DO_TESTCASE_6x2x2RW(desc, name) \ + DO_TESTCASE_2x2RW(desc, name, 123); \ + DO_TESTCASE_2x2RW(desc, name, 132); \ + DO_TESTCASE_2x2RW(desc, name, 213); \ + DO_TESTCASE_2x2RW(desc, name, 231); \ + DO_TESTCASE_2x2RW(desc, name, 312); \ + DO_TESTCASE_2x2RW(desc, name, 321); + #define DO_TESTCASE_6(desc, name) \ print_testname(desc); \ dotest(name##_spin, FAILURE, LOCKTYPE_SPIN); \ @@ -1289,6 +1562,22 @@ static inline void print_testname(const char *testname) DO_TESTCASE_2IB(desc, name, 312); \ DO_TESTCASE_2IB(desc, name, 321); +#define DO_TESTCASE_6x1RR(desc, name) \ + DO_TESTCASE_1RR(desc, name, 123); \ + DO_TESTCASE_1RR(desc, name, 132); \ + DO_TESTCASE_1RR(desc, name, 213); \ + DO_TESTCASE_1RR(desc, name, 231); \ + DO_TESTCASE_1RR(desc, name, 312); \ + DO_TESTCASE_1RR(desc, name, 321); + +#define DO_TESTCASE_6x1RRB(desc, name) \ + DO_TESTCASE_1RRB(desc, name, 123); \ + DO_TESTCASE_1RRB(desc, name, 132); \ + DO_TESTCASE_1RRB(desc, name, 213); \ + DO_TESTCASE_1RRB(desc, name, 231); \ + DO_TESTCASE_1RRB(desc, name, 312); \ + DO_TESTCASE_1RRB(desc, name, 321); + #define DO_TESTCASE_6x6(desc, name) \ DO_TESTCASE_6I(desc, name, 123); \ DO_TESTCASE_6I(desc, name, 132); \ @@ -1966,6 +2255,108 @@ static void ww_tests(void) pr_cont("\n"); } + +/* + * + * read_lock(&A); + * + * spin_lock(&B); + * spin_lock(&B); + * read_lock(&A); + * + * is a deadlock. 
+ */ +static void queued_read_lock_hardirq_RE_Er(void) +{ + HARDIRQ_ENTER(); + read_lock(&rwlock_A); + LOCK(B); + UNLOCK(B); + read_unlock(&rwlock_A); + HARDIRQ_EXIT(); + + HARDIRQ_DISABLE(); + LOCK(B); + read_lock(&rwlock_A); + read_unlock(&rwlock_A); + UNLOCK(B); + HARDIRQ_ENABLE(); +} + +/* + * + * spin_lock(&B); + * + * read_lock(&A); + * read_lock(&A); + * spin_lock(&B); + * + * is not a deadlock. + */ +static void queued_read_lock_hardirq_ER_rE(void) +{ + HARDIRQ_ENTER(); + LOCK(B); + read_lock(&rwlock_A); + read_unlock(&rwlock_A); + UNLOCK(B); + HARDIRQ_EXIT(); + + HARDIRQ_DISABLE(); + read_lock(&rwlock_A); + LOCK(B); + UNLOCK(B); + read_unlock(&rwlock_A); + HARDIRQ_ENABLE(); +} + +/* + * + * spin_lock(&B); + * read_lock(&A); + * + * spin_lock(&B); + * read_lock(&A); + * + * is a deadlock. Because the two read_lock()s are both non-recursive readers. + */ +static void queued_read_lock_hardirq_inversion(void) +{ + + HARDIRQ_ENTER(); + LOCK(B); + UNLOCK(B); + HARDIRQ_EXIT(); + + HARDIRQ_DISABLE(); + LOCK(B); + read_lock(&rwlock_A); + read_unlock(&rwlock_A); + UNLOCK(B); + HARDIRQ_ENABLE(); + + read_lock(&rwlock_A); + read_unlock(&rwlock_A); +} + +static void queued_read_lock_tests(void) +{ + printk(" --------------------------------------------------------------------------\n"); + printk(" | queued read lock tests |\n"); + printk(" ---------------------------\n"); + print_testname("hardirq read-lock/lock-read"); + dotest(queued_read_lock_hardirq_RE_Er, FAILURE, LOCKTYPE_RWLOCK); + pr_cont("\n"); + + print_testname("hardirq lock-read/read-lock"); + dotest(queued_read_lock_hardirq_ER_rE, SUCCESS, LOCKTYPE_RWLOCK); + pr_cont("\n"); + + print_testname("hardirq inversion"); + dotest(queued_read_lock_hardirq_inversion, FAILURE, LOCKTYPE_RWLOCK); + pr_cont("\n"); +} + void locking_selftest(void) { /* @@ -1978,6 +2369,11 @@ void locking_selftest(void) return; } + /* + * treats read_lock() as recursive read locks for testing purpose + */ + force_read_lock_recursive = 1; + /* * Run the testsuite: */ @@ -2033,14 +2429,6 @@ void locking_selftest(void) print_testname("mixed read-lock/lock-write ABBA"); pr_cont(" |"); dotest(rlock_ABBA1, FAILURE, LOCKTYPE_RWLOCK); -#ifdef CONFIG_PROVE_LOCKING - /* - * Lockdep does indeed fail here, but there's nothing we can do about - * that now. Don't kill lockdep for it. 
- */ - unexpected_testcase_failures--; -#endif - pr_cont(" |"); dotest(rwsem_ABBA1, FAILURE, LOCKTYPE_RWSEM); @@ -2056,6 +2444,15 @@ void locking_selftest(void) pr_cont(" |"); dotest(rwsem_ABBA3, FAILURE, LOCKTYPE_RWSEM); + print_testname("chain cached mixed R-L/L-W ABBA"); + pr_cont(" |"); + dotest(rlock_chaincache_ABBA1, FAILURE, LOCKTYPE_RWLOCK); + + DO_TESTCASE_6x1RRB("rlock W1R2/W2R3/W3R1", W1R2_W2R3_W3R1); + DO_TESTCASE_6x1RRB("rlock W1W2/R2R3/W3R1", W1W2_R2R3_W3R1); + DO_TESTCASE_6x1RR("rlock W1W2/R2R3/R3W1", W1W2_R2R3_R3W1); + DO_TESTCASE_6x1RR("rlock W1R2/R2R3/W3W1", W1R2_R2R3_W3W1); + printk(" --------------------------------------------------------------------------\n"); /* @@ -2068,11 +2465,19 @@ void locking_selftest(void) DO_TESTCASE_6x6("safe-A + unsafe-B #2", irqsafe4); DO_TESTCASE_6x6RW("irq lock-inversion", irq_inversion); - DO_TESTCASE_6x2("irq read-recursion", irq_read_recursion); -// DO_TESTCASE_6x2B("irq read-recursion #2", irq_read_recursion2); + DO_TESTCASE_6x2x2RW("irq read-recursion", irq_read_recursion); + DO_TESTCASE_6x2x2RW("irq read-recursion #2", irq_read_recursion2); + DO_TESTCASE_6x2x2RW("irq read-recursion #3", irq_read_recursion3); ww_tests(); + force_read_lock_recursive = 0; + /* + * queued_read_lock() specific test cases can be put here + */ + if (IS_ENABLED(CONFIG_QUEUED_RWLOCKS)) + queued_read_lock_tests(); + if (unexpected_testcase_failures) { printk("-----------------------------------------------------------------\n"); debug_locks = 0; diff --git a/mm/swap.c b/mm/swap.c index e7bdf094f76a..65ef7e3525bf 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -763,10 +763,20 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy) */ void lru_add_drain_all(void) { - static seqcount_t seqcount = SEQCNT_ZERO(seqcount); - static DEFINE_MUTEX(lock); + /* + * lru_drain_gen - Global pages generation number + * + * (A) Definition: global lru_drain_gen = x implies that all generations + * 0 < n <= x are already *scheduled* for draining. + * + * This is an optimization for the highly-contended use case where a + * user space workload keeps constantly generating a flow of pages for + * each CPU. + */ + static unsigned int lru_drain_gen; static struct cpumask has_work; - int cpu, seq; + static DEFINE_MUTEX(lock); + unsigned cpu, this_gen; /* * Make sure nobody triggers this path before mm_percpu_wq is fully @@ -775,21 +785,54 @@ void lru_add_drain_all(void) if (WARN_ON(!mm_percpu_wq)) return; - seq = raw_read_seqcount_latch(&seqcount); + /* + * Guarantee pagevec counter stores visible by this CPU are visible to + * other CPUs before loading the current drain generation. + */ + smp_mb(); + + /* + * (B) Locally cache global LRU draining generation number + * + * The read barrier ensures that the counter is loaded before the mutex + * is taken. It pairs with smp_mb() inside the mutex critical section + * at (D). + */ + this_gen = smp_load_acquire(&lru_drain_gen); mutex_lock(&lock); /* - * Piggyback on drain started and finished while we waited for lock: - * all pages pended at the time of our enter were drained from vectors. + * (C) Exit the draining operation if a newer generation, from another + * lru_add_drain_all(), was already scheduled for draining. Check (A). */ - if (__read_seqcount_retry(&seqcount, seq)) + if (unlikely(this_gen != lru_drain_gen)) goto done; - raw_write_seqcount_latch(&seqcount); + /* + * (D) Increment global generation number + * + * Pairs with smp_load_acquire() at (B), outside of the critical + * section. 
Use a full memory barrier to guarantee that the new global + * drain generation number is stored before loading pagevec counters. + * + * This pairing must be done here, before the for_each_online_cpu loop + * below which drains the page vectors. + * + * Let x, y, and z represent some system CPU numbers, where x < y < z. + * Assume CPU #z is is in the middle of the for_each_online_cpu loop + * below and has already reached CPU #y's per-cpu data. CPU #x comes + * along, adds some pages to its per-cpu vectors, then calls + * lru_add_drain_all(). + * + * If the paired barrier is done at any later step, e.g. after the + * loop, CPU #x will just exit at (C) and miss flushing out all of its + * added pages. + */ + WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1); + smp_mb(); cpumask_clear(&has_work); - for_each_online_cpu(cpu) { struct work_struct *work = &per_cpu(lru_add_drain_work, cpu); @@ -801,7 +844,7 @@ void lru_add_drain_all(void) need_activate_page_drain(cpu)) { INIT_WORK(work, lru_add_drain_per_cpu); queue_work_on(cpu, mm_percpu_wq, work); - cpumask_set_cpu(cpu, &has_work); + __cpumask_set_cpu(cpu, &has_work); } } @@ -816,7 +859,7 @@ void lru_add_drain_all(void) { lru_add_drain(); } -#endif +#endif /* CONFIG_SMP */ /** * release_pages - batched put_page() diff --git a/scripts/Makefile.kcsan b/scripts/Makefile.kcsan index c50f27b3ac56..c37f9518d5d9 100644 --- a/scripts/Makefile.kcsan +++ b/scripts/Makefile.kcsan @@ -11,5 +11,5 @@ endif # of some options does not break KCSAN nor causes false positive reports. CFLAGS_KCSAN := -fsanitize=thread \ $(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0) -fno-optimize-sibling-calls) \ - $(call cc-option,$(call cc-param,tsan-instrument-read-before-write=1)) \ + $(call cc-option,$(call cc-param,tsan-compound-read-before-write=1),$(call cc-option,$(call cc-param,tsan-instrument-read-before-write=1))) \ $(call cc-param,tsan-distinguish-volatile=1) diff --git a/scripts/atomic/check-atomics.sh b/scripts/atomic/check-atomics.sh index 8378c63a1e09..82748d42ecc5 100755 --- a/scripts/atomic/check-atomics.sh +++ b/scripts/atomic/check-atomics.sh @@ -16,6 +16,7 @@ fi cat <
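
Returning to the lru_add_drain_all() rework above: the seqcount-latch abuse is replaced by a bare generation counter plus a mutex. A condensed sketch of that pattern, with do_drain() standing in for the per-CPU work-scheduling loop (that helper is an assumption of this sketch):

    static unsigned int drain_gen;
    static DEFINE_MUTEX(drain_lock);

    static void drain_all(void)
    {
            unsigned int this_gen;

            smp_mb();                               /* order our pagevec stores */
            this_gen = smp_load_acquire(&drain_gen);

            mutex_lock(&drain_lock);

            /* A newer generation already covers every page we queued. */
            if (this_gen != drain_gen)
                    goto unlock;

            WRITE_ONCE(drain_gen, drain_gen + 1);
            smp_mb();                               /* publish before draining */

            do_drain();                             /* hypothetical: flush all CPUs */
    unlock:
            mutex_unlock(&drain_lock);
    }

Callers that lose the race simply piggyback on the newer drain instead of queueing redundant work, which is the whole point of the generation check.
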