6e89e831a9
More than one kernel developer has expressed the opinion that the LKMM
should enforce ordering of writes by locking. In other words, given the
following code:

	WRITE_ONCE(x, 1);
	spin_unlock(&s);
	spin_lock(&s);
	WRITE_ONCE(y, 1);

the stores to x and y should be propagated in order to all other CPUs,
even though those other CPUs might not access the lock s. In terms of
the memory model, this means expanding the cumul-fence relation.

Locks should also provide read-read (and read-write) ordering in a
similar way. Given:

	READ_ONCE(x);
	spin_unlock(&s);
	spin_lock(&s);
	READ_ONCE(y);		// or WRITE_ONCE(y, 1);

the load of x should be executed before the load of (or store to) y.
The LKMM already provides this ordering, but it provides it even in the
case where the two accesses are separated by a release/acquire pair of
fences rather than unlock/lock. This would prevent architectures from
using weakly ordered implementations of release and acquire, which
seems like an unnecessary restriction. The patch therefore removes the
ordering requirement from the LKMM for that case.
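As an informal illustration of the write-propagation guarantee described
above (this sketch is not part of the patch; the variable names, the
third observer CPU, and the smp_rmb() are assumptions made purely for
illustration), consider what a CPU that never touches the lock may
observe once the two stores are required to propagate in order:

	/* All variables initially zero; CPU 0 already holds s. */

	/* CPU 0 */
	WRITE_ONCE(x, 1);
	spin_unlock(&s);

	/* CPU 1 */
	spin_lock(&s);
	WRITE_ONCE(y, 1);

	/* CPU 2 -- never acquires s */
	r1 = READ_ONCE(y);	/* suppose this observes CPU 1's store ... */
	smp_rmb();		/* ... and the two reads are ordered */
	r2 = READ_ONCE(x);	/* then r2 == 0 is forbidden by the new rule:
				   the store to x must reach CPU 2 before
				   the store to y does */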
There are several arguments both for and against this change. Let us
refer to these enhanced ordering properties by saying that the LKMM
would require locks to be RCtso (a bit of a misnomer, but analogous to
RCpc and RCsc) and it would require ordinary acquire/release only to be
RCpc. (Note: In the following, the phrase "all supported architectures"
is meant not to include RISC-V. Although RISC-V is indeed supported by
the kernel, the implementation is still somewhat in a state of flux and
therefore statements about it would be premature.)

Pros:

- The kernel already provides RCtso ordering for locks on all supported
  architectures, even though this is not stated explicitly anywhere.
  Therefore the LKMM should formalize it.

- In theory, guaranteeing RCtso ordering would reduce the need for
  additional barrier-like constructs meant to increase the ordering
  strength of locks.

- Will Deacon and Peter Zijlstra are strongly in favor of formalizing
  the RCtso requirement. Linus Torvalds and Will would like to go even
  further, requiring locks to have RCsc behavior (ordering preceding
  writes against later reads), but they recognize that this would incur
  a noticeable performance degradation on the POWER architecture. Linus
  also points out that people have made the mistake, in the past, of
  assuming that locking has stronger ordering properties than is
  currently guaranteed, and this change would reduce the likelihood of
  such mistakes.

- Not requiring ordinary acquire/release to be any stronger than RCpc
  may prove advantageous for future architectures, allowing them to
  implement smp_load_acquire() and smp_store_release() with more
  efficient machine instructions than would be possible if the
  operations had to be RCtso. Will and Linus approve this rationale,
  hypothetical though it is at the moment (it may end up affecting the
  RISC-V implementation). The same argument may or may not apply to
  RMW-acquire/release; see also the second Con entry below.

- Linus feels that locks should be easy for people to use without
  worrying about memory consistency issues, since they are so pervasive
  in the kernel, whereas acquire/release is much more of an "experts
  only" tool. Requiring locks to be RCtso is a step in this direction.

Cons:

- Andrea Parri and Luc Maranget think that locks should have the same
  ordering properties as ordinary acquire/release (indeed, Luc points
  out that the names "acquire" and "release" derive from the usage of
  locks). Andrea points out that having different ordering properties
  for different forms of acquires and releases is not only unnecessary,
  it would also be confusing and unmaintainable.

- Locks are constructed from lower-level primitives, typically
  RMW-acquire (for locking) and ordinary release (for unlock). It is
  illogical to require stronger ordering properties from the high-level
  operations than from the low-level operations they comprise. Thus,
  this change would make

	while (cmpxchg_acquire(&s, 0, 1) != 0)
		cpu_relax();

  an incorrect implementation of spin_lock(&s) as far as the LKMM is
  concerned (a toy sketch of such a lock appears after the tag block
  below). In theory this weakness can be ameliorated by changing the
  LKMM even further, requiring RMW-acquire/release also to be RCtso
  (which it already is on all supported architectures).

- As far as I know, nobody has singled out any examples of code in the
  kernel that actually relies on locks being RCtso. (People mumble
  about RCU and the scheduler, but nobody has pointed to any actual
  code. If there are any real cases, their number is likely quite
  small.) If RCtso ordering is not needed, why require it?

- A handful of locking constructs (qspinlocks, qrwlocks, and
  mcs_spinlocks) are built on top of smp_cond_load_acquire() instead of
  an RMW-acquire instruction. It currently provides only the ordinary
  acquire semantics, not the stronger ordering this patch would require
  of locks. In theory this could be ameliorated by requiring
  smp_cond_load_acquire() in combination with ordinary release also to
  be RCtso (which is currently true on all supported architectures).

- On future weakly ordered architectures, people may be able to
  implement locks in a non-RCtso fashion with significant performance
  improvement. Meeting the RCtso requirement would necessarily add
  run-time overhead.

Overall, the technical aspects of these arguments seem relatively minor,
and it appears mostly to boil down to a matter of opinion. Since the
opinions of senior kernel maintainers such as Linus, Peter, and Will
carry more weight than those of Luc and Andrea, this patch changes the
model in accordance with the maintainers' wishes.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Link: http://lkml.kernel.org/r/20180926182920.27644-2-paulmck@linux.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
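To make the second Con entry above concrete, here is a hypothetical toy
lock built only from the primitives that entry names: an RMW-acquire for
locking and an ordinary release store for unlocking. The names
toy_spin_lock()/toy_spin_unlock() and the use of atomic_t are
illustrative assumptions, not anything proposed by the patch; the point
is simply that, once locks are required to be RCtso, this construction
is no longer sufficient as an implementation of spin_lock() and
spin_unlock() as far as the LKMM is concerned:

	#include <linux/atomic.h>
	#include <asm/processor.h>	/* cpu_relax() */

	/* Toy lock: RMW-acquire to lock, ordinary release to unlock. */
	static inline void toy_spin_lock(atomic_t *s)
	{
		/* RMW-acquire: spin until the lock word goes from 0 to 1. */
		while (atomic_cmpxchg_acquire(s, 0, 1) != 0)
			cpu_relax();
	}

	static inline void toy_spin_unlock(atomic_t *s)
	{
		/* Ordinary release store, with no stronger barrier. */
		atomic_set_release(s, 0);
	}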
Directory listing at this commit:

	.gitignore
	CoRR+poonceonce+Once.litmus
	CoRW+poonceonce+Once.litmus
	CoWR+poonceonce+Once.litmus
	CoWW+poonceonce.litmus
	IRIW+fencembonceonces+OnceOnce.litmus
	IRIW+poonceonces+OnceOnce.litmus
	ISA2+pooncelock+pooncelock+pombonce.litmus
	ISA2+poonceonces.litmus
	ISA2+pooncerelease+poacquirerelease+poacquireonce.litmus
	LB+fencembonceonce+ctrlonceonce.litmus
	LB+poacquireonce+pooncerelease.litmus
	LB+poonceonces.litmus
	MP+fencewmbonceonce+fencermbonceonce.litmus
	MP+onceassign+derefonce.litmus
	MP+polockmbonce+poacquiresilsil.litmus
	MP+polockonce+poacquiresilsil.litmus
	MP+polocks.litmus
	MP+poonceonces.litmus
	MP+pooncerelease+poacquireonce.litmus
	MP+porevlocks.litmus
	R+fencembonceonces.litmus
	R+poonceonces.litmus
	README
	S+fencewmbonceonce+poacquireonce.litmus
	S+poonceonces.litmus
	SB+fencembonceonces.litmus
	SB+poonceonces.litmus
	SB+rfionceonce-poonceonces.litmus
	WRC+poonceonces+Once.litmus
	WRC+pooncerelease+fencermbonceonce+Once.litmus
	Z6.0+pooncelock+pooncelock+pombonce.litmus
	Z6.0+pooncelock+poonceLock+pombonce.litmus
	Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus
============
LITMUS TESTS
============

CoRR+poonceonce+Once.litmus
	Test of read-read coherence, that is, whether or not two
	successive reads from the same variable are ordered.

CoRW+poonceonce+Once.litmus
	Test of read-write coherence, that is, whether or not a read
	from a given variable followed by a write to that same variable
	are ordered.

CoWR+poonceonce+Once.litmus
	Test of write-read coherence, that is, whether or not a write to
	a given variable followed by a read from that same variable are
	ordered.

CoWW+poonceonce.litmus
	Test of write-write coherence, that is, whether or not two
	successive writes to the same variable are ordered.

IRIW+fencembonceonces+OnceOnce.litmus
	Test of independent reads from independent writes with smp_mb()
	between each pair of reads. In other words, is smp_mb()
	sufficient to cause two different reading processes to agree on
	the order of a pair of writes, where each write is to a
	different variable by a different process? This litmus test is
	forbidden by LKMM's propagation rule.

IRIW+poonceonces+OnceOnce.litmus
	Test of independent reads from independent writes with nothing
	between each pair of reads. In other words, is anything at all
	needed to cause two different reading processes to agree on the
	order of a pair of writes, where each write is to a different
	variable by a different process?

ISA2+pooncelock+pooncelock+pombonce.litmus
	Tests whether the ordering provided by a lock-protected S litmus
	test is visible to an external process whose accesses are
	separated by smp_mb(). This addition of an external process to
	S is otherwise known as ISA2.

ISA2+poonceonces.litmus
	As below, but with store-release replaced with WRITE_ONCE() and
	load-acquire replaced with READ_ONCE().

ISA2+pooncerelease+poacquirerelease+poacquireonce.litmus
	Can a release-acquire chain order a prior store against a later
	load?

LB+fencembonceonce+ctrlonceonce.litmus
	Does a control dependency and an smp_mb() suffice for the
	load-buffering litmus test, where each process reads from one of
	two variables then writes to the other?

LB+poacquireonce+pooncerelease.litmus
	Does a release-acquire pair suffice for the load-buffering
	litmus test, where each process reads from one of two variables
	then writes to the other?

LB+poonceonces.litmus
	As above, but with store-release replaced with WRITE_ONCE() and
	load-acquire replaced with READ_ONCE().

MP+onceassign+derefonce.litmus
	As below, but with rcu_assign_pointer() and an rcu_dereference().

MP+polockmbonce+poacquiresilsil.litmus
	Protect the access with a lock and an smp_mb__after_spinlock()
	in one process, and use an acquire load followed by a pair of
	spin_is_locked() calls in the other process.

MP+polockonce+poacquiresilsil.litmus
	Protect the access with a lock in one process, and use an
	acquire load followed by a pair of spin_is_locked() calls in the
	other process.

MP+polocks.litmus
	As below, but with the second access of the writer process and
	the first access of the reader process protected by a lock.

MP+poonceonces.litmus
	As below, but without the smp_rmb() and smp_wmb().

MP+pooncerelease+poacquireonce.litmus
	As below, but with a release-acquire chain.

MP+porevlocks.litmus
	As below, but with the first access of the writer process and
	the second access of the reader process protected by a lock.

MP+fencewmbonceonce+fencermbonceonce.litmus
	Does a smp_wmb() (between the stores) and an smp_rmb() (between
	the loads) suffice for the message-passing litmus test, where
	one process writes data and then a flag, and the other process
	reads the flag and then the data? (This is similar to the ISA2
	tests, but with two processes instead of three.) A C sketch of
	this pattern appears after this list.

R+fencembonceonces.litmus
	This is the fully ordered (via smp_mb()) version of one of the
	classic counterintuitive litmus tests that illustrates the
	effects of store propagation delays.

R+poonceonces.litmus
	As above, but without the smp_mb() invocations.

SB+fencembonceonces.litmus
	This is the fully ordered (again, via smp_mb()) version of store
	buffering, which forms the core of Dekker's mutual-exclusion
	algorithm.

SB+poonceonces.litmus
	As above, but without the smp_mb() invocations.

SB+rfionceonce-poonceonces.litmus
	This litmus test demonstrates that LKMM is not fully multicopy
	atomic. (Neither is it other-multicopy atomic.) This litmus
	test also demonstrates the "locations" debugging aid, which
	designates additional registers and locations to be printed out
	in the dump of final states in the herd7 output. Without the
	"locations" statement, only those registers and locations
	mentioned in the "exists" clause will be printed.

S+poonceonces.litmus
	As below, but without the smp_wmb() and acquire load.

S+fencewmbonceonce+poacquireonce.litmus
	Can a smp_wmb(), instead of a release, and an acquire order a
	prior store against a subsequent store?

WRC+poonceonces+Once.litmus
WRC+pooncerelease+fencermbonceonce+Once.litmus
	These two are members of an extension of the MP litmus-test
	class in which the first write is moved to a separate process.
	The second is forbidden because smp_store_release() is
	A-cumulative in LKMM.

Z6.0+pooncelock+pooncelock+pombonce.litmus
	Is the ordering provided by a spin_unlock() and a subsequent
	spin_lock() sufficient to make ordering apparent to accesses by
	a process not holding the lock?

Z6.0+pooncelock+poonceLock+pombonce.litmus
	As above, but with smp_mb__after_spinlock() immediately
	following the spin_lock().

Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus
	Is the ordering provided by a release-acquire chain sufficient
	to make ordering apparent to accesses by a process that does not
	participate in that release-acquire chain?

A great many more litmus tests are available here:

	https://github.com/paulmckrcu/litmus
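As a plain-C sketch of the message-passing pattern referred to by the MP
entries above (the names data and flag and the two-process layout are
illustrative assumptions, not the contents of any particular litmus
file), the MP+fencewmbonceonce+fencermbonceonce variant asks whether the
fences shown here suffice to forbid the outcome r1 == 1 && r2 == 0:

	/* Both variables initially zero. */

	/* P0: writer */
	WRITE_ONCE(data, 1);
	smp_wmb();		/* order the two stores */
	WRITE_ONCE(flag, 1);

	/* P1: reader */
	r1 = READ_ONCE(flag);
	smp_rmb();		/* order the two loads */
	r2 = READ_ONCE(data);	/* is r1 == 1 && r2 == 0 possible? */

With the smp_wmb()/smp_rmb() pair, LKMM forbids that outcome; the
MP+poonceonces variant, which omits both fences, allows it.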
==================
LITMUS TEST NAMING
==================

Litmus tests are usually named based on their contents, which means that
looking at the name tells you what the litmus test does. The naming
scheme covers litmus tests having a single cycle that passes through
each process exactly once, so litmus tests not fitting this description
are named on an ad-hoc basis.

The structure of a litmus-test name is the litmus-test class, a plus
sign ("+"), and one string for each process, separated by plus signs.
The end of the name is ".litmus".

The litmus-test classes may be found in the infamous test6.pdf:

	https://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test6.pdf

Each class defines the pattern of accesses and of the variables
accessed. For example, if one process writes to a pair of variables and
the other process reads from these same variables, the corresponding
litmus-test class is "MP" (message passing), which may be found on the
left-hand end of the second row of tests on page one of test6.pdf.

The strings used to identify the actions carried out by each process are
complex due to a desire to have short(er) names. Thus, there is a tool
to generate these strings from a given litmus test's actions.
For example, consider the processes from SB+rfionceonce-poonceonces.litmus:

	P0(int *x, int *y)
	{
		int r1;
		int r2;

		WRITE_ONCE(*x, 1);
		r1 = READ_ONCE(*x);
		r2 = READ_ONCE(*y);
	}

	P1(int *x, int *y)
	{
		int r3;
		int r4;

		WRITE_ONCE(*y, 1);
		r3 = READ_ONCE(*y);
		r4 = READ_ONCE(*x);
	}

The next step is to construct a space-separated list of descriptors,
interleaving descriptions of the relation between a pair of consecutive
accesses with descriptions of the second access in the pair.

P0()'s WRITE_ONCE() is read by its first READ_ONCE(), which is a
reads-from link (rf) and internal to the P0() process. This is "rfi",
which is an abbreviation for "reads-from internal". Because some of the
tools string these abbreviations together with space characters
separating processes, the first character is capitalized, resulting in
"Rfi".

P0()'s second access is a READ_ONCE(), as opposed to (for example)
smp_load_acquire(), so next is "Once". Thus far, we have "Rfi Once".

P0()'s third access is also a READ_ONCE(), but to y rather than x. This
is related to P0()'s second access by program order ("po"), to a
different variable ("d"), and both accesses are reads ("RR"). The
resulting descriptor is "PodRR". Because P0()'s third access is
READ_ONCE(), we add another "Once" descriptor.

A from-read ("fre") relation links P0()'s third to P1()'s first access,
and the resulting descriptor is "Fre". P1()'s first access is
WRITE_ONCE(), which as before gives the descriptor "Once". The string
thus far is thus "Rfi Once PodRR Once Fre Once".

The remainder of P1() is similar to P0(), which means we add
"Rfi Once PodRR Once". Another fre links P1()'s last access to P0()'s
first access, which is WRITE_ONCE(), so we add "Fre Once". The full
string is thus:

	Rfi Once PodRR Once Fre Once Rfi Once PodRR Once Fre Once

This string can be given to the "norm7" and "classify7" tools to produce
the name:

	$ norm7 -bell linux-kernel.bell \
		Rfi Once PodRR Once Fre Once Rfi Once PodRR Once Fre Once | \
	  sed -e 's/:.*//g'
	SB+rfionceonce-poonceonces

Adding the ".litmus" suffix: SB+rfionceonce-poonceonces.litmus.

The descriptors that describe connections between consecutive accesses
within the cycle through a given litmus test can be provided by the herd
tool (Rfi, Po, Fre, and so on) or by the linux-kernel.bell file (Once,
Release, Acquire, and so on). To see the full list of descriptors,
execute the following command:

	$ diyone7 -bell linux-kernel.bell -show edges
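As a final usage note (assuming the herdtools7 suite that provides
herd7, norm7, classify7, and diyone7 is installed, and that commands are
run from the kernel's tools/memory-model directory, where
linux-kernel.bell and linux-kernel.cfg live), an individual litmus test
can be checked against the LKMM with a command along these lines:

	$ herd7 -conf linux-kernel.cfg litmus-tests/SB+rfionceonce-poonceonces.litmus

herd7 then prints the reachable final states together with a verdict on
the test's "exists" clause.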