NeilBrown 8f0db01800 rhashtable: use bit_spin_locks to protect hash bucket.
This patch changes rhashtables to use a bit_spin_lock on BIT(1) of the
bucket pointer to lock the hash chain for that bucket.
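The locking boils down to a pair of tiny helpers.  A minimal sketch, assuming
helpers along the lines of the rht_lock()/rht_unlock() added to
include/linux/rhashtable.h (the real ones carry a few more details):

#include <linux/bit_spinlock.h>
#include <linux/bottom_half.h>

struct rhash_lock_head;                         /* empty type, described below */

static inline void rht_lock(struct rhash_lock_head **bkt)
{
        local_bh_disable();                     /* bucket locks are taken with BHs disabled */
        bit_spin_lock(1, (unsigned long *)bkt); /* spin on BIT(1) of the pointer word */
}

static inline void rht_unlock(struct rhash_lock_head **bkt)
{
        bit_spin_unlock(1, (unsigned long *)bkt);
        local_bh_enable();
}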

The benefits of a bit spin_lock are:
 - no need to allocate a separate array of locks.
 - no need to have a configuration option to guide the
   choice of the size of this array
 - locking cost is often a single test-and-set in a cache line
   that will have to be loaded anyway.  When inserting at, or removing
   from, the head of the chain, the unlock is free - writing the new
   address in the bucket head implicitly clears the lock bit.
   For __rhashtable_insert_fast() we ensure this always happens
   when adding a new key (see the sketch after this list).
 - even when locking costs 2 updates (lock and unlock), they are
   in a cacheline that needs to be read anyway.
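To illustrate the head-insert/remove case: the pointer written into the bucket
head has BIT(1) clear, so publishing it is itself the release.  A sketch of an
assign-and-unlock helper in the spirit of the patch's rht_assign_unlock()
(simplified; sparse/lockdep annotations omitted):

#include <linux/rcupdate.h>
#include <linux/preempt.h>
#include <linux/bottom_half.h>

struct rhash_head;
struct rhash_lock_head;

static inline void rht_assign_unlock(struct rhash_lock_head **bkt,
                                     struct rhash_head *obj)
{
        /* rcu_assign_pointer() orders the store after the chain updates;
         * the stored value has BIT(1) clear, which is the unlocked state.
         */
        rcu_assign_pointer(*(struct rhash_head __rcu **)bkt, obj);
        preempt_enable();       /* balances preempt_disable() in bit_spin_lock() */
        local_bh_enable();      /* balances local_bh_disable() in rht_lock() */
}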

The cost of using a bit spin_lock is a little bit of code complexity,
which I think is quite manageable.

Bit spin_locks are sometimes inappropriate because they are not fair -
if multiple CPUs repeatedly contend for the same lock, one CPU can
easily be starved.  This is not a credible situation with rhashtable.
Multiple CPUs may want to repeatedly add or remove objects, but they
will typically do so at different buckets, so they will attempt to
acquire different locks.

As we have more bit-locks than we previously had spinlocks (by at
least a factor of two) we can expect slightly less contention to
go with the slightly better cache behavior and reduced memory
consumption.

To enhance type checking, a new struct is introduced to represent the
pointer plus lock-bit that is stored in the bucket-table.  This is
"struct rhash_lock_head" and is empty.  A pointer to this needs to be
cast to either an unsigned long, or a "struct rhash_head *", to be
useful.  Variables of this type are most often called "bkt".
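A sketch of the idea (the name follows the rht_ptr() accessor added by the
patch, though its exact form there differs):

#include <linux/bits.h>

struct rhash_head;

struct rhash_lock_head {};      /* empty: exists only so the compiler can
                                 * tell bucket pointers from chain pointers */

/* The bucket word holds the chain head with BIT(1) doubling as the lock
 * bit; mask it off before following the pointer.
 */
static inline struct rhash_head *rht_ptr(const struct rhash_lock_head *bkt)
{
        return (struct rhash_head *)((unsigned long)bkt & ~BIT(1));
}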

Previously "pprev" would sometimes point to a bucket, and sometimes a
->next pointer in an rhash_head.  As these are now different types,
pprev is NULL when it would have pointed to the bucket. In that case,
'bkt' is used, together with the correct locking protocol.
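A sketch of that pattern at unlink time, reusing the helpers sketched above
(unlink_from_bucket() is a made-up name for illustration; the real fast-path
code is more involved):

/* Unlink an object whose predecessor is described by pprev/bkt.
 * pprev == NULL means we are still at the bucket head, so the bucket
 * itself is written - and that write also drops the bucket bit-lock.
 */
static void unlink_from_bucket(struct rhash_lock_head **bkt,
                               struct rhash_head __rcu **pprev,
                               struct rhash_head *next)
{
        if (pprev) {
                rcu_assign_pointer(*pprev, next);       /* mid-chain: fix up ->next ... */
                rht_unlock(bkt);                        /* ... then unlock explicitly */
        } else {
                rht_assign_unlock(bkt, next);           /* head: one store unlinks and unlocks */
        }
}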

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-07 19:12:12 -07:00
compat.c License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
ipc_sysctl.c ipc: IPCMNI limit check for semmni 2018-10-31 08:54:14 -07:00
Makefile License cleanup: add SPDX GPL-2.0 license identifier to files with no license 2017-11-02 11:10:55 +01:00
mq_sysctl.c ipc: convert use of typedef ctl_table to struct ctl_table 2014-06-06 16:08:16 -07:00
mqueue.c Merge branch 'work.mount' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs 2019-03-12 14:08:19 -07:00
msg.c ipc: rename old-style shmctl/semctl/msgctl syscalls 2019-01-25 17:22:50 +01:00
msgutil.c ipc: convert ipc_namespace.count from atomic_t to refcount_t 2017-09-08 18:26:51 -07:00
namespace.c ipc: Convert mqueue fs to fs_context 2019-02-28 03:29:29 -05:00
sem.c ipc/sem.c: replace kvmalloc/memset with kvzalloc and use struct_size 2019-03-07 18:32:02 -08:00
shm.c ipc: rename old-style shmctl/semctl/msgctl syscalls 2019-01-25 17:22:50 +01:00
syscall.c ipc: rename old-style shmctl/semctl/msgctl syscalls 2019-01-25 17:22:50 +01:00
util.c rhashtable: use bit_spin_locks to protect hash bucket. 2019-04-07 19:12:12 -07:00
util.h ipc: rename old-style shmctl/semctl/msgctl syscalls 2019-01-25 17:22:50 +01:00