linux_dsm_epyc7002/include/linux/percpu_ida.h
Kent Overstreet 798ab48eec idr: Percpu ida
Percpu frontend for allocating ids. With percpu allocation (at least,
any scheme that actually scales), it's impossible to guarantee that all
nr_tags can always be allocated - typically, some will be stuck on a
remote percpu freelist where the current job can't get at them.

We do guarantee that it will always be possible to allocate at least
(nr_tags / 2) tags - this is done by keeping track of which and how many
cpus have tags on their percpu freelists. On allocation failure, if
enough cpus have tags that (nr_tags / 2) tags could potentially be stuck
on remote percpu freelists, we pick a remote cpu at random to steal
from.
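
To make the heuristic concrete, here is a minimal user-space sketch of
the steal decision. PCPU_SIZE and the function name are illustrative
stand-ins, not the kernel's identifiers:

	/* Hedged model of the steal heuristic, NOT the kernel code. */
	#include <stdbool.h>
	#include <stdio.h>

	#define PCPU_SIZE 16	/* assumed per-cpu freelist capacity */

	/* Steal only while the flagged cpus could hide nr_tags / 2 tags. */
	static bool worth_stealing(unsigned cpus_have_tags, unsigned nr_tags)
	{
		return cpus_have_tags * PCPU_SIZE > nr_tags / 2;
	}

	int main(void)
	{
		unsigned nr_tags = 128;
		unsigned cpus;

		/* With 16-entry freelists, the heuristic fires at 5+ cpus. */
		for (cpus = 0; cpus <= 8; cpus++)
			printf("cpus_have_tags=%u -> steal? %s\n",
			       cpus,
			       worth_stealing(cpus, nr_tags) ? "yes" : "no");
		return 0;
	}

The real steal path also cycles pool->cpu_last_stolen through the
cpus_have_tags bitmap rather than picking truly at random (see the
cpu_last_stolen comment in the header below).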

Note that there's no cpu hotplug notifier - we don't care, because
steal_tags() will eventually get a downed cpu's tags. We _could_ satisfy
more allocations if we had a notifier - but we'll still meet our
guarantees and it's absolutely not a correctness issue, so I don't think
it's worth the extra code.

From akpm:

    "It looks OK to me (that's as close as I get to an ack :))

v6 changes:
  - Add #include <linux/cpumask.h> to include/linux/percpu_ida.h to
    make alpha/arc builds happy (Fengguang)
  - Move second (cpu >= nr_cpu_ids) check inside of first check scope
    in steal_tags() (akpm + nab)

v5 changes:
  - Change percpu_ida->cpus_have_tags to cpumask_t (kmo + akpm)
  - Add comment for percpu_ida_cpu->lock + ->nr_free (kmo + akpm)
  - Convert steal_tags() to use cpumask_weight() + cpumask_next() +
    cpumask_first() + cpumask_clear_cpu() (kmo + akpm)
  - Add comment for alloc_global_tags() (kmo + akpm)
  - Convert percpu_ida_alloc() to use cpumask_set_cpu() (kmo + akpm)
  - Convert percpu_ida_free() to use cpumask_set_cpu() (kmo + akpm)
  - Drop percpu_ida->cpus_have_tags allocation in percpu_ida_init()
    (kmo + akpm)
  - Drop percpu_ida->cpus_have_tags kfree in percpu_ida_destroy()
    (kmo + akpm)
  - Add comment for percpu_ida_alloc @ gfp (kmo + akpm)
  - Move to percpu_ida.c + percpu_ida.h (kmo + akpm + nab)

v4 changes:
  - Fix tags.c reference in percpu_ida_init (akpm)

Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2013-09-09 14:29:15 -07:00

#ifndef __PERCPU_IDA_H__
#define __PERCPU_IDA_H__

#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/init.h>
#include <linux/spinlock_types.h>
#include <linux/wait.h>
#include <linux/cpumask.h>

struct percpu_ida_cpu;

struct percpu_ida {
	/*
	 * number of tags available to be allocated, as passed to
	 * percpu_ida_init()
	 */
	unsigned			nr_tags;

	struct percpu_ida_cpu __percpu	*tag_cpu;

	/*
	 * Bitmap of cpus that (may) have tags on their percpu freelists:
	 * steal_tags() uses this to decide when to steal tags, and which cpus
	 * to try stealing from.
	 *
	 * It's ok for a freelist to be empty when its bit is set - steal_tags()
	 * will just keep looking - but the bitmap _must_ be set whenever a
	 * percpu freelist does have tags.
	 */
	cpumask_t			cpus_have_tags;

	struct {
		spinlock_t		lock;

		/*
		 * When we go to steal tags from another cpu (see
		 * steal_tags()), we want to pick a cpu at random. Cycling
		 * through them every time we steal is a bit easier and more
		 * or less equivalent:
		 */
		unsigned		cpu_last_stolen;

		/* For sleeping on allocation failure */
		wait_queue_head_t	wait;

		/*
		 * Global freelist - it's a stack where nr_free points to the
		 * top
		 */
		unsigned		nr_free;
		unsigned		*freelist;
	} ____cacheline_aligned_in_smp;
};

int percpu_ida_alloc(struct percpu_ida *pool, gfp_t gfp);
void percpu_ida_free(struct percpu_ida *pool, unsigned tag);

void percpu_ida_destroy(struct percpu_ida *pool);
int percpu_ida_init(struct percpu_ida *pool, unsigned long nr_tags);

#endif /* __PERCPU_IDA_H__ */
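
For orientation, a hedged usage sketch of the API declared above. The
pool name, nr_commands, and the surrounding functions are hypothetical;
only the four percpu_ida_* calls and their signatures come from the
header itself:

	/* Hypothetical driver-style usage; names are illustrative only. */
	#include <linux/percpu_ida.h>
	#include <linux/gfp.h>

	static struct percpu_ida cmd_tags;	/* assumed pool: one tag per command slot */

	static int example_setup(unsigned long nr_commands)
	{
		/* Sets up the global freelist and the percpu freelists. */
		return percpu_ida_init(&cmd_tags, nr_commands);
	}

	static void example_issue(void)
	{
		/* With GFP_KERNEL this may sleep until a tag is freed. */
		int tag = percpu_ida_alloc(&cmd_tags, GFP_KERNEL);

		if (tag < 0)
			return;	/* only reachable with a non-sleeping gfp */

		/* ... use 'tag' as an index into preallocated per-command
		 * state ... */

		percpu_ida_free(&cmd_tags, tag);
	}

	static void example_teardown(void)
	{
		percpu_ida_destroy(&cmd_tags);
	}

On success the returned tag is an id below nr_tags, which is what makes
it usable directly as an array index.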