0ade34c370
We've measured that we spend ~0.6% of sys cpu time in cpumask_next_and().
It's essentially a joined iteration in search for a non-zero bit, which is
currently implemented as a lookup join (find a nonzero bit on the lhs,
look up the rhs to see if it's set there).

Implement a direct join (find a nonzero bit on the incrementally built
join). Also add generic bitmap benchmarks in the new `test_find_bit`
module for the new function (see `find_next_and_bit` in [2] and [3]
below).

For cpumask_next_and, direct benchmarking shows that it's 1.17x to 14x
faster with a geometric mean of 2.1 on 32 CPUs [1]. No impact on memory
usage. Note that on Arm, the new pure-C implementation still outperforms
the old one that uses a mix of C and asm (`find_next_bit`) [3].

[1] Approximate benchmark code:

```
  unsigned long src1p[nr_cpumask_longs] = {pattern1};
  unsigned long src2p[nr_cpumask_longs] = {pattern2};
  for (/*a bunch of repetitions*/) {
    for (int n = -1; n <= nr_cpu_ids; ++n) {
      asm volatile("" : "+rm"(src1p)); // prevent any optimization
      asm volatile("" : "+rm"(src2p));
      unsigned long result = cpumask_next_and(n, src1p, src2p);
      asm volatile("" : "+rm"(result));
    }
  }
```

Results:

  pattern1    pattern2    time_before/time_after
  0x0000ffff  0x0000ffff  1.65
  0x0000ffff  0x00005555  2.24
  0x0000ffff  0x00001111  2.94
  0x0000ffff  0x00000000  14.0
  0x00005555  0x0000ffff  1.67
  0x00005555  0x00005555  1.71
  0x00005555  0x00001111  1.90
  0x00005555  0x00000000  6.58
  0x00001111  0x0000ffff  1.46
  0x00001111  0x00005555  1.49
  0x00001111  0x00001111  1.45
  0x00001111  0x00000000  3.10
  0x00000000  0x0000ffff  1.18
  0x00000000  0x00005555  1.18
  0x00000000  0x00001111  1.17
  0x00000000  0x00000000  1.25
  ---------------------------------
  geo.mean                2.06

[2] test_find_next_bit, X86 (skylake)

  [ 3913.477422] Start testing find_bit() with random-filled bitmap
  [ 3913.477847] find_next_bit: 160868 cycles, 16484 iterations
  [ 3913.477933] find_next_zero_bit: 169542 cycles, 16285 iterations
  [ 3913.478036] find_last_bit: 201638 cycles, 16483 iterations
  [ 3913.480214] find_first_bit: 4353244 cycles, 16484 iterations
  [ 3913.480216] Start testing find_next_and_bit() with random-filled bitmap
  [ 3913.481074] find_next_and_bit: 89604 cycles, 8216 iterations
  [ 3913.481075] Start testing find_bit() with sparse bitmap
  [ 3913.481078] find_next_bit: 2536 cycles, 66 iterations
  [ 3913.481252] find_next_zero_bit: 344404 cycles, 32703 iterations
  [ 3913.481255] find_last_bit: 2006 cycles, 66 iterations
  [ 3913.481265] find_first_bit: 17488 cycles, 66 iterations
  [ 3913.481266] Start testing find_next_and_bit() with sparse bitmap
  [ 3913.481272] find_next_and_bit: 764 cycles, 1 iterations

[3] test_find_next_bit, arm (v7 odroid XU3)

  [ 267.206928] Start testing find_bit() with random-filled bitmap
  [ 267.214752] find_next_bit: 4474 cycles, 16419 iterations
  [ 267.221850] find_next_zero_bit: 5976 cycles, 16350 iterations
  [ 267.229294] find_last_bit: 4209 cycles, 16419 iterations
  [ 267.279131] find_first_bit: 1032991 cycles, 16420 iterations
  [ 267.286265] Start testing find_next_and_bit() with random-filled bitmap
  [ 267.302386] find_next_and_bit: 2290 cycles, 8140 iterations
  [ 267.309422] Start testing find_bit() with sparse bitmap
  [ 267.316054] find_next_bit: 191 cycles, 66 iterations
  [ 267.322726] find_next_zero_bit: 8758 cycles, 32703 iterations
  [ 267.329803] find_last_bit: 84 cycles, 66 iterations
  [ 267.336169] find_first_bit: 4118 cycles, 66 iterations
  [ 267.342627] Start testing find_next_and_bit() with sparse bitmap
  [ 267.356919] find_next_and_bit: 91 cycles, 1 iterations

[courbet@google.com: v6]
  Link: http://lkml.kernel.org/r/20171129095715.23430-1-courbet@google.com
[geert@linux-m68k.org: m68k/bitops: always include <asm-generic/bitops/find.h>]
  Link: http://lkml.kernel.org/r/1512556816-28627-1-git-send-email-geert@linux-m68k.org
Link: http://lkml.kernel.org/r/20171128131334.23491-1-courbet@google.com
Signed-off-by: Clement Courbet <courbet@google.com>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Yury Norov <ynorov@caviumnetworks.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
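For illustration, here is a minimal user-space sketch contrasting the old lookup join with the new direct join (an editorial addition, not the kernel code: the helpers `next_bit()`, `next_and_lookup()` and `next_and_direct()` are hypothetical, and `__builtin_ctzl()` assumes GCC/Clang; the kernel's real `find_next_and_bit()` is more general):

```
#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Simplified find_next_bit(): next set bit in addr at position >= start. */
static size_t next_bit(const unsigned long *addr, size_t nbits, size_t start)
{
	size_t nwords = (nbits + BITS_PER_LONG - 1) / BITS_PER_LONG;
	size_t i, pos;
	unsigned long w;

	if (start >= nbits)
		return nbits;
	i = start / BITS_PER_LONG;
	w = addr[i] & (~0UL << (start % BITS_PER_LONG));
	while (!w) {
		if (++i >= nwords)
			return nbits;
		w = addr[i];
	}
	pos = i * BITS_PER_LONG + (size_t)__builtin_ctzl(w); /* GCC/Clang builtin */
	return pos < nbits ? pos : nbits;
}

/* Lookup join (old strategy): find a nonzero bit on the lhs, then look up
 * the rhs to see whether it is set there too. */
static size_t next_and_lookup(const unsigned long *src1,
			      const unsigned long *src2,
			      size_t nbits, size_t start)
{
	size_t i;

	for (i = next_bit(src1, nbits, start); i < nbits;
	     i = next_bit(src1, nbits, i + 1))
		if ((src2[i / BITS_PER_LONG] >> (i % BITS_PER_LONG)) & 1)
			return i;
	return nbits;
}

/* Direct join (new strategy): AND the operands word by word and scan the
 * incrementally built join, so each word costs at most one bit-scan. */
static size_t next_and_direct(const unsigned long *src1,
			      const unsigned long *src2,
			      size_t nbits, size_t start)
{
	size_t nwords = (nbits + BITS_PER_LONG - 1) / BITS_PER_LONG;
	size_t i, pos;
	unsigned long w;

	if (start >= nbits)
		return nbits;
	i = start / BITS_PER_LONG;
	w = (src1[i] & src2[i]) & (~0UL << (start % BITS_PER_LONG));
	while (!w) {
		if (++i >= nwords)
			return nbits;
		w = src1[i] & src2[i];
	}
	pos = i * BITS_PER_LONG + (size_t)__builtin_ctzl(w);
	return pos < nbits ? pos : nbits;
}
```

The win comes from the inner loop: the lookup join pays one bit-scan per set bit of the lhs (including misses), while the direct join pays at most one bit-scan per word of the joined bitmap.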
458 lines
17 KiB
C
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_BITMAP_H
#define __LINUX_BITMAP_H

#ifndef __ASSEMBLY__

#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/string.h>
#include <linux/kernel.h>

/*
 * bitmaps provide bit arrays that consume one or more unsigned
 * longs.  The bitmap interface and available operations are listed
 * here, in bitmap.h.
 *
 * Function implementations generic to all architectures are in
 * lib/bitmap.c.  Function implementations that are architecture
 * specific are in various include/asm-<arch>/bitops.h headers
 * and other arch/<arch> specific files.
 *
 * See lib/bitmap.c for more details.
 */

/**
 * DOC: bitmap overview
 *
 * The available bitmap operations and their rough meaning in the
 * case that the bitmap is a single unsigned long are thus:
 *
 * Note that nbits should always be a compile-time evaluable constant.
 * Otherwise many inlines will generate horrible code.
 *
 * ::
 *
 *  bitmap_zero(dst, nbits)                     *dst = 0UL
 *  bitmap_fill(dst, nbits)                     *dst = ~0UL
 *  bitmap_copy(dst, src, nbits)                *dst = *src
 *  bitmap_and(dst, src1, src2, nbits)          *dst = *src1 & *src2
 *  bitmap_or(dst, src1, src2, nbits)           *dst = *src1 | *src2
 *  bitmap_xor(dst, src1, src2, nbits)          *dst = *src1 ^ *src2
 *  bitmap_andnot(dst, src1, src2, nbits)       *dst = *src1 & ~(*src2)
 *  bitmap_complement(dst, src, nbits)          *dst = ~(*src)
 *  bitmap_equal(src1, src2, nbits)             Are *src1 and *src2 equal?
 *  bitmap_intersects(src1, src2, nbits)        Do *src1 and *src2 overlap?
 *  bitmap_subset(src1, src2, nbits)            Is *src1 a subset of *src2?
 *  bitmap_empty(src, nbits)                    Are all bits zero in *src?
 *  bitmap_full(src, nbits)                     Are all bits set in *src?
 *  bitmap_weight(src, nbits)                   Hamming Weight: number of set bits
 *  bitmap_set(dst, pos, nbits)                 Set specified bit area
 *  bitmap_clear(dst, pos, nbits)               Clear specified bit area
 *  bitmap_find_next_zero_area(buf, len, pos, n, mask)  Find bit free area
 *  bitmap_find_next_zero_area_off(buf, len, pos, n, mask, off)  As above, with an additional alignment offset
 *  bitmap_shift_right(dst, src, n, nbits)      *dst = *src >> n
 *  bitmap_shift_left(dst, src, n, nbits)       *dst = *src << n
 *  bitmap_remap(dst, src, old, new, nbits)     *dst = map(old, new)(src)
 *  bitmap_bitremap(oldbit, old, new, nbits)    newbit = map(old, new)(oldbit)
 *  bitmap_onto(dst, orig, relmap, nbits)       *dst = orig relative to relmap
 *  bitmap_fold(dst, orig, sz, nbits)           dst bits = orig bits mod sz
 *  bitmap_parse(buf, buflen, dst, nbits)       Parse bitmap dst from kernel buf
 *  bitmap_parse_user(ubuf, ulen, dst, nbits)   Parse bitmap dst from user buf
 *  bitmap_parselist(buf, dst, nbits)           Parse bitmap dst from kernel buf
 *  bitmap_parselist_user(buf, dst, nbits)      Parse bitmap dst from user buf
 *  bitmap_find_free_region(bitmap, bits, order)  Find and allocate bit region
 *  bitmap_release_region(bitmap, pos, order)   Free specified bit region
 *  bitmap_allocate_region(bitmap, pos, order)  Allocate specified bit region
 *  bitmap_from_arr32(dst, buf, nbits)          Copy nbits from u32[] buf to dst
 *  bitmap_to_arr32(buf, src, nbits)            Copy nbits from src to u32[] buf
 *
 * Note, bitmap_zero() and bitmap_fill() operate over whole unsigned
 * longs, that is, bits beyond nbits, up to the unsigned long boundary,
 * will be zeroed or filled as well.  Consider using bitmap_clear() or
 * bitmap_set() for explicit zeroing or filling respectively.
 */

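/*
 * Example (editorial illustration, not part of the original header):
 * typical use of the operations above, with DECLARE_BITMAP() from
 * linux/types.h:
 *
 *	DECLARE_BITMAP(mask, 128);
 *
 *	bitmap_zero(mask, 128);			// clear all 128 bits
 *	bitmap_set(mask, 8, 16);		// set bits 8..23
 *	if (!bitmap_empty(mask, 128))
 *		count = bitmap_weight(mask, 128);	// count == 16
 */
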
/**
 * DOC: bitmap bitops
 *
 * Also the following operations in asm/bitops.h apply to bitmaps.::
 *
 *  set_bit(bit, addr)                  *addr |= bit
 *  clear_bit(bit, addr)                *addr &= ~bit
 *  change_bit(bit, addr)               *addr ^= bit
 *  test_bit(bit, addr)                 Is bit set in *addr?
 *  test_and_set_bit(bit, addr)         Set bit and return old value
 *  test_and_clear_bit(bit, addr)       Clear bit and return old value
 *  test_and_change_bit(bit, addr)      Change bit and return old value
 *  find_first_zero_bit(addr, nbits)    Position first zero bit in *addr
 *  find_first_bit(addr, nbits)         Position first set bit in *addr
 *  find_next_zero_bit(addr, nbits, bit)
 *                                      Position next zero bit in *addr >= bit
 *  find_next_bit(addr, nbits, bit)     Position next set bit in *addr >= bit
 *  find_next_and_bit(addr1, addr2, nbits, bit)
 *                                      Same as find_next_bit, but in
 *                                      (*addr1 & *addr2)
 *
 */

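/*
 * Example (editorial illustration): walking the intersection of two
 * bitmaps with find_next_and_bit(), where handle() stands for arbitrary
 * per-bit work:
 *
 *	unsigned long bit;
 *
 *	for (bit = find_next_and_bit(addr1, addr2, nbits, 0);
 *	     bit < nbits;
 *	     bit = find_next_and_bit(addr1, addr2, nbits, bit + 1))
 *		handle(bit);	// runs once per bit set in both bitmaps
 */
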
/**
 * DOC: declare bitmap
 * The DECLARE_BITMAP(name,bits) macro, in linux/types.h, can be used
 * to declare an array named 'name' of just enough unsigned longs to
 * contain all bit positions from 0 to 'bits' - 1.
 */

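/*
 * Example (editorial illustration): DECLARE_BITMAP(foo, 130) expands to
 *
 *	unsigned long foo[BITS_TO_LONGS(130)];
 *
 * i.e. 3 longs (192 bits) on a 64-bit kernel and 5 longs (160 bits) on a
 * 32-bit one; bit positions 0..129 are valid.
 */
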
/*
 * lib/bitmap.c provides these functions:
 */

extern int __bitmap_empty(const unsigned long *bitmap, unsigned int nbits);
extern int __bitmap_full(const unsigned long *bitmap, unsigned int nbits);
extern int __bitmap_equal(const unsigned long *bitmap1,
			  const unsigned long *bitmap2, unsigned int nbits);
extern void __bitmap_complement(unsigned long *dst, const unsigned long *src,
				unsigned int nbits);
extern void __bitmap_shift_right(unsigned long *dst, const unsigned long *src,
				 unsigned int shift, unsigned int nbits);
extern void __bitmap_shift_left(unsigned long *dst, const unsigned long *src,
				unsigned int shift, unsigned int nbits);
extern int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
			const unsigned long *bitmap2, unsigned int nbits);
extern void __bitmap_or(unsigned long *dst, const unsigned long *bitmap1,
			const unsigned long *bitmap2, unsigned int nbits);
extern void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
			 const unsigned long *bitmap2, unsigned int nbits);
extern int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
			   const unsigned long *bitmap2, unsigned int nbits);
extern int __bitmap_intersects(const unsigned long *bitmap1,
			       const unsigned long *bitmap2, unsigned int nbits);
extern int __bitmap_subset(const unsigned long *bitmap1,
			   const unsigned long *bitmap2, unsigned int nbits);
extern int __bitmap_weight(const unsigned long *bitmap, unsigned int nbits);
extern void __bitmap_set(unsigned long *map, unsigned int start, int len);
extern void __bitmap_clear(unsigned long *map, unsigned int start, int len);

extern unsigned long bitmap_find_next_zero_area_off(unsigned long *map,
						    unsigned long size,
						    unsigned long start,
						    unsigned int nr,
						    unsigned long align_mask,
						    unsigned long align_offset);

/**
 * bitmap_find_next_zero_area - find a contiguous aligned zero area
 * @map: The address to base the search on
 * @size: The bitmap size in bits
 * @start: The bitnumber to start searching at
 * @nr: The number of zeroed bits we're looking for
 * @align_mask: Alignment mask for zero area
 *
 * The @align_mask should be one less than a power of 2; the effect is that
 * the bit offsets of all zero areas this function finds are multiples of that
 * power of 2.  A @align_mask of 0 means no alignment is required.
 */
static inline unsigned long
bitmap_find_next_zero_area(unsigned long *map,
			   unsigned long size,
			   unsigned long start,
			   unsigned int nr,
			   unsigned long align_mask)
{
	return bitmap_find_next_zero_area_off(map, size, start, nr,
					      align_mask, 0);
}

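/*
 * Example (editorial illustration): claiming a 4-bit region aligned to 4
 * bits out of a simple allocator bitmap:
 *
 *	pos = bitmap_find_next_zero_area(map, size, 0, 4, 3);
 *	if (pos >= size)
 *		return -ENOSPC;		// no aligned gap of 4 zero bits
 *	bitmap_set(map, pos, 4);	// mark bits pos..pos+3 as allocated
 *
 * @align_mask == 3 (one less than 4) forces pos to be a multiple of 4.
 */
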
extern int __bitmap_parse(const char *buf, unsigned int buflen, int is_user,
			  unsigned long *dst, int nbits);
extern int bitmap_parse_user(const char __user *ubuf, unsigned int ulen,
			     unsigned long *dst, int nbits);
extern int bitmap_parselist(const char *buf, unsigned long *maskp,
			    int nmaskbits);
extern int bitmap_parselist_user(const char __user *ubuf, unsigned int ulen,
				 unsigned long *dst, int nbits);
extern void bitmap_remap(unsigned long *dst, const unsigned long *src,
		const unsigned long *old, const unsigned long *new, unsigned int nbits);
extern int bitmap_bitremap(int oldbit,
		const unsigned long *old, const unsigned long *new, int bits);
extern void bitmap_onto(unsigned long *dst, const unsigned long *orig,
		const unsigned long *relmap, unsigned int bits);
extern void bitmap_fold(unsigned long *dst, const unsigned long *orig,
		unsigned int sz, unsigned int nbits);
extern int bitmap_find_free_region(unsigned long *bitmap, unsigned int bits, int order);
extern void bitmap_release_region(unsigned long *bitmap, unsigned int pos, int order);
extern int bitmap_allocate_region(unsigned long *bitmap, unsigned int pos, int order);

#ifdef __BIG_ENDIAN
extern void bitmap_copy_le(unsigned long *dst, const unsigned long *src, unsigned int nbits);
#else
#define bitmap_copy_le bitmap_copy
#endif
extern unsigned int bitmap_ord_to_pos(const unsigned long *bitmap, unsigned int ord, unsigned int nbits);
extern int bitmap_print_to_pagebuf(bool list, char *buf,
				   const unsigned long *maskp, int nmaskbits);

#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1)))
#define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))

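/*
 * Example (editorial illustration): on a 64-bit kernel, for a 68-bit
 * bitmap,
 *
 *	BITMAP_FIRST_WORD_MASK(68) == ~0UL << 4   (bits 4..63 of a word)
 *	BITMAP_LAST_WORD_MASK(68)  == ~0UL >> 60  (0xf: bits 0..3)
 *
 * so operations touch all of word 0 and only the low 4 bits of word 1.
 */
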
#define small_const_nbits(nbits) \
	(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG)

static inline void bitmap_zero(unsigned long *dst, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		*dst = 0UL;
	else {
		unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
		memset(dst, 0, len);
	}
}

static inline void bitmap_fill(unsigned long *dst, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		*dst = ~0UL;
	else {
		unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
		memset(dst, 0xff, len);
	}
}

static inline void bitmap_copy(unsigned long *dst, const unsigned long *src,
			       unsigned int nbits)
{
	if (small_const_nbits(nbits))
		*dst = *src;
	else {
		unsigned int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
		memcpy(dst, src, len);
	}
}

/*
 * Copy bitmap and clear tail bits in the last word.
 */
static inline void bitmap_copy_clear_tail(unsigned long *dst,
		const unsigned long *src, unsigned int nbits)
{
	bitmap_copy(dst, src, nbits);
	if (nbits % BITS_PER_LONG)
		dst[nbits / BITS_PER_LONG] &= BITMAP_LAST_WORD_MASK(nbits);
}

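/*
 * Example (editorial illustration): with nbits == 4 and *src == 0xffUL,
 * bitmap_copy_clear_tail() leaves *dst == 0x0fUL: whole words are
 * copied, then bits 4..BITS_PER_LONG-1 beyond nbits are cleared by
 * BITMAP_LAST_WORD_MASK(4) == 0xf.
 */
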
/*
 * On 32-bit systems bitmaps are represented as u32 arrays internally, and
 * therefore conversion is not needed when copying data from/to arrays of u32.
 */
#if BITS_PER_LONG == 64
extern void bitmap_from_arr32(unsigned long *bitmap, const u32 *buf,
			      unsigned int nbits);
extern void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap,
			    unsigned int nbits);
#else
#define bitmap_from_arr32(bitmap, buf, nbits)			\
	bitmap_copy_clear_tail((unsigned long *) (bitmap),	\
			(const unsigned long *) (buf), (nbits))
#define bitmap_to_arr32(buf, bitmap, nbits)			\
	bitmap_copy_clear_tail((unsigned long *) (buf),		\
			(const unsigned long *) (bitmap), (nbits))
#endif

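/*
 * Example (editorial illustration): importing a 48-bit mask delivered as
 * two u32 words (e.g. across a 32-bit ABI boundary):
 *
 *	u32 words[2] = { 0xffffffff, 0x0000ffff };
 *	DECLARE_BITMAP(mask, 48);
 *
 *	bitmap_from_arr32(mask, words, 48);	// bits 0..47 of mask set
 *
 * On 64-bit kernels this is a real conversion; on 32-bit it reduces to
 * bitmap_copy_clear_tail(), as the layouts already match.
 */
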
static inline int bitmap_and(unsigned long *dst, const unsigned long *src1,
			     const unsigned long *src2, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		return (*dst = *src1 & *src2 & BITMAP_LAST_WORD_MASK(nbits)) != 0;
	return __bitmap_and(dst, src1, src2, nbits);
}

static inline void bitmap_or(unsigned long *dst, const unsigned long *src1,
			     const unsigned long *src2, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		*dst = *src1 | *src2;
	else
		__bitmap_or(dst, src1, src2, nbits);
}

static inline void bitmap_xor(unsigned long *dst, const unsigned long *src1,
			      const unsigned long *src2, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		*dst = *src1 ^ *src2;
	else
		__bitmap_xor(dst, src1, src2, nbits);
}

static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1,
				const unsigned long *src2, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		return (*dst = *src1 & ~(*src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0;
	return __bitmap_andnot(dst, src1, src2, nbits);
}

static inline void bitmap_complement(unsigned long *dst, const unsigned long *src,
				     unsigned int nbits)
{
	if (small_const_nbits(nbits))
		*dst = ~(*src);
	else
		__bitmap_complement(dst, src, nbits);
}

static inline int bitmap_equal(const unsigned long *src1,
			       const unsigned long *src2, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
	if (__builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
		return !memcmp(src1, src2, nbits / 8);
	return __bitmap_equal(src1, src2, nbits);
}

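/*
 * Example (editorial illustration): bitmap_and() and bitmap_andnot()
 * return whether any bit is set in the result, so computing an
 * intersection and testing it for emptiness is a single call:
 *
 *	if (bitmap_and(dst, a, b, nbits))
 *		...	// dst = a & b, and at least one bit is set
 */
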
static inline int bitmap_intersects(const unsigned long *src1,
				    const unsigned long *src2, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		return ((*src1 & *src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0;
	else
		return __bitmap_intersects(src1, src2, nbits);
}

static inline int bitmap_subset(const unsigned long *src1,
				const unsigned long *src2, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		return !((*src1 & ~(*src2)) & BITMAP_LAST_WORD_MASK(nbits));
	else
		return __bitmap_subset(src1, src2, nbits);
}

static inline int bitmap_empty(const unsigned long *src, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		return !(*src & BITMAP_LAST_WORD_MASK(nbits));

	return find_first_bit(src, nbits) == nbits;
}

static inline int bitmap_full(const unsigned long *src, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		return !(~(*src) & BITMAP_LAST_WORD_MASK(nbits));

	return find_first_zero_bit(src, nbits) == nbits;
}

static __always_inline int bitmap_weight(const unsigned long *src, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		return hweight_long(*src & BITMAP_LAST_WORD_MASK(nbits));
	return __bitmap_weight(src, nbits);
}

static __always_inline void bitmap_set(unsigned long *map, unsigned int start,
		unsigned int nbits)
{
	if (__builtin_constant_p(nbits) && nbits == 1)
		__set_bit(start, map);
	else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
		 __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
		memset((char *)map + start / 8, 0xff, nbits / 8);
	else
		__bitmap_set(map, start, nbits);
}

static __always_inline void bitmap_clear(unsigned long *map, unsigned int start,
		unsigned int nbits)
{
	if (__builtin_constant_p(nbits) && nbits == 1)
		__clear_bit(start, map);
	else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
		 __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
		memset((char *)map + start / 8, 0, nbits / 8);
	else
		__bitmap_clear(map, start, nbits);
}

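/*
 * Example (editorial illustration): with compile-time byte-aligned
 * arguments, bitmap_set() compiles down to a plain memset():
 *
 *	bitmap_set(map, 8, 24);	// memset((char *)map + 1, 0xff, 3)
 *	bitmap_set(map, 5, 24);	// unaligned start: falls back to __bitmap_set()
 */
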
static inline void bitmap_shift_right(unsigned long *dst, const unsigned long *src,
		unsigned int shift, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		*dst = (*src & BITMAP_LAST_WORD_MASK(nbits)) >> shift;
	else
		__bitmap_shift_right(dst, src, shift, nbits);
}

static inline void bitmap_shift_left(unsigned long *dst, const unsigned long *src,
		unsigned int shift, unsigned int nbits)
{
	if (small_const_nbits(nbits))
		*dst = (*src << shift) & BITMAP_LAST_WORD_MASK(nbits);
	else
		__bitmap_shift_left(dst, src, shift, nbits);
}

static inline int bitmap_parse(const char *buf, unsigned int buflen,
			       unsigned long *maskp, int nmaskbits)
{
	return __bitmap_parse(buf, buflen, 0, maskp, nmaskbits);
}

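/*
 * Example (editorial illustration): for a single-word bitmap with
 * nbits == 8 and *src == 0xb0 (0b10110000):
 *
 *	bitmap_shift_right(dst, src, 4, 8);	// *dst == 0x0b
 *	bitmap_shift_left(dst, src, 2, 8);	// *dst == 0xc0 (truncated to 8 bits)
 */
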
/**
 * BITMAP_FROM_U64() - Represent u64 value in the format suitable for bitmap.
 * @n: u64 value
 *
 * Linux bitmaps are internally arrays of unsigned longs, i.e. 32-bit
 * integers in a 32-bit environment, and 64-bit integers in a 64-bit one.
 *
 * There are four combinations of endianness and length of the word in Linux
 * ABIs: LE64, BE64, LE32 and BE32.
 *
 * On 64-bit kernels 64-bit LE and BE numbers are naturally ordered in
 * bitmaps and therefore don't require any special handling.
 *
 * On 32-bit kernels the 32-bit LE ABI orders the lo word of a 64-bit number
 * in memory before the hi word, and the 32-bit BE ABI orders the hi word
 * before the lo word.  The bitmap on the other hand is represented as an
 * array of 32-bit words, and the position of bit N may be calculated as:
 * word #(N/32) and bit #(N%32) in that word.  For example, bit #42 is
 * bit #10 of word #1 (the 2nd word).  This matches the 32-bit LE ABI, and
 * we can simply let the compiler store 64-bit values in memory as it
 * usually does.  But for BE we need to swap hi and lo words manually.
 *
 * With all that, the macro BITMAP_FROM_U64() does explicit reordering of hi
 * and lo parts of u64.  For LE32 it does nothing, and for BE environments it
 * swaps hi and lo words, as is expected by bitmap.
 */
#if __BITS_PER_LONG == 64
#define BITMAP_FROM_U64(n) (n)
#else
#define BITMAP_FROM_U64(n) ((unsigned long) ((u64)(n) & ULONG_MAX)), \
				((unsigned long) ((u64)(n) >> 32))
#endif

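/*
 * Example (editorial illustration): initializing a bitmap from 64-bit
 * constants portably across word sizes:
 *
 *	static const unsigned long mask[] = {
 *		BITMAP_FROM_U64(0x00000000ffffffffULL),
 *		BITMAP_FROM_U64(0xdeadbeef00000000ULL),
 *	};
 *
 * On a 64-bit kernel each line is one long; on a 32-bit kernel each
 * expands to a lo,hi pair of longs, matching bitmap word order.
 */
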
/**
 * bitmap_from_u64 - Check and swap words within u64.
 *  @mask: source bitmap
 *  @dst: destination bitmap
 *
 * In a 32-bit Big Endian kernel, reading a u64 mask as
 * ``(u32 *)(&val)[*]`` yields the wrong word: ``(u32 *)(&val)[0]``
 * gets the upper 32 bits, while the lower 32 bits of the u64 are
 * expected.
 */
static inline void bitmap_from_u64(unsigned long *dst, u64 mask)
{
	dst[0] = mask & ULONG_MAX;

	if (sizeof(mask) > sizeof(unsigned long))
		dst[1] = mask >> 32;
}

#endif /* __ASSEMBLY__ */

#endif /* __LINUX_BITMAP_H */