mirror of https://github.com/AuxXxilium/linux_dsm_epyc7002.git (synced 2024-12-05 09:46:43 +07:00)

commit 64970b68d2
This moves an optimization for searching constant-sized small bitmaps
from x86_64-specific to generic code.

On an i386 defconfig (the x86#testing one), the size of vmlinux hardly
changes with this applied. I have observed only four places where this
optimization avoids a call into find_next_bit:

In the functions return_unused_surplus_pages, alloc_fresh_huge_page,
and adjust_pool_surplus, this patch avoids a call for a 1-bit bitmap.
In __next_cpu a call is avoided for a 32-bit bitmap. That's it.

On x86_64, 52 locations are optimized with a minimal increase in code
size:

Current #testing defconfig:
	146 x bsf, 27 x find_next_*bit
      text    data     bss     dec     hex filename
   5392637  846592  724424 6963653  6a41c5 vmlinux

After removing the x86_64 specific optimization for find_next_*bit:
	94 x bsf, 79 x find_next_*bit
      text    data     bss     dec     hex filename
   5392358  846592  724424 6963374  6a40ae vmlinux

After this patch (making the optimization generic):
	146 x bsf, 27 x find_next_*bit
      text    data     bss     dec     hex filename
   5392396  846592  724424 6963412  6a40d4 vmlinux

[ tglx@linutronix.de: build fixes ]

Signed-off-by: Ingo Molnar <mingo@elte.hu>
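The patch body itself is not reproduced on this page, but the technique it
describes is straightforward to sketch: when the compiler can prove at build
time that the whole bitmap fits in a single machine word, the search collapses
to one bit-scan instruction (bsf on x86) instead of a call into find_next_bit.
The following is a minimal, self-contained sketch of that idea, not the
kernel's code: small_find_next_bit and slow_find_next_bit are illustrative
names, and __builtin_ctzl stands in for the kernel's __ffs helper.

#include <stdio.h>
#include <limits.h>

#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

/* Out-of-line fallback, standing in for the kernel's __find_next_bit(). */
static unsigned long slow_find_next_bit(const unsigned long *addr,
					unsigned long size,
					unsigned long offset)
{
	while (offset < size) {
		if (addr[offset / BITS_PER_LONG] &
		    (1UL << (offset % BITS_PER_LONG)))
			return offset;
		offset++;
	}
	return size;	/* no set bit found */
}

/*
 * If 'size' is a compile-time constant smaller than one word, OR in a
 * sentinel bit at position 'size' so the bit scan returns 'size' when
 * no bit is set at or above 'offset' -- a single ctz/bsf, no call.
 * Callers are expected to pass offset < size, as in the kernel API.
 */
static inline unsigned long small_find_next_bit(const unsigned long *addr,
						unsigned long size,
						unsigned long offset)
{
	if (__builtin_constant_p(size) && size < BITS_PER_LONG) {
		unsigned long val = *addr & (~0UL << offset);

		val |= 1UL << size;	/* sentinel bit */
		return (unsigned long)__builtin_ctzl(val);
	}
	return slow_find_next_bit(addr, size, offset);
}

int main(void)
{
	unsigned long map = 0x28;	/* bits 3 and 5 set */

	printf("%lu\n", small_find_next_bit(&map, 8, 0));	/* prints 3 */
	printf("%lu\n", small_find_next_bit(&map, 8, 4));	/* prints 5 */
	printf("%lu\n", small_find_next_bit(&map, 2, 0));	/* prints 2: none found */
	return 0;
}

With optimization enabled, __builtin_constant_p(size) evaluates to true at
these call sites and each lookup compiles down to a single bit-scan; without
optimization the code safely takes the fallback loop and returns the same
results.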
16 lines · 525 B · C
#ifndef _ASM_GENERIC_BITOPS_FIND_H_
#define _ASM_GENERIC_BITOPS_FIND_H_

#ifndef CONFIG_GENERIC_FIND_NEXT_BIT
extern unsigned long find_next_bit(const unsigned long *addr, unsigned long
		size, unsigned long offset);

extern unsigned long find_next_zero_bit(const unsigned long *addr, unsigned
		long size, unsigned long offset);
#endif

#define find_first_bit(addr, size) find_next_bit((addr), (size), 0)
#define find_first_zero_bit(addr, size) find_next_zero_bit((addr), (size), 0)

#endif /*_ASM_GENERIC_BITOPS_FIND_H_ */
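For context, a hedged usage sketch (a hypothetical caller, assuming a kernel
build context where linux/bitops.h pulls this header in): the find_first_*
macros simply pin the search offset at 0, so a whole-bitmap scan needs no
explicit starting position.

#include <linux/bitops.h>	/* provides find_next_bit() and these macros */

static int demo(void)
{
	unsigned long mask = 0xf0;	/* bits 4..7 set */

	/* Expands to find_next_bit(&mask, BITS_PER_LONG, 0) -> returns 4. */
	unsigned long first = find_first_bit(&mask, BITS_PER_LONG);

	/* Expands to find_next_zero_bit(&mask, BITS_PER_LONG, 0) -> returns 0,
	 * since bit 0 of 0xf0 is clear. */
	unsigned long first_zero = find_first_zero_bit(&mask, BITS_PER_LONG);

	return first == 4 && first_zero == 0;
}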