# SPDX-License-Identifier: GPL-2.0-only
menu "Kernel hacking"
menu "printk and dmesg options"
config PRINTK_TIME
bool "Show timing information on printks"
depends on PRINTK
help
Selecting this option causes time stamps of the printk()
messages to be added to the output of the syslog() system
call and to the console.
The timestamp is always recorded internally and exported
to /dev/kmsg. This flag just specifies whether the timestamp
should be included in the output, not whether it is recorded.
The behavior is also controlled by the kernel command line
parameter printk.time=1. See Documentation/admin-guide/kernel-parameters.rst
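The setting can also be toggled at runtime through the printk
module parameter, for example (assuming the parameter is
writable on your kernel):
# echo 1 > /sys/module/printk/parameters/time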
config PRINTK_CALLER
bool "Show caller information on printks"
depends on PRINTK
help
Selecting this option causes printk() to add a caller "thread id" (if
in task context) or a caller "processor id" (if not in task context)
to every message.
This option is intended for environments where multiple threads
concurrently call printk() many times, whose output is difficult
to interpret without knowing where each line (or an individual
line that was split into multiple lines by a race) came from.
Since toggling this after boot would make the code racy, there is
currently no way to enable or disable it via a kernel command
line parameter or a sysfs interface.
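For illustration, console output with this option enabled looks
like the following (examples taken from the patch that introduced
the option; a thread id in task context, a processor id otherwise):
[    2.779635][    T1] pci 0000:00:01.0: PCI bridge to [bus 01]
[    9.316504][    C2] random: fast init done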
config CONSOLE_LOGLEVEL_DEFAULT
int "Default console loglevel (1-15)"
range 1 15
default "7"
help
Default loglevel to determine what will be printed on the console.
Setting a default here is equivalent to passing in loglevel=<x> in
the kernel bootargs; loglevel=<x> still overrides whatever value
is specified here.
Note: This does not affect the log level of un-prefixed printk()
usage in the kernel. That is controlled by the MESSAGE_LOGLEVEL_DEFAULT
option.
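The console loglevel can also be inspected and changed at
runtime, for example:
# cat /proc/sys/kernel/printk   (first field is the console loglevel)
# dmesg -n 5                    (set the console loglevel to 5)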
config CONSOLE_LOGLEVEL_QUIET
int "quiet console loglevel (1-15)"
range 1 15
default "4"
help
Loglevel to use when "quiet" is passed on the kernel commandline.
In other words, passing "quiet" is the equivalent of passing
"loglevel=<CONSOLE_LOGLEVEL_QUIET>".
config MESSAGE_LOGLEVEL_DEFAULT
int "Default message log level (1-7)"
range 1 7
default "4"
help
Default log level for printk statements with no specified priority.
This was hard-coded to KERN_WARNING since at least 2.6.10 but folks
that are auditing their logs closely may want to set it to a lower
priority.
Note: This does not affect what message level gets printed on the console
by default. To change that, use loglevel=<x> in the kernel bootargs,
or pick a different CONSOLE_LOGLEVEL_DEFAULT configuration value.
config BOOT_PRINTK_DELAY
bool "Delay each boot printk message by N milliseconds"
depends on DEBUG_KERNEL && PRINTK && GENERIC_CALIBRATE_DELAY
help
This build option allows you to read kernel boot messages
by inserting a short delay after each one. The delay is
specified in milliseconds on the kernel command line,
using "boot_delay=N".
It is likely that you would also need to use "lpj=M" to preset
the "loops per jiffy" value.
See a previous boot log for the "lpj" value to use for your
system, and then set "lpj=M" before setting "boot_delay=N".
NOTE: Using this option may adversely affect SMP systems.
I.e., processors other than the first one may not boot up.
BOOT_PRINTK_DELAY also may cause LOCKUP_DETECTOR to detect
what it believes to be lockup conditions.
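An example command line (illustrative values only; take the "lpj"
value from your own previous boot log):
boot_delay=100 lpj=4980736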
config DYNAMIC_DEBUG
bool "Enable dynamic printk() support"
default n
depends on PRINTK
depends on (DEBUG_FS || PROC_FS)
help
Compiles debug level messages into the kernel, which would not
otherwise be available at runtime. These messages can then be
enabled/disabled based on various levels of scope - per source file,
function, module, format string, and line number. This mechanism
implicitly compiles in all pr_debug() and dev_dbg() calls, which
enlarges the kernel text size by about 2%.
If a source file is compiled with the DEBUG flag set, any
pr_debug() calls in it are enabled by default, but can be
disabled at runtime as below. Note that the DEBUG flag is
turned on by many CONFIG_*DEBUG* options.
Usage:
Dynamic debugging is controlled via the 'dynamic_debug/control' file,
which is contained in the 'debugfs' filesystem or procfs.
Thus, the debugfs or procfs filesystem must first be mounted before
making use of this feature.
We refer to the control file as: <debugfs>/dynamic_debug/control. This
file contains a list of the debug statements that can be enabled. The
format for each line of the file is:
filename:lineno [module]function flags format
filename : source file of the debug statement
lineno : line number of the debug statement
module : module that contains the debug statement
function : function that contains the debug statement
flags : '=p' means the line is turned 'on' for printing
format : the format used for the debug statement
From a live system:
nullarbor:~ # cat <debugfs>/dynamic_debug/control
# filename:lineno [module]function flags format
fs/aio.c:222 [aio]__put_ioctx =_ "__put_ioctx:\040freeing\040%p\012"
fs/aio.c:248 [aio]ioctx_alloc =_ "ENOMEM:\040nr_events\040too\040high\012"
fs/aio.c:1770 [aio]sys_io_cancel =_ "calling\040cancel\012"
Example usage:
// enable the message at line 1603 of file svcsock.c
nullarbor:~ # echo -n 'file svcsock.c line 1603 +p' >
<debugfs>/dynamic_debug/control
// enable all the messages in file svcsock.c
nullarbor:~ # echo -n 'file svcsock.c +p' >
<debugfs>/dynamic_debug/control
// enable all the messages in the NFS server module
nullarbor:~ # echo -n 'module nfsd +p' >
<debugfs>/dynamic_debug/control
// enable all 12 messages in the function svc_process()
nullarbor:~ # echo -n 'func svc_process +p' >
<debugfs>/dynamic_debug/control
// disable all 12 messages in the function svc_process()
nullarbor:~ # echo -n 'func svc_process -p' >
<debugfs>/dynamic_debug/control
See Documentation/admin-guide/dynamic-debug-howto.rst for additional
information.
config SYMBOLIC_ERRNAME
bool "Support symbolic error names in printf"
default y if PRINTK
help
If you say Y here, the kernel's printf implementation will
be able to print symbolic error names such as ENOSPC instead
of the number 28. It makes the kernel image slightly larger
(about 3KB), but can make the kernel logs easier to read.
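For example (adapted from the patch description that introduced
the %pe format specifier), an error pointer can be printed
symbolically instead of as a raw number:
if (IS_ERR(foo)) {
	pr_err("Sorry, can't do that: %pe\n", foo);
	return PTR_ERR(foo);
}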
config DEBUG_BUGVERBOSE
bool "Verbose BUG() reporting (adds 70K)" if DEBUG_KERNEL && EXPERT
depends on BUG && (GENERIC_BUG || HAVE_DEBUG_BUGVERBOSE)
default y
help
Say Y here to make BUG() panics output the file name and line number
of the BUG call as well as the EIP and oops trace. This aids
debugging but costs about 70-100K of memory.
endmenu # "printk and dmesg options"
menu "Compile-time checks and compiler options"
config DEBUG_INFO
bool "Compile the kernel with debug info"
depends on DEBUG_KERNEL && !COMPILE_TEST
help
If you say Y here the resulting kernel image will include
debugging info resulting in a larger kernel image.
This adds debug symbols to the kernel and modules (gcc -g), and
is needed if you intend to use kernel crashdump or binary object
tools like crash, kgdb, LKCD, gdb, etc on the kernel.
Say Y here only if you plan to debug the kernel.
If unsure, say N.
config DEBUG_INFO_REDUCED
bool "Reduce debugging information"
depends on DEBUG_INFO
help
If you say Y here gcc is instructed to generate less debugging
information for structure types. This means that tools that
need full debugging information (like kgdb or systemtap) won't
be happy. But if you merely need debugging information to
resolve line numbers there is no loss. The advantage is that
build directory object sizes shrink dramatically over a full
DEBUG_INFO build and compile times are reduced too.
Only works with newer gcc versions.
config DEBUG_INFO_SPLIT
bool "Produce split debuginfo in .dwo files"
depends on DEBUG_INFO
depends on $(cc-option,-gsplit-dwarf)
help
Generate debug info into separate .dwo files. This significantly
reduces the build directory size for builds with DEBUG_INFO,
because it stores the information only once on disk in .dwo
files instead of multiple times in object files and executables.
In addition the debug information is also compressed.
Requires recent gcc (4.7+) and recent gdb/binutils.
Any tool that packages or reads debug information would need
to know about the .dwo files and include them.
Incompatible with older versions of ccache.
config DEBUG_INFO_DWARF4
bool "Generate dwarf4 debuginfo"
depends on DEBUG_INFO
depends on $(cc-option,-gdwarf-4)
help
Generate dwarf4 debug info. This requires recent versions
of gcc and gdb. It makes the debug information larger.
But it significantly improves the success of resolving
variables in gdb on optimized code.
config DEBUG_INFO_BTF
bool "Generate BTF typeinfo"
depends on DEBUG_INFO
depends on !DEBUG_INFO_SPLIT && !DEBUG_INFO_REDUCED
depends on !GCC_PLUGIN_RANDSTRUCT || COMPILE_TEST
help
Generate deduplicated BTF type information from DWARF debug info.
Turning this on requires the pahole tool to be present; it
converts DWARF type info into equivalent deduplicated BTF type info.
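As a quick sketch, the generated type information can be dumped
back out as C with a sufficiently recent bpftool, for example:
# bpftool btf dump file vmlinux format c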
config GDB_SCRIPTS
bool "Provide GDB scripts for kernel debugging"
depends on DEBUG_INFO
help
This creates the required links to GDB helper scripts in the
build directory. If you load vmlinux into gdb, the helper
scripts will be automatically imported by gdb as well, and
additional functions are available to analyze a Linux kernel
instance. See Documentation/dev-tools/gdb-kernel-debugging.rst
for further details.
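A minimal session sketch, assuming a target kernel running under
QEMU with its gdb stub on the default port (the lx-* helpers come
from the scripts this option links into the build directory):
$ gdb vmlinux
(gdb) target remote :1234
(gdb) lx-symbols
(gdb) lx-dmesg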
config ENABLE_MUST_CHECK
bool "Enable __must_check logic"
default y
help
Enable the __must_check logic in the kernel build. Disable this to
suppress the "warning: ignoring return value of 'foo', declared with
attribute warn_unused_result" messages.
config FRAME_WARN
int "Warn for stack frames larger than"
range 0 8192
default 2048 if GCC_PLUGIN_LATENT_ENTROPY
default 1280 if (!64BIT && PARISC)
default 1024 if (!64BIT && !PARISC)
default 2048 if 64BIT
help
Tell gcc to warn at build time for stack frames larger than this.
Setting this too low will cause a lot of warnings.
Setting it to 0 disables the warning.
config STRIP_ASM_SYMS
bool "Strip assembler-generated symbols during link"
default n
help
Strip internal assembler-generated symbols during a link (symbols
that look like '.Lxxx') so they don't pollute the output of
get_wchan() and suchlike.
config READABLE_ASM
bool "Generate readable assembler code"
depends on DEBUG_KERNEL
help
Disable some compiler optimizations that tend to generate
assembler output that is hard for humans to read. This may make
the kernel slightly slower, but it helps to keep kernel developers
who have to stare a lot at assembler listings sane.
config HEADERS_INSTALL
bool "Install uapi headers to usr/include"
depends on !UML
help
This option will install uapi headers (headers exported to user-space)
into the usr/include directory for use during the kernel build.
This is unneeded for building the kernel itself, but needed for some
user-space program samples. It is also needed by some features such
as uapi header sanity checks.
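Independent of this option, the same headers can be installed
manually, for example:
$ make headers_install INSTALL_HDR_PATH=/path/to/destination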
config DEBUG_SECTION_MISMATCH
bool "Enable full Section mismatch analysis"
help
The section mismatch analysis checks if there are illegal
references from one section to another section.
During linktime or runtime, some sections are dropped;
any use of code/data previously in these sections would
most likely result in an oops.
In the code, functions and variables are annotated with
__init, etc. (see the full list in include/linux/init.h),
which results in the code/data being placed in specific sections.
The section mismatch analysis is always performed after a full
kernel build, and enabling this option causes the following
additional step to occur:
- Add the option -fno-inline-functions-called-once to gcc commands.
When inlining a function annotated with __init in a non-init
function, we would lose the section information and thus
the analysis would not catch the illegal reference.
This option tells gcc to inline less (but it does result in
a larger kernel).
config SECTION_MISMATCH_WARN_ONLY
bool "Make section mismatch errors non-fatal"
default y
help
If you say N here, the build process will fail if there are any
section mismatches, instead of just emitting warnings.
If unsure, say Y.
#
# Select this config option from the architecture Kconfig, if it
# is preferred to always offer frame pointers as a config
option on the architecture (regardless of DEBUG_KERNEL):
#
config ARCH_WANT_FRAME_POINTERS
bool
config FRAME_POINTER
bool "Compile the kernel with frame pointers"
depends on DEBUG_KERNEL && (M68K || UML || SUPERH) || ARCH_WANT_FRAME_POINTERS
default y if (DEBUG_INFO && UML) || ARCH_WANT_FRAME_POINTERS
help
If you say Y here the resulting kernel image will be slightly
larger and slower, but it gives very useful debugging information
in case of kernel bugs. (precise oopses/stacktraces/warnings)
config STACK_VALIDATION
bool "Compile-time stack metadata validation"
depends on HAVE_STACK_VALIDATION
default n
help
Add compile-time checks to validate stack metadata, including frame
pointers (if CONFIG_FRAME_POINTER is enabled). This helps ensure
that runtime stack traces are more reliable.
This is also a prerequisite for generation of ORC unwind data, which
is needed for CONFIG_UNWINDER_ORC.
For more information, see
tools/objtool/Documentation/stack-validation.txt.
config VMLINUX_VALIDATION
bool
depends on STACK_VALIDATION && DEBUG_ENTRY && !PARAVIRT
default y
config DEBUG_FORCE_WEAK_PER_CPU
bool "Force weak per-cpu definitions"
depends on DEBUG_KERNEL
help
s390 and alpha require percpu variables in modules to be
defined weak to work around an addressing range issue which
puts the following two restrictions on percpu variable
definitions:
1. percpu symbols must be unique whether static or not
2. percpu variables can't be defined inside a function
To ensure that generic code follows the above rules, this
option forces all percpu variables to be defined as weak.
endmenu # "Compile-time checks and compiler options"
menu "Generic Kernel Debugging Instruments"
config MAGIC_SYSRQ
bool "Magic SysRq key"
depends on !UML
help
If you say Y here, you will have some control over the system even
if the system crashes for example during kernel debugging (e.g., you
will be able to flush the buffer cache to disk, reboot the system
immediately or dump some status information). This is accomplished
by pressing various keys while holding SysRq (Alt+PrintScreen). It
also works on a serial console (on PC hardware at least), if you
send a BREAK and then within 5 seconds a command keypress. The
keys are documented in <file:Documentation/admin-guide/sysrq.rst>.
Don't say Y unless you really know what this hack does.
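At runtime the same functions can also be exercised without a
keyboard through procfs, for example:
# echo 1 > /proc/sys/kernel/sysrq    (enable all SysRq functions)
# echo t > /proc/sysrq-trigger       (dump current tasks and their info)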
config MAGIC_SYSRQ_DEFAULT_ENABLE
hex "Enable magic SysRq key functions by default"
depends on MAGIC_SYSRQ
default 0x1
help
Specifies which SysRq key functions are enabled by default.
This may be set to 1 or 0 to enable or disable them all, or
to a bitmask as described in Documentation/admin-guide/sysrq.rst.
config MAGIC_SYSRQ_SERIAL
bool "Enable magic SysRq key over serial"
depends on MAGIC_SYSRQ
default y
help
Many embedded boards have a disconnected TTL level serial which can
generate some garbage that can lead to spurious sysrq detections.
This option allows you to decide whether you want to enable the
magic SysRq key over a serial console.
config MAGIC_SYSRQ_SERIAL_SEQUENCE
string "Char sequence that enables magic SysRq over serial"
depends on MAGIC_SYSRQ_SERIAL
default ""
help
Specifies a sequence of characters that can follow BREAK to enable
SysRq on a serial console.
If unsure, leave an empty string and the option will not be enabled.
config DEBUG_FS
bool "Debug Filesystem"
help
debugfs is a virtual file system that kernel developers use to put
debugging files into. Enable this option to be able to read and
write to these files.
For detailed documentation on the debugfs API, see
Documentation/filesystems/.
If unsure, say N.
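For example, debugfs is typically mounted with:
# mount -t debugfs none /sys/kernel/debug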
source "lib/Kconfig.kgdb"
source "lib/Kconfig.ubsan"
endmenu
config DEBUG_KERNEL
bool "Kernel debugging"
help
Say Y here if you are developing drivers or trying to debug and
identify kernel problems.
config DEBUG_MISC
bool "Miscellaneous debug code"
default DEBUG_KERNEL
depends on DEBUG_KERNEL
help
Say Y here if you need to enable miscellaneous debug code that should
be under a more specific debug option but isn't.
menu "Memory Debugging"
source "mm/Kconfig.debug"
config DEBUG_OBJECTS
bool "Debug object operations"
depends on DEBUG_KERNEL
help
If you say Y here, additional code will be inserted into the
kernel to track the lifetime of various objects and validate
the operations on those objects.
config DEBUG_OBJECTS_SELFTEST
bool "Debug objects selftest"
depends on DEBUG_OBJECTS
help
This enables the selftest of the object debug code.
config DEBUG_OBJECTS_FREE
bool "Debug objects in freed memory"
depends on DEBUG_OBJECTS
help
This enables checks on whether a kfree/vfree operation frees an
area which contains an object that has not been deactivated
properly. This can make kmalloc/kfree-intensive workloads
much slower.
config DEBUG_OBJECTS_TIMERS
bool "Debug timer objects"
depends on DEBUG_OBJECTS
help
If you say Y here, additional code will be inserted into the
timer routines to track the lifetime of timer objects and
validate the timer operations.
config DEBUG_OBJECTS_WORK
bool "Debug work objects"
depends on DEBUG_OBJECTS
help
If you say Y here, additional code will be inserted into the
work queue routines to track the lifetime of work objects and
validate the work operations.
config DEBUG_OBJECTS_RCU_HEAD
bool "Debug RCU callbacks objects"
depends on DEBUG_OBJECTS
help
Enable this to turn on debugging of RCU list heads (call_rcu() usage).
config DEBUG_OBJECTS_PERCPU_COUNTER
bool "Debug percpu counter objects"
depends on DEBUG_OBJECTS
help
If you say Y here, additional code will be inserted into the
percpu counter routines to track the lifetime of percpu counter
objects and validate the percpu counter operations.
config DEBUG_OBJECTS_ENABLE_DEFAULT
int "debug_objects bootup default value (0-1)"
range 0 1
default "1"
depends on DEBUG_OBJECTS
help
Default value of the debug_objects boot parameter; it can still
be overridden at boot time with "debug_objects" or
"no_debug_objects" on the kernel command line.
config DEBUG_SLAB
bool "Debug slab memory allocations"
depends on DEBUG_KERNEL && SLAB
help
Say Y here to have the kernel do limited verification on memory
allocation as well as poisoning memory on free to catch use of freed
memory. This can make kmalloc/kfree-intensive workloads much slower.
config SLUB_DEBUG_ON
bool "SLUB debugging on by default"
depends on SLUB && SLUB_DEBUG
default n
help
Boot with debugging on by default. SLUB boots by default with
the runtime debug capabilities switched off. Enabling this is
equivalent to specifying the "slub_debug" parameter on boot.
There is no support for more fine grained debug control like
possible with slub_debug=xxx. SLUB debugging may be switched
off in a kernel built with CONFIG_SLUB_DEBUG_ON by specifying
"slub_debug=-".
config SLUB_STATS
default n
bool "Enable SLUB performance statistics"
depends on SLUB && SYSFS
help
SLUB statistics are useful for debugging SLUB's allocation
behavior in order to find ways to optimize the allocator. This
should never be enabled for production use since keeping
statistics slows down the allocator by a few percentage points.
The slabinfo command supports the determination of the most
active slabs to figure out which slabs are relevant to a
particular load.
Try running: slabinfo -DA
config HAVE_DEBUG_KMEMLEAK
bool
config DEBUG_KMEMLEAK
bool "Kernel memory leak detector"
depends on DEBUG_KERNEL && HAVE_DEBUG_KMEMLEAK
select DEBUG_FS
select STACKTRACE if STACKTRACE_SUPPORT
select KALLSYMS
select CRC32
help
Say Y here if you want to enable the memory leak
detector. The memory allocation/freeing is traced in a way
similar to Boehm's conservative garbage collector, the
difference being that the orphan objects are not freed but
only shown in /sys/kernel/debug/kmemleak. Enabling this
feature will introduce an overhead to memory
allocations. See Documentation/dev-tools/kmemleak.rst for more
details.
Enabling DEBUG_SLAB or SLUB_DEBUG may increase the chances
of finding leaks due to the slab objects poisoning.
In order to access the kmemleak file, debugfs needs to be
mounted (usually at /sys/kernel/debug).
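A typical round trip once debugfs is mounted, for example:
# echo scan > /sys/kernel/debug/kmemleak    (trigger an immediate scan)
# cat /sys/kernel/debug/kmemleak            (list suspected leaks)
# echo clear > /sys/kernel/debug/kmemleak   (drop the current list)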
config DEBUG_KMEMLEAK_MEM_POOL_SIZE
int "Kmemleak memory pool size"
depends on DEBUG_KMEMLEAK
range 200 1000000
default 16000
help
Kmemleak must track all the memory allocations to avoid
reporting false positives. Since memory may be allocated or
freed before kmemleak is fully initialised, use a static pool
of metadata objects to track such callbacks. After kmemleak is
fully initialised, this memory pool acts as an emergency one
if slab allocations fail.
config DEBUG_KMEMLEAK_TEST
tristate "Simple test for the kernel memory leak detector"
depends on DEBUG_KMEMLEAK && m
help
This option enables a module that explicitly leaks memory.
If unsure, say N.
config DEBUG_KMEMLEAK_DEFAULT_OFF
bool "Default kmemleak to off"
depends on DEBUG_KMEMLEAK
help
Say Y here to disable kmemleak by default. It can then be enabled
on the command line via kmemleak=on.
config DEBUG_KMEMLEAK_AUTO_SCAN
bool "Enable kmemleak auto scan thread on boot up"
default y
depends on DEBUG_KMEMLEAK
help
Depending on the CPU, the kmemleak scan may be CPU-intensive and
can stall user tasks at times. This option enables/disables the
automatic kmemleak scan at boot up.
Say N here to disable the kmemleak auto scan thread and stop
automatic scanning. Disabling this option disables automatic
reporting of memory leaks.
If unsure, say Y.
config DEBUG_STACK_USAGE
bool "Stack utilization instrumentation"
depends on DEBUG_KERNEL && !IA64
help
Enables the display of the minimum amount of free stack which each
task has ever had available in the sysrq-T and sysrq-P debug output.
This option will slow down process creation somewhat.
config SCHED_STACK_END_CHECK
bool "Detect stack corruption on calls to schedule()"
depends on DEBUG_KERNEL
default n
help
This option checks for a stack overrun on calls to schedule().
If the stack end location is found to have been overwritten, the
kernel always panics, as the content of the corrupted region can
no longer be trusted. This ensures no erroneous behaviour occurs
which could result in data corruption or a sporadic crash at a
later stage once the region is examined. The runtime overhead
introduced is minimal.
config ARCH_HAS_DEBUG_VM_PGTABLE
bool
help
An architecture should select this when it can successfully
build and run DEBUG_VM_PGTABLE.
config DEBUG_VM
bool "Debug VM"
depends on DEBUG_KERNEL
help
Enable this to turn on extended checks in the virtual-memory system
that may impact performance.
If unsure, say N.
config DEBUG_VM_VMACACHE
bool "Debug VMA caching"
depends on DEBUG_VM
help
Enable this to turn on VMA caching debug information. Doing so
can cause significant overhead, so only enable it in non-production
environments.
If unsure, say N.
config DEBUG_VM_RB
bool "Debug VM red-black trees"
depends on DEBUG_VM
help
Enable VM red-black tree debugging information and extra validations.
If unsure, say N.
config DEBUG_VM_PGFLAGS
bool "Debug page-flags operations"
depends on DEBUG_VM
help
Enables extra validation on page flags operations.
If unsure, say N.
config DEBUG_VM_PGTABLE
bool "Debug arch page table for semantics compliance"
depends on MMU
depends on ARCH_HAS_DEBUG_VM_PGTABLE
default y if DEBUG_VM
help
This option provides a debug method which can be used to test
architecture page table helper functions on various platforms and
verify that they comply with the expected generic MM semantics.
This helps architecture code ensure that any changes to or new
additions of these helpers still conform to the expected
semantics of generic MM. Platforms have to opt in to this
through ARCH_HAS_DEBUG_VM_PGTABLE.
If unsure, say N.
config ARCH_HAS_DEBUG_VIRTUAL
bool
config DEBUG_VIRTUAL
bool "Debug VM translations"
depends on DEBUG_KERNEL && ARCH_HAS_DEBUG_VIRTUAL
help
Enable some costly sanity checks in the virtual-to-page translation
code. This can catch mistakes with virt_to_page() and friends.
If unsure, say N.
config DEBUG_NOMMU_REGIONS
bool "Debug the global anon/private NOMMU mapping region tree"
depends on DEBUG_KERNEL && !MMU
help
This option causes the global tree of anonymous and private mapping
regions to be regularly checked for invalid topology.
config DEBUG_MEMORY_INIT
bool "Debug memory initialisation" if EXPERT
default !EXPERT
help
Enable this for additional checks during memory initialisation.
The sanity checks verify aspects of the VM such as the memory model
and other information provided by the architecture. Verbose
information will be printed at KERN_DEBUG loglevel depending
on the mminit_loglevel= command-line option.
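For example, verbose output could be requested at boot via that
parameter (the level shown is illustrative):
mminit_loglevel=4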
If unsure, say Y.
config MEMORY_NOTIFIER_ERROR_INJECT
tristate "Memory hotplug notifier error injection module"
depends on MEMORY_HOTPLUG_SPARSE && NOTIFIER_ERROR_INJECTION
help
This option provides the ability to inject artificial errors to
memory hotplug notifier chain callbacks. It is controlled through
the debugfs interface under /sys/kernel/debug/notifier-error-inject/memory
To make the notifier call chain fail for a given notified event,
write the error code to "actions/<notifier event>/error".
Example: Inject memory hotplug offline error (-12 == -ENOMEM)
# cd /sys/kernel/debug/notifier-error-inject/memory
# echo -12 > actions/MEM_GOING_OFFLINE/error
# echo offline > /sys/devices/system/memory/memoryXXX/state
bash: echo: write error: Cannot allocate memory
To compile this code as a module, choose M here: the module will
be called memory-notifier-error-inject.
If unsure, say N.
config DEBUG_PER_CPU_MAPS
bool "Debug access to per_cpu maps"
depends on DEBUG_KERNEL
depends on SMP
help
Say Y to verify that the per_cpu map being accessed has
been set up. This adds a fair amount of code to kernel memory
and decreases performance.
Say N if unsure.
config DEBUG_HIGHMEM
bool "Highmem debugging"
depends on DEBUG_KERNEL && HIGHMEM
help
This option enables additional error checking for high memory
systems. Disable for production systems.
config HAVE_DEBUG_STACKOVERFLOW
bool
config DEBUG_STACKOVERFLOW
bool "Check for stack overflows"
depends on DEBUG_KERNEL && HAVE_DEBUG_STACKOVERFLOW
help
Say Y here if you want to check for overflows of kernel, IRQ
and exception stacks (if your architecture uses them). This
option will show detailed messages if free stack space drops
below a certain limit.
These kinds of bugs usually occur when call-chains in the
kernel get too deep, especially when interrupts are
involved.
Use this in cases where you see apparently random memory
corruption, especially if it appears in 'struct thread_info'.
If in doubt, say "N".
source "lib/Kconfig.kasan"
endmenu # "Memory Debugging"
config DEBUG_SHIRQ
bool "Debug shared IRQ handlers"
depends on DEBUG_KERNEL
help
Enable this to generate a spurious interrupt as soon as a shared
interrupt handler is registered, and just before one is deregistered.
Drivers ought to be able to handle interrupts coming in at those
points; some don't and need to be caught.
menu "Debug Oops, Lockups and Hangs"
config PANIC_ON_OOPS
bool "Panic on Oops"
help
Say Y here to enable the kernel to panic when it oopses. This
has the same effect as setting oops=panic on the kernel command
line.
This feature is useful to ensure that the kernel does not do
anything erroneous after an oops which could result in data
corruption or other issues.
Say N if unsure.
config PANIC_ON_OOPS_VALUE
int
range 0 1
default 0 if !PANIC_ON_OOPS
default 1 if PANIC_ON_OOPS
config PANIC_TIMEOUT
int "panic timeout"
default 0
help
Set the timeout value (in seconds) until a reboot occurs when
the kernel panics. If n = 0, then we wait forever. A timeout
value n > 0 will wait n seconds before rebooting, while a timeout
value n < 0 will reboot immediately.
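The same behaviour can also be selected at boot or runtime; for
example (timeout values are illustrative):
panic=30 on the kernel command line, or
# echo 30 > /proc/sys/kernel/panic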
config LOCKUP_DETECTOR
bool
config SOFTLOCKUP_DETECTOR
bool "Detect Soft Lockups"
depends on DEBUG_KERNEL && !S390
select LOCKUP_DETECTOR
help
Say Y here to enable the kernel to act as a watchdog to detect
soft lockups.
Softlockups are bugs that cause the kernel to loop in kernel
mode for more than 20 seconds, without giving other tasks a
chance to run. The current stack trace is displayed upon
detection and the system will stay locked up.
config BOOTPARAM_SOFTLOCKUP_PANIC
bool "Panic (Reboot) On Soft Lockups"
depends on SOFTLOCKUP_DETECTOR
help
Say Y here to enable the kernel to panic on "soft lockups",
which are bugs that cause the kernel to loop in kernel
mode for more than 20 seconds (configurable using the watchdog_thresh
sysctl), without giving other tasks a chance to run.
The panic can be used in combination with panic_timeout,
to cause the system to reboot automatically after a
lockup has been detected. This feature is useful for
high-availability systems that have uptime guarantees and
where a lockup must be resolved ASAP.
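The equivalent behaviour can also be selected without this config
option; for example (illustrative):
softlockup_panic=1 on the kernel command line, or
# echo 1 > /proc/sys/kernel/softlockup_panic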
Say N if unsure.
config BOOTPARAM_SOFTLOCKUP_PANIC_VALUE
int
depends on SOFTLOCKUP_DETECTOR
range 0 1
default 0 if !BOOTPARAM_SOFTLOCKUP_PANIC
default 1 if BOOTPARAM_SOFTLOCKUP_PANIC
config HARDLOCKUP_DETECTOR_PERF
bool
select SOFTLOCKUP_DETECTOR
#
# Enables a timestamp based low pass filter to compensate for perf based
# hard lockup detection which runs too fast due to turbo modes.
#
config HARDLOCKUP_CHECK_TIMESTAMP
bool
#
# arch/ can define HAVE_HARDLOCKUP_DETECTOR_ARCH to provide their own hard
# lockup detector rather than the perf based detector.
#
config HARDLOCKUP_DETECTOR
bool "Detect Hard Lockups"
depends on DEBUG_KERNEL && !S390
depends on HAVE_HARDLOCKUP_DETECTOR_PERF || HAVE_HARDLOCKUP_DETECTOR_ARCH
select LOCKUP_DETECTOR
select HARDLOCKUP_DETECTOR_PERF if HAVE_HARDLOCKUP_DETECTOR_PERF
select HARDLOCKUP_DETECTOR_ARCH if HAVE_HARDLOCKUP_DETECTOR_ARCH
help
Say Y here to enable the kernel to act as a watchdog to detect
hard lockups.
Hardlockups are bugs that cause the CPU to loop in kernel mode
for more than 10 seconds, without letting other interrupts have a
chance to run. The current stack trace is displayed upon detection
and the system will stay locked up.
config BOOTPARAM_HARDLOCKUP_PANIC
bool "Panic (Reboot) On Hard Lockups"
depends on HARDLOCKUP_DETECTOR
help
Say Y here to enable the kernel to panic on "hard lockups",
which are bugs that cause the kernel to loop in kernel
mode with interrupts disabled for more than 10 seconds (configurable
using the watchdog_thresh sysctl).
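The equivalent behaviour can also be selected at boot or runtime;
for example (illustrative):
nmi_watchdog=panic on the kernel command line, or
# echo 1 > /proc/sys/kernel/hardlockup_panic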
Say N if unsure.
config BOOTPARAM_HARDLOCKUP_PANIC_VALUE
int
depends on HARDLOCKUP_DETECTOR
range 0 1
default 0 if !BOOTPARAM_HARDLOCKUP_PANIC
default 1 if BOOTPARAM_HARDLOCKUP_PANIC
config DETECT_HUNG_TASK
bool "Detect Hung Tasks"
depends on DEBUG_KERNEL
default SOFTLOCKUP_DETECTOR
help
Say Y here to enable the kernel to detect "hung tasks",
which are bugs that cause the task to be stuck in
uninterruptible "D" state indefinitely.
When a hung task is detected, the kernel will print the
current stack trace (which you should report), but the
task will stay in uninterruptible state. If lockdep is
enabled then all held locks will also be reported. This
feature has negligible overhead.
config DEFAULT_HUNG_TASK_TIMEOUT
int "Default timeout for hung task detection (in seconds)"
depends on DETECT_HUNG_TASK
default 120
help
This option controls the default timeout (in seconds) used
to determine when a task has become non-responsive and should
be considered hung.
It can be adjusted at runtime via the kernel.hung_task_timeout_secs
sysctl or by writing a value to
/proc/sys/kernel/hung_task_timeout_secs.
A timeout of 0 disables the check. The default is two minutes.
Keeping the default should be fine in most cases.
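For example, to raise the timeout at runtime (value illustrative):
# echo 300 > /proc/sys/kernel/hung_task_timeout_secs
and to disable the check entirely:
# echo 0 > /proc/sys/kernel/hung_task_timeout_secs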
config BOOTPARAM_HUNG_TASK_PANIC
bool "Panic (Reboot) On Hung Tasks"
depends on DETECT_HUNG_TASK
help
Say Y here to enable the kernel to panic on "hung tasks",
which are bugs that cause the kernel to leave a task stuck
in uninterruptible "D" state.
The panic can be used in combination with panic_timeout,
to cause the system to reboot automatically after a
hung task has been detected. This feature is useful for
high-availability systems that have uptime guarantees and
where a hung task must be resolved ASAP.
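The equivalent behaviour can also be selected at boot or runtime;
for example (illustrative):
hung_task_panic=1 on the kernel command line, or
# echo 1 > /proc/sys/kernel/hung_task_panic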
Say N if unsure.
config BOOTPARAM_HUNG_TASK_PANIC_VALUE
int
depends on DETECT_HUNG_TASK
range 0 1
default 0 if !BOOTPARAM_HUNG_TASK_PANIC
default 1 if BOOTPARAM_HUNG_TASK_PANIC
config WQ_WATCHDOG
bool "Detect Workqueue Stalls"
depends on DEBUG_KERNEL
help
Say Y here to enable stall detection on workqueues. If a
worker pool doesn't make forward progress on a pending work
item for over a given amount of time, 30s by default, a
warning message is printed along with a dump of workqueue
state. This can be configured through the kernel parameter
"workqueue.watchdog_thresh" and its sysfs counterpart.
config TEST_LOCKUP
tristate "Test module to generate lockups"
help
This builds the "test_lockup" module that helps to make sure
that watchdogs and lockup detectors are working properly.
Depending on module parameters it can emulate a soft or hard
lockup, a "hung task", or hold an arbitrary lock for a long time.
It can also generate a series of lockups with cooling-down periods.
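A few illustrative invocations (parameters as documented by the
module itself):
# modprobe test_lockup time_secs=1 iterations=60 state=R # softlockup
# modprobe test_lockup time_secs=1 iterations=60 state=D # hung task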
If unsure, say N.
endmenu # "Debug lockups and hangs"
menu "Scheduler Debugging"
config SCHED_DEBUG
bool "Collect scheduler debugging info"
depends on DEBUG_KERNEL && PROC_FS
default y
help
If you say Y here, the /proc/sched_debug file will be provided
that can help debug the scheduler. The runtime overhead of this
option is minimal.
config SCHED_INFO
bool
default n
config SCHEDSTATS
bool "Collect scheduler statistics"
depends on DEBUG_KERNEL && PROC_FS
select SCHED_INFO
help
If you say Y here, additional code will be inserted into the
scheduler and related routines to collect statistics about
scheduler behavior and provide them in /proc/schedstat. These
stats may be useful for both tuning and debugging the scheduler.
If you aren't debugging the scheduler or trying to tune a specific
application, you can say N to avoid the very slight overhead
this adds.
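Collection can also be toggled at runtime; for example:
# echo 1 > /proc/sys/kernel/sched_schedstats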
endmenu
config DEBUG_TIMEKEEPING
bool "Enable extra timekeeping sanity checking"
help
This option will enable additional timekeeping sanity checks
which may be helpful when diagnosing issues where timekeeping
problems are suspected.
This may include checks in the timekeeping hotpaths, so this
option may have a (very small) performance impact on some
workloads.
If unsure, say N.
config DEBUG_PREEMPT
bool "Debug preemptible kernel"
depends on DEBUG_KERNEL && PREEMPTION && TRACE_IRQFLAGS_SUPPORT
default y
help
If you say Y here then the kernel will use a debug variant of the
commonly used smp_processor_id() function and will print warnings
if kernel code uses it in a preemption-unsafe way. Also, the kernel
will detect preemption count underflows.
menu "Lock Debugging (spinlocks, mutexes, etc...)"
config LOCK_DEBUGGING_SUPPORT
bool
depends on TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT
default y
config PROVE_LOCKING
bool "Lock debugging: prove locking correctness"
depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
select LOCKDEP
select DEBUG_SPINLOCK
select DEBUG_MUTEXES
select DEBUG_RT_MUTEXES if RT_MUTEXES
select DEBUG_RWSEMS
select DEBUG_WW_MUTEX_SLOWPATH
select DEBUG_LOCK_ALLOC
select TRACE_IRQFLAGS
default n
help
This feature enables the kernel to prove that all locking
that occurs in the kernel runtime is mathematically
correct: that under no circumstance could an arbitrary (and
not yet triggered) combination of observed locking
sequences (on an arbitrary number of CPUs, running an
arbitrary number of tasks and interrupt contexts) cause a
deadlock.
In short, this feature enables the kernel to report locking
related deadlocks before they actually occur.
The proof does not depend on how hard and complex a
deadlock scenario would be to trigger: how many
participant CPUs, tasks and irq-contexts would be needed
for it to trigger. The proof also does not depend on
timing: if a race and a resulting deadlock is possible
theoretically (no matter how unlikely the race scenario
is), it will be proven so and will immediately be
reported by the kernel (once the event is observed that
makes the deadlock theoretically possible).
If a deadlock is impossible (i.e. the locking rules, as
observed by the kernel, are mathematically correct), the
kernel reports nothing.
NOTE: this feature can also be enabled for rwlocks, mutexes
and rwsems - in which case all dependencies between these
different locking variants are observed and mapped too, and
the proof of observed correctness is also maintained for an
arbitrary combination of these separate locking variants.
For more details, see Documentation/locking/lockdep-design.rst.
config PROVE_RAW_LOCK_NESTING
bool "Enable raw_spinlock - spinlock nesting checks"
depends on PROVE_LOCKING
default n
help
Enable the raw_spinlock vs. spinlock nesting checks which ensure
that the lock nesting rules for PREEMPT_RT enabled kernels are
not violated.
NOTE: There are known nesting problems, so if you enable this
option expect lockdep splats until these problems have been fully
addressed; this is work in progress. This config switch allows you
to identify and analyze these problems. It will be removed and the
check permanently enabled once the main issues have been fixed.
If unsure, select N.
config LOCK_STAT
bool "Lock usage statistics"
depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
select LOCKDEP
select DEBUG_SPINLOCK
select DEBUG_MUTEXES
select DEBUG_RT_MUTEXES if RT_MUTEXES
select DEBUG_LOCK_ALLOC
default n
help
This feature enables tracking of lock contention points.
For more details, see Documentation/locking/lockstat.rst
This also enables the lock events required by "perf lock",
a subcommand of perf.
If you want to use "perf lock", you also need to turn on
CONFIG_EVENT_TRACING.
CONFIG_LOCK_STAT defines "contended" and "acquired" lock events.
(CONFIG_LOCKDEP defines "acquire" and "release" events.)
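A typical session might look like this (interface as described
in Documentation/locking/lockstat.rst; paths may vary between
kernels):
# echo 1 > /proc/sys/kernel/lock_stat
... run your workload ...
# cat /proc/lock_stat
# echo 0 > /proc/sys/kernel/lock_stat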
config DEBUG_RT_MUTEXES
bool "RT Mutex debugging, deadlock detection"
depends on DEBUG_KERNEL && RT_MUTEXES
help
This allows rt mutex semantics violations and rt mutex related
deadlocks (lockups) to be detected and reported automatically.
config DEBUG_SPINLOCK
bool "Spinlock and rw-lock debugging: basic checks"
depends on DEBUG_KERNEL
select UNINLINE_SPIN_UNLOCK
help
Say Y here and build an SMP kernel to catch missing spinlock
initialization and certain other kinds of commonly made spinlock
errors. This is
best used in conjunction with the NMI watchdog so that spinlock
deadlocks are also debuggable.
config DEBUG_MUTEXES
bool "Mutex debugging: basic checks"
depends on DEBUG_KERNEL
help
This feature allows mutex semantics violations to be detected and
reported.
config DEBUG_WW_MUTEX_SLOWPATH
bool "Wait/wound mutex debugging: Slowpath testing"
depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
select DEBUG_LOCK_ALLOC
select DEBUG_SPINLOCK
select DEBUG_MUTEXES
help
This feature enables slowpath testing for w/w mutex users by
injecting additional -EDEADLK wound/backoff cases. Together with
the full mutex checks enabled with CONFIG_PROVE_LOCKING, this
will test all possible w/w mutex interface abuse with the
exception of simply not acquiring all the required locks.
Note that this feature can introduce significant overhead, so
it really should not be enabled in a production or distro kernel,
even a debug kernel. If you are a driver writer, enable it. If
you are a distro, do not.
config DEBUG_RWSEMS
bool "RW Semaphore debugging: basic checks"
depends on DEBUG_KERNEL
help
This debugging feature allows mismatched rw semaphore locks
and unlocks to be detected and reported.
config DEBUG_LOCK_ALLOC
bool "Lock debugging: detect incorrect freeing of live locks"
depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
select DEBUG_SPINLOCK
select DEBUG_MUTEXES
select DEBUG_RT_MUTEXES if RT_MUTEXES
select LOCKDEP
help
This feature will check whether any held lock (spinlock, rwlock,
mutex or rwsem) is incorrectly freed by the kernel, via any of the
memory-freeing routines (kfree(), kmem_cache_free(), free_pages(),
vfree(), etc.), whether a live lock is incorrectly reinitialized via
spin_lock_init()/mutex_init()/etc., or whether there is any lock
held during task exit.
config LOCKDEP
bool
depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
select STACKTRACE
select FRAME_POINTER if !MIPS && !PPC && !ARM && !S390 && !MICROBLAZE && !ARC && !X86
select KALLSYMS
select KALLSYMS_ALL
config LOCKDEP_SMALL
bool
config DEBUG_LOCKDEP
bool "Lock dependency engine debugging"
depends on DEBUG_KERNEL && LOCKDEP
help
If you say Y here, the lock dependency engine will do
additional runtime checks to debug itself, at the price
of more runtime overhead.
config DEBUG_ATOMIC_SLEEP
bool "Sleep inside atomic section checking"
select PREEMPT_COUNT
depends on DEBUG_KERNEL
depends on !ARCH_NO_PREEMPT
help
If you say Y here, various routines which may sleep will become very
noisy if they are called inside atomic sections: when a spinlock is
held, inside an RCU read-side critical section, inside
preempt-disabled sections, inside an interrupt, etc.
config DEBUG_LOCKING_API_SELFTESTS
bool "Locking API boot-time self-tests"
depends on DEBUG_KERNEL
help
Say Y here if you want the kernel to run a short self-test during
bootup. The self-test checks whether common types of locking bugs
are detected by debugging mechanisms or not. (If you disable
lock debugging, those bugs won't be detected, of course.)
The following locking APIs are covered: spinlocks, rwlocks,
mutexes and rwsems.
config LOCK_TORTURE_TEST
tristate "torture tests for locking"
depends on DEBUG_KERNEL
select TORTURE_TEST
help
This option provides a kernel module that runs torture tests
on kernel locking primitives. The kernel module may be built
after the fact on the running kernel to be tested, if desired.
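Example of a module-based run (module parameters as described in
Documentation/locking/locktorture.rst; adjust to taste):
# modprobe locktorture torture_type=mutex_lock nwriters_stress=4
... results are reported via printk, check dmesg ...
# rmmod locktorture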
Say Y here if you want kernel locking-primitive torture tests
to be built into the kernel.
Say M if you want these torture tests to build as a module.
Say N if you are unsure.
config WW_MUTEX_SELFTEST
tristate "Wait/wound mutex selftests"
help
This option provides a kernel module that runs tests on
the struct ww_mutex locking API.
It is recommended to enable DEBUG_WW_MUTEX_SLOWPATH in conjunction
with this test harness.
Say M if you want these self tests to build as a module.
Say N if you are unsure.
endmenu # lock debugging
config TRACE_IRQFLAGS
bool
help
Enables hooks to interrupt enabling and disabling for
either tracing or lock debugging.
config STACKTRACE
bool "Stack backtrace support"
depends on STACKTRACE_SUPPORT
help
This option causes the kernel to create a /proc/<pid>/stack for
every process, showing its current stack trace.
It is also used by various kernel debugging features that require
stack trace generation.
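For example, the current kernel stack of init (PID 1) can then
be inspected with:
# cat /proc/1/stack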
config WARN_ALL_UNSEEDED_RANDOM
bool "Warn for all uses of unseeded randomness"
default n
help
Some parts of the kernel contain bugs relating to their use of
cryptographically secure random numbers before it's actually possible
to generate those numbers securely. This setting ensures that these
flaws don't go unnoticed, by printing a warning whenever it
occurs. This allows people with obscure setups to know when
things are going wrong, so that they might contact developers
about fixing it.
Unfortunately, on some models of some architectures getting
a fully seeded CRNG is extremely difficult, and so this can
result in dmesg getting spammed for a surprisingly long
time. This is really bad from a security perspective, and
so architecture maintainers really need to do what they can
to get the CRNG seeded sooner after the system is booted.
However, since users cannot do anything actionable to
address this, by default the kernel will issue only a single
warning for the first use of unseeded randomness.
Say Y here if you want to receive warnings for all uses of
unseeded randomness. This will be of use primarily for
those developers interested in improving the security of
Linux kernels running on their architecture (or
subarchitecture).
config DEBUG_KOBJECT
bool "kobject debugging"
depends on DEBUG_KERNEL
help
If you say Y here, some extra kobject debugging messages will be sent
to the syslog.
config DEBUG_KOBJECT_RELEASE
bool "kobject release debugging"
depends on DEBUG_OBJECTS_TIMERS
help
kobjects are reference counted objects. This means that their
last reference count put is not predictable, and the kobject can
live on past the point at which a driver decides to drop its
initial reference to the kobject gained on allocation. An
example of this would be a struct device which has just been
unregistered.
However, some buggy drivers assume that after such an operation,
the memory backing the kobject can be immediately freed. This
goes completely against the principles of a refcounted object.
If you say Y here, the kernel will delay the release of kobjects
on the last reference count to improve the visibility of this
kind of kobject release bug.
config HAVE_DEBUG_BUGVERBOSE
bool
menu "Debug kernel data structures"
config DEBUG_LIST
bool "Debug linked list manipulation"
depends on DEBUG_KERNEL || BUG_ON_DATA_CORRUPTION
help
Enable this to turn on extended checks in the linked-list
walking routines.
If unsure, say N.
config DEBUG_PLIST
bool "Debug priority linked list manipulation"
depends on DEBUG_KERNEL
help
Enable this to turn on extended checks in the priority-ordered
linked-list (plist) walking routines. This checks the entire
list multiple times during each manipulation.
If unsure, say N.
config DEBUG_SG
bool "Debug SG table operations"
depends on DEBUG_KERNEL
help
Enable this to turn on checks on scatter-gather tables. This can
help find problems with drivers that do not properly initialize
their sg tables.
If unsure, say N.
config DEBUG_NOTIFIERS
bool "Debug notifier call chains"
depends on DEBUG_KERNEL
help
Enable this to turn on sanity checking for notifier call chains.
This is most useful for kernel developers to make sure that
modules properly unregister themselves from notifier chains.
This is a relatively cheap check but if you care about maximum
performance, say N.
config BUG_ON_DATA_CORRUPTION
bool "Trigger a BUG when data corruption is detected"
select DEBUG_LIST
help
Select this option if the kernel should BUG when it encounters
data corruption in kernel memory structures when they get checked
for validity.
If unsure, say N.
endmenu
config DEBUG_CREDENTIALS
bool "Debug credential management"
depends on DEBUG_KERNEL
help
Enable this to turn on some debug checking for credential
management. The additional code keeps track of the number of
pointers from task_structs to any given cred struct, and checks to
see that this number never exceeds the usage count of the cred
struct.
Furthermore, if SELinux is enabled, this also checks that the
security pointer in the cred struct is never seen to be invalid.
If unsure, say N.
source "kernel/rcu/Kconfig.debug"
config DEBUG_WQ_FORCE_RR_CPU
bool "Force round-robin CPU selection for unbound work items"
depends on DEBUG_KERNEL
default n
help
The workqueue subsystem used to implicitly guarantee that work
items queued without an explicit CPU are put on the local CPU.
This guarantee no longer holds: while the local CPU is still
preferred, work items may be put on foreign CPUs. The kernel
parameter "workqueue.debug_force_rr_cpu" forces round-robin CPU
selection to flush out usages which depend on the now broken
guarantee. This config option enables the debug feature by
default. When enabled, memory and cache locality will be
impacted.
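For example, with this option disabled, the same behaviour can
still be requested for a single boot by passing:
workqueue.debug_force_rr_cpu=1
on the kernel command line.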
config DEBUG_BLOCK_EXT_DEVT
bool "Force extended block device numbers and spread them"
depends on DEBUG_KERNEL
depends on BLOCK
default n
help
BIG FAT WARNING: ENABLING THIS OPTION MIGHT BREAK BOOTING ON
SOME DISTRIBUTIONS. DO NOT ENABLE THIS UNLESS YOU KNOW WHAT
YOU ARE DOING. Distros, please enable this and fix whatever
is broken.
Conventionally, block device numbers are allocated from a
predetermined contiguous area. However, the extended block
device area may introduce non-contiguous block device numbers.
This option forces most block device numbers to be allocated
from the extended space and spreads them out to discover kernel
or userland code paths which assume predetermined contiguous
device number allocation.
Note that turning on this debug option shuffles all the
device numbers for all IDE and SCSI devices including libata
ones, so root partition specified using device number
directly (via rdev or root=MAJ:MIN) won't work anymore.
Textual device names (root=/dev/sdXn) will continue to work.
Say N if you are unsure.
config CPU_HOTPLUG_STATE_CONTROL
bool "Enable CPU hotplug state control"
depends on DEBUG_KERNEL
depends on HOTPLUG_CPU
default n
help
Allows writing individual steps between "offline" and "online"
to the CPUs' sysfs target file so that hotplug states can be
stepped through one at a time. This is a debug option for now,
as the hotplug machinery cannot be stopped and restarted at
arbitrary points yet.
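Example (sysfs layout as described in
Documentation/core-api/cpu_hotplug.rst; the available state
numbers vary between kernels):
# cat /sys/devices/system/cpu/hotplug/states
# echo <state> > /sys/devices/system/cpu/cpu1/hotplug/target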
Say N if you are unsure.
config LATENCYTOP
bool "Latency measuring infrastructure"
depends on DEBUG_KERNEL
depends on STACKTRACE_SUPPORT
depends on PROC_FS
select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86
select KALLSYMS
select KALLSYMS_ALL
select STACKTRACE
select SCHEDSTATS
select SCHED_DEBUG
help
Enable this option if you want to use the LatencyTOP tool
to find out which userspace processes are blocking on which
kernel operations.
source "kernel/trace/Kconfig"
config PROVIDE_OHCI1394_DMA_INIT
bool "Remote debugging over FireWire early on boot"
depends on PCI && X86
help
If you want to debug problems which hang or crash the kernel early
on boot and the crashing machine has a FireWire port, you can use
this feature to remotely access the memory of the crashed machine
over FireWire. This employs remote DMA as part of the OHCI1394
specification which is now the standard for FireWire controllers.
With remote DMA, you can monitor the printk buffer remotely using
firescope and access all memory below 4GB using fireproxy from gdb.
Even controlling a kernel debugger is possible using remote DMA.
Usage:
If ohci1394_dma=early is used as a boot parameter, it will initialize
all OHCI1394 controllers which are found in the PCI config space.
As all changes to the FireWire bus such as enabling and disabling
devices cause a bus reset and thereby disable remote DMA for all
devices, be sure to have the cable plugged and FireWire enabled on
the debugging host before booting the debug target for debugging.
This code (~1k) is freed after boot. By then, the firewire stack
in charge of the OHCI-1394 controllers should be used instead.
See Documentation/core-api/debugging-via-ohci1394.rst for more information.
source "samples/Kconfig"
config ARCH_HAS_DEVMEM_IS_ALLOWED
bool
config STRICT_DEVMEM
bool "Filter access to /dev/mem"
depends on MMU && DEVMEM
depends on ARCH_HAS_DEVMEM_IS_ALLOWED
default y if PPC || X86 || ARM64
help
If this option is disabled, you allow userspace (root) access to all
of memory, including kernel and userspace memory. Accidental
access to this is obviously disastrous, but specific access can
be used by people debugging the kernel. Note that with PAT support
enabled, even in this case there are restrictions on /dev/mem
use due to the cache aliasing requirements.
If this option is switched on, and IO_STRICT_DEVMEM=n, the /dev/mem
file only allows userspace access to PCI space and the BIOS code and
data regions. This is sufficient for dosemu and X and all common
users of /dev/mem.
If in doubt, say Y.
config IO_STRICT_DEVMEM
bool "Filter I/O access to /dev/mem"
depends on STRICT_DEVMEM
help
If this option is disabled, you allow userspace (root) access to all
io-memory regardless of whether a driver is actively using that
range. Accidental access to this is obviously disastrous, but
specific access can be used by people debugging kernel drivers.
If this option is switched on, the /dev/mem file only allows
userspace access to *idle* io-memory ranges (see /proc/iomem). This
may break traditional users of /dev/mem (dosemu, legacy X, etc...)
if the driver using a given range cannot be disabled.
If in doubt, say Y.
menu "$(SRCARCH) Debugging"
source "arch/$(SRCARCH)/Kconfig.debug"
endmenu
menu "Kernel Testing and Coverage"
source "lib/kunit/Kconfig"
config NOTIFIER_ERROR_INJECTION
tristate "Notifier error injection"
depends on DEBUG_KERNEL
select DEBUG_FS
help
This option provides the ability to inject artificial errors to
specified notifier chain callbacks. It is useful to test the error
handling of notifier call chain failures.
Say N if unsure.
config PM_NOTIFIER_ERROR_INJECT
tristate "PM notifier error injection module"
depends on PM && NOTIFIER_ERROR_INJECTION
default m if PM_DEBUG
help
This option provides the ability to inject artificial errors to
PM notifier chain callbacks. It is controlled through the debugfs
interface /sys/kernel/debug/notifier-error-inject/pm
To make the notifier call chain fail when a given event is
notified, write the error code to "actions/<notifier event>/error".
Example: Inject PM suspend error (-12 = -ENOMEM)
# cd /sys/kernel/debug/notifier-error-inject/pm/
# echo -12 > actions/PM_SUSPEND_PREPARE/error
# echo mem > /sys/power/state
bash: echo: write error: Cannot allocate memory
To compile this code as a module, choose M here: the module will
be called pm-notifier-error-inject.
If unsure, say N.
config OF_RECONFIG_NOTIFIER_ERROR_INJECT
tristate "OF reconfig notifier error injection module"
depends on OF_DYNAMIC && NOTIFIER_ERROR_INJECTION
help
This option provides the ability to inject artificial errors to
OF reconfig notifier chain callbacks. It is controlled
through the debugfs interface under
/sys/kernel/debug/notifier-error-inject/OF-reconfig/
To make the notifier call chain fail when a given event is
notified, write the error code to "actions/<notifier event>/error".
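Example: Inject a node attach error (-12 = -ENOMEM), assuming
your kernel exposes the OF_RECONFIG_ATTACH_NODE event:
# cd /sys/kernel/debug/notifier-error-inject/OF-reconfig/
# echo -12 > actions/OF_RECONFIG_ATTACH_NODE/error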
To compile this code as a module, choose M here: the module will
be called of-reconfig-notifier-error-inject.
If unsure, say N.
config NETDEV_NOTIFIER_ERROR_INJECT
tristate "Netdev notifier error injection module"
depends on NET && NOTIFIER_ERROR_INJECTION
help
This option provides the ability to inject artificial errors to
netdevice notifier chain callbacks. It is controlled through the
debugfs interface /sys/kernel/debug/notifier-error-inject/netdev
To make the notifier call chain fail when a given event is
notified, write the error code to "actions/<notifier event>/error".
Example: Inject netdevice mtu change error (-22 = -EINVAL)
# cd /sys/kernel/debug/notifier-error-inject/netdev
# echo -22 > actions/NETDEV_CHANGEMTU/error
# ip link set eth0 mtu 1024
RTNETLINK answers: Invalid argument
To compile this code as a module, choose M here: the module will
be called netdev-notifier-error-inject.
If unsure, say N.
config FUNCTION_ERROR_INJECTION
def_bool y
depends on HAVE_FUNCTION_ERROR_INJECTION && KPROBES
config FAULT_INJECTION
bool "Fault-injection framework"
depends on DEBUG_KERNEL
help
Provide fault-injection framework.
For more details, see Documentation/fault-injection/.
config FAILSLAB
bool "Fault-injection capability for kmalloc"
depends on FAULT_INJECTION
depends on SLAB || SLUB
help
Provide fault-injection capability for kmalloc.
config FAIL_PAGE_ALLOC
bool "Fault-injection capability for alloc_pages()"
depends on FAULT_INJECTION
help
Provide fault-injection capability for alloc_pages().
config FAIL_MAKE_REQUEST
bool "Fault-injection capability for disk IO"
depends on FAULT_INJECTION && BLOCK
help
Provide fault-injection capability for disk IO.
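Example (knobs as described in
Documentation/fault-injection/fault-injection.rst; the debugfs
knobs require FAULT_INJECTION_DEBUG_FS):
# echo 1 > /sys/block/<device>/make-it-fail
# echo 10 > /sys/kernel/debug/fail_make_request/probability
# echo 100 > /sys/kernel/debug/fail_make_request/times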
config FAIL_IO_TIMEOUT
bool "Fault-injection capability for faking disk interrupts"
depends on FAULT_INJECTION && BLOCK
help
Provide fault-injection capability on end IO handling. This
will make the block layer "forget" an interrupt as configured,
thus exercising the error handling.
Only works with drivers that use the generic timeout handling;
for others it won't do anything.
config FAIL_FUTEX
bool "Fault-injection capability for futexes"
select DEBUG_FS
depends on FAULT_INJECTION && FUTEX
help
Provide fault-injection capability for futexes.
config FAULT_INJECTION_DEBUG_FS
bool "Debugfs entries for fault-injection capabilities"
depends on FAULT_INJECTION && SYSFS && DEBUG_FS
help
Enable configuration of fault-injection capabilities via debugfs.
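Example: make roughly 10% of kmalloc() calls fail for a while
(requires FAILSLAB; knob names as described in
Documentation/fault-injection/fault-injection.rst):
# echo 10 > /sys/kernel/debug/failslab/probability
# echo 100 > /sys/kernel/debug/failslab/times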
config FAIL_FUNCTION
bool "Fault-injection capability for functions"
depends on FAULT_INJECTION_DEBUG_FS && FUNCTION_ERROR_INJECTION
help
Provide function-based fault-injection capability.
This allows you to override a specific function so that it
immediately returns a given value. As a result, callers of the
function will see an error value and have to handle it. This is
useful to test the error handling in various subsystems.
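Example: make open_ctree() fail with -ENOMEM (debugfs layout as
described in Documentation/fault-injection/fault-injection.rst):
# echo open_ctree > /sys/kernel/debug/fail_function/inject
# printf %#x -12 > /sys/kernel/debug/fail_function/open_ctree/retval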
config FAIL_MMC_REQUEST
bool "Fault-injection capability for MMC IO"
depends on FAULT_INJECTION_DEBUG_FS && MMC
help
Provide fault-injection capability for MMC IO.
This will make the mmc core return data errors. This is
useful to test the error handling in the mmc block device
and to test how the mmc host driver handles retries from
the block device.
config FAULT_INJECTION_STACKTRACE_FILTER
bool "stacktrace filter for fault-injection capabilities"
depends on FAULT_INJECTION_DEBUG_FS && STACKTRACE_SUPPORT
depends on !X86_64
select STACKTRACE
select FRAME_POINTER if !MIPS && !PPC && !S390 && !MICROBLAZE && !ARM && !ARC && !X86
help
Provide stacktrace filter for fault-injection capabilities
config ARCH_HAS_KCOV
bool
help
An architecture should select this when it can successfully
build and run with CONFIG_KCOV. This typically requires
disabling instrumentation for some early boot code.
config CC_HAS_SANCOV_TRACE_PC
def_bool $(cc-option,-fsanitize-coverage=trace-pc)
config KCOV
bool "Code coverage for fuzzing"
depends on ARCH_HAS_KCOV
depends on CC_HAS_SANCOV_TRACE_PC || GCC_PLUGINS
select DEBUG_FS
select GCC_PLUGIN_SANCOV if !CC_HAS_SANCOV_TRACE_PC
help
KCOV exposes kernel code coverage information in a form suitable
for coverage-guided fuzzing (randomized testing).
If RANDOMIZE_BASE is enabled, PC values will not be stable across
different machines and across reboots. If you need stable PC values,
disable RANDOMIZE_BASE.
For more details, see Documentation/dev-tools/kcov.rst.
config KCOV_ENABLE_COMPARISONS
bool "Enable comparison operands collection by KCOV"
depends on KCOV
depends on $(cc-option,-fsanitize-coverage=trace-cmp)
help
KCOV also exposes operands of every comparison in the instrumented
code along with operand sizes and PCs of the comparison instructions.
These operands can be used by fuzzing engines to improve the quality
of fuzzing coverage.
config KCOV_INSTRUMENT_ALL
bool "Instrument all code by default"
depends on KCOV
default y
help
If you are doing generic system call fuzzing (such as syzkaller),
then you will want to instrument the whole kernel and you should
say y here. If you are doing more targeted fuzzing (such as
filesystem fuzzing with AFL) then you will want to enable coverage
for more specific subsets of files, and should say n here.
config KCOV_IRQ_AREA_SIZE
hex "Size of interrupt coverage collection area in words"
depends on KCOV
default 0x40000
help
KCOV uses preallocated per-cpu areas to collect coverage from
soft interrupts. This specifies the size of those areas in
units of unsigned long words.
menuconfig RUNTIME_TESTING_MENU
bool "Runtime Testing"
def_bool y
if RUNTIME_TESTING_MENU
config LKDTM
tristate "Linux Kernel Dump Test Tool Module"
depends on DEBUG_FS
help
This module enables testing of the different dumping mechanisms by
inducing system failures at predefined crash points.
If you don't need it: say N
Choose M here to compile this code as a module. The module will be
called lkdtm.
Documentation on how to use the module can be found in
Documentation/fault-injection/provoke-crashes.rst
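Example: trigger a warning through the debugfs interface (crash
points as listed in the documentation above):
# echo WARNING > /sys/kernel/debug/provoke-crash/DIRECT
# dmesg | tail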
config TEST_LIST_SORT
tristate "Linked list sorting test"
depends on DEBUG_KERNEL || m
help
Enable this to turn on the 'list_sort()' function test. The test
is executed only once during system boot (so it affects only boot
time), or at module load time.
If unsure, say N.
config TEST_MIN_HEAP
tristate "Min heap test"
depends on DEBUG_KERNEL || m
help
Enable this to turn on the min heap function tests. The tests
are executed only once during system boot (so they affect only
boot time), or at module load time.
If unsure, say N.
config TEST_SORT
tristate "Array-based sort test"
depends on DEBUG_KERNEL || m
help
This option enables the self-test function of 'sort()' at boot,
or at module load time.
If unsure, say N.
config KPROBES_SANITY_TEST
bool "Kprobes sanity tests"
depends on DEBUG_KERNEL
depends on KPROBES
help
This option provides for testing basic kprobes functionality on
boot. Samples of kprobe and kretprobe are inserted and
verified for functionality.
Say N if you are unsure.
config BACKTRACE_SELF_TEST
tristate "Self test for the backtrace code"
depends on DEBUG_KERNEL
help
This option provides a kernel module that can be used to test
the kernel stack backtrace code. This option is not useful
for distributions or general kernels, but only for kernel
developers working on architecture code.
Note that if you want to also test saved backtraces, you will
have to enable STACKTRACE as well.
Say N if you are unsure.
config RBTREE_TEST
tristate "Red-Black tree test"
depends on DEBUG_KERNEL
help
A benchmark measuring the performance of the rbtree library.
Also includes rbtree invariant checks.
config REED_SOLOMON_TEST
tristate "Reed-Solomon library test"
depends on DEBUG_KERNEL || m
select REED_SOLOMON
select REED_SOLOMON_ENC16
select REED_SOLOMON_DEC16
help
This option enables the self-test function of rslib at boot,
or at module load time.
If unsure, say N.
config INTERVAL_TREE_TEST
tristate "Interval tree test"
depends on DEBUG_KERNEL
select INTERVAL_TREE
help
A benchmark measuring the performance of the interval tree library.
config PERCPU_TEST
tristate "Per cpu operations test"
depends on m && DEBUG_KERNEL
help
Enable this option to build a test module which validates
per-cpu operations.
If unsure, say N.
config ATOMIC64_SELFTEST
tristate "Perform an atomic64_t self-test"
help
Enable this option to test the atomic64_t functions at boot or
at module load time.
If unsure, say N.
config ASYNC_RAID6_TEST
tristate "Self test for hardware accelerated raid6 recovery"
depends on ASYNC_RAID6_RECOV
select ASYNC_MEMCPY
help
This is a one-shot self test that permutes through the
recovery of all the possible two-disk failure scenarios for an
N-disk array. Recovery is performed with the asynchronous
raid6 recovery routines, and will optionally use an offload
engine if one is available.
If unsure, say N.
config TEST_HEXDUMP
tristate "Test functions located in the hexdump module at runtime"
config TEST_STRING_HELPERS
tristate "Test functions located in the string_helpers module at runtime"
config TEST_STRSCPY
tristate "Test strscpy*() family of functions at runtime"
config TEST_KSTRTOX
tristate "Test kstrto*() family of functions at runtime"
config TEST_PRINTF
tristate "Test printf() family of functions at runtime"
config TEST_BITMAP
tristate "Test bitmap_*() family of functions at runtime"
help
Enable this option to test the bitmap functions at boot.
If unsure, say N.
config TEST_BITFIELD
tristate "Test bitfield functions at runtime"
help
Enable this option to test the bitfield functions at boot.
If unsure, say N.
config TEST_UUID
tristate "Test functions located in the uuid module at runtime"
config TEST_XARRAY
tristate "Test the XArray code at runtime"
config TEST_OVERFLOW
tristate "Test check_*_overflow() functions at runtime"
config TEST_RHASHTABLE
tristate "Perform selftest on resizable hash table"
help
Enable this option to test the rhashtable functions at boot.
If unsure, say N.
config TEST_HASH
tristate "Perform selftest on hash functions"
help
Enable this option to test the kernel's integer (<linux/hash.h>),
string (<linux/stringhash.h>), and siphash (<linux/siphash.h>)
hash functions on boot (or module load).
This is intended to help people writing architecture-specific
optimized versions. If unsure, say N.
config TEST_IDA
tristate "Perform selftest on IDA functions"
config TEST_PARMAN
tristate "Perform selftest on priority array manager"
depends on PARMAN
help
Enable this option to test the priority array manager on boot
(or module load).
If unsure, say N.
config TEST_IRQ_TIMINGS
bool "IRQ timings selftest"
depends on IRQ_TIMINGS
help
Enable this option to test the irq timings code on boot.
If unsure, say N.
config TEST_LKM
tristate "Test module loading with 'hello world' module"
depends on m
help
This builds the "test_module" module that emits "Hello, world"
on printk when loaded. It is designed to be used for basic
evaluation of the module loading subsystem (for example when
validating module verification). It lacks any extra dependencies,
and will not normally be loaded by the system unless explicitly
requested by name.
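As a rough sketch of manual use (the exact log text may vary):
modprobe test_module
dmesg | tail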
If unsure, say N.
config TEST_VMALLOC
tristate "Test module for stress/performance analysis of vmalloc allocator"
default n
depends on MMU
depends on m
help
This builds the "test_vmalloc" module that should be used for
stress and performance analysis. So, any new change for vmalloc
subsystem can be evaluated from performance and stability point
of view.
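For example, a single-CPU performance run with a fixed test order
might look like this (parameter names as described by the test
module itself):
modprobe test_vmalloc single_cpu_test=1 sequential_test_order=1 test_repeat_count=3
Each test reports a summary line via printk when it completes.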
If unsure, say N.
config TEST_USER_COPY
tristate "Test user/kernel boundary protections"
depends on m
help
This builds the "test_user_copy" module that runs sanity checks
on the copy_to/from_user infrastructure, making sure basic
user/kernel boundary testing is working. If it fails to load,
a regression has been detected in the user/kernel memory boundary
protections.
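A typical manual run is a sketch like the following, with the
pass/fail summary reported via printk:
modprobe test_user_copy
dmesg | grep test_user_copy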
If unsure, say N.
config TEST_BPF
tristate "Test BPF filter functionality"
depends on m && NET
help
This builds the "test_bpf" module that runs various test vectors
against the BPF interpreter or BPF JIT compiler depending on the
current setting. This is in particular useful for BPF JIT compiler
development, but also to run regression tests against changes in
the interpreter code. It also enables test stubs for eBPF maps and
verifier used by user space verifier testsuite.
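As a sketch, the JIT can be toggled via the bpf_jit_enable sysctl
before loading the module:
sysctl net.core.bpf_jit_enable=1
modprobe test_bpf
dmesg | grep test_bpf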
If unsure, say N.
config TEST_BLACKHOLE_DEV
tristate "Test blackhole netdev functionality"
depends on m && NET
help
This builds the "test_blackhole_dev" module that validates the
data path through this blackhole netdev.
If unsure, say N.
config FIND_BIT_BENCHMARK
tristate "Test find_bit functions"
help
This builds the "test_find_bit" module that measure find_*_bit()
functions performance.
If unsure, say N.
config TEST_FIRMWARE
tristate "Test firmware loading via userspace interface"
depends on FW_LOADER
help
This builds the "test_firmware" module that creates a userspace
interface for testing firmware loading. This can be used to
control the triggering of firmware loading without needing an
actual firmware-using device. The contents can be rechecked by
userspace.
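The firmware selftests drive this interface; for example (script
name assumed from the selftests tree):
tools/testing/selftests/firmware/fw_filesystem.sh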
If unsure, say N.
config TEST_SYSCTL
tristate "sysctl test driver"
depends on PROC_SYSCTL
help
This builds the "test_sysctl" module. This driver enables to test the
proc sysctl interfaces available to drivers safely without affecting
production knobs which might alter system functionality.
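The matching selftest script can drive this module, e.g. (path
assumed from the selftests tree):
tools/testing/selftests/sysctl/sysctl.sh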
If unsure, say N.
config SYSCTL_KUNIT_TEST
tristate "KUnit test for sysctl"
depends on KUNIT
help
This builds the proc sysctl unit test, which runs on boot.
Tests the API contract and implementation correctness of sysctl.
For more information on KUnit and unit tests in general please refer
to the KUnit documentation in Documentation/dev-tools/kunit/.
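One way to run it is under the KUnit wrapper, assuming a
kunitconfig that enables this option:
./tools/testing/kunit/kunit.py run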
If unsure, say N.
config LIST_KUNIT_TEST
tristate "KUnit Test for Kernel Linked-list structures"
depends on KUNIT
help
This builds the linked list KUnit test suite.
It tests the API and basic functionality of the list_head type
and associated macros.
KUnit tests run during boot and output the results to the debug log
in TAP format (http://testanything.org/). Only useful for kernel devs
running the KUnit test harness, and not intended for inclusion into a
production build.
For more information on KUnit and unit tests in general please refer
to the KUnit documentation in Documentation/dev-tools/kunit/.
If unsure, say N.
config LINEAR_RANGES_TEST
tristate "KUnit test for linear_ranges"
depends on KUNIT
select LINEAR_RANGES
help
This builds the linear_ranges unit test, which runs on boot.
Tests the linear_ranges logic correctness.
For more information on KUnit and unit tests in general please refer
to the KUnit documentation in Documentation/dev-tools/kunit/.
If unsure, say N.
config TEST_UDELAY
tristate "udelay test driver"
help
This builds the "udelay_test" module that helps to make sure
that udelay() is working properly.
If unsure, say N.
config TEST_STATIC_KEYS
tristate "Test static keys"
depends on m
help
Test the static key interfaces.
If unsure, say N.
config TEST_KMOD
tristate "kmod stress tester"
depends on m
depends on NETDEVICES && NET_CORE && INET # for TUN
depends on BLOCK
select TEST_LKM
select XFS_FS
select TUN
select BTRFS_FS
help
Test the kernel's module loading mechanism: kmod. kmod implements
support to load modules using the Linux kernel's usermode helper.
This test provides a series of tests against kmod.
Although test_kmod could technically be built into the kernel, we
only allow building it as a module: it stress tests
request_module(), which would very likely take over the limited
threads available to other module load requests, and that could
ultimately be fatal.
To run the tests, run:
tools/testing/selftests/kmod/kmod.sh --help
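Individual test cases can also be selected, for example:
tools/testing/selftests/kmod/kmod.sh -t 0001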
If unsure, say N.
config TEST_DEBUG_VIRTUAL
tristate "Test CONFIG_DEBUG_VIRTUAL feature"
depends on DEBUG_VIRTUAL
help
Test the kernel's ability to detect incorrect calls to
virt_to_phys() done against the non-linear part of the
kernel's virtual address map.
If unsure, say N.
config TEST_MEMCAT_P
tristate "Test memcat_p() helper function"
help
Test the memcat_p() helper for correctly merging two
pointer arrays together.
If unsure, say N.
config TEST_LIVEPATCH
tristate "Test livepatching"
default n
depends on DYNAMIC_DEBUG
depends on LIVEPATCH
depends on m
help
Test kernel livepatching features for correctness. The tests will
load test modules that will be livepatched in various scenarios.
To run all the livepatching tests:
make -C tools/testing/selftests TARGETS=livepatch run_tests
Alternatively, individual tests may be invoked:
tools/testing/selftests/livepatch/test-callbacks.sh
tools/testing/selftests/livepatch/test-livepatch.sh
tools/testing/selftests/livepatch/test-shadow-vars.sh
If unsure, say N.
config TEST_OBJAGG
tristate "Perform selftest on object aggreration manager"
default n
depends on OBJAGG
help
Enable this option to test the object aggregation manager on boot
(or module load).
config TEST_STACKINIT
tristate "Test level of stack variable initialization"
help
Test if the kernel is zero-initializing stack variables and
padding. Coverage is controlled by compiler flags,
CONFIG_GCC_PLUGIN_STRUCTLEAK, CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF,
or CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL.
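When loaded, the module reports one line per test case and a
final failure count via printk; a sketch of manual use:
modprobe test_stackinit
dmesg | grep test_stackinit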
If unsure, say N.
config TEST_MEMINIT
tristate "Test heap/page initialization"
help
Test if the kernel is zero-initializing heap and page allocations.
This can be useful to test init_on_alloc and init_on_free features.
If unsure, say N.
config TEST_HMM
tristate "Test HMM (Heterogeneous Memory Management)"
depends on TRANSPARENT_HUGEPAGE
depends on DEVICE_PRIVATE
select HMM_MIRROR
select MMU_NOTIFIER
help
This is a pseudo device driver solely for testing HMM.
Say M here if you want to build the HMM test module.
Doing so will allow you to run tools/testing/selftests/vm/hmm-tests.
If unsure, say N.
endif # RUNTIME_TESTING_MENU
config MEMTEST
bool "Memtest"
help
This option adds a kernel parameter 'memtest', which allows memtest
to be set.
memtest=0, means disabled (default)
memtest=1, means do 1 test pattern
...
memtest=17, means do 17 test patterns
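For example, booting with memtest=4 asks early boot code to scan
free memory with 4 test patterns and to reserve any regions that
fail the scan.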
If you are unsure how to answer this question, answer N.
config HYPERV_TESTING
bool "Microsoft Hyper-V driver testing"
default n
depends on HYPERV && DEBUG_FS
help
Select this option to enable Hyper-V vmbus testing.
endmenu # "Kernel Testing and Coverage"
endmenu # Kernel hacking