"fault-injection: add tool to run command with failslab or
fail_page_alloc" added tools/testing/fault-injection/failcmd.sh to make it
easier to inject slab/page allocation failures by fault injection.
failcmd.sh prints the following warning when it is run with arguments
for the command:
# ./failcmd.sh echo aaa
failcmd.sh: line 209: [: echo: binary operator expected
aaa
This warning is caused by an improper check of whether at least one
parameter remains after the command options have been parsed.
Fix it by testing the length of $1 instead of $@.
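For illustration, the kind of check this refers to (the exact line in
failcmd.sh may differ) is:

	[ -z "$@" ]   # breaks: with two arguments this becomes [ -z echo aaa ]
	[ -z "$1" ]   # works: tests only the first remaining parameter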
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge Andrew's first set of patches:
"Non-MM patches:
- lots of misc bits
- tree-wide have_clk() cleanups
- quite a lot of printk tweaks. I draw your attention to "printk:
convert the format for KERN_<LEVEL> to a 2 byte pattern" which
looks a bit scary. But afaict it's solid.
- backlight updates
- lib/ feature work (notably the addition and use of memweight())
- checkpatch updates
- rtc updates
- nilfs updates
- fatfs updates (partial, still waiting for acks)
- kdump, proc, fork, IPC, sysctl, taskstats, pps, etc
- new fault-injection feature work"
* Merge emailed patches from Andrew Morton <akpm@linux-foundation.org>: (128 commits)
drivers/misc/lkdtm.c: fix missing allocation failure check
lib/scatterlist: do not re-write gfp_flags in __sg_alloc_table()
fault-injection: add tool to run command with failslab or fail_page_alloc
fault-injection: add selftests for cpu and memory hotplug
powerpc: pSeries reconfig notifier error injection module
memory: memory notifier error injection module
PM: PM notifier error injection module
cpu: rewrite cpu-notifier-error-inject module
fault-injection: notifier error injection
c/r: fcntl: add F_GETOWNER_UIDS option
resource: make sure requested range is included in the root range
include/linux/aio.h: cpp->C conversions
fs: cachefiles: add support for large files in filesystem caching
pps: return PTR_ERR on error in device_create
taskstats: check nla_reserve() return
sysctl: suppress kmemleak messages
ipc: use Kconfig options for __ARCH_WANT_[COMPAT_]IPC_PARSE_VERSION
ipc: compat: use signed size_t types for msgsnd and msgrcv
ipc: allow compat IPC version field parsing if !ARCH_WANT_OLD_COMPAT_IPC
ipc: add COMPAT_SHMLBA support
...
This adds tools/testing/fault-injection/failcmd.sh to run a command while
injecting slab/page allocation failures via fault injection.
Example:
Run a command "make -C tools/testing/selftests/ run_tests" with
injecting slab allocation failure.
# ./tools/testing/fault-injection/failcmd.sh \
-- make -C tools/testing/selftests/ run_tests
Same as above, except allow at most 100 failures instead of the
default of at most one:
# ./tools/testing/fault-injection/failcmd.sh --times=100 \
-- make -C tools/testing/selftests/ run_tests
Same as above, except inject page allocation failures instead of slab
allocation failures:
# env FAILCMD_TYPE=fail_page_alloc \
./tools/testing/fault-injection/failcmd.sh --times=100 \
-- make -C tools/testing/selftests/ run_tests
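Under the hood the script drives the fault-injection debugfs knobs. A rough
hand-run equivalent of the slab case (assuming debugfs is mounted at
/sys/kernel/debug; the values here are only illustrative):

	echo 1 > /sys/kernel/debug/failslab/probability
	echo 100 > /sys/kernel/debug/failslab/times
	echo 1 > /sys/kernel/debug/failslab/task-filter
	sh -c 'echo 1 > /proc/self/make-it-fail && exec make -C tools/testing/selftests/ run_tests'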
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This adds two selftests:
* tools/testing/selftests/cpu-hotplug/on-off-test.sh is a test script
for CPU hotplug. It does the following:
1. Online all hot-pluggable CPUs
2. Offline all hot-pluggable CPUs
3. Online all hot-pluggable CPUs again
4. Exit if cpu-notifier-error-inject.ko is not available
5. Offline all hot-pluggable CPUs in preparation for testing
6. Test CPU hot-add error handling by injecting notifier errors
7. Online all hot-pluggable CPUs in preparation for testing
8. Test CPU hot-remove error handling by injecting notifier errors
* tools/testing/selftests/memory-hotplug/on-off-test.sh does the same
thing for memory hotplug:
1. Online all hot-pluggable memory
2. Offline 10% of hot-pluggable memory
3. Online all hot-pluggable memory again
4. Exit if memory-notifier-error-inject.ko is not available
5. Offline 10% of hot-pluggable memory in preparation for testing
6. Test memory hot-add error handling by injecting notifier errors
7. Online all hot-pluggable memory in preparation for testing
8. Test memory hot-remove error handling by injecting notifier errors
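Both scripts work against the standard hotplug sysfs interfaces; a rough
hand-run sketch of what they automate (the CPU and memory block numbers
are just examples):

	echo 0 > /sys/devices/system/cpu/cpu1/online
	echo 1 > /sys/devices/system/cpu/cpu1/online
	echo offline > /sys/devices/system/memory/memory8/state
	echo online > /sys/devices/system/memory/memory8/state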
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Greg KH <greg@kroah.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Dave Jones <davej@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add '=~' and '!~' to the list of allowed conditionals for DEFAULTS and
TEST_START section IF statements.
For example:
TEST_START IF TEST =~ .*test$
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The option IGNORE_ERRORS is used to allow a test to succeed even if a
warning appears from the kernel. Sometimes kernels will produce warnings
that are not associated with a test, and the user wants to test
something else.
IGNORE_ERRORS worked for boot up, but it did not let a test run succeed
if the kernel produced a warning.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The min configs are saved in a perl hash called force_config, and this
hash is used to add configs to the .config file. But it was not being
reset between tests, so a min config from a previous test could affect
the min config of the next test, causing undesirable results.
Reset the force_config hash at the start of each test.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Usually the target is booted into a dependable kernel when a test
starts. The test will install the test kernel and reboot the box. But
there may be times when the machine is running an unreliable kernel and
the reboot may crash.
Have ktest detect crashes on a reboot and force a power-cycle instead.
This can usually happen if a test kernel was installed to run manual
tests, but the user forgot to reboot to the known good kernel.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
If the console is constantly outputting content, this can cause ktest
to get stuck waiting on the monitor to settle down.
The option MAX_MONITOR_WAIT is the maximum time (in seconds) for ktest
to wait for the console to flush.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
With a name like 'oldnoconfig' one may think that the config generated
would disable all configs that were not defined (selecting "no" for all
options). But this is not the case. It selects the default. If a config
has a 'default y', then it is added if not specified.
This broke the config bisect, because options not specified by a config
would just use the default, where they were expected to be turned off. This
caused an option to be enabled that in turn disabled the option that would
have broken the build. The end result was that we never found the bad config
at the end of the test.
Instead of using 'make oldnoconfig', ktest now itself sets the options it
expects to be enabled or disabled. When it turns off an option, it will no
longer remove it, but actually set it to:
# CONFIG_FOO is not set
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The config-bisect can take a bad config and bisect it down to find out
which config option actually breaks it. But as all tests apply a
user-defined minconfig before booting, it is possible
that the minconfig could actually make the bad config work (minconfigs
can disable configs). The end result is that the config bisect test will
not find a config that breaks. This can be rather frustrating for the
user.
The CONFIG_BISECT_CHECK option, when set to 1, will make sure that the
bad config (with the minconfig applied) still fails before trying to
bisect.
And yes, I did get burned by this.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Add the PRE_INSTALL option that will allow a user to specify a shell
command to be executed before the install operation executes.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
In order to let the user add commands before and after ktest runs, the
PRE_KTEST and POST_KTEST options are defined. They hold shell commands
that will execute before ktest runs its first test and after it
completes its last test.
PRE_TEST and POST_TEST will run before and after (respectively)
a given test. They can either be global (done for all tests) or
defined by a single test.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
While doing checkpoint-restore in user space one needs to determine
whether various kernel objects (like mm_struct-s or file_struct-s) are
shared between tasks, and to restore this state.
The 2nd step can be solved by using the appropriate CLONE_ flags and the
unshare syscall, while there is currently no way of solving the 1st one.
One way of checking whether two tasks share e.g. an mm_struct is to
expose some mm_struct ID of a task in its proc file, but showing such
info is considered bad for security reasons.
Thus after some debate we came to the conclusion that a
'comparison' syscall might be the best candidate. So here it is --
__NR_kcmp.
It takes up to 5 arguments - the pids of the two tasks (whose
characteristics should be compared), the comparison type and (in case of
comparison of files) two file descriptors.
Lookups for pids are done in the caller's PID namespace only.
At the moment only x86 is supported and tested.
[akpm@linux-foundation.org: fix up selftests, warnings]
[akpm@linux-foundation.org: include errno.h]
[akpm@linux-foundation.org: tweak comment text]
Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Andrey Vagin <avagin@openvz.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Glauber Costa <glommer@parallels.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Matt Helsley <matthltc@us.ibm.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Vasiliy Kulikov <segoon@openwall.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Valdis.Kletnieks@vt.edu
Cc: Michal Marek <mmarek@suse.cz>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add the mq_perf_tests tool I used when creating my mq performance patch.
Also add a local .gitignore to keep the binaries from showing up in git
status output.
[akpm@linux-foundation.org: checkpatch fixes]
Signed-off-by: Doug Ledford <dledford@redhat.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add a directory to house POSIX message queue subsystem specific tests.
Add a first test which checks the operation of mq_open() under various
corner conditions.
Signed-off-by: Doug Ledford <dledford@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Joe Korty <joe.korty@ccur.com>
Cc: Amerigo Wang <amwang@redhat.com>
Cc: Serge E. Hallyn <serue@us.ibm.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Add a README that explains what the different example configs in the
ktest example directory are about.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
I used the snowball.conf in a live demo that demonstrated how to use
ktest.pl with a snowball ARM board. I've been asked to include that
config in the ktest repository.
Here it is.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Add the config that I use to test several archs. I downloaded several
cross compilers from:
http://kernel.org/pub/tools/crosstool/files/bin/x86_64/
and this config is an example of cross-compiling several archs to make sure
that your changes do not break archs that you are not working on.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
I've been asked several times to provide more useful example configs for
ktest.pl, as the sample.conf is too complex (because it explains all
configs). This adds configs broken up by use case, and these configs are
based on actual configs that I use on a daily basis.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
If the file that OUTPUT_MIN_CONFIG points to exists, then ktest.pl will
prompt the user and ask whether OUTPUT_MIN_CONFIG should be used as the
starting point for make_min_config instead of MIN_CONFIG.
This is usually the case. To let the user skip the prompt, which is
helpful if the user is creating different min configs based on tests
and knows one is a superset of another, they can set
USE_OUTPUT_MIN_CONFIG to one, which will prevent ktest.pl from prompting
and make it just use OUTPUT_MIN_CONFIG.
If USE_OUTPUT_MIN_CONFIG is set to zero, then ktest.pl will continue to
use MIN_CONFIG instead.
The default is that USE_OUTPUT_MIN_CONFIG is undefined.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Add a MIN_CONFIG_TYPE that can be set to 'test' or 'boot'. The default
is 'boot', which is what make_min_config has done previously: it makes a
config file that is the minimum needed to boot the target.
But when MIN_CONFIG_TYPE is set to 'test', not only must the target
boot, but it must also successfully run the TEST. This allows the
creation of a config file that is the minimum needed to boot and also,
for example, ssh to the target, or do anything else a developer wants.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The PRE_BUILD and POST_BUILD options of ktest are added to allow the
user to apply a temporary patch for a build and remove it afterwards. This
is sometimes used to take a change from another git branch and add it to
a series that lacks the fix, so that the series can be tested even though
an unrelated bug exists in it.
The problem comes when a tagged commit is being used. For example, if
v3.2 is being tested and we add a patch to it, the kernelrelease for
that commit will be 3.2.0+, but without the patch the version will be
3.2.0. This can cause problems when the kernelrelease is determined for
creating the /lib/modules directory. The booting kernel has the '+' but
the module directory does not, so the modules will be missing for that
boot, which may keep the kernel from booting successfully.
The fix is to determine the kernelrelease in the POST_BUILD
logic, before the POST_BUILD operation is applied. POST_BUILD is
where the patch may be removed, removing the '+' from the kernelrelease.
The calculation of the kernelrelease also stays in its current
location, but is skipped if it was already calculated previously.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The change to let individual tests decide to reboot the machine on
success of the entire test also prevented the machine from rebooting
when an error was detected.
The "no_reboot" variable was only cleared if the test had
reboot_on_success set. But the no_reboot variable also prevents the test
from rebooting when an error is detected, even when REBOOT_ON_ERROR is set.
Add a new "reboot_success" variable that is used to determine if the
test should reboot on success and not touch the no_reboot variable.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When BISECT_REVERSE and BISECT_SKIP are used together with boot or test
testing, build failures are treated as boot or test failures and
'git bisect bad' is executed instead of 'git bisect skip'. This is because
the $ret value of -1 is treated as a build failure, but the $reverse_bisect
logic does not properly handle this.
The fix is simple: only invert $ret if it is positive.
Link: http://lkml.kernel.org/r/1335235380-8509-1-git-send-email-Russ.Dill@ti.com
Signed-off-by: Russ Dill <Russ.Dill@ti.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
hugepage-mmap.c, hugepage-shm.c and map_hugetlb.c in Documentation/vm are
simple pass/fail tests; it's better to promote them to
tools/testing/selftests.
Thanks to Andrew Morton for suggesting this. They all first need
nr_hugepages to be set up properly, and hugepage-mmap needs hugetlbfs to
be mounted. So add a shell script, run_vmtests, to do that work: it calls
the three test programs and checks their return values.
Changes to the original code include:
a. add run_vmtests script
b. return an error when the bytes read do not match the bytes written.
c. coding style fixes: do not use assignment in if condition
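A rough sketch of what such a wrapper does (the hugepage count and mount
point are only illustrative, not the script's actual values):

	echo 256 > /proc/sys/vm/nr_hugepages
	mount -t hugetlbfs none /mnt
	./hugepage-mmap || echo "hugepage-mmap: FAIL"
	./hugepage-shm || echo "hugepage-shm: FAIL"
	./map_hugetlb || echo "map_hugetlb: FAIL"
	umount /mnt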
[akpm@linux-foundation.org: build the targets before trying to execute them]
[akpm@linux-foundation.org: Documentation/vm/ no longer has a Makefile. Fixes "make clean"]
Signed-off-by: Dave Young <dyoung@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
So a "make run_tests" will build the tests before trying to run them.
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Remove the run_tests script and launch the selftests by calling "make
run_tests" from the selftests top directory instead. This delegates to
the Makefile in each selftest directory, where it is decided how to launch
the local test.
This removes the need to add each selftest directory to the now removed
"run_tests" top script.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'ktest-v3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest
Pull ktest changes from Steven Rostedt.
* tag 'ktest-v3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest:
ktest: Allow a test to override REBOOT_ON_SUCCESS
ktest: Fix SWITCH_TO_GOOD to also reboot the machine
ktest: Add SCP_TO_TARGET_INSTALL option
ktest: Add warning when bugs are ignored
ktest: Add INSTALL_MOD_STRIP=1 when installing modules
The option REBOOT_ON_SUCCESS is global, and will have the machine reboot
if all tests are successful. But a test may not want the
machine to reboot, and perhaps have the kernel it loaded be used to
install the next kernel. Or the last test may set up a kernel that the
user may want to look at. In this case, the user could have the global
option REBOOT_ON_SUCCESS be true, but if a test is defined to run at the
end, that test can override the global option and keep the kernel it
installed for the user to log in with.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When the option SWITCH_TO_GOOD is set, it will be called when the system
needs to reboot to the good kernel. But currently, this keeps the reboot
from happening. SWITCH_TO_GOOD is just a way to switch to a new kernel;
it does not mean the reboot should be skipped.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Currently the option used to scp both the modules and the kernel image
to the target is the same (SCP_TO_TARGET). But some embedded
boards may require them to be different. The modules may need to be put
directly on the board, but the kernel image may need to go to a
tftp server.
Add the option SCP_TO_TARGET_INSTALL that will allow the user to change
the config so that the modules and the image may go to different
machines.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When IGNORE_ERRORS is set, ktest will not fail a test if a backtrace
is detected. But this can be an issue if the user added it in the
config but forgot to remove it. They may be left wondering why their
test did not fail, or even worse, why their bisect gave the wrong
commit.
Add a warning in the output if IGNORE_ERRORS is set and ktest detects
a kernel error.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Pull trivial tree from Jiri Kosina:
"It's indeed trivial -- mostly documentation updates and a bunch of
typo fixes from Masanari.
There are also several linux/version.h include removals from Jesper."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (101 commits)
kcore: fix spelling in read_kcore() comment
constify struct pci_dev * in obvious cases
Revert "char: Fix typo in viotape.c"
init: fix wording error in mm_init comment
usb: gadget: Kconfig: fix typo for 'different'
Revert "power, max8998: Include linux/module.h just once in drivers/power/max8998_charger.c"
writeback: fix fn name in writeback_inodes_sb_nr_if_idle() comment header
writeback: fix typo in the writeback_control comment
Documentation: Fix multiple typo in Documentation
tpm_tis: fix tis_lock with respect to RCU
Revert "media: Fix typo in mixer_drv.c and hdmi_drv.c"
Doc: Update numastat.txt
qla4xxx: Add missing spaces to error messages
compiler.h: Fix typo
security: struct security_operations kerneldoc fix
Documentation: broken URL in libata.tmpl
Documentation: broken URL in filesystems.tmpl
mtd: simplify return logic in do_map_probe()
mm: fix comment typo of truncate_inode_pages_range
power: bq27x00: Fix typos in comment
...
make_min_config test failed to work because the snowball board I was testing
it against had a config that would not build. But the make_min_config
only tested the testing part and ignored build failures. The end result
was a config file that would not boot.
This time, for real.
Merge tag 'ktest-fix-make-min-failed-build-for-real' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest
While demoing ktest at ELC in 2012, it was embarrassing that the
make_min_config test failed to work because the snowball board I was
testing it against had a config that would not build. But the
make_min_config only tested the testing part and ignored build failures.
The end result was a config file that would not boot.
This time, for real.
* tag 'ktest-fix-make-min-failed-build-for-real' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest:
ktest: Fix make_min_config test when build fails
The make_min_config test does not take into account when the build fails,
resulting in an invalid MIN_CONFIG .config file. When the build fails,
it is ignored and the boot test is executed using the previously built
kernel. The configs that should be tested are not tested, and they may
be added or removed depending on the result of the last kernel that
was successfully built.
If the build fails, mark the current config as a failure; the
configs that were disabled may still be needed.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-ktest:
ktest: Add IGNORE_ERRORS to ignore warnings in boot up
ktest: Still do reboot even for REBOOT_TYPE = script
ktest: Fix compare script to test if options are not documented
ktest: Detect typos in option names
ktest: Have all values be set by defaults
ktest: Change initialization of defaults hash to perl format
ktest: Add options SWITCH_TO_GOOD and SWITCH_TO_TEST
ktest: Allow overriding bisect test results
ktest: Evaluate options before processing them
ktest: Evaluate $KERNEL_VERSION in both install and post install
ktest: Only ask options needed for install
ktest: When creating a new config, ask for BUILD_OPTIONS
ktest: Do not ask for some options if the only test is build
ktest: Ask for type of test when creating a new config
ktest: Allow bisect test to restart where it left off
ktest: When creating new config, allow the use of ${THIS_DIR}
ktest: Add default for ssh-user, build-target and target-image
ktest: Allow success logs to be stored
ktest: Save test output
Bring in a first selftest in the relevant directory. This tests several
combinations of breakpoints and watchpoints on x86, as well as icebp traps
and int3 traps. Given the number of breakpoint regressions we raised
after we merged the generic breakpoint infrastructure, such a selftest
became necessary and can still serve today as a basis for new patches that
touch the do_debug() path.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Bring in a new kernel selftests directory in tools/testing/selftests. To
add a new selftest, create a subdirectory with the sources and a
makefile that creates a target named "run_test", then add the
subdirectory name to the TARGET var in tools/testing/selftests/Makefile
and the tools/testing/selftests/run_tests script.
This can help centralize and maintain any useful selftests that
developers usually tend to let rust in peace on some random server.
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (53 commits)
Kconfig: acpi: Fix typo in comment.
misc latin1 to utf8 conversions
devres: Fix a typo in devm_kfree comment
btrfs: free-space-cache.c: remove extra semicolon.
fat: Spelling s/obsolate/obsolete/g
SCSI, pmcraid: Fix spelling error in a pmcraid_err() call
tools/power turbostat: update fields in manpage
mac80211: drop spelling fix
types.h: fix comment spelling for 'architectures'
typo fixes: aera -> area, exntension -> extension
devices.txt: Fix typo of 'VMware'.
sis900: Fix enum typo 'sis900_rx_bufer_status'
decompress_bunzip2: remove invalid vi modeline
treewide: Fix comment and string typo 'bufer'
hyper-v: Update MAINTAINERS
treewide: Fix typos in various parts of the kernel, and fix some comments.
clockevents: drop unknown Kconfig symbol GENERIC_CLOCKEVENTS_MIGR
gpio: Kconfig: drop unknown symbol 'CS5535_GPIO'
leds: Kconfig: Fix typo 'D2NET_V2'
sound: Kconfig: drop unknown symbol ARCH_CLPS7500
...
Fix up trivial conflicts in arch/powerpc/platforms/40x/Kconfig (some new
kconfig additions, close to removed commented-out old ones)
When testing a kernel that has warnings, ktest.pl will fail the test
when it sees the warning. If you need to test the kernel and want
to ignore the errors that are produced, the option IGNORE_ERRORS has
been added. When IGNORE_ERRORS is set to something other than 0, it will
ignore call traces due to WARN_ON().
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The REBOOT_TYPE may be either grub or script; if it is script,
it is expected that a REBOOT_SCRIPT is defined.
With SWITCH_TO_TEST, the complement of SWITCH_TO_GOOD,
which does basically the same thing as REBOOT_SCRIPT but for
both grub and script, the REBOOT_SCRIPT does not need to be mandatory
anymore.
Do not require the REBOOT_SCRIPT and always run the reboot code
for both grub and script.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The compare script compare-ktest-sample.pl checks for options
that are defined in ktest.pl but not documented in sample.conf,
as well as options in sample.conf that are not used in ktest.pl.
With the switch to the hash format for initializing the ktest variables,
the compare script needs to be updated to handle the change.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
It becomes quite annoying when you go to run a test and then
realize that you typed an option name wrong, and the test starts
doing the default action and not what you expected it to do.
It is even more annoying when you wake up the next day after
running the test overnight and discover this.
By testing whether every option specified in a config file is
used either by ktest or in one of the other options' values,
we can see if there are any dangling options that were not used.
In such a case, show the user the options that were not used
and ask them if they want to continue or not.
The option IGNORE_UNUSED was also added to allow the user to
override this feature.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Currently the patchcheck, bisect, and config_bisect variables
can only be set per test. You cannot set a default
value for them.
Letting default values be set makes some config files
a bit simpler, and also makes it easier to find typos in the
option names.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Initializing each default value by specifying the hash name is
ugly. This is one of the rare cases that the "perl way" is actually
much cleaner and easier to read.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
For machines that do not use grub, it may be necessary to update an
external image (tftp) before rebooting into either the
test image or the known good image.
The option SWITCH_TO_GOOD is added, where if it is defined, the
command that is specified as its value will be executed before
doing a reboot into a known good image.
The option SWITCH_TO_TEST is added, where if it is defined, the
command that is specified as its value will be executed before
doing a reboot into the test image.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When running the ktest git bisect test, if the BISECT_TYPE is "test",
the bisect is determined to be good or bad based on the error
code of the test that is run. Currently, if the test returns 0,
it is considered a pass (good); a non-zero is considered a fail (bad).
But it has been requested to add more options, and also to let the
meanings of the test's error codes be changed. For example, one may
want the test to detect that the commit is neither good nor bad
(maybe the bisect came to a point where the code in question
does not exist). The test could report an error code that tells
ktest to skip the commit.
Also, a test could detect that something is horribly wrong and the
bisect should just be aborted.
The new options:
BISECT_RET_GOOD
BISECT_RET_BAD
BISECT_RET_SKIP
BISECT_RET_ABORT
BISECT_RET_DEFAULT
have been added. The first four take an integer value that
represents whether the test should be considered a pass, a fail, neither
good nor bad, or an abort, respectively.
BISECT_RET_DEFAULT will be whatever is not defined by the
above codes. If only BISECT_RET_DEFAULT is defined, then all tests
will do the default.
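For example, a test section using these options might look like the
following (the numeric values and the default keyword are only
illustrative; the changelog does not spell out concrete values):

	BISECT_RET_GOOD = 0
	BISECT_RET_BAD = 1
	BISECT_RET_SKIP = 125
	BISECT_RET_ABORT = 255
	BISECT_RET_DEFAULT = bad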
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
All options can take variables "${var}". Before doing any processing
or decision making on the content of an option, evaluate it in case
there are variables that may change the outcome.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The install process may also need to know what the kernel version
is, to add it to the name. Evaluate it for both install and
post install.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
If all the tests are build or install only, do not ask
for options that are not needed to do the install, if those options
are not already set.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When creating a new config, ask for the BUILD_OPTIONS variable
that lets users add things like -j20 to the make.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When creating a ktest config, or if the config only has build-only
tests, some of the mandatory config options are not needed.
Do not ask for them if all tests in the config file are just build
tests.
Suggested-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When no argument is supplied to ktest, or the config supplied does
not exist and a new config is being created, instead of just using
the default test type, give the user an option to pick the test type:
'build', 'install', or 'boot'. Other options may be added later,
but those would require more questions as they require more
fields. That's for another release of ktest.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
If a bisect is killed for some reason, have ktest detect that a bisect
is in progress and if so, allow the user to start the bisect where
it left off.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Typing in a full path when you know that the path exists within
the directory you are running from is tedious and unnecessary.
Allow the user to use ${PWD} if they want a dynamic path name,
which will be the path that ktest.pl is executed from,
or use ${THIS_DIR}, a variable assigned `pwd` that
exists within the config, allowing the user
to change it and affect all other paths using this variable as well.
When a user runs ktest without an argument, or the argument given
is not a config file that exists, ktest will ask the user a few
questions to create a simple ktest config file.
A few of the questions should have a default value set; if anything,
it will make it easier for the user to know what is supposed to
be in that value.
These new values are:
SSH_USER, BUILD_TARGET and TARGET_IMAGE
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Add a STORE_SUCCESSES option, to allow success logs to be stored, for
example to double-check or otherwise post-process the test logs.
Link: http://lkml.kernel.org/r/1321616131-21352-3-git-send-email-rabin@rab.in
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The test output may contain useful information; save it along with the
already-saved buildlog, dmesg, and .config.
Link: http://lkml.kernel.org/r/1321616131-21352-1-git-send-email-rabin@rab.in
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Let's say we have "OUTPUT_DIR = build/${TEST_NAME}", and we're iterating
a test. In the second iteration of a test, the TEST_NAME of the test
we're repeating is not used. Instead, ${TEST_NAME} appears literally:
touch /home/rabin/kernel/test/build/${TEST_NAME}/.config ... SUCCESS
Fix this by making __eval_option() check the parent test options
for a repeated test.
Link: http://lkml.kernel.org/r/1321616131-21352-2-git-send-email-rabin@rab.in
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When ktest.pl is called without any arguments, or if the config
file does not exist, ktest.pl will ask the user for some information.
Some of these questions are code paths. Allowing the user to type
${PWD} for the current directory greatly simplifies these entries.
Add variable processing to the entered values.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
On some tests that do multiple boots (patchcheck, bisect, etc), the build
of the next kernel to run may finish before the stable kernel has finished
booting. Then the install of the new kernel will fail when it tries to connect
as the machine has not finished the boot process.
Do one more monitor flush to make sure the machine is up and running before
trying to connect to it again.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When setting the next kernel to boot to with grub, do not opencode
the reboot operation. The normal reboot operation can be modified by
config options (namely POWERCYCLE_AFTER_REBOOT). This needs to affect
all reboots. Remove the opencoded reboot to make sure that any changes
to the reboot code also affect all reboots.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The IF statements for DEFAULTS and TEST_START sections now handle
complex statements (&&,||)
Example:
TEST_START IF (DEFINED ALL_TESTS || ${MYTEST} == boottest) && ${MACHINE} == gandalf
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The order of some of the keywords on a section line
(TEST_START or DEFAULTS) does not really matter. We simply need
to remove each keyword from the line as we process it and
evaluate the next keyword in the line. By removing the keywords
as we find them, we do not need to keep track of where on the
line they were found.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The make_min_config test will turn off one config at a time and check
if the config boots or not, and if it does, it will remove that config
plus any config that depended on that config.
ktest already looks if a config has a dependency and will try the
dependency config first. But by sorting the configs and trying the
config with the most configs dependent on it, we can shrink the
minconfig faster.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Have IF statements test whether a config variable or option has been
defined. NOT DEFINED can be used to test that
a variable or option has not been defined.
DEFAULTS IF NOT DEFINED SSH_USER
SSH_USER = root
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The OVERRIDE keyword will allow options defined in the given
DEFAULTS section to override options defined in previous DEFAULTS
sections.
Normally, options will error if they were previously defined.
The OVERRIDE keyword allows options that have been previously
defined to be changed in the given section.
Note, the same option can not be defined twice in the same DEFAULTS
section, even if that section is marked as OVERRIDE.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The code that handles parsing the TEST_START and DEFAULTS sections shares
a lot of common functionality. Combine the two and add an if statement
that handles what is different between them.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Have the reading of the config file allow reading of other config
files using the INCLUDE keyword. This allows multiple config files
to share config options.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Allow ==, !=, <=, >=, <, and > to be used in IF statements
to compare if a section should be processed or not.
For example:
BITS := 32
DEFAULTS IF ${BITS} == 32
MIN_CONFIG = ${CONFIG_DIR}/config-32
ELSE
MIN_CONFIG = ${CONFIG_DIR}/config-64
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Add IF keyword to sections within the config. Also added an ELSE
keyword that allows different config options to be set for a given
section.
For example:
TYPE := 1
STATUS := 0
DEFAULTS IF ${TYPE}
[...]
ELSE IF ${STATUS}
[...]
ELSE
[...]
The above will process the first section as $TYPE is true. If it
was false, it would process the last section as $STATUS is false.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Even if REBOOT_ON_ERROR is set, it becomes annoying that the target
machine is rebooted when a config option is incorrect or a build
fails. There's no reason to reboot the target for host only issues.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When ktest.pl reboots, it will usually wait SLEEP_TIME seconds for an idle
console before starting the next test. By setting
REBOOT_SUCCESS_LINE, ktest will not wait SLEEP_TIME when it detects the
line while rebooting to a new kernel.
There's cases where running the same kernel over and over again
is useful, and being able to not install the same kernel can
save time between tests.
Add a NO_INSTALL option that tells ktest.pl to not install the
new kernel.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Currently, if the grub menu that is supplied is not found, it will
just boot into the last grub menu in menu.lst. Fail instead of leaving
the user confused about why their kernel is not booting.
Several places that call reboot do the same thing with respect to the
monitor. By adding this code into the reboot code, redundant code is
removed, and it paves the way for the reset time patch.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Link: http://lkml.kernel.org/r/1313155932-20092-4-git-send-email-drjones@redhat.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
In testing one of my boxes, I found that I only wanted to build and
install the kernel. I wanted to manually reboot the box and test it.
Adding a TEST_TYPE option "install" allows this to happen.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The MIN_CONFIG is a single config that is considered to have all the
configs that are required to boot the box.
ADD_CONFIG is a list of configs that we add that may contain configs
known to be broken (set off) or just configs that we want every box to
have and this can include shared configs.
If a ktest config has no MIN_CONFIG defined, but has multiple files defined
for ADD_CONFIG, the test will die, because MIN_CONFIG will
default to ADD_CONFIG. The problem is that the code to open MIN_CONFIG
expects a string naming one file, not several, and the open will fail.
Since the real minconfig that is used is a concatenation of the MIN_CONFIG
and ADD_CONFIG files, change the code to open that instead of
whatever MIN_CONFIG defaults to.
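Conceptually (the temporary file name is only an illustration of what
ktest builds internally), the config that actually gets applied is:

	cat $MIN_CONFIG $ADD_CONFIG > $TMP_DIR/use_config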
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The IGNORE_CONFIG file holds the configs that we don't want to change
(with their proper settings). But on start up, the make noconfig is
executed, and the configs that are on are also put into the ignore
config category. But these are configs that were forced on by the
kconfig scripts and not something that we found must be enabled to boot
our machine. By keeping the configs that are forced on by default,
separate from the configs we found that are required to boot the box, we
can get a much more interesting IGNORE_CONFIG. In fact, the
IGNORE_CONFIG can usually end up being the must have configs to boot,
and only have 6 or 7 configs set.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
If the OUTPUT_MIN_CONFIG defined for the make_min_config test exists,
then give a prompt to ask the user if they want to use that config
instead, as is very often the case, especially when the test has been
interrupted. The OUTPUT_MIN_CONFIG is usually the config that one wants
to use to continue the test where they left off.
But if START_MIN_CONFIG is defined (thus the MIN_CONFIG is not the
default), then do not prompt, as it will be annoying if the user has
this as one of many tests, and the test pauses waiting for input, while
the user is sleeping.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
To save time, the test does not just grab any option and test
it. The Kconfig files are examined to determine the dependencies
of the configs. If a config is chosen that depends on another
config, that config will be checked first. By checking the
parents first, we can eliminate whole groups of configs that
may have been enabled.
For example, if a USB device config is chosen and depends on
CONFIG_USB, the CONFIG_USB will be tested before the device.
If CONFIG_USB is found not to be needed, it, as well as all
configs that depend on it, will be disabled and removed from
the current min_config.
Note, the code from streamline_config (make localmodconfig)
was copied and used to find the dependencies in the Kconfig file.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
After doing a make localyesconfig, your kernel configuration may
not be the most useful minimum configuration. Having a true minimum
config that you can use against other configs is very useful if
someone else has a config that breaks on your code. Forcing only
those configurations that are truly required to boot your machine
gives you less of a chance that one of your set configurations
will make the bug go away. This gives you a better chance of
being able to reproduce the reported bug with the broken config.
Note, this does take some time, and may require you to run the
test overnight, or perhaps over the weekend. But it also allows
you to interrupt it, and gives you the current minimum config
found up to that time.
Note, this test automatically assumes a BUILD_TYPE of oldconfig
and its test type acts like boot.
TODO: add a test version that makes the config do more than just
boot, like having network access.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
There have been too many times that I put in one too many SKIP
TEST_STARTs and accidentally started the test with the default randconfig,
so I added this to have ktest ask the user which test they want to
run if no TEST_START is specified.
Now if I accidentally start the test with all TEST_STARTs skipped, ktest
asks which test I want to run, and I now have a chance to kill it
before it does a make mrproper on my build directory.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Several places had the following code:
get_grub_index;
get_version;
install;
start_monitor;
return monitor;
Creating a function "start_monitor_and_boot()" replaces these multiple
uses with a single call.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When doing a patchcheck test, there may be warnings that gcc produces which
are OK, and the test should not fail on that commit. Adding an
IGNORE_WARNINGS option that lists space-delimited SHA1s to be ignored
lets the user avoid having the test fail on certain commits.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The tar command to create the module directory uses cjf, but the
extraction only used xf. This works on most versions of tar, but some
versions of tar require xjf for extraction as well.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
As multiple tests may be executed by the same server, have the test
machine name add uniqueness to the value of the temp directory.
Otherwise the temp directories may overwrite each other's tests.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
There are some cases where a patch may need to be applied to the kernel
in patchcheck or bisect tests. Adding a PRE_BUILD option to apply the
patch and a POST_BUILD option to remove it allows this to be done easily.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When a config is set with CONFIG_MODULES=n, it does not mean that the
kernel does not need an initrd to boot. For systems that depend on LVM
and such, an initrd must run first.
If POST_INSTALL is defined, then run the post install regardless if
modules are needed or not.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
After a bug is found, the STOP_AFTER_FAILURE timeout is used to
determine how much output should be printed before breaking out
of the monitor loop. This is to get things like call traces and
enough information about the bug to help determine what caused it.
The STOP_AFTER_FAILURE is usually much shorter than the TIMEOUT
that is used to determine when to quit after no more stdio is given.
But since the stdio read uses a wait on I/O, the STOP_AFTER_FAILURE is
only checked after we get something from I/O. But if the I/O does
not return any more data, we wait the TIMEOUT period instead, even
though we already triggered a bug report.
The wait on I/O should honor the STOP_AFTER_FAILURE time if a bug has
been found.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Using the build KCONFIG_ALLCONFIG environment variable to force
the min config may not always work properly. Since ktest is
written in perl, it is trivial to read and replace the current
config with the configs specified by the min config.
Now the min config (and add configs) are read by perl and before
a make is done, these configs in the .config file are replaced
by the version in the min config.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Searching through several tests, it gets confusing which test result
is for which test. By adding the TEST_NAME option, the user can tell
which test result belongs to which test.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Currently the config_bisect compares the min config with the
CONFIG_BISECT config. There may be another config that we know
is good that we want to ignore configs on. By passing in this
config it will ignore the options that are set in the good config.
Note: This only ignores the config, it does not (yet) handle
options that are different between the two configs. If the good
config has "SLAB" set and the bad config has "SLUB" it will not
find the bug if the bug had to do with changing these two options.
This is something that I intend to implement in the future.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When a triple fault happens in a test, no call trace nor panic
is displayed. Instead, the system reboots to the good kernel.
Since the good kernel may display a boot prompt that matches the
success string, ktest may think that the test succeeded, when it
did not.
Detecting triple faults is tricky because it is hard to generalize
what a reboot looks like. The best that we can come up with for now
is to examine the Linux banner. If we detect that the Linux banner
matches the kernel we want to test, then look to see if we hit another
Linux banner showing that a different kernel has booted. This can be
assumed to be a triple fault.
We can't just check for two Linux banners because things like
early printk may cause the Linux banner to be displayed twice. Checking
for different kernel versions should be the safe bet.
In case this for some reason detects a false triple fault, a new ktest
config option is also created:
DETECT_TRIPLE_FAULT
This can be set to 0 to disable this checking.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Different timeouts can cause the ktest monitor to break out of the
loop. It becomes annoying that one does not know why
it exited the monitor loop. Display the reason why
the loop was exited.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Ignoring the values that are unset in the minconfig when deciding
what to test in the config_bisect can prevent the problem
config from being tested as well.
Just do not test the configs that are set in the minconfig.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The command that is called to reboot the kernel may fail,
but the return code is not passed back to the ktest.pl script.
This is because a ';' is used between the two commands, so
a failure of one of them can be masked and the compound
command still reports success. Using '&&' between the two commands fixes
this.
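As a generic shell illustration (the actual commands ktest joins are not
shown here, so these are placeholders):

	ssh root@target "switch_kernel ; reboot"    # a failure of the first command is masked
	ssh root@target "switch_kernel && reboot"   # a failure is propagated back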
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Because in perl the array size returned by $#arr is the last
index and not the actual size of the array, we end the config
bisect early, thinking there is only one config left when there
are in fact two. Thus the result has a 50% chance of picking
the correct config that caused the problem.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
There are cases where one ktest option may be used within another
ktest option. Allow them to be reused just like config variables,
but they are evaluated at test time, not at config processing time.
Thus having something like:
MAKE_CMD = make ARCH=${ARCH}
TEST_START
ARCH = powerpc
TEST_START
ARCH = arm
Will have the arch defined for each test iteration.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
I found that I constantly reuse information for each test case.
It would be nice to just define a variable to reuse.
For example I may have:
TEST_START
[...]
TEST = ssh root@mybox /path/to/my/script
TEST_START
[...]
TEST = ssh root@mybox /path/to/my/script
[etc]
The issue is, I may want to change that script or one of the other
fields. Then I need to update each line individually.
With the addition of config variables (variables only used during parsing
the config) we can simplify the config files. These variables can
also be defined multiple times and each time the new value will
overwrite the old value.
The convention to use a config variable over a ktest option is to use :=
instead of =.
Now we could do:
USER := root
TARGET := mybox
TEST_SCRIPT := /path/to/my/script
TEST_CASE := ${USER}@${TARGET} ${TEST_SCRIPT}
TEST_START
[...]
TEST = ${TEST_CASE}
TEST_START
[...]
TEST = ${TEST_CASE}
[etc]
Now we just need to update the variables at the top.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The patches being checked may not leave the kernel in a state
that lets the next run copy the new kernel to the
machine. Reboot to a known good kernel before continuing to the
next kernel to test.
Added option PATCHCHECK_SLEEP_TIME for the max time to sleep between
patchcheck reboots.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Reboot after each bisect run regardless of whether the bisect passed
or failed. The test may just be to boot the kernel, and that kernel
may not have a way to copy the next kernel to it. Reboot to a known
good kernel after each bisect run.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
If the test failed due to a boot timeout, print a message saying
so. Otherwise the user will be confused as to why their test just failed.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The command to run post install (for those that want initrds) was
broken. Instead of substituting the $KERNEL_VERSION
variable, it was replacing the entire command with nothing.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-ktest:
ktest: Add STOP_TEST_AFTER to stop the test after a period of time
ktest: Monitor kernel while running of user tests
ktest: Fix bug where the test would not end after failure
ktest: Add BISECT_FILES to run git bisect on paths
ktest: Add BISECT_SKIP
ktest: Add manual bisect
ktest: Handle kernels before make oldnoconfig
ktest: Start failure timeout on panic too
ktest: Print logfile name on failure
Currently, if a test causes constant output but never reaches a
boot prompt, or crashes, the test will never stop. Add STOP_TEST_AFTER
to create a variable that will stop (and fail) the test after it has run
for this amount of time. The default is 10 minutes. Setting this
variable to -1 will disable it.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Record the console output of tests to both the console and the log.
Also, record the bug reports after the test has completed.
Currently, if a kernel bug happens while running the userland
test, the test stops and will not record the kernel bug. This
makes it difficult to solve what happened.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The config STOP_AFTER_FAILURE is the number of seconds to continue
the test when a failure is detected. This lets the monitor record
more data to the logs and console that may be helpful in solving
the bug that was found.
But the test had a bug: if the failure caused multiple
"Call Trace" stack dumps, the start time compared against
STOP_AFTER_FAILURE was constantly reset. Only update the start
time at the first "Call Trace" instance.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Add the config option BISECT_FILES that allows the user to
specify what path in the kernel to run the git bisect on.
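For example, to limit the bisect to a single subtree (the path here is
only an illustration):
TEST_START
TEST_TYPE = bisect
[...]
BISECT_FILES = drivers/net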
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
If, during a git bisect, ktest fails on something other than
what it is testing (for example, BISECT_TYPE is test but it fails on
build), and BISECT_SKIP is set, then it will do a "git bisect skip"
instead of just failing the bisect and making the user find a good
commit to test.
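Enabling it is a one-line addition to the bisect section, e.g.:
BISECT_SKIP = 1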
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
For both git bisect and config bisect, if BISECT_MANUAL is set to 1,
then bisect will stop between iterations and ask the user for the
result. The actual result is ignored. This makes it possible to
use ktest.pl for bisecting configs and git and let the user examine
the results themselves and enter their own results.
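As described, turning it on looks like:
BISECT_MANUAL = 1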
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When bisecting, one may come across a kernel that does not have
make oldnoconfig. In this case, we need to pipe the command "yes"
into make oldconfig. This selects the default for each option instead
of 'n', but it works as a workaround.
Note, "yes n" will not work because a config may have a value for
which "n" is not acceptable.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Currently we just look for a Call Trace to start the timeout
for when to reboot the box. But if the kernel panics and does not
show a Call Trace, the test will not reboot the box after
the specified timeout.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
If the test fails and a logfile was specified, print its name to
let the user know where to look for more information on the
failure.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
OK, the copyright allows you to write a copy; still, I think the
lawyers prefer the correct spelling.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
LKML-Reference: <1295899921-11333-1-git-send-email-u.kleine-koenig@pengutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
In keeping with the notion that all tools should be simple for
all to use, I've changed ktest.pl to ask for mandatory options
instead of just failing. It will append the options the user types
in onto the config file (creating the file if needed).
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
During the config_bisect, in case of failure, it is nice to have
the last good and bad .configs that were used. This would let
us restart the config_bisect from those configs.
Copy the last good config into the output dir as config_good,
and the last bad config as config_bad.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The run_ssh function handles the $SSH_COMMAND variable, which was
not being used by the run_command call in the reboot_to function.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Added the options STOP_AFTER_SUCCESS and STOP_AFTER_FAILURE to
allow the user to give a time (in seconds) to stop the monitor
after a stack trace or login has been detected. Sometimes the
kernel constantly prints out to the console and this may cause
the test to run indefinitely.
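Both take seconds, so a configuration might read:
STOP_AFTER_SUCCESS = 10
STOP_AFTER_FAILURE = 60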
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When we store failures, we create a directory that has the build_type
in it. For useconfig, the build_type also contains the pathname of the
config file it uses, which unfortunately gets turned into its own
directory tree on failure. Parse off the directory portion when creating
the directory to store the failures.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
By using the "use_config" for minconfig and addconfig we risk
trying to copy itself to itself, which will cause an unexpected failure.
Use a different name instead.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Add a compare script that makes sure that all the options in
sample.conf are used in ktest.pl, and all the options in
ktest.pl are described in sample.conf.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Added the ability to do a config_bisect. It starts with a bad
config and does the following loop:
  Enable half the configs.
  If none of the configs to check are enabled
  (caused by missing dependencies), enable the other half.
  Run the test.
  If the test passes, remove the configs from the check,
  but keep them enabled for further tests (to satisfy
  dependencies).
  Else
  remove any config that was not enabled, as we have found
  a new config that can cause a failure.
Loop until only one config is left.
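A test section for it could look something like the following; the
option names here are assumptions modeled on the other test types, and
the path is only an illustration:
TEST_START
TEST_TYPE = config_bisect
CONFIG_BISECT = /path/to/the/bad/config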
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Updated to version 0.2.
It now has SSH_EXEC options (see the sketch below).
Also added some cleanups for keeping track of success and
reading the config file.
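A sketch of what an SSH_EXEC setting might look like; apart from
SSH_COMMAND, the variable names here are assumptions:
SSH_EXEC = ssh $SSH_USER@$MACHINE $SSH_COMMAND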
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Have an easy way to parse the log file for success or failure:
KTEST RESULT: ...
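That makes checking a run as simple as grepping the log, e.g. (the log
file name here is just an illustration):
grep 'KTEST RESULT' ktest.log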
Suggested-by: Tim Bird <tim.bird@am.sony.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Allow a test case in the config file to undefine a default
value by specifying the option and equal sign but not assigning
it a value:
OPTION =
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Running the command "yes ''" through the make oldconfig may enable
things we do not want enabled. If something is default enabled, the
yes command with '' as an argument will enable it.
Use oldnoconfig, which runs everything as if 'no' was used.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Change the config to use TEST_START, where the options after a
TEST_START automatically get the [] index as they are read, and the
index does not need to appear in the config file:
TEST_START
MIN_CONFIG = myconfig
is the same as
MIN_CONFIG[1] = myconfig
The benefit is that you no longer need to keep track of test numbers
for the tests.
Also process the commit ids that are passed to the options
to get the actual SHA1, so they are no longer relative to the branch.
That is, saying HEAD will get the current SHA1 and that will
be used, which works even if another branch is checked out later.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Added the option POWEROFF_AFTER_HALT to handle boxes that do not
really shut off after a halt is called.
Added POWERCYCLE_AFTER_REBOOT to force a power cycle for boxes that
don't reboot but get stuck during the reboot.
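A sketch of how these could be set; treating the values as delays in
seconds is an assumption here:
POWEROFF_AFTER_HALT = 20
POWERCYCLE_AFTER_REBOOT = 5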
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Add a POST_INSTALL option that runs after the build and install
but before rebooting to the test kernel. This allows the user to
run a script that will install an initrd (or anything else that may
be special) before booting.
An environment variable KERNEL_VERSION is set.
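For example, with a hypothetical initrd script:
POST_INSTALL = /usr/local/bin/build-initrd.sh $KERNEL_VERSION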
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Added sample.conf as a nice document to show new users.
Use a %default hash to separate out the options that are default
and allow us to complain about options being set twice.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
It is much better to keep the monitor running throughout a
test than to constantly start and stop it. Some console readers
will show everything that has happened before when opening the
console, and opening it several times causes the old content to
be read multiple times in a single test.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Add option to continue after a test fails.
Add option to reset the log at start of running ktest.
Update default timeout to 2 minutes.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Added the ability to do a reverse bisect (see the sketch below).
Better logging of running commands.
Added the copyright statement.
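A reverse bisect would then be requested with a single option (the
option name here is an assumption):
BISECT_REVERSE = 1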
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Added patchcheck functionality. It will check out a given SHA1
and test that commit and all commits up to another given SHA1.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Added REBOOT_ON_ERROR to reboot the box on error.
Added BUILD_OPTIONS to add options to the make build (like -j40).
Added "useconfig:<config>".
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Added a dodie function to have a bit more control over die calls.
Added BUILD_NOCLEAN to not run make mrproper or remove .config.
Added POWEROFF_ON_{SUCCESS,ERROR} to turn off the power after tests.
Skip backtrace calls that were done by the backtrace tests.
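A sketch, assuming these are simple 1/0 switches:
BUILD_NOCLEAN = 1
POWEROFF_ON_SUCCESS = 1
POWEROFF_ON_ERROR = 1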
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Originally named autotest.pl, but renamed to ktest.pl now because
the autotest name is used by other projects.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>