linux_dsm_epyc7002/arch/x86/kernel/apic/hw_nmi.c

/*
* HW NMI watchdog support
*
* started by Don Zickus, Copyright (C) 2010 Red Hat, Inc.
*
* Arch specific calls to support NMI watchdog
*
* Bits copied from original nmi.c file
*
*/
#include <asm/apic.h>
#include <asm/nmi.h>
#include <linux/cpumask.h>
#include <linux/kdebug.h>
#include <linux/notifier.h>
#include <linux/kprobes.h>
#include <linux/nmi.h>
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/seq_buf.h>
#ifdef CONFIG_HARDLOCKUP_DETECTOR
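/*
 * Period of the hardlockup perf event, in CPU cycles: cpu_khz * 1000 is
 * roughly the number of cycles per second, scaled by the watchdog
 * threshold in seconds.
 */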
u64 hw_nmi_get_sample_period(int watchdog_thresh)
{
	return (u64)(cpu_khz) * 1000 * watchdog_thresh;
}
#endif
#ifdef arch_trigger_all_cpu_backtrace
/* For reliability, we're prepared to waste bits here. */
static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
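/* Snapshot of the targeted CPUs, kept around for dumping the buffers later. */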
static cpumask_t printtrace_mask;
#define NMI_BUF_SIZE 4096
struct nmi_seq_buf {
	unsigned char	buffer[NMI_BUF_SIZE];
	struct seq_buf	seq;
};
/* Safe printing in NMI context */
static DEFINE_PER_CPU(struct nmi_seq_buf, nmi_print_seq);
/* "in progress" flag of arch_trigger_all_cpu_backtrace */
static unsigned long backtrace_flag;
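
/* Print the inclusive range s->buffer[start..end] as a single printk(). */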
static void print_seq_line(struct nmi_seq_buf *s, int start, int end)
{
	const char *buf = s->buffer + start;

	printk("%.*s", (end - start) + 1, buf);
}
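
/*
 * NMI all online CPUs (optionally excluding the caller), wait up to ten
 * seconds for each handler to fill its per-CPU seq_buf, then dump the
 * collected backtraces to the console from non-NMI context.
 */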
void arch_trigger_all_cpu_backtrace(bool include_self)
{
	struct nmi_seq_buf *s;
	int len;
	int cpu;
	int i;
	int this_cpu = get_cpu();

	if (test_and_set_bit(0, &backtrace_flag)) {
		/*
		 * If there is already a trigger_all_cpu_backtrace() in progress
		 * (backtrace_flag == 1), don't output double cpu dump infos.
		 */
		put_cpu();
		return;
	}

	cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
	if (!include_self)
		cpumask_clear_cpu(this_cpu, to_cpumask(backtrace_mask));

	cpumask_copy(&printtrace_mask, to_cpumask(backtrace_mask));
	/*
	 * Set up per_cpu seq_buf buffers that the NMIs running on the other
	 * CPUs will write to.
	 */
	for_each_cpu(cpu, to_cpumask(backtrace_mask)) {
		s = &per_cpu(nmi_print_seq, cpu);
		seq_buf_init(&s->seq, s->buffer, NMI_BUF_SIZE);
	}

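	/* Kick the targeted CPUs; they respond in arch_trigger_all_cpu_backtrace_handler(). */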
	if (!cpumask_empty(to_cpumask(backtrace_mask))) {
		pr_info("sending NMI to %s CPUs:\n",
			(include_self ? "all" : "other"));
		apic->send_IPI_mask(to_cpumask(backtrace_mask), NMI_VECTOR);
	}

	/* Wait for up to 10 seconds for all CPUs to do the backtrace */
	for (i = 0; i < 10 * 1000; i++) {
		if (cpumask_empty(to_cpumask(backtrace_mask)))
			break;
		mdelay(1);
		/*
		 * Preemption is disabled and mdelay() busy-waits, so keep
		 * the softlockup detector from firing while we wait.
		 */
		touch_softlockup_watchdog();
	}

	/*
	 * Now that all the NMIs have triggered, we can dump out their
	 * back traces safely to the console.
	 */
	for_each_cpu(cpu, &printtrace_mask) {
		int last_i = 0;

		s = &per_cpu(nmi_print_seq, cpu);
		len = seq_buf_used(&s->seq);
		if (!len)
			continue;

		/* Print line by line. */
		for (i = 0; i < len; i++) {
			if (s->buffer[i] == '\n') {
				print_seq_line(s, last_i, i);
				last_i = i + 1;
			}
		}
		/* Check if there was a partial line. */
		if (last_i < len) {
			print_seq_line(s, last_i, len - 1);
			pr_cont("\n");
		}
	}

	/* Let the next backtrace request proceed. */
	clear_bit(0, &backtrace_flag);
	smp_mb__after_atomic();
	put_cpu();
}
/*
* It is not safe to call printk() directly from NMI handlers.
* It may be fine if the NMI detected a lock up and we have no choice
 * but to do so, but doing an NMI on all other CPUs to get a backtrace
* can be done with a sysrq-l. We don't want that to lock up, which
* can happen if the NMI interrupts a printk in progress.
*
* Instead, we redirect the vprintk() to this nmi_vprintk() that writes
* the content into a per cpu seq_buf buffer. Then when the NMIs are
* all done, we can safely dump the contents of the seq_buf to a printk()
* from a non NMI context.
*/
static int nmi_vprintk(const char *fmt, va_list args)
{
	struct nmi_seq_buf *s = this_cpu_ptr(&nmi_print_seq);
	unsigned int len = seq_buf_used(&s->seq);

	seq_buf_vprintf(&s->seq, fmt, args);
	return seq_buf_used(&s->seq) - len;
}
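
/*
 * Per-CPU NMI handler: redirect printk() into this CPU's seq_buf, dump the
 * registers and stack, then clear this CPU's bit in backtrace_mask so the
 * waiter in arch_trigger_all_cpu_backtrace() knows we are done.
 */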
static int
arch_trigger_all_cpu_backtrace_handler(unsigned int cmd, struct pt_regs *regs)
{
	int cpu;

	cpu = smp_processor_id();

	if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
		printk_func_t printk_func_save = this_cpu_read(printk_func);

		/* Replace printk to write into the NMI seq */
		this_cpu_write(printk_func, nmi_vprintk);
		printk(KERN_WARNING "NMI backtrace for cpu %d\n", cpu);
		show_regs(regs);
		this_cpu_write(printk_func, printk_func_save);

		cpumask_clear_cpu(cpu, to_cpumask(backtrace_mask));
		return NMI_HANDLED;
	}

	return NMI_DONE;
}
NOKPROBE_SYMBOL(arch_trigger_all_cpu_backtrace_handler);
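
/* Register the backtrace handler on the NMI_LOCAL list early during boot. */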
static int __init register_trigger_all_cpu_backtrace(void)
{
	register_nmi_handler(NMI_LOCAL, arch_trigger_all_cpu_backtrace_handler,
				0, "arch_bt");
	return 0;
}
early_initcall(register_trigger_all_cpu_backtrace);
#endif