Mirror of https://github.com/AuxXxilium/linux_dsm_epyc7002.git
Synced 2024-11-24 02:40:52 +07:00
a33614d52e
commit aafe104aa9096827a429bc1358f8260ee565b7cc upstream.

It was reported that a fix to the ring buffer recursion detection would
cause a hung machine when performing suspend / resume testing. The
following backtrace was extracted from debugging that case:

Call Trace:
 trace_clock_global+0x91/0xa0
 __rb_reserve_next+0x237/0x460
 ring_buffer_lock_reserve+0x12a/0x3f0
 trace_buffer_lock_reserve+0x10/0x50
 __trace_graph_return+0x1f/0x80
 trace_graph_return+0xb7/0xf0
 ? trace_clock_global+0x91/0xa0
 ftrace_return_to_handler+0x8b/0xf0
 ? pv_hash+0xa0/0xa0
 return_to_handler+0x15/0x30
 ? ftrace_graph_caller+0xa0/0xa0
 ? trace_clock_global+0x91/0xa0
 ? __rb_reserve_next+0x237/0x460
 ? ring_buffer_lock_reserve+0x12a/0x3f0
 ? trace_event_buffer_lock_reserve+0x3c/0x120
 ? trace_event_buffer_reserve+0x6b/0xc0
 ? trace_event_raw_event_device_pm_callback_start+0x125/0x2d0
 ? dpm_run_callback+0x3b/0xc0
 ? pm_ops_is_empty+0x50/0x50
 ? platform_get_irq_byname_optional+0x90/0x90
 ? trace_device_pm_callback_start+0x82/0xd0
 ? dpm_run_callback+0x49/0xc0

With the following RIP:

RIP: 0010:native_queued_spin_lock_slowpath+0x69/0x200

Since the fix to the recursion detection allows a single level of
recursion while tracing, this led to trace_clock_global() taking a spin
lock and then trying to take it again:

ring_buffer_lock_reserve() {
  trace_clock_global() {
    arch_spin_lock() {
      queued_spin_lock_slowpath() {
        /* lock taken */
        (something else gets traced by function graph tracer)
          ring_buffer_lock_reserve() {
            trace_clock_global() {
              arch_spin_lock() {
                queued_spin_lock_slowpath() {
                  /* DEAD LOCK! */

Tracing should *never* block, as it can lead to strange lockups like the
one above. Restructure the trace_clock_global() code so that, instead of
taking a lock in order to update the recorded "prev_time", it simply uses
it: when two events on two different CPUs call this at the same time, it
really does not matter which one goes first. Use a trylock to grab the
lock for updating prev_time, and if it fails, simply try again the next
time. If the lock could not be taken, something else is already updating
it.

Link: https://lkml.kernel.org/r/20210430121758.650b6e8a@gandalf.local.home
Cc: stable@vger.kernel.org
Tested-by: Konstantin Kharlamov <hi-angel@yandex.ru>
Tested-by: Todd Brandt <todd.e.brandt@linux.intel.com>
Fixes:
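The pattern the commit describes can be illustrated with a small, self-contained userspace analogue. This is not the kernel's trace_clock.c; the names global_clock, prev_time, prev_time_lock and local_clock_ns, and the use of C11 atomics in place of arch_spin_trylock(), are illustrative assumptions. It shows the key idea: read the recorded prev_time without taking the lock, clamp the returned time so it never goes backwards, and only publish a new prev_time when a trylock succeeds, so the clock path can never block.

```c
/* Userspace sketch of a never-blocking global clock (assumed names). */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static atomic_uint_fast64_t prev_time;               /* last published timestamp */
static atomic_flag prev_time_lock = ATOMIC_FLAG_INIT; /* stands in for the arch spinlock */

static uint64_t local_clock_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

/* Never blocks: readers only use prev_time; a trylock winner updates it. */
static uint64_t global_clock(void)
{
	uint64_t prev = atomic_load_explicit(&prev_time, memory_order_acquire);
	uint64_t now  = local_clock_ns();

	/* Make sure the returned value never goes behind prev_time. */
	if ((int64_t)(now - prev) < 0)
		now = prev + 1;

	/* Try to publish the new time; if contended, skip the update. */
	if (!atomic_flag_test_and_set_explicit(&prev_time_lock,
					       memory_order_acquire)) {
		/* Re-read prev_time in case it was updated meanwhile. */
		prev = atomic_load_explicit(&prev_time, memory_order_relaxed);
		if ((int64_t)(now - prev) < 0)
			now = prev + 1;
		atomic_store_explicit(&prev_time, now, memory_order_release);
		atomic_flag_clear_explicit(&prev_time_lock, memory_order_release);
	}

	return now;
}

int main(void)
{
	printf("t1=%llu\n", (unsigned long long)global_clock());
	printf("t2=%llu\n", (unsigned long long)global_clock());
	return 0;
}
```

Because the update is only attempted with a trylock, a reentrant call that interrupts the critical section simply skips the update and returns a still-monotonic value instead of deadlocking.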
blktrace.c
bpf_trace.c
bpf_trace.h
fgraph.c
ftrace_internal.h
ftrace.c
Kconfig
kprobe_event_gen_test.c
Makefile
power-traces.c
preemptirq_delay_test.c
ring_buffer_benchmark.c
ring_buffer.c
rpm-traces.c
synth_event_gen_test.c
trace_benchmark.c
trace_benchmark.h
trace_boot.c
trace_branch.c
trace_clock.c
trace_dynevent.c
trace_dynevent.h
trace_entries.h
trace_event_perf.c
trace_events_filter_test.h
trace_events_filter.c
trace_events_hist.c
trace_events_inject.c
trace_events_synth.c
trace_events_trigger.c
trace_events.c
trace_export.c
trace_functions_graph.c
trace_functions.c
trace_hwlat.c
trace_irqsoff.c
trace_kdb.c
trace_kprobe_selftest.c
trace_kprobe_selftest.h
trace_kprobe.c
trace_mmiotrace.c
trace_nop.c
trace_output.c
trace_output.h
trace_preemptirq.c
trace_printk.c
trace_probe_tmpl.h
trace_probe.c
trace_probe.h
trace_sched_switch.c
trace_sched_wakeup.c
trace_selftest_dynamic.c
trace_selftest.c
trace_seq.c
trace_stack.c
trace_stat.c
trace_stat.h
trace_synth.h
trace_syscalls.c
trace_uprobe.c
trace.c
trace.h
tracing_map.c
tracing_map.h