commit e4906eff9e
The entries counter in the cpu buffer is not atomic. It can be updated by other interrupts or from another CPU (readers).

But making entries an "atomic_t" forces an atomic operation that can hurt performance. Instead we convert it to a local_t, which increments the counter with a local CPU atomic operation (if the arch supports it).

Instead of fighting with readers and overwrites that decrement the counter, I added a "read" counter. Every time a reader reads an entry, it is incremented.

We already have an overrun counter; with that, the entries counter, and the read counter, we can calculate the total number of entries in the buffer with:

  (entries - overrun) - read

As long as the total number of entries in the ring buffer fits in a machine word, this will work. But since the entries counter was previously a long, this is no different from what we had before.

Thanks to Andrew Morton for pointing out in the first version that atomic_t does not replace unsigned long. I switched to atomic_long_t even though it is signed. A negative count is most likely a bug.

[ Impact: keep accurate count of cpu buffer entries ]

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
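To make the arithmetic concrete, here is a minimal userspace C sketch of the accounting scheme. This is not the kernel code: plain unsigned long counters stand in for local_t, and all names (cpu_buffer_model, model_write, and so on) are hypothetical; the real implementation lives in kernel/trace/ring_buffer.c and uses local_inc() from <asm/local.h>. It shows why readers bump their own counter instead of decrementing entries, and how (entries - overrun) - read recovers the live count:

```c
/* Userspace model of the ring-buffer entry accounting described above.
 * All identifiers here are hypothetical stand-ins for the kernel code. */
#include <stdio.h>

struct cpu_buffer_model {
	unsigned long entries;	/* incremented on every write       */
	unsigned long overrun;	/* entries lost to overwrite        */
	unsigned long read;	/* entries consumed by readers      */
};

/* Writer path: a single local increment, no cross-CPU atomic needed
 * (the kernel uses local_inc() here). */
static void model_write(struct cpu_buffer_model *b)
{
	b->entries++;
}

/* Overwrite path: an old entry is dropped to make room; the new
 * entry still counts as written. */
static void model_overwrite(struct cpu_buffer_model *b)
{
	b->overrun++;
	b->entries++;
}

/* Reader path: readers bump their own counter rather than
 * decrementing 'entries', so writers never contend with them. */
static void model_read(struct cpu_buffer_model *b)
{
	b->read++;
}

/* Live entries in the buffer, per the commit message:
 * (entries - overrun) - read. */
static unsigned long model_count(const struct cpu_buffer_model *b)
{
	return (b->entries - b->overrun) - b->read;
}

int main(void)
{
	struct cpu_buffer_model b = { 0, 0, 0 };

	for (int i = 0; i < 5; i++)
		model_write(&b);	/* 5 writes                        */
	model_overwrite(&b);		/* 1 overwritten, 1 new write      */
	model_read(&b);			/* 1 consumed                      */

	/* 6 written - 1 overrun - 1 read = 4 entries left */
	printf("entries in buffer: %lu\n", model_count(&b));
	return 0;
}
```

Because each counter only ever increases, any unsigned wraparound cancels out in the subtraction, which is why the scheme only needs the true count (not the counters themselves) to fit in a word.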
Files in kernel/trace/ at this commit:
blktrace.c
ftrace.c
Kconfig
kmemtrace.c
Makefile
ring_buffer.c
trace_boot.c
trace_branch.c
trace_clock.c
trace_event_profile.c
trace_event_types.h
trace_events_filter.c
trace_events.c
trace_export.c
trace_functions_graph.c
trace_functions.c
trace_hw_branches.c
trace_irqsoff.c
trace_mmiotrace.c
trace_nop.c
trace_output.c
trace_output.h
trace_power.c
trace_printk.c
trace_sched_switch.c
trace_sched_wakeup.c
trace_selftest_dynamic.c
trace_selftest.c
trace_stack.c
trace_stat.c
trace_stat.h
trace_syscalls.c
trace_sysprof.c
trace_workqueue.c
trace.c
trace.h