Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

tracing: Add recursion protection in kernel stack trace recording

A bug was reported about an infinite recursion caused by tracing the rcu
events with the kernel stack trace trigger enabled. The stack trace code
called back into RCU which then called the stack trace again.

Expand the ftrace recursion protection to add a set of bits to protect
events from recursion. Each bit represents the context that the event is
in (normal, softirq, interrupt and NMI).

Have the stack trace code use the interrupt context to protect against
recursion.

Note: the bug revealed an issue in both the RCU code and the tracing
stack trace code. This change only handles the tracing stack trace side
of the bug; the RCU fix will be handled separately.

Link: https://lore.kernel.org/all/20260102122807.7025fc87@gandalf.local.home/

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Link: https://patch.msgid.link/20260105203141.515cd49f@gandalf.local.home
Reported-by: Yao Kai <yaokai34@huawei.com>
Tested-by: Yao Kai <yaokai34@huawei.com>
Fixes: 5f5fa7ea89dc ("rcu: Don't use negative nesting depth in __rcu_read_unlock()")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>

 include/linux/trace_recursion.h | 9 +++++++++
 kernel/trace/trace.c            | 6 ++++++
 2 files changed, 15 insertions(+)

include/linux/trace_recursion.h

···
 	TRACE_INTERNAL_SIRQ_BIT,
 	TRACE_INTERNAL_TRANSITION_BIT,
 
+	/* Internal event use recursion bits */
+	TRACE_INTERNAL_EVENT_BIT,
+	TRACE_INTERNAL_EVENT_NMI_BIT,
+	TRACE_INTERNAL_EVENT_IRQ_BIT,
+	TRACE_INTERNAL_EVENT_SIRQ_BIT,
+	TRACE_INTERNAL_EVENT_TRANSITION_BIT,
+
 	TRACE_BRANCH_BIT,
 	/*
 	 * Abuse of the trace_recursion.
···
 #define TRACE_FTRACE_START	TRACE_FTRACE_BIT
 
 #define TRACE_LIST_START	TRACE_INTERNAL_BIT
+
+#define TRACE_EVENT_START	TRACE_INTERNAL_EVENT_BIT
 
 #define TRACE_CONTEXT_MASK	((1 << (TRACE_LIST_START + TRACE_CONTEXT_BITS)) - 1)

kernel/trace/trace.c

···
 	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
 	int stackidx;
+	int bit;
+
+	bit = trace_test_and_set_recursion(_THIS_IP_, _RET_IP_, TRACE_EVENT_START);
+	if (bit < 0)
+		return;
 
 	/*
 	 * Add one, for this function and the call to save_stack_trace()
···
 	/* Again, don't let gcc optimize things here */
 	barrier();
 	__this_cpu_dec(ftrace_stack_reserve);
+	trace_clear_recursion(bit);
 }
 
 static inline void ftrace_trace_stack(struct trace_array *tr,