Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

kasan: make kasan_record_aux_stack_noalloc() the default behaviour

kasan_record_aux_stack_noalloc() was introduced to record a stack trace
without allocating memory in the process. It has been added to callers
which were invoked while a raw_spinlock_t was held. More and more callers
were identified and changed over time. Given that these functions already try
their best to set things up locklessly, would it not be simpler to make this
behaviour the default? The only downside of having kasan_record_aux_stack()
not allocate any memory is that we end up without a stacktrace if stackdepot
runs out of memory and the same stacktrace was not recorded before. To quote
Marco Elver from
https://lore.kernel.org/all/CANpmjNPmQYJ7pv1N3cuU8cP18u7PP_uoZD8YxwZd4jtbof9nVQ@mail.gmail.com/

| I'd be in favor, it simplifies things. And stack depot should be
| able to replenish its pool sufficiently in the "non-aux" cases
| i.e. regular allocations. Worst case we fail to record some
| aux stacks, but I think that's only really bad if there's a bug
| around one of these allocations. In general the probabilities
| of this being a regression are extremely small [...]

Make the kasan_record_aux_stack_noalloc() behaviour the default for
kasan_record_aux_stack() and remove the separate _noalloc variant.

[bigeasy@linutronix.de: dressed the diff as patch]
Link: https://lkml.kernel.org/r/20241122155451.Mb2pmeyJ@linutronix.de
Fixes: 7cb3007ce2da ("kasan: generic: introduce kasan_record_aux_stack_noalloc()")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reported-by: syzbot+39f85d612b7c20d8db48@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/67275485.050a0220.3c8d68.0a37.GAE@google.com
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Marco Elver <elver@google.com>
Reviewed-by: Waiman Long <longman@redhat.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Ben Segall <bsegall@google.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: <kasan-dev@googlegroups.com>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Neeraj Upadhyay <neeraj.upadhyay@kernel.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: syzkaller-bugs@googlegroups.com
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zqiang <qiang.zhang1211@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Peter Zijlstra, committed by Andrew Morton
d40797d6 773fc6ab

+14 -37
-2
include/linux/kasan.h
@@ -491,7 +491,6 @@
 void kasan_cache_shrink(struct kmem_cache *cache);
 void kasan_cache_shutdown(struct kmem_cache *cache);
 void kasan_record_aux_stack(void *ptr);
-void kasan_record_aux_stack_noalloc(void *ptr);
 
 #else /* CONFIG_KASAN_GENERIC */
 
@@ -508,7 +509,6 @@
 static inline void kasan_cache_shrink(struct kmem_cache *cache) {}
 static inline void kasan_cache_shutdown(struct kmem_cache *cache) {}
 static inline void kasan_record_aux_stack(void *ptr) {}
-static inline void kasan_record_aux_stack_noalloc(void *ptr) {}
 
 #endif /* CONFIG_KASAN_GENERIC */
-3
include/linux/task_work.h
@@ -19,9 +19,6 @@
 	TWA_SIGNAL,
 	TWA_SIGNAL_NO_IPI,
 	TWA_NMI_CURRENT,
-
-	TWA_FLAGS = 0xff00,
-	TWAF_NO_ALLOC = 0x0100,
 };
 
 static inline bool task_work_pending(struct task_struct *task)
+1 -1
kernel/irq_work.c
@@ -147,7 +147,7 @@
 	if (!irq_work_claim(work))
 		return false;
 
-	kasan_record_aux_stack_noalloc(work);
+	kasan_record_aux_stack(work);
 
 	preempt_disable();
 	if (cpu != smp_processor_id()) {
+1 -1
kernel/rcu/tiny.c
@@ -250,7 +250,7 @@
 void kvfree_call_rcu(struct rcu_head *head, void *ptr)
 {
 	if (head)
-		kasan_record_aux_stack_noalloc(ptr);
+		kasan_record_aux_stack(ptr);
 
 	__kvfree_call_rcu(head, ptr);
 }
+2 -2
kernel/rcu/tree.c
@@ -3083,7 +3083,7 @@
 	}
 	head->func = func;
 	head->next = NULL;
-	kasan_record_aux_stack_noalloc(head);
+	kasan_record_aux_stack(head);
 	local_irq_save(flags);
 	rdp = this_cpu_ptr(&rcu_data);
 	lazy = lazy_in && !rcu_async_should_hurry();
@@ -3817,7 +3817,7 @@
 		return;
 	}
 
-	kasan_record_aux_stack_noalloc(ptr);
+	kasan_record_aux_stack(ptr);
 	success = add_ptr_to_bulk_krc_lock(&krcp, &flags, ptr, !head);
 	if (!success) {
 		run_page_cache_worker(krcp);
+1 -1
kernel/sched/core.c
@@ -10590,7 +10590,7 @@
 		return;
 
 	/* No page allocation under rq lock */
-	task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC);
+	task_work_add(curr, work, TWA_RESUME);
 }
 
 void sched_mm_cid_exit_signals(struct task_struct *t)
+1 -13
kernel/task_work.c
@@ -55,26 +55,14 @@
 			  enum task_work_notify_mode notify)
 {
 	struct callback_head *head;
-	int flags = notify & TWA_FLAGS;
 
-	notify &= ~TWA_FLAGS;
 	if (notify == TWA_NMI_CURRENT) {
 		if (WARN_ON_ONCE(task != current))
 			return -EINVAL;
 		if (!IS_ENABLED(CONFIG_IRQ_WORK))
 			return -EINVAL;
 	} else {
-		/*
-		 * Record the work call stack in order to print it in KASAN
-		 * reports.
-		 *
-		 * Note that stack allocation can fail if TWAF_NO_ALLOC flag
-		 * is set and new page is needed to expand the stack buffer.
-		 */
-		if (flags & TWAF_NO_ALLOC)
-			kasan_record_aux_stack_noalloc(work);
-		else
-			kasan_record_aux_stack(work);
+		kasan_record_aux_stack(work);
 	}
 
 	head = READ_ONCE(task->task_works);
+1 -1
kernel/workqueue.c
@@ -2180,7 +2180,7 @@
 	debug_work_activate(work);
 
 	/* record the work call stack in order to print it in KASAN reports */
-	kasan_record_aux_stack_noalloc(work);
+	kasan_record_aux_stack(work);
 
 	/* we own @work, set data and link */
 	set_work_pwq(work, pwq, extra_flags);
+6 -12
mm/kasan/generic.c
@@ -524,7 +524,11 @@
 		sizeof(struct kasan_free_meta) : 0);
 }
 
-static void __kasan_record_aux_stack(void *addr, depot_flags_t depot_flags)
+/*
+ * This function avoids dynamic memory allocations and thus can be called from
+ * contexts that do not allow allocating memory.
+ */
+void kasan_record_aux_stack(void *addr)
 {
 	struct slab *slab = kasan_addr_to_slab(addr);
 	struct kmem_cache *cache;
@@ -545,17 +541,7 @@
 		return;
 
 	alloc_meta->aux_stack[1] = alloc_meta->aux_stack[0];
-	alloc_meta->aux_stack[0] = kasan_save_stack(0, depot_flags);
-}
-
-void kasan_record_aux_stack(void *addr)
-{
-	return __kasan_record_aux_stack(addr, STACK_DEPOT_FLAG_CAN_ALLOC);
-}
-
-void kasan_record_aux_stack_noalloc(void *addr)
-{
-	return __kasan_record_aux_stack(addr, 0);
+	alloc_meta->aux_stack[0] = kasan_save_stack(0, 0);
 }
 
 void kasan_save_alloc_info(struct kmem_cache *cache, void *object, gfp_t flags)
+1 -1
mm/slub.c
@@ -2311,7 +2311,7 @@
 	 * We have to do this manually because the rcu_head is
 	 * not located inside the object.
 	 */
-	kasan_record_aux_stack_noalloc(x);
+	kasan_record_aux_stack(x);
 
 	delayed_free->object = x;
 	call_rcu(&delayed_free->head, slab_free_after_rcu_debug);