Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/slab: fix false lockdep warning in __kfree_rcu_sheaf()

kvfree_call_rcu() can be called while holding a raw_spinlock_t. Since
__kfree_rcu_sheaf() may acquire a spinlock_t (a sleeping lock on
PREEMPT_RT), which would violate lock nesting rules, kvfree_call_rcu()
bypasses the sheaves layer entirely on PREEMPT_RT.

However, lockdep still complains about acquiring spinlock_t while holding
raw_spinlock_t, even on !PREEMPT_RT where spinlock_t is a spinning lock.
This causes a false lockdep warning [1]:

=============================
[ BUG: Invalid wait context ]
6.19.0-rc6-next-20260120 #21508 Not tainted
-----------------------------
migration/1/23 is trying to lock:
ffff8afd01054e98 (&barn->lock){..-.}-{3:3}, at: barn_get_empty_sheaf+0x1d/0xb0
other info that might help us debug this:
context-{5:5}
3 locks held by migration/1/23:
#0: ffff8afd01fd89a8 (&p->pi_lock){-.-.}-{2:2}, at: __balance_push_cpu_stop+0x3f/0x200
#1: ffffffff9f15c5c8 (rcu_read_lock){....}-{1:3}, at: cpuset_cpus_allowed_fallback+0x27/0x250
#2: ffff8afd1f470be0 ((local_lock_t *)&pcs->lock){+.+.}-{3:3}, at: __kfree_rcu_sheaf+0x52/0x3d0
stack backtrace:
CPU: 1 UID: 0 PID: 23 Comm: migration/1 Not tainted 6.19.0-rc6-next-20260120 #21508 PREEMPTLAZY
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
Stopper: __balance_push_cpu_stop+0x0/0x200 <- balance_push+0x118/0x170
Call Trace:
<TASK>
__dump_stack+0x22/0x30
dump_stack_lvl+0x60/0x80
dump_stack+0x19/0x24
__lock_acquire+0xd3a/0x28e0
? __lock_acquire+0x5a9/0x28e0
? __lock_acquire+0x5a9/0x28e0
? barn_get_empty_sheaf+0x1d/0xb0
lock_acquire+0xc3/0x270
? barn_get_empty_sheaf+0x1d/0xb0
? __kfree_rcu_sheaf+0x52/0x3d0
_raw_spin_lock_irqsave+0x47/0x70
? barn_get_empty_sheaf+0x1d/0xb0
barn_get_empty_sheaf+0x1d/0xb0
? __kfree_rcu_sheaf+0x52/0x3d0
__kfree_rcu_sheaf+0x19f/0x3d0
kvfree_call_rcu+0xaf/0x390
set_cpus_allowed_force+0xc8/0xf0
[...]
</TASK>

This wasn't triggered until sheaves were enabled for all slab caches,
since kfree_rcu() was never called with a raw spinlock held for the
caches that already used sheaves (vma, maple node).

As suggested by Vlastimil Babka, fix this by using a lockdep map with
LD_WAIT_CONFIG wait type to tell lockdep that acquiring spinlock_t is valid
in this case, as those spinlocks won't be used on PREEMPT_RT.

Note that kfree_rcu_sheaf_map must be acquired using the _try() variant;
otherwise the acquisition of the lockdep map itself would trigger an
invalid wait context warning.

Reported-by: Paul E. McKenney <paulmck@kernel.org>
Closes: https://lore.kernel.org/linux-mm/c858b9af-2510-448b-9ab3-058f7b80dd42@paulmck-laptop [1]
Fixes: ec66e0d59952 ("slab: add sheaf support for batching kfree_rcu() operations")
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Authored by Harry Yoo, committed by Vlastimil Babka
f8b4cd2d b55b423e

mm/slub.c (+20)

···
 	free_empty_sheaf(s, sheaf);
 }
 
+/*
+ * kvfree_call_rcu() can be called while holding a raw_spinlock_t. Since
+ * __kfree_rcu_sheaf() may acquire a spinlock_t (sleeping lock on PREEMPT_RT),
+ * this would violate lock nesting rules. Therefore, kvfree_call_rcu() avoids
+ * this problem by bypassing the sheaves layer entirely on PREEMPT_RT.
+ *
+ * However, lockdep still complains that it is invalid to acquire spinlock_t
+ * while holding raw_spinlock_t, even on !PREEMPT_RT where spinlock_t is a
+ * spinning lock. Tell lockdep that acquiring spinlock_t is valid here
+ * by temporarily raising the wait-type to LD_WAIT_CONFIG.
+ */
+static DEFINE_WAIT_OVERRIDE_MAP(kfree_rcu_sheaf_map, LD_WAIT_CONFIG);
+
 bool __kfree_rcu_sheaf(struct kmem_cache *s, void *obj)
 {
 	struct slub_percpu_sheaves *pcs;
 	struct slab_sheaf *rcu_sheaf;
+
+	if (WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_RT)))
+		return false;
+
+	lock_map_acquire_try(&kfree_rcu_sheaf_map);
 
 	if (!local_trylock(&s->cpu_sheaves->lock))
 		goto fail;
···
 	local_unlock(&s->cpu_sheaves->lock);
 
 	stat(s, FREE_RCU_SHEAF);
+	lock_map_release(&kfree_rcu_sheaf_map);
 	return true;
 
 fail:
 	stat(s, FREE_RCU_SHEAF_FAIL);
+	lock_map_release(&kfree_rcu_sheaf_map);
 	return false;
 }