Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/slab: allow freeing kmalloc_nolock()'d objects using kfree[_rcu]()

Slab objects that are allocated with kmalloc_nolock() must be freed
with kfree_nolock(): because kmalloc_nolock() cannot spin on a lock
during allocation, only a subset of the alloc hooks are called.

This imposes a limitation: such objects cannot be freed with kfree_rcu(),
forcing users to work around it by calling call_rcu() with a callback
that frees the object via kfree_nolock().

Remove this limitation by teaching kmemleak to gracefully ignore cases
when kmemleak_free() or kmemleak_ignore() is called without a prior
kmemleak_alloc().

Unlike kmemleak, kfence already handles this case: by design, only a
subset of allocations are served from kfence.

With this change, kfree() and kfree_rcu() can be used to free objects
that are allocated using kmalloc_nolock().
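The before/after contrast can be sketched as follows (kernel-style
pseudocode; `struct my_obj` and `my_free_cb` are illustrative names,
not part of the patch):

```c
/*
 * Before: kfree_rcu() could not be used on kmalloc_nolock() memory,
 * so callers deferred the free by hand:
 */
static void my_free_cb(struct rcu_head *head)
{
	struct my_obj *obj = container_of(head, struct my_obj, rcu);

	kfree_nolock(obj);		/* previously the only legal free path */
}
	/* ... */
	call_rcu(&obj->rcu, my_free_cb);

/* After this change, the generic helpers work directly: */
	kfree_rcu(obj, rcu);		/* deferred free */
	/* or: */
	kfree(obj);			/* immediate free */
```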

Suggested-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Link: https://patch.msgid.link/20260210044642.139482-2-harry.yoo@oracle.com
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Authored by Harry Yoo and committed by Vlastimil Babka
c4d6d782 a1e244a9

+32 -15 total

include/linux/rcupdate.h  +2 -2
···
  * either fall back to use of call_rcu() or rearrange the structure to
  * position the rcu_head structure into the first 4096 bytes.
  *
- * The object to be freed can be allocated either by kmalloc() or
- * kmem_cache_alloc().
+ * The object to be freed can be allocated either by kmalloc(),
+ * kmalloc_nolock(), or kmem_cache_alloc().
  *
  * Note that the allowable offset might decrease in the future.
  *
mm/kmemleak.c  +10 -12
···
 	struct kmemleak_object *object;
 
 	object = find_and_remove_object(ptr, 0, objflags);
-	if (!object) {
-#ifdef DEBUG
-		kmemleak_warn("Freeing unknown object at 0x%08lx\n",
-			      ptr);
-#endif
+	if (!object)
+		/*
+		 * kmalloc_nolock() -> kfree() calls kmemleak_free()
+		 * without kmemleak_alloc().
+		 */
 		return;
-	}
 	__delete_object(object);
 }
···
 	struct kmemleak_object *object;
 
 	object = __find_and_get_object(ptr, 0, objflags);
-	if (!object) {
-		kmemleak_warn("Trying to color unknown object at 0x%08lx as %s\n",
-			      ptr,
-			      (color == KMEMLEAK_GREY) ? "Grey" :
-			      (color == KMEMLEAK_BLACK) ? "Black" : "Unknown");
+	if (!object)
+		/*
+		 * kmalloc_nolock() -> kfree_rcu() calls kmemleak_ignore()
+		 * without kmemleak_alloc().
+		 */
 		return;
-	}
 	paint_it(object, color);
 	put_object(object);
 }
mm/slub.c  +20 -1
···
  * Returns true if freeing of the object can proceed, false if its reuse
  * was delayed by CONFIG_SLUB_RCU_DEBUG or KASAN quarantine, or it was returned
  * to KFENCE.
+ *
+ * For objects allocated via kmalloc_nolock(), only a subset of alloc hooks
+ * are invoked, so some free hooks must handle asymmetric hook calls.
+ *
+ * Alloc hooks called for kmalloc_nolock():
+ * - kmsan_slab_alloc()
+ * - kasan_slab_alloc()
+ * - memcg_slab_post_alloc_hook()
+ * - alloc_tagging_slab_alloc_hook()
+ *
+ * Free hooks that must handle missing corresponding alloc hooks:
+ * - kmemleak_free_recursive()
+ * - kfence_free()
+ *
+ * Free hooks that have no alloc hook counterpart, and thus safe to call:
+ * - debug_check_no_locks_freed()
+ * - debug_check_no_obj_freed()
+ * - __kcsan_check_access()
  */
 static __always_inline
 bool slab_free_hook(struct kmem_cache *s, void *x, bool init,
···
 
 /**
  * kfree - free previously allocated memory
- * @object: pointer returned by kmalloc() or kmem_cache_alloc()
+ * @object: pointer returned by kmalloc(), kmalloc_nolock(), or kmem_cache_alloc()
  *
  * If @object is NULL, no operation is performed.
  */
···
 	page = virt_to_page(object);
 	slab = page_slab(page);
 	if (!slab) {
+		/* kmalloc_nolock() doesn't support large kmalloc */
 		free_large_kmalloc(page, (void *)object);
 		return;
 	}