Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

slab: replace cache_from_obj() with inline checks

Eric Dumazet has noticed that cache_from_obj() is not inlined with clang and
suggested splitting it into two functions, where the smaller inlined one
assumes the fastpath is !CONFIG_SLAB_FREELIST_HARDENED. However, most
distros enable that option these days, so the split would likely add a
function call to the object free fastpaths.

Instead, take a step back and consider that cache_from_obj() is a relic
from when memcgs created their own separate kmem_cache copies, as the
outdated comment in build_detached_freelist() reminds us.

Meanwhile, hardening/debugging code had reused cache_from_obj() to validate
that a freed object really belongs to a slab from the cache we think we
are freeing from.

In build_detached_freelist(), simply remove this validation, because it
neither handled the NULL result of a cache_from_obj() failure properly,
nor validated objects (against a NULL slab->slab_cache pointer) when called
via kfree_bulk(). If anyone is motivated to implement it properly, it should
be possible in a similar way to kmem_cache_free().

In kmem_cache_free(), do the hardening/debugging checks directly so they
are inlined by definition, and perform virt_to_slab(obj) just once. If the
checks fail, call a newly introduced warn_free_bad_obj() that issues the
warnings outside of the fastpath, and leak the object.

As an intentional change, also leak the object when slab->slab_cache differs
from the cache given to kmem_cache_free(). Previously we would leak only
when the object was not in a valid slab page or the slab->slab_cache
pointer was NULL, and would otherwise trust slab->slab_cache over the
kmem_cache_free() argument. But if the two differ, something has gone
wrong enough that it is best not to continue freeing.

As a result, the fastpath should be inlined in all configs, and the
warnings are moved out of line.

Reported-by: Eric Dumazet <edumazet@google.com>
Closes: https://lore.kernel.org/all/20260115130642.3419324-1-edumazet@google.com/
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Hao Li <hao.li@linux.dev>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

+33 -23
mm/slub.c
···
 }
 #endif
 
-static inline struct kmem_cache *virt_to_cache(const void *obj)
+static noinline void warn_free_bad_obj(struct kmem_cache *s, void *obj)
 {
+	struct kmem_cache *cachep;
 	struct slab *slab;
 
 	slab = virt_to_slab(obj);
-	if (WARN_ONCE(!slab, "%s: Object is not a Slab page!\n", __func__))
-		return NULL;
-	return slab->slab_cache;
-}
+	if (WARN_ONCE(!slab,
+		      "kmem_cache_free(%s, %p): object is not in a slab page\n",
+		      s->name, obj))
+		return;
 
-static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
-{
-	struct kmem_cache *cachep;
+	cachep = slab->slab_cache;
 
-	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
-	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
-		return s;
-
-	cachep = virt_to_cache(x);
-	if (WARN(cachep && cachep != s,
-		 "%s: Wrong slab cache. %s but object is from %s\n",
-		 __func__, s->name, cachep->name))
-		print_tracking(cachep, x);
-	return cachep;
+	if (WARN_ONCE(cachep != s,
+		      "kmem_cache_free(%s, %p): object belongs to different cache %s\n",
+		      s->name, obj, cachep ? cachep->name : "(NULL)")) {
+		if (cachep)
+			print_tracking(cachep, obj);
+		return;
+	}
 }
 
 /**
···
  */
 void kmem_cache_free(struct kmem_cache *s, void *x)
 {
-	s = cache_from_obj(s, x);
-	if (!s)
-		return;
+	struct slab *slab;
+
+	slab = virt_to_slab(x);
+
+	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) ||
+	    kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS)) {
+
+		/*
+		 * Intentionally leak the object in these cases, because it
+		 * would be too dangerous to continue.
+		 */
+		if (unlikely(!slab || (slab->slab_cache != s))) {
+			warn_free_bad_obj(s, x);
+			return;
+		}
+	}
+
 	trace_kmem_cache_free(_RET_IP_, x, s);
-	slab_free(s, virt_to_slab(x), x, _RET_IP_);
+	slab_free(s, slab, x, _RET_IP_);
 }
 EXPORT_SYMBOL(kmem_cache_free);
···
 		df->s = slab->slab_cache;
 	} else {
 		df->slab = slab;
-		df->s = cache_from_obj(s, object); /* Support for memcg */
+		df->s = s;
 	}
 
 	/* Start new detached freelist */