Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/slab: use unsigned long for orig_size to ensure proper metadata align

When both KASAN and SLAB_STORE_USER are enabled, accesses to
struct kasan_alloc_meta fields can be misaligned on 64-bit architectures.
This occurs because orig_size is currently defined as unsigned int,
which only guarantees 4-byte alignment. When struct kasan_alloc_meta is
placed after orig_size, it may end up at a 4-byte boundary rather than
the required 8-byte boundary on 64-bit systems.

Note that 64-bit architectures without HAVE_EFFICIENT_UNALIGNED_ACCESS
are assumed to require 64-bit accesses to be 64-bit aligned.
See HAVE_64BIT_ALIGNED_ACCESS and commit adab66b71abf ("Revert:
"ring-buffer: Remove HAVE_64BIT_ALIGNED_ACCESS"") for more details.

Change orig_size from unsigned int to unsigned long to ensure proper
alignment for any subsequent metadata. This should not waste additional
memory because kmalloc objects are already aligned to at least
ARCH_KMALLOC_MINALIGN.

Closes: https://lore.kernel.org/all/aPrLF0OUK651M4dk@hyeyoo
Suggested-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: stable@vger.kernel.org
Fixes: 6edf2576a6cc ("mm/slub: enable debugging memory wasting of kmalloc")
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Link: https://patch.msgid.link/20260113061845.159790-2-harry.yoo@oracle.com
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Authored by Harry Yoo, committed by Vlastimil Babka (b85f369b 9346ee2b)

mm/slub.c (+7 -7)
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -854,7 +854,7 @@
  * request size in the meta data area, for better debug and sanity check.
  */
 static inline void set_orig_size(struct kmem_cache *s,
-				 void *object, unsigned int orig_size)
+				 void *object, unsigned long orig_size)
 {
 	void *p = kasan_reset_tag(object);
 
@@ -864,10 +864,10 @@
 	p += get_info_end(s);
 	p += sizeof(struct track) * 2;
 
-	*(unsigned int *)p = orig_size;
+	*(unsigned long *)p = orig_size;
 }
 
-static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
+static inline unsigned long get_orig_size(struct kmem_cache *s, void *object)
 {
 	void *p = kasan_reset_tag(object);
 
@@ -880,7 +880,7 @@
 	p += get_info_end(s);
 	p += sizeof(struct track) * 2;
 
-	return *(unsigned int *)p;
+	return *(unsigned long *)p;
 }
 
 #ifdef CONFIG_SLUB_DEBUG
@@ -1195,7 +1195,7 @@
 	off += 2 * sizeof(struct track);
 
 	if (slub_debug_orig_size(s))
-		off += sizeof(unsigned int);
+		off += sizeof(unsigned long);
 
 	off += kasan_metadata_size(s, false);
 
@@ -1407,7 +1407,7 @@
 		off += 2 * sizeof(struct track);
 
 		if (s->flags & SLAB_KMALLOC)
-			off += sizeof(unsigned int);
+			off += sizeof(unsigned long);
 	}
 
 	off += kasan_metadata_size(s, false);
@@ -8040,7 +8040,7 @@
 
 		/* Save the original kmalloc request size */
 		if (flags & SLAB_KMALLOC)
-			size += sizeof(unsigned int);
+			size += sizeof(unsigned long);
 	}
 #endif