Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

bio: fix kmemleak false positives from percpu bio alloc cache

When a bio is allocated from the mempool with REQ_ALLOC_CACHE set and
later completed, bio_put() places it into the per-cpu bio_alloc_cache
via bio_put_percpu_cache() instead of freeing it back to the
mempool/slab. The slab allocation remains tracked by kmemleak, but the
only reference to the bio is through the percpu cache's free_list,
which kmemleak fails to trace through percpu memory. This causes
kmemleak to report the cached bios as unreferenced objects.

Use symmetric kmemleak_free()/kmemleak_alloc() calls to properly track
bios across percpu cache transitions:

- bio_put_percpu_cache: call kmemleak_free() when a bio enters the
cache, unregistering it from kmemleak tracking.

- bio_alloc_percpu_cache: call kmemleak_alloc() when a bio is taken
from the cache for reuse, re-registering it so that genuine leaks
of reused bios remain detectable.

- __bio_alloc_cache_prune: call kmemleak_alloc() before bio_free() so
that kmem_cache_free()'s internal kmemleak_free() has a matching
allocation to pair with.

Tested-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://patch.msgid.link/20260326144058.2392319-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>

Authored by Ming Lei and committed by Jens Axboe
c691e4b0 f91ffe89

+14 lines
block/bio.c
 #include <linux/highmem.h>
 #include <linux/blk-crypto.h>
 #include <linux/xarray.h>
+#include <linux/kmemleak.h>

 #include <trace/events/block.h>
 #include "blk.h"
···
 static inline unsigned int bs_bio_slab_size(struct bio_set *bs)
 {
 	return bs->front_pad + sizeof(struct bio) + bs->back_pad;
+}
+
+static inline void *bio_slab_addr(struct bio *bio)
+{
+	return (void *)bio - bio->bi_pool->front_pad;
 }

 static struct kmem_cache *bio_find_or_create_slab(struct bio_set *bs)
···
 	cache->nr--;
 	put_cpu();
 	bio->bi_pool = bs;
+
+	kmemleak_alloc(bio_slab_addr(bio),
+		       kmem_cache_size(bs->bio_slab), 1, GFP_NOIO);
 	return bio;
 }
···
 	while ((bio = cache->free_list) != NULL) {
 		cache->free_list = bio->bi_next;
 		cache->nr--;
+		kmemleak_alloc(bio_slab_addr(bio),
+			       kmem_cache_size(bio->bi_pool->bio_slab),
+			       1, GFP_KERNEL);
 		bio_free(bio);
 		if (++i == nr)
 			break;
···
 		bio->bi_bdev = NULL;
 		cache->free_list = bio;
 		cache->nr++;
+		kmemleak_free(bio_slab_addr(bio));
 	} else if (in_hardirq()) {
 		lockdep_assert_irqs_disabled();

 		bio->bi_next = cache->free_list_irq;
 		cache->free_list_irq = bio;
 		cache->nr_irq++;
+		kmemleak_free(bio_slab_addr(bio));
 	} else {
 		goto out_free;
 	}