io_uring/zcrx: fix user_ref race between scrub and refill paths

The io_zcrx_put_niov_uref() function uses a non-atomic
check-then-decrement pattern (atomic_read() followed by a separate
atomic_dec()) to manipulate user_refs. This is serialized against
other callers by rq_lock, but io_zcrx_scrub() modifies the same
counter with atomic_xchg() WITHOUT holding rq_lock.
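
For context, the two competing paths before this patch look roughly
as follows. The put side is the exact code removed by the diff below;
the scrub side is a simplified sketch based on the description above,
not the verbatim upstream code:

    /* Refill path, called under rq_lock. The zero check and the
     * decrement are two independent atomic operations, not one. */
    static bool io_zcrx_put_niov_uref(struct net_iov *niov)
    {
            atomic_t *uref = io_get_user_counter(niov);

            if (unlikely(!atomic_read(uref)))
                    return false;
            atomic_dec(uref);
            return true;
    }

    /* Scrub path (sketch): reclaims all user references in one shot,
     * without taking rq_lock. */
    if (atomic_xchg(io_get_user_counter(niov), 0))
            io_zcrx_return_niov_freelist(niov);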

On SMP systems, the following race exists:

CPU0 (refill, holds rq_lock)          CPU1 (scrub, no rq_lock)
----------------------------          ------------------------
put_niov_uref:
  atomic_read(uref) -> 1
  // window opens
                                      atomic_xchg(uref, 0) -> 1
                                      return_niov_freelist(niov) [PUSH #1]
  // window closes
  atomic_dec(uref) -> wraps to -1
  returns true
return_niov(niov)
  return_niov_freelist(niov) [PUSH #2: DOUBLE-FREE]

The same niov is pushed to the freelist twice, causing free_count to
exceed nr_iovs. Subsequent freelist pushes then perform an out-of-bounds
write (a u32 value) past the kvmalloc'd freelist array into the adjacent
slab object.
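
To see why the double push corrupts memory, consider a minimal sketch
of the freelist push; the field and helper names here are assumptions
that only approximate io_uring/zcrx.c:

    static void io_zcrx_return_niov_freelist(struct net_iov *niov)
    {
            struct io_zcrx_area *area = io_zcrx_iov_to_area(niov);

            spin_lock_bh(&area->freelist_lock);
            /* freelist is a kvmalloc'd array with nr_iovs u32 slots.
             * Nothing here rechecks free_count against nr_iovs, so a
             * double push stores one u32 past the end of the array,
             * into the adjacent slab object. */
            area->freelist[area->free_count++] = net_iov_idx(niov);
            spin_unlock_bh(&area->freelist_lock);
    }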

Fix this by replacing the non-atomic check-then-decrement in
io_zcrx_put_niov_uref() with an atomic_try_cmpxchg() loop that tests
and decrements user_refs atomically. This makes the operation safe
against a concurrent atomic_xchg() from scrub without requiring scrub
to acquire rq_lock.
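
The same test-and-decrement pattern can be written in portable C11
atomics; a standalone illustration (not kernel code):

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Drop one reference unless the counter is already zero.
     * atomic_compare_exchange_weak() reloads 'old' on failure, so the
     * loop re-tests whatever value raced in (e.g. the 0 installed by
     * a concurrent exchange). The check and the decrement succeed or
     * fail together as a single atomic step. */
    static bool put_ref(atomic_int *uref)
    {
            int old = atomic_load(uref);

            do {
                    if (old == 0)
                            return false;
            } while (!atomic_compare_exchange_weak(uref, &old, old - 1));

            return true;
    }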

Fixes: 34a3e60821ab ("io_uring/zcrx: implement zerocopy receive pp memory provider")
Cc: stable@vger.kernel.org
Signed-off-by: Kai Aizen <kai@snailsploit.com>
[pavel: removed a warning and a comment]
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>


io_uring/zcrx.c | +7 -3
--- a/io_uring/zcrx.c
+++ b/io_uring/zcrx.c
@@ -349,9 +349,13 @@
 static bool io_zcrx_put_niov_uref(struct net_iov *niov)
 {
 	atomic_t *uref = io_get_user_counter(niov);
+	int old;
 
-	if (unlikely(!atomic_read(uref)))
-		return false;
-	atomic_dec(uref);
+	old = atomic_read(uref);
+	do {
+		if (unlikely(old == 0))
+			return false;
+	} while (!atomic_try_cmpxchg(uref, &old, old - 1));
+
 	return true;
 }