Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

io_uring/kbuf: propagate BUF_MORE through early buffer commit path

When io_should_commit() returns true (e.g. for non-pollable files), buffer
commit happens at buffer selection time and sel->buf_list is set to
NULL. When __io_put_kbufs() generates CQE flags at completion time, it
calls __io_put_kbuf_ring() which finds a NULL buffer_list and hence
cannot determine whether the buffer was consumed or not. This means that
IORING_CQE_F_BUF_MORE is never set for non-pollable input with
incrementally consumed buffers.

Likewise for io_buffers_select(), which always commits upfront and
discards the return value of io_kbuf_commit().

Add REQ_F_BUF_MORE to store the result of io_kbuf_commit() during early
commit. Then __io_put_kbuf_ring() can check this flag and set
IORING_CQE_F_BUF_MORE accordingly.

Reported-by: Martin Michaelis <code@mgjm.de>
Cc: stable@vger.kernel.org
Fixes: ae98dbf43d75 ("io_uring/kbuf: add support for incremental buffer consumption")
Link: https://github.com/axboe/liburing/issues/1553
Signed-off-by: Jens Axboe <axboe@kernel.dk>

+10 -3
+3
include/linux/io_uring_types.h
@@ -541,6 +541,7 @@
 	REQ_F_BL_NO_RECYCLE_BIT,
 	REQ_F_BUFFERS_COMMIT_BIT,
 	REQ_F_BUF_NODE_BIT,
+	REQ_F_BUF_MORE_BIT,
 	REQ_F_HAS_METADATA_BIT,
 	REQ_F_IMPORT_BUFFER_BIT,
 	REQ_F_SQE_COPIED_BIT,
@@ -627,6 +626,8 @@
 	REQ_F_BUFFERS_COMMIT	= IO_REQ_FLAG(REQ_F_BUFFERS_COMMIT_BIT),
 	/* buf node is valid */
 	REQ_F_BUF_NODE		= IO_REQ_FLAG(REQ_F_BUF_NODE_BIT),
+	/* incremental buffer consumption, more space available */
+	REQ_F_BUF_MORE		= IO_REQ_FLAG(REQ_F_BUF_MORE_BIT),
 	/* request has read/write metadata assigned */
 	REQ_F_HAS_METADATA	= IO_REQ_FLAG(REQ_F_HAS_METADATA_BIT),
 	/*
+7 -3
io_uring/kbuf.c
@@ -216,7 +216,8 @@
 	sel.addr = u64_to_user_ptr(READ_ONCE(buf->addr));
 
 	if (io_should_commit(req, issue_flags)) {
-		io_kbuf_commit(req, sel.buf_list, *len, 1);
+		if (!io_kbuf_commit(req, sel.buf_list, *len, 1))
+			req->flags |= REQ_F_BUF_MORE;
 		sel.buf_list = NULL;
 	}
 	return sel;
@@ -350,7 +349,8 @@
 		 */
 		if (ret > 0) {
 			req->flags |= REQ_F_BUFFERS_COMMIT | REQ_F_BL_NO_RECYCLE;
-			io_kbuf_commit(req, sel->buf_list, arg->out_len, ret);
+			if (!io_kbuf_commit(req, sel->buf_list, arg->out_len, ret))
+				req->flags |= REQ_F_BUF_MORE;
 		}
 	} else {
 		ret = io_provided_buffers_select(req, &arg->out_len, sel->buf_list, arg->iovs);
@@ -397,8 +395,10 @@
 
 	if (bl)
 		ret = io_kbuf_commit(req, bl, len, nr);
+	if (ret && (req->flags & REQ_F_BUF_MORE))
+		ret = false;
 
-	req->flags &= ~REQ_F_BUFFER_RING;
+	req->flags &= ~(REQ_F_BUFFER_RING | REQ_F_BUF_MORE);
 	return ret;
 }
 