Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

io_uring/kbuf: support min length left for incremental buffers

Incrementally consumed buffer rings are generally fully consumed, but
it's quite possible that the application has a minimum size it needs to
meet to avoid truncation. Currently that minimum limit is 1 byte, but
this should be a setting that is in the hands of the application. For
recvmsg multishot, a prime use case for incrementally consumed buffers,
the application may get spurious -EFAULT returned at the end of an
incrementally consumed buffer, as less space is available than the
headers need.

Grab a u32 field in struct io_uring_buf_reg, which the application can
use to inform the kernel of the minimum size that should be available
in an incrementally consumed buffer. If less than that is available,
the current buffer is fully processed and the next one will be picked.

Cc: stable@vger.kernel.org
Fixes: ae98dbf43d75 ("io_uring/kbuf: add support for incremental buffer consumption")
Link: https://github.com/axboe/liburing/issues/1433
Signed-off-by: Martin Michaelis <code@mgjm.de>
[axboe: write commit message, change io_buffer_list member name]
Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

Authored by Martin Michaelis, committed by Jens Axboe
7deba791 55ea9683

Showing 3 changed files with 16 additions and 2 deletions

include/uapi/linux/io_uring.h (+2 -1)
@@ -905,7 +905,8 @@
 	__u32	ring_entries;
 	__u16	bgid;
 	__u16	flags;
-	__u64	resv[3];
+	__u32	min_left;
+	__u32	resv[5];
 };

 /* argument for IORING_REGISTER_PBUF_STATUS */
io_uring/kbuf.c (+7 -1)
@@ -47,7 +47,7 @@
 	this_len = min_t(u32, len, buf_len);
 	buf_len -= this_len;
 	/* Stop looping for invalid buffer length of 0 */
-	if (buf_len || !this_len) {
+	if (buf_len > bl->min_left_sub_one || !this_len) {
 		WRITE_ONCE(buf->addr, READ_ONCE(buf->addr) + this_len);
 		WRITE_ONCE(buf->len, buf_len);
 		return false;
@@ -637,6 +637,10 @@
 	if (reg.ring_entries >= 65536)
 		return -EINVAL;

+	/* minimum left byte count is a property of incremental buffers */
+	if (!(reg.flags & IOU_PBUF_RING_INC) && reg.min_left)
+		return -EINVAL;
+
 	bl = io_buffer_get_list(ctx, reg.bgid);
 	if (bl) {
 		/* if mapped buffer ring OR classic exists, don't allow */
@@ -683,6 +687,8 @@
 	bl->mask = reg.ring_entries - 1;
 	bl->flags |= IOBL_BUF_RING;
 	bl->buf_ring = br;
+	if (reg.min_left)
+		bl->min_left_sub_one = reg.min_left - 1;
 	if (reg.flags & IOU_PBUF_RING_INC)
 		bl->flags |= IOBL_INC;
 	ret = io_buffer_add_list(ctx, bl, reg.bgid);
io_uring/kbuf.h (+7 -0)
@@ -32,6 +32,13 @@

 	__u16 flags;

+	/*
+	 * minimum required amount to be left to reuse an incrementally
+	 * consumed buffer. If less than this is left at consumption time,
+	 * buffer is done and head is incremented to the next buffer.
+	 */
+	__u32 min_left_sub_one;
+
 	struct io_mapped_region region;
 };
