netfs: Fix unbuffered/DIO writes to dispatch subrequests in strict sequence

Fix netfslib such that, when it is making an unbuffered or DIO write, it
sends each subrequest strictly sequentially, waiting until the previous one
is 'committed' before sending the next, so that we don't have pieces landing
out of order and potentially leaving a hole if an error occurs (ENOSPC, for
example).
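
As a userspace analogue of that ordering rule (a minimal sketch using POSIX
pwrite(); the chunk size and names are invented, and this is not the netfslib
code), each "subrequest" is confirmed before the next is issued, so a
mid-stream failure cannot leave a hole behind committed data:

#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

#define SKETCH_CHUNK	(256 * 1024)	/* arbitrary stand-in subrequest size */

/* Write len bytes at pos as strictly sequential "subrequests": each one
 * must complete before the next is issued, so a failure such as ENOSPC
 * leaves the file contiguous up to the failure point.
 */
static ssize_t write_strictly_sequential(int fd, const char *buf,
					 size_t len, off_t pos)
{
	size_t done = 0;

	while (done < len) {
		size_t part = len - done > SKETCH_CHUNK ? SKETCH_CHUNK : len - done;
		ssize_t n = pwrite(fd, buf + done, part, pos + done);

		if (n < 0) {
			if (errno == EINTR)
				continue;	/* reissue the same subrequest */
			/* Everything before 'done' is committed; nothing
			 * beyond it was ever issued, so there is no hole.
			 */
			return done ? (ssize_t)done : -1;
		}
		done += n;
	}
	return (ssize_t)done;
}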

This is done by copying in just those bits of the subrequest issuing,
collecting and retrying machinery that are necessary to do one subrequest at
a time. Retrying, in particular, is simpler because, if the current
subrequest needs retrying, the source iterator can just be copied again and
the subrequest prepped and issued again, without needing to be concerned
about whether it needs merging with the previous or next subrequest in the
sequence.
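
A minimal sketch of that retry rule (the field names echo the patch, but
these are illustrative stand-in types, not the kernel structures): the
retried subrequest is simply re-derived from the request-level progress and
a fresh copy of the source iterator, so no merging logic is needed:

#include <stddef.h>

struct sketch_iter {
	const void	*base;		/* source data */
	size_t		offset;		/* how far it has been consumed */
};

struct sketch_subreq {
	long long	start;		/* file position of this subrequest */
	size_t		len;
	size_t		transferred;
	unsigned int	retry_count;
	struct sketch_iter io_iter;
};

/* With only one subrequest ever in flight, a retry just recopies the
 * source iterator and recomputes start/len from what the request as a
 * whole has transferred so far.
 */
static void sketch_prep_retry(struct sketch_subreq *subreq,
			      const struct sketch_iter *source,
			      long long req_start, size_t req_len,
			      size_t req_transferred)
{
	subreq->io_iter = *source;	/* copy the iterator again */
	subreq->start = req_start + req_transferred;
	subreq->len = req_len - req_transferred;
	subreq->transferred = 0;
	subreq->retry_count++;
}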

Note that the issuing loop waits for a subrequest to complete right after
issuing it, but this wait could be moved elsewhere, allowing preparatory
steps to be performed whilst the subrequest is in progress. In particular,
once content encryption is available in netfslib, that could be done whilst
waiting, as could cleanup of buffers that have already completed.
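
Sketched below is the shape that relaxation could take (the helpers are
hypothetical placeholders, not netfslib APIs): preparation of subrequest N+1
overlaps the in-flight subrequest N, while completions are still consumed
strictly in order:

struct sketch_sub { int id; };

static void sketch_issue(struct sketch_sub *s)   { (void)s; /* hand N to the transport */ }
static void sketch_prepare(struct sketch_sub *s) { (void)s; /* e.g. encrypt N+1's data */ }
static void sketch_wait(struct sketch_sub *s)    { (void)s; /* block until N completes */ }

static void sketch_pipelined_issue(struct sketch_sub *subs, int n)
{
	for (int i = 0; i < n; i++) {
		sketch_issue(&subs[i]);
		if (i + 1 < n)
			sketch_prepare(&subs[i + 1]);	/* overlaps in-flight I/O */
		sketch_wait(&subs[i]);			/* still strictly in order */
	}
}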

Fixes: 153a9961b551 ("netfs: Implement unbuffered/DIO write support")
Signed-off-by: David Howells <dhowells@redhat.com>
Link: https://patch.msgid.link/58526.1772112753@warthog.procyon.org.uk
Tested-by: Steve French <sfrench@samba.org>
Reviewed-by: Paulo Alcantara (Red Hat) <pc@manguebit.org>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Christian Brauner <brauner@kernel.org>

Authored by David Howells, committed by Christian Brauner (a0b4c7a4 28aaa9c3)

+221 -77 (5 files changed)

fs/netfs/direct_write.c (+212 -16)
···
 #include "internal.h"
 
 /*
+ * Perform the cleanup rituals after an unbuffered write is complete.
+ */
+static void netfs_unbuffered_write_done(struct netfs_io_request *wreq)
+{
+	struct netfs_inode *ictx = netfs_inode(wreq->inode);
+
+	_enter("R=%x", wreq->debug_id);
+
+	/* Okay, declare that all I/O is complete. */
+	trace_netfs_rreq(wreq, netfs_rreq_trace_write_done);
+
+	if (!wreq->error)
+		netfs_update_i_size(ictx, &ictx->inode, wreq->start, wreq->transferred);
+
+	if (wreq->origin == NETFS_DIO_WRITE &&
+	    wreq->mapping->nrpages) {
+		/* mmap may have got underfoot and we may now have folios
+		 * locally covering the region we just wrote. Attempt to
+		 * discard the folios, but leave in place any modified locally.
+		 * ->write_iter() is prevented from interfering by the DIO
+		 * counter.
+		 */
+		pgoff_t first = wreq->start >> PAGE_SHIFT;
+		pgoff_t last = (wreq->start + wreq->transferred - 1) >> PAGE_SHIFT;
+
+		invalidate_inode_pages2_range(wreq->mapping, first, last);
+	}
+
+	if (wreq->origin == NETFS_DIO_WRITE)
+		inode_dio_end(wreq->inode);
+
+	_debug("finished");
+	netfs_wake_rreq_flag(wreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
+	/* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */
+
+	if (wreq->iocb) {
+		size_t written = umin(wreq->transferred, wreq->len);
+
+		wreq->iocb->ki_pos += written;
+		if (wreq->iocb->ki_complete) {
+			trace_netfs_rreq(wreq, netfs_rreq_trace_ki_complete);
+			wreq->iocb->ki_complete(wreq->iocb, wreq->error ?: written);
+		}
+		wreq->iocb = VFS_PTR_POISON;
+	}
+
+	netfs_clear_subrequests(wreq);
+}
+
+/*
+ * Collect the subrequest results of unbuffered write subrequests.
+ */
+static void netfs_unbuffered_write_collect(struct netfs_io_request *wreq,
+					   struct netfs_io_stream *stream,
+					   struct netfs_io_subrequest *subreq)
+{
+	trace_netfs_collect_sreq(wreq, subreq);
+
+	spin_lock(&wreq->lock);
+	list_del_init(&subreq->rreq_link);
+	spin_unlock(&wreq->lock);
+
+	wreq->transferred += subreq->transferred;
+	iov_iter_advance(&wreq->buffer.iter, subreq->transferred);
+
+	stream->collected_to = subreq->start + subreq->transferred;
+	wreq->collected_to = stream->collected_to;
+	netfs_put_subrequest(subreq, netfs_sreq_trace_put_done);
+
+	trace_netfs_collect_stream(wreq, stream);
+	trace_netfs_collect_state(wreq, wreq->collected_to, 0);
+}
+
+/*
+ * Write data to the server without going through the pagecache and without
+ * writing it to the local cache. We dispatch the subrequests serially and
+ * wait for each to complete before dispatching the next, lest we leave a gap
+ * in the data written due to a failure such as ENOSPC. We could, however
+ * attempt to do preparation such as content encryption for the next subreq
+ * whilst the current is in progress.
+ */
+static int netfs_unbuffered_write(struct netfs_io_request *wreq)
+{
+	struct netfs_io_subrequest *subreq = NULL;
+	struct netfs_io_stream *stream = &wreq->io_streams[0];
+	int ret;
+
+	_enter("%llx", wreq->len);
+
+	if (wreq->origin == NETFS_DIO_WRITE)
+		inode_dio_begin(wreq->inode);
+
+	stream->collected_to = wreq->start;
+
+	for (;;) {
+		bool retry = false;
+
+		if (!subreq) {
+			netfs_prepare_write(wreq, stream, wreq->start + wreq->transferred);
+			subreq = stream->construct;
+			stream->construct = NULL;
+			stream->front = NULL;
+		}
+
+		/* Check if (re-)preparation failed. */
+		if (unlikely(test_bit(NETFS_SREQ_FAILED, &subreq->flags))) {
+			netfs_write_subrequest_terminated(subreq, subreq->error);
+			wreq->error = subreq->error;
+			break;
+		}
+
+		iov_iter_truncate(&subreq->io_iter, wreq->len - wreq->transferred);
+		if (!iov_iter_count(&subreq->io_iter))
+			break;
+
+		subreq->len = netfs_limit_iter(&subreq->io_iter, 0,
+					       stream->sreq_max_len,
+					       stream->sreq_max_segs);
+		iov_iter_truncate(&subreq->io_iter, subreq->len);
+		stream->submit_extendable_to = subreq->len;
+
+		trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+		stream->issue_write(subreq);
+
+		/* Async, need to wait. */
+		netfs_wait_for_in_progress_stream(wreq, stream);
+
+		if (test_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {
+			retry = true;
+		} else if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) {
+			ret = subreq->error;
+			wreq->error = ret;
+			netfs_see_subrequest(subreq, netfs_sreq_trace_see_failed);
+			subreq = NULL;
+			break;
+		}
+		ret = 0;
+
+		if (!retry) {
+			netfs_unbuffered_write_collect(wreq, stream, subreq);
+			subreq = NULL;
+			if (wreq->transferred >= wreq->len)
+				break;
+			if (!wreq->iocb && signal_pending(current)) {
+				ret = wreq->transferred ? -EINTR : -ERESTARTSYS;
+				trace_netfs_rreq(wreq, netfs_rreq_trace_intr);
+				break;
+			}
+			continue;
+		}
+
+		/* We need to retry the last subrequest, so first reset the
+		 * iterator, taking into account what, if anything, we managed
+		 * to transfer.
+		 */
+		subreq->error = -EAGAIN;
+		trace_netfs_sreq(subreq, netfs_sreq_trace_retry);
+		if (subreq->transferred > 0)
+			iov_iter_advance(&wreq->buffer.iter, subreq->transferred);
+
+		if (stream->source == NETFS_UPLOAD_TO_SERVER &&
+		    wreq->netfs_ops->retry_request)
+			wreq->netfs_ops->retry_request(wreq, stream);
+
+		__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);
+		__clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);
+		__clear_bit(NETFS_SREQ_FAILED, &subreq->flags);
+		subreq->io_iter = wreq->buffer.iter;
+		subreq->start = wreq->start + wreq->transferred;
+		subreq->len = wreq->len - wreq->transferred;
+		subreq->transferred = 0;
+		subreq->retry_count += 1;
+		stream->sreq_max_len = UINT_MAX;
+		stream->sreq_max_segs = INT_MAX;
+
+		netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);
+		stream->prepare_write(subreq);
+
+		__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+		netfs_stat(&netfs_n_wh_retry_write_subreq);
+	}
+
+	netfs_unbuffered_write_done(wreq);
+	_leave(" = %d", ret);
+	return ret;
+}
+
+static void netfs_unbuffered_write_async(struct work_struct *work)
+{
+	struct netfs_io_request *wreq = container_of(work, struct netfs_io_request, work);
+
+	netfs_unbuffered_write(wreq);
+	netfs_put_request(wreq, netfs_rreq_trace_put_complete);
+}
+
+/*
  * Perform an unbuffered write where we may have to do an RMW operation on an
  * encrypted file. This can also be used for direct I/O writes.
  */
···
 			 */
 			wreq->buffer.iter = *iter;
 		}
+
+		wreq->len = iov_iter_count(&wreq->buffer.iter);
 	}
 
 	__set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags);
-	if (async)
-		__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);
 
 	/* Copy the data into the bounce buffer and encrypt it. */
 	// TODO
 
 	/* Dispatch the write. */
 	__set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
-	if (async)
+
+	if (async) {
+		INIT_WORK(&wreq->work, netfs_unbuffered_write_async);
 		wreq->iocb = iocb;
-	wreq->len = iov_iter_count(&wreq->buffer.iter);
-	ret = netfs_unbuffered_write(wreq, is_sync_kiocb(iocb), wreq->len);
-	if (ret < 0) {
-		_debug("begin = %zd", ret);
-		goto out;
-	}
-
-	if (!async) {
-		ret = netfs_wait_for_write(wreq);
-		if (ret > 0)
-			iocb->ki_pos += ret;
-	} else {
+		queue_work(system_dfl_wq, &wreq->work);
 		ret = -EIOCBQUEUED;
+	} else {
+		ret = netfs_unbuffered_write(wreq);
+		if (ret < 0) {
+			_debug("begin = %zd", ret);
+		} else {
+			iocb->ki_pos += wreq->transferred;
+			ret = wreq->transferred ?: wreq->error;
+		}
+
+		netfs_put_request(wreq, netfs_rreq_trace_put_complete);
 	}
 
-out:
 	netfs_put_request(wreq, netfs_rreq_trace_put_return);
 	return ret;
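
The tail of the hunk above also splits the sync and async paths: an async
kiocb is now queued to a workqueue and the caller immediately gets
-EIOCBQUEUED, with ->ki_complete() invoked from the worker. A rough
userspace analogue with POSIX threads (the names and error constant are
invented for illustration):

#include <pthread.h>
#include <stddef.h>

#define SKETCH_EIOCBQUEUED	(-529)	/* stand-in for -EIOCBQUEUED */

struct sketch_req {
	void	(*ki_complete)(struct sketch_req *req, long result);
	long	result;
};

static long sketch_do_write(struct sketch_req *req)
{
	req->result = 0;	/* stand-in for the serial write loop */
	return req->result;
}

/* Like netfs_unbuffered_write_async(): run the write off the submitting
 * thread, then fire the completion callback.
 */
static void *sketch_worker(void *arg)
{
	struct sketch_req *req = arg;

	sketch_do_write(req);
	if (req->ki_complete)
		req->ki_complete(req, req->result);
	return NULL;
}

static long sketch_submit(struct sketch_req *req, int async)
{
	if (async) {
		pthread_t t;

		if (pthread_create(&t, NULL, sketch_worker, req))
			return -1;
		pthread_detach(t);
		return SKETCH_EIOCBQUEUED;	/* completion arrives via callback */
	}
	return sketch_do_write(req);		/* synchronous: wait inline */
}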

fs/netfs/internal.h (+3 -1)
···
 			       struct file *file,
 			       loff_t start,
 			       enum netfs_io_origin origin);
+void netfs_prepare_write(struct netfs_io_request *wreq,
+			 struct netfs_io_stream *stream,
+			 loff_t start);
 void netfs_reissue_write(struct netfs_io_stream *stream,
 			 struct netfs_io_subrequest *subreq,
 			 struct iov_iter *source);
···
 			      struct folio **writethrough_cache);
 ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_control *wbc,
 			       struct folio *writethrough_cache);
-int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t len);
 
 /*
  * write_retry.c

fs/netfs/write_collect.c (-21)
···
 		ictx->ops->invalidate_cache(wreq);
 	}
 
-	if ((wreq->origin == NETFS_UNBUFFERED_WRITE ||
-	     wreq->origin == NETFS_DIO_WRITE) &&
-	    !wreq->error)
-		netfs_update_i_size(ictx, &ictx->inode, wreq->start, wreq->transferred);
-
-	if (wreq->origin == NETFS_DIO_WRITE &&
-	    wreq->mapping->nrpages) {
-		/* mmap may have got underfoot and we may now have folios
-		 * locally covering the region we just wrote. Attempt to
-		 * discard the folios, but leave in place any modified locally.
-		 * ->write_iter() is prevented from interfering by the DIO
-		 * counter.
-		 */
-		pgoff_t first = wreq->start >> PAGE_SHIFT;
-		pgoff_t last = (wreq->start + wreq->transferred - 1) >> PAGE_SHIFT;
-		invalidate_inode_pages2_range(wreq->mapping, first, last);
-	}
-
-	if (wreq->origin == NETFS_DIO_WRITE)
-		inode_dio_end(wreq->inode);
-
 	_debug("finished");
 	netfs_wake_rreq_flag(wreq, NETFS_RREQ_IN_PROGRESS, netfs_rreq_trace_wake_ip);
 	/* As we cleared NETFS_RREQ_IN_PROGRESS, we acquired its ref. */

fs/netfs/write_issue.c (+3 -38)
···
  * Prepare a write subrequest. We need to allocate a new subrequest
  * if we don't have one.
  */
-static void netfs_prepare_write(struct netfs_io_request *wreq,
-				struct netfs_io_stream *stream,
-				loff_t start)
+void netfs_prepare_write(struct netfs_io_request *wreq,
+			 struct netfs_io_stream *stream,
+			 loff_t start)
 {
 	struct netfs_io_subrequest *subreq;
 	struct iov_iter *wreq_iter = &wreq->buffer.iter;
···
 	ret = netfs_wait_for_write(wreq);
 	netfs_put_request(wreq, netfs_rreq_trace_put_return);
 	return ret;
-}
-
-/*
- * Write data to the server without going through the pagecache and without
- * writing it to the local cache.
- */
-int netfs_unbuffered_write(struct netfs_io_request *wreq, bool may_wait, size_t len)
-{
-	struct netfs_io_stream *upload = &wreq->io_streams[0];
-	ssize_t part;
-	loff_t start = wreq->start;
-	int error = 0;
-
-	_enter("%zx", len);
-
-	if (wreq->origin == NETFS_DIO_WRITE)
-		inode_dio_begin(wreq->inode);
-
-	while (len) {
-		// TODO: Prepare content encryption
-
-		_debug("unbuffered %zx", len);
-		part = netfs_advance_write(wreq, upload, start, len, false);
-		start += part;
-		len -= part;
-		rolling_buffer_advance(&wreq->buffer, part);
-		if (test_bit(NETFS_RREQ_PAUSE, &wreq->flags))
-			netfs_wait_for_paused_write(wreq);
-		if (test_bit(NETFS_RREQ_FAILED, &wreq->flags))
-			break;
-	}
-
-	netfs_end_issue_write(wreq);
-	_leave(" = %d", error);
-	return error;
 }
 
 /*

include/trace/events/netfs.h (+3 -1)
···
 	EM(netfs_rreq_trace_done,		"DONE   ")	\
 	EM(netfs_rreq_trace_end_copy_to_cache,	"END-C2C")	\
 	EM(netfs_rreq_trace_free,		"FREE   ")	\
+	EM(netfs_rreq_trace_intr,		"INTR   ")	\
 	EM(netfs_rreq_trace_ki_complete,	"KI-CMPL")	\
 	EM(netfs_rreq_trace_recollect,		"RECLLCT")	\
 	EM(netfs_rreq_trace_redirty,		"REDIRTY")	\
···
 	EM(netfs_sreq_trace_put_oom,		"PUT OOM    ")	\
 	EM(netfs_sreq_trace_put_wip,		"PUT WIP    ")	\
 	EM(netfs_sreq_trace_put_work,		"PUT WORK   ")	\
-	E_(netfs_sreq_trace_put_terminated,	"PUT TERM   ")
+	EM(netfs_sreq_trace_put_terminated,	"PUT TERM   ")	\
+	E_(netfs_sreq_trace_see_failed,		"SEE FAILED ")