Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

iomap: simplify ->read_folio_range() error handling for reads

Instead of requiring that the caller call iomap_finish_folio_read()
even if the ->read_folio_range() callback returns an error, account for
this internally in iomap, which makes the interface simpler and matches
writeback's ->read_folio_range() error handling expectations.

Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
Link: https://patch.msgid.link/20251111193658.3495942-6-joannelkoong@gmail.com
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Christian Brauner <brauner@kernel.org>

Authored by Joanne Koong and committed by Christian Brauner
f8eaf794 6b1fd228

+41 -44
+3 -4
Documentation/filesystems/iomap/operations.rst
···
 iomap calls these functions:
 
 - ``read_folio_range``: Called to read in the range. This must be provided
-  by the caller. The caller is responsible for calling
-  iomap_finish_folio_read() after reading in the folio range. This should be
-  done even if an error is encountered during the read. This returns 0 on
-  success or a negative error on failure.
+  by the caller. If this succeeds, iomap_finish_folio_read() must be called
+  after the range is read in, regardless of whether the read succeeded or
+  failed.
 
 - ``submit_read``: Submit any pending read requests. This function is
   optional.
+2 -8
fs/fuse/file.c
···
 
 	if (ctx->rac) {
 		ret = fuse_handle_readahead(folio, ctx->rac, data, pos, len);
-		/*
-		 * If fuse_handle_readahead was successful, fuse_readpages_end
-		 * will do the iomap_finish_folio_read, else we need to call it
-		 * here
-		 */
-		if (ret)
-			iomap_finish_folio_read(folio, off, len, ret);
 	} else {
 		/*
 		 * for non-readahead read requests, do reads synchronously
···
 		 * out-of-order reads
 		 */
 		ret = fuse_do_readfolio(file, folio, off, len);
-		iomap_finish_folio_read(folio, off, len, ret);
+		if (!ret)
+			iomap_finish_folio_read(folio, off, len, ret);
 	}
 	return ret;
 }
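The fuse hunks above are the caller-side half of the contract change: iomap_finish_folio_read() is now called only when ->read_folio_range() is on its success path, while on error the callback returns immediately and leaves cleanup to iomap. A minimal userspace sketch of that ownership rule (hypothetical `toy_` names and simplified state, not the actual kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified folio read state for the sketch. */
struct toy_folio {
	bool locked;
	bool read_ended;
};

/* Stands in for iomap_finish_folio_read() ending the read. */
static void toy_finish_read(struct toy_folio *folio)
{
	folio->read_ended = true;
	folio->locked = false;
}

/*
 * Toy ->read_folio_range(): under the new contract the callback only
 * finishes the read on the success path; on error it returns the error
 * untouched and leaves unlock/end-of-read to iomap itself.
 */
static int toy_read_folio_range(struct toy_folio *folio, int err)
{
	if (err)
		return err;		/* iomap unwinds the failure internally */
	toy_finish_read(folio);		/* synchronous success path */
	return 0;
}
```

This mirrors why the fuse hunk deletes the `if (ret) iomap_finish_folio_read(...)` branch and guards the synchronous call with `if (!ret)`.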
+34 -29
fs/iomap/buffered-io.c
···
 		 * has already finished reading in the entire folio.
 		 */
 		spin_lock_irq(&ifs->state_lock);
-		ifs->read_bytes_pending += len + 1;
+		WARN_ON_ONCE(ifs->read_bytes_pending != 0);
+		ifs->read_bytes_pending = len + 1;
 		spin_unlock_irq(&ifs->state_lock);
 	}
 }
···
  */
 static void iomap_read_end(struct folio *folio, size_t bytes_submitted)
 {
-	struct iomap_folio_state *ifs;
+	struct iomap_folio_state *ifs = folio->private;
 
-	/*
-	 * If there are no bytes submitted, this means we are responsible for
-	 * unlocking the folio here, since no IO helper has taken ownership of
-	 * it.
-	 */
-	if (!bytes_submitted) {
-		folio_unlock(folio);
-		return;
-	}
-
-	ifs = folio->private;
 	if (ifs) {
 		bool end_read, uptodate;
-		/*
-		 * Subtract any bytes that were initially accounted to
-		 * read_bytes_pending but skipped for IO.
-		 * The +1 accounts for the bias we added in iomap_read_init().
-		 */
-		size_t bytes_not_submitted = folio_size(folio) + 1 -
-			bytes_submitted;
 
 		spin_lock_irq(&ifs->state_lock);
-		ifs->read_bytes_pending -= bytes_not_submitted;
-		/*
-		 * If !ifs->read_bytes_pending, this means all pending reads
-		 * by the IO helper have already completed, which means we need
-		 * to end the folio read here. If ifs->read_bytes_pending != 0,
-		 * the IO helper will end the folio read.
-		 */
-		end_read = !ifs->read_bytes_pending;
+		if (!ifs->read_bytes_pending) {
+			WARN_ON_ONCE(bytes_submitted);
+			end_read = true;
+		} else {
+			/*
+			 * Subtract any bytes that were initially accounted to
+			 * read_bytes_pending but skipped for IO. The +1
+			 * accounts for the bias we added in iomap_read_init().
+			 */
+			size_t bytes_not_submitted = folio_size(folio) + 1 -
+				bytes_submitted;
+			ifs->read_bytes_pending -= bytes_not_submitted;
+			/*
+			 * If !ifs->read_bytes_pending, this means all pending
+			 * reads by the IO helper have already completed, which
+			 * means we need to end the folio read here. If
+			 * ifs->read_bytes_pending != 0, the IO helper will end
+			 * the folio read.
+			 */
+			end_read = !ifs->read_bytes_pending;
+		}
 		if (end_read)
 			uptodate = ifs_is_fully_uptodate(folio, ifs);
 		spin_unlock_irq(&ifs->state_lock);
 		if (end_read)
 			folio_end_read(folio, uptodate);
+	} else if (!bytes_submitted) {
+		/*
+		 * If there were no bytes submitted, this means we are
+		 * responsible for unlocking the folio here, since no IO helper
+		 * has taken ownership of it. If there were bytes submitted,
+		 * then the IO helper will end the read via
+		 * iomap_finish_folio_read().
+		 */
+		folio_unlock(folio);
 	}
 }
···
 	} else {
 		if (!*bytes_submitted)
 			iomap_read_init(folio);
-		*bytes_submitted += plen;
 		ret = ctx->ops->read_folio_range(iter, ctx, plen);
 		if (ret)
 			return ret;
+		*bytes_submitted += plen;
 	}
 
 	ret = iomap_iter_advance(iter, plen);
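The accounting above hinges on a +1 bias: iomap_read_init() sets read_bytes_pending to the folio length plus one, so async completions can never drop the count to zero while submission is still in progress, and iomap_read_end() then subtracts the bias together with every byte that was never submitted. A small userspace sketch of just that arithmetic (assumed 4096-byte folio, hypothetical `toy_` names; illustrative, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define TOY_FOLIO_SIZE 4096	/* assumed folio size for the sketch */

/* Mirrors iomap_read_init(): bias the count so it stays nonzero. */
static size_t toy_read_init(void)
{
	return TOY_FOLIO_SIZE + 1;
}

/* Mirrors iomap_finish_folio_read() for one completed range. */
static size_t toy_finish_range(size_t pending, size_t len)
{
	return pending - len;
}

/*
 * Mirrors iomap_read_end(): drop the bias plus all bytes that were never
 * submitted. Returns true when the count hits zero, meaning the submitter
 * must end the folio read itself; otherwise a later completion will.
 */
static bool toy_read_end(size_t *pending, size_t bytes_submitted)
{
	size_t not_submitted = TOY_FOLIO_SIZE + 1 - bytes_submitted;

	*pending -= not_submitted;
	return *pending == 0;
}
```

If a 2048-byte range completes before toy_read_end() runs, pending is (4096 + 1) - 2048 = 2049, and toy_read_end() subtracts the remaining 2049 and ends the read itself. If the completion runs last, toy_read_end() leaves pending at 2048 and the completion path ends the read instead, which is exactly the two cases the kernel comment describes.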
+2 -3
include/linux/iomap.h
···
 	/*
 	 * Read in a folio range.
 	 *
-	 * The caller is responsible for calling iomap_finish_folio_read() after
-	 * reading in the folio range. This should be done even if an error is
-	 * encountered during the read.
+	 * If this succeeds, iomap_finish_folio_read() must be called after the
+	 * range is read in, regardless of whether the read succeeded or failed.
 	 *
 	 * Returns 0 on success or a negative error on failure.
 	 */