Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

netfs: Adjust docs after foliation

Adjust the netfslib docs in light of the foliation changes.

Also un-kdoc-mark netfs_skip_folio_read() since it's internal and isn't
part of the API.

Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
cc: Matthew Wilcox <willy@infradead.org>
cc: linux-cachefs@redhat.com
cc: linux-mm@kvack.org
Link: https://lore.kernel.org/r/163706992597.3179783.18360472879717076435.stgit@warthog.procyon.org.uk/
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by David Howells, committed by Linus Torvalds
ddca5b0e d58071a8

+58 -41

Documentation/filesystems/netfs_library.rst  (+56 -39)
···
  .. SPDX-License-Identifier: GPL-2.0
 
  =================================
 -NETWORK FILESYSTEM HELPER LIBRARY
 +Network Filesystem Helper Library
  =================================
 
  .. Contents:
···
  The following services are provided:
 
 -* Handles transparent huge pages (THPs).
 +* Handle folios that span multiple pages.
 
 -* Insulates the netfs from VM interface changes.
 +* Insulate the netfs from VM interface changes.
 
 -* Allows the netfs to arbitrarily split reads up into pieces, even ones that
 -  don't match page sizes or page alignments and that may cross pages.
 +* Allow the netfs to arbitrarily split reads up into pieces, even ones that
 +  don't match folio sizes or folio alignments and that may cross folios.
 
 -* Allows the netfs to expand a readahead request in both directions to meet
 -  its needs.
 +* Allow the netfs to expand a readahead request in both directions to meet its
 +  needs.
 
 -* Allows the netfs to partially fulfil a read, which will then be resubmitted.
 +* Allow the netfs to partially fulfil a read, which will then be resubmitted.
 
 -* Handles local caching, allowing cached data and server-read data to be
 +* Handle local caching, allowing cached data and server-read data to be
    interleaved for a single request.
 
 -* Handles clearing of bufferage that aren't on the server.
 +* Handle clearing of bufferage that aren't on the server.
 
  * Handle retrying of reads that failed, switching reads from the cache to the
    server as necessary.
···
  Three read helpers are provided::
 
 -* void netfs_readahead(struct readahead_control *ractl,
 -                       const struct netfs_read_request_ops *ops,
 -                       void *netfs_priv);``
 -* int netfs_readpage(struct file *file,
 -                     struct page *page,
 -                     const struct netfs_read_request_ops *ops,
 -                     void *netfs_priv);
 -* int netfs_write_begin(struct file *file,
 -                        struct address_space *mapping,
 -                        loff_t pos,
 -                        unsigned int len,
 -                        unsigned int flags,
 -                        struct page **_page,
 -                        void **_fsdata,
 -                        const struct netfs_read_request_ops *ops,
 -                        void *netfs_priv);
 +	void netfs_readahead(struct readahead_control *ractl,
 +			     const struct netfs_read_request_ops *ops,
 +			     void *netfs_priv);
 +	int netfs_readpage(struct file *file,
 +			   struct folio *folio,
 +			   const struct netfs_read_request_ops *ops,
 +			   void *netfs_priv);
 +	int netfs_write_begin(struct file *file,
 +			      struct address_space *mapping,
 +			      loff_t pos,
 +			      unsigned int len,
 +			      unsigned int flags,
 +			      struct folio **_folio,
 +			      void **_fsdata,
 +			      const struct netfs_read_request_ops *ops,
 +			      void *netfs_priv);
 
  Each corresponds to a VM operation, with the addition of a couple of parameters
  for the use of the read helpers:
···
  For ->readahead() and ->readpage(), the network filesystem should just jump
  into the corresponding read helper; whereas for ->write_begin(), it may be a
  little more complicated as the network filesystem might want to flush
 -conflicting writes or track dirty data and needs to put the acquired page if an
 -error occurs after calling the helper.
 +conflicting writes or track dirty data and needs to put the acquired folio if
 +an error occurs after calling the helper.
 
  The helpers manage the read request, calling back into the network filesystem
  through the suppplied table of operations.
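As a rough illustration of the "just jump into the corresponding read helper" pattern above, the glue for a hypothetical network filesystem might look like the sketch below. The ``myfs_*`` names and the ``myfs_req_ops`` table are assumptions for illustration, not part of this patch, and the fragment is not buildable on its own:

	/* Hypothetical glue for a netfs "myfs" using the read helpers,
	 * post-foliation.  myfs_req_ops would be a filled-in
	 * struct netfs_read_request_ops; error handling is abbreviated. */
	static void myfs_readahead(struct readahead_control *ractl)
	{
		netfs_readahead(ractl, &myfs_req_ops, NULL);
	}

	static int myfs_readpage(struct file *file, struct page *page)
	{
		/* ->readpage() still receives a page; convert to its folio. */
		return netfs_readpage(file, page_folio(page), &myfs_req_ops, NULL);
	}

	static int myfs_write_begin(struct file *file, struct address_space *mapping,
				    loff_t pos, unsigned int len, unsigned int flags,
				    struct page **pagep, void **fsdata)
	{
		struct folio *folio;
		int ret;

		ret = netfs_write_begin(file, mapping, pos, len, flags,
					&folio, fsdata, &myfs_req_ops, NULL);
		if (ret < 0)
			return ret;	/* helper has already put the folio */
		*pagep = &folio->page;
		return 0;
	}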
Waits will be performed as
···
 	void (*issue_op)(struct netfs_read_subrequest *subreq);
 	bool (*is_still_valid)(struct netfs_read_request *rreq);
 	int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
-				 struct page *page, void **_fsdata);
+				 struct folio *folio, void **_fsdata);
 	void (*done)(struct netfs_read_request *rreq);
 	void (*cleanup)(struct address_space *mapping, void *netfs_priv);
 };
···
   There is no return value; the netfs_subreq_terminated() function should be
   called to indicate whether or not the operation succeeded and how much data
-  it transferred.  The filesystem also should not deal with setting pages
+  it transferred.  The filesystem also should not deal with setting folios
   uptodate, unlocking them or dropping their refs - the helpers need to deal
   with this as they have to coordinate with copying to the local cache.
 
-  Note that the helpers have the pages locked, but not pinned.  It is possible
-  to use the ITER_XARRAY iov iterator to refer to the range of the inode that
-  is being operated upon without the need to allocate large bvec tables.
+  Note that the helpers have the folios locked, but not pinned.  It is
+  possible to use the ITER_XARRAY iov iterator to refer to the range of the
+  inode that is being operated upon without the need to allocate large bvec
+  tables.
 
 * ``is_still_valid()``
···
 * ``check_write_begin()``
 
   [Optional] This is called from the netfs_write_begin() helper once it has
-  allocated/grabbed the page to be modified to allow the filesystem to flush
+  allocated/grabbed the folio to be modified to allow the filesystem to flush
   conflicting state before allowing it to be modified.
 
-  It should return 0 if everything is now fine, -EAGAIN if the page should be
+  It should return 0 if everything is now fine, -EAGAIN if the folio should be
   regrabbed and any other error code to abort the operation.
 
 * ``done``
 
-  [Optional] This is called after the pages in the request have all been
+  [Optional] This is called after the folios in the request have all been
   unlocked (and marked uptodate if applicable).
 
 * ``cleanup``
···
 * If NETFS_SREQ_CLEAR_TAIL was set, a short read will be cleared to the
   end of the slice instead of reissuing.
 
-* Once the data is read, the pages that have been fully read/cleared:
+* Once the data is read, the folios that have been fully read/cleared:
 
   * Will be marked uptodate.
···
   * Unlocked
 
-* Any pages that need writing to the cache will then have DIO writes issued.
+* Any folios that need writing to the cache will then have DIO writes issued.
 
 * Synchronous operations will wait for reading to be complete.
 
-* Writes to the cache will proceed asynchronously and the pages will have the
+* Writes to the cache will proceed asynchronously and the folios will have the
   PG_fscache mark removed when that completes.
 
 * The request structures will be cleaned up when everything has completed.
···
 		     bool seek_data,
 		     netfs_io_terminated_t term_func,
 		     void *term_func_priv);
+
+	int (*prepare_write)(struct netfs_cache_resources *cres,
+			     loff_t *_start, size_t *_len, loff_t i_size);
 
 	int (*write)(struct netfs_cache_resources *cres,
 		     loff_t start_pos,
···
   indicating whether the termination is definitely happening in the caller's
   context.
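The ``check_write_begin()`` contract described above (0 to proceed, -EAGAIN to regrab, anything else to abort) can be sketched as follows. ``myfs_check_write_begin`` and ``myfs_folio_has_stale_write`` are hypothetical names, not from this patch:

	/* Hypothetical ->check_write_begin(): flush or reject conflicting
	 * state before the folio is modified.  Locking, unlocking and
	 * putting the folio are handled by netfs_write_begin() itself. */
	static int myfs_check_write_begin(struct file *file, loff_t pos,
					  unsigned len, struct folio *folio,
					  void **_fsdata)
	{
		/* myfs_folio_has_stale_write() is an assumed fs-private test. */
		if (myfs_folio_has_stale_write(folio))
			return -EAGAIN;	/* helper puts the folio and regrabs */
		return 0;
	}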
 
+* ``prepare_write()``
+
+  [Required] Called to adjust a write to the cache and check that there is
+  sufficient space in the cache.  The start and length values indicate the
+  size of the write that netfslib is proposing, and this can be adjusted by
+  the cache to respect DIO boundaries.  The file size is passed for
+  information.
+
 * ``write()``
 
   [Required] Called to write to the cache.  The start file offset is given
···
 there isn't a read request structure as well, such as writing dirty data to the
 cache.
 
+
+API Function Reference
+======================
+
 .. kernel-doc:: include/linux/netfs.h
+.. kernel-doc:: fs/netfs/read_helper.c
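A cache backend's implementation of the new ``prepare_write()`` op might, as a sketch, expand the proposed region to its DIO alignment and refuse the write when space is short. The 512-byte granule, ``myfs_cache_prepare_write`` and ``myfs_cache_has_space`` are assumptions for illustration only:

	/* Hypothetical cache backend: round the proposed write out to DIO
	 * alignment and check for space.  CACHE_DIO_BLOCK and
	 * myfs_cache_has_space() are illustrative, not from this patch. */
	#define CACHE_DIO_BLOCK 512

	static int myfs_cache_prepare_write(struct netfs_cache_resources *cres,
					    loff_t *_start, size_t *_len,
					    loff_t i_size)
	{
		loff_t start = round_down(*_start, CACHE_DIO_BLOCK);
		loff_t end = round_up(*_start + *_len, CACHE_DIO_BLOCK);

		if (!myfs_cache_has_space(cres, end - start))
			return -ENOBUFS;	/* fall back to writing server-only */

		*_start = start;
		*_len = end - start;
		return 0;
	}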
fs/netfs/read_helper.c  (+2 -2)
···
 }
 EXPORT_SYMBOL(netfs_readpage);
 
-/**
- * netfs_skip_folio_read - prep a folio for writing without reading first
+/*
+ * Prepare a folio for writing without reading first
  * @folio: The folio being prepared
  * @pos: starting position for the write
  * @len: length of write