Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'f2fs-for-6.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs updates from Jaegeuk Kim:
"In this round, Matthew converted most of page operations to using
folio. Beyond the work, we've applied some performance tunings such as
GC and linear lookup, in addition to enhancing fault injection and
sanity checks.

Enhancements:
- large number of folio conversions
- add a control to turn on/off the linear lookup for performance
- tune GC logics for zoned block device
- improve fault injection and sanity checks

Bug fixes:
- handle error cases of memory donation
- fix to correct check conditions in f2fs_cross_rename
- fix to skip f2fs_balance_fs() if checkpoint is disabled
- don't over-report free space or inodes in statvfs
- prevent the current section from being selected as a victim during GC
- fix to calculate first_zoned_segno correctly
- fix to avoid inconsistence between SIT and SSA for zoned block device

As usual, there are several debugging patches and clean-ups as well"

* tag 'f2fs-for-6.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (195 commits)
f2fs: fix to correct check conditions in f2fs_cross_rename
f2fs: use d_inode(dentry) cleanup dentry->d_inode
f2fs: fix to skip f2fs_balance_fs() if checkpoint is disabled
f2fs: clean up to check bi_status w/ BLK_STS_OK
f2fs: introduce is_{meta,node}_folio
f2fs: add ckpt_valid_blocks to the section entry
f2fs: add a method for calculating the remaining blocks in the current segment in LFS mode.
f2fs: introduce FAULT_VMALLOC
f2fs: use vmalloc instead of kvmalloc in .init_{,de}compress_ctx
f2fs: add f2fs_bug_on() in f2fs_quota_read()
f2fs: add f2fs_bug_on() to detect potential bug
f2fs: remove unused sbi argument from checksum functions
f2fs: fix 32-bits hexademical number in fault injection doc
f2fs: don't over-report free space or inodes in statvfs
f2fs: return bool from __write_node_folio
f2fs: simplify return value handling in f2fs_fsync_node_pages
f2fs: always unlock the page in f2fs_write_single_data_page
f2fs: remove wbc->for_reclaim handling
f2fs: return bool from __f2fs_write_meta_folio
f2fs: fix to return correct error number in f2fs_sync_node_pages()
...

+2058 -1785
+41 -26
Documentation/ABI/testing/sysfs-fs-f2fs
```diff
···
 		inode_checksum, flexible_inline_xattr, quota_ino,
 		inode_crtime, lost_found, verity, sb_checksum,
 		casefold, readonly, compression, test_dummy_encryption_v2,
-		atomic_write, pin_file, encrypted_casefold.
+		atomic_write, pin_file, encrypted_casefold, linear_lookup.
 
 What:		/sys/fs/f2fs/<disk>/inject_rate
 Date:		May 2016
···
 		enabled with fault_injection option, fault type value
 		is shown below, it supports single or combined type.
 
-		=========================== ===========
+		=========================== ==========
 		Type_Name                   Type_Value
-		=========================== ===========
-		FAULT_KMALLOC               0x000000001
-		FAULT_KVMALLOC              0x000000002
-		FAULT_PAGE_ALLOC            0x000000004
-		FAULT_PAGE_GET              0x000000008
-		FAULT_ALLOC_BIO             0x000000010 (obsolete)
-		FAULT_ALLOC_NID             0x000000020
-		FAULT_ORPHAN                0x000000040
-		FAULT_BLOCK                 0x000000080
-		FAULT_DIR_DEPTH             0x000000100
-		FAULT_EVICT_INODE           0x000000200
-		FAULT_TRUNCATE              0x000000400
-		FAULT_READ_IO               0x000000800
-		FAULT_CHECKPOINT            0x000001000
-		FAULT_DISCARD               0x000002000
-		FAULT_WRITE_IO              0x000004000
-		FAULT_SLAB_ALLOC            0x000008000
-		FAULT_DQUOT_INIT            0x000010000
-		FAULT_LOCK_OP               0x000020000
-		FAULT_BLKADDR_VALIDITY      0x000040000
-		FAULT_BLKADDR_CONSISTENCE   0x000080000
-		FAULT_NO_SEGMENT            0x000100000
-		FAULT_INCONSISTENT_FOOTER   0x000200000
-		=========================== ===========
+		=========================== ==========
+		FAULT_KMALLOC               0x00000001
+		FAULT_KVMALLOC              0x00000002
+		FAULT_PAGE_ALLOC            0x00000004
+		FAULT_PAGE_GET              0x00000008
+		FAULT_ALLOC_BIO             0x00000010 (obsolete)
+		FAULT_ALLOC_NID             0x00000020
+		FAULT_ORPHAN                0x00000040
+		FAULT_BLOCK                 0x00000080
+		FAULT_DIR_DEPTH             0x00000100
+		FAULT_EVICT_INODE           0x00000200
+		FAULT_TRUNCATE              0x00000400
+		FAULT_READ_IO               0x00000800
+		FAULT_CHECKPOINT            0x00001000
+		FAULT_DISCARD               0x00002000
+		FAULT_WRITE_IO              0x00004000
+		FAULT_SLAB_ALLOC            0x00008000
+		FAULT_DQUOT_INIT            0x00010000
+		FAULT_LOCK_OP               0x00020000
+		FAULT_BLKADDR_VALIDITY      0x00040000
+		FAULT_BLKADDR_CONSISTENCE   0x00080000
+		FAULT_NO_SEGMENT            0x00100000
+		FAULT_INCONSISTENT_FOOTER   0x00200000
+		FAULT_TIMEOUT               0x00400000 (1000ms)
+		FAULT_VMALLOC               0x00800000
+		=========================== ==========
 
 What:		/sys/fs/f2fs/<disk>/discard_io_aware_gran
 Date:		January 2023
···
 		reserved_blocks. However, it is not enough, since this extra space should
 		not be shown to users. So, with this new sysfs node, we can hide the space
 		by substracting reserved_blocks from total bytes.
+
+What:		/sys/fs/f2fs/<disk>/encoding_flags
+Date:		April 2025
+Contact:	"Chao Yu" <chao@kernel.org>
+Description:	This is a read-only entry to show the value of sb.s_encoding_flags, the
+		value is hexadecimal.
+
+		============================ ==========
+		Flag_Name                    Flag_Value
+		============================ ==========
+		SB_ENC_STRICT_MODE_FL        0x00000001
+		SB_ENC_NO_COMPAT_FALLBACK_FL 0x00000002
+		============================ ==========
```
+27 -25
Documentation/filesystems/f2fs.rst
```diff
···
 		enabled with fault_injection option, fault type value
 		is shown below, it supports single or combined type.
 
-		=========================== ===========
+		=========================== ==========
 		Type_Name                   Type_Value
-		=========================== ===========
-		FAULT_KMALLOC               0x000000001
-		FAULT_KVMALLOC              0x000000002
-		FAULT_PAGE_ALLOC            0x000000004
-		FAULT_PAGE_GET              0x000000008
-		FAULT_ALLOC_BIO             0x000000010 (obsolete)
-		FAULT_ALLOC_NID             0x000000020
-		FAULT_ORPHAN                0x000000040
-		FAULT_BLOCK                 0x000000080
-		FAULT_DIR_DEPTH             0x000000100
-		FAULT_EVICT_INODE           0x000000200
-		FAULT_TRUNCATE              0x000000400
-		FAULT_READ_IO               0x000000800
-		FAULT_CHECKPOINT            0x000001000
-		FAULT_DISCARD               0x000002000
-		FAULT_WRITE_IO              0x000004000
-		FAULT_SLAB_ALLOC            0x000008000
-		FAULT_DQUOT_INIT            0x000010000
-		FAULT_LOCK_OP               0x000020000
-		FAULT_BLKADDR_VALIDITY      0x000040000
-		FAULT_BLKADDR_CONSISTENCE   0x000080000
-		FAULT_NO_SEGMENT            0x000100000
-		FAULT_INCONSISTENT_FOOTER   0x000200000
-		=========================== ===========
+		=========================== ==========
+		FAULT_KMALLOC               0x00000001
+		FAULT_KVMALLOC              0x00000002
+		FAULT_PAGE_ALLOC            0x00000004
+		FAULT_PAGE_GET              0x00000008
+		FAULT_ALLOC_BIO             0x00000010 (obsolete)
+		FAULT_ALLOC_NID             0x00000020
+		FAULT_ORPHAN                0x00000040
+		FAULT_BLOCK                 0x00000080
+		FAULT_DIR_DEPTH             0x00000100
+		FAULT_EVICT_INODE           0x00000200
+		FAULT_TRUNCATE              0x00000400
+		FAULT_READ_IO               0x00000800
+		FAULT_CHECKPOINT            0x00001000
+		FAULT_DISCARD               0x00002000
+		FAULT_WRITE_IO              0x00004000
+		FAULT_SLAB_ALLOC            0x00008000
+		FAULT_DQUOT_INIT            0x00010000
+		FAULT_LOCK_OP               0x00020000
+		FAULT_BLKADDR_VALIDITY      0x00040000
+		FAULT_BLKADDR_CONSISTENCE   0x00080000
+		FAULT_NO_SEGMENT            0x00100000
+		FAULT_INCONSISTENT_FOOTER   0x00200000
+		FAULT_TIMEOUT               0x00400000 (1000ms)
+		FAULT_VMALLOC               0x00800000
+		=========================== ==========
 mode=%s		 Control block allocation mode which supports "adaptive"
 		 and "lfs". In "lfs" mode, there should be no random
 		 writes towards main area.
```
+16 -17
fs/f2fs/acl.c
```diff
···
 }
 
 static struct posix_acl *__f2fs_get_acl(struct inode *inode, int type,
-		struct page *dpage)
+		struct folio *dfolio)
 {
 	int name_index = F2FS_XATTR_INDEX_POSIX_ACL_DEFAULT;
 	void *value = NULL;
···
 	if (type == ACL_TYPE_ACCESS)
 		name_index = F2FS_XATTR_INDEX_POSIX_ACL_ACCESS;
 
-	retval = f2fs_getxattr(inode, name_index, "", NULL, 0, dpage);
+	retval = f2fs_getxattr(inode, name_index, "", NULL, 0, dfolio);
 	if (retval > 0) {
 		value = f2fs_kmalloc(F2FS_I_SB(inode), retval, GFP_F2FS_ZERO);
 		if (!value)
 			return ERR_PTR(-ENOMEM);
 		retval = f2fs_getxattr(inode, name_index, "", value,
-					retval, dpage);
+					retval, dfolio);
 	}
 
 	if (retval > 0)
···
 
 static int __f2fs_set_acl(struct mnt_idmap *idmap,
 			struct inode *inode, int type,
-			struct posix_acl *acl, struct page *ipage)
+			struct posix_acl *acl, struct folio *ifolio)
 {
 	int name_index;
 	void *value = NULL;
···
 	switch (type) {
 	case ACL_TYPE_ACCESS:
 		name_index = F2FS_XATTR_INDEX_POSIX_ACL_ACCESS;
-		if (acl && !ipage) {
-			error = f2fs_acl_update_mode(idmap, inode,
-							&mode, &acl);
+		if (acl && !ifolio) {
+			error = f2fs_acl_update_mode(idmap, inode, &mode, &acl);
 			if (error)
 				return error;
 			set_acl_inode(inode, mode);
···
 		}
 	}
 
-	error = f2fs_setxattr(inode, name_index, "", value, size, ipage, 0);
+	error = f2fs_setxattr(inode, name_index, "", value, size, ifolio, 0);
 
 	kfree(value);
 	if (!error)
···
 
 static int f2fs_acl_create(struct inode *dir, umode_t *mode,
 		struct posix_acl **default_acl, struct posix_acl **acl,
-		struct page *dpage)
+		struct folio *dfolio)
 {
 	struct posix_acl *p;
 	struct posix_acl *clone;
···
 	if (S_ISLNK(*mode) || !IS_POSIXACL(dir))
 		return 0;
 
-	p = __f2fs_get_acl(dir, ACL_TYPE_DEFAULT, dpage);
+	p = __f2fs_get_acl(dir, ACL_TYPE_DEFAULT, dfolio);
 	if (!p || p == ERR_PTR(-EOPNOTSUPP)) {
 		*mode &= ~current_umask();
 		return 0;
···
 	return ret;
 }
 
-int f2fs_init_acl(struct inode *inode, struct inode *dir, struct page *ipage,
-							struct page *dpage)
+int f2fs_init_acl(struct inode *inode, struct inode *dir, struct folio *ifolio,
+							struct folio *dfolio)
 {
 	struct posix_acl *default_acl = NULL, *acl = NULL;
 	int error;
 
-	error = f2fs_acl_create(dir, &inode->i_mode, &default_acl, &acl, dpage);
+	error = f2fs_acl_create(dir, &inode->i_mode, &default_acl, &acl, dfolio);
 	if (error)
 		return error;
 
 	f2fs_mark_inode_dirty_sync(inode, true);
 
 	if (default_acl) {
-		error = __f2fs_set_acl(NULL, inode, ACL_TYPE_DEFAULT, default_acl,
-				ipage);
+		error = __f2fs_set_acl(NULL, inode, ACL_TYPE_DEFAULT,
+				default_acl, ifolio);
 		posix_acl_release(default_acl);
 	} else {
 		inode->i_default_acl = NULL;
 	}
 	if (acl) {
 		if (!error)
-			error = __f2fs_set_acl(NULL, inode, ACL_TYPE_ACCESS, acl,
-					ipage);
+			error = __f2fs_set_acl(NULL, inode, ACL_TYPE_ACCESS,
+					acl, ifolio);
 		posix_acl_release(acl);
 	} else {
 		inode->i_acl = NULL;
```
+5 -5
fs/f2fs/acl.h
```diff
···
 
 #ifdef CONFIG_F2FS_FS_POSIX_ACL
 
-extern struct posix_acl *f2fs_get_acl(struct inode *, int, bool);
-extern int f2fs_set_acl(struct mnt_idmap *, struct dentry *,
+struct posix_acl *f2fs_get_acl(struct inode *, int, bool);
+int f2fs_set_acl(struct mnt_idmap *, struct dentry *,
 			struct posix_acl *, int);
-extern int f2fs_init_acl(struct inode *, struct inode *, struct page *,
-			struct page *);
+int f2fs_init_acl(struct inode *, struct inode *, struct folio *ifolio,
+			struct folio *dfolio);
 #else
 #define f2fs_get_acl	NULL
 #define f2fs_set_acl	NULL
 
 static inline int f2fs_init_acl(struct inode *inode, struct inode *dir,
-			struct page *ipage, struct page *dpage)
+			struct folio *ifolio, struct folio *dfolio)
 {
 	return 0;
 }
```
+114 -128
fs/f2fs/checkpoint.c
```diff
···
 void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io,
 						unsigned char reason)
 {
-	f2fs_build_fault_attr(sbi, 0, 0);
+	f2fs_build_fault_attr(sbi, 0, 0, FAULT_ALL);
 	if (!end_io)
 		f2fs_flush_merged_writes(sbi);
 	f2fs_handle_critical_error(sbi, reason);
···
 /*
  * We guarantee no failure on the returned page.
  */
-struct page *f2fs_grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
+struct folio *f2fs_grab_meta_folio(struct f2fs_sb_info *sbi, pgoff_t index)
 {
 	struct address_space *mapping = META_MAPPING(sbi);
-	struct page *page;
+	struct folio *folio;
 repeat:
-	page = f2fs_grab_cache_page(mapping, index, false);
-	if (!page) {
+	folio = f2fs_grab_cache_folio(mapping, index, false);
+	if (IS_ERR(folio)) {
 		cond_resched();
 		goto repeat;
 	}
-	f2fs_wait_on_page_writeback(page, META, true, true);
-	if (!PageUptodate(page))
-		SetPageUptodate(page);
-	return page;
+	f2fs_folio_wait_writeback(folio, META, true, true);
+	if (!folio_test_uptodate(folio))
+		folio_mark_uptodate(folio);
+	return folio;
 }
 
-static struct page *__get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index,
+static struct folio *__get_meta_folio(struct f2fs_sb_info *sbi, pgoff_t index,
 							bool is_meta)
 {
 	struct address_space *mapping = META_MAPPING(sbi);
···
 	f2fs_update_iostat(sbi, NULL, FS_META_READ_IO, F2FS_BLKSIZE);
 
 	folio_lock(folio);
-	if (unlikely(folio->mapping != mapping)) {
+	if (unlikely(!is_meta_folio(folio))) {
 		f2fs_folio_put(folio, true);
 		goto repeat;
 	}
···
 		return ERR_PTR(-EIO);
 	}
 out:
-	return &folio->page;
+	return folio;
 }
 
-struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index)
+struct folio *f2fs_get_meta_folio(struct f2fs_sb_info *sbi, pgoff_t index)
 {
-	return __get_meta_page(sbi, index, true);
+	return __get_meta_folio(sbi, index, true);
 }
 
-struct page *f2fs_get_meta_page_retry(struct f2fs_sb_info *sbi, pgoff_t index)
+struct folio *f2fs_get_meta_folio_retry(struct f2fs_sb_info *sbi, pgoff_t index)
 {
-	struct page *page;
+	struct folio *folio;
 	int count = 0;
 
 retry:
-	page = __get_meta_page(sbi, index, true);
-	if (IS_ERR(page)) {
-		if (PTR_ERR(page) == -EIO &&
+	folio = __get_meta_folio(sbi, index, true);
+	if (IS_ERR(folio)) {
+		if (PTR_ERR(folio) == -EIO &&
 				++count <= DEFAULT_RETRY_IO_COUNT)
 			goto retry;
 		f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_META_PAGE);
 	}
-	return page;
+	return folio;
 }
 
 /* for POR only */
-struct page *f2fs_get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index)
+struct folio *f2fs_get_tmp_folio(struct f2fs_sb_info *sbi, pgoff_t index)
 {
-	return __get_meta_page(sbi, index, false);
+	return __get_meta_folio(sbi, index, false);
 }
 
 static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr,
···
 int f2fs_ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
 							int type, bool sync)
 {
-	struct page *page;
 	block_t blkno = start;
 	struct f2fs_io_info fio = {
 		.sbi = sbi,
···
 
 	blk_start_plug(&plug);
 	for (; nrpages-- > 0; blkno++) {
+		struct folio *folio;
 
 		if (!f2fs_is_valid_blkaddr(sbi, blkno, type))
 			goto out;
···
 			BUG();
 		}
 
-		page = f2fs_grab_cache_page(META_MAPPING(sbi),
+		folio = f2fs_grab_cache_folio(META_MAPPING(sbi),
 						fio.new_blkaddr, false);
-		if (!page)
+		if (IS_ERR(folio))
 			continue;
-		if (PageUptodate(page)) {
-			f2fs_put_page(page, 1);
+		if (folio_test_uptodate(folio)) {
+			f2fs_folio_put(folio, true);
 			continue;
 		}
 
-		fio.page = page;
+		fio.page = &folio->page;
 		err = f2fs_submit_page_bio(&fio);
-		f2fs_put_page(page, err ? 1 : 0);
+		f2fs_folio_put(folio, err ? true : false);
 
 		if (!err)
 			f2fs_update_iostat(sbi, NULL, FS_META_READ_IO,
···
 void f2fs_ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index,
 						unsigned int ra_blocks)
 {
-	struct page *page;
+	struct folio *folio;
 	bool readahead = false;
 
 	if (ra_blocks == RECOVERY_MIN_RA_BLOCKS)
 		return;
 
-	page = find_get_page(META_MAPPING(sbi), index);
-	if (!page || !PageUptodate(page))
+	folio = filemap_get_folio(META_MAPPING(sbi), index);
+	if (IS_ERR(folio) || !folio_test_uptodate(folio))
 		readahead = true;
-	f2fs_put_page(page, 0);
+	f2fs_folio_put(folio, false);
 
 	if (readahead)
 		f2fs_ra_meta_pages(sbi, index, ra_blocks, META_POR, true);
 }
 
-static int __f2fs_write_meta_page(struct page *page,
+static bool __f2fs_write_meta_folio(struct folio *folio,
 				struct writeback_control *wbc,
 				enum iostat_type io_type)
 {
-	struct f2fs_sb_info *sbi = F2FS_P_SB(page);
-	struct folio *folio = page_folio(page);
+	struct f2fs_sb_info *sbi = F2FS_F_SB(folio);
 
 	trace_f2fs_writepage(folio, META);
···
 			folio_clear_uptodate(folio);
 			dec_page_count(sbi, F2FS_DIRTY_META);
 			folio_unlock(folio);
-			return 0;
+			return true;
 		}
 		goto redirty_out;
 	}
 	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
 		goto redirty_out;
-	if (wbc->for_reclaim && folio->index < GET_SUM_BLOCK(sbi, 0))
-		goto redirty_out;
 
 	f2fs_do_write_meta_page(sbi, folio, io_type);
 	dec_page_count(sbi, F2FS_DIRTY_META);
-
-	if (wbc->for_reclaim)
-		f2fs_submit_merged_write_cond(sbi, NULL, page, 0, META);
 
 	folio_unlock(folio);
 
 	if (unlikely(f2fs_cp_error(sbi)))
 		f2fs_submit_merged_write(sbi, META);
 
-	return 0;
+	return true;
 
 redirty_out:
-	redirty_page_for_writepage(wbc, page);
-	return AOP_WRITEPAGE_ACTIVATE;
+	folio_redirty_for_writepage(wbc, folio);
+	return false;
 }
 
 static int f2fs_write_meta_pages(struct address_space *mapping,
···
 	struct folio_batch fbatch;
 	long nwritten = 0;
 	int nr_folios;
-	struct writeback_control wbc = {
-		.for_reclaim = 0,
-	};
+	struct writeback_control wbc = {};
 	struct blk_plug plug;
 
 	folio_batch_init(&fbatch);
···
 
 			folio_lock(folio);
 
-			if (unlikely(folio->mapping != mapping)) {
+			if (unlikely(!is_meta_folio(folio))) {
 continue_unlock:
 				folio_unlock(folio);
 				continue;
···
 				goto continue_unlock;
 			}
 
-			f2fs_wait_on_page_writeback(&folio->page, META,
-							true, true);
+			f2fs_folio_wait_writeback(folio, META, true, true);
 
 			if (!folio_clear_dirty_for_io(folio))
 				goto continue_unlock;
 
-			if (__f2fs_write_meta_page(&folio->page, &wbc,
+			if (!__f2fs_write_meta_folio(folio, &wbc,
 						io_type)) {
 				folio_unlock(folio);
 				break;
···
 {
 	struct inode_management *im = &sbi->im[type];
 	struct ino_entry *e = NULL, *new = NULL;
+	int ret;
 
 	if (type == FLUSH_INO) {
 		rcu_read_lock();
···
 	new = f2fs_kmem_cache_alloc(ino_entry_slab,
 					GFP_NOFS, true, NULL);
 
-	radix_tree_preload(GFP_NOFS | __GFP_NOFAIL);
+	ret = radix_tree_preload(GFP_NOFS | __GFP_NOFAIL);
+	f2fs_bug_on(sbi, ret);
 
 	spin_lock(&im->ino_lock);
 	e = radix_tree_lookup(&im->ino_root, ino);
···
 	f2fs_ra_meta_pages(sbi, start_blk, orphan_blocks, META_CP, true);
 
 	for (i = 0; i < orphan_blocks; i++) {
-		struct page *page;
+		struct folio *folio;
 		struct f2fs_orphan_block *orphan_blk;
 
-		page = f2fs_get_meta_page(sbi, start_blk + i);
-		if (IS_ERR(page)) {
-			err = PTR_ERR(page);
+		folio = f2fs_get_meta_folio(sbi, start_blk + i);
+		if (IS_ERR(folio)) {
+			err = PTR_ERR(folio);
 			goto out;
 		}
 
-		orphan_blk = (struct f2fs_orphan_block *)page_address(page);
+		orphan_blk = folio_address(folio);
 		for (j = 0; j < le32_to_cpu(orphan_blk->entry_count); j++) {
 			nid_t ino = le32_to_cpu(orphan_blk->ino[j]);
 
 			err = recover_orphan_inode(sbi, ino);
 			if (err) {
-				f2fs_put_page(page, 1);
+				f2fs_folio_put(folio, true);
 				goto out;
 			}
 		}
-		f2fs_put_page(page, 1);
+		f2fs_folio_put(folio, true);
 	}
 	/* clear Orphan Flag */
 	clear_ckpt_flags(sbi, CP_ORPHAN_PRESENT_FLAG);
···
 	unsigned int nentries = 0;
 	unsigned short index = 1;
 	unsigned short orphan_blocks;
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	struct ino_entry *orphan = NULL;
 	struct inode_management *im = &sbi->im[ORPHAN_INO];
···
 
 	/* loop for each orphan inode entry and write them in journal block */
 	list_for_each_entry(orphan, head, list) {
-		if (!page) {
-			page = f2fs_grab_meta_page(sbi, start_blk++);
-			orphan_blk =
-				(struct f2fs_orphan_block *)page_address(page);
+		if (!folio) {
+			folio = f2fs_grab_meta_folio(sbi, start_blk++);
+			orphan_blk = folio_address(folio);
 			memset(orphan_blk, 0, sizeof(*orphan_blk));
 		}
···
 			orphan_blk->blk_addr = cpu_to_le16(index);
 			orphan_blk->blk_count = cpu_to_le16(orphan_blocks);
 			orphan_blk->entry_count = cpu_to_le32(nentries);
-			set_page_dirty(page);
-			f2fs_put_page(page, 1);
+			folio_mark_dirty(folio);
+			f2fs_folio_put(folio, true);
 			index++;
 			nentries = 0;
-			page = NULL;
+			folio = NULL;
 		}
 	}
 
-	if (page) {
+	if (folio) {
 		orphan_blk->blk_addr = cpu_to_le16(index);
 		orphan_blk->blk_count = cpu_to_le16(orphan_blocks);
 		orphan_blk->entry_count = cpu_to_le32(nentries);
-		set_page_dirty(page);
-		f2fs_put_page(page, 1);
+		folio_mark_dirty(folio);
+		f2fs_folio_put(folio, true);
 	}
 }
 
-static __u32 f2fs_checkpoint_chksum(struct f2fs_sb_info *sbi,
-				struct f2fs_checkpoint *ckpt)
+static __u32 f2fs_checkpoint_chksum(struct f2fs_checkpoint *ckpt)
 {
 	unsigned int chksum_ofs = le32_to_cpu(ckpt->checksum_offset);
 	__u32 chksum;
 
-	chksum = f2fs_crc32(sbi, ckpt, chksum_ofs);
+	chksum = f2fs_crc32(ckpt, chksum_ofs);
 	if (chksum_ofs < CP_CHKSUM_OFFSET) {
 		chksum_ofs += sizeof(chksum);
-		chksum = f2fs_chksum(sbi, chksum, (__u8 *)ckpt + chksum_ofs,
-						F2FS_BLKSIZE - chksum_ofs);
+		chksum = f2fs_chksum(chksum, (__u8 *)ckpt + chksum_ofs,
+					F2FS_BLKSIZE - chksum_ofs);
 	}
 	return chksum;
 }
 
 static int get_checkpoint_version(struct f2fs_sb_info *sbi, block_t cp_addr,
-		struct f2fs_checkpoint **cp_block, struct page **cp_page,
+		struct f2fs_checkpoint **cp_block, struct folio **cp_folio,
 		unsigned long long *version)
 {
 	size_t crc_offset = 0;
 	__u32 crc;
 
-	*cp_page = f2fs_get_meta_page(sbi, cp_addr);
-	if (IS_ERR(*cp_page))
-		return PTR_ERR(*cp_page);
+	*cp_folio = f2fs_get_meta_folio(sbi, cp_addr);
+	if (IS_ERR(*cp_folio))
+		return PTR_ERR(*cp_folio);
 
-	*cp_block = (struct f2fs_checkpoint *)page_address(*cp_page);
+	*cp_block = folio_address(*cp_folio);
 
 	crc_offset = le32_to_cpu((*cp_block)->checksum_offset);
 	if (crc_offset < CP_MIN_CHKSUM_OFFSET ||
 			crc_offset > CP_CHKSUM_OFFSET) {
-		f2fs_put_page(*cp_page, 1);
+		f2fs_folio_put(*cp_folio, true);
 		f2fs_warn(sbi, "invalid crc_offset: %zu", crc_offset);
 		return -EINVAL;
 	}
 
-	crc = f2fs_checkpoint_chksum(sbi, *cp_block);
+	crc = f2fs_checkpoint_chksum(*cp_block);
 	if (crc != cur_cp_crc(*cp_block)) {
-		f2fs_put_page(*cp_page, 1);
+		f2fs_folio_put(*cp_folio, true);
 		f2fs_warn(sbi, "invalid crc value");
 		return -EINVAL;
 	}
···
 	return 0;
 }
 
-static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
+static struct folio *validate_checkpoint(struct f2fs_sb_info *sbi,
 				block_t cp_addr, unsigned long long *version)
 {
-	struct page *cp_page_1 = NULL, *cp_page_2 = NULL;
+	struct folio *cp_folio_1 = NULL, *cp_folio_2 = NULL;
 	struct f2fs_checkpoint *cp_block = NULL;
 	unsigned long long cur_version = 0, pre_version = 0;
 	unsigned int cp_blocks;
 	int err;
 
 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
-					&cp_page_1, version);
+					&cp_folio_1, version);
 	if (err)
 		return NULL;
···
 
 	cp_addr += cp_blocks - 1;
 	err = get_checkpoint_version(sbi, cp_addr, &cp_block,
-					&cp_page_2, version);
+					&cp_folio_2, version);
 	if (err)
 		goto invalid_cp;
 	cur_version = *version;
 
 	if (cur_version == pre_version) {
 		*version = cur_version;
-		f2fs_put_page(cp_page_2, 1);
-		return cp_page_1;
+		f2fs_folio_put(cp_folio_2, true);
+		return cp_folio_1;
 	}
-	f2fs_put_page(cp_page_2, 1);
+	f2fs_folio_put(cp_folio_2, true);
 invalid_cp:
-	f2fs_put_page(cp_page_1, 1);
+	f2fs_folio_put(cp_folio_1, true);
 	return NULL;
 }
···
 {
 	struct f2fs_checkpoint *cp_block;
 	struct f2fs_super_block *fsb = sbi->raw_super;
-	struct page *cp1, *cp2, *cur_page;
+	struct folio *cp1, *cp2, *cur_folio;
 	unsigned long blk_size = sbi->blocksize;
 	unsigned long long cp1_version = 0, cp2_version = 0;
 	unsigned long long cp_start_blk_no;
···
 
 	if (cp1 && cp2) {
 		if (ver_after(cp2_version, cp1_version))
-			cur_page = cp2;
+			cur_folio = cp2;
 		else
-			cur_page = cp1;
+			cur_folio = cp1;
 	} else if (cp1) {
-		cur_page = cp1;
+		cur_folio = cp1;
 	} else if (cp2) {
-		cur_page = cp2;
+		cur_folio = cp2;
 	} else {
 		err = -EFSCORRUPTED;
 		goto fail_no_cp;
 	}
 
-	cp_block = (struct f2fs_checkpoint *)page_address(cur_page);
+	cp_block = folio_address(cur_folio);
 	memcpy(sbi->ckpt, cp_block, blk_size);
 
-	if (cur_page == cp1)
+	if (cur_folio == cp1)
 		sbi->cur_cp_pack = 1;
 	else
 		sbi->cur_cp_pack = 2;
···
 		goto done;
 
 	cp_blk_no = le32_to_cpu(fsb->cp_blkaddr);
-	if (cur_page == cp2)
+	if (cur_folio == cp2)
 		cp_blk_no += BIT(le32_to_cpu(fsb->log_blocks_per_seg));
 
 	for (i = 1; i < cp_blks; i++) {
 		void *sit_bitmap_ptr;
 		unsigned char *ckpt = (unsigned char *)sbi->ckpt;
 
-		cur_page = f2fs_get_meta_page(sbi, cp_blk_no + i);
-		if (IS_ERR(cur_page)) {
-			err = PTR_ERR(cur_page);
+		cur_folio = f2fs_get_meta_folio(sbi, cp_blk_no + i);
+		if (IS_ERR(cur_folio)) {
+			err = PTR_ERR(cur_folio);
 			goto free_fail_no_cp;
 		}
-		sit_bitmap_ptr = page_address(cur_page);
+		sit_bitmap_ptr = folio_address(cur_folio);
 		memcpy(ckpt + i * blk_size, sit_bitmap_ptr, blk_size);
-		f2fs_put_page(cur_page, 1);
+		f2fs_folio_put(cur_folio, true);
 	}
 done:
-	f2fs_put_page(cp1, 1);
-	f2fs_put_page(cp2, 1);
+	f2fs_folio_put(cp1, true);
+	f2fs_folio_put(cp2, true);
 	return 0;
 
 free_fail_no_cp:
-	f2fs_put_page(cp1, 1);
-	f2fs_put_page(cp2, 1);
+	f2fs_folio_put(cp1, true);
+	f2fs_folio_put(cp2, true);
 fail_no_cp:
 	kvfree(sbi->ckpt);
 	return err;
···
 	struct writeback_control wbc = {
 		.sync_mode = WB_SYNC_ALL,
 		.nr_to_write = LONG_MAX,
-		.for_reclaim = 0,
 	};
 	int err = 0, cnt = 0;
···
 static void commit_checkpoint(struct f2fs_sb_info *sbi,
 	void *src, block_t blk_addr)
 {
-	struct writeback_control wbc = {
-		.for_reclaim = 0,
-	};
+	struct writeback_control wbc = {};
 
 	/*
-	 * filemap_get_folios_tag and lock_page again will take
+	 * filemap_get_folios_tag and folio_lock again will take
 	 * some extra time. Therefore, f2fs_update_meta_pages and
 	 * f2fs_sync_meta_pages are combined in this function.
 	 */
-	struct page *page = f2fs_grab_meta_page(sbi, blk_addr);
-	int err;
+	struct folio *folio = f2fs_grab_meta_folio(sbi, blk_addr);
 
-	f2fs_wait_on_page_writeback(page, META, true, true);
+	memcpy(folio_address(folio), src, PAGE_SIZE);
 
-	memcpy(page_address(page), src, PAGE_SIZE);
-
-	set_page_dirty(page);
-	if (unlikely(!clear_page_dirty_for_io(page)))
+	folio_mark_dirty(folio);
+	if (unlikely(!folio_clear_dirty_for_io(folio)))
 		f2fs_bug_on(sbi, 1);
 
 	/* writeout cp pack 2 page */
-	err = __f2fs_write_meta_page(page, &wbc, FS_CP_META_IO);
-	if (unlikely(err && f2fs_cp_error(sbi))) {
-		f2fs_put_page(page, 1);
-		return;
+	if (unlikely(!__f2fs_write_meta_folio(folio, &wbc, FS_CP_META_IO))) {
+		if (f2fs_cp_error(sbi)) {
+			f2fs_folio_put(folio, true);
+			return;
+		}
+		f2fs_bug_on(sbi, true);
 	}
 
-	f2fs_bug_on(sbi, err);
-	f2fs_put_page(page, 0);
+	f2fs_folio_put(folio, false);
 
 	/* submit checkpoint (with barrier if NOBARRIER is not set) */
 	f2fs_submit_merged_write(sbi, META_FLUSH);
···
 	get_sit_bitmap(sbi, __bitmap_ptr(sbi, SIT_BITMAP));
 	get_nat_bitmap(sbi, __bitmap_ptr(sbi, NAT_BITMAP));
 
-	crc32 = f2fs_checkpoint_chksum(sbi, ckpt);
+	crc32 = f2fs_checkpoint_chksum(ckpt);
 	*((__le32 *)((unsigned char *)ckpt +
 				le32_to_cpu(ckpt->checksum_offset)))
 				= cpu_to_le32(crc32);
```
+84 -82
fs/f2fs/compress.c
```diff
···
 	if (page_private_nonpointer(page))
 		return false;
 
-	f2fs_bug_on(F2FS_M_SB(page->mapping),
+	f2fs_bug_on(F2FS_P_SB(page),
 		*((u32 *)page_private(page)) != F2FS_COMPRESSED_PAGE_MAGIC);
 	return true;
 }
···
 	}
 }
 
-struct page *f2fs_compress_control_page(struct page *page)
+struct folio *f2fs_compress_control_folio(struct folio *folio)
 {
-	return ((struct compress_io_ctx *)page_private(page))->rpages[0];
+	struct compress_io_ctx *ctx = folio->private;
+
+	return page_folio(ctx->rpages[0]);
 }
 
 int f2fs_init_compress_ctx(struct compress_ctx *cc)
···
 #ifdef CONFIG_F2FS_FS_LZO
 static int lzo_init_compress_ctx(struct compress_ctx *cc)
 {
-	cc->private = f2fs_kvmalloc(F2FS_I_SB(cc->inode),
-				LZO1X_MEM_COMPRESS, GFP_NOFS);
+	cc->private = f2fs_vmalloc(F2FS_I_SB(cc->inode),
+				LZO1X_MEM_COMPRESS);
 	if (!cc->private)
 		return -ENOMEM;
···
 
 static void lzo_destroy_compress_ctx(struct compress_ctx *cc)
 {
-	kvfree(cc->private);
+	vfree(cc->private);
 	cc->private = NULL;
 }
···
 	size = LZ4HC_MEM_COMPRESS;
 #endif
 
-	cc->private = f2fs_kvmalloc(F2FS_I_SB(cc->inode), size, GFP_NOFS);
+	cc->private = f2fs_vmalloc(F2FS_I_SB(cc->inode), size);
 	if (!cc->private)
 		return -ENOMEM;
···
 
 static void lz4_destroy_compress_ctx(struct compress_ctx *cc)
 {
-	kvfree(cc->private);
+	vfree(cc->private);
 	cc->private = NULL;
 }
···
 	params = zstd_get_params(level, cc->rlen);
 	workspace_size = zstd_cstream_workspace_bound(&params.cParams);
 
-	workspace = f2fs_kvmalloc(F2FS_I_SB(cc->inode),
-					workspace_size, GFP_NOFS);
+	workspace = f2fs_vmalloc(F2FS_I_SB(cc->inode), workspace_size);
 	if (!workspace)
 		return -ENOMEM;
···
 	if (!stream) {
 		f2fs_err_ratelimited(F2FS_I_SB(cc->inode),
 				"%s zstd_init_cstream failed", __func__);
-		kvfree(workspace);
+		vfree(workspace);
 		return -EIO;
 	}
···
 
 static void zstd_destroy_compress_ctx(struct compress_ctx *cc)
 {
-	kvfree(cc->private);
+	vfree(cc->private);
 	cc->private = NULL;
 	cc->private2 = NULL;
 }
···
 
 	workspace_size = zstd_dstream_workspace_bound(max_window_size);
 
-	workspace = f2fs_kvmalloc(F2FS_I_SB(dic->inode),
-					workspace_size, GFP_NOFS);
+	workspace = f2fs_vmalloc(F2FS_I_SB(dic->inode), workspace_size);
 	if (!workspace)
 		return -ENOMEM;
···
 	if (!stream) {
 		f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
 				"%s zstd_init_dstream failed", __func__);
-		kvfree(workspace);
+		vfree(workspace);
 		return -EIO;
 	}
···
 
 static void zstd_destroy_decompress_ctx(struct decompress_io_ctx *dic)
 {
-	kvfree(dic->private);
+	vfree(dic->private);
 	dic->private = NULL;
 	dic->private2 = NULL;
 }
···
 
 static void f2fs_compress_free_page(struct page *page)
 {
+	struct folio *folio;
+
 	if (!page)
 		return;
-	detach_page_private(page);
-	page->mapping = NULL;
-	unlock_page(page);
+	folio = page_folio(page);
+	folio_detach_private(folio);
+	folio->mapping = NULL;
+	folio_unlock(folio);
 	mempool_free(page, compress_page_pool);
 }
···
 	cc->cbuf->clen = cpu_to_le32(cc->clen);
 
 	if (fi->i_compress_flag & BIT(COMPRESS_CHKSUM))
-		chksum = f2fs_crc32(F2FS_I_SB(cc->inode),
-					cc->cbuf->cdata, cc->clen);
+		chksum = f2fs_crc32(cc->cbuf->cdata, cc->clen);
 	cc->cbuf->chksum = cpu_to_le32(chksum);
 
 	for (i = 0; i < COMPRESS_DATA_RESERVED_SIZE; i++)
···
 
 	if (!ret && (fi->i_compress_flag & BIT(COMPRESS_CHKSUM))) {
 		u32 provided = le32_to_cpu(dic->cbuf->chksum);
-		u32 calculated = f2fs_crc32(sbi, dic->cbuf->cdata, dic->clen);
+		u32 calculated = f2fs_crc32(dic->cbuf->cdata, dic->clen);
 
 		if (provided != calculated) {
 			if (!is_inode_flag_set(dic->inode, FI_COMPRESS_CORRUPT)) {
···
 	}
 
 	for (i = 1, count = 1; i < cluster_size; i++, count++) {
-		block_t blkaddr = data_blkaddr(dn->inode, dn->node_page,
+		block_t blkaddr = data_blkaddr(dn->inode, dn->node_folio,
 						dn->ofs_in_node + i);
 
 		/* [COMPR_ADDR, ..., COMPR_ADDR] */
···
 	int count, i;
 
 	for (i = 0, count = 0; i < cluster_size; i++) {
-		block_t blkaddr = data_blkaddr(dn->inode, dn->node_page,
+		block_t blkaddr = data_blkaddr(dn->inode, dn->node_folio,
 						dn->ofs_in_node + i);
 
 		if (__is_valid_data_blkaddr(blkaddr))
···
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(cc->inode);
 	struct address_space *mapping = cc->inode->i_mapping;
-	struct page *page;
+	struct folio *folio;
 	sector_t last_block_in_bio;
 	fgf_t fgp_flag = FGP_LOCK | FGP_WRITE | FGP_CREAT;
 	pgoff_t start_idx = start_idx_of_cluster(cc);
···
 	if (ret)
 		return ret;
 
-	/* keep page reference to avoid page reclaim */
+	/* keep folio reference to avoid page reclaim */
 	for (i = 0; i < cc->cluster_size; i++) {
-		page = f2fs_pagecache_get_page(mapping, start_idx + i,
-				fgp_flag, GFP_NOFS);
-		if (!page) {
-			ret = -ENOMEM;
+		folio = f2fs_filemap_get_folio(mapping, start_idx + i,
+				fgp_flag, GFP_NOFS);
+		if (IS_ERR(folio)) {
+			ret = PTR_ERR(folio);
 			goto unlock_pages;
 		}
 
-		if (PageUptodate(page))
-			f2fs_put_page(page, 1);
+		if (folio_test_uptodate(folio))
+			f2fs_folio_put(folio, true);
```
else 1122 - f2fs_compress_ctx_add_page(cc, page_folio(page)); 1120 + f2fs_compress_ctx_add_page(cc, folio); 1123 1121 } 1124 1122 1125 1123 if (!f2fs_cluster_is_empty(cc)) { ··· 1142 1140 for (i = 0; i < cc->cluster_size; i++) { 1143 1141 f2fs_bug_on(sbi, cc->rpages[i]); 1144 1142 1145 - page = find_lock_page(mapping, start_idx + i); 1146 - if (!page) { 1147 - /* page can be truncated */ 1143 + folio = filemap_lock_folio(mapping, start_idx + i); 1144 + if (IS_ERR(folio)) { 1145 + /* folio could be truncated */ 1148 1146 goto release_and_retry; 1149 1147 } 1150 1148 1151 - f2fs_wait_on_page_writeback(page, DATA, true, true); 1152 - f2fs_compress_ctx_add_page(cc, page_folio(page)); 1149 + f2fs_folio_wait_writeback(folio, DATA, true, true); 1150 + f2fs_compress_ctx_add_page(cc, folio); 1153 1151 1154 - if (!PageUptodate(page)) { 1155 - f2fs_handle_page_eio(sbi, page_folio(page), DATA); 1152 + if (!folio_test_uptodate(folio)) { 1153 + f2fs_handle_page_eio(sbi, folio, DATA); 1156 1154 release_and_retry: 1157 1155 f2fs_put_rpages(cc); 1158 1156 f2fs_unlock_rpages(cc, i + 1); ··· 1319 1317 goto out_unlock_op; 1320 1318 1321 1319 for (i = 0; i < cc->cluster_size; i++) { 1322 - if (data_blkaddr(dn.inode, dn.node_page, 1320 + if (data_blkaddr(dn.inode, dn.node_folio, 1323 1321 dn.ofs_in_node + i) == NULL_ADDR) 1324 1322 goto out_put_dnode; 1325 1323 } ··· 1351 1349 page_folio(cc->rpages[i + 1])->index, cic); 1352 1350 fio.compressed_page = cc->cpages[i]; 1353 1351 1354 - fio.old_blkaddr = data_blkaddr(dn.inode, dn.node_page, 1352 + fio.old_blkaddr = data_blkaddr(dn.inode, dn.node_folio, 1355 1353 dn.ofs_in_node + i + 1); 1356 1354 1357 1355 /* wait for GCed page writeback via META_MAPPING */ ··· 1483 1481 f2fs_is_compressed_page(page)); 1484 1482 int i; 1485 1483 1486 - if (unlikely(bio->bi_status)) 1484 + if (unlikely(bio->bi_status != BLK_STS_OK)) 1487 1485 mapping_set_error(cic->inode->i_mapping, -EIO); 1488 1486 1489 1487 f2fs_compress_free_page(page); ··· 1531 1529 
f2fs_lock_op(sbi); 1532 1530 1533 1531 for (i = 0; i < cc->cluster_size; i++) { 1532 + struct folio *folio; 1533 + 1534 1534 if (!cc->rpages[i]) 1535 1535 continue; 1536 + folio = page_folio(cc->rpages[i]); 1536 1537 retry_write: 1537 - lock_page(cc->rpages[i]); 1538 + folio_lock(folio); 1538 1539 1539 - if (cc->rpages[i]->mapping != mapping) { 1540 + if (folio->mapping != mapping) { 1540 1541 continue_unlock: 1541 - unlock_page(cc->rpages[i]); 1542 + folio_unlock(folio); 1542 1543 continue; 1543 1544 } 1544 1545 1545 - if (!PageDirty(cc->rpages[i])) 1546 + if (!folio_test_dirty(folio)) 1546 1547 goto continue_unlock; 1547 1548 1548 - if (folio_test_writeback(page_folio(cc->rpages[i]))) { 1549 + if (folio_test_writeback(folio)) { 1549 1550 if (wbc->sync_mode == WB_SYNC_NONE) 1550 1551 goto continue_unlock; 1551 - f2fs_wait_on_page_writeback(cc->rpages[i], DATA, true, true); 1552 + f2fs_folio_wait_writeback(folio, DATA, true, true); 1552 1553 } 1553 1554 1554 - if (!clear_page_dirty_for_io(cc->rpages[i])) 1555 + if (!folio_clear_dirty_for_io(folio)) 1555 1556 goto continue_unlock; 1556 1557 1557 1558 submitted = 0; 1558 - ret = f2fs_write_single_data_page(page_folio(cc->rpages[i]), 1559 - &submitted, 1559 + ret = f2fs_write_single_data_page(folio, &submitted, 1560 1560 NULL, NULL, wbc, io_type, 1561 1561 compr_blocks, false); 1562 1562 if (ret) { 1563 - if (ret == AOP_WRITEPAGE_ACTIVATE) { 1564 - unlock_page(cc->rpages[i]); 1563 + if (ret == 1) { 1565 1564 ret = 0; 1566 1565 } else if (ret == -EAGAIN) { 1567 1566 ret = 0; ··· 1865 1862 } 1866 1863 1867 1864 /* 1868 - * Put a reference to a compressed page's decompress_io_ctx. 1865 + * Put a reference to a compressed folio's decompress_io_ctx. 1869 1866 * 1870 - * This is called when the page is no longer needed and can be freed. 1867 + * This is called when the folio is no longer needed and can be freed. 
1871 1868 */ 1872 - void f2fs_put_page_dic(struct page *page, bool in_task) 1869 + void f2fs_put_folio_dic(struct folio *folio, bool in_task) 1873 1870 { 1874 - struct decompress_io_ctx *dic = 1875 - (struct decompress_io_ctx *)page_private(page); 1871 + struct decompress_io_ctx *dic = folio->private; 1876 1872 1877 1873 f2fs_put_dic(dic, in_task); 1878 1874 } ··· 1883 1881 unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn, 1884 1882 unsigned int ofs_in_node) 1885 1883 { 1886 - bool compressed = data_blkaddr(dn->inode, dn->node_page, 1884 + bool compressed = data_blkaddr(dn->inode, dn->node_folio, 1887 1885 ofs_in_node) == COMPRESS_ADDR; 1888 1886 int i = compressed ? 1 : 0; 1889 - block_t first_blkaddr = data_blkaddr(dn->inode, dn->node_page, 1887 + block_t first_blkaddr = data_blkaddr(dn->inode, dn->node_folio, 1890 1888 ofs_in_node + i); 1891 1889 1892 1890 for (i += 1; i < F2FS_I(dn->inode)->i_cluster_size; i++) { 1893 - block_t blkaddr = data_blkaddr(dn->inode, dn->node_page, 1891 + block_t blkaddr = data_blkaddr(dn->inode, dn->node_folio, 1894 1892 ofs_in_node + i); 1895 1893 1896 1894 if (!__is_valid_data_blkaddr(blkaddr)) ··· 1924 1922 void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page, 1925 1923 nid_t ino, block_t blkaddr) 1926 1924 { 1927 - struct page *cpage; 1925 + struct folio *cfolio; 1928 1926 int ret; 1929 1927 1930 1928 if (!test_opt(sbi, COMPRESS_CACHE)) ··· 1936 1934 if (!f2fs_available_free_memory(sbi, COMPRESS_PAGE)) 1937 1935 return; 1938 1936 1939 - cpage = find_get_page(COMPRESS_MAPPING(sbi), blkaddr); 1940 - if (cpage) { 1941 - f2fs_put_page(cpage, 0); 1937 + cfolio = filemap_get_folio(COMPRESS_MAPPING(sbi), blkaddr); 1938 + if (!IS_ERR(cfolio)) { 1939 + f2fs_folio_put(cfolio, false); 1942 1940 return; 1943 1941 } 1944 1942 1945 - cpage = alloc_page(__GFP_NOWARN | __GFP_IO); 1946 - if (!cpage) 1943 + cfolio = filemap_alloc_folio(__GFP_NOWARN | __GFP_IO, 0); 1944 + if (!cfolio) 1947 1945 
return; 1948 1946 1949 - ret = add_to_page_cache_lru(cpage, COMPRESS_MAPPING(sbi), 1947 + ret = filemap_add_folio(COMPRESS_MAPPING(sbi), cfolio, 1950 1948 blkaddr, GFP_NOFS); 1951 1949 if (ret) { 1952 - f2fs_put_page(cpage, 0); 1950 + f2fs_folio_put(cfolio, false); 1953 1951 return; 1954 1952 } 1955 1953 1956 - set_page_private_data(cpage, ino); 1954 + set_page_private_data(&cfolio->page, ino); 1957 1955 1958 - memcpy(page_address(cpage), page_address(page), PAGE_SIZE); 1959 - SetPageUptodate(cpage); 1960 - f2fs_put_page(cpage, 1); 1956 + memcpy(folio_address(cfolio), page_address(page), PAGE_SIZE); 1957 + folio_mark_uptodate(cfolio); 1958 + f2fs_folio_put(cfolio, true); 1961 1959 } 1962 1960 1963 - bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page, 1961 + bool f2fs_load_compressed_folio(struct f2fs_sb_info *sbi, struct folio *folio, 1964 1962 block_t blkaddr) 1965 1963 { 1966 - struct page *cpage; 1964 + struct folio *cfolio; 1967 1965 bool hitted = false; 1968 1966 1969 1967 if (!test_opt(sbi, COMPRESS_CACHE)) 1970 1968 return false; 1971 1969 1972 - cpage = f2fs_pagecache_get_page(COMPRESS_MAPPING(sbi), 1970 + cfolio = f2fs_filemap_get_folio(COMPRESS_MAPPING(sbi), 1973 1971 blkaddr, FGP_LOCK | FGP_NOWAIT, GFP_NOFS); 1974 - if (cpage) { 1975 - if (PageUptodate(cpage)) { 1972 + if (!IS_ERR(cfolio)) { 1973 + if (folio_test_uptodate(cfolio)) { 1976 1974 atomic_inc(&sbi->compress_page_hit); 1977 - memcpy(page_address(page), 1978 - page_address(cpage), PAGE_SIZE); 1975 + memcpy(folio_address(folio), 1976 + folio_address(cfolio), folio_size(folio)); 1979 1977 hitted = true; 1980 1978 } 1981 - f2fs_put_page(cpage, 1); 1979 + f2fs_folio_put(cfolio, true); 1982 1980 } 1983 1981 1984 1982 return hitted;
fs/f2fs/data.c (+117, -131)
··· 49 49 50 50 bool f2fs_is_cp_guaranteed(struct page *page) 51 51 { 52 - struct address_space *mapping = page->mapping; 52 + struct address_space *mapping = page_folio(page)->mapping; 53 53 struct inode *inode; 54 54 struct f2fs_sb_info *sbi; 55 55 56 - if (!mapping) 57 - return false; 56 + if (fscrypt_is_bounce_page(page)) 57 + return page_private_gcing(fscrypt_pagecache_page(page)); 58 58 59 59 inode = mapping->host; 60 60 sbi = F2FS_I_SB(inode); ··· 146 146 if (ctx && !ctx->decompression_attempted) 147 147 f2fs_end_read_compressed_page(&folio->page, true, 0, 148 148 in_task); 149 - f2fs_put_page_dic(&folio->page, in_task); 149 + f2fs_put_folio_dic(folio, in_task); 150 150 continue; 151 151 } 152 152 153 153 dec_page_count(F2FS_F_SB(folio), __read_io_type(folio)); 154 - folio_end_read(folio, bio->bi_status == 0); 154 + folio_end_read(folio, bio->bi_status == BLK_STS_OK); 155 155 } 156 156 157 157 if (ctx) ··· 290 290 if (time_to_inject(sbi, FAULT_READ_IO)) 291 291 bio->bi_status = BLK_STS_IOERR; 292 292 293 - if (bio->bi_status) { 293 + if (bio->bi_status != BLK_STS_OK) { 294 294 f2fs_finish_read_bio(bio, intask); 295 295 return; 296 296 } ··· 347 347 348 348 type = WB_DATA_TYPE(&folio->page, false); 349 349 350 - if (unlikely(bio->bi_status)) { 350 + if (unlikely(bio->bi_status != BLK_STS_OK)) { 351 351 mapping_set_error(folio->mapping, -EIO); 352 352 if (type == F2FS_WB_CP_DATA) 353 353 f2fs_stop_checkpoint(sbi, true, 354 354 STOP_CP_REASON_WRITE_FAIL); 355 355 } 356 356 357 - f2fs_bug_on(sbi, folio->mapping == NODE_MAPPING(sbi) && 357 + f2fs_bug_on(sbi, is_node_folio(folio) && 358 358 folio->index != nid_of_node(&folio->page)); 359 359 360 360 dec_page_count(sbi, type); 361 361 if (f2fs_in_warm_node_list(sbi, folio)) 362 - f2fs_del_fsync_node_entry(sbi, &folio->page); 362 + f2fs_del_fsync_node_entry(sbi, folio); 363 363 clear_page_private_gcing(&folio->page); 364 364 folio_end_writeback(folio); 365 365 } ··· 548 548 static bool __has_merged_page(struct bio 
*bio, struct inode *inode, 549 549 struct page *page, nid_t ino) 550 550 { 551 - struct bio_vec *bvec; 552 - struct bvec_iter_all iter_all; 551 + struct folio_iter fi; 553 552 554 553 if (!bio) 555 554 return false; ··· 556 557 if (!inode && !page && !ino) 557 558 return true; 558 559 559 - bio_for_each_segment_all(bvec, bio, iter_all) { 560 - struct page *target = bvec->bv_page; 560 + bio_for_each_folio_all(fi, bio) { 561 + struct folio *target = fi.folio; 561 562 562 - if (fscrypt_is_bounce_page(target)) { 563 - target = fscrypt_pagecache_page(target); 563 + if (fscrypt_is_bounce_folio(target)) { 564 + target = fscrypt_pagecache_folio(target); 564 565 if (IS_ERR(target)) 565 566 continue; 566 567 } 567 - if (f2fs_is_compressed_page(target)) { 568 - target = f2fs_compress_control_page(target); 568 + if (f2fs_is_compressed_page(&target->page)) { 569 + target = f2fs_compress_control_folio(target); 569 570 if (IS_ERR(target)) 570 571 continue; 571 572 } 572 573 573 574 if (inode && inode == target->mapping->host) 574 575 return true; 575 - if (page && page == target) 576 + if (page && page == &target->page) 576 577 return true; 577 - if (ino && ino == ino_of_node(target)) 578 + if (ino && ino == ino_of_node(&target->page)) 578 579 return true; 579 580 } 580 581 ··· 779 780 static int add_ipu_page(struct f2fs_io_info *fio, struct bio **bio, 780 781 struct page *page) 781 782 { 783 + struct folio *fio_folio = page_folio(fio->page); 782 784 struct f2fs_sb_info *sbi = fio->sbi; 783 785 enum temp_type temp; 784 786 bool found = false; ··· 801 801 *fio->last_block, 802 802 fio->new_blkaddr)); 803 803 if (f2fs_crypt_mergeable_bio(*bio, 804 - fio->page->mapping->host, 805 - page_folio(fio->page)->index, fio) && 804 + fio_folio->mapping->host, 805 + fio_folio->index, fio) && 806 806 bio_add_page(*bio, page, PAGE_SIZE, 0) == 807 807 PAGE_SIZE) { 808 808 ret = 0; ··· 826 826 } 827 827 828 828 void f2fs_submit_merged_ipu_write(struct f2fs_sb_info *sbi, 829 - struct bio **bio, 
struct page *page) 829 + struct bio **bio, struct folio *folio) 830 830 { 831 831 enum temp_type temp; 832 832 bool found = false; 833 833 struct bio *target = bio ? *bio : NULL; 834 834 835 - f2fs_bug_on(sbi, !target && !page); 835 + f2fs_bug_on(sbi, !target && !folio); 836 836 837 837 for (temp = HOT; temp < NR_TEMP_TYPE && !found; temp++) { 838 838 struct f2fs_bio_info *io = sbi->write_io[DATA] + temp; ··· 848 848 found = (target == be->bio); 849 849 else 850 850 found = __has_merged_page(be->bio, NULL, 851 - page, 0); 851 + &folio->page, 0); 852 852 if (found) 853 853 break; 854 854 } ··· 865 865 found = (target == be->bio); 866 866 else 867 867 found = __has_merged_page(be->bio, NULL, 868 - page, 0); 868 + &folio->page, 0); 869 869 if (found) { 870 870 target = be->bio; 871 871 del_bio_entry(be); ··· 995 995 if (io->bio && 996 996 (!io_is_mergeable(sbi, io->bio, io, fio, io->last_block_in_bio, 997 997 fio->new_blkaddr) || 998 - !f2fs_crypt_mergeable_bio(io->bio, fio->page->mapping->host, 998 + !f2fs_crypt_mergeable_bio(io->bio, fio_inode(fio), 999 999 page_folio(bio_page)->index, fio))) 1000 1000 __submit_merged_bio(io); 1001 1001 alloc_new: 1002 1002 if (io->bio == NULL) { 1003 1003 io->bio = __bio_alloc(fio, BIO_MAX_VECS); 1004 - f2fs_set_bio_crypt_ctx(io->bio, fio->page->mapping->host, 1004 + f2fs_set_bio_crypt_ctx(io->bio, fio_inode(fio), 1005 1005 page_folio(bio_page)->index, fio, GFP_NOIO); 1006 1006 io->fio = *fio; 1007 1007 } ··· 1116 1116 1117 1117 static void __set_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr) 1118 1118 { 1119 - __le32 *addr = get_dnode_addr(dn->inode, dn->node_page); 1119 + __le32 *addr = get_dnode_addr(dn->inode, dn->node_folio); 1120 1120 1121 1121 dn->data_blkaddr = blkaddr; 1122 1122 addr[dn->ofs_in_node] = cpu_to_le32(dn->data_blkaddr); ··· 1125 1125 /* 1126 1126 * Lock ordering for the change of data block address: 1127 1127 * ->data_page 1128 - * ->node_page 1128 + * ->node_folio 1129 1129 * update block addresses 
in the node page 1130 1130 */ 1131 1131 void f2fs_set_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr) 1132 1132 { 1133 - f2fs_wait_on_page_writeback(dn->node_page, NODE, true, true); 1133 + f2fs_folio_wait_writeback(dn->node_folio, NODE, true, true); 1134 1134 __set_data_blkaddr(dn, blkaddr); 1135 - if (set_page_dirty(dn->node_page)) 1135 + if (folio_mark_dirty(dn->node_folio)) 1136 1136 dn->node_changed = true; 1137 1137 } 1138 1138 ··· 1160 1160 trace_f2fs_reserve_new_blocks(dn->inode, dn->nid, 1161 1161 dn->ofs_in_node, count); 1162 1162 1163 - f2fs_wait_on_page_writeback(dn->node_page, NODE, true, true); 1163 + f2fs_folio_wait_writeback(dn->node_folio, NODE, true, true); 1164 1164 1165 1165 for (; count > 0; dn->ofs_in_node++) { 1166 1166 block_t blkaddr = f2fs_data_blkaddr(dn); ··· 1171 1171 } 1172 1172 } 1173 1173 1174 - if (set_page_dirty(dn->node_page)) 1174 + if (folio_mark_dirty(dn->node_folio)) 1175 1175 dn->node_changed = true; 1176 1176 return 0; 1177 1177 } ··· 1189 1189 1190 1190 int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index) 1191 1191 { 1192 - bool need_put = dn->inode_page ? false : true; 1192 + bool need_put = dn->inode_folio ? false : true; 1193 1193 int err; 1194 1194 1195 1195 err = f2fs_get_dnode_of_data(dn, index, ALLOC_NODE); ··· 1257 1257 * A new dentry page is allocated but not able to be written, since its 1258 1258 * new inode page couldn't be allocated due to -ENOSPC. 1259 1259 * In such the case, its blkaddr can be remained as NEW_ADDR. 1260 - * see, f2fs_add_link -> f2fs_get_new_data_page -> 1260 + * see, f2fs_add_link -> f2fs_get_new_data_folio -> 1261 1261 * f2fs_init_inode_metadata. 1262 1262 */ 1263 1263 if (dn.data_blkaddr == NEW_ADDR) { ··· 1338 1338 * 1339 1339 * Also, caller should grab and release a rwsem by calling f2fs_lock_op() and 1340 1340 * f2fs_unlock_op(). 1341 - * Note that, ipage is set only by make_empty_dir, and if any error occur, 1342 - * ipage should be released by this function. 
1341 + * Note that, ifolio is set only by make_empty_dir, and if any error occur, 1342 + * ifolio should be released by this function. 1343 1343 */ 1344 - struct page *f2fs_get_new_data_page(struct inode *inode, 1345 - struct page *ipage, pgoff_t index, bool new_i_size) 1344 + struct folio *f2fs_get_new_data_folio(struct inode *inode, 1345 + struct folio *ifolio, pgoff_t index, bool new_i_size) 1346 1346 { 1347 1347 struct address_space *mapping = inode->i_mapping; 1348 - struct page *page; 1348 + struct folio *folio; 1349 1349 struct dnode_of_data dn; 1350 1350 int err; 1351 1351 1352 - page = f2fs_grab_cache_page(mapping, index, true); 1353 - if (!page) { 1352 + folio = f2fs_grab_cache_folio(mapping, index, true); 1353 + if (IS_ERR(folio)) { 1354 1354 /* 1355 - * before exiting, we should make sure ipage will be released 1355 + * before exiting, we should make sure ifolio will be released 1356 1356 * if any error occur. 1357 1357 */ 1358 - f2fs_put_page(ipage, 1); 1358 + f2fs_folio_put(ifolio, true); 1359 1359 return ERR_PTR(-ENOMEM); 1360 1360 } 1361 1361 1362 - set_new_dnode(&dn, inode, ipage, NULL, 0); 1362 + set_new_dnode(&dn, inode, ifolio, NULL, 0); 1363 1363 err = f2fs_reserve_block(&dn, index); 1364 1364 if (err) { 1365 - f2fs_put_page(page, 1); 1365 + f2fs_folio_put(folio, true); 1366 1366 return ERR_PTR(err); 1367 1367 } 1368 - if (!ipage) 1368 + if (!ifolio) 1369 1369 f2fs_put_dnode(&dn); 1370 1370 1371 - if (PageUptodate(page)) 1371 + if (folio_test_uptodate(folio)) 1372 1372 goto got_it; 1373 1373 1374 1374 if (dn.data_blkaddr == NEW_ADDR) { 1375 - zero_user_segment(page, 0, PAGE_SIZE); 1376 - if (!PageUptodate(page)) 1377 - SetPageUptodate(page); 1375 + folio_zero_segment(folio, 0, folio_size(folio)); 1376 + if (!folio_test_uptodate(folio)) 1377 + folio_mark_uptodate(folio); 1378 1378 } else { 1379 - f2fs_put_page(page, 1); 1379 + f2fs_folio_put(folio, true); 1380 1380 1381 - /* if ipage exists, blkaddr should be NEW_ADDR */ 1382 - 
f2fs_bug_on(F2FS_I_SB(inode), ipage); 1383 - page = f2fs_get_lock_data_page(inode, index, true); 1384 - if (IS_ERR(page)) 1385 - return page; 1381 + /* if ifolio exists, blkaddr should be NEW_ADDR */ 1382 + f2fs_bug_on(F2FS_I_SB(inode), ifolio); 1383 + folio = f2fs_get_lock_data_folio(inode, index, true); 1384 + if (IS_ERR(folio)) 1385 + return folio; 1386 1386 } 1387 1387 got_it: 1388 1388 if (new_i_size && i_size_read(inode) < 1389 1389 ((loff_t)(index + 1) << PAGE_SHIFT)) 1390 1390 f2fs_i_size_write(inode, ((loff_t)(index + 1) << PAGE_SHIFT)); 1391 - return page; 1391 + return folio; 1392 1392 } 1393 1393 1394 1394 static int __allocate_data_block(struct dnode_of_data *dn, int seg_type) ··· 1589 1589 start_pgofs = pgofs; 1590 1590 prealloc = 0; 1591 1591 last_ofs_in_node = ofs_in_node = dn.ofs_in_node; 1592 - end_offset = ADDRS_PER_PAGE(dn.node_page, inode); 1592 + end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 1593 1593 1594 1594 next_block: 1595 1595 blkaddr = f2fs_data_blkaddr(&dn); ··· 1825 1825 struct fiemap_extent_info *fieinfo) 1826 1826 { 1827 1827 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 1828 - struct page *page; 1829 1828 struct node_info ni; 1830 1829 __u64 phys = 0, len; 1831 1830 __u32 flags; ··· 1833 1834 1834 1835 if (f2fs_has_inline_xattr(inode)) { 1835 1836 int offset; 1837 + struct folio *folio = f2fs_grab_cache_folio(NODE_MAPPING(sbi), 1838 + inode->i_ino, false); 1836 1839 1837 - page = f2fs_grab_cache_page(NODE_MAPPING(sbi), 1838 - inode->i_ino, false); 1839 - if (!page) 1840 - return -ENOMEM; 1840 + if (IS_ERR(folio)) 1841 + return PTR_ERR(folio); 1841 1842 1842 1843 err = f2fs_get_node_info(sbi, inode->i_ino, &ni, false); 1843 1844 if (err) { 1844 - f2fs_put_page(page, 1); 1845 + f2fs_folio_put(folio, true); 1845 1846 return err; 1846 1847 } 1847 1848 ··· 1853 1854 phys += offset; 1854 1855 len = inline_xattr_size(inode); 1855 1856 1856 - f2fs_put_page(page, 1); 1857 + f2fs_folio_put(folio, true); 1857 1858 1858 1859 flags = 
FIEMAP_EXTENT_DATA_INLINE | FIEMAP_EXTENT_NOT_ALIGNED; 1859 1860 ··· 1867 1868 } 1868 1869 1869 1870 if (xnid) { 1870 - page = f2fs_grab_cache_page(NODE_MAPPING(sbi), xnid, false); 1871 - if (!page) 1872 - return -ENOMEM; 1871 + struct folio *folio = f2fs_grab_cache_folio(NODE_MAPPING(sbi), 1872 + xnid, false); 1873 + 1874 + if (IS_ERR(folio)) 1875 + return PTR_ERR(folio); 1873 1876 1874 1877 err = f2fs_get_node_info(sbi, xnid, &ni, false); 1875 1878 if (err) { 1876 - f2fs_put_page(page, 1); 1879 + f2fs_folio_put(folio, true); 1877 1880 return err; 1878 1881 } 1879 1882 1880 1883 phys = F2FS_BLK_TO_BYTES(ni.blk_addr); 1881 1884 len = inode->i_sb->s_blocksize; 1882 1885 1883 - f2fs_put_page(page, 1); 1886 + f2fs_folio_put(folio, true); 1884 1887 1885 1888 flags = FIEMAP_EXTENT_LAST; 1886 1889 } ··· 2078 2077 sector_t last_block; 2079 2078 sector_t last_block_in_file; 2080 2079 sector_t block_nr; 2081 - pgoff_t index = folio_index(folio); 2080 + pgoff_t index = folio->index; 2082 2081 int ret = 0; 2083 2082 2084 2083 block_in_file = (sector_t)index; ··· 2246 2245 for (i = 1; i < cc->cluster_size; i++) { 2247 2246 block_t blkaddr; 2248 2247 2249 - blkaddr = from_dnode ? data_blkaddr(dn.inode, dn.node_page, 2248 + blkaddr = from_dnode ? data_blkaddr(dn.inode, dn.node_folio, 2250 2249 dn.ofs_in_node + i) : 2251 2250 ei.blk + i - 1; 2252 2251 ··· 2280 2279 block_t blkaddr; 2281 2280 struct bio_post_read_ctx *ctx; 2282 2281 2283 - blkaddr = from_dnode ? data_blkaddr(dn.inode, dn.node_page, 2282 + blkaddr = from_dnode ? 
data_blkaddr(dn.inode, dn.node_folio, 2284 2283 dn.ofs_in_node + i + 1) : 2285 2284 ei.blk + i; 2286 2285 2287 2286 f2fs_wait_on_block_writeback(inode, blkaddr); 2288 2287 2289 - if (f2fs_load_compressed_page(sbi, folio_page(folio, 0), 2290 - blkaddr)) { 2288 + if (f2fs_load_compressed_folio(sbi, folio, blkaddr)) { 2291 2289 if (atomic_dec_and_test(&dic->remaining_pages)) { 2292 2290 f2fs_decompress_cluster(dic, true); 2293 2291 break; ··· 2392 2392 } 2393 2393 2394 2394 #ifdef CONFIG_F2FS_FS_COMPRESSION 2395 - index = folio_index(folio); 2395 + index = folio->index; 2396 2396 2397 2397 if (!f2fs_compressed_file(inode)) 2398 2398 goto read_single_page; ··· 2501 2501 2502 2502 int f2fs_encrypt_one_page(struct f2fs_io_info *fio) 2503 2503 { 2504 - struct inode *inode = fio->page->mapping->host; 2505 - struct page *mpage, *page; 2504 + struct inode *inode = fio_inode(fio); 2505 + struct folio *mfolio; 2506 + struct page *page; 2506 2507 gfp_t gfp_flags = GFP_NOFS; 2507 2508 2508 2509 if (!f2fs_encrypted_file(inode)) ··· 2528 2527 return PTR_ERR(fio->encrypted_page); 2529 2528 } 2530 2529 2531 - mpage = find_lock_page(META_MAPPING(fio->sbi), fio->old_blkaddr); 2532 - if (mpage) { 2533 - if (PageUptodate(mpage)) 2534 - memcpy(page_address(mpage), 2530 + mfolio = filemap_lock_folio(META_MAPPING(fio->sbi), fio->old_blkaddr); 2531 + if (!IS_ERR(mfolio)) { 2532 + if (folio_test_uptodate(mfolio)) 2533 + memcpy(folio_address(mfolio), 2535 2534 page_address(fio->encrypted_page), PAGE_SIZE); 2536 - f2fs_put_page(mpage, 1); 2535 + f2fs_folio_put(mfolio, true); 2537 2536 } 2538 2537 return 0; 2539 2538 } ··· 2632 2631 2633 2632 static inline bool need_inplace_update(struct f2fs_io_info *fio) 2634 2633 { 2635 - struct inode *inode = fio->page->mapping->host; 2634 + struct inode *inode = fio_inode(fio); 2636 2635 2637 2636 if (f2fs_should_update_outplace(inode, fio)) 2638 2637 return false; ··· 2856 2855 goto done; 2857 2856 } 2858 2857 2859 - if (!wbc->for_reclaim) 2860 - 
need_balance_fs = true; 2861 - else if (has_not_enough_free_secs(sbi, 0, 0)) 2862 - goto redirty_out; 2863 - else 2864 - set_inode_flag(inode, FI_HOT_DATA); 2865 - 2858 + need_balance_fs = true; 2866 2859 err = -EAGAIN; 2867 2860 if (f2fs_has_inline_data(inode)) { 2868 2861 err = f2fs_write_inline_data(inode, folio); ··· 2892 2897 folio_clear_uptodate(folio); 2893 2898 clear_page_private_gcing(page); 2894 2899 } 2895 - 2896 - if (wbc->for_reclaim) { 2897 - f2fs_submit_merged_write_cond(sbi, NULL, page, 0, DATA); 2898 - clear_inode_flag(inode, FI_HOT_DATA); 2899 - f2fs_remove_dirty_inode(inode); 2900 - submitted = NULL; 2901 - } 2902 2900 folio_unlock(folio); 2903 2901 if (!S_ISDIR(inode->i_mode) && !IS_NOQUOTA(inode) && 2904 2902 !F2FS_I(inode)->wb_task && allow_balance) ··· 2917 2929 * file_write_and_wait_range() will see EIO error, which is critical 2918 2930 * to return value of fsync() followed by atomic_write failure to user. 2919 2931 */ 2920 - if (!err || wbc->for_reclaim) 2921 - return AOP_WRITEPAGE_ACTIVATE; 2922 2932 folio_unlock(folio); 2933 + if (!err) 2934 + return 1; 2923 2935 return err; 2924 2936 } 2925 2937 ··· 3116 3128 if (folio_test_writeback(folio)) { 3117 3129 if (wbc->sync_mode == WB_SYNC_NONE) 3118 3130 goto continue_unlock; 3119 - f2fs_wait_on_page_writeback(&folio->page, DATA, true, true); 3131 + f2fs_folio_wait_writeback(folio, DATA, true, true); 3120 3132 } 3121 3133 3122 3134 if (!folio_clear_dirty_for_io(folio)) ··· 3133 3145 ret = f2fs_write_single_data_page(folio, 3134 3146 &submitted, &bio, &last_block, 3135 3147 wbc, io_type, 0, true); 3136 - if (ret == AOP_WRITEPAGE_ACTIVATE) 3137 - folio_unlock(folio); 3138 3148 #ifdef CONFIG_F2FS_FS_COMPRESSION 3139 3149 result: 3140 3150 #endif ··· 3144 3158 * keep nr_to_write, since vfs uses this to 3145 3159 * get # of written pages. 
3146 3160 */ 3147 - if (ret == AOP_WRITEPAGE_ACTIVATE) { 3161 + if (ret == 1) { 3148 3162 ret = 0; 3149 3163 goto next; 3150 3164 } else if (ret == -EAGAIN) { ··· 3338 3352 struct inode *inode = folio->mapping->host; 3339 3353 pgoff_t index = folio->index; 3340 3354 struct dnode_of_data dn; 3341 - struct page *ipage; 3355 + struct folio *ifolio; 3342 3356 bool locked = false; 3343 3357 int flag = F2FS_GET_BLOCK_PRE_AIO; 3344 3358 int err = 0; ··· 3363 3377 3364 3378 restart: 3365 3379 /* check inline_data */ 3366 - ipage = f2fs_get_inode_page(sbi, inode->i_ino); 3367 - if (IS_ERR(ipage)) { 3368 - err = PTR_ERR(ipage); 3380 + ifolio = f2fs_get_inode_folio(sbi, inode->i_ino); 3381 + if (IS_ERR(ifolio)) { 3382 + err = PTR_ERR(ifolio); 3369 3383 goto unlock_out; 3370 3384 } 3371 3385 3372 - set_new_dnode(&dn, inode, ipage, ipage, 0); 3386 + set_new_dnode(&dn, inode, ifolio, ifolio, 0); 3373 3387 3374 3388 if (f2fs_has_inline_data(inode)) { 3375 3389 if (pos + len <= MAX_INLINE_DATA(inode)) { 3376 - f2fs_do_read_inline_data(folio, ipage); 3390 + f2fs_do_read_inline_data(folio, ifolio); 3377 3391 set_inode_flag(inode, FI_DATA_EXIST); 3378 3392 if (inode->i_nlink) 3379 - set_page_private_inline(ipage); 3393 + set_page_private_inline(&ifolio->page); 3380 3394 goto out; 3381 3395 } 3382 - err = f2fs_convert_inline_page(&dn, folio_page(folio, 0)); 3396 + err = f2fs_convert_inline_folio(&dn, folio); 3383 3397 if (err || dn.data_blkaddr != NULL_ADDR) 3384 3398 goto out; 3385 3399 } ··· 3423 3437 block_t *blk_addr) 3424 3438 { 3425 3439 struct dnode_of_data dn; 3426 - struct page *ipage; 3440 + struct folio *ifolio; 3427 3441 int err = 0; 3428 3442 3429 - ipage = f2fs_get_inode_page(F2FS_I_SB(inode), inode->i_ino); 3430 - if (IS_ERR(ipage)) 3431 - return PTR_ERR(ipage); 3443 + ifolio = f2fs_get_inode_folio(F2FS_I_SB(inode), inode->i_ino); 3444 + if (IS_ERR(ifolio)) 3445 + return PTR_ERR(ifolio); 3432 3446 3433 - set_new_dnode(&dn, inode, ipage, ipage, 0); 3447 + 
set_new_dnode(&dn, inode, ifolio, ifolio, 0); 3434 3448 3435 3449 if (!f2fs_lookup_read_extent_cache_block(inode, index, 3436 3450 &dn.data_blkaddr)) { ··· 3451 3465 { 3452 3466 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 3453 3467 struct dnode_of_data dn; 3454 - struct page *ipage; 3468 + struct folio *ifolio; 3455 3469 int err = 0; 3456 3470 3457 3471 f2fs_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO); 3458 3472 3459 - ipage = f2fs_get_inode_page(sbi, inode->i_ino); 3460 - if (IS_ERR(ipage)) { 3461 - err = PTR_ERR(ipage); 3473 + ifolio = f2fs_get_inode_folio(sbi, inode->i_ino); 3474 + if (IS_ERR(ifolio)) { 3475 + err = PTR_ERR(ifolio); 3462 3476 goto unlock_out; 3463 3477 } 3464 - set_new_dnode(&dn, inode, ipage, ipage, 0); 3478 + set_new_dnode(&dn, inode, ifolio, ifolio, 0); 3465 3479 3466 3480 if (!f2fs_lookup_read_extent_cache_block(dn.inode, index, 3467 3481 &dn.data_blkaddr)) ··· 3609 3623 } 3610 3624 } 3611 3625 3612 - f2fs_wait_on_page_writeback(&folio->page, DATA, false, true); 3626 + f2fs_folio_wait_writeback(folio, DATA, false, true); 3613 3627 3614 3628 if (len == folio_size(folio) || folio_test_uptodate(folio)) 3615 3629 return 0; ··· 3864 3878 set_inode_flag(inode, FI_SKIP_WRITES); 3865 3879 3866 3880 for (blkofs = 0; blkofs <= blkofs_end; blkofs++) { 3867 - struct page *page; 3881 + struct folio *folio; 3868 3882 unsigned int blkidx = secidx * blk_per_sec + blkofs; 3869 3883 3870 - page = f2fs_get_lock_data_page(inode, blkidx, true); 3871 - if (IS_ERR(page)) { 3884 + folio = f2fs_get_lock_data_folio(inode, blkidx, true); 3885 + if (IS_ERR(folio)) { 3872 3886 f2fs_up_write(&sbi->pin_sem); 3873 - ret = PTR_ERR(page); 3887 + ret = PTR_ERR(folio); 3874 3888 goto done; 3875 3889 } 3876 3890 3877 - set_page_dirty(page); 3878 - f2fs_put_page(page, 1); 3891 + folio_mark_dirty(folio); 3892 + f2fs_folio_put(folio, true); 3879 3893 } 3880 3894 3881 3895 clear_inode_flag(inode, FI_SKIP_WRITES); ··· 3952 3966 3953 3967 if ((pblock - SM_I(sbi)->main_blkaddr) % 
blks_per_sec || 3954 3968 nr_pblocks % blks_per_sec || 3955 - !f2fs_valid_pinned_area(sbi, pblock)) { 3969 + f2fs_is_sequential_zone_area(sbi, pblock)) { 3956 3970 bool last_extent = false; 3957 3971 3958 3972 not_aligned++;
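The converted lookups in this data.c hunk all follow the kernel's error-pointer convention: `f2fs_get_inode_folio()` returns either a valid `struct folio *` or an errno encoded in the pointer itself, which callers decode with `IS_ERR()`/`PTR_ERR()`. A minimal userspace sketch of that convention (a toy reimplementation of the `include/linux/err.h` helpers, not the kernel headers; `lookup_folio()` is a hypothetical stand-in):

```c
#include <assert.h>

/* Toy versions of the kernel's error-pointer helpers: errnos in the
 * range [-4095, -1] map into the top page of the address space, so a
 * single pointer can carry either a valid object or an error code. */
#define MAX_ERRNO 4095

void *ERR_PTR(long error)      { return (void *)error; }
long  PTR_ERR(const void *ptr) { return (long)ptr; }
int   IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical lookup mimicking f2fs_get_inode_folio()'s contract. */
void *lookup_folio(int fail)
{
	static int object;	/* stands in for a struct folio */

	return fail ? ERR_PTR(-12 /* ENOMEM */) : &object;
}
```

Callers then branch exactly as the diff does: `ifolio = lookup_folio(...); if (IS_ERR(ifolio)) err = PTR_ERR(ifolio);`.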
+121 -122
fs/f2fs/dir.c
··· 173 173 } 174 174 175 175 static struct f2fs_dir_entry *find_in_block(struct inode *dir, 176 - struct page *dentry_page, 176 + struct folio *dentry_folio, 177 177 const struct f2fs_filename *fname, 178 178 int *max_slots, 179 179 bool use_hash) ··· 181 181 struct f2fs_dentry_block *dentry_blk; 182 182 struct f2fs_dentry_ptr d; 183 183 184 - dentry_blk = (struct f2fs_dentry_block *)page_address(dentry_page); 184 + dentry_blk = folio_address(dentry_folio); 185 185 186 186 make_dentry_ptr_block(dir, &d, dentry_blk); 187 187 return f2fs_find_target_dentry(&d, fname, max_slots, use_hash); ··· 260 260 static struct f2fs_dir_entry *find_in_level(struct inode *dir, 261 261 unsigned int level, 262 262 const struct f2fs_filename *fname, 263 - struct page **res_page, 263 + struct folio **res_folio, 264 264 bool use_hash) 265 265 { 266 266 int s = GET_DENTRY_SLOTS(fname->disk_name.len); 267 267 unsigned int nbucket, nblock; 268 268 unsigned int bidx, end_block, bucket_no; 269 - struct page *dentry_page; 270 269 struct f2fs_dir_entry *de = NULL; 271 270 pgoff_t next_pgofs; 272 271 bool room = false; ··· 283 284 284 285 while (bidx < end_block) { 285 286 /* no need to allocate new dentry pages to all the indices */ 286 - dentry_page = f2fs_find_data_page(dir, bidx, &next_pgofs); 287 - if (IS_ERR(dentry_page)) { 288 - if (PTR_ERR(dentry_page) == -ENOENT) { 287 + struct folio *dentry_folio; 288 + dentry_folio = f2fs_find_data_folio(dir, bidx, &next_pgofs); 289 + if (IS_ERR(dentry_folio)) { 290 + if (PTR_ERR(dentry_folio) == -ENOENT) { 289 291 room = true; 290 292 bidx = next_pgofs; 291 293 continue; 292 294 } else { 293 - *res_page = dentry_page; 295 + *res_folio = dentry_folio; 294 296 break; 295 297 } 296 298 } 297 299 298 - de = find_in_block(dir, dentry_page, fname, &max_slots, use_hash); 300 + de = find_in_block(dir, dentry_folio, fname, &max_slots, use_hash); 299 301 if (IS_ERR(de)) { 300 - *res_page = ERR_CAST(de); 302 + *res_folio = ERR_CAST(de); 301 303 de = NULL; 302 
304 break; 303 305 } else if (de) { 304 - *res_page = dentry_page; 306 + *res_folio = dentry_folio; 305 307 break; 306 308 } 307 309 308 310 if (max_slots >= s) 309 311 room = true; 310 - f2fs_put_page(dentry_page, 0); 312 + f2fs_folio_put(dentry_folio, false); 311 313 312 314 bidx++; 313 315 } ··· 329 329 330 330 struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir, 331 331 const struct f2fs_filename *fname, 332 - struct page **res_page) 332 + struct folio **res_folio) 333 333 { 334 334 unsigned long npages = dir_blocks(dir); 335 335 struct f2fs_dir_entry *de = NULL; ··· 337 337 unsigned int level; 338 338 bool use_hash = true; 339 339 340 - *res_page = NULL; 340 + *res_folio = NULL; 341 341 342 342 #if IS_ENABLED(CONFIG_UNICODE) 343 343 start_find_entry: 344 344 #endif 345 345 if (f2fs_has_inline_dentry(dir)) { 346 - de = f2fs_find_in_inline_dir(dir, fname, res_page, use_hash); 346 + de = f2fs_find_in_inline_dir(dir, fname, res_folio, use_hash); 347 347 goto out; 348 348 } 349 349 ··· 359 359 } 360 360 361 361 for (level = 0; level < max_depth; level++) { 362 - de = find_in_level(dir, level, fname, res_page, use_hash); 363 - if (de || IS_ERR(*res_page)) 362 + de = find_in_level(dir, level, fname, res_folio, use_hash); 363 + if (de || IS_ERR(*res_folio)) 364 364 break; 365 365 } 366 366 367 367 out: 368 368 #if IS_ENABLED(CONFIG_UNICODE) 369 - if (IS_CASEFOLDED(dir) && !de && use_hash) { 369 + if (!sb_no_casefold_compat_fallback(dir->i_sb) && 370 + IS_CASEFOLDED(dir) && !de && use_hash) { 370 371 use_hash = false; 371 372 goto start_find_entry; 372 373 } ··· 385 384 * Entry is guaranteed to be valid. 
386 385 */ 387 386 struct f2fs_dir_entry *f2fs_find_entry(struct inode *dir, 388 - const struct qstr *child, struct page **res_page) 387 + const struct qstr *child, struct folio **res_folio) 389 388 { 390 389 struct f2fs_dir_entry *de = NULL; 391 390 struct f2fs_filename fname; ··· 394 393 err = f2fs_setup_filename(dir, child, 1, &fname); 395 394 if (err) { 396 395 if (err == -ENOENT) 397 - *res_page = NULL; 396 + *res_folio = NULL; 398 397 else 399 - *res_page = ERR_PTR(err); 398 + *res_folio = ERR_PTR(err); 400 399 return NULL; 401 400 } 402 401 403 - de = __f2fs_find_entry(dir, &fname, res_page); 402 + de = __f2fs_find_entry(dir, &fname, res_folio); 404 403 405 404 f2fs_free_filename(&fname); 406 405 return de; 407 406 } 408 407 409 - struct f2fs_dir_entry *f2fs_parent_dir(struct inode *dir, struct page **p) 408 + struct f2fs_dir_entry *f2fs_parent_dir(struct inode *dir, struct folio **f) 410 409 { 411 - return f2fs_find_entry(dir, &dotdot_name, p); 410 + return f2fs_find_entry(dir, &dotdot_name, f); 412 411 } 413 412 414 413 ino_t f2fs_inode_by_name(struct inode *dir, const struct qstr *qstr, 415 - struct page **page) 414 + struct folio **folio) 416 415 { 417 416 ino_t res = 0; 418 417 struct f2fs_dir_entry *de; 419 418 420 - de = f2fs_find_entry(dir, qstr, page); 419 + de = f2fs_find_entry(dir, qstr, folio); 421 420 if (de) { 422 421 res = le32_to_cpu(de->ino); 423 - f2fs_put_page(*page, 0); 422 + f2fs_folio_put(*folio, false); 424 423 } 425 424 426 425 return res; 427 426 } 428 427 429 428 void f2fs_set_link(struct inode *dir, struct f2fs_dir_entry *de, 430 - struct page *page, struct inode *inode) 429 + struct folio *folio, struct inode *inode) 431 430 { 432 431 enum page_type type = f2fs_has_inline_dentry(dir) ? 
NODE : DATA; 433 432 434 - lock_page(page); 435 - f2fs_wait_on_page_writeback(page, type, true, true); 433 + folio_lock(folio); 434 + f2fs_folio_wait_writeback(folio, type, true, true); 436 435 de->ino = cpu_to_le32(inode->i_ino); 437 436 de->file_type = fs_umode_to_ftype(inode->i_mode); 438 - set_page_dirty(page); 437 + folio_mark_dirty(folio); 439 438 440 439 inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir)); 441 440 f2fs_mark_inode_dirty_sync(dir, false); 442 - f2fs_put_page(page, 1); 441 + f2fs_folio_put(folio, true); 443 442 } 444 443 445 444 static void init_dent_inode(struct inode *dir, struct inode *inode, 446 445 const struct f2fs_filename *fname, 447 - struct page *ipage) 446 + struct folio *ifolio) 448 447 { 449 448 struct f2fs_inode *ri; 450 449 451 450 if (!fname) /* tmpfile case? */ 452 451 return; 453 452 454 - f2fs_wait_on_page_writeback(ipage, NODE, true, true); 453 + f2fs_folio_wait_writeback(ifolio, NODE, true, true); 455 454 456 - /* copy name info. to this inode page */ 457 - ri = F2FS_INODE(ipage); 455 + /* copy name info. 
to this inode folio */ 456 + ri = F2FS_INODE(&ifolio->page); 458 457 ri->i_namelen = cpu_to_le32(fname->disk_name.len); 459 458 memcpy(ri->i_name, fname->disk_name.name, fname->disk_name.len); 460 459 if (IS_ENCRYPTED(dir)) { ··· 475 474 file_lost_pino(inode); 476 475 } 477 476 } 478 - set_page_dirty(ipage); 477 + folio_mark_dirty(ifolio); 479 478 } 480 479 481 480 void f2fs_do_make_empty_dir(struct inode *inode, struct inode *parent, ··· 492 491 } 493 492 494 493 static int make_empty_dir(struct inode *inode, 495 - struct inode *parent, struct page *page) 494 + struct inode *parent, struct folio *folio) 496 495 { 497 - struct page *dentry_page; 496 + struct folio *dentry_folio; 498 497 struct f2fs_dentry_block *dentry_blk; 499 498 struct f2fs_dentry_ptr d; 500 499 501 500 if (f2fs_has_inline_dentry(inode)) 502 - return f2fs_make_empty_inline_dir(inode, parent, page); 501 + return f2fs_make_empty_inline_dir(inode, parent, folio); 503 502 504 - dentry_page = f2fs_get_new_data_page(inode, page, 0, true); 505 - if (IS_ERR(dentry_page)) 506 - return PTR_ERR(dentry_page); 503 + dentry_folio = f2fs_get_new_data_folio(inode, folio, 0, true); 504 + if (IS_ERR(dentry_folio)) 505 + return PTR_ERR(dentry_folio); 507 506 508 - dentry_blk = page_address(dentry_page); 507 + dentry_blk = folio_address(dentry_folio); 509 508 510 509 make_dentry_ptr_block(NULL, &d, dentry_blk); 511 510 f2fs_do_make_empty_dir(inode, parent, &d); 512 511 513 - set_page_dirty(dentry_page); 514 - f2fs_put_page(dentry_page, 1); 512 + folio_mark_dirty(dentry_folio); 513 + f2fs_folio_put(dentry_folio, true); 515 514 return 0; 516 515 } 517 516 518 - struct page *f2fs_init_inode_metadata(struct inode *inode, struct inode *dir, 519 - const struct f2fs_filename *fname, struct page *dpage) 517 + struct folio *f2fs_init_inode_metadata(struct inode *inode, struct inode *dir, 518 + const struct f2fs_filename *fname, struct folio *dfolio) 520 519 { 521 - struct page *page; 520 + struct folio *folio; 522 521 int 
err; 523 522 524 523 if (is_inode_flag_set(inode, FI_NEW_INODE)) { 525 - page = f2fs_new_inode_page(inode); 526 - if (IS_ERR(page)) 527 - return page; 524 + folio = f2fs_new_inode_folio(inode); 525 + if (IS_ERR(folio)) 526 + return folio; 528 527 529 528 if (S_ISDIR(inode->i_mode)) { 530 529 /* in order to handle error case */ 531 - get_page(page); 532 - err = make_empty_dir(inode, dir, page); 530 + folio_get(folio); 531 + err = make_empty_dir(inode, dir, folio); 533 532 if (err) { 534 - lock_page(page); 533 + folio_lock(folio); 535 534 goto put_error; 536 535 } 537 - put_page(page); 536 + folio_put(folio); 538 537 } 539 538 540 - err = f2fs_init_acl(inode, dir, page, dpage); 539 + err = f2fs_init_acl(inode, dir, folio, dfolio); 541 540 if (err) 542 541 goto put_error; 543 542 544 543 err = f2fs_init_security(inode, dir, 545 - fname ? fname->usr_fname : NULL, page); 544 + fname ? fname->usr_fname : NULL, 545 + folio); 546 546 if (err) 547 547 goto put_error; 548 548 549 549 if (IS_ENCRYPTED(inode)) { 550 - err = fscrypt_set_context(inode, page); 550 + err = fscrypt_set_context(inode, folio); 551 551 if (err) 552 552 goto put_error; 553 553 } 554 554 } else { 555 - page = f2fs_get_inode_page(F2FS_I_SB(dir), inode->i_ino); 556 - if (IS_ERR(page)) 557 - return page; 555 + folio = f2fs_get_inode_folio(F2FS_I_SB(dir), inode->i_ino); 556 + if (IS_ERR(folio)) 557 + return folio; 558 558 } 559 559 560 - init_dent_inode(dir, inode, fname, page); 560 + init_dent_inode(dir, inode, fname, folio); 561 561 562 562 /* 563 563 * This file should be checkpointed during fsync. 
··· 575 573 f2fs_remove_orphan_inode(F2FS_I_SB(dir), inode->i_ino); 576 574 f2fs_i_links_write(inode, true); 577 575 } 578 - return page; 576 + return folio; 579 577 580 578 put_error: 581 579 clear_nlink(inode); 582 - f2fs_update_inode(inode, page); 583 - f2fs_put_page(page, 1); 580 + f2fs_update_inode(inode, folio); 581 + f2fs_folio_put(folio, true); 584 582 return ERR_PTR(err); 585 583 } 586 584 ··· 622 620 goto next; 623 621 } 624 622 625 - bool f2fs_has_enough_room(struct inode *dir, struct page *ipage, 623 + bool f2fs_has_enough_room(struct inode *dir, struct folio *ifolio, 626 624 const struct f2fs_filename *fname) 627 625 { 628 626 struct f2fs_dentry_ptr d; 629 627 unsigned int bit_pos; 630 628 int slots = GET_DENTRY_SLOTS(fname->disk_name.len); 631 629 632 - make_dentry_ptr_inline(dir, &d, inline_data_addr(dir, ipage)); 630 + make_dentry_ptr_inline(dir, &d, inline_data_addr(dir, ifolio)); 633 631 634 632 bit_pos = f2fs_room_for_filename(d.bitmap, slots, d.max); 635 633 ··· 666 664 unsigned int current_depth; 667 665 unsigned long bidx, block; 668 666 unsigned int nbucket, nblock; 669 - struct page *dentry_page = NULL; 667 + struct folio *dentry_folio = NULL; 670 668 struct f2fs_dentry_block *dentry_blk = NULL; 671 669 struct f2fs_dentry_ptr d; 672 - struct page *page = NULL; 670 + struct folio *folio = NULL; 673 671 int slots, err = 0; 674 672 675 673 level = 0; ··· 699 697 (le32_to_cpu(fname->hash) % nbucket)); 700 698 701 699 for (block = bidx; block <= (bidx + nblock - 1); block++) { 702 - dentry_page = f2fs_get_new_data_page(dir, NULL, block, true); 703 - if (IS_ERR(dentry_page)) 704 - return PTR_ERR(dentry_page); 700 + dentry_folio = f2fs_get_new_data_folio(dir, NULL, block, true); 701 + if (IS_ERR(dentry_folio)) 702 + return PTR_ERR(dentry_folio); 705 703 706 - dentry_blk = page_address(dentry_page); 704 + dentry_blk = folio_address(dentry_folio); 707 705 bit_pos = f2fs_room_for_filename(&dentry_blk->dentry_bitmap, 708 706 slots, NR_DENTRY_IN_BLOCK); 
709 707 if (bit_pos < NR_DENTRY_IN_BLOCK) 710 708 goto add_dentry; 711 709 712 - f2fs_put_page(dentry_page, 1); 710 + f2fs_folio_put(dentry_folio, true); 713 711 } 714 712 715 713 /* Move to next level to find the empty slot for new dentry */ 716 714 ++level; 717 715 goto start; 718 716 add_dentry: 719 - f2fs_wait_on_page_writeback(dentry_page, DATA, true, true); 717 + f2fs_folio_wait_writeback(dentry_folio, DATA, true, true); 720 718 721 719 if (inode) { 722 720 f2fs_down_write(&F2FS_I(inode)->i_sem); 723 - page = f2fs_init_inode_metadata(inode, dir, fname, NULL); 724 - if (IS_ERR(page)) { 725 - err = PTR_ERR(page); 721 + folio = f2fs_init_inode_metadata(inode, dir, fname, NULL); 722 + if (IS_ERR(folio)) { 723 + err = PTR_ERR(folio); 726 724 goto fail; 727 725 } 728 726 } ··· 731 729 f2fs_update_dentry(ino, mode, &d, &fname->disk_name, fname->hash, 732 730 bit_pos); 733 731 734 - set_page_dirty(dentry_page); 732 + folio_mark_dirty(dentry_folio); 735 733 736 734 if (inode) { 737 735 f2fs_i_pino_write(inode, dir->i_ino); 738 736 739 737 /* synchronize inode page's data from inode cache */ 740 738 if (is_inode_flag_set(inode, FI_NEW_INODE)) 741 - f2fs_update_inode(inode, page); 739 + f2fs_update_inode(inode, folio); 742 740 743 - f2fs_put_page(page, 1); 741 + f2fs_folio_put(folio, true); 744 742 } 745 743 746 744 f2fs_update_parent_metadata(dir, inode, current_depth); ··· 748 746 if (inode) 749 747 f2fs_up_write(&F2FS_I(inode)->i_sem); 750 748 751 - f2fs_put_page(dentry_page, 1); 749 + f2fs_folio_put(dentry_folio, true); 752 750 753 751 return err; 754 752 } ··· 782 780 struct inode *inode, nid_t ino, umode_t mode) 783 781 { 784 782 struct f2fs_filename fname; 785 - struct page *page = NULL; 783 + struct folio *folio = NULL; 786 784 struct f2fs_dir_entry *de = NULL; 787 785 int err; 788 786 ··· 798 796 * consistency more. 
799 797 */ 800 798 if (current != F2FS_I(dir)->task) { 801 - de = __f2fs_find_entry(dir, &fname, &page); 799 + de = __f2fs_find_entry(dir, &fname, &folio); 802 800 F2FS_I(dir)->task = NULL; 803 801 } 804 802 if (de) { 805 - f2fs_put_page(page, 0); 803 + f2fs_folio_put(folio, false); 806 804 err = -EEXIST; 807 - } else if (IS_ERR(page)) { 808 - err = PTR_ERR(page); 805 + } else if (IS_ERR(folio)) { 806 + err = PTR_ERR(folio); 809 807 } else { 810 808 err = f2fs_add_dentry(dir, &fname, inode, ino, mode); 811 809 } ··· 816 814 int f2fs_do_tmpfile(struct inode *inode, struct inode *dir, 817 815 struct f2fs_filename *fname) 818 816 { 819 - struct page *page; 817 + struct folio *folio; 820 818 int err = 0; 821 819 822 820 f2fs_down_write(&F2FS_I(inode)->i_sem); 823 - page = f2fs_init_inode_metadata(inode, dir, fname, NULL); 824 - if (IS_ERR(page)) { 825 - err = PTR_ERR(page); 821 + folio = f2fs_init_inode_metadata(inode, dir, fname, NULL); 822 + if (IS_ERR(folio)) { 823 + err = PTR_ERR(folio); 826 824 goto fail; 827 825 } 828 - f2fs_put_page(page, 1); 826 + f2fs_folio_put(folio, true); 829 827 830 828 clear_inode_flag(inode, FI_NEW_INODE); 831 829 f2fs_update_time(F2FS_I_SB(inode), REQ_TIME); ··· 861 859 * It only removes the dentry from the dentry page, corresponding name 862 860 * entry in name page does not need to be touched during deletion. 
863 861 */ 864 - void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page, 862 + void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct folio *folio, 865 863 struct inode *dir, struct inode *inode) 866 864 { 867 - struct f2fs_dentry_block *dentry_blk; 865 + struct f2fs_dentry_block *dentry_blk; 868 866 unsigned int bit_pos; 869 867 int slots = GET_DENTRY_SLOTS(le16_to_cpu(dentry->name_len)); 870 - pgoff_t index = page_folio(page)->index; 868 + pgoff_t index = folio->index; 871 869 int i; 872 870 873 871 f2fs_update_time(F2FS_I_SB(dir), REQ_TIME); ··· 876 874 f2fs_add_ino_entry(F2FS_I_SB(dir), dir->i_ino, TRANS_DIR_INO); 877 875 878 876 if (f2fs_has_inline_dentry(dir)) 879 - return f2fs_delete_inline_entry(dentry, page, dir, inode); 877 + return f2fs_delete_inline_entry(dentry, folio, dir, inode); 880 878 881 - lock_page(page); 882 - f2fs_wait_on_page_writeback(page, DATA, true, true); 879 + folio_lock(folio); 880 + f2fs_folio_wait_writeback(folio, DATA, true, true); 883 881 884 - dentry_blk = page_address(page); 882 + dentry_blk = folio_address(folio); 885 883 bit_pos = dentry - dentry_blk->dentry; 886 884 for (i = 0; i < slots; i++) 887 885 __clear_bit_le(bit_pos + i, &dentry_blk->dentry_bitmap); ··· 890 888 bit_pos = find_next_bit_le(&dentry_blk->dentry_bitmap, 891 889 NR_DENTRY_IN_BLOCK, 892 890 0); 893 - set_page_dirty(page); 891 + folio_mark_dirty(folio); 894 892 895 893 if (bit_pos == NR_DENTRY_IN_BLOCK && 896 894 !f2fs_truncate_hole(dir, index, index + 1)) { 897 - f2fs_clear_page_cache_dirty_tag(page_folio(page)); 898 - clear_page_dirty_for_io(page); 899 - ClearPageUptodate(page); 900 - clear_page_private_all(page); 895 + f2fs_clear_page_cache_dirty_tag(folio); 896 + folio_clear_dirty_for_io(folio); 897 + folio_clear_uptodate(folio); 898 + clear_page_private_all(&folio->page); 901 899 902 900 inode_dec_dirty_pages(dir); 903 901 f2fs_remove_dirty_inode(dir); 904 902 } 905 - f2fs_put_page(page, 1); 903 + f2fs_folio_put(folio, true); 906 
904 907 905 inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir)); 908 906 f2fs_mark_inode_dirty_sync(dir, false); ··· 914 912 bool f2fs_empty_dir(struct inode *dir) 915 913 { 916 914 unsigned long bidx = 0; 917 - struct page *dentry_page; 918 915 unsigned int bit_pos; 919 916 struct f2fs_dentry_block *dentry_blk; 920 917 unsigned long nblock = dir_blocks(dir); ··· 923 922 924 923 while (bidx < nblock) { 925 924 pgoff_t next_pgofs; 925 + struct folio *dentry_folio; 926 926 927 - dentry_page = f2fs_find_data_page(dir, bidx, &next_pgofs); 928 - if (IS_ERR(dentry_page)) { 929 - if (PTR_ERR(dentry_page) == -ENOENT) { 927 + dentry_folio = f2fs_find_data_folio(dir, bidx, &next_pgofs); 928 + if (IS_ERR(dentry_folio)) { 929 + if (PTR_ERR(dentry_folio) == -ENOENT) { 930 930 bidx = next_pgofs; 931 931 continue; 932 932 } else { ··· 935 933 } 936 934 } 937 935 938 - dentry_blk = page_address(dentry_page); 936 + dentry_blk = folio_address(dentry_folio); 939 937 if (bidx == 0) 940 938 bit_pos = 2; 941 939 else ··· 944 942 NR_DENTRY_IN_BLOCK, 945 943 bit_pos); 946 944 947 - f2fs_put_page(dentry_page, 0); 945 + f2fs_folio_put(dentry_folio, false); 948 946 949 947 if (bit_pos < NR_DENTRY_IN_BLOCK) 950 948 return false; ··· 1043 1041 struct inode *inode = file_inode(file); 1044 1042 unsigned long npages = dir_blocks(inode); 1045 1043 struct f2fs_dentry_block *dentry_blk = NULL; 1046 - struct page *dentry_page = NULL; 1047 1044 struct file_ra_state *ra = &file->f_ra; 1048 1045 loff_t start_pos = ctx->pos; 1049 1046 unsigned int n = ((unsigned long)ctx->pos / NR_DENTRY_IN_BLOCK); ··· 1066 1065 } 1067 1066 1068 1067 for (; n < npages; ctx->pos = n * NR_DENTRY_IN_BLOCK) { 1068 + struct folio *dentry_folio; 1069 1069 pgoff_t next_pgofs; 1070 1070 1071 1071 /* allow readdir() to be interrupted */ ··· 1081 1079 page_cache_sync_readahead(inode->i_mapping, ra, file, n, 1082 1080 min(npages - n, (pgoff_t)MAX_DIR_RA_PAGES)); 1083 1081 1084 - dentry_page = f2fs_find_data_page(inode, n, 
&next_pgofs); 1085 - if (IS_ERR(dentry_page)) { 1086 - err = PTR_ERR(dentry_page); 1082 + dentry_folio = f2fs_find_data_folio(inode, n, &next_pgofs); 1083 + if (IS_ERR(dentry_folio)) { 1084 + err = PTR_ERR(dentry_folio); 1087 1085 if (err == -ENOENT) { 1088 1086 err = 0; 1089 1087 n = next_pgofs; ··· 1093 1091 } 1094 1092 } 1095 1093 1096 - dentry_blk = page_address(dentry_page); 1094 + dentry_blk = folio_address(dentry_folio); 1097 1095 1098 1096 make_dentry_ptr_block(inode, &d, dentry_blk); 1099 1097 1100 1098 err = f2fs_fill_dentries(ctx, &d, 1101 1099 n * NR_DENTRY_IN_BLOCK, &fstr); 1102 - if (err) { 1103 - f2fs_put_page(dentry_page, 0); 1100 + f2fs_folio_put(dentry_folio, false); 1101 + if (err) 1104 1102 break; 1105 - } 1106 - 1107 - f2fs_put_page(dentry_page, 0); 1108 1103 1109 1104 n++; 1110 1105 }
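The dir.c conversions above (`find_in_block()`, `f2fs_add_regular_entry()`, `f2fs_empty_dir()`) all revolve around the dentry block's occupancy bitmap: a name spanning `slots` consecutive dentry slots needs a contiguous run of clear bits. A simplified sketch of the search that `f2fs_room_for_filename()` performs, using a byte-per-slot array instead of the kernel's little-endian bitops (array contents and sizes here are illustrative):

```c
#include <assert.h>

/* Find the first run of `slots` free entries in a dentry bitmap of
 * `max` slots; returning `max` means there is no room, mirroring the
 * kernel convention checked via `bit_pos < NR_DENTRY_IN_BLOCK`. */
int room_for_filename(const unsigned char *bitmap, int slots, int max)
{
	int start = 0;

	while (start + slots <= max) {
		int i;

		for (i = 0; i < slots; i++)
			if (bitmap[start + i])
				break;
		if (i == slots)
			return start;		/* found a free run */
		start = start + i + 1;		/* skip past occupied slot */
	}
	return max;				/* no room in this block */
}

/* Example dentry bitmap: slots 0 and 3 occupied, the rest free. */
const unsigned char example_bitmap[7] = { 1, 0, 0, 1, 0, 0, 0 };
```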
+5 -5
fs/f2fs/extent_cache.c
··· 407 407 } 408 408 } 409 409 410 - void f2fs_init_read_extent_tree(struct inode *inode, struct page *ipage) 410 + void f2fs_init_read_extent_tree(struct inode *inode, struct folio *ifolio) 411 411 { 412 412 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 413 413 struct extent_tree_info *eti = &sbi->extent_tree[EX_READ]; 414 - struct f2fs_extent *i_ext = &F2FS_INODE(ipage)->i_ext; 414 + struct f2fs_extent *i_ext = &F2FS_INODE(&ifolio->page)->i_ext; 415 415 struct extent_tree *et; 416 416 struct extent_node *en; 417 417 struct extent_info ei; ··· 419 419 if (!__may_extent_tree(inode, EX_READ)) { 420 420 /* drop largest read extent */ 421 421 if (i_ext->len) { 422 - f2fs_wait_on_page_writeback(ipage, NODE, true, true); 422 + f2fs_folio_wait_writeback(ifolio, NODE, true, true); 423 423 i_ext->len = 0; 424 - set_page_dirty(ipage); 424 + folio_mark_dirty(ifolio); 425 425 } 426 426 set_inode_flag(inode, FI_NO_EXTENT); 427 427 return; ··· 934 934 if (!__may_extent_tree(dn->inode, type)) 935 935 return; 936 936 937 - ei.fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) + 937 + ei.fofs = f2fs_start_bidx_of_node(ofs_of_node(&dn->node_folio->page), dn->inode) + 938 938 dn->ofs_in_node; 939 939 ei.len = 1; 940 940
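For context on what the read extent tree initialized by `f2fs_init_read_extent_tree()` caches: each `extent_info` maps a contiguous file range `[fofs, fofs + len)` to contiguous on-disk blocks starting at `blk`, so a cache hit turns an offset into a block address with one addition. A hedged sketch of that translation (field names follow f2fs's `extent_info`; the function itself is illustrative, not the kernel's `f2fs_lookup_read_extent_cache_block()`):

```c
#include <assert.h>

struct extent_info {
	unsigned int fofs;	/* start file offset of the extent */
	unsigned int len;	/* length of the extent, in blocks */
	unsigned int blk;	/* start block address of the extent */
};

/* If `fofs` falls inside the cached extent, compute its block address. */
int extent_lookup(const struct extent_info *ei,
		  unsigned int fofs, unsigned int *blkaddr)
{
	if (fofs >= ei->fofs && fofs < ei->fofs + ei->len) {
		*blkaddr = ei->blk + (fofs - ei->fofs);
		return 1;
	}
	return 0;
}

/* Example: file blocks 10..13 cached as on-disk blocks 100..103. */
const struct extent_info sample_ei = { 10, 4, 100 };
unsigned int out_blk;
```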
+177 -130
fs/f2fs/f2fs.h
··· 63 63 FAULT_BLKADDR_CONSISTENCE, 64 64 FAULT_NO_SEGMENT, 65 65 FAULT_INCONSISTENT_FOOTER, 66 + FAULT_TIMEOUT, 67 + FAULT_VMALLOC, 66 68 FAULT_MAX, 67 69 }; 68 70 69 - #ifdef CONFIG_F2FS_FAULT_INJECTION 70 - #define F2FS_ALL_FAULT_TYPE (GENMASK(FAULT_MAX - 1, 0)) 71 + /* indicate which option to update */ 72 + enum fault_option { 73 + FAULT_RATE = 1, /* only update fault rate */ 74 + FAULT_TYPE = 2, /* only update fault type */ 75 + FAULT_ALL = 4, /* reset all fault injection options/stats */ 76 + }; 71 77 78 + #ifdef CONFIG_F2FS_FAULT_INJECTION 72 79 struct f2fs_fault_info { 73 80 atomic_t inject_ops; 74 81 int inject_rate; 75 82 unsigned int inject_type; 83 + /* Used to account total count of injection for each type */ 84 + unsigned int inject_count[FAULT_MAX]; 76 85 }; 77 86 78 87 extern const char *f2fs_fault_name[FAULT_MAX]; ··· 326 317 327 318 struct fsync_node_entry { 328 319 struct list_head list; /* list head */ 329 - struct page *page; /* warm node page pointer */ 320 + struct folio *folio; /* warm node folio pointer */ 330 321 unsigned int seq_id; /* sequence id */ 331 322 }; 332 323 ··· 615 606 /* congestion wait timeout value, default: 20ms */ 616 607 #define DEFAULT_IO_TIMEOUT (msecs_to_jiffies(20)) 617 608 609 + /* timeout value injected, default: 1000ms */ 610 + #define DEFAULT_FAULT_TIMEOUT (msecs_to_jiffies(1000)) 611 + 618 612 /* maximum retry quota flush count */ 619 613 #define DEFAULT_RETRY_QUOTA_FLUSH_COUNT 8 620 614 ··· 833 821 FI_ATOMIC_DIRTIED, /* indicate atomic file is dirtied */ 834 822 FI_ATOMIC_REPLACE, /* indicate atomic replace */ 835 823 FI_OPENED_FILE, /* indicate file has been opened */ 824 + FI_DONATE_FINISHED, /* indicate page donation of file has been finished */ 836 825 FI_MAX, /* max flag, never be used */ 837 826 }; 838 827 ··· 1007 994 */ 1008 995 struct dnode_of_data { 1009 996 struct inode *inode; /* vfs inode pointer */ 1010 - struct page *inode_page; /* its inode page, NULL is possible */ 1011 - struct page 
*node_page; /* cached direct node page */ 997 + struct folio *inode_folio; /* its inode folio, NULL is possible */ 998 + struct folio *node_folio; /* cached direct node folio */ 1012 999 nid_t nid; /* node id of the direct node block */ 1013 1000 unsigned int ofs_in_node; /* data offset in the node page */ 1014 - bool inode_page_locked; /* inode page is locked or not */ 1001 + bool inode_folio_locked; /* inode folio is locked or not */ 1015 1002 bool node_changed; /* is node block changed */ 1016 1003 char cur_level; /* level of hole node page */ 1017 1004 char max_level; /* level of current page located */ ··· 1019 1006 }; 1020 1007 1021 1008 static inline void set_new_dnode(struct dnode_of_data *dn, struct inode *inode, 1022 - struct page *ipage, struct page *npage, nid_t nid) 1009 + struct folio *ifolio, struct folio *nfolio, nid_t nid) 1023 1010 { 1024 1011 memset(dn, 0, sizeof(*dn)); 1025 1012 dn->inode = inode; 1026 - dn->inode_page = ipage; 1027 - dn->node_page = npage; 1013 + dn->inode_folio = ifolio; 1014 + dn->node_folio = nfolio; 1028 1015 dn->nid = nid; 1029 1016 } 1030 1017 ··· 1793 1780 unsigned int dirty_device; /* for checkpoint data flush */ 1794 1781 spinlock_t dev_lock; /* protect dirty_device */ 1795 1782 bool aligned_blksize; /* all devices has the same logical blksize */ 1796 - unsigned int first_zoned_segno; /* first zoned segno */ 1783 + unsigned int first_seq_zone_segno; /* first segno in sequential zone */ 1797 1784 1798 1785 /* For write statistics */ 1799 1786 u64 sectors_written_start; ··· 1915 1902 atomic_inc(&ffi->inject_ops); 1916 1903 if (atomic_read(&ffi->inject_ops) >= ffi->inject_rate) { 1917 1904 atomic_set(&ffi->inject_ops, 0); 1905 + ffi->inject_count[type]++; 1918 1906 f2fs_info_ratelimited(sbi, "inject %s in %s of %pS", 1919 1907 f2fs_fault_name[type], func, parent_func); 1920 1908 return true; ··· 1977 1963 /* 1978 1964 * Inline functions 1979 1965 */ 1980 - static inline u32 __f2fs_crc32(struct f2fs_sb_info *sbi, u32 crc, 
1981 - const void *address, unsigned int length) 1966 + static inline u32 __f2fs_crc32(u32 crc, const void *address, 1967 + unsigned int length) 1982 1968 { 1983 1969 return crc32(crc, address, length); 1984 1970 } 1985 1971 1986 - static inline u32 f2fs_crc32(struct f2fs_sb_info *sbi, const void *address, 1987 - unsigned int length) 1972 + static inline u32 f2fs_crc32(const void *address, unsigned int length) 1988 1973 { 1989 - return __f2fs_crc32(sbi, F2FS_SUPER_MAGIC, address, length); 1974 + return __f2fs_crc32(F2FS_SUPER_MAGIC, address, length); 1990 1975 } 1991 1976 1992 - static inline bool f2fs_crc_valid(struct f2fs_sb_info *sbi, __u32 blk_crc, 1993 - void *buf, size_t buf_size) 1977 + static inline u32 f2fs_chksum(u32 crc, const void *address, unsigned int length) 1994 1978 { 1995 - return f2fs_crc32(sbi, buf, buf_size) == blk_crc; 1996 - } 1997 - 1998 - static inline u32 f2fs_chksum(struct f2fs_sb_info *sbi, u32 crc, 1999 - const void *address, unsigned int length) 2000 - { 2001 - return __f2fs_crc32(sbi, crc, address, length); 1979 + return __f2fs_crc32(crc, address, length); 2002 1980 } 2003 1981 2004 1982 static inline struct f2fs_inode_info *F2FS_I(struct inode *inode) ··· 2086 2080 static inline struct address_space *NODE_MAPPING(struct f2fs_sb_info *sbi) 2087 2081 { 2088 2082 return sbi->node_inode->i_mapping; 2083 + } 2084 + 2085 + static inline bool is_meta_folio(struct folio *folio) 2086 + { 2087 + return folio->mapping == META_MAPPING(F2FS_F_SB(folio)); 2088 + } 2089 + 2090 + static inline bool is_node_folio(struct folio *folio) 2091 + { 2092 + return folio->mapping == NODE_MAPPING(F2FS_F_SB(folio)); 2089 2093 } 2090 2094 2091 2095 static inline bool is_sbi_flag_set(struct f2fs_sb_info *sbi, unsigned int type) ··· 2534 2518 blkcnt_t sectors = count << F2FS_LOG_SECTORS_PER_BLOCK; 2535 2519 2536 2520 spin_lock(&sbi->stat_lock); 2537 - f2fs_bug_on(sbi, sbi->total_valid_block_count < (block_t) count); 2538 - sbi->total_valid_block_count -= 
(block_t)count; 2521 + if (unlikely(sbi->total_valid_block_count < count)) { 2522 + f2fs_warn(sbi, "Inconsistent total_valid_block_count:%u, ino:%lu, count:%u", 2523 + sbi->total_valid_block_count, inode->i_ino, count); 2524 + sbi->total_valid_block_count = 0; 2525 + set_sbi_flag(sbi, SBI_NEED_FSCK); 2526 + } else { 2527 + sbi->total_valid_block_count -= count; 2528 + } 2539 2529 if (sbi->reserved_blocks && 2540 2530 sbi->current_reserved_blocks < sbi->reserved_blocks) 2541 2531 sbi->current_reserved_blocks = min(sbi->reserved_blocks, ··· 2871 2849 return folio; 2872 2850 } 2873 2851 2874 - static inline struct page *f2fs_grab_cache_page(struct address_space *mapping, 2875 - pgoff_t index, bool for_write) 2852 + static inline struct folio *f2fs_filemap_get_folio( 2853 + struct address_space *mapping, pgoff_t index, 2854 + fgf_t fgp_flags, gfp_t gfp_mask) 2876 2855 { 2877 - struct folio *folio = f2fs_grab_cache_folio(mapping, index, for_write); 2856 + if (time_to_inject(F2FS_M_SB(mapping), FAULT_PAGE_GET)) 2857 + return ERR_PTR(-ENOMEM); 2878 2858 2879 - if (IS_ERR(folio)) 2880 - return NULL; 2881 - return &folio->page; 2859 + return __filemap_get_folio(mapping, index, fgp_flags, gfp_mask); 2882 2860 } 2883 2861 2884 2862 static inline struct page *f2fs_pagecache_get_page( ··· 2893 2871 2894 2872 static inline void f2fs_folio_put(struct folio *folio, bool unlock) 2895 2873 { 2896 - if (!folio) 2874 + if (IS_ERR_OR_NULL(folio)) 2897 2875 return; 2898 2876 2899 2877 if (unlock) { ··· 2912 2890 2913 2891 static inline void f2fs_put_dnode(struct dnode_of_data *dn) 2914 2892 { 2915 - if (dn->node_page) 2916 - f2fs_put_page(dn->node_page, 1); 2917 - if (dn->inode_page && dn->node_page != dn->inode_page) 2918 - f2fs_put_page(dn->inode_page, 0); 2919 - dn->node_page = NULL; 2920 - dn->inode_page = NULL; 2893 + if (dn->node_folio) 2894 + f2fs_folio_put(dn->node_folio, true); 2895 + if (dn->inode_folio && dn->node_folio != dn->inode_folio) 2896 + 
f2fs_folio_put(dn->inode_folio, false); 2897 + dn->node_folio = NULL; 2898 + dn->inode_folio = NULL; 2921 2899 } 2922 2900 2923 2901 static inline struct kmem_cache *f2fs_kmem_cache_create(const char *name, ··· 3041 3019 } 3042 3020 3043 3021 static inline __le32 *get_dnode_addr(struct inode *inode, 3044 - struct page *node_page) 3022 + struct folio *node_folio) 3045 3023 { 3046 - return blkaddr_in_node(F2FS_NODE(node_page)) + 3047 - get_dnode_base(inode, node_page); 3024 + return blkaddr_in_node(F2FS_NODE(&node_folio->page)) + 3025 + get_dnode_base(inode, &node_folio->page); 3048 3026 } 3049 3027 3050 3028 static inline block_t data_blkaddr(struct inode *inode, 3051 - struct page *node_page, unsigned int offset) 3029 + struct folio *node_folio, unsigned int offset) 3052 3030 { 3053 - return le32_to_cpu(*(get_dnode_addr(inode, node_page) + offset)); 3031 + return le32_to_cpu(*(get_dnode_addr(inode, node_folio) + offset)); 3054 3032 } 3055 3033 3056 3034 static inline block_t f2fs_data_blkaddr(struct dnode_of_data *dn) 3057 3035 { 3058 - return data_blkaddr(dn->inode, dn->node_page, dn->ofs_in_node); 3036 + return data_blkaddr(dn->inode, dn->node_folio, dn->ofs_in_node); 3059 3037 } 3060 3038 3061 3039 static inline int f2fs_test_bit(unsigned int nr, char *addr) ··· 3366 3344 return addrs; 3367 3345 } 3368 3346 3369 - static inline void *inline_xattr_addr(struct inode *inode, struct page *page) 3347 + static inline void *inline_xattr_addr(struct inode *inode, struct folio *folio) 3370 3348 { 3371 - struct f2fs_inode *ri = F2FS_INODE(page); 3349 + struct f2fs_inode *ri = F2FS_INODE(&folio->page); 3372 3350 3373 3351 return (void *)&(ri->i_addr[DEF_ADDRS_PER_INODE - 3374 3352 get_inline_xattr_addrs(inode)]); ··· 3383 3361 3384 3362 /* 3385 3363 * Notice: check inline_data flag without inode page lock is unsafe. 3386 - * It could change at any time by f2fs_convert_inline_page(). 3364 + * It could change at any time by f2fs_convert_inline_folio(). 
3387 3365 */ 3388 3366 static inline int f2fs_has_inline_data(struct inode *inode) 3389 3367 { ··· 3415 3393 return is_inode_flag_set(inode, FI_COW_FILE); 3416 3394 } 3417 3395 3418 - static inline void *inline_data_addr(struct inode *inode, struct page *page) 3396 + static inline void *inline_data_addr(struct inode *inode, struct folio *folio) 3419 3397 { 3420 - __le32 *addr = get_dnode_addr(inode, page); 3398 + __le32 *addr = get_dnode_addr(inode, folio); 3421 3399 3422 3400 return (void *)(addr + DEF_INLINE_RESERVED_SIZE); 3423 3401 } ··· 3543 3521 return f2fs_kvmalloc(sbi, size, flags | __GFP_ZERO); 3544 3522 } 3545 3523 3524 + static inline void *f2fs_vmalloc(struct f2fs_sb_info *sbi, size_t size) 3525 + { 3526 + if (time_to_inject(sbi, FAULT_VMALLOC)) 3527 + return NULL; 3528 + 3529 + return vmalloc(size); 3530 + } 3531 + 3546 3532 static inline int get_extra_isize(struct inode *inode) 3547 3533 { 3548 3534 return F2FS_I(inode)->i_extra_isize / sizeof(__le32); ··· 3627 3597 * inode.c 3628 3598 */ 3629 3599 void f2fs_set_inode_flags(struct inode *inode); 3630 - bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct page *page); 3600 + bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct folio *folio); 3631 3601 void f2fs_inode_chksum_set(struct f2fs_sb_info *sbi, struct page *page); 3632 3602 struct inode *f2fs_iget(struct super_block *sb, unsigned long ino); 3633 3603 struct inode *f2fs_iget_retry(struct super_block *sb, unsigned long ino); 3634 3604 int f2fs_try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink); 3635 - void f2fs_update_inode(struct inode *inode, struct page *node_page); 3605 + void f2fs_update_inode(struct inode *inode, struct folio *node_folio); 3636 3606 void f2fs_update_inode_page(struct inode *inode); 3637 3607 int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc); 3638 3608 void f2fs_evict_inode(struct inode *inode); ··· 3678 3648 unsigned int start_pos, struct fscrypt_str *fstr); 3679 3649 void 
f2fs_do_make_empty_dir(struct inode *inode, struct inode *parent, 3680 3650 struct f2fs_dentry_ptr *d); 3681 - struct page *f2fs_init_inode_metadata(struct inode *inode, struct inode *dir, 3682 - const struct f2fs_filename *fname, struct page *dpage); 3651 + struct folio *f2fs_init_inode_metadata(struct inode *inode, struct inode *dir, 3652 + const struct f2fs_filename *fname, struct folio *dfolio); 3683 3653 void f2fs_update_parent_metadata(struct inode *dir, struct inode *inode, 3684 3654 unsigned int current_depth); 3685 3655 int f2fs_room_for_filename(const void *bitmap, int slots, int max_slots); 3686 3656 void f2fs_drop_nlink(struct inode *dir, struct inode *inode); 3687 3657 struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir, 3688 - const struct f2fs_filename *fname, 3689 - struct page **res_page); 3658 + const struct f2fs_filename *fname, struct folio **res_folio); 3690 3659 struct f2fs_dir_entry *f2fs_find_entry(struct inode *dir, 3691 - const struct qstr *child, struct page **res_page); 3692 - struct f2fs_dir_entry *f2fs_parent_dir(struct inode *dir, struct page **p); 3660 + const struct qstr *child, struct folio **res_folio); 3661 + struct f2fs_dir_entry *f2fs_parent_dir(struct inode *dir, struct folio **f); 3693 3662 ino_t f2fs_inode_by_name(struct inode *dir, const struct qstr *qstr, 3694 - struct page **page); 3663 + struct folio **folio); 3695 3664 void f2fs_set_link(struct inode *dir, struct f2fs_dir_entry *de, 3696 - struct page *page, struct inode *inode); 3697 - bool f2fs_has_enough_room(struct inode *dir, struct page *ipage, 3665 + struct folio *folio, struct inode *inode); 3666 + bool f2fs_has_enough_room(struct inode *dir, struct folio *ifolio, 3698 3667 const struct f2fs_filename *fname); 3699 3668 void f2fs_update_dentry(nid_t ino, umode_t mode, struct f2fs_dentry_ptr *d, 3700 3669 const struct fscrypt_str *name, f2fs_hash_t name_hash, ··· 3704 3675 struct inode *inode, nid_t ino, umode_t mode); 3705 3676 int f2fs_do_add_link(struct 
inode *dir, const struct qstr *name, 3706 3677 struct inode *inode, nid_t ino, umode_t mode); 3707 - void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page, 3678 + void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct folio *folio, 3708 3679 struct inode *dir, struct inode *inode); 3709 3680 int f2fs_do_tmpfile(struct inode *inode, struct inode *dir, 3710 3681 struct f2fs_filename *fname); ··· 3748 3719 3749 3720 int f2fs_check_nid_range(struct f2fs_sb_info *sbi, nid_t nid); 3750 3721 bool f2fs_available_free_memory(struct f2fs_sb_info *sbi, int type); 3751 - bool f2fs_in_warm_node_list(struct f2fs_sb_info *sbi, 3752 - const struct folio *folio); 3722 + bool f2fs_in_warm_node_list(struct f2fs_sb_info *sbi, struct folio *folio); 3753 3723 void f2fs_init_fsync_node_info(struct f2fs_sb_info *sbi); 3754 - void f2fs_del_fsync_node_entry(struct f2fs_sb_info *sbi, struct page *page); 3724 + void f2fs_del_fsync_node_entry(struct f2fs_sb_info *sbi, struct folio *folio); 3755 3725 void f2fs_reset_fsync_node_info(struct f2fs_sb_info *sbi); 3756 3726 int f2fs_need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid); 3757 3727 bool f2fs_is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid); ··· 3764 3736 int f2fs_wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, 3765 3737 unsigned int seq_id); 3766 3738 int f2fs_remove_inode_page(struct inode *inode); 3767 - struct page *f2fs_new_inode_page(struct inode *inode); 3768 - struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs); 3739 + struct folio *f2fs_new_inode_folio(struct inode *inode); 3740 + struct folio *f2fs_new_node_folio(struct dnode_of_data *dn, unsigned int ofs); 3769 3741 void f2fs_ra_node_page(struct f2fs_sb_info *sbi, nid_t nid); 3770 - struct page *f2fs_get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid); 3742 + struct folio *f2fs_get_node_folio(struct f2fs_sb_info *sbi, pgoff_t nid); 3771 3743 struct folio *f2fs_get_inode_folio(struct f2fs_sb_info *sbi, pgoff_t 
ino); 3772 - struct page *f2fs_get_inode_page(struct f2fs_sb_info *sbi, pgoff_t ino); 3773 - struct page *f2fs_get_xnode_page(struct f2fs_sb_info *sbi, pgoff_t xnid); 3774 - struct page *f2fs_get_node_page_ra(struct page *parent, int start); 3775 - int f2fs_move_node_page(struct page *node_page, int gc_type); 3744 + struct folio *f2fs_get_xnode_folio(struct f2fs_sb_info *sbi, pgoff_t xnid); 3745 + int f2fs_move_node_folio(struct folio *node_folio, int gc_type); 3776 3746 void f2fs_flush_inline_data(struct f2fs_sb_info *sbi); 3777 3747 int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode, 3778 3748 struct writeback_control *wbc, bool atomic, ··· 3783 3757 void f2fs_alloc_nid_done(struct f2fs_sb_info *sbi, nid_t nid); 3784 3758 void f2fs_alloc_nid_failed(struct f2fs_sb_info *sbi, nid_t nid); 3785 3759 int f2fs_try_to_free_nids(struct f2fs_sb_info *sbi, int nr_shrink); 3786 - int f2fs_recover_inline_xattr(struct inode *inode, struct page *page); 3760 + int f2fs_recover_inline_xattr(struct inode *inode, struct folio *folio); 3787 3761 int f2fs_recover_xattr_data(struct inode *inode, struct page *page); 3788 3762 int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct page *page); 3789 3763 int f2fs_restore_node_summary(struct f2fs_sb_info *sbi, ··· 3833 3807 int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range); 3834 3808 bool f2fs_exist_trim_candidates(struct f2fs_sb_info *sbi, 3835 3809 struct cp_control *cpc); 3836 - struct page *f2fs_get_sum_page(struct f2fs_sb_info *sbi, unsigned int segno); 3810 + struct folio *f2fs_get_sum_folio(struct f2fs_sb_info *sbi, unsigned int segno); 3837 3811 void f2fs_update_meta_page(struct f2fs_sb_info *sbi, void *src, 3838 3812 block_t blk_addr); 3839 3813 void f2fs_do_write_meta_page(struct f2fs_sb_info *sbi, struct folio *folio, ··· 3884 3858 unsigned long long f2fs_get_section_mtime(struct f2fs_sb_info *sbi, 3885 3859 unsigned int segno); 3886 3860 3861 + static inline struct inode 
*fio_inode(struct f2fs_io_info *fio) 3862 + { 3863 + return page_folio(fio->page)->mapping->host; 3864 + } 3865 + 3887 3866 #define DEF_FRAGMENT_SIZE 4 3888 3867 #define MIN_FRAGMENT_SIZE 1 3889 3868 #define MAX_FRAGMENT_SIZE 512 ··· 3905 3874 void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io, 3906 3875 unsigned char reason); 3907 3876 void f2fs_flush_ckpt_thread(struct f2fs_sb_info *sbi); 3908 - struct page *f2fs_grab_meta_page(struct f2fs_sb_info *sbi, pgoff_t index); 3909 - struct page *f2fs_get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index); 3910 - struct page *f2fs_get_meta_page_retry(struct f2fs_sb_info *sbi, pgoff_t index); 3911 - struct page *f2fs_get_tmp_page(struct f2fs_sb_info *sbi, pgoff_t index); 3877 + struct folio *f2fs_grab_meta_folio(struct f2fs_sb_info *sbi, pgoff_t index); 3878 + struct folio *f2fs_get_meta_folio(struct f2fs_sb_info *sbi, pgoff_t index); 3879 + struct folio *f2fs_get_meta_folio_retry(struct f2fs_sb_info *sbi, pgoff_t index); 3880 + struct folio *f2fs_get_tmp_folio(struct f2fs_sb_info *sbi, pgoff_t index); 3912 3881 bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi, 3913 3882 block_t blkaddr, int type); 3914 3883 bool f2fs_is_valid_blkaddr_raw(struct f2fs_sb_info *sbi, ··· 3964 3933 struct inode *inode, struct page *page, 3965 3934 nid_t ino, enum page_type type); 3966 3935 void f2fs_submit_merged_ipu_write(struct f2fs_sb_info *sbi, 3967 - struct bio **bio, struct page *page); 3936 + struct bio **bio, struct folio *folio); 3968 3937 void f2fs_flush_merged_writes(struct f2fs_sb_info *sbi); 3969 3938 int f2fs_submit_page_bio(struct f2fs_io_info *fio); 3970 3939 int f2fs_merge_page_bio(struct f2fs_io_info *fio); ··· 3984 3953 pgoff_t *next_pgofs); 3985 3954 struct folio *f2fs_get_lock_data_folio(struct inode *inode, pgoff_t index, 3986 3955 bool for_write); 3987 - struct page *f2fs_get_new_data_page(struct inode *inode, 3988 - struct page *ipage, pgoff_t index, bool new_i_size); 3956 + struct folio 
*f2fs_get_new_data_folio(struct inode *inode, 3957 + struct folio *ifolio, pgoff_t index, bool new_i_size); 3989 3958 int f2fs_do_write_data_page(struct f2fs_io_info *fio); 3990 3959 int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, int flag); 3991 3960 int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, ··· 4008 3977 int f2fs_init_post_read_wq(struct f2fs_sb_info *sbi); 4009 3978 void f2fs_destroy_post_read_wq(struct f2fs_sb_info *sbi); 4010 3979 extern const struct iomap_ops f2fs_iomap_ops; 4011 - 4012 - static inline struct page *f2fs_find_data_page(struct inode *inode, 4013 - pgoff_t index, pgoff_t *next_pgofs) 4014 - { 4015 - struct folio *folio = f2fs_find_data_folio(inode, index, next_pgofs); 4016 - 4017 - return &folio->page; 4018 - } 4019 - 4020 - static inline struct page *f2fs_get_lock_data_page(struct inode *inode, 4021 - pgoff_t index, bool for_write) 4022 - { 4023 - struct folio *folio = f2fs_get_lock_data_folio(inode, index, for_write); 4024 - 4025 - return &folio->page; 4026 - } 4027 3980 4028 3981 /* 4029 3982 * gc.c ··· 4305 4290 bool f2fs_may_inline_data(struct inode *inode); 4306 4291 bool f2fs_sanity_check_inline_data(struct inode *inode, struct page *ipage); 4307 4292 bool f2fs_may_inline_dentry(struct inode *inode); 4308 - void f2fs_do_read_inline_data(struct folio *folio, struct page *ipage); 4309 - void f2fs_truncate_inline_inode(struct inode *inode, 4310 - struct page *ipage, u64 from); 4293 + void f2fs_do_read_inline_data(struct folio *folio, struct folio *ifolio); 4294 + void f2fs_truncate_inline_inode(struct inode *inode, struct folio *ifolio, 4295 + u64 from); 4311 4296 int f2fs_read_inline_data(struct inode *inode, struct folio *folio); 4312 - int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page); 4297 + int f2fs_convert_inline_folio(struct dnode_of_data *dn, struct folio *folio); 4313 4298 int f2fs_convert_inline_inode(struct inode *inode); 4314 4299 int 
f2fs_try_convert_inline_dir(struct inode *dir, struct dentry *dentry); 4315 4300 int f2fs_write_inline_data(struct inode *inode, struct folio *folio); 4316 - int f2fs_recover_inline_data(struct inode *inode, struct page *npage); 4301 + int f2fs_recover_inline_data(struct inode *inode, struct folio *nfolio); 4317 4302 struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir, 4318 - const struct f2fs_filename *fname, 4319 - struct page **res_page, 4320 - bool use_hash); 4303 + const struct f2fs_filename *fname, struct folio **res_folio, 4304 + bool use_hash); 4321 4305 int f2fs_make_empty_inline_dir(struct inode *inode, struct inode *parent, 4322 - struct page *ipage); 4306 + struct folio *ifolio); 4323 4307 int f2fs_add_inline_entry(struct inode *dir, const struct f2fs_filename *fname, 4324 4308 struct inode *inode, nid_t ino, umode_t mode); 4325 4309 void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry, 4326 - struct page *page, struct inode *dir, 4327 - struct inode *inode); 4310 + struct folio *folio, struct inode *dir, struct inode *inode); 4328 4311 bool f2fs_empty_inline_dir(struct inode *dir); 4329 4312 int f2fs_read_inline_dir(struct file *file, struct dir_context *ctx, 4330 4313 struct fscrypt_str *fstr); ··· 4355 4342 void f2fs_destroy_extent_cache(void); 4356 4343 4357 4344 /* read extent cache ops */ 4358 - void f2fs_init_read_extent_tree(struct inode *inode, struct page *ipage); 4345 + void f2fs_init_read_extent_tree(struct inode *inode, struct folio *ifolio); 4359 4346 bool f2fs_lookup_read_extent_cache(struct inode *inode, pgoff_t pgofs, 4360 4347 struct extent_info *ei); 4361 4348 bool f2fs_lookup_read_extent_cache_block(struct inode *inode, pgoff_t index, ··· 4436 4423 CLUSTER_RAW_BLKS /* return # of raw blocks in a cluster */ 4437 4424 }; 4438 4425 bool f2fs_is_compressed_page(struct page *page); 4439 - struct page *f2fs_compress_control_page(struct page *page); 4426 + struct folio *f2fs_compress_control_folio(struct folio *folio); 
4440 4427 int f2fs_prepare_compress_overwrite(struct inode *inode, 4441 4428 struct page **pagep, pgoff_t index, void **fsdata); 4442 4429 bool f2fs_compress_write_end(struct inode *inode, void *fsdata, ··· 4471 4458 struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc); 4472 4459 void f2fs_decompress_end_io(struct decompress_io_ctx *dic, bool failed, 4473 4460 bool in_task); 4474 - void f2fs_put_page_dic(struct page *page, bool in_task); 4461 + void f2fs_put_folio_dic(struct folio *folio, bool in_task); 4475 4462 unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn, 4476 4463 unsigned int ofs_in_node); 4477 4464 int f2fs_init_compress_ctx(struct compress_ctx *cc); ··· 4488 4475 block_t blkaddr, unsigned int len); 4489 4476 void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page, 4490 4477 nid_t ino, block_t blkaddr); 4491 - bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, struct page *page, 4478 + bool f2fs_load_compressed_folio(struct f2fs_sb_info *sbi, struct folio *folio, 4492 4479 block_t blkaddr); 4493 4480 void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, nid_t ino); 4494 4481 #define inc_compr_inode_stat(inode) \ ··· 4513 4500 return false; 4514 4501 } 4515 4502 static inline bool f2fs_is_compress_level_valid(int alg, int lvl) { return false; } 4516 - static inline struct page *f2fs_compress_control_page(struct page *page) 4503 + static inline struct folio *f2fs_compress_control_folio(struct folio *folio) 4517 4504 { 4518 4505 WARN_ON_ONCE(1); 4519 4506 return ERR_PTR(-EINVAL); ··· 4527 4514 { 4528 4515 WARN_ON_ONCE(1); 4529 4516 } 4530 - static inline void f2fs_put_page_dic(struct page *page, bool in_task) 4517 + static inline void f2fs_put_folio_dic(struct folio *folio, bool in_task) 4531 4518 { 4532 4519 WARN_ON_ONCE(1); 4533 4520 } ··· 4544 4531 block_t blkaddr, unsigned int len) { } 4545 4532 static inline void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, 4546 4533 struct 
page *page, nid_t ino, block_t blkaddr) { } 4547 - static inline bool f2fs_load_compressed_page(struct f2fs_sb_info *sbi, 4548 - struct page *page, block_t blkaddr) { return false; } 4534 + static inline bool f2fs_load_compressed_folio(struct f2fs_sb_info *sbi, 4535 + struct folio *folio, block_t blkaddr) { return false; } 4549 4536 static inline void f2fs_invalidate_compress_pages(struct f2fs_sb_info *sbi, 4550 4537 nid_t ino) { } 4551 4538 #define inc_compr_inode_stat(inode) do { } while (0) ··· 4635 4622 F2FS_FEATURE_FUNCS(device_alias, DEVICE_ALIAS); 4636 4623 4637 4624 #ifdef CONFIG_BLK_DEV_ZONED 4638 - static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi, 4639 - block_t blkaddr) 4625 + static inline bool f2fs_zone_is_seq(struct f2fs_sb_info *sbi, int devi, 4626 + unsigned int zone) 4640 4627 { 4641 - unsigned int zno = blkaddr / sbi->blocks_per_blkz; 4628 + return test_bit(zone, FDEV(devi).blkz_seq); 4629 + } 4642 4630 4643 - return test_bit(zno, FDEV(devi).blkz_seq); 4631 + static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi, 4632 + block_t blkaddr) 4633 + { 4634 + return f2fs_zone_is_seq(sbi, devi, blkaddr / sbi->blocks_per_blkz); 4644 4635 } 4645 4636 #endif 4646 4637 ··· 4716 4699 return F2FS_OPTION(sbi).fs_mode == FS_MODE_LFS; 4717 4700 } 4718 4701 4719 - static inline bool f2fs_valid_pinned_area(struct f2fs_sb_info *sbi, 4702 + static inline bool f2fs_is_sequential_zone_area(struct f2fs_sb_info *sbi, 4720 4703 block_t blkaddr) 4721 4704 { 4722 4705 if (f2fs_sb_has_blkzoned(sbi)) { 4706 + #ifdef CONFIG_BLK_DEV_ZONED 4723 4707 int devi = f2fs_target_device_index(sbi, blkaddr); 4724 4708 4725 - return !bdev_is_zoned(FDEV(devi).bdev); 4709 + if (!bdev_is_zoned(FDEV(devi).bdev)) 4710 + return false; 4711 + 4712 + if (f2fs_is_multi_device(sbi)) { 4713 + if (blkaddr < FDEV(devi).start_blk || 4714 + blkaddr > FDEV(devi).end_blk) { 4715 + f2fs_err(sbi, "Invalid block %x", blkaddr); 4716 + return false; 4717 + } 4718 + blkaddr 
-= FDEV(devi).start_blk; 4719 + } 4720 + 4721 + return f2fs_blkz_is_seq(sbi, devi, blkaddr); 4722 + #else 4723 + return false; 4724 + #endif 4726 4725 } 4727 - return true; 4726 + return false; 4728 4727 } 4729 4728 4730 4729 static inline bool f2fs_low_mem_mode(struct f2fs_sb_info *sbi) ··· 4795 4762 4796 4763 #ifdef CONFIG_F2FS_FAULT_INJECTION 4797 4764 extern int f2fs_build_fault_attr(struct f2fs_sb_info *sbi, unsigned long rate, 4798 - unsigned long type); 4765 + unsigned long type, enum fault_option fo); 4799 4766 #else 4800 4767 static inline int f2fs_build_fault_attr(struct f2fs_sb_info *sbi, 4801 - unsigned long rate, unsigned long type) 4768 + unsigned long rate, unsigned long type, 4769 + enum fault_option fo) 4802 4770 { 4803 4771 return 0; 4804 4772 } ··· 4827 4793 { 4828 4794 set_current_state(TASK_UNINTERRUPTIBLE); 4829 4795 io_schedule_timeout(timeout); 4796 + } 4797 + 4798 + static inline void f2fs_io_schedule_timeout_killable(long timeout) 4799 + { 4800 + while (timeout) { 4801 + if (fatal_signal_pending(current)) 4802 + return; 4803 + set_current_state(TASK_UNINTERRUPTIBLE); 4804 + io_schedule_timeout(DEFAULT_IO_TIMEOUT); 4805 + if (timeout <= DEFAULT_IO_TIMEOUT) 4806 + return; 4807 + timeout -= DEFAULT_IO_TIMEOUT; 4808 + } 4830 4809 } 4831 4810 4832 4811 static inline void f2fs_handle_page_eio(struct f2fs_sb_info *sbi, ··· 4871 4824 int i = 0; 4872 4825 4873 4826 do { 4874 - struct page *page; 4827 + struct folio *folio; 4875 4828 4876 - page = find_get_page(META_MAPPING(sbi), blkaddr + i); 4877 - if (page) { 4878 - if (folio_test_writeback(page_folio(page))) 4829 + folio = filemap_get_folio(META_MAPPING(sbi), blkaddr + i); 4830 + if (!IS_ERR(folio)) { 4831 + if (folio_test_writeback(folio)) 4879 4832 need_submit = true; 4880 - f2fs_put_page(page, 0); 4833 + f2fs_folio_put(folio, false); 4881 4834 } 4882 4835 } while (++i < cnt && !need_submit); 4883 4836
fs/f2fs/file.c (+115 -101)
··· 131 131 goto out_sem; 132 132 } 133 133 134 - f2fs_wait_on_page_writeback(folio_page(folio, 0), DATA, false, true); 134 + f2fs_folio_wait_writeback(folio, DATA, false, true); 135 135 136 136 /* wait for GCed page writeback via META_MAPPING */ 137 137 f2fs_wait_on_block_writeback(inode, dn.data_blkaddr); ··· 226 226 227 227 static bool need_inode_page_update(struct f2fs_sb_info *sbi, nid_t ino) 228 228 { 229 - struct page *i = find_get_page(NODE_MAPPING(sbi), ino); 229 + struct folio *i = filemap_get_folio(NODE_MAPPING(sbi), ino); 230 230 bool ret = false; 231 231 /* But we need to avoid that there are some inode updates */ 232 - if ((i && PageDirty(i)) || f2fs_need_inode_block_update(sbi, ino)) 232 + if ((!IS_ERR(i) && folio_test_dirty(i)) || 233 + f2fs_need_inode_block_update(sbi, ino)) 233 234 ret = true; 234 - f2fs_put_page(i, 0); 235 + f2fs_folio_put(i, false); 235 236 return ret; 236 237 } 237 238 ··· 261 260 struct writeback_control wbc = { 262 261 .sync_mode = WB_SYNC_ALL, 263 262 .nr_to_write = LONG_MAX, 264 - .for_reclaim = 0, 265 263 }; 266 264 unsigned int seq_id = 0; 267 265 ··· 403 403 bool compressed_cluster = false; 404 404 405 405 if (f2fs_compressed_file(inode)) { 406 - block_t first_blkaddr = data_blkaddr(dn->inode, dn->node_page, 406 + block_t first_blkaddr = data_blkaddr(dn->inode, dn->node_folio, 407 407 ALIGN_DOWN(dn->ofs_in_node, F2FS_I(inode)->i_cluster_size)); 408 408 409 409 compressed_cluster = first_blkaddr == COMPRESS_ADDR; ··· 473 473 } 474 474 } 475 475 476 - end_offset = ADDRS_PER_PAGE(dn.node_page, inode); 476 + end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 477 477 478 478 /* find data/hole in dnode block */ 479 479 for (; dn.ofs_in_node < end_offset; ··· 554 554 555 555 static int finish_preallocate_blocks(struct inode *inode) 556 556 { 557 - int ret; 557 + int ret = 0; 558 + bool opened; 559 + 560 + f2fs_down_read(&F2FS_I(inode)->i_sem); 561 + opened = is_inode_flag_set(inode, FI_OPENED_FILE); 562 + 
f2fs_up_read(&F2FS_I(inode)->i_sem); 563 + if (opened) 564 + return 0; 558 565 559 566 inode_lock(inode); 560 - if (is_inode_flag_set(inode, FI_OPENED_FILE)) { 561 - inode_unlock(inode); 562 - return 0; 563 - } 567 + if (is_inode_flag_set(inode, FI_OPENED_FILE)) 568 + goto out_unlock; 564 569 565 - if (!file_should_truncate(inode)) { 566 - set_inode_flag(inode, FI_OPENED_FILE); 567 - inode_unlock(inode); 568 - return 0; 569 - } 570 + if (!file_should_truncate(inode)) 571 + goto out_update; 570 572 571 573 f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]); 572 574 filemap_invalidate_lock(inode->i_mapping); ··· 578 576 579 577 filemap_invalidate_unlock(inode->i_mapping); 580 578 f2fs_up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]); 581 - 582 - if (!ret) 583 - set_inode_flag(inode, FI_OPENED_FILE); 584 - 585 - inode_unlock(inode); 586 579 if (ret) 587 - return ret; 580 + goto out_unlock; 588 581 589 582 file_dont_truncate(inode); 590 - return 0; 583 + out_update: 584 + f2fs_down_write(&F2FS_I(inode)->i_sem); 585 + set_inode_flag(inode, FI_OPENED_FILE); 586 + f2fs_up_write(&F2FS_I(inode)->i_sem); 587 + out_unlock: 588 + inode_unlock(inode); 589 + return ret; 591 590 } 592 591 593 592 static int f2fs_file_open(struct inode *inode, struct file *filp) ··· 627 624 block_t blkstart; 628 625 int blklen = 0; 629 626 630 - addr = get_dnode_addr(dn->inode, dn->node_page) + ofs; 627 + addr = get_dnode_addr(dn->inode, dn->node_folio) + ofs; 631 628 blkstart = le32_to_cpu(*addr); 632 629 633 630 /* Assumption: truncation starts with cluster */ ··· 691 688 * once we invalidate valid blkaddr in range [ofs, ofs + count], 692 689 * we will invalidate all blkaddr in the whole range. 
693 690 */ 694 - fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page), 691 + fofs = f2fs_start_bidx_of_node(ofs_of_node(&dn->node_folio->page), 695 692 dn->inode) + ofs; 696 693 f2fs_update_read_extent_cache_range(dn, fofs, 0, len); 697 694 f2fs_update_age_extent_cache_range(dn, fofs, len); ··· 746 743 struct dnode_of_data dn; 747 744 pgoff_t free_from; 748 745 int count = 0, err = 0; 749 - struct page *ipage; 746 + struct folio *ifolio; 750 747 bool truncate_page = false; 751 748 752 749 trace_f2fs_truncate_blocks_enter(inode, from); ··· 764 761 if (lock) 765 762 f2fs_lock_op(sbi); 766 763 767 - ipage = f2fs_get_inode_page(sbi, inode->i_ino); 768 - if (IS_ERR(ipage)) { 769 - err = PTR_ERR(ipage); 764 + ifolio = f2fs_get_inode_folio(sbi, inode->i_ino); 765 + if (IS_ERR(ifolio)) { 766 + err = PTR_ERR(ifolio); 770 767 goto out; 771 768 } 772 769 ··· 779 776 dec_valid_block_count(sbi, inode, ei.len); 780 777 f2fs_update_time(sbi, REQ_TIME); 781 778 782 - f2fs_put_page(ipage, 1); 779 + f2fs_folio_put(ifolio, true); 783 780 goto out; 784 781 } 785 782 786 783 if (f2fs_has_inline_data(inode)) { 787 - f2fs_truncate_inline_inode(inode, ipage, from); 788 - f2fs_put_page(ipage, 1); 784 + f2fs_truncate_inline_inode(inode, ifolio, from); 785 + f2fs_folio_put(ifolio, true); 789 786 truncate_page = true; 790 787 goto out; 791 788 } 792 789 793 - set_new_dnode(&dn, inode, ipage, NULL, 0); 790 + set_new_dnode(&dn, inode, ifolio, NULL, 0); 794 791 err = f2fs_get_dnode_of_data(&dn, free_from, LOOKUP_NODE_RA); 795 792 if (err) { 796 793 if (err == -ENOENT) ··· 798 795 goto out; 799 796 } 800 797 801 - count = ADDRS_PER_PAGE(dn.node_page, inode); 798 + count = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 802 799 803 800 count -= dn.ofs_in_node; 804 801 f2fs_bug_on(sbi, count < 0); 805 802 806 - if (dn.ofs_in_node || IS_INODE(dn.node_page)) { 803 + if (dn.ofs_in_node || IS_INODE(&dn.node_folio->page)) { 807 804 f2fs_truncate_data_blocks_range(&dn, count); 808 805 free_from += 
count; 809 806 } ··· 1164 1161 loff_t start, loff_t len) 1165 1162 { 1166 1163 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 1167 - struct page *page; 1164 + struct folio *folio; 1168 1165 1169 1166 if (!len) 1170 1167 return 0; ··· 1172 1169 f2fs_balance_fs(sbi, true); 1173 1170 1174 1171 f2fs_lock_op(sbi); 1175 - page = f2fs_get_new_data_page(inode, NULL, index, false); 1172 + folio = f2fs_get_new_data_folio(inode, NULL, index, false); 1176 1173 f2fs_unlock_op(sbi); 1177 1174 1178 - if (IS_ERR(page)) 1179 - return PTR_ERR(page); 1175 + if (IS_ERR(folio)) 1176 + return PTR_ERR(folio); 1180 1177 1181 - f2fs_wait_on_page_writeback(page, DATA, true, true); 1182 - zero_user(page, start, len); 1183 - set_page_dirty(page); 1184 - f2fs_put_page(page, 1); 1178 + f2fs_folio_wait_writeback(folio, DATA, true, true); 1179 + folio_zero_range(folio, start, len); 1180 + folio_mark_dirty(folio); 1181 + f2fs_folio_put(folio, true); 1185 1182 return 0; 1186 1183 } 1187 1184 ··· 1204 1201 return err; 1205 1202 } 1206 1203 1207 - end_offset = ADDRS_PER_PAGE(dn.node_page, inode); 1204 + end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 1208 1205 count = min(end_offset - dn.ofs_in_node, pg_end - pg_start); 1209 1206 1210 1207 f2fs_bug_on(F2FS_I_SB(inode), count == 0 || count > end_offset); ··· 1299 1296 goto next; 1300 1297 } 1301 1298 1302 - done = min((pgoff_t)ADDRS_PER_PAGE(dn.node_page, inode) - 1299 + done = min((pgoff_t)ADDRS_PER_PAGE(&dn.node_folio->page, inode) - 1303 1300 dn.ofs_in_node, len); 1304 1301 for (i = 0; i < done; i++, blkaddr++, do_replace++, dn.ofs_in_node++) { 1305 1302 *blkaddr = f2fs_data_blkaddr(&dn); ··· 1388 1385 } 1389 1386 1390 1387 ilen = min((pgoff_t) 1391 - ADDRS_PER_PAGE(dn.node_page, dst_inode) - 1388 + ADDRS_PER_PAGE(&dn.node_folio->page, dst_inode) - 1392 1389 dn.ofs_in_node, len - i); 1393 1390 do { 1394 1391 dn.data_blkaddr = f2fs_data_blkaddr(&dn); ··· 1413 1410 1414 1411 f2fs_put_dnode(&dn); 1415 1412 } else { 1416 - struct page *psrc, 
*pdst; 1413 + struct folio *fsrc, *fdst; 1417 1414 1418 - psrc = f2fs_get_lock_data_page(src_inode, 1415 + fsrc = f2fs_get_lock_data_folio(src_inode, 1419 1416 src + i, true); 1420 - if (IS_ERR(psrc)) 1421 - return PTR_ERR(psrc); 1422 - pdst = f2fs_get_new_data_page(dst_inode, NULL, dst + i, 1417 + if (IS_ERR(fsrc)) 1418 + return PTR_ERR(fsrc); 1419 + fdst = f2fs_get_new_data_folio(dst_inode, NULL, dst + i, 1423 1420 true); 1424 - if (IS_ERR(pdst)) { 1425 - f2fs_put_page(psrc, 1); 1426 - return PTR_ERR(pdst); 1421 + if (IS_ERR(fdst)) { 1422 + f2fs_folio_put(fsrc, true); 1423 + return PTR_ERR(fdst); 1427 1424 } 1428 1425 1429 - f2fs_wait_on_page_writeback(pdst, DATA, true, true); 1426 + f2fs_folio_wait_writeback(fdst, DATA, true, true); 1430 1427 1431 - memcpy_page(pdst, 0, psrc, 0, PAGE_SIZE); 1432 - set_page_dirty(pdst); 1433 - set_page_private_gcing(pdst); 1434 - f2fs_put_page(pdst, 1); 1435 - f2fs_put_page(psrc, 1); 1428 + memcpy_folio(fdst, 0, fsrc, 0, PAGE_SIZE); 1429 + folio_mark_dirty(fdst); 1430 + set_page_private_gcing(&fdst->page); 1431 + f2fs_folio_put(fdst, true); 1432 + f2fs_folio_put(fsrc, true); 1436 1433 1437 1434 ret = f2fs_truncate_hole(src_inode, 1438 1435 src + i, src + i + 1); ··· 1678 1675 goto out; 1679 1676 } 1680 1677 1681 - end_offset = ADDRS_PER_PAGE(dn.node_page, inode); 1678 + end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 1682 1679 end = min(pg_end, end_offset - dn.ofs_in_node + index); 1683 1680 1684 1681 ret = f2fs_do_zero_range(&dn, index, end); ··· 2467 2464 return ret; 2468 2465 } 2469 2466 2470 - static void f2fs_keep_noreuse_range(struct inode *inode, 2467 + static int f2fs_keep_noreuse_range(struct inode *inode, 2471 2468 loff_t offset, loff_t len) 2472 2469 { 2473 2470 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 2474 2471 u64 max_bytes = F2FS_BLK_TO_BYTES(max_file_blocks(inode)); 2475 2472 u64 start, end; 2473 + int ret = 0; 2476 2474 2477 2475 if (!S_ISREG(inode->i_mode)) 2478 - return; 2476 + return 0; 2479 2477 
2480 2478 if (offset >= max_bytes || len > max_bytes || 2481 2479 (offset + len) > max_bytes) 2482 - return; 2480 + return 0; 2483 2481 2484 2482 start = offset >> PAGE_SHIFT; 2485 2483 end = DIV_ROUND_UP(offset + len, PAGE_SIZE); ··· 2488 2484 inode_lock(inode); 2489 2485 if (f2fs_is_atomic_file(inode)) { 2490 2486 inode_unlock(inode); 2491 - return; 2487 + return 0; 2492 2488 } 2493 2489 2494 2490 spin_lock(&sbi->inode_lock[DONATE_INODE]); ··· 2497 2493 if (!list_empty(&F2FS_I(inode)->gdonate_list)) { 2498 2494 list_del_init(&F2FS_I(inode)->gdonate_list); 2499 2495 sbi->donate_files--; 2500 - } 2496 + if (is_inode_flag_set(inode, FI_DONATE_FINISHED)) 2497 + ret = -EALREADY; 2498 + else 2499 + set_inode_flag(inode, FI_DONATE_FINISHED); 2500 + } else 2501 + ret = -ENOENT; 2501 2502 } else { 2502 2503 if (list_empty(&F2FS_I(inode)->gdonate_list)) { 2503 2504 list_add_tail(&F2FS_I(inode)->gdonate_list, ··· 2514 2505 } 2515 2506 F2FS_I(inode)->donate_start = start; 2516 2507 F2FS_I(inode)->donate_end = end - 1; 2508 + clear_inode_flag(inode, FI_DONATE_FINISHED); 2517 2509 } 2518 2510 spin_unlock(&sbi->inode_lock[DONATE_INODE]); 2519 2511 inode_unlock(inode); 2512 + 2513 + return ret; 2520 2514 } 2521 2515 2522 2516 static int f2fs_ioc_fitrim(struct file *filp, unsigned long arg) ··· 2932 2920 idx = map.m_lblk; 2933 2921 while (idx < map.m_lblk + map.m_len && 2934 2922 cnt < BLKS_PER_SEG(sbi)) { 2935 - struct page *page; 2923 + struct folio *folio; 2936 2924 2937 - page = f2fs_get_lock_data_page(inode, idx, true); 2938 - if (IS_ERR(page)) { 2939 - err = PTR_ERR(page); 2925 + folio = f2fs_get_lock_data_folio(inode, idx, true); 2926 + if (IS_ERR(folio)) { 2927 + err = PTR_ERR(folio); 2940 2928 goto clear_out; 2941 2929 } 2942 2930 2943 - f2fs_wait_on_page_writeback(page, DATA, true, true); 2931 + f2fs_folio_wait_writeback(folio, DATA, true, true); 2944 2932 2945 - set_page_dirty(page); 2946 - set_page_private_gcing(page); 2947 - f2fs_put_page(page, 1); 2933 + 
folio_mark_dirty(folio); 2934 + set_page_private_gcing(&folio->page); 2935 + f2fs_folio_put(folio, true); 2948 2936 2949 2937 idx++; 2950 2938 cnt++; ··· 3723 3711 int i; 3724 3712 3725 3713 for (i = 0; i < count; i++) { 3726 - blkaddr = data_blkaddr(dn->inode, dn->node_page, 3714 + blkaddr = data_blkaddr(dn->inode, dn->node_folio, 3727 3715 dn->ofs_in_node + i); 3728 3716 3729 3717 if (!__is_valid_data_blkaddr(blkaddr)) ··· 3841 3829 break; 3842 3830 } 3843 3831 3844 - end_offset = ADDRS_PER_PAGE(dn.node_page, inode); 3832 + end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 3845 3833 count = min(end_offset - dn.ofs_in_node, last_idx - page_idx); 3846 3834 count = round_up(count, fi->i_cluster_size); 3847 3835 ··· 3892 3880 int i; 3893 3881 3894 3882 for (i = 0; i < count; i++) { 3895 - blkaddr = data_blkaddr(dn->inode, dn->node_page, 3883 + blkaddr = data_blkaddr(dn->inode, dn->node_folio, 3896 3884 dn->ofs_in_node + i); 3897 3885 3898 3886 if (!__is_valid_data_blkaddr(blkaddr)) ··· 3909 3897 int ret; 3910 3898 3911 3899 for (i = 0; i < cluster_size; i++) { 3912 - blkaddr = data_blkaddr(dn->inode, dn->node_page, 3900 + blkaddr = data_blkaddr(dn->inode, dn->node_folio, 3913 3901 dn->ofs_in_node + i); 3914 3902 3915 3903 if (i == 0) { ··· 4019 4007 break; 4020 4008 } 4021 4009 4022 - end_offset = ADDRS_PER_PAGE(dn.node_page, inode); 4010 + end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 4023 4011 count = min(end_offset - dn.ofs_in_node, last_idx - page_idx); 4024 4012 count = round_up(count, fi->i_cluster_size); 4025 4013 ··· 4183 4171 goto out; 4184 4172 } 4185 4173 4186 - end_offset = ADDRS_PER_PAGE(dn.node_page, inode); 4174 + end_offset = ADDRS_PER_PAGE(&dn.node_folio->page, inode); 4187 4175 count = min(end_offset - dn.ofs_in_node, pg_end - index); 4188 4176 for (i = 0; i < count; i++, index++, dn.ofs_in_node++) { 4189 4177 struct block_device *cur_bdev; ··· 4355 4343 { 4356 4344 DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, page_idx); 
4357 4345 struct address_space *mapping = inode->i_mapping; 4358 - struct page *page; 4346 + struct folio *folio; 4359 4347 pgoff_t redirty_idx = page_idx; 4360 - int i, page_len = 0, ret = 0; 4348 + int page_len = 0, ret = 0; 4361 4349 4362 4350 page_cache_ra_unbounded(&ractl, len, 0); 4363 4351 4364 - for (i = 0; i < len; i++, page_idx++) { 4365 - page = read_cache_page(mapping, page_idx, NULL, NULL); 4366 - if (IS_ERR(page)) { 4367 - ret = PTR_ERR(page); 4352 + do { 4353 + folio = read_cache_folio(mapping, page_idx, NULL, NULL); 4354 + if (IS_ERR(folio)) { 4355 + ret = PTR_ERR(folio); 4368 4356 break; 4369 4357 } 4370 - page_len++; 4371 - } 4358 + page_len += folio_nr_pages(folio) - (page_idx - folio->index); 4359 + page_idx = folio_next_index(folio); 4360 + } while (page_len < len); 4372 4361 4373 - for (i = 0; i < page_len; i++, redirty_idx++) { 4374 - page = find_lock_page(mapping, redirty_idx); 4362 + do { 4363 + folio = filemap_lock_folio(mapping, redirty_idx); 4375 4364 4376 - /* It will never fail, when page has pinned above */ 4377 - f2fs_bug_on(F2FS_I_SB(inode), !page); 4365 + /* It will never fail, when folio has pinned above */ 4366 + f2fs_bug_on(F2FS_I_SB(inode), IS_ERR(folio)); 4378 4367 4379 - f2fs_wait_on_page_writeback(page, DATA, true, true); 4368 + f2fs_folio_wait_writeback(folio, DATA, true, true); 4380 4369 4381 - set_page_dirty(page); 4382 - set_page_private_gcing(page); 4383 - f2fs_put_page(page, 1); 4384 - f2fs_put_page(page, 0); 4385 - } 4370 + folio_mark_dirty(folio); 4371 + set_page_private_gcing(&folio->page); 4372 + redirty_idx = folio_next_index(folio); 4373 + folio_unlock(folio); 4374 + folio_put_refs(folio, 2); 4375 + } while (redirty_idx < page_idx); 4386 4376 4387 4377 return ret; 4388 4378 } ··· 5250 5236 f2fs_compressed_file(inode))) 5251 5237 f2fs_invalidate_compress_pages(F2FS_I_SB(inode), inode->i_ino); 5252 5238 else if (advice == POSIX_FADV_NOREUSE) 5253 - f2fs_keep_noreuse_range(inode, offset, len); 5254 - return 0; 5239 
+ err = f2fs_keep_noreuse_range(inode, offset, len); 5240 + return err; 5255 5241 } 5256 5242 5257 5243 #ifdef CONFIG_COMPAT
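The `redirty_blocks()` hunk above replaces a per-page loop with a per-folio walk: a lookup at `page_idx` may land in the middle of a multi-page folio, so the covered-page count is advanced by `folio_nr_pages(folio) - (page_idx - folio->index)` and the index jumps to `folio_next_index(folio)`. Below is a toy userspace model of just that index arithmetic (not kernel code; `toy_folio`, `toy_nr_pages`, `toy_next_index`, and `walk_pages` are hypothetical stand-ins for the folio helpers):

```c
/* Toy model of the folio index accounting in redirty_blocks() above.
 * NOT kernel code: toy_* names are invented stand-ins. */
#include <assert.h>
#include <stddef.h>

struct toy_folio {
	unsigned long index;    /* first page index covered by this folio */
	unsigned long nr_pages; /* stand-in for folio_nr_pages() */
};

static unsigned long toy_nr_pages(const struct toy_folio *f)
{
	return f->nr_pages;
}

/* stand-in for folio_next_index(): first index past this folio */
static unsigned long toy_next_index(const struct toy_folio *f)
{
	return f->index + f->nr_pages;
}

/* Sample cache: two 4-page folios covering page indexes 0..7. */
static const struct toy_folio sample[2] = { { 0, 4 }, { 4, 4 } };

/* Walk at least `len` pages starting at start_idx, folio by folio;
 * mirrors: page_len += folio_nr_pages() - (page_idx - folio->index);
 *          page_idx  = folio_next_index(folio);                      */
unsigned long walk_pages(const struct toy_folio *folios, size_t n,
			 unsigned long start_idx, unsigned long len)
{
	unsigned long page_idx = start_idx, page_len = 0;
	size_t i = 0;

	do {
		/* find the folio covering page_idx (read_cache_folio() stand-in) */
		while (i < n && toy_next_index(&folios[i]) <= page_idx)
			i++;
		if (i == n)
			break;	/* ran off the cached range */
		page_len += toy_nr_pages(&folios[i]) -
			    (page_idx - folios[i].index);
		page_idx = toy_next_index(&folios[i]);
	} while (page_len < len);

	return page_len;
}
```

Note that, as in the kernel loop, starting mid-folio still consumes the whole rest of that folio, so the walk can cover more pages than `len` asked for.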
+72 -71
fs/f2fs/gc.c
··· 1045 1045 1046 1046 for (off = 0; off < usable_blks_in_seg; off++, entry++) { 1047 1047 nid_t nid = le32_to_cpu(entry->nid); 1048 - struct page *node_page; 1048 + struct folio *node_folio; 1049 1049 struct node_info ni; 1050 1050 int err; 1051 1051 ··· 1068 1068 } 1069 1069 1070 1070 /* phase == 2 */ 1071 - node_page = f2fs_get_node_page(sbi, nid); 1072 - if (IS_ERR(node_page)) 1071 + node_folio = f2fs_get_node_folio(sbi, nid); 1072 + if (IS_ERR(node_folio)) 1073 1073 continue; 1074 1074 1075 - /* block may become invalid during f2fs_get_node_page */ 1075 + /* block may become invalid during f2fs_get_node_folio */ 1076 1076 if (check_valid_map(sbi, segno, off) == 0) { 1077 - f2fs_put_page(node_page, 1); 1077 + f2fs_folio_put(node_folio, true); 1078 1078 continue; 1079 1079 } 1080 1080 1081 1081 if (f2fs_get_node_info(sbi, nid, &ni, false)) { 1082 - f2fs_put_page(node_page, 1); 1082 + f2fs_folio_put(node_folio, true); 1083 1083 continue; 1084 1084 } 1085 1085 1086 1086 if (ni.blk_addr != start_addr + off) { 1087 - f2fs_put_page(node_page, 1); 1087 + f2fs_folio_put(node_folio, true); 1088 1088 continue; 1089 1089 } 1090 1090 1091 - err = f2fs_move_node_page(node_page, gc_type); 1091 + err = f2fs_move_node_folio(node_folio, gc_type); 1092 1092 if (!err && gc_type == FG_GC) 1093 1093 submitted++; 1094 1094 stat_inc_node_blk_count(sbi, 1, gc_type); ··· 1134 1134 static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum, 1135 1135 struct node_info *dni, block_t blkaddr, unsigned int *nofs) 1136 1136 { 1137 - struct page *node_page; 1137 + struct folio *node_folio; 1138 1138 nid_t nid; 1139 1139 unsigned int ofs_in_node, max_addrs, base; 1140 1140 block_t source_blkaddr; ··· 1142 1142 nid = le32_to_cpu(sum->nid); 1143 1143 ofs_in_node = le16_to_cpu(sum->ofs_in_node); 1144 1144 1145 - node_page = f2fs_get_node_page(sbi, nid); 1146 - if (IS_ERR(node_page)) 1145 + node_folio = f2fs_get_node_folio(sbi, nid); 1146 + if (IS_ERR(node_folio)) 1147 1147 return 
false; 1148 1148 1149 1149 if (f2fs_get_node_info(sbi, nid, dni, false)) { 1150 - f2fs_put_page(node_page, 1); 1150 + f2fs_folio_put(node_folio, true); 1151 1151 return false; 1152 1152 } 1153 1153 ··· 1158 1158 } 1159 1159 1160 1160 if (f2fs_check_nid_range(sbi, dni->ino)) { 1161 - f2fs_put_page(node_page, 1); 1161 + f2fs_folio_put(node_folio, true); 1162 1162 return false; 1163 1163 } 1164 1164 1165 - if (IS_INODE(node_page)) { 1166 - base = offset_in_addr(F2FS_INODE(node_page)); 1165 + if (IS_INODE(&node_folio->page)) { 1166 + base = offset_in_addr(F2FS_INODE(&node_folio->page)); 1167 1167 max_addrs = DEF_ADDRS_PER_INODE; 1168 1168 } else { 1169 1169 base = 0; ··· 1173 1173 if (base + ofs_in_node >= max_addrs) { 1174 1174 f2fs_err(sbi, "Inconsistent blkaddr offset: base:%u, ofs_in_node:%u, max:%u, ino:%u, nid:%u", 1175 1175 base, ofs_in_node, max_addrs, dni->ino, dni->nid); 1176 - f2fs_put_page(node_page, 1); 1176 + f2fs_folio_put(node_folio, true); 1177 1177 return false; 1178 1178 } 1179 1179 1180 - *nofs = ofs_of_node(node_page); 1181 - source_blkaddr = data_blkaddr(NULL, node_page, ofs_in_node); 1182 - f2fs_put_page(node_page, 1); 1180 + *nofs = ofs_of_node(&node_folio->page); 1181 + source_blkaddr = data_blkaddr(NULL, node_folio, ofs_in_node); 1182 + f2fs_folio_put(node_folio, true); 1183 1183 1184 1184 if (source_blkaddr != blkaddr) { 1185 1185 #ifdef CONFIG_F2FS_CHECK_FS ··· 1205 1205 struct address_space *mapping = f2fs_is_cow_file(inode) ? 
1206 1206 F2FS_I(inode)->atomic_inode->i_mapping : inode->i_mapping; 1207 1207 struct dnode_of_data dn; 1208 - struct page *page; 1208 + struct folio *folio; 1209 1209 struct f2fs_io_info fio = { 1210 1210 .sbi = sbi, 1211 1211 .ino = inode->i_ino, ··· 1218 1218 }; 1219 1219 int err; 1220 1220 1221 - page = f2fs_grab_cache_page(mapping, index, true); 1222 - if (!page) 1223 - return -ENOMEM; 1221 + folio = f2fs_grab_cache_folio(mapping, index, true); 1222 + if (IS_ERR(folio)) 1223 + return PTR_ERR(folio); 1224 1224 1225 1225 if (f2fs_lookup_read_extent_cache_block(inode, index, 1226 1226 &dn.data_blkaddr)) { 1227 1227 if (unlikely(!f2fs_is_valid_blkaddr(sbi, dn.data_blkaddr, 1228 1228 DATA_GENERIC_ENHANCE_READ))) { 1229 1229 err = -EFSCORRUPTED; 1230 - goto put_page; 1230 + goto put_folio; 1231 1231 } 1232 1232 goto got_it; 1233 1233 } ··· 1235 1235 set_new_dnode(&dn, inode, NULL, NULL, 0); 1236 1236 err = f2fs_get_dnode_of_data(&dn, index, LOOKUP_NODE); 1237 1237 if (err) 1238 - goto put_page; 1238 + goto put_folio; 1239 1239 f2fs_put_dnode(&dn); 1240 1240 1241 1241 if (!__is_valid_data_blkaddr(dn.data_blkaddr)) { 1242 1242 err = -ENOENT; 1243 - goto put_page; 1243 + goto put_folio; 1244 1244 } 1245 1245 if (unlikely(!f2fs_is_valid_blkaddr(sbi, dn.data_blkaddr, 1246 1246 DATA_GENERIC_ENHANCE))) { 1247 1247 err = -EFSCORRUPTED; 1248 - goto put_page; 1248 + goto put_folio; 1249 1249 } 1250 1250 got_it: 1251 - /* read page */ 1252 - fio.page = page; 1251 + /* read folio */ 1252 + fio.page = &folio->page; 1253 1253 fio.new_blkaddr = fio.old_blkaddr = dn.data_blkaddr; 1254 1254 1255 1255 /* 1256 1256 * don't cache encrypted data into meta inode until previous dirty 1257 1257 * data were writebacked to avoid racing between GC and flush. 
1258 1258 */ 1259 - f2fs_wait_on_page_writeback(page, DATA, true, true); 1259 + f2fs_folio_wait_writeback(folio, DATA, true, true); 1260 1260 1261 1261 f2fs_wait_on_block_writeback(inode, dn.data_blkaddr); 1262 1262 ··· 1265 1265 FGP_LOCK | FGP_CREAT, GFP_NOFS); 1266 1266 if (!fio.encrypted_page) { 1267 1267 err = -ENOMEM; 1268 - goto put_page; 1268 + goto put_folio; 1269 1269 } 1270 1270 1271 1271 err = f2fs_submit_page_bio(&fio); 1272 1272 if (err) 1273 1273 goto put_encrypted_page; 1274 1274 f2fs_put_page(fio.encrypted_page, 0); 1275 - f2fs_put_page(page, 1); 1275 + f2fs_folio_put(folio, true); 1276 1276 1277 1277 f2fs_update_iostat(sbi, inode, FS_DATA_READ_IO, F2FS_BLKSIZE); 1278 1278 f2fs_update_iostat(sbi, NULL, FS_GDATA_READ_IO, F2FS_BLKSIZE); ··· 1280 1280 return 0; 1281 1281 put_encrypted_page: 1282 1282 f2fs_put_page(fio.encrypted_page, 1); 1283 - put_page: 1284 - f2fs_put_page(page, 1); 1283 + put_folio: 1284 + f2fs_folio_put(folio, true); 1285 1285 return err; 1286 1286 } 1287 1287 ··· 1307 1307 struct dnode_of_data dn; 1308 1308 struct f2fs_summary sum; 1309 1309 struct node_info ni; 1310 - struct page *page, *mpage; 1310 + struct folio *folio, *mfolio; 1311 1311 block_t newaddr; 1312 1312 int err = 0; 1313 1313 bool lfs_mode = f2fs_lfs_mode(fio.sbi); ··· 1316 1316 CURSEG_ALL_DATA_ATGC : CURSEG_COLD_DATA; 1317 1317 1318 1318 /* do not read out */ 1319 - page = f2fs_grab_cache_page(mapping, bidx, false); 1320 - if (!page) 1321 - return -ENOMEM; 1319 + folio = f2fs_grab_cache_folio(mapping, bidx, false); 1320 + if (IS_ERR(folio)) 1321 + return PTR_ERR(folio); 1322 1322 1323 1323 if (!check_valid_map(F2FS_I_SB(inode), segno, off)) { 1324 1324 err = -ENOENT; ··· 1335 1335 goto out; 1336 1336 1337 1337 if (unlikely(dn.data_blkaddr == NULL_ADDR)) { 1338 - ClearPageUptodate(page); 1338 + folio_clear_uptodate(folio); 1339 1339 err = -ENOENT; 1340 1340 goto put_out; 1341 1341 } ··· 1344 1344 * don't cache encrypted data into meta inode until previous dirty 1345 
1345 * data were writebacked to avoid racing between GC and flush. 1346 1346 */ 1347 - f2fs_wait_on_page_writeback(page, DATA, true, true); 1347 + f2fs_folio_wait_writeback(folio, DATA, true, true); 1348 1348 1349 1349 f2fs_wait_on_block_writeback(inode, dn.data_blkaddr); 1350 1350 ··· 1353 1353 goto put_out; 1354 1354 1355 1355 /* read page */ 1356 - fio.page = page; 1356 + fio.page = &folio->page; 1357 1357 fio.new_blkaddr = fio.old_blkaddr = dn.data_blkaddr; 1358 1358 1359 1359 if (lfs_mode) 1360 1360 f2fs_down_write(&fio.sbi->io_order_lock); 1361 1361 1362 - mpage = f2fs_grab_cache_page(META_MAPPING(fio.sbi), 1362 + mfolio = f2fs_grab_cache_folio(META_MAPPING(fio.sbi), 1363 1363 fio.old_blkaddr, false); 1364 - if (!mpage) { 1365 - err = -ENOMEM; 1364 + if (IS_ERR(mfolio)) { 1365 + err = PTR_ERR(mfolio); 1366 1366 goto up_out; 1367 1367 } 1368 1368 1369 - fio.encrypted_page = mpage; 1369 + fio.encrypted_page = folio_file_page(mfolio, fio.old_blkaddr); 1370 1370 1371 - /* read source block in mpage */ 1372 - if (!PageUptodate(mpage)) { 1371 + /* read source block in mfolio */ 1372 + if (!folio_test_uptodate(mfolio)) { 1373 1373 err = f2fs_submit_page_bio(&fio); 1374 1374 if (err) { 1375 - f2fs_put_page(mpage, 1); 1375 + f2fs_folio_put(mfolio, true); 1376 1376 goto up_out; 1377 1377 } 1378 1378 ··· 1381 1381 f2fs_update_iostat(fio.sbi, NULL, FS_GDATA_READ_IO, 1382 1382 F2FS_BLKSIZE); 1383 1383 1384 - lock_page(mpage); 1385 - if (unlikely(mpage->mapping != META_MAPPING(fio.sbi) || 1386 - !PageUptodate(mpage))) { 1384 + folio_lock(mfolio); 1385 + if (unlikely(!is_meta_folio(mfolio) || 1386 + !folio_test_uptodate(mfolio))) { 1387 1387 err = -EIO; 1388 - f2fs_put_page(mpage, 1); 1388 + f2fs_folio_put(mfolio, true); 1389 1389 goto up_out; 1390 1390 } 1391 1391 } ··· 1396 1396 err = f2fs_allocate_data_block(fio.sbi, NULL, fio.old_blkaddr, &newaddr, 1397 1397 &sum, type, NULL); 1398 1398 if (err) { 1399 - f2fs_put_page(mpage, 1); 1399 + f2fs_folio_put(mfolio, true); 1400 
1400 /* filesystem should shutdown, no need to recovery block */ 1401 1401 goto up_out; 1402 1402 } ··· 1405 1405 newaddr, FGP_LOCK | FGP_CREAT, GFP_NOFS); 1406 1406 if (!fio.encrypted_page) { 1407 1407 err = -ENOMEM; 1408 - f2fs_put_page(mpage, 1); 1408 + f2fs_folio_put(mfolio, true); 1409 1409 goto recover_block; 1410 1410 } 1411 1411 1412 1412 /* write target block */ 1413 1413 f2fs_wait_on_page_writeback(fio.encrypted_page, DATA, true, true); 1414 1414 memcpy(page_address(fio.encrypted_page), 1415 - page_address(mpage), PAGE_SIZE); 1416 - f2fs_put_page(mpage, 1); 1415 + folio_address(mfolio), PAGE_SIZE); 1416 + f2fs_folio_put(mfolio, true); 1417 1417 1418 1418 f2fs_invalidate_internal_cache(fio.sbi, fio.old_blkaddr, 1); 1419 1419 ··· 1444 1444 put_out: 1445 1445 f2fs_put_dnode(&dn); 1446 1446 out: 1447 - f2fs_put_page(page, 1); 1447 + f2fs_folio_put(folio, true); 1448 1448 return err; 1449 1449 } 1450 1450 ··· 1718 1718 struct gc_inode_list *gc_list, int gc_type, 1719 1719 bool force_migrate, bool one_time) 1720 1720 { 1721 - struct page *sum_page; 1722 - struct f2fs_summary_block *sum; 1723 1721 struct blk_plug plug; 1724 1722 unsigned int segno = start_segno; 1725 1723 unsigned int end_segno = start_segno + SEGS_PER_SEC(sbi); ··· 1767 1769 1768 1770 /* reference all summary page */ 1769 1771 while (segno < end_segno) { 1770 - sum_page = f2fs_get_sum_page(sbi, segno++); 1771 - if (IS_ERR(sum_page)) { 1772 - int err = PTR_ERR(sum_page); 1772 + struct folio *sum_folio = f2fs_get_sum_folio(sbi, segno++); 1773 + if (IS_ERR(sum_folio)) { 1774 + int err = PTR_ERR(sum_folio); 1773 1775 1774 1776 end_segno = segno - 1; 1775 1777 for (segno = start_segno; segno < end_segno; segno++) { 1776 - sum_page = find_get_page(META_MAPPING(sbi), 1778 + sum_folio = filemap_get_folio(META_MAPPING(sbi), 1777 1779 GET_SUM_BLOCK(sbi, segno)); 1778 - f2fs_put_page(sum_page, 0); 1779 - f2fs_put_page(sum_page, 0); 1780 + folio_put_refs(sum_folio, 2); 1780 1781 } 1781 1782 return err; 
1782 1783 } 1783 - unlock_page(sum_page); 1784 + folio_unlock(sum_folio); 1784 1785 } 1785 1786 1786 1787 blk_start_plug(&plug); 1787 1788 1788 1789 for (segno = start_segno; segno < end_segno; segno++) { 1790 + struct f2fs_summary_block *sum; 1789 1791 1790 1792 /* find segment summary of victim */ 1791 - sum_page = find_get_page(META_MAPPING(sbi), 1793 + struct folio *sum_folio = filemap_get_folio(META_MAPPING(sbi), 1792 1794 GET_SUM_BLOCK(sbi, segno)); 1793 - f2fs_put_page(sum_page, 0); 1794 1795 1795 1796 if (get_valid_blocks(sbi, segno, false) == 0) 1796 1797 goto freed; 1797 1798 if (gc_type == BG_GC && __is_large_section(sbi) && 1798 1799 migrated >= sbi->migration_granularity) 1799 1800 goto skip; 1800 - if (!PageUptodate(sum_page) || unlikely(f2fs_cp_error(sbi))) 1801 + if (!folio_test_uptodate(sum_folio) || 1802 + unlikely(f2fs_cp_error(sbi))) 1801 1803 goto skip; 1802 1804 1803 - sum = page_address(sum_page); 1805 + sum = folio_address(sum_folio); 1804 1806 if (type != GET_SUM_TYPE((&sum->footer))) { 1805 1807 f2fs_err(sbi, "Inconsistent segment (%u) type [%d, %d] in SSA and SIT", 1806 1808 segno, type, GET_SUM_TYPE((&sum->footer))); ··· 1838 1840 (segno + 1 < sec_end_segno) ? 1839 1841 segno + 1 : NULL_SEGNO; 1840 1842 skip: 1841 - f2fs_put_page(sum_page, 0); 1843 + folio_put_refs(sum_folio, 2); 1842 1844 } 1843 1845 1844 1846 if (submitted) ··· 2063 2065 .ilist = LIST_HEAD_INIT(gc_list.ilist), 2064 2066 .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS), 2065 2067 }; 2068 + 2069 + if (IS_CURSEC(sbi, GET_SEC_FROM_SEG(sbi, segno))) 2070 + continue; 2066 2071 2067 2072 do_garbage_collect(sbi, segno, &gc_list, FG_GC, true, false); 2068 2073 put_gc_inode(&gc_list);
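The `do_garbage_collect()` hunks above change the summary-block reference accounting: the prefetch pass pins each summary folio once via `f2fs_get_sum_folio()`, the processing pass takes a second reference with `filemap_get_folio()`, and both are released in one step with `folio_put_refs(sum_folio, 2)` instead of the old pair of `f2fs_put_page(sum_page, 0)` calls. A minimal toy refcount model of that pattern (not kernel code; the `toy_*` names and `gc_refcount_after` are invented for illustration):

```c
/* Toy model of the two-ref pin/drop pattern in do_garbage_collect() above.
 * NOT kernel code: toy_* names are invented stand-ins. */
#include <assert.h>

struct toy_folio {
	int refcount;
};

/* stand-in for a ref-taking lookup such as filemap_get_folio() */
static void toy_get(struct toy_folio *f)
{
	f->refcount++;
}

/* stand-in for folio_put_refs(): drop n references at once */
static void toy_put_refs(struct toy_folio *f, int n)
{
	f->refcount -= n;
}

/* Simulate one segment's summary folio through both GC passes and
 * return the final refcount; it must equal the initial count. */
int gc_refcount_after(int initial)
{
	struct toy_folio f = { .refcount = initial };

	toy_get(&f);		/* prefetch pass: f2fs_get_sum_folio() pins it */
	toy_get(&f);		/* processing pass: filemap_get_folio() re-pins */
	toy_put_refs(&f, 2);	/* single drop replaces two f2fs_put_page() calls */

	return f.refcount;
}
```

The single `folio_put_refs(folio, 2)` is purely an accounting shortcut: it leaves the folio's refcount exactly where it started, the same invariant the two separate puts maintained.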
+155 -155
fs/f2fs/inline.c
··· 79 79 return true; 80 80 } 81 81 82 - void f2fs_do_read_inline_data(struct folio *folio, struct page *ipage) 82 + void f2fs_do_read_inline_data(struct folio *folio, struct folio *ifolio) 83 83 { 84 84 struct inode *inode = folio->mapping->host; 85 85 86 86 if (folio_test_uptodate(folio)) 87 87 return; 88 88 89 - f2fs_bug_on(F2FS_I_SB(inode), folio_index(folio)); 89 + f2fs_bug_on(F2FS_I_SB(inode), folio->index); 90 90 91 91 folio_zero_segment(folio, MAX_INLINE_DATA(inode), folio_size(folio)); 92 92 93 93 /* Copy the whole inline data block */ 94 - memcpy_to_folio(folio, 0, inline_data_addr(inode, ipage), 94 + memcpy_to_folio(folio, 0, inline_data_addr(inode, ifolio), 95 95 MAX_INLINE_DATA(inode)); 96 96 if (!folio_test_uptodate(folio)) 97 97 folio_mark_uptodate(folio); 98 98 } 99 99 100 - void f2fs_truncate_inline_inode(struct inode *inode, 101 - struct page *ipage, u64 from) 100 + void f2fs_truncate_inline_inode(struct inode *inode, struct folio *ifolio, 101 + u64 from) 102 102 { 103 103 void *addr; 104 104 105 105 if (from >= MAX_INLINE_DATA(inode)) 106 106 return; 107 107 108 - addr = inline_data_addr(inode, ipage); 108 + addr = inline_data_addr(inode, ifolio); 109 109 110 - f2fs_wait_on_page_writeback(ipage, NODE, true, true); 110 + f2fs_folio_wait_writeback(ifolio, NODE, true, true); 111 111 memset(addr + from, 0, MAX_INLINE_DATA(inode) - from); 112 - set_page_dirty(ipage); 112 + folio_mark_dirty(ifolio); 113 113 114 114 if (from == 0) 115 115 clear_inode_flag(inode, FI_DATA_EXIST); ··· 117 117 118 118 int f2fs_read_inline_data(struct inode *inode, struct folio *folio) 119 119 { 120 - struct page *ipage; 120 + struct folio *ifolio; 121 121 122 - ipage = f2fs_get_inode_page(F2FS_I_SB(inode), inode->i_ino); 123 - if (IS_ERR(ipage)) { 122 + ifolio = f2fs_get_inode_folio(F2FS_I_SB(inode), inode->i_ino); 123 + if (IS_ERR(ifolio)) { 124 124 folio_unlock(folio); 125 - return PTR_ERR(ipage); 125 + return PTR_ERR(ifolio); 126 126 } 127 127 128 128 if 
(!f2fs_has_inline_data(inode)) { 129 - f2fs_put_page(ipage, 1); 129 + f2fs_folio_put(ifolio, true); 130 130 return -EAGAIN; 131 131 } 132 132 133 - if (folio_index(folio)) 133 + if (folio->index) 134 134 folio_zero_segment(folio, 0, folio_size(folio)); 135 135 else 136 - f2fs_do_read_inline_data(folio, ipage); 136 + f2fs_do_read_inline_data(folio, ifolio); 137 137 138 138 if (!folio_test_uptodate(folio)) 139 139 folio_mark_uptodate(folio); 140 - f2fs_put_page(ipage, 1); 140 + f2fs_folio_put(ifolio, true); 141 141 folio_unlock(folio); 142 142 return 0; 143 143 } 144 144 145 - int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page) 145 + int f2fs_convert_inline_folio(struct dnode_of_data *dn, struct folio *folio) 146 146 { 147 147 struct f2fs_io_info fio = { 148 148 .sbi = F2FS_I_SB(dn->inode), ··· 150 150 .type = DATA, 151 151 .op = REQ_OP_WRITE, 152 152 .op_flags = REQ_SYNC | REQ_PRIO, 153 - .page = page, 153 + .page = &folio->page, 154 154 .encrypted_page = NULL, 155 155 .io_type = FS_DATA_IO, 156 156 }; ··· 182 182 return -EFSCORRUPTED; 183 183 } 184 184 185 - f2fs_bug_on(F2FS_P_SB(page), folio_test_writeback(page_folio(page))); 185 + f2fs_bug_on(F2FS_F_SB(folio), folio_test_writeback(folio)); 186 186 187 - f2fs_do_read_inline_data(page_folio(page), dn->inode_page); 188 - set_page_dirty(page); 187 + f2fs_do_read_inline_data(folio, dn->inode_folio); 188 + folio_mark_dirty(folio); 189 189 190 190 /* clear dirty state */ 191 - dirty = clear_page_dirty_for_io(page); 191 + dirty = folio_clear_dirty_for_io(folio); 192 192 193 193 /* write data page to try to make data consistent */ 194 - set_page_writeback(page); 194 + folio_start_writeback(folio); 195 195 fio.old_blkaddr = dn->data_blkaddr; 196 196 set_inode_flag(dn->inode, FI_HOT_DATA); 197 197 f2fs_outplace_write_data(dn, &fio); 198 - f2fs_wait_on_page_writeback(page, DATA, true, true); 198 + f2fs_folio_wait_writeback(folio, DATA, true, true); 199 199 if (dirty) { 200 200 
inode_dec_dirty_pages(dn->inode); 201 201 f2fs_remove_dirty_inode(dn->inode); ··· 205 205 set_inode_flag(dn->inode, FI_APPEND_WRITE); 206 206 207 207 /* clear inline data and flag after data writeback */ 208 - f2fs_truncate_inline_inode(dn->inode, dn->inode_page, 0); 209 - clear_page_private_inline(dn->inode_page); 208 + f2fs_truncate_inline_inode(dn->inode, dn->inode_folio, 0); 209 + clear_page_private_inline(&dn->inode_folio->page); 210 210 clear_out: 211 211 stat_dec_inline_inode(dn->inode); 212 212 clear_inode_flag(dn->inode, FI_INLINE_DATA); ··· 218 218 { 219 219 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 220 220 struct dnode_of_data dn; 221 - struct page *ipage, *page; 221 + struct folio *ifolio, *folio; 222 222 int err = 0; 223 223 224 224 if (f2fs_hw_is_readonly(sbi) || f2fs_readonly(sbi->sb)) ··· 231 231 if (err) 232 232 return err; 233 233 234 - page = f2fs_grab_cache_page(inode->i_mapping, 0, false); 235 - if (!page) 236 - return -ENOMEM; 234 + folio = f2fs_grab_cache_folio(inode->i_mapping, 0, false); 235 + if (IS_ERR(folio)) 236 + return PTR_ERR(folio); 237 237 238 238 f2fs_lock_op(sbi); 239 239 240 - ipage = f2fs_get_inode_page(sbi, inode->i_ino); 241 - if (IS_ERR(ipage)) { 242 - err = PTR_ERR(ipage); 240 + ifolio = f2fs_get_inode_folio(sbi, inode->i_ino); 241 + if (IS_ERR(ifolio)) { 242 + err = PTR_ERR(ifolio); 243 243 goto out; 244 244 } 245 245 246 - set_new_dnode(&dn, inode, ipage, ipage, 0); 246 + set_new_dnode(&dn, inode, ifolio, ifolio, 0); 247 247 248 248 if (f2fs_has_inline_data(inode)) 249 - err = f2fs_convert_inline_page(&dn, page); 249 + err = f2fs_convert_inline_folio(&dn, folio); 250 250 251 251 f2fs_put_dnode(&dn); 252 252 out: 253 253 f2fs_unlock_op(sbi); 254 254 255 - f2fs_put_page(page, 1); 255 + f2fs_folio_put(folio, true); 256 256 257 257 if (!err) 258 258 f2fs_balance_fs(sbi, dn.node_changed); ··· 263 263 int f2fs_write_inline_data(struct inode *inode, struct folio *folio) 264 264 { 265 265 struct f2fs_sb_info *sbi = 
F2FS_I_SB(inode); 266 - struct page *ipage; 266 + struct folio *ifolio; 267 267 268 - ipage = f2fs_get_inode_page(sbi, inode->i_ino); 269 - if (IS_ERR(ipage)) 270 - return PTR_ERR(ipage); 268 + ifolio = f2fs_get_inode_folio(sbi, inode->i_ino); 269 + if (IS_ERR(ifolio)) 270 + return PTR_ERR(ifolio); 271 271 272 272 if (!f2fs_has_inline_data(inode)) { 273 - f2fs_put_page(ipage, 1); 273 + f2fs_folio_put(ifolio, true); 274 274 return -EAGAIN; 275 275 } 276 276 277 277 f2fs_bug_on(F2FS_I_SB(inode), folio->index); 278 278 279 - f2fs_wait_on_page_writeback(ipage, NODE, true, true); 280 - memcpy_from_folio(inline_data_addr(inode, ipage), 279 + f2fs_folio_wait_writeback(ifolio, NODE, true, true); 280 + memcpy_from_folio(inline_data_addr(inode, ifolio), 281 281 folio, 0, MAX_INLINE_DATA(inode)); 282 - set_page_dirty(ipage); 282 + folio_mark_dirty(ifolio); 283 283 284 284 f2fs_clear_page_cache_dirty_tag(folio); 285 285 286 286 set_inode_flag(inode, FI_APPEND_WRITE); 287 287 set_inode_flag(inode, FI_DATA_EXIST); 288 288 289 - clear_page_private_inline(ipage); 290 - f2fs_put_page(ipage, 1); 289 + clear_page_private_inline(&ifolio->page); 290 + f2fs_folio_put(ifolio, 1); 291 291 return 0; 292 292 } 293 293 294 - int f2fs_recover_inline_data(struct inode *inode, struct page *npage) 294 + int f2fs_recover_inline_data(struct inode *inode, struct folio *nfolio) 295 295 { 296 296 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 297 297 struct f2fs_inode *ri = NULL; 298 298 void *src_addr, *dst_addr; 299 - struct page *ipage; 300 299 301 300 /* 302 301 * The inline_data recovery policy is as follows. 
··· 305 306 * x o -> remove data blocks, and then recover inline_data 306 307 * x x -> recover data blocks 307 308 */ 308 - if (IS_INODE(npage)) 309 - ri = F2FS_INODE(npage); 309 + if (IS_INODE(&nfolio->page)) 310 + ri = F2FS_INODE(&nfolio->page); 310 311 311 312 if (f2fs_has_inline_data(inode) && 312 313 ri && (ri->i_inline & F2FS_INLINE_DATA)) { 314 + struct folio *ifolio; 313 315 process_inline: 314 - ipage = f2fs_get_inode_page(sbi, inode->i_ino); 315 - if (IS_ERR(ipage)) 316 - return PTR_ERR(ipage); 316 + ifolio = f2fs_get_inode_folio(sbi, inode->i_ino); 317 + if (IS_ERR(ifolio)) 318 + return PTR_ERR(ifolio); 317 319 318 - f2fs_wait_on_page_writeback(ipage, NODE, true, true); 320 + f2fs_folio_wait_writeback(ifolio, NODE, true, true); 319 321 320 - src_addr = inline_data_addr(inode, npage); 321 - dst_addr = inline_data_addr(inode, ipage); 322 + src_addr = inline_data_addr(inode, nfolio); 323 + dst_addr = inline_data_addr(inode, ifolio); 322 324 memcpy(dst_addr, src_addr, MAX_INLINE_DATA(inode)); 323 325 324 326 set_inode_flag(inode, FI_INLINE_DATA); 325 327 set_inode_flag(inode, FI_DATA_EXIST); 326 328 327 - set_page_dirty(ipage); 328 - f2fs_put_page(ipage, 1); 329 + folio_mark_dirty(ifolio); 330 + f2fs_folio_put(ifolio, true); 329 331 return 1; 330 332 } 331 333 332 334 if (f2fs_has_inline_data(inode)) { 333 - ipage = f2fs_get_inode_page(sbi, inode->i_ino); 334 - if (IS_ERR(ipage)) 335 - return PTR_ERR(ipage); 336 - f2fs_truncate_inline_inode(inode, ipage, 0); 335 + struct folio *ifolio = f2fs_get_inode_folio(sbi, inode->i_ino); 336 + if (IS_ERR(ifolio)) 337 + return PTR_ERR(ifolio); 338 + f2fs_truncate_inline_inode(inode, ifolio, 0); 337 339 stat_dec_inline_inode(inode); 338 340 clear_inode_flag(inode, FI_INLINE_DATA); 339 - f2fs_put_page(ipage, 1); 341 + f2fs_folio_put(ifolio, true); 340 342 } else if (ri && (ri->i_inline & F2FS_INLINE_DATA)) { 341 343 int ret; 342 344 ··· 352 352 353 353 struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir, 354 
354 const struct f2fs_filename *fname, 355 - struct page **res_page, 355 + struct folio **res_folio, 356 356 bool use_hash) 357 357 { 358 358 struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb); 359 359 struct f2fs_dir_entry *de; 360 360 struct f2fs_dentry_ptr d; 361 - struct page *ipage; 361 + struct folio *ifolio; 362 362 void *inline_dentry; 363 363 364 - ipage = f2fs_get_inode_page(sbi, dir->i_ino); 365 - if (IS_ERR(ipage)) { 366 - *res_page = ipage; 364 + ifolio = f2fs_get_inode_folio(sbi, dir->i_ino); 365 + if (IS_ERR(ifolio)) { 366 + *res_folio = ifolio; 367 367 return NULL; 368 368 } 369 369 370 - inline_dentry = inline_data_addr(dir, ipage); 370 + inline_dentry = inline_data_addr(dir, ifolio); 371 371 372 372 make_dentry_ptr_inline(dir, &d, inline_dentry); 373 373 de = f2fs_find_target_dentry(&d, fname, NULL, use_hash); 374 - unlock_page(ipage); 374 + folio_unlock(ifolio); 375 375 if (IS_ERR(de)) { 376 - *res_page = ERR_CAST(de); 376 + *res_folio = ERR_CAST(de); 377 377 de = NULL; 378 378 } 379 379 if (de) 380 - *res_page = ipage; 380 + *res_folio = ifolio; 381 381 else 382 - f2fs_put_page(ipage, 0); 382 + f2fs_folio_put(ifolio, false); 383 383 384 384 return de; 385 385 } 386 386 387 387 int f2fs_make_empty_inline_dir(struct inode *inode, struct inode *parent, 388 - struct page *ipage) 388 + struct folio *ifolio) 389 389 { 390 390 struct f2fs_dentry_ptr d; 391 391 void *inline_dentry; 392 392 393 - inline_dentry = inline_data_addr(inode, ipage); 393 + inline_dentry = inline_data_addr(inode, ifolio); 394 394 395 395 make_dentry_ptr_inline(inode, &d, inline_dentry); 396 396 f2fs_do_make_empty_dir(inode, parent, &d); 397 397 398 - set_page_dirty(ipage); 398 + folio_mark_dirty(ifolio); 399 399 400 400 /* update i_size to MAX_INLINE_DATA */ 401 401 if (i_size_read(inode) < MAX_INLINE_DATA(inode)) ··· 407 407 * NOTE: ipage is grabbed by caller, but if any error occurs, we should 408 408 * release ipage in this function. 
409 409 */ 410 - static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage, 410 + static int f2fs_move_inline_dirents(struct inode *dir, struct folio *ifolio, 411 411 void *inline_dentry) 412 412 { 413 - struct page *page; 413 + struct folio *folio; 414 414 struct dnode_of_data dn; 415 415 struct f2fs_dentry_block *dentry_blk; 416 416 struct f2fs_dentry_ptr src, dst; 417 417 int err; 418 418 419 - page = f2fs_grab_cache_page(dir->i_mapping, 0, true); 420 - if (!page) { 421 - f2fs_put_page(ipage, 1); 422 - return -ENOMEM; 419 + folio = f2fs_grab_cache_folio(dir->i_mapping, 0, true); 420 + if (IS_ERR(folio)) { 421 + f2fs_folio_put(ifolio, true); 422 + return PTR_ERR(folio); 423 423 } 424 424 425 - set_new_dnode(&dn, dir, ipage, NULL, 0); 425 + set_new_dnode(&dn, dir, ifolio, NULL, 0); 426 426 err = f2fs_reserve_block(&dn, 0); 427 427 if (err) 428 428 goto out; 429 429 430 430 if (unlikely(dn.data_blkaddr != NEW_ADDR)) { 431 431 f2fs_put_dnode(&dn); 432 - set_sbi_flag(F2FS_P_SB(page), SBI_NEED_FSCK); 433 - f2fs_warn(F2FS_P_SB(page), "%s: corrupted inline inode ino=%lx, i_addr[0]:0x%x, run fsck to fix.", 432 + set_sbi_flag(F2FS_F_SB(folio), SBI_NEED_FSCK); 433 + f2fs_warn(F2FS_F_SB(folio), "%s: corrupted inline inode ino=%lx, i_addr[0]:0x%x, run fsck to fix.", 434 434 __func__, dir->i_ino, dn.data_blkaddr); 435 - f2fs_handle_error(F2FS_P_SB(page), ERROR_INVALID_BLKADDR); 435 + f2fs_handle_error(F2FS_F_SB(folio), ERROR_INVALID_BLKADDR); 436 436 err = -EFSCORRUPTED; 437 437 goto out; 438 438 } 439 439 440 - f2fs_wait_on_page_writeback(page, DATA, true, true); 440 + f2fs_folio_wait_writeback(folio, DATA, true, true); 441 441 442 - dentry_blk = page_address(page); 442 + dentry_blk = folio_address(folio); 443 443 444 444 /* 445 445 * Start by zeroing the full block, to ensure that all unused space is ··· 455 455 memcpy(dst.dentry, src.dentry, SIZE_OF_DIR_ENTRY * src.max); 456 456 memcpy(dst.filename, src.filename, src.max * F2FS_SLOT_LEN); 457 457 458 - if 
(!PageUptodate(page)) 459 - SetPageUptodate(page); 460 - set_page_dirty(page); 458 + if (!folio_test_uptodate(folio)) 459 + folio_mark_uptodate(folio); 460 + folio_mark_dirty(folio); 461 461 462 462 /* clear inline dir and flag after data writeback */ 463 - f2fs_truncate_inline_inode(dir, ipage, 0); 463 + f2fs_truncate_inline_inode(dir, ifolio, 0); 464 464 465 465 stat_dec_inline_dir(dir); 466 466 clear_inode_flag(dir, FI_INLINE_DENTRY); ··· 477 477 if (i_size_read(dir) < PAGE_SIZE) 478 478 f2fs_i_size_write(dir, PAGE_SIZE); 479 479 out: 480 - f2fs_put_page(page, 1); 480 + f2fs_folio_put(folio, true); 481 481 return err; 482 482 } 483 483 ··· 533 533 return err; 534 534 } 535 535 536 - static int f2fs_move_rehashed_dirents(struct inode *dir, struct page *ipage, 536 + static int f2fs_move_rehashed_dirents(struct inode *dir, struct folio *ifolio, 537 537 void *inline_dentry) 538 538 { 539 539 void *backup_dentry; ··· 542 542 backup_dentry = f2fs_kmalloc(F2FS_I_SB(dir), 543 543 MAX_INLINE_DATA(dir), GFP_F2FS_ZERO); 544 544 if (!backup_dentry) { 545 - f2fs_put_page(ipage, 1); 545 + f2fs_folio_put(ifolio, true); 546 546 return -ENOMEM; 547 547 } 548 548 549 549 memcpy(backup_dentry, inline_dentry, MAX_INLINE_DATA(dir)); 550 - f2fs_truncate_inline_inode(dir, ipage, 0); 550 + f2fs_truncate_inline_inode(dir, ifolio, 0); 551 551 552 - unlock_page(ipage); 552 + folio_unlock(ifolio); 553 553 554 554 err = f2fs_add_inline_entries(dir, backup_dentry); 555 555 if (err) 556 556 goto recover; 557 557 558 - lock_page(ipage); 558 + folio_lock(ifolio); 559 559 560 560 stat_dec_inline_dir(dir); 561 561 clear_inode_flag(dir, FI_INLINE_DENTRY); ··· 571 571 kfree(backup_dentry); 572 572 return 0; 573 573 recover: 574 - lock_page(ipage); 575 - f2fs_wait_on_page_writeback(ipage, NODE, true, true); 574 + folio_lock(ifolio); 575 + f2fs_folio_wait_writeback(ifolio, NODE, true, true); 576 576 memcpy(inline_dentry, backup_dentry, MAX_INLINE_DATA(dir)); 577 577 f2fs_i_depth_write(dir, 0); 578 
578 f2fs_i_size_write(dir, MAX_INLINE_DATA(dir)); 579 - set_page_dirty(ipage); 580 - f2fs_put_page(ipage, 1); 579 + folio_mark_dirty(ifolio); 580 + f2fs_folio_put(ifolio, 1); 581 581 582 582 kfree(backup_dentry); 583 583 return err; 584 584 } 585 585 586 - static int do_convert_inline_dir(struct inode *dir, struct page *ipage, 586 + static int do_convert_inline_dir(struct inode *dir, struct folio *ifolio, 587 587 void *inline_dentry) 588 588 { 589 589 if (!F2FS_I(dir)->i_dir_level) 590 - return f2fs_move_inline_dirents(dir, ipage, inline_dentry); 590 + return f2fs_move_inline_dirents(dir, ifolio, inline_dentry); 591 591 else 592 - return f2fs_move_rehashed_dirents(dir, ipage, inline_dentry); 592 + return f2fs_move_rehashed_dirents(dir, ifolio, inline_dentry); 593 593 } 594 594 595 595 int f2fs_try_convert_inline_dir(struct inode *dir, struct dentry *dentry) 596 596 { 597 597 struct f2fs_sb_info *sbi = F2FS_I_SB(dir); 598 - struct page *ipage; 598 + struct folio *ifolio; 599 599 struct f2fs_filename fname; 600 600 void *inline_dentry = NULL; 601 601 int err = 0; ··· 609 609 if (err) 610 610 goto out; 611 611 612 - ipage = f2fs_get_inode_page(sbi, dir->i_ino); 613 - if (IS_ERR(ipage)) { 614 - err = PTR_ERR(ipage); 612 + ifolio = f2fs_get_inode_folio(sbi, dir->i_ino); 613 + if (IS_ERR(ifolio)) { 614 + err = PTR_ERR(ifolio); 615 615 goto out_fname; 616 616 } 617 617 618 - if (f2fs_has_enough_room(dir, ipage, &fname)) { 619 - f2fs_put_page(ipage, 1); 618 + if (f2fs_has_enough_room(dir, ifolio, &fname)) { 619 + f2fs_folio_put(ifolio, true); 620 620 goto out_fname; 621 621 } 622 622 623 - inline_dentry = inline_data_addr(dir, ipage); 623 + inline_dentry = inline_data_addr(dir, ifolio); 624 624 625 - err = do_convert_inline_dir(dir, ipage, inline_dentry); 625 + err = do_convert_inline_dir(dir, ifolio, inline_dentry); 626 626 if (!err) 627 - f2fs_put_page(ipage, 1); 627 + f2fs_folio_put(ifolio, true); 628 628 out_fname: 629 629 f2fs_free_filename(&fname); 630 630 out: ··· 
636 636 struct inode *inode, nid_t ino, umode_t mode) 637 637 { 638 638 struct f2fs_sb_info *sbi = F2FS_I_SB(dir); 639 - struct page *ipage; 639 + struct folio *ifolio; 640 640 unsigned int bit_pos; 641 641 void *inline_dentry = NULL; 642 642 struct f2fs_dentry_ptr d; 643 643 int slots = GET_DENTRY_SLOTS(fname->disk_name.len); 644 - struct page *page = NULL; 644 + struct folio *folio = NULL; 645 645 int err = 0; 646 646 647 - ipage = f2fs_get_inode_page(sbi, dir->i_ino); 648 - if (IS_ERR(ipage)) 649 - return PTR_ERR(ipage); 647 + ifolio = f2fs_get_inode_folio(sbi, dir->i_ino); 648 + if (IS_ERR(ifolio)) 649 + return PTR_ERR(ifolio); 650 650 651 - inline_dentry = inline_data_addr(dir, ipage); 651 + inline_dentry = inline_data_addr(dir, ifolio); 652 652 make_dentry_ptr_inline(dir, &d, inline_dentry); 653 653 654 654 bit_pos = f2fs_room_for_filename(d.bitmap, slots, d.max); 655 655 if (bit_pos >= d.max) { 656 - err = do_convert_inline_dir(dir, ipage, inline_dentry); 656 + err = do_convert_inline_dir(dir, ifolio, inline_dentry); 657 657 if (err) 658 658 return err; 659 659 err = -EAGAIN; ··· 663 663 if (inode) { 664 664 f2fs_down_write_nested(&F2FS_I(inode)->i_sem, 665 665 SINGLE_DEPTH_NESTING); 666 - page = f2fs_init_inode_metadata(inode, dir, fname, ipage); 667 - if (IS_ERR(page)) { 668 - err = PTR_ERR(page); 666 + folio = f2fs_init_inode_metadata(inode, dir, fname, ifolio); 667 + if (IS_ERR(folio)) { 668 + err = PTR_ERR(folio); 669 669 goto fail; 670 670 } 671 671 } 672 672 673 - f2fs_wait_on_page_writeback(ipage, NODE, true, true); 673 + f2fs_folio_wait_writeback(ifolio, NODE, true, true); 674 674 675 675 f2fs_update_dentry(ino, mode, &d, &fname->disk_name, fname->hash, 676 676 bit_pos); 677 677 678 - set_page_dirty(ipage); 678 + folio_mark_dirty(ifolio); 679 679 680 680 /* we don't need to mark_inode_dirty now */ 681 681 if (inode) { ··· 683 683 684 684 /* synchronize inode page's data from inode cache */ 685 685 if (is_inode_flag_set(inode, FI_NEW_INODE)) 686 - 
f2fs_update_inode(inode, page); 686 + f2fs_update_inode(inode, folio); 687 687 688 - f2fs_put_page(page, 1); 688 + f2fs_folio_put(folio, true); 689 689 } 690 690 691 691 f2fs_update_parent_metadata(dir, inode, 0); ··· 693 693 if (inode) 694 694 f2fs_up_write(&F2FS_I(inode)->i_sem); 695 695 out: 696 - f2fs_put_page(ipage, 1); 696 + f2fs_folio_put(ifolio, true); 697 697 return err; 698 698 } 699 699 700 - void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry, struct page *page, 701 - struct inode *dir, struct inode *inode) 700 + void f2fs_delete_inline_entry(struct f2fs_dir_entry *dentry, 701 + struct folio *folio, struct inode *dir, struct inode *inode) 702 702 { 703 703 struct f2fs_dentry_ptr d; 704 704 void *inline_dentry; ··· 706 706 unsigned int bit_pos; 707 707 int i; 708 708 709 - lock_page(page); 710 - f2fs_wait_on_page_writeback(page, NODE, true, true); 709 + folio_lock(folio); 710 + f2fs_folio_wait_writeback(folio, NODE, true, true); 711 711 712 - inline_dentry = inline_data_addr(dir, page); 712 + inline_dentry = inline_data_addr(dir, folio); 713 713 make_dentry_ptr_inline(dir, &d, inline_dentry); 714 714 715 715 bit_pos = dentry - d.dentry; 716 716 for (i = 0; i < slots; i++) 717 717 __clear_bit_le(bit_pos + i, d.bitmap); 718 718 719 - set_page_dirty(page); 720 - f2fs_put_page(page, 1); 719 + folio_mark_dirty(folio); 720 + f2fs_folio_put(folio, true); 721 721 722 722 inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir)); 723 723 f2fs_mark_inode_dirty_sync(dir, false); ··· 729 729 bool f2fs_empty_inline_dir(struct inode *dir) 730 730 { 731 731 struct f2fs_sb_info *sbi = F2FS_I_SB(dir); 732 - struct page *ipage; 732 + struct folio *ifolio; 733 733 unsigned int bit_pos = 2; 734 734 void *inline_dentry; 735 735 struct f2fs_dentry_ptr d; 736 736 737 - ipage = f2fs_get_inode_page(sbi, dir->i_ino); 738 - if (IS_ERR(ipage)) 737 + ifolio = f2fs_get_inode_folio(sbi, dir->i_ino); 738 + if (IS_ERR(ifolio)) 739 739 return false; 740 740 741 - inline_dentry = 
inline_data_addr(dir, ipage); 741 + inline_dentry = inline_data_addr(dir, ifolio); 742 742 make_dentry_ptr_inline(dir, &d, inline_dentry); 743 743 744 744 bit_pos = find_next_bit_le(d.bitmap, d.max, bit_pos); 745 745 746 - f2fs_put_page(ipage, 1); 746 + f2fs_folio_put(ifolio, true); 747 747 748 748 if (bit_pos < d.max) 749 749 return false; ··· 755 755 struct fscrypt_str *fstr) 756 756 { 757 757 struct inode *inode = file_inode(file); 758 - struct page *ipage = NULL; 758 + struct folio *ifolio = NULL; 759 759 struct f2fs_dentry_ptr d; 760 760 void *inline_dentry = NULL; 761 761 int err; ··· 765 765 if (ctx->pos == d.max) 766 766 return 0; 767 767 768 - ipage = f2fs_get_inode_page(F2FS_I_SB(inode), inode->i_ino); 769 - if (IS_ERR(ipage)) 770 - return PTR_ERR(ipage); 768 + ifolio = f2fs_get_inode_folio(F2FS_I_SB(inode), inode->i_ino); 769 + if (IS_ERR(ifolio)) 770 + return PTR_ERR(ifolio); 771 771 772 772 /* 773 773 * f2fs_readdir was protected by inode.i_rwsem, it is safe to access 774 774 * ipage without page's lock held. 775 775 */ 776 - unlock_page(ipage); 776 + folio_unlock(ifolio); 777 777 778 - inline_dentry = inline_data_addr(inode, ipage); 778 + inline_dentry = inline_data_addr(inode, ifolio); 779 779 780 780 make_dentry_ptr_inline(inode, &d, inline_dentry); 781 781 ··· 783 783 if (!err) 784 784 ctx->pos = d.max; 785 785 786 - f2fs_put_page(ipage, 0); 786 + f2fs_folio_put(ifolio, false); 787 787 return err < 0 ? 
err : 0; 788 788 } 789 789 ··· 794 794 __u32 flags = FIEMAP_EXTENT_DATA_INLINE | FIEMAP_EXTENT_NOT_ALIGNED | 795 795 FIEMAP_EXTENT_LAST; 796 796 struct node_info ni; 797 - struct page *ipage; 797 + struct folio *ifolio; 798 798 int err = 0; 799 799 800 - ipage = f2fs_get_inode_page(F2FS_I_SB(inode), inode->i_ino); 801 - if (IS_ERR(ipage)) 802 - return PTR_ERR(ipage); 800 + ifolio = f2fs_get_inode_folio(F2FS_I_SB(inode), inode->i_ino); 801 + if (IS_ERR(ifolio)) 802 + return PTR_ERR(ifolio); 803 803 804 804 if ((S_ISREG(inode->i_mode) || S_ISLNK(inode->i_mode)) && 805 805 !f2fs_has_inline_data(inode)) { ··· 824 824 goto out; 825 825 826 826 byteaddr = (__u64)ni.blk_addr << inode->i_sb->s_blocksize_bits; 827 - byteaddr += (char *)inline_data_addr(inode, ipage) - 828 - (char *)F2FS_INODE(ipage); 827 + byteaddr += (char *)inline_data_addr(inode, ifolio) - 828 + (char *)F2FS_INODE(&ifolio->page); 829 829 err = fiemap_fill_next_extent(fieinfo, start, byteaddr, ilen, flags); 830 830 trace_f2fs_fiemap(inode, start, byteaddr, ilen, flags, err); 831 831 out: 832 - f2fs_put_page(ipage, 1); 832 + f2fs_folio_put(ifolio, true); 833 833 return err; 834 834 }
fs/f2fs/inode.c (+62, -55)
··· 34 34 if (f2fs_inode_dirtied(inode, sync)) 35 35 return; 36 36 37 - if (f2fs_is_atomic_file(inode)) 37 + /* only atomic file w/ FI_ATOMIC_COMMITTED can be set vfs dirty */ 38 + if (f2fs_is_atomic_file(inode) && 39 + !is_inode_flag_set(inode, FI_ATOMIC_COMMITTED)) 38 40 return; 39 41 40 42 mark_inode_dirty_sync(inode); ··· 68 66 S_ENCRYPTED|S_VERITY|S_CASEFOLD); 69 67 } 70 68 71 - static void __get_inode_rdev(struct inode *inode, struct page *node_page) 69 + static void __get_inode_rdev(struct inode *inode, struct folio *node_folio) 72 70 { 73 - __le32 *addr = get_dnode_addr(inode, node_page); 71 + __le32 *addr = get_dnode_addr(inode, node_folio); 74 72 75 73 if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode) || 76 74 S_ISFIFO(inode->i_mode) || S_ISSOCK(inode->i_mode)) { ··· 81 79 } 82 80 } 83 81 84 - static void __set_inode_rdev(struct inode *inode, struct page *node_page) 82 + static void __set_inode_rdev(struct inode *inode, struct folio *node_folio) 85 83 { 86 - __le32 *addr = get_dnode_addr(inode, node_page); 84 + __le32 *addr = get_dnode_addr(inode, node_folio); 87 85 88 86 if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) { 89 87 if (old_valid_dev(inode->i_rdev)) { ··· 97 95 } 98 96 } 99 97 100 - static void __recover_inline_status(struct inode *inode, struct page *ipage) 98 + static void __recover_inline_status(struct inode *inode, struct folio *ifolio) 101 99 { 102 - void *inline_data = inline_data_addr(inode, ipage); 100 + void *inline_data = inline_data_addr(inode, ifolio); 103 101 __le32 *start = inline_data; 104 102 __le32 *end = start + MAX_INLINE_DATA(inode) / sizeof(__le32); 105 103 106 104 while (start < end) { 107 105 if (*start++) { 108 - f2fs_wait_on_page_writeback(ipage, NODE, true, true); 106 + f2fs_folio_wait_writeback(ifolio, NODE, true, true); 109 107 110 108 set_inode_flag(inode, FI_DATA_EXIST); 111 - set_raw_inline(inode, F2FS_INODE(ipage)); 112 - set_page_dirty(ipage); 109 + set_raw_inline(inode, F2FS_INODE(&ifolio->page)); 110 
+ folio_mark_dirty(ifolio); 113 111 return; 114 112 } 115 113 } ··· 144 142 unsigned int offset = offsetof(struct f2fs_inode, i_inode_checksum); 145 143 unsigned int cs_size = sizeof(dummy_cs); 146 144 147 - chksum = f2fs_chksum(sbi, sbi->s_chksum_seed, (__u8 *)&ino, 148 - sizeof(ino)); 149 - chksum_seed = f2fs_chksum(sbi, chksum, (__u8 *)&gen, sizeof(gen)); 145 + chksum = f2fs_chksum(sbi->s_chksum_seed, (__u8 *)&ino, sizeof(ino)); 146 + chksum_seed = f2fs_chksum(chksum, (__u8 *)&gen, sizeof(gen)); 150 147 151 - chksum = f2fs_chksum(sbi, chksum_seed, (__u8 *)ri, offset); 152 - chksum = f2fs_chksum(sbi, chksum, (__u8 *)&dummy_cs, cs_size); 148 + chksum = f2fs_chksum(chksum_seed, (__u8 *)ri, offset); 149 + chksum = f2fs_chksum(chksum, (__u8 *)&dummy_cs, cs_size); 153 150 offset += cs_size; 154 - chksum = f2fs_chksum(sbi, chksum, (__u8 *)ri + offset, 155 - F2FS_BLKSIZE - offset); 151 + chksum = f2fs_chksum(chksum, (__u8 *)ri + offset, 152 + F2FS_BLKSIZE - offset); 156 153 return chksum; 157 154 } 158 155 159 - bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct page *page) 156 + bool f2fs_inode_chksum_verify(struct f2fs_sb_info *sbi, struct folio *folio) 160 157 { 161 158 struct f2fs_inode *ri; 162 159 __u32 provided, calculated; ··· 164 163 return true; 165 164 166 165 #ifdef CONFIG_F2FS_CHECK_FS 167 - if (!f2fs_enable_inode_chksum(sbi, page)) 166 + if (!f2fs_enable_inode_chksum(sbi, &folio->page)) 168 167 #else 169 - if (!f2fs_enable_inode_chksum(sbi, page) || 170 - PageDirty(page) || 171 - folio_test_writeback(page_folio(page))) 168 + if (!f2fs_enable_inode_chksum(sbi, &folio->page) || 169 + folio_test_dirty(folio) || 170 + folio_test_writeback(folio)) 172 171 #endif 173 172 return true; 174 173 175 - ri = &F2FS_NODE(page)->i; 174 + ri = &F2FS_NODE(&folio->page)->i; 176 175 provided = le32_to_cpu(ri->i_inode_checksum); 177 - calculated = f2fs_inode_chksum(sbi, page); 176 + calculated = f2fs_inode_chksum(sbi, &folio->page); 178 177 179 178 if (provided != 
calculated) 180 179 f2fs_warn(sbi, "checksum invalid, nid = %lu, ino_of_node = %x, %x vs. %x", 181 - page_folio(page)->index, ino_of_node(page), 180 + folio->index, ino_of_node(&folio->page), 182 181 provided, calculated); 183 182 184 183 return provided == calculated; ··· 284 283 f2fs_warn(sbi, "%s: corrupted inode footer i_ino=%lx, ino,nid: [%u, %u] run fsck to fix.", 285 284 __func__, inode->i_ino, 286 285 ino_of_node(node_page), nid_of_node(node_page)); 286 + return false; 287 + } 288 + 289 + if (ino_of_node(node_page) == fi->i_xattr_nid) { 290 + f2fs_warn(sbi, "%s: corrupted inode i_ino=%lx, xnid=%x, run fsck to fix.", 291 + __func__, inode->i_ino, fi->i_xattr_nid); 287 292 return false; 288 293 } 289 294 ··· 407 400 { 408 401 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 409 402 struct f2fs_inode_info *fi = F2FS_I(inode); 410 - struct page *node_page; 403 + struct folio *node_folio; 411 404 struct f2fs_inode *ri; 412 405 projid_t i_projid; 413 406 ··· 415 408 if (f2fs_check_nid_range(sbi, inode->i_ino)) 416 409 return -EINVAL; 417 410 418 - node_page = f2fs_get_inode_page(sbi, inode->i_ino); 419 - if (IS_ERR(node_page)) 420 - return PTR_ERR(node_page); 411 + node_folio = f2fs_get_inode_folio(sbi, inode->i_ino); 412 + if (IS_ERR(node_folio)) 413 + return PTR_ERR(node_folio); 421 414 422 - ri = F2FS_INODE(node_page); 415 + ri = F2FS_INODE(&node_folio->page); 423 416 424 417 inode->i_mode = le16_to_cpu(ri->i_mode); 425 418 i_uid_write(inode, le32_to_cpu(ri->i_uid)); ··· 469 462 fi->i_inline_xattr_size = 0; 470 463 } 471 464 472 - if (!sanity_check_inode(inode, node_page)) { 473 - f2fs_put_page(node_page, 1); 465 + if (!sanity_check_inode(inode, &node_folio->page)) { 466 + f2fs_folio_put(node_folio, true); 474 467 set_sbi_flag(sbi, SBI_NEED_FSCK); 475 468 f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); 476 469 return -EFSCORRUPTED; ··· 478 471 479 472 /* check data exist */ 480 473 if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode)) 481 - 
__recover_inline_status(inode, node_page); 474 + __recover_inline_status(inode, node_folio); 482 475 483 476 /* try to recover cold bit for non-dir inode */ 484 - if (!S_ISDIR(inode->i_mode) && !is_cold_node(node_page)) { 485 - f2fs_wait_on_page_writeback(node_page, NODE, true, true); 486 - set_cold_node(node_page, false); 487 - set_page_dirty(node_page); 477 + if (!S_ISDIR(inode->i_mode) && !is_cold_node(&node_folio->page)) { 478 + f2fs_folio_wait_writeback(node_folio, NODE, true, true); 479 + set_cold_node(&node_folio->page, false); 480 + folio_mark_dirty(node_folio); 488 481 } 489 482 490 483 /* get rdev by using inline_info */ 491 - __get_inode_rdev(inode, node_page); 484 + __get_inode_rdev(inode, node_folio); 492 485 493 486 if (!f2fs_need_inode_block_update(sbi, inode->i_ino)) 494 487 fi->last_disk_size = inode->i_size; ··· 531 524 532 525 init_idisk_time(inode); 533 526 534 - if (!sanity_check_extent_cache(inode, node_page)) { 535 - f2fs_put_page(node_page, 1); 527 + if (!sanity_check_extent_cache(inode, &node_folio->page)) { 528 + f2fs_folio_put(node_folio, true); 536 529 f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); 537 530 return -EFSCORRUPTED; 538 531 } 539 532 540 533 /* Need all the flag bits */ 541 - f2fs_init_read_extent_tree(inode, node_page); 534 + f2fs_init_read_extent_tree(inode, node_folio); 542 535 f2fs_init_age_extent_tree(inode); 543 536 544 - f2fs_put_page(node_page, 1); 537 + f2fs_folio_put(node_folio, true); 545 538 546 539 stat_inc_inline_xattr(inode); 547 540 stat_inc_inline_inode(inode); ··· 658 651 return inode; 659 652 } 660 653 661 - void f2fs_update_inode(struct inode *inode, struct page *node_page) 654 + void f2fs_update_inode(struct inode *inode, struct folio *node_folio) 662 655 { 663 656 struct f2fs_inode_info *fi = F2FS_I(inode); 664 657 struct f2fs_inode *ri; 665 658 struct extent_tree *et = fi->extent_tree[EX_READ]; 666 659 667 - f2fs_wait_on_page_writeback(node_page, NODE, true, true); 668 - set_page_dirty(node_page); 660 + 
f2fs_folio_wait_writeback(node_folio, NODE, true, true); 661 + folio_mark_dirty(node_folio); 669 662 670 663 f2fs_inode_synced(inode); 671 664 672 - ri = F2FS_INODE(node_page); 665 + ri = F2FS_INODE(&node_folio->page); 673 666 674 667 ri->i_mode = cpu_to_le16(inode->i_mode); 675 668 ri->i_advise = fi->i_advise; ··· 744 737 } 745 738 } 746 739 747 - __set_inode_rdev(inode, node_page); 740 + __set_inode_rdev(inode, node_folio); 748 741 749 742 /* deleted inode */ 750 743 if (inode->i_nlink == 0) 751 - clear_page_private_inline(node_page); 744 + clear_page_private_inline(&node_folio->page); 752 745 753 746 init_idisk_time(inode); 754 747 #ifdef CONFIG_F2FS_CHECK_FS 755 - f2fs_inode_chksum_set(F2FS_I_SB(inode), node_page); 748 + f2fs_inode_chksum_set(F2FS_I_SB(inode), &node_folio->page); 756 749 #endif 757 750 } 758 751 759 752 void f2fs_update_inode_page(struct inode *inode) 760 753 { 761 754 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 762 - struct page *node_page; 755 + struct folio *node_folio; 763 756 int count = 0; 764 757 retry: 765 - node_page = f2fs_get_inode_page(sbi, inode->i_ino); 766 - if (IS_ERR(node_page)) { 767 - int err = PTR_ERR(node_page); 758 + node_folio = f2fs_get_inode_folio(sbi, inode->i_ino); 759 + if (IS_ERR(node_folio)) { 760 + int err = PTR_ERR(node_folio); 768 761 769 762 /* The node block was truncated. */ 770 763 if (err == -ENOENT) ··· 779 772 f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_UPDATE_INODE); 780 773 return; 781 774 } 782 - f2fs_update_inode(inode, node_page); 783 - f2fs_put_page(node_page, 1); 775 + f2fs_update_inode(inode, node_folio); 776 + f2fs_folio_put(node_folio, true); 784 777 } 785 778 786 779 int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc)
fs/f2fs/namei.c (+70, -61)
··· 414 414 415 415 if (is_inode_flag_set(dir, FI_PROJ_INHERIT) && 416 416 (!projid_eq(F2FS_I(dir)->i_projid, 417 - F2FS_I(old_dentry->d_inode)->i_projid))) 417 + F2FS_I(inode)->i_projid))) 418 418 return -EXDEV; 419 419 420 420 err = f2fs_dquot_initialize(dir); ··· 447 447 448 448 struct dentry *f2fs_get_parent(struct dentry *child) 449 449 { 450 - struct page *page; 451 - unsigned long ino = f2fs_inode_by_name(d_inode(child), &dotdot_name, &page); 450 + struct folio *folio; 451 + unsigned long ino = f2fs_inode_by_name(d_inode(child), &dotdot_name, &folio); 452 452 453 453 if (!ino) { 454 - if (IS_ERR(page)) 455 - return ERR_CAST(page); 454 + if (IS_ERR(folio)) 455 + return ERR_CAST(folio); 456 456 return ERR_PTR(-ENOENT); 457 457 } 458 458 return d_obtain_alias(f2fs_iget(child->d_sb, ino)); ··· 463 463 { 464 464 struct inode *inode = NULL; 465 465 struct f2fs_dir_entry *de; 466 - struct page *page; 466 + struct folio *folio; 467 467 struct dentry *new; 468 468 nid_t ino = -1; 469 469 int err = 0; ··· 481 481 goto out_splice; 482 482 if (err) 483 483 goto out; 484 - de = __f2fs_find_entry(dir, &fname, &page); 484 + de = __f2fs_find_entry(dir, &fname, &folio); 485 485 f2fs_free_filename(&fname); 486 486 487 487 if (!de) { 488 - if (IS_ERR(page)) { 489 - err = PTR_ERR(page); 488 + if (IS_ERR(folio)) { 489 + err = PTR_ERR(folio); 490 490 goto out; 491 491 } 492 492 err = -ENOENT; ··· 494 494 } 495 495 496 496 ino = le32_to_cpu(de->ino); 497 - f2fs_put_page(page, 0); 497 + f2fs_folio_put(folio, false); 498 498 499 499 inode = f2fs_iget(dir->i_sb, ino); 500 500 if (IS_ERR(inode)) { ··· 545 545 struct f2fs_sb_info *sbi = F2FS_I_SB(dir); 546 546 struct inode *inode = d_inode(dentry); 547 547 struct f2fs_dir_entry *de; 548 - struct page *page; 548 + struct folio *folio; 549 549 int err; 550 550 551 551 trace_f2fs_unlink_enter(dir, dentry); ··· 562 562 if (err) 563 563 goto fail; 564 564 565 - de = f2fs_find_entry(dir, &dentry->d_name, &page); 565 + de = 
f2fs_find_entry(dir, &dentry->d_name, &folio); 566 566 if (!de) { 567 - if (IS_ERR(page)) 568 - err = PTR_ERR(page); 567 + if (IS_ERR(folio)) 568 + err = PTR_ERR(folio); 569 + goto fail; 570 + } 571 + 572 + if (unlikely(inode->i_nlink == 0)) { 573 + f2fs_warn(F2FS_I_SB(inode), "%s: inode (ino=%lx) has zero i_nlink", 574 + __func__, inode->i_ino); 575 + err = -EFSCORRUPTED; 576 + set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_FSCK); 577 + f2fs_folio_put(folio, false); 569 578 goto fail; 570 579 } 571 580 ··· 584 575 err = f2fs_acquire_orphan_inode(sbi); 585 576 if (err) { 586 577 f2fs_unlock_op(sbi); 587 - f2fs_put_page(page, 0); 578 + f2fs_folio_put(folio, false); 588 579 goto fail; 589 580 } 590 - f2fs_delete_entry(de, page, dir, inode); 581 + f2fs_delete_entry(de, folio, dir, inode); 591 582 f2fs_unlock_op(sbi); 592 583 593 584 /* VFS negative dentries are incompatible with Encoding and ··· 908 899 struct inode *old_inode = d_inode(old_dentry); 909 900 struct inode *new_inode = d_inode(new_dentry); 910 901 struct inode *whiteout = NULL; 911 - struct page *old_dir_page = NULL; 912 - struct page *old_page, *new_page = NULL; 902 + struct folio *old_dir_folio = NULL; 903 + struct folio *old_folio, *new_folio = NULL; 913 904 struct f2fs_dir_entry *old_dir_entry = NULL; 914 905 struct f2fs_dir_entry *old_entry; 915 906 struct f2fs_dir_entry *new_entry; ··· 923 914 924 915 if (is_inode_flag_set(new_dir, FI_PROJ_INHERIT) && 925 916 (!projid_eq(F2FS_I(new_dir)->i_projid, 926 - F2FS_I(old_dentry->d_inode)->i_projid))) 917 + F2FS_I(old_inode)->i_projid))) 927 918 return -EXDEV; 928 919 929 920 /* ··· 968 959 } 969 960 970 961 err = -ENOENT; 971 - old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_page); 962 + old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_folio); 972 963 if (!old_entry) { 973 - if (IS_ERR(old_page)) 974 - err = PTR_ERR(old_page); 964 + if (IS_ERR(old_folio)) 965 + err = PTR_ERR(old_folio); 975 966 goto out; 976 967 } 977 968 978 969 if 
(old_is_dir && old_dir != new_dir) { 979 - old_dir_entry = f2fs_parent_dir(old_inode, &old_dir_page); 970 + old_dir_entry = f2fs_parent_dir(old_inode, &old_dir_folio); 980 971 if (!old_dir_entry) { 981 - if (IS_ERR(old_dir_page)) 982 - err = PTR_ERR(old_dir_page); 972 + if (IS_ERR(old_dir_folio)) 973 + err = PTR_ERR(old_dir_folio); 983 974 goto out_old; 984 975 } 985 976 } ··· 992 983 993 984 err = -ENOENT; 994 985 new_entry = f2fs_find_entry(new_dir, &new_dentry->d_name, 995 - &new_page); 986 + &new_folio); 996 987 if (!new_entry) { 997 - if (IS_ERR(new_page)) 998 - err = PTR_ERR(new_page); 988 + if (IS_ERR(new_folio)) 989 + err = PTR_ERR(new_folio); 999 990 goto out_dir; 1000 991 } 1001 992 ··· 1007 998 if (err) 1008 999 goto put_out_dir; 1009 1000 1010 - f2fs_set_link(new_dir, new_entry, new_page, old_inode); 1011 - new_page = NULL; 1001 + f2fs_set_link(new_dir, new_entry, new_folio, old_inode); 1002 + new_folio = NULL; 1012 1003 1013 1004 inode_set_ctime_current(new_inode); 1014 1005 f2fs_down_write(&F2FS_I(new_inode)->i_sem); ··· 1047 1038 inode_set_ctime_current(old_inode); 1048 1039 f2fs_mark_inode_dirty_sync(old_inode, false); 1049 1040 1050 - f2fs_delete_entry(old_entry, old_page, old_dir, NULL); 1051 - old_page = NULL; 1041 + f2fs_delete_entry(old_entry, old_folio, old_dir, NULL); 1042 + old_folio = NULL; 1052 1043 1053 1044 if (whiteout) { 1054 1045 set_inode_flag(whiteout, FI_INC_LINK); ··· 1064 1055 } 1065 1056 1066 1057 if (old_dir_entry) 1067 - f2fs_set_link(old_inode, old_dir_entry, old_dir_page, new_dir); 1058 + f2fs_set_link(old_inode, old_dir_entry, old_dir_folio, new_dir); 1068 1059 if (old_is_dir) 1069 1060 f2fs_i_links_write(old_dir, false); 1070 1061 ··· 1085 1076 1086 1077 put_out_dir: 1087 1078 f2fs_unlock_op(sbi); 1088 - f2fs_put_page(new_page, 0); 1079 + f2fs_folio_put(new_folio, false); 1089 1080 out_dir: 1090 1081 if (old_dir_entry) 1091 - f2fs_put_page(old_dir_page, 0); 1082 + f2fs_folio_put(old_dir_folio, false); 1092 1083 out_old: 
1093 - f2fs_put_page(old_page, 0); 1084 + f2fs_folio_put(old_folio, false); 1094 1085 out: 1095 1086 iput(whiteout); 1096 1087 return err; ··· 1102 1093 struct f2fs_sb_info *sbi = F2FS_I_SB(old_dir); 1103 1094 struct inode *old_inode = d_inode(old_dentry); 1104 1095 struct inode *new_inode = d_inode(new_dentry); 1105 - struct page *old_dir_page, *new_dir_page; 1106 - struct page *old_page, *new_page; 1096 + struct folio *old_dir_folio, *new_dir_folio; 1097 + struct folio *old_folio, *new_folio; 1107 1098 struct f2fs_dir_entry *old_dir_entry = NULL, *new_dir_entry = NULL; 1108 1099 struct f2fs_dir_entry *old_entry, *new_entry; 1109 1100 int old_nlink = 0, new_nlink = 0; ··· 1116 1107 1117 1108 if ((is_inode_flag_set(new_dir, FI_PROJ_INHERIT) && 1118 1109 !projid_eq(F2FS_I(new_dir)->i_projid, 1119 - F2FS_I(old_dentry->d_inode)->i_projid)) || 1120 - (is_inode_flag_set(new_dir, FI_PROJ_INHERIT) && 1110 + F2FS_I(old_inode)->i_projid)) || 1111 + (is_inode_flag_set(old_dir, FI_PROJ_INHERIT) && 1121 1112 !projid_eq(F2FS_I(old_dir)->i_projid, 1122 - F2FS_I(new_dentry->d_inode)->i_projid))) 1113 + F2FS_I(new_inode)->i_projid))) 1123 1114 return -EXDEV; 1124 1115 1125 1116 err = f2fs_dquot_initialize(old_dir); ··· 1131 1122 goto out; 1132 1123 1133 1124 err = -ENOENT; 1134 - old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_page); 1125 + old_entry = f2fs_find_entry(old_dir, &old_dentry->d_name, &old_folio); 1135 1126 if (!old_entry) { 1136 - if (IS_ERR(old_page)) 1137 - err = PTR_ERR(old_page); 1127 + if (IS_ERR(old_folio)) 1128 + err = PTR_ERR(old_folio); 1138 1129 goto out; 1139 1130 } 1140 1131 1141 - new_entry = f2fs_find_entry(new_dir, &new_dentry->d_name, &new_page); 1132 + new_entry = f2fs_find_entry(new_dir, &new_dentry->d_name, &new_folio); 1142 1133 if (!new_entry) { 1143 - if (IS_ERR(new_page)) 1144 - err = PTR_ERR(new_page); 1134 + if (IS_ERR(new_folio)) 1135 + err = PTR_ERR(new_folio); 1145 1136 goto out_old; 1146 1137 } 1147 1138 ··· 1149 1140 if 
(old_dir != new_dir) { 1150 1141 if (S_ISDIR(old_inode->i_mode)) { 1151 1142 old_dir_entry = f2fs_parent_dir(old_inode, 1152 - &old_dir_page); 1143 + &old_dir_folio); 1153 1144 if (!old_dir_entry) { 1154 - if (IS_ERR(old_dir_page)) 1155 - err = PTR_ERR(old_dir_page); 1145 + if (IS_ERR(old_dir_folio)) 1146 + err = PTR_ERR(old_dir_folio); 1156 1147 goto out_new; 1157 1148 } 1158 1149 } 1159 1150 1160 1151 if (S_ISDIR(new_inode->i_mode)) { 1161 1152 new_dir_entry = f2fs_parent_dir(new_inode, 1162 - &new_dir_page); 1153 + &new_dir_folio); 1163 1154 if (!new_dir_entry) { 1164 - if (IS_ERR(new_dir_page)) 1165 - err = PTR_ERR(new_dir_page); 1155 + if (IS_ERR(new_dir_folio)) 1156 + err = PTR_ERR(new_dir_folio); 1166 1157 goto out_old_dir; 1167 1158 } 1168 1159 } ··· 1189 1180 1190 1181 /* update ".." directory entry info of old dentry */ 1191 1182 if (old_dir_entry) 1192 - f2fs_set_link(old_inode, old_dir_entry, old_dir_page, new_dir); 1183 + f2fs_set_link(old_inode, old_dir_entry, old_dir_folio, new_dir); 1193 1184 1194 1185 /* update ".." 
directory entry info of new dentry */ 1195 1186 if (new_dir_entry) 1196 - f2fs_set_link(new_inode, new_dir_entry, new_dir_page, old_dir); 1187 + f2fs_set_link(new_inode, new_dir_entry, new_dir_folio, old_dir); 1197 1188 1198 1189 /* update directory entry info of old dir inode */ 1199 - f2fs_set_link(old_dir, old_entry, old_page, new_inode); 1190 + f2fs_set_link(old_dir, old_entry, old_folio, new_inode); 1200 1191 1201 1192 f2fs_down_write(&F2FS_I(old_inode)->i_sem); 1202 1193 if (!old_dir_entry) ··· 1215 1206 f2fs_mark_inode_dirty_sync(old_dir, false); 1216 1207 1217 1208 /* update directory entry info of new dir inode */ 1218 - f2fs_set_link(new_dir, new_entry, new_page, old_inode); 1209 + f2fs_set_link(new_dir, new_entry, new_folio, old_inode); 1219 1210 1220 1211 f2fs_down_write(&F2FS_I(new_inode)->i_sem); 1221 1212 if (!new_dir_entry) ··· 1247 1238 return 0; 1248 1239 out_new_dir: 1249 1240 if (new_dir_entry) { 1250 - f2fs_put_page(new_dir_page, 0); 1241 + f2fs_folio_put(new_dir_folio, 0); 1251 1242 } 1252 1243 out_old_dir: 1253 1244 if (old_dir_entry) { 1254 - f2fs_put_page(old_dir_page, 0); 1245 + f2fs_folio_put(old_dir_folio, 0); 1255 1246 } 1256 1247 out_new: 1257 - f2fs_put_page(new_page, 0); 1248 + f2fs_folio_put(new_folio, false); 1258 1249 out_old: 1259 - f2fs_put_page(old_page, 0); 1250 + f2fs_folio_put(old_folio, false); 1260 1251 out: 1261 1252 return err; 1262 1253 }
fs/f2fs/node.c (+293, -315)
··· 120 120 return res; 121 121 } 122 122 123 - static void clear_node_page_dirty(struct page *page) 123 + static void clear_node_folio_dirty(struct folio *folio) 124 124 { 125 - if (PageDirty(page)) { 126 - f2fs_clear_page_cache_dirty_tag(page_folio(page)); 127 - clear_page_dirty_for_io(page); 128 - dec_page_count(F2FS_P_SB(page), F2FS_DIRTY_NODES); 125 + if (folio_test_dirty(folio)) { 126 + f2fs_clear_page_cache_dirty_tag(folio); 127 + folio_clear_dirty_for_io(folio); 128 + dec_page_count(F2FS_F_SB(folio), F2FS_DIRTY_NODES); 129 129 } 130 - ClearPageUptodate(page); 130 + folio_clear_uptodate(folio); 131 131 } 132 132 133 - static struct page *get_current_nat_page(struct f2fs_sb_info *sbi, nid_t nid) 133 + static struct folio *get_current_nat_folio(struct f2fs_sb_info *sbi, nid_t nid) 134 134 { 135 - return f2fs_get_meta_page_retry(sbi, current_nat_addr(sbi, nid)); 135 + return f2fs_get_meta_folio_retry(sbi, current_nat_addr(sbi, nid)); 136 136 } 137 137 138 138 static struct page *get_next_nat_page(struct f2fs_sb_info *sbi, nid_t nid) 139 139 { 140 - struct page *src_page; 141 - struct page *dst_page; 140 + struct folio *src_folio; 141 + struct folio *dst_folio; 142 142 pgoff_t dst_off; 143 143 void *src_addr; 144 144 void *dst_addr; ··· 147 147 dst_off = next_nat_addr(sbi, current_nat_addr(sbi, nid)); 148 148 149 149 /* get current nat block page with lock */ 150 - src_page = get_current_nat_page(sbi, nid); 151 - if (IS_ERR(src_page)) 152 - return src_page; 153 - dst_page = f2fs_grab_meta_page(sbi, dst_off); 154 - f2fs_bug_on(sbi, PageDirty(src_page)); 150 + src_folio = get_current_nat_folio(sbi, nid); 151 + if (IS_ERR(src_folio)) 152 + return &src_folio->page; 153 + dst_folio = f2fs_grab_meta_folio(sbi, dst_off); 154 + f2fs_bug_on(sbi, folio_test_dirty(src_folio)); 155 155 156 - src_addr = page_address(src_page); 157 - dst_addr = page_address(dst_page); 156 + src_addr = folio_address(src_folio); 157 + dst_addr = folio_address(dst_folio); 158 158 
memcpy(dst_addr, src_addr, PAGE_SIZE); 159 - set_page_dirty(dst_page); 160 - f2fs_put_page(src_page, 1); 159 + folio_mark_dirty(dst_folio); 160 + f2fs_folio_put(src_folio, true); 161 161 162 162 set_to_next_nat(nm_i, nid); 163 163 164 - return dst_page; 164 + return &dst_folio->page; 165 165 } 166 166 167 167 static struct nat_entry *__alloc_nat_entry(struct f2fs_sb_info *sbi, ··· 310 310 start, nr); 311 311 } 312 312 313 - bool f2fs_in_warm_node_list(struct f2fs_sb_info *sbi, const struct folio *folio) 313 + bool f2fs_in_warm_node_list(struct f2fs_sb_info *sbi, struct folio *folio) 314 314 { 315 - return NODE_MAPPING(sbi) == folio->mapping && 316 - IS_DNODE(&folio->page) && is_cold_node(&folio->page); 315 + return is_node_folio(folio) && IS_DNODE(&folio->page) && 316 + is_cold_node(&folio->page); 317 317 } 318 318 319 319 void f2fs_init_fsync_node_info(struct f2fs_sb_info *sbi) ··· 325 325 } 326 326 327 327 static unsigned int f2fs_add_fsync_node_entry(struct f2fs_sb_info *sbi, 328 - struct page *page) 328 + struct folio *folio) 329 329 { 330 330 struct fsync_node_entry *fn; 331 331 unsigned long flags; ··· 334 334 fn = f2fs_kmem_cache_alloc(fsync_node_entry_slab, 335 335 GFP_NOFS, true, NULL); 336 336 337 - get_page(page); 338 - fn->page = page; 337 + folio_get(folio); 338 + fn->folio = folio; 339 339 INIT_LIST_HEAD(&fn->list); 340 340 341 341 spin_lock_irqsave(&sbi->fsync_node_lock, flags); ··· 348 348 return seq_id; 349 349 } 350 350 351 - void f2fs_del_fsync_node_entry(struct f2fs_sb_info *sbi, struct page *page) 351 + void f2fs_del_fsync_node_entry(struct f2fs_sb_info *sbi, struct folio *folio) 352 352 { 353 353 struct fsync_node_entry *fn; 354 354 unsigned long flags; 355 355 356 356 spin_lock_irqsave(&sbi->fsync_node_lock, flags); 357 357 list_for_each_entry(fn, &sbi->fsync_node_list, list) { 358 - if (fn->page == page) { 358 + if (fn->folio == folio) { 359 359 list_del(&fn->list); 360 360 sbi->fsync_node_num--; 361 361 
spin_unlock_irqrestore(&sbi->fsync_node_lock, flags); 362 362 kmem_cache_free(fsync_node_entry_slab, fn); 363 - put_page(page); 363 + folio_put(folio); 364 364 return; 365 365 } 366 366 } ··· 551 551 struct f2fs_journal *journal = curseg->journal; 552 552 nid_t start_nid = START_NID(nid); 553 553 struct f2fs_nat_block *nat_blk; 554 - struct page *page = NULL; 554 + struct folio *folio = NULL; 555 555 struct f2fs_nat_entry ne; 556 556 struct nat_entry *e; 557 557 pgoff_t index; ··· 601 601 index = current_nat_addr(sbi, nid); 602 602 f2fs_up_read(&nm_i->nat_tree_lock); 603 603 604 - page = f2fs_get_meta_page(sbi, index); 605 - if (IS_ERR(page)) 606 - return PTR_ERR(page); 604 + folio = f2fs_get_meta_folio(sbi, index); 605 + if (IS_ERR(folio)) 606 + return PTR_ERR(folio); 607 607 608 - nat_blk = (struct f2fs_nat_block *)page_address(page); 608 + nat_blk = folio_address(folio); 609 609 ne = nat_blk->entries[nid - start_nid]; 610 610 node_info_from_raw_nat(ni, &ne); 611 - f2fs_put_page(page, 1); 611 + f2fs_folio_put(folio, true); 612 612 cache: 613 613 blkaddr = le32_to_cpu(ne.block_addr); 614 614 if (__is_valid_data_blkaddr(blkaddr) && ··· 623 623 /* 624 624 * readahead MAX_RA_NODE number of node pages. 
 	 */
-static void f2fs_ra_node_pages(struct page *parent, int start, int n)
+static void f2fs_ra_node_pages(struct folio *parent, int start, int n)
 {
-	struct f2fs_sb_info *sbi = F2FS_P_SB(parent);
+	struct f2fs_sb_info *sbi = F2FS_F_SB(parent);
 	struct blk_plug plug;
 	int i, end;
 	nid_t nid;
···
 	end = start + n;
 	end = min(end, (int)NIDS_PER_BLOCK);
 	for (i = start; i < end; i++) {
-		nid = get_nid(parent, i, false);
+		nid = get_nid(&parent->page, i, false);
 		f2fs_ra_node_page(sbi, nid);
 	}
 
···
 	return level;
 }
 
+static struct folio *f2fs_get_node_folio_ra(struct folio *parent, int start);
+
 /*
  * Caller should call f2fs_put_dnode(dn).
  * Also, it should grab and release a rwsem by calling f2fs_lock_op() and
···
 int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
-	struct page *npage[4];
-	struct page *parent = NULL;
+	struct folio *nfolio[4];
+	struct folio *parent = NULL;
 	int offset[4];
 	unsigned int noffset[4];
 	nid_t nids[4];
···
 		return level;
 
 	nids[0] = dn->inode->i_ino;
-	npage[0] = dn->inode_page;
 
-	if (!npage[0]) {
-		npage[0] = f2fs_get_inode_page(sbi, nids[0]);
-		if (IS_ERR(npage[0]))
-			return PTR_ERR(npage[0]);
+	if (!dn->inode_folio) {
+		nfolio[0] = f2fs_get_inode_folio(sbi, nids[0]);
+		if (IS_ERR(nfolio[0]))
+			return PTR_ERR(nfolio[0]);
+	} else {
+		nfolio[0] = dn->inode_folio;
 	}
 
 	/* if inline_data is set, should not report any block indices */
 	if (f2fs_has_inline_data(dn->inode) && index) {
 		err = -ENOENT;
-		f2fs_put_page(npage[0], 1);
+		f2fs_folio_put(nfolio[0], true);
 		goto release_out;
 	}
 
-	parent = npage[0];
+	parent = nfolio[0];
 	if (level != 0)
-		nids[1] = get_nid(parent, offset[0], true);
-	dn->inode_page = npage[0];
-	dn->inode_page_locked = true;
+		nids[1] = get_nid(&parent->page, offset[0], true);
+	dn->inode_folio = nfolio[0];
+	dn->inode_folio_locked = true;
 
 	/* get indirect or direct nodes */
 	for (i = 1; i <= level; i++) {
···
 			}
 
 			dn->nid = nids[i];
-			npage[i] = f2fs_new_node_page(dn, noffset[i]);
-			if (IS_ERR(npage[i])) {
+			nfolio[i] = f2fs_new_node_folio(dn, noffset[i]);
+			if (IS_ERR(nfolio[i])) {
 				f2fs_alloc_nid_failed(sbi, nids[i]);
-				err = PTR_ERR(npage[i]);
+				err = PTR_ERR(nfolio[i]);
 				goto release_pages;
 			}
 
···
 			f2fs_alloc_nid_done(sbi, nids[i]);
 			done = true;
 		} else if (mode == LOOKUP_NODE_RA && i == level && level > 1) {
-			npage[i] = f2fs_get_node_page_ra(parent, offset[i - 1]);
-			if (IS_ERR(npage[i])) {
-				err = PTR_ERR(npage[i]);
+			nfolio[i] = f2fs_get_node_folio_ra(parent, offset[i - 1]);
+			if (IS_ERR(nfolio[i])) {
+				err = PTR_ERR(nfolio[i]);
 				goto release_pages;
 			}
 			done = true;
 		}
 		if (i == 1) {
-			dn->inode_page_locked = false;
-			unlock_page(parent);
+			dn->inode_folio_locked = false;
+			folio_unlock(parent);
 		} else {
-			f2fs_put_page(parent, 1);
+			f2fs_folio_put(parent, true);
 		}
 
 		if (!done) {
-			npage[i] = f2fs_get_node_page(sbi, nids[i]);
-			if (IS_ERR(npage[i])) {
-				err = PTR_ERR(npage[i]);
-				f2fs_put_page(npage[0], 0);
+			nfolio[i] = f2fs_get_node_folio(sbi, nids[i]);
+			if (IS_ERR(nfolio[i])) {
+				err = PTR_ERR(nfolio[i]);
+				f2fs_folio_put(nfolio[0], false);
 				goto release_out;
 			}
 		}
 		if (i < level) {
-			parent = npage[i];
-			nids[i + 1] = get_nid(parent, offset[i], false);
+			parent = nfolio[i];
+			nids[i + 1] = get_nid(&parent->page, offset[i], false);
 		}
 	}
 	dn->nid = nids[level];
 	dn->ofs_in_node = offset[level];
-	dn->node_page = npage[level];
+	dn->node_folio = nfolio[level];
 	dn->data_blkaddr = f2fs_data_blkaddr(dn);
 
 	if (is_inode_flag_set(dn->inode, FI_COMPRESSED_FILE) &&
···
 	if (!c_len)
 		goto out;
 
-	blkaddr = data_blkaddr(dn->inode, dn->node_page, ofs_in_node);
+	blkaddr = data_blkaddr(dn->inode, dn->node_folio, ofs_in_node);
 	if (blkaddr == COMPRESS_ADDR)
-		blkaddr = data_blkaddr(dn->inode, dn->node_page,
+		blkaddr = data_blkaddr(dn->inode, dn->node_folio,
 					ofs_in_node + 1);
 
 	f2fs_update_read_extent_tree_range_compressed(dn->inode,
···
 	return 0;
 
 release_pages:
-	f2fs_put_page(parent, 1);
+	f2fs_folio_put(parent, true);
 	if (i > 1)
-		f2fs_put_page(npage[0], 0);
+		f2fs_folio_put(nfolio[0], false);
 release_out:
-	dn->inode_page = NULL;
-	dn->node_page = NULL;
+	dn->inode_folio = NULL;
+	dn->node_folio = NULL;
 	if (err == -ENOENT) {
 		dn->cur_level = i;
 		dn->max_level = level;
···
 		f2fs_inode_synced(dn->inode);
 	}
 
-	clear_node_page_dirty(dn->node_page);
+	clear_node_folio_dirty(dn->node_folio);
 	set_sbi_flag(sbi, SBI_IS_DIRTY);
 
-	index = page_folio(dn->node_page)->index;
-	f2fs_put_page(dn->node_page, 1);
+	index = dn->node_folio->index;
+	f2fs_folio_put(dn->node_folio, true);
 
 	invalidate_mapping_pages(NODE_MAPPING(sbi),
 			index, index);
 
-	dn->node_page = NULL;
+	dn->node_folio = NULL;
 	trace_f2fs_truncate_node(dn->inode, dn->nid, ni.blk_addr);
 
 	return 0;
···
 static int truncate_dnode(struct dnode_of_data *dn)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
-	struct page *page;
+	struct folio *folio;
 	int err;
 
 	if (dn->nid == 0)
 		return 1;
 
 	/* get direct node */
-	page = f2fs_get_node_page(sbi, dn->nid);
-	if (PTR_ERR(page) == -ENOENT)
+	folio = f2fs_get_node_folio(sbi, dn->nid);
+	if (PTR_ERR(folio) == -ENOENT)
 		return 1;
-	else if (IS_ERR(page))
-		return PTR_ERR(page);
+	else if (IS_ERR(folio))
+		return PTR_ERR(folio);
 
-	if (IS_INODE(page) || ino_of_node(page) != dn->inode->i_ino) {
+	if (IS_INODE(&folio->page) || ino_of_node(&folio->page) != dn->inode->i_ino) {
 		f2fs_err(sbi, "incorrect node reference, ino: %lu, nid: %u, ino_of_node: %u",
-				dn->inode->i_ino, dn->nid, ino_of_node(page));
+				dn->inode->i_ino, dn->nid, ino_of_node(&folio->page));
 		set_sbi_flag(sbi, SBI_NEED_FSCK);
 		f2fs_handle_error(sbi, ERROR_INVALID_NODE_REFERENCE);
-		f2fs_put_page(page, 1);
+		f2fs_folio_put(folio, true);
 		return -EFSCORRUPTED;
 	}
 
 	/* Make dnode_of_data for parameter */
-	dn->node_page = page;
+	dn->node_folio = folio;
 	dn->ofs_in_node = 0;
 	f2fs_truncate_data_blocks_range(dn, ADDRS_PER_BLOCK(dn->inode));
 	err = truncate_node(dn);
 	if (err) {
-		f2fs_put_page(page, 1);
+		f2fs_folio_put(folio, true);
 		return err;
 	}
 
···
 			int ofs, int depth)
 {
 	struct dnode_of_data rdn = *dn;
-	struct page *page;
+	struct folio *folio;
 	struct f2fs_node *rn;
 	nid_t child_nid;
 	unsigned int child_nofs;
···
 
 	trace_f2fs_truncate_nodes_enter(dn->inode, dn->nid, dn->data_blkaddr);
 
-	page = f2fs_get_node_page(F2FS_I_SB(dn->inode), dn->nid);
-	if (IS_ERR(page)) {
-		trace_f2fs_truncate_nodes_exit(dn->inode, PTR_ERR(page));
-		return PTR_ERR(page);
+	folio = f2fs_get_node_folio(F2FS_I_SB(dn->inode), dn->nid);
+	if (IS_ERR(folio)) {
+		trace_f2fs_truncate_nodes_exit(dn->inode, PTR_ERR(folio));
+		return PTR_ERR(folio);
 	}
 
-	f2fs_ra_node_pages(page, ofs, NIDS_PER_BLOCK);
+	f2fs_ra_node_pages(folio, ofs, NIDS_PER_BLOCK);
 
-	rn = F2FS_NODE(page);
+	rn = F2FS_NODE(&folio->page);
 	if (depth < 3) {
 		for (i = ofs; i < NIDS_PER_BLOCK; i++, freed++) {
 			child_nid = le32_to_cpu(rn->in.nid[i]);
···
 			ret = truncate_dnode(&rdn);
 			if (ret < 0)
 				goto out_err;
-			if (set_nid(page, i, 0, false))
+			if (set_nid(folio, i, 0, false))
 				dn->node_changed = true;
 		}
 	} else {
···
 			rdn.nid = child_nid;
 			ret = truncate_nodes(&rdn, child_nofs, 0, depth - 1);
 			if (ret == (NIDS_PER_BLOCK + 1)) {
-				if (set_nid(page, i, 0, false))
+				if (set_nid(folio, i, 0, false))
 					dn->node_changed = true;
 				child_nofs += ret;
 			} else if (ret < 0 && ret != -ENOENT) {
···
 
 	if (!ofs) {
 		/* remove current indirect node */
-		dn->node_page = page;
+		dn->node_folio = folio;
 		ret = truncate_node(dn);
 		if (ret)
 			goto out_err;
 		freed++;
 	} else {
-		f2fs_put_page(page, 1);
+		f2fs_folio_put(folio, true);
 	}
 	trace_f2fs_truncate_nodes_exit(dn->inode, freed);
 	return freed;
 
 out_err:
-	f2fs_put_page(page, 1);
+	f2fs_folio_put(folio, true);
 	trace_f2fs_truncate_nodes_exit(dn->inode, ret);
 	return ret;
 }
···
 static int truncate_partial_nodes(struct dnode_of_data *dn,
 			struct f2fs_inode *ri, int *offset, int depth)
 {
-	struct page *pages[2];
+	struct folio *folios[2];
 	nid_t nid[3];
 	nid_t child_nid;
 	int err = 0;
 	int i;
 	int idx = depth - 2;
 
-	nid[0] = get_nid(dn->inode_page, offset[0], true);
+	nid[0] = get_nid(&dn->inode_folio->page, offset[0], true);
 	if (!nid[0])
 		return 0;
 
 	/* get indirect nodes in the path */
 	for (i = 0; i < idx + 1; i++) {
 		/* reference count'll be increased */
-		pages[i] = f2fs_get_node_page(F2FS_I_SB(dn->inode), nid[i]);
-		if (IS_ERR(pages[i])) {
-			err = PTR_ERR(pages[i]);
+		folios[i] = f2fs_get_node_folio(F2FS_I_SB(dn->inode), nid[i]);
+		if (IS_ERR(folios[i])) {
+			err = PTR_ERR(folios[i]);
 			idx = i - 1;
 			goto fail;
 		}
-		nid[i + 1] = get_nid(pages[i], offset[i + 1], false);
+		nid[i + 1] = get_nid(&folios[i]->page, offset[i + 1], false);
 	}
 
-	f2fs_ra_node_pages(pages[idx], offset[idx + 1], NIDS_PER_BLOCK);
+	f2fs_ra_node_pages(folios[idx], offset[idx + 1], NIDS_PER_BLOCK);
 
 	/* free direct nodes linked to a partial indirect node */
 	for (i = offset[idx + 1]; i < NIDS_PER_BLOCK; i++) {
-		child_nid = get_nid(pages[idx], i, false);
+		child_nid = get_nid(&folios[idx]->page, i, false);
 		if (!child_nid)
 			continue;
 		dn->nid = child_nid;
 		err = truncate_dnode(dn);
 		if (err < 0)
 			goto fail;
-		if (set_nid(pages[idx], i, 0, false))
+		if (set_nid(folios[idx], i, 0, false))
 			dn->node_changed = true;
 	}
 
 	if (offset[idx + 1] == 0) {
-		dn->node_page = pages[idx];
+		dn->node_folio = folios[idx];
 		dn->nid = nid[idx];
 		err = truncate_node(dn);
 		if (err)
 			goto fail;
 	} else {
-		f2fs_put_page(pages[idx], 1);
+		f2fs_folio_put(folios[idx], true);
 	}
 	offset[idx]++;
 	offset[idx + 1] = 0;
 	idx--;
fail:
 	for (i = idx; i >= 0; i--)
-		f2fs_put_page(pages[i], 1);
+		f2fs_folio_put(folios[i], true);
 
 	trace_f2fs_truncate_partial_nodes(dn->inode, nid, depth, err);
 
···
 		return PTR_ERR(folio);
 	}
 
-	set_new_dnode(&dn, inode, &folio->page, NULL, 0);
+	set_new_dnode(&dn, inode, folio, NULL, 0);
 	folio_unlock(folio);
 
 	ri = F2FS_INODE(&folio->page);
···
 			goto fail;
 		if (offset[1] == 0 && get_nid(&folio->page, offset[0], true)) {
 			folio_lock(folio);
-			BUG_ON(folio->mapping != NODE_MAPPING(sbi));
-			set_nid(&folio->page, offset[0], 0, true);
+			BUG_ON(!is_node_folio(folio));
+			set_nid(folio, offset[0], 0, true);
 			folio_unlock(folio);
 		}
 		offset[1] = 0;
···
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	nid_t nid = F2FS_I(inode)->i_xattr_nid;
 	struct dnode_of_data dn;
-	struct page *npage;
+	struct folio *nfolio;
 	int err;
 
 	if (!nid)
 		return 0;
 
-	npage = f2fs_get_xnode_page(sbi, nid);
-	if (IS_ERR(npage))
-		return PTR_ERR(npage);
+	nfolio = f2fs_get_xnode_folio(sbi, nid);
+	if (IS_ERR(nfolio))
+		return PTR_ERR(nfolio);
 
-	set_new_dnode(&dn, inode, NULL, npage, nid);
+	set_new_dnode(&dn, inode, NULL, nfolio, nid);
 	err = truncate_node(&dn);
 	if (err) {
-		f2fs_put_page(npage, 1);
+		f2fs_folio_put(nfolio, true);
 		return err;
 	}
 
···
 	return 0;
 }
 
-struct page *f2fs_new_inode_page(struct inode *inode)
+struct folio *f2fs_new_inode_folio(struct inode *inode)
 {
 	struct dnode_of_data dn;
 
 	/* allocate inode page for new inode */
 	set_new_dnode(&dn, inode, NULL, NULL, inode->i_ino);
 
-	/* caller should f2fs_put_page(page, 1); */
-	return f2fs_new_node_page(&dn, 0);
+	/* caller should f2fs_folio_put(folio, true); */
+	return f2fs_new_node_folio(&dn, 0);
 }
 
-struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
+struct folio *f2fs_new_node_folio(struct dnode_of_data *dn, unsigned int ofs)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
 	struct node_info new_ni;
-	struct page *page;
+	struct folio *folio;
 	int err;
 
 	if (unlikely(is_inode_flag_set(dn->inode, FI_NO_ALLOC)))
 		return ERR_PTR(-EPERM);
 
-	page = f2fs_grab_cache_page(NODE_MAPPING(sbi), dn->nid, false);
-	if (!page)
-		return ERR_PTR(-ENOMEM);
+	folio = f2fs_grab_cache_folio(NODE_MAPPING(sbi), dn->nid, false);
+	if (IS_ERR(folio))
+		return folio;
 
 	if (unlikely((err = inc_valid_node_count(sbi, dn->inode, !ofs))))
 		goto fail;
···
 		dec_valid_node_count(sbi, dn->inode, !ofs);
 		set_sbi_flag(sbi, SBI_NEED_FSCK);
 		f2fs_warn_ratelimited(sbi,
-			"f2fs_new_node_page: inconsistent nat entry, "
+			"f2fs_new_node_folio: inconsistent nat entry, "
 			"ino:%u, nid:%u, blkaddr:%u, ver:%u, flag:%u",
 			new_ni.ino, new_ni.nid, new_ni.blk_addr,
 			new_ni.version, new_ni.flag);
···
 	new_ni.version = 0;
 	set_node_addr(sbi, &new_ni, NEW_ADDR, false);
 
-	f2fs_wait_on_page_writeback(page, NODE, true, true);
-	fill_node_footer(page, dn->nid, dn->inode->i_ino, ofs, true);
-	set_cold_node(page, S_ISDIR(dn->inode->i_mode));
-	if (!PageUptodate(page))
-		SetPageUptodate(page);
-	if (set_page_dirty(page))
+	f2fs_folio_wait_writeback(folio, NODE, true, true);
+	fill_node_footer(&folio->page, dn->nid, dn->inode->i_ino, ofs, true);
+	set_cold_node(&folio->page, S_ISDIR(dn->inode->i_mode));
+	if (!folio_test_uptodate(folio))
+		folio_mark_uptodate(folio);
+	if (folio_mark_dirty(folio))
 		dn->node_changed = true;
 
 	if (f2fs_has_xattr_block(ofs))
···
 
 	if (ofs == 0)
 		inc_valid_inode_count(sbi);
-	return page;
+	return folio;
 fail:
-	clear_node_page_dirty(page);
-	f2fs_put_page(page, 1);
+	clear_node_folio_dirty(folio);
+	f2fs_folio_put(folio, true);
 	return ERR_PTR(err);
 }
 
 /*
  * Caller should do after getting the following values.
- * 0: f2fs_put_page(page, 0)
- * LOCKED_PAGE or error: f2fs_put_page(page, 1)
+ * 0: f2fs_folio_put(folio, false)
+ * LOCKED_PAGE or error: f2fs_folio_put(folio, true)
  */
-static int read_node_page(struct page *page, blk_opf_t op_flags)
+static int read_node_folio(struct folio *folio, blk_opf_t op_flags)
 {
-	struct folio *folio = page_folio(page);
-	struct f2fs_sb_info *sbi = F2FS_P_SB(page);
+	struct f2fs_sb_info *sbi = F2FS_F_SB(folio);
 	struct node_info ni;
 	struct f2fs_io_info fio = {
 		.sbi = sbi,
 		.type = NODE,
 		.op = REQ_OP_READ,
 		.op_flags = op_flags,
-		.page = page,
+		.page = &folio->page,
 		.encrypted_page = NULL,
 	};
 	int err;
 
 	if (folio_test_uptodate(folio)) {
-		if (!f2fs_inode_chksum_verify(sbi, page)) {
+		if (!f2fs_inode_chksum_verify(sbi, folio)) {
 			folio_clear_uptodate(folio);
 			return -EFSBADCRC;
 		}
···
  */
 void f2fs_ra_node_page(struct f2fs_sb_info *sbi, nid_t nid)
 {
-	struct page *apage;
+	struct folio *afolio;
 	int err;
 
 	if (!nid)
···
 	if (f2fs_check_nid_range(sbi, nid))
 		return;
 
-	apage = xa_load(&NODE_MAPPING(sbi)->i_pages, nid);
-	if (apage)
+	afolio = xa_load(&NODE_MAPPING(sbi)->i_pages, nid);
+	if (afolio)
 		return;
 
-	apage = f2fs_grab_cache_page(NODE_MAPPING(sbi), nid, false);
-	if (!apage)
+	afolio = f2fs_grab_cache_folio(NODE_MAPPING(sbi), nid, false);
+	if (IS_ERR(afolio))
 		return;
 
-	err = read_node_page(apage, REQ_RAHEAD);
-	f2fs_put_page(apage, err ? 1 : 0);
+	err = read_node_folio(afolio, REQ_RAHEAD);
+	f2fs_folio_put(afolio, err ? true : false);
 }
 
 static int sanity_check_node_footer(struct f2fs_sb_info *sbi,
-					struct page *page, pgoff_t nid,
+					struct folio *folio, pgoff_t nid,
 					enum node_type ntype)
 {
+	struct page *page = &folio->page;
+
 	if (unlikely(nid != nid_of_node(page) ||
 		(ntype == NODE_TYPE_INODE && !IS_INODE(page)) ||
 		(ntype == NODE_TYPE_XATTR &&
···
 			  "node_footer[nid:%u,ino:%u,ofs:%u,cpver:%llu,blkaddr:%u]",
 			  ntype, nid, nid_of_node(page), ino_of_node(page),
 			  ofs_of_node(page), cpver_of_node(page),
-			  next_blkaddr_of_node(page));
+			  next_blkaddr_of_node(folio));
 		set_sbi_flag(sbi, SBI_NEED_FSCK);
 		f2fs_handle_error(sbi, ERROR_INCONSISTENT_FOOTER);
 		return -EFSCORRUPTED;
···
 }
 
 static struct folio *__get_node_folio(struct f2fs_sb_info *sbi, pgoff_t nid,
-					struct page *parent, int start,
-					enum node_type ntype)
+		struct folio *parent, int start, enum node_type ntype)
 {
 	struct folio *folio;
 	int err;
···
 	if (IS_ERR(folio))
 		return folio;
 
-	err = read_node_page(&folio->page, 0);
-	if (err < 0) {
+	err = read_node_folio(folio, 0);
+	if (err < 0)
 		goto out_put_err;
-	} else if (err == LOCKED_PAGE) {
-		err = 0;
+	if (err == LOCKED_PAGE)
 		goto page_hit;
-	}
 
 	if (parent)
 		f2fs_ra_node_pages(parent, start + 1, MAX_RA_NODE);
 
 	folio_lock(folio);
 
-	if (unlikely(folio->mapping != NODE_MAPPING(sbi))) {
+	if (unlikely(!is_node_folio(folio))) {
 		f2fs_folio_put(folio, true);
 		goto repeat;
 	}
···
 		goto out_err;
 	}
 
-	if (!f2fs_inode_chksum_verify(sbi, &folio->page)) {
+	if (!f2fs_inode_chksum_verify(sbi, folio)) {
 		err = -EFSBADCRC;
 		goto out_err;
 	}
page_hit:
-	err = sanity_check_node_footer(sbi, &folio->page, nid, ntype);
+	err = sanity_check_node_footer(sbi, folio, nid, ntype);
 	if (!err)
 		return folio;
out_err:
 	folio_clear_uptodate(folio);
out_put_err:
-	/* ENOENT comes from read_node_page which is not an error. */
+	/* ENOENT comes from read_node_folio which is not an error. */
 	if (err != -ENOENT)
 		f2fs_handle_page_eio(sbi, folio, NODE);
 	f2fs_folio_put(folio, true);
 	return ERR_PTR(err);
 }
 
-struct page *f2fs_get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid)
+struct folio *f2fs_get_node_folio(struct f2fs_sb_info *sbi, pgoff_t nid)
 {
-	struct folio *folio = __get_node_folio(sbi, nid, NULL, 0,
-						NODE_TYPE_REGULAR);
-
-	return &folio->page;
+	return __get_node_folio(sbi, nid, NULL, 0, NODE_TYPE_REGULAR);
 }
 
 struct folio *f2fs_get_inode_folio(struct f2fs_sb_info *sbi, pgoff_t ino)
···
 	return __get_node_folio(sbi, ino, NULL, 0, NODE_TYPE_INODE);
 }
 
-struct page *f2fs_get_inode_page(struct f2fs_sb_info *sbi, pgoff_t ino)
+struct folio *f2fs_get_xnode_folio(struct f2fs_sb_info *sbi, pgoff_t xnid)
 {
-	struct folio *folio = f2fs_get_inode_folio(sbi, ino);
-
-	return &folio->page;
+	return __get_node_folio(sbi, xnid, NULL, 0, NODE_TYPE_XATTR);
 }
 
-struct page *f2fs_get_xnode_page(struct f2fs_sb_info *sbi, pgoff_t xnid)
+static struct folio *f2fs_get_node_folio_ra(struct folio *parent, int start)
 {
-	struct folio *folio = __get_node_folio(sbi, xnid, NULL, 0,
-						NODE_TYPE_XATTR);
+	struct f2fs_sb_info *sbi = F2FS_F_SB(parent);
+	nid_t nid = get_nid(&parent->page, start, false);
 
-	return &folio->page;
-}
-
-struct page *f2fs_get_node_page_ra(struct page *parent, int start)
-{
-	struct f2fs_sb_info *sbi = F2FS_P_SB(parent);
-	nid_t nid = get_nid(parent, start, false);
-	struct folio *folio = __get_node_folio(sbi, nid, parent, start,
-						NODE_TYPE_REGULAR);
-
-	return &folio->page;
+	return __get_node_folio(sbi, nid, parent, start, NODE_TYPE_REGULAR);
 }
 
 static void flush_inline_data(struct f2fs_sb_info *sbi, nid_t ino)
 {
 	struct inode *inode;
-	struct page *page;
+	struct folio *folio;
 	int ret;
 
 	/* should flush inline_data before evict_inode */
···
 	if (!inode)
 		return;
 
-	page = f2fs_pagecache_get_page(inode->i_mapping, 0,
+	folio = f2fs_filemap_get_folio(inode->i_mapping, 0,
 					FGP_LOCK|FGP_NOWAIT, 0);
-	if (!page)
+	if (IS_ERR(folio))
 		goto iput_out;
 
-	if (!PageUptodate(page))
-		goto page_out;
+	if (!folio_test_uptodate(folio))
+		goto folio_out;
 
-	if (!PageDirty(page))
-		goto page_out;
+	if (!folio_test_dirty(folio))
+		goto folio_out;
 
-	if (!clear_page_dirty_for_io(page))
-		goto page_out;
+	if (!folio_clear_dirty_for_io(folio))
+		goto folio_out;
 
-	ret = f2fs_write_inline_data(inode, page_folio(page));
+	ret = f2fs_write_inline_data(inode, folio);
 	inode_dec_dirty_pages(inode);
 	f2fs_remove_dirty_inode(inode);
 	if (ret)
-		set_page_dirty(page);
-page_out:
-	f2fs_put_page(page, 1);
+		folio_mark_dirty(folio);
folio_out:
+	f2fs_folio_put(folio, true);
iput_out:
 	iput(inode);
 }
···
 
 		folio_lock(folio);
 
-		if (unlikely(folio->mapping != NODE_MAPPING(sbi))) {
+		if (unlikely(!is_node_folio(folio))) {
continue_unlock:
 			folio_unlock(folio);
 			continue;
···
 	return last_folio;
 }
 
-static int __write_node_page(struct page *page, bool atomic, bool *submitted,
+static bool __write_node_folio(struct folio *folio, bool atomic, bool *submitted,
 				struct writeback_control *wbc, bool do_balance,
 				enum iostat_type io_type, unsigned int *seq_id)
 {
-	struct f2fs_sb_info *sbi = F2FS_P_SB(page);
-	struct folio *folio = page_folio(page);
+	struct f2fs_sb_info *sbi = F2FS_F_SB(folio);
 	nid_t nid;
 	struct node_info ni;
 	struct f2fs_io_info fio = {
 		.sbi = sbi,
-		.ino = ino_of_node(page),
+		.ino = ino_of_node(&folio->page),
 		.type = NODE,
 		.op = REQ_OP_WRITE,
 		.op_flags = wbc_to_write_flags(wbc),
-		.page = page,
+		.page = &folio->page,
 		.encrypted_page = NULL,
 		.submitted = 0,
 		.io_type = io_type,
···
 		folio_clear_uptodate(folio);
 		dec_page_count(sbi, F2FS_DIRTY_NODES);
 		folio_unlock(folio);
-		return 0;
+		return true;
 	}
 
 	if (unlikely(is_sbi_flag_set(sbi, SBI_POR_DOING)))
···
 
 	if (!is_sbi_flag_set(sbi, SBI_CP_DISABLED) &&
 			wbc->sync_mode == WB_SYNC_NONE &&
-			IS_DNODE(page) && is_cold_node(page))
+			IS_DNODE(&folio->page) && is_cold_node(&folio->page))
 		goto redirty_out;
 
 	/* get old block addr of this node page */
-	nid = nid_of_node(page);
+	nid = nid_of_node(&folio->page);
 	f2fs_bug_on(sbi, folio->index != nid);
 
 	if (f2fs_get_node_info(sbi, nid, &ni, !do_balance))
 		goto redirty_out;
 
-	if (wbc->for_reclaim) {
-		if (!f2fs_down_read_trylock(&sbi->node_write))
-			goto redirty_out;
-	} else {
-		f2fs_down_read(&sbi->node_write);
-	}
+	f2fs_down_read(&sbi->node_write);
 
 	/* This page is already truncated */
 	if (unlikely(ni.blk_addr == NULL_ADDR)) {
···
 		dec_page_count(sbi, F2FS_DIRTY_NODES);
 		f2fs_up_read(&sbi->node_write);
 		folio_unlock(folio);
-		return 0;
+		return true;
 	}
 
 	if (__is_valid_data_blkaddr(ni.blk_addr) &&
···
 
 	/* should add to global list before clearing PAGECACHE status */
 	if (f2fs_in_warm_node_list(sbi, folio)) {
-		seq = f2fs_add_fsync_node_entry(sbi, page);
+		seq = f2fs_add_fsync_node_entry(sbi, folio);
 		if (seq_id)
 			*seq_id = seq;
 	}
···
 
 	fio.old_blkaddr = ni.blk_addr;
 	f2fs_do_write_node_page(nid, &fio);
-	set_node_addr(sbi, &ni, fio.new_blkaddr, is_fsync_dnode(page));
+	set_node_addr(sbi, &ni, fio.new_blkaddr, is_fsync_dnode(&folio->page));
 	dec_page_count(sbi, F2FS_DIRTY_NODES);
 	f2fs_up_read(&sbi->node_write);
-
-	if (wbc->for_reclaim) {
-		f2fs_submit_merged_write_cond(sbi, NULL, page, 0, NODE);
-		submitted = NULL;
-	}
 
 	folio_unlock(folio);
 
···
 
 	if (do_balance)
 		f2fs_balance_fs(sbi, false);
-	return 0;
+	return true;
 
redirty_out:
 	folio_redirty_for_writepage(wbc, folio);
-	return AOP_WRITEPAGE_ACTIVATE;
+	folio_unlock(folio);
+	return false;
 }
 
-int f2fs_move_node_page(struct page *node_page, int gc_type)
+int f2fs_move_node_folio(struct folio *node_folio, int gc_type)
 {
 	int err = 0;
 
···
 		struct writeback_control wbc = {
 			.sync_mode = WB_SYNC_ALL,
 			.nr_to_write = 1,
-			.for_reclaim = 0,
 		};
 
-		f2fs_wait_on_page_writeback(node_page, NODE, true, true);
+		f2fs_folio_wait_writeback(node_folio, NODE, true, true);
 
-		set_page_dirty(node_page);
+		folio_mark_dirty(node_folio);
 
-		if (!clear_page_dirty_for_io(node_page)) {
+		if (!folio_clear_dirty_for_io(node_folio)) {
 			err = -EAGAIN;
 			goto out_page;
 		}
 
-		if (__write_node_page(node_page, false, NULL,
-					&wbc, false, FS_GC_NODE_IO, NULL)) {
+		if (!__write_node_folio(node_folio, false, NULL,
+					&wbc, false, FS_GC_NODE_IO, NULL))
 			err = -EAGAIN;
-			unlock_page(node_page);
-		}
 		goto release_page;
 	} else {
 		/* set page dirty and write it */
-		if (!folio_test_writeback(page_folio(node_page)))
-			set_page_dirty(node_page);
+		if (!folio_test_writeback(node_folio))
+			folio_mark_dirty(node_folio);
 	}
out_page:
-	unlock_page(node_page);
+	folio_unlock(node_folio);
release_page:
-	f2fs_put_page(node_page, 0);
+	f2fs_folio_put(node_folio, false);
 	return err;
 }
 
···
 
 		folio_lock(folio);
 
-		if (unlikely(folio->mapping != NODE_MAPPING(sbi))) {
+		if (unlikely(!is_node_folio(folio))) {
continue_unlock:
 			folio_unlock(folio);
 			continue;
···
 			if (IS_INODE(&folio->page)) {
 				if (is_inode_flag_set(inode,
 							FI_DIRTY_INODE))
-					f2fs_update_inode(inode, &folio->page);
+					f2fs_update_inode(inode, folio);
 				set_dentry_mark(&folio->page,
 					f2fs_need_dentry_mark(sbi, ino));
 			}
···
 			if (!folio_clear_dirty_for_io(folio))
 				goto continue_unlock;
 
-			ret = __write_node_page(&folio->page, atomic &&
+			if (!__write_node_folio(folio, atomic &&
 						folio == last_folio,
 						&submitted, wbc, true,
-						FS_NODE_IO, seq_id);
-			if (ret) {
-				folio_unlock(folio);
+						FS_NODE_IO, seq_id)) {
 				f2fs_folio_put(last_folio, false);
-				break;
-			} else if (submitted) {
-				nwritten++;
+				folio_batch_release(&fbatch);
+				ret = -EIO;
+				goto out;
 			}
+			if (submitted)
+				nwritten++;
 
 			if (folio == last_folio) {
 				f2fs_folio_put(folio, false);
+				folio_batch_release(&fbatch);
 				marked = true;
-				break;
+				goto out;
 			}
 		}
 		folio_batch_release(&fbatch);
 		cond_resched();
-
-		if (ret || marked)
-			break;
 	}
-	if (!ret && atomic && !marked) {
+	if (atomic && !marked) {
 		f2fs_debug(sbi, "Retry to write fsync mark: ino=%u, idx=%lx",
 			   ino, last_folio->index);
 		folio_lock(last_folio);
···
out:
 	if (nwritten)
 		f2fs_submit_merged_write_cond(sbi, NULL, NULL, ino, NODE);
-	return ret ? -EIO : 0;
+	return ret;
 }
 
 static int f2fs_match_ino(struct inode *inode, unsigned long ino, void *data)
···
 	if (!inode)
 		return false;
 
-	f2fs_update_inode(inode, &folio->page);
+	f2fs_update_inode(inode, folio);
 	folio_unlock(folio);
 
 	iput(inode);
···
 
 		folio_lock(folio);
 
-		if (unlikely(folio->mapping != NODE_MAPPING(sbi)))
+		if (unlikely(!is_node_folio(folio)))
 			goto unlock;
 		if (!folio_test_dirty(folio))
 			goto unlock;
···
 		else if (!folio_trylock(folio))
 			continue;
 
-		if (unlikely(folio->mapping != NODE_MAPPING(sbi))) {
+		if (unlikely(!is_node_folio(folio))) {
continue_unlock:
 			folio_unlock(folio);
 			continue;
···
 		set_fsync_mark(&folio->page, 0);
 		set_dentry_mark(&folio->page, 0);
 
-		ret = __write_node_page(&folio->page, false, &submitted,
-					wbc, do_balance, io_type, NULL);
-		if (ret)
+		if (!__write_node_folio(folio, false, &submitted,
+					wbc, do_balance, io_type, NULL)) {
 			folio_unlock(folio);
-		else if (submitted)
+			folio_batch_release(&fbatch);
+			ret = -EIO;
+			goto out;
+		}
+		if (submitted)
 			nwritten++;
 
 		if (--wbc->nr_to_write == 0)
···
 				unsigned int seq_id)
 {
 	struct fsync_node_entry *fn;
-	struct page *page;
 	struct list_head *head = &sbi->fsync_node_list;
 	unsigned long flags;
 	unsigned int cur_seq_id = 0;
 
 	while (seq_id && cur_seq_id < seq_id) {
+		struct folio *folio;
+
 		spin_lock_irqsave(&sbi->fsync_node_lock, flags);
 		if (list_empty(head)) {
 			spin_unlock_irqrestore(&sbi->fsync_node_lock, flags);
···
 			break;
 		}
 		cur_seq_id = fn->seq_id;
-		page = fn->page;
-		get_page(page);
+		folio = fn->folio;
+		folio_get(folio);
 		spin_unlock_irqrestore(&sbi->fsync_node_lock, flags);
 
-		f2fs_wait_on_page_writeback(page, NODE, true, false);
+		f2fs_folio_wait_writeback(folio, NODE, true, false);
 
-		put_page(page);
+		folio_put(folio);
 	}
 
 	return filemap_check_errors(NODE_MAPPING(sbi));
···
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
 	struct free_nid *i, *e;
 	struct nat_entry *ne;
-	int err = -EINVAL;
+	int err;
 	bool ret = false;
 
 	/* 0 nid should not be used */
···
 	i->nid = nid;
 	i->state = FREE_NID;
 
-	radix_tree_preload(GFP_NOFS | __GFP_NOFAIL);
+	err = radix_tree_preload(GFP_NOFS | __GFP_NOFAIL);
+	f2fs_bug_on(sbi, err);
+
+	err = -EINVAL;
 
 	spin_lock(&nm_i->nid_list_lock);
 
···
  *  - __lookup_nat_cache
  *  - f2fs_add_link
  *   - f2fs_init_inode_metadata
- *    - f2fs_new_inode_page
- *     - f2fs_new_node_page
+ *    - f2fs_new_inode_folio
+ *     - f2fs_new_node_folio
  *      - set_node_addr
  *  - f2fs_alloc_nid_done
  *   - __remove_nid_from_list(PREALLOC_NID)
···
 }
 
 static int scan_nat_page(struct f2fs_sb_info *sbi,
-			struct page *nat_page, nid_t start_nid)
+			struct f2fs_nat_block *nat_blk, nid_t start_nid)
 {
 	struct f2fs_nm_info *nm_i = NM_I(sbi);
-	struct f2fs_nat_block *nat_blk = page_address(nat_page);
 	block_t blk_addr;
 	unsigned int nat_ofs = NAT_BLOCK_OFFSET(start_nid);
 	int i;
···
 	while (1) {
 		if (!test_bit_le(NAT_BLOCK_OFFSET(nid),
 					nm_i->nat_block_bitmap)) {
-			struct page *page = get_current_nat_page(sbi, nid);
+			struct folio *folio = get_current_nat_folio(sbi, nid);
 
-			if (IS_ERR(page)) {
-				ret = PTR_ERR(page);
+			if (IS_ERR(folio)) {
+				ret = PTR_ERR(folio);
 			} else {
-				ret = scan_nat_page(sbi, page, nid);
-				f2fs_put_page(page, 1);
+				ret = scan_nat_page(sbi, folio_address(folio),
+							nid);
+				f2fs_folio_put(folio, true);
 			}
 
 			if (ret) {
···
 	return nr - nr_shrink;
 }
 
-int f2fs_recover_inline_xattr(struct inode *inode, struct page *page)
+int f2fs_recover_inline_xattr(struct inode *inode, struct folio *folio)
 {
 	void *src_addr, *dst_addr;
 	size_t inline_size;
-	struct page *ipage;
+	struct folio *ifolio;
 	struct f2fs_inode *ri;
 
-	ipage = f2fs_get_inode_page(F2FS_I_SB(inode), inode->i_ino);
-	if (IS_ERR(ipage))
-		return PTR_ERR(ipage);
+	ifolio = f2fs_get_inode_folio(F2FS_I_SB(inode), inode->i_ino);
+	if (IS_ERR(ifolio))
+		return PTR_ERR(ifolio);
 
-	ri = F2FS_INODE(page);
+	ri = F2FS_INODE(&folio->page);
 	if (ri->i_inline & F2FS_INLINE_XATTR) {
 		if (!f2fs_has_inline_xattr(inode)) {
 			set_inode_flag(inode, FI_INLINE_XATTR);
···
 		goto update_inode;
 	}
 
-	dst_addr = inline_xattr_addr(inode, ipage);
-	src_addr = inline_xattr_addr(inode, page);
+	dst_addr = inline_xattr_addr(inode, ifolio);
+	src_addr = inline_xattr_addr(inode, folio);
 	inline_size = inline_xattr_size(inode);
 
-	f2fs_wait_on_page_writeback(ipage, NODE, true, true);
+	f2fs_folio_wait_writeback(ifolio, NODE, true, true);
 	memcpy(dst_addr, src_addr, inline_size);
update_inode:
-	f2fs_update_inode(inode, ipage);
-	f2fs_put_page(ipage, 1);
+	f2fs_update_inode(inode, ifolio);
+	f2fs_folio_put(ifolio, true);
 	return 0;
 }
 
···
 	nid_t new_xnid;
 	struct dnode_of_data dn;
 	struct node_info ni;
 	struct
page *xpage; 2773 + struct folio *xfolio; 2752 2774 int err; 2753 2775 2754 2776 if (!prev_xnid) ··· 2769 2791 return -ENOSPC; 2770 2792 2771 2793 set_new_dnode(&dn, inode, NULL, NULL, new_xnid); 2772 - xpage = f2fs_new_node_page(&dn, XATTR_NODE_OFFSET); 2773 - if (IS_ERR(xpage)) { 2794 + xfolio = f2fs_new_node_folio(&dn, XATTR_NODE_OFFSET); 2795 + if (IS_ERR(xfolio)) { 2774 2796 f2fs_alloc_nid_failed(sbi, new_xnid); 2775 - return PTR_ERR(xpage); 2797 + return PTR_ERR(xfolio); 2776 2798 } 2777 2799 2778 2800 f2fs_alloc_nid_done(sbi, new_xnid); ··· 2780 2802 2781 2803 /* 3: update and set xattr node page dirty */ 2782 2804 if (page) { 2783 - memcpy(F2FS_NODE(xpage), F2FS_NODE(page), 2805 + memcpy(F2FS_NODE(&xfolio->page), F2FS_NODE(page), 2784 2806 VALID_XATTR_BLOCK_SIZE); 2785 - set_page_dirty(xpage); 2807 + folio_mark_dirty(xfolio); 2786 2808 } 2787 - f2fs_put_page(xpage, 1); 2809 + f2fs_folio_put(xfolio, true); 2788 2810 2789 2811 return 0; 2790 2812 } ··· 2794 2816 struct f2fs_inode *src, *dst; 2795 2817 nid_t ino = ino_of_node(page); 2796 2818 struct node_info old_ni, new_ni; 2797 - struct page *ipage; 2819 + struct folio *ifolio; 2798 2820 int err; 2799 2821 2800 2822 err = f2fs_get_node_info(sbi, ino, &old_ni, false); ··· 2804 2826 if (unlikely(old_ni.blk_addr != NULL_ADDR)) 2805 2827 return -EINVAL; 2806 2828 retry: 2807 - ipage = f2fs_grab_cache_page(NODE_MAPPING(sbi), ino, false); 2808 - if (!ipage) { 2829 + ifolio = f2fs_grab_cache_folio(NODE_MAPPING(sbi), ino, false); 2830 + if (IS_ERR(ifolio)) { 2809 2831 memalloc_retry_wait(GFP_NOFS); 2810 2832 goto retry; 2811 2833 } ··· 2813 2835 /* Should not use this inode from free nid list */ 2814 2836 remove_free_nid(sbi, ino); 2815 2837 2816 - if (!PageUptodate(ipage)) 2817 - SetPageUptodate(ipage); 2818 - fill_node_footer(ipage, ino, ino, 0, true); 2819 - set_cold_node(ipage, false); 2838 + if (!folio_test_uptodate(ifolio)) 2839 + folio_mark_uptodate(ifolio); 2840 + fill_node_footer(&ifolio->page, ino, ino, 0, 
true); 2841 + set_cold_node(&ifolio->page, false); 2820 2842 2821 2843 src = F2FS_INODE(page); 2822 - dst = F2FS_INODE(ipage); 2844 + dst = F2FS_INODE(&ifolio->page); 2823 2845 2824 2846 memcpy(dst, src, offsetof(struct f2fs_inode, i_ext)); 2825 2847 dst->i_size = 0; ··· 2855 2877 WARN_ON(1); 2856 2878 set_node_addr(sbi, &new_ni, NEW_ADDR, false); 2857 2879 inc_valid_inode_count(sbi); 2858 - set_page_dirty(ipage); 2859 - f2fs_put_page(ipage, 1); 2880 + folio_mark_dirty(ifolio); 2881 + f2fs_folio_put(ifolio, true); 2860 2882 return 0; 2861 2883 } 2862 2884 ··· 2880 2902 f2fs_ra_meta_pages(sbi, addr, nrpages, META_POR, true); 2881 2903 2882 2904 for (idx = addr; idx < addr + nrpages; idx++) { 2883 - struct page *page = f2fs_get_tmp_page(sbi, idx); 2905 + struct folio *folio = f2fs_get_tmp_folio(sbi, idx); 2884 2906 2885 - if (IS_ERR(page)) 2886 - return PTR_ERR(page); 2907 + if (IS_ERR(folio)) 2908 + return PTR_ERR(folio); 2887 2909 2888 - rn = F2FS_NODE(page); 2910 + rn = F2FS_NODE(&folio->page); 2889 2911 sum_entry->nid = rn->footer.nid; 2890 2912 sum_entry->version = 0; 2891 2913 sum_entry->ofs_in_node = 0; 2892 2914 sum_entry++; 2893 - f2fs_put_page(page, 1); 2915 + f2fs_folio_put(folio, true); 2894 2916 } 2895 2917 2896 2918 invalidate_mapping_pages(META_MAPPING(sbi), addr, ··· 3151 3173 nat_bits_addr = __start_cp_addr(sbi) + BLKS_PER_SEG(sbi) - 3152 3174 nm_i->nat_bits_blocks; 3153 3175 for (i = 0; i < nm_i->nat_bits_blocks; i++) { 3154 - struct page *page; 3176 + struct folio *folio; 3155 3177 3156 - page = f2fs_get_meta_page(sbi, nat_bits_addr++); 3157 - if (IS_ERR(page)) 3158 - return PTR_ERR(page); 3178 + folio = f2fs_get_meta_folio(sbi, nat_bits_addr++); 3179 + if (IS_ERR(folio)) 3180 + return PTR_ERR(folio); 3159 3181 3160 3182 memcpy(nm_i->nat_bits + F2FS_BLK_TO_BYTES(i), 3161 - page_address(page), F2FS_BLKSIZE); 3162 - f2fs_put_page(page, 1); 3183 + folio_address(folio), F2FS_BLKSIZE); 3184 + f2fs_folio_put(folio, true); 3163 3185 } 3164 3186 3165 3187 
cp_ver |= (cur_cp_crc(ckpt) << 32);
fs/f2fs/node.h  (+6 -6)
···
    return le64_to_cpu(rn->footer.cp_ver);
}

-static inline block_t next_blkaddr_of_node(struct page *node_page)
+static inline block_t next_blkaddr_of_node(struct folio *node_folio)
{
-    struct f2fs_node *rn = F2FS_NODE(node_page);
+    struct f2fs_node *rn = F2FS_NODE(&node_folio->page);
    return le32_to_cpu(rn->footer.next_blkaddr);
}
···
    return true;
}

-static inline int set_nid(struct page *p, int off, nid_t nid, bool i)
+static inline int set_nid(struct folio *folio, int off, nid_t nid, bool i)
{
-    struct f2fs_node *rn = F2FS_NODE(p);
+    struct f2fs_node *rn = F2FS_NODE(&folio->page);

-    f2fs_wait_on_page_writeback(p, NODE, true, true);
+    f2fs_folio_wait_writeback(folio, NODE, true, true);

    if (i)
        rn->i.i_nid[off - NODE_DIR1_BLOCK] = cpu_to_le32(nid);
    else
        rn->in.nid[off] = cpu_to_le32(nid);
-    return set_page_dirty(p);
+    return folio_mark_dirty(folio);
}

static inline nid_t get_nid(struct page *p, int off, bool i)
fs/f2fs/recovery.c  (+90 -88)
···
    struct f2fs_dir_entry *de;
    struct f2fs_filename fname;
    struct qstr usr_fname;
-    struct page *page;
+    struct folio *folio;
    struct inode *dir, *einode;
    struct fsync_inode_entry *entry;
    int err = 0;
···
    if (err)
        goto out;
retry:
-    de = __f2fs_find_entry(dir, &fname, &page);
+    de = __f2fs_find_entry(dir, &fname, &folio);
    if (de && inode->i_ino == le32_to_cpu(de->ino))
        goto out_put;
···
            iput(einode);
            goto out_put;
        }
-        f2fs_delete_entry(de, page, dir, einode);
+        f2fs_delete_entry(de, folio, dir, einode);
        iput(einode);
        goto retry;
-    } else if (IS_ERR(page)) {
-        err = PTR_ERR(page);
+    } else if (IS_ERR(folio)) {
+        err = PTR_ERR(folio);
    } else {
        err = f2fs_add_dentry(dir, &fname, inode,
                    inode->i_ino, inode->i_mode);
···
        goto out;

out_put:
-    f2fs_put_page(page, 0);
+    f2fs_folio_put(folio, false);
out:
    if (file_enc_name(inode))
        name = "<encrypted>";
···
        block_t *blkaddr_fast, bool *is_detecting)
{
    unsigned int ra_blocks = RECOVERY_MAX_RA_BLOCKS;
-    struct page *page = NULL;
    int i;

    if (!*is_detecting)
        return 0;

    for (i = 0; i < 2; i++) {
+        struct folio *folio;
+
        if (!f2fs_is_valid_blkaddr(sbi, *blkaddr_fast, META_POR)) {
            *is_detecting = false;
            return 0;
        }

-        page = f2fs_get_tmp_page(sbi, *blkaddr_fast);
-        if (IS_ERR(page))
-            return PTR_ERR(page);
+        folio = f2fs_get_tmp_folio(sbi, *blkaddr_fast);
+        if (IS_ERR(folio))
+            return PTR_ERR(folio);

-        if (!is_recoverable_dnode(page)) {
-            f2fs_put_page(page, 1);
+        if (!is_recoverable_dnode(&folio->page)) {
+            f2fs_folio_put(folio, true);
            *is_detecting = false;
            return 0;
        }

        ra_blocks = adjust_por_ra_blocks(sbi, ra_blocks, *blkaddr_fast,
-                        next_blkaddr_of_node(page));
+                        next_blkaddr_of_node(folio));

-        *blkaddr_fast = next_blkaddr_of_node(page);
-        f2fs_put_page(page, 1);
+        *blkaddr_fast = next_blkaddr_of_node(folio);
+        f2fs_folio_put(folio, true);

        f2fs_ra_meta_pages_cond(sbi, *blkaddr_fast, ra_blocks);
    }
···
        bool check_only)
{
    struct curseg_info *curseg;
-    struct page *page = NULL;
    block_t blkaddr, blkaddr_fast;
    bool is_detecting = true;
    int err = 0;
···

    while (1) {
        struct fsync_inode_entry *entry;
+        struct folio *folio;

        if (!f2fs_is_valid_blkaddr(sbi, blkaddr, META_POR))
            return 0;

-        page = f2fs_get_tmp_page(sbi, blkaddr);
-        if (IS_ERR(page)) {
-            err = PTR_ERR(page);
+        folio = f2fs_get_tmp_folio(sbi, blkaddr);
+        if (IS_ERR(folio)) {
+            err = PTR_ERR(folio);
            break;
        }

-        if (!is_recoverable_dnode(page)) {
-            f2fs_put_page(page, 1);
+        if (!is_recoverable_dnode(&folio->page)) {
+            f2fs_folio_put(folio, true);
            break;
        }

-        if (!is_fsync_dnode(page))
+        if (!is_fsync_dnode(&folio->page))
            goto next;

-        entry = get_fsync_inode(head, ino_of_node(page));
+        entry = get_fsync_inode(head, ino_of_node(&folio->page));
        if (!entry) {
            bool quota_inode = false;

            if (!check_only &&
-                IS_INODE(page) && is_dent_dnode(page)) {
-                err = f2fs_recover_inode_page(sbi, page);
+                IS_INODE(&folio->page) &&
+                is_dent_dnode(&folio->page)) {
+                err = f2fs_recover_inode_page(sbi, &folio->page);
                if (err) {
-                    f2fs_put_page(page, 1);
+                    f2fs_folio_put(folio, true);
                    break;
                }
                quota_inode = true;
···
             *   CP | dnode(F) | inode(DF)
             * For this case, we should not give up now.
             */
-            entry = add_fsync_inode(sbi, head, ino_of_node(page),
+            entry = add_fsync_inode(sbi, head, ino_of_node(&folio->page),
                            quota_inode);
            if (IS_ERR(entry)) {
                err = PTR_ERR(entry);
                if (err == -ENOENT)
                    goto next;
-                f2fs_put_page(page, 1);
+                f2fs_folio_put(folio, true);
                break;
            }
        }
        entry->blkaddr = blkaddr;

-        if (IS_INODE(page) && is_dent_dnode(page))
+        if (IS_INODE(&folio->page) && is_dent_dnode(&folio->page))
            entry->last_dentry = blkaddr;
next:
        /* check next segment */
-        blkaddr = next_blkaddr_of_node(page);
-        f2fs_put_page(page, 1);
+        blkaddr = next_blkaddr_of_node(folio);
+        f2fs_folio_put(folio, true);

        err = sanity_check_node_chain(sbi, blkaddr, &blkaddr_fast,
                &is_detecting);
···
    unsigned short blkoff = GET_BLKOFF_FROM_SEG0(sbi, blkaddr);
    struct f2fs_summary_block *sum_node;
    struct f2fs_summary sum;
-    struct page *sum_page, *node_page;
+    struct folio *sum_folio, *node_folio;
    struct dnode_of_data tdn = *dn;
    nid_t ino, nid;
    struct inode *inode;
···
        }
    }

-    sum_page = f2fs_get_sum_page(sbi, segno);
-    if (IS_ERR(sum_page))
-        return PTR_ERR(sum_page);
-    sum_node = (struct f2fs_summary_block *)page_address(sum_page);
+    sum_folio = f2fs_get_sum_folio(sbi, segno);
+    if (IS_ERR(sum_folio))
+        return PTR_ERR(sum_folio);
+    sum_node = folio_address(sum_folio);
    sum = sum_node->entries[blkoff];
-    f2fs_put_page(sum_page, 1);
+    f2fs_folio_put(sum_folio, true);
got_it:
    /* Use the locked dnode page and inode */
    nid = le32_to_cpu(sum.nid);
    ofs_in_node = le16_to_cpu(sum.ofs_in_node);

-    max_addrs = ADDRS_PER_PAGE(dn->node_page, dn->inode);
+    max_addrs = ADDRS_PER_PAGE(&dn->node_folio->page, dn->inode);
    if (ofs_in_node >= max_addrs) {
        f2fs_err(sbi, "Inconsistent ofs_in_node:%u in summary, ino:%lu, nid:%u, max:%u",
            ofs_in_node, dn->inode->i_ino, nid, max_addrs);
···

    if (dn->inode->i_ino == nid) {
        tdn.nid = nid;
-        if (!dn->inode_page_locked)
-            lock_page(dn->inode_page);
-        tdn.node_page = dn->inode_page;
+        if (!dn->inode_folio_locked)
+            folio_lock(dn->inode_folio);
+        tdn.node_folio = dn->inode_folio;
        tdn.ofs_in_node = ofs_in_node;
        goto truncate_out;
    } else if (dn->nid == nid) {
···
    }

    /* Get the node page */
-    node_page = f2fs_get_node_page(sbi, nid);
-    if (IS_ERR(node_page))
-        return PTR_ERR(node_page);
+    node_folio = f2fs_get_node_folio(sbi, nid);
+    if (IS_ERR(node_folio))
+        return PTR_ERR(node_folio);

-    offset = ofs_of_node(node_page);
-    ino = ino_of_node(node_page);
-    f2fs_put_page(node_page, 1);
+    offset = ofs_of_node(&node_folio->page);
+    ino = ino_of_node(&node_folio->page);
+    f2fs_folio_put(node_folio, true);

    if (ino != dn->inode->i_ino) {
        int ret;
···
     * if inode page is locked, unlock temporarily, but its reference
     * count keeps alive.
     */
-    if (ino == dn->inode->i_ino && dn->inode_page_locked)
-        unlock_page(dn->inode_page);
+    if (ino == dn->inode->i_ino && dn->inode_folio_locked)
+        folio_unlock(dn->inode_folio);

    set_new_dnode(&tdn, inode, NULL, NULL, 0);
    if (f2fs_get_dnode_of_data(&tdn, bidx, LOOKUP_NODE))
···
out:
    if (ino != dn->inode->i_ino)
        iput(inode);
-    else if (dn->inode_page_locked)
-        lock_page(dn->inode_page);
+    else if (dn->inode_folio_locked)
+        folio_lock(dn->inode_folio);
    return 0;

truncate_out:
    if (f2fs_data_blkaddr(&tdn) == blkaddr)
        f2fs_truncate_data_blocks_range(&tdn, 1);
-    if (dn->inode->i_ino == nid && !dn->inode_page_locked)
-        unlock_page(dn->inode_page);
+    if (dn->inode->i_ino == nid && !dn->inode_folio_locked)
+        folio_unlock(dn->inode_folio);
    return 0;
}
···
}

static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
-                    struct page *page)
+                    struct folio *folio)
{
    struct dnode_of_data dn;
    struct node_info ni;
···
    int err = 0, recovered = 0;

    /* step 1: recover xattr */
-    if (IS_INODE(page)) {
-        err = f2fs_recover_inline_xattr(inode, page);
+    if (IS_INODE(&folio->page)) {
+        err = f2fs_recover_inline_xattr(inode, folio);
        if (err)
            goto out;
-    } else if (f2fs_has_xattr_block(ofs_of_node(page))) {
-        err = f2fs_recover_xattr_data(inode, page);
+    } else if (f2fs_has_xattr_block(ofs_of_node(&folio->page))) {
+        err = f2fs_recover_xattr_data(inode, &folio->page);
        if (!err)
            recovered++;
        goto out;
    }

    /* step 2: recover inline data */
-    err = f2fs_recover_inline_data(inode, page);
+    err = f2fs_recover_inline_data(inode, folio);
    if (err) {
        if (err == 1)
            err = 0;
···
    }

    /* step 3: recover data indices */
-    start = f2fs_start_bidx_of_node(ofs_of_node(page), inode);
-    end = start + ADDRS_PER_PAGE(page, inode);
+    start = f2fs_start_bidx_of_node(ofs_of_node(&folio->page), inode);
+    end = start + ADDRS_PER_PAGE(&folio->page, inode);

    set_new_dnode(&dn, inode, NULL, NULL, 0);
retry_dn:
···
        goto out;
    }

-    f2fs_wait_on_page_writeback(dn.node_page, NODE, true, true);
+    f2fs_folio_wait_writeback(dn.node_folio, NODE, true, true);

    err = f2fs_get_node_info(sbi, dn.nid, &ni, false);
    if (err)
        goto err;

-    f2fs_bug_on(sbi, ni.ino != ino_of_node(page));
+    f2fs_bug_on(sbi, ni.ino != ino_of_node(&folio->page));

-    if (ofs_of_node(dn.node_page) != ofs_of_node(page)) {
+    if (ofs_of_node(&dn.node_folio->page) != ofs_of_node(&folio->page)) {
        f2fs_warn(sbi, "Inconsistent ofs_of_node, ino:%lu, ofs:%u, %u",
-            inode->i_ino, ofs_of_node(dn.node_page),
-            ofs_of_node(page));
+            inode->i_ino, ofs_of_node(&dn.node_folio->page),
+            ofs_of_node(&folio->page));
        err = -EFSCORRUPTED;
        f2fs_handle_error(sbi, ERROR_INCONSISTENT_FOOTER);
        goto err;
···
        block_t src, dest;

        src = f2fs_data_blkaddr(&dn);
-        dest = data_blkaddr(dn.inode, page, dn.ofs_in_node);
+        dest = data_blkaddr(dn.inode, folio, dn.ofs_in_node);

        if (__is_valid_data_blkaddr(src) &&
            !f2fs_is_valid_blkaddr(sbi, src, META_POR)) {
···
        }
    }

-    copy_node_footer(dn.node_page, page);
-    fill_node_footer(dn.node_page, dn.nid, ni.ino,
-                    ofs_of_node(page), false);
-    set_page_dirty(dn.node_page);
+    copy_node_footer(&dn.node_folio->page, &folio->page);
+    fill_node_footer(&dn.node_folio->page, dn.nid, ni.ino,
+                    ofs_of_node(&folio->page), false);
+    folio_mark_dirty(dn.node_folio);
err:
    f2fs_put_dnode(&dn);
out:
···
        struct list_head *tmp_inode_list, struct list_head *dir_list)
{
    struct curseg_info *curseg;
-    struct page *page = NULL;
    int err = 0;
    block_t blkaddr;
    unsigned int ra_blocks = RECOVERY_MAX_RA_BLOCKS;
···

    while (1) {
        struct fsync_inode_entry *entry;
+        struct folio *folio;

        if (!f2fs_is_valid_blkaddr(sbi, blkaddr, META_POR))
            break;

-        page = f2fs_get_tmp_page(sbi, blkaddr);
-        if (IS_ERR(page)) {
-            err = PTR_ERR(page);
+        folio = f2fs_get_tmp_folio(sbi, blkaddr);
+        if (IS_ERR(folio)) {
+            err = PTR_ERR(folio);
            break;
        }

-        if (!is_recoverable_dnode(page)) {
-            f2fs_put_page(page, 1);
+        if (!is_recoverable_dnode(&folio->page)) {
+            f2fs_folio_put(folio, true);
            break;
        }

-        entry = get_fsync_inode(inode_list, ino_of_node(page));
+        entry = get_fsync_inode(inode_list, ino_of_node(&folio->page));
        if (!entry)
            goto next;
        /*
···
         * In this case, we can lose the latest inode(x).
         * So, call recover_inode for the inode update.
         */
-        if (IS_INODE(page)) {
-            err = recover_inode(entry->inode, page);
+        if (IS_INODE(&folio->page)) {
+            err = recover_inode(entry->inode, &folio->page);
            if (err) {
-                f2fs_put_page(page, 1);
+                f2fs_folio_put(folio, true);
                break;
            }
        }
        if (entry->last_dentry == blkaddr) {
-            err = recover_dentry(entry->inode, page, dir_list);
+            err = recover_dentry(entry->inode, &folio->page, dir_list);
            if (err) {
-                f2fs_put_page(page, 1);
+                f2fs_folio_put(folio, true);
                break;
            }
        }
-        err = do_recover_data(sbi, entry->inode, page);
+        err = do_recover_data(sbi, entry->inode, folio);
        if (err) {
-            f2fs_put_page(page, 1);
+            f2fs_folio_put(folio, true);
            break;
        }

···
            list_move_tail(&entry->list, tmp_inode_list);
next:
        ra_blocks = adjust_por_ra_blocks(sbi, ra_blocks, blkaddr,
-                        next_blkaddr_of_node(page));
+                        next_blkaddr_of_node(folio));

        /* check next segment */
-        blkaddr = next_blkaddr_of_node(page);
-        f2fs_put_page(page, 1);
+        blkaddr = next_blkaddr_of_node(folio);
+        f2fs_folio_put(folio, true);

        f2fs_ra_meta_pages_cond(sbi, blkaddr, ra_blocks);
    }
fs/f2fs/segment.c  (+131 -88)
···
            goto next;
        }

-        blen = min((pgoff_t)ADDRS_PER_PAGE(dn.node_page, cow_inode),
+        blen = min((pgoff_t)ADDRS_PER_PAGE(&dn.node_folio->page, cow_inode),
                len);
        index = off;
        for (i = 0; i < blen; i++, dn.ofs_in_node++, index++) {
···
    }

out:
+    if (time_to_inject(sbi, FAULT_TIMEOUT))
+        f2fs_io_schedule_timeout_killable(DEFAULT_FAULT_TIMEOUT);
+
    if (ret) {
        sbi->revoked_atomic_block += fi->atomic_write_cnt;
    } else {
        sbi->committed_atomic_block += fi->atomic_write_cnt;
        set_inode_flag(inode, FI_ATOMIC_COMMITTED);
+
+        /*
+         * inode may has no FI_ATOMIC_DIRTIED flag due to no write
+         * before commit.
+         */
        if (is_inode_flag_set(inode, FI_ATOMIC_DIRTIED)) {
+            /* clear atomic dirty status and set vfs dirty status */
            clear_inode_flag(inode, FI_ATOMIC_DIRTIED);
            f2fs_mark_inode_dirty_sync(inode, true);
        }
···
    if (need && excess_cached_nats(sbi))
        f2fs_balance_fs_bg(sbi, false);

-    if (!f2fs_is_checkpoint_ready(sbi))
+    if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
        return;

    /*
···
 * that the consecutive input blocks belong to the same segment.
 */
static int update_sit_entry_for_release(struct f2fs_sb_info *sbi, struct seg_entry *se,
-        block_t blkaddr, unsigned int offset, int del)
+        unsigned int segno, block_t blkaddr, unsigned int offset, int del)
{
    bool exist;
#ifdef CONFIG_F2FS_CHECK_FS
···
            f2fs_test_and_clear_bit(offset + i, se->discard_map))
            sbi->discard_blks++;

-        if (!f2fs_test_bit(offset + i, se->ckpt_valid_map))
+        if (!f2fs_test_bit(offset + i, se->ckpt_valid_map)) {
            se->ckpt_valid_blocks -= 1;
+            if (__is_large_section(sbi))
+                get_sec_entry(sbi, segno)->ckpt_valid_blocks -= 1;
+        }
    }
+
+    if (__is_large_section(sbi))
+        sanity_check_valid_blocks(sbi, segno);

    return del;
}

static int update_sit_entry_for_alloc(struct f2fs_sb_info *sbi, struct seg_entry *se,
-        block_t blkaddr, unsigned int offset, int del)
+        unsigned int segno, block_t blkaddr, unsigned int offset, int del)
{
    bool exist;
#ifdef CONFIG_F2FS_CHECK_FS
···
     * or newly invalidated.
     */
    if (!is_sbi_flag_set(sbi, SBI_CP_DISABLED)) {
-        if (!f2fs_test_and_set_bit(offset, se->ckpt_valid_map))
+        if (!f2fs_test_and_set_bit(offset, se->ckpt_valid_map)) {
            se->ckpt_valid_blocks++;
+            if (__is_large_section(sbi))
+                get_sec_entry(sbi, segno)->ckpt_valid_blocks++;
+        }
    }

-    if (!f2fs_test_bit(offset, se->ckpt_valid_map))
+    if (!f2fs_test_bit(offset, se->ckpt_valid_map)) {
        se->ckpt_valid_blocks += del;
+        if (__is_large_section(sbi))
+            get_sec_entry(sbi, segno)->ckpt_valid_blocks += del;
+    }
+
+    if (__is_large_section(sbi))
+        sanity_check_valid_blocks(sbi, segno);

    return del;
}
···

    /* Update valid block bitmap */
    if (del > 0) {
-        del = update_sit_entry_for_alloc(sbi, se, blkaddr, offset, del);
+        del = update_sit_entry_for_alloc(sbi, se, segno, blkaddr, offset, del);
    } else {
-        del = update_sit_entry_for_release(sbi, se, blkaddr, offset, del);
+        del = update_sit_entry_for_release(sbi, se, segno, blkaddr, offset, del);
    }

    __mark_sit_entry_dirty(sbi, segno);
···
}

/*
- * Caller should put this summary page
+ * Caller should put this summary folio
 */
-struct page *f2fs_get_sum_page(struct f2fs_sb_info *sbi, unsigned int segno)
+struct folio *f2fs_get_sum_folio(struct f2fs_sb_info *sbi, unsigned int segno)
{
    if (unlikely(f2fs_cp_error(sbi)))
        return ERR_PTR(-EIO);
-    return f2fs_get_meta_page_retry(sbi, GET_SUM_BLOCK(sbi, segno));
+    return f2fs_get_meta_folio_retry(sbi, GET_SUM_BLOCK(sbi, segno));
}

void f2fs_update_meta_page(struct f2fs_sb_info *sbi,
                    void *src, block_t blk_addr)
{
-    struct page *page = f2fs_grab_meta_page(sbi, blk_addr);
+    struct folio *folio = f2fs_grab_meta_folio(sbi, blk_addr);

-    memcpy(page_address(page), src, PAGE_SIZE);
-    set_page_dirty(page);
-    f2fs_put_page(page, 1);
+    memcpy(folio_address(folio), src, PAGE_SIZE);
+    folio_mark_dirty(folio);
+    f2fs_folio_put(folio, true);
}

static void write_sum_page(struct f2fs_sb_info *sbi,
···
                    int type, block_t blk_addr)
{
    struct curseg_info *curseg = CURSEG_I(sbi, type);
-    struct page *page = f2fs_grab_meta_page(sbi, blk_addr);
+    struct folio *folio = f2fs_grab_meta_folio(sbi, blk_addr);
    struct f2fs_summary_block *src = curseg->sum_blk;
    struct f2fs_summary_block *dst;

-    dst = (struct f2fs_summary_block *)page_address(page);
+    dst = folio_address(folio);
    memset(dst, 0, PAGE_SIZE);

    mutex_lock(&curseg->curseg_mutex);
···

    mutex_unlock(&curseg->curseg_mutex);

-    set_page_dirty(page);
-    f2fs_put_page(page, 1);
+    folio_mark_dirty(folio);
+    f2fs_folio_put(folio, true);
}

static int is_next_segment_free(struct f2fs_sb_info *sbi,
···
        if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_PRIOR_CONV || pinning)
            segno = 0;
        else
-            segno = max(sbi->first_zoned_segno, *newseg);
+            segno = max(sbi->first_seq_zone_segno, *newseg);
        hint = GET_SEC_FROM_SEG(sbi, segno);
    }
#endif
···
    if (secno >= MAIN_SECS(sbi) && f2fs_sb_has_blkzoned(sbi)) {
        /* Write only to sequential zones */
        if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_ONLY_SEQ) {
-            hint = GET_SEC_FROM_SEG(sbi, sbi->first_zoned_segno);
+            hint = GET_SEC_FROM_SEG(sbi, sbi->first_seq_zone_segno);
            secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint);
        } else
            secno = find_first_zero_bit(free_i->free_secmap,
···
    }
got_it:
    /* set it as dirty segment in free segmap */
-    f2fs_bug_on(sbi, test_bit(segno, free_i->free_segmap));
+    if (test_bit(segno, free_i->free_segmap)) {
+        ret = -EFSCORRUPTED;
+        f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_CORRUPTED_FREE_BITMAP);
+        goto out_unlock;
+    }

-    /* no free section in conventional zone */
+    /* no free section in conventional device or conventional zone */
    if (new_sec && pinning &&
-            !f2fs_valid_pinned_area(sbi, START_BLOCK(sbi, segno))) {
+            f2fs_is_sequential_zone_area(sbi, START_BLOCK(sbi, segno))) {
        ret = -EAGAIN;
        goto out_unlock;
    }
···
    struct curseg_info *curseg = CURSEG_I(sbi, type);
    unsigned int new_segno = curseg->next_segno;
    struct f2fs_summary_block *sum_node;
-    struct page *sum_page;
+    struct folio *sum_folio;

    if (curseg->inited)
        write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno));
···
    curseg->alloc_type = SSR;
    curseg->next_blkoff = __next_free_blkoff(sbi, curseg->segno, 0);

-    sum_page = f2fs_get_sum_page(sbi, new_segno);
-    if (IS_ERR(sum_page)) {
+    sum_folio = f2fs_get_sum_folio(sbi, new_segno);
+    if (IS_ERR(sum_folio)) {
        /* GC won't be able to use stale summary pages by cp_error */
        memset(curseg->sum_blk, 0, SUM_ENTRY_SIZE);
-        return PTR_ERR(sum_page);
+        return PTR_ERR(sum_folio);
    }
-    sum_node = (struct f2fs_summary_block *)page_address(sum_page);
+    sum_node = folio_address(sum_folio);
    memcpy(curseg->sum_blk, sum_node, SUM_ENTRY_SIZE);
-    f2fs_put_page(sum_page, 1);
+    f2fs_folio_put(sum_folio, true);
    return 0;
}
···

    if (f2fs_sb_has_blkzoned(sbi) && err == -EAGAIN && gc_required) {
        f2fs_down_write(&sbi->gc_lock);
-        err = f2fs_gc_range(sbi, 0, GET_SEGNO(sbi, FDEV(0).end_blk),
+        err = f2fs_gc_range(sbi, 0, sbi->first_seq_zone_segno - 1,
                true, ZONED_PIN_SEC_REQUIRED_COUNT);
        f2fs_up_write(&sbi->gc_lock);

···
static int __get_segment_type_4(struct f2fs_io_info *fio)
{
    if (fio->type == DATA) {
-        struct inode *inode = fio->page->mapping->host;
+        struct inode *inode = fio_inode(fio);

        if (S_ISDIR(inode->i_mode))
            return CURSEG_HOT_DATA;
···
static int __get_segment_type_6(struct f2fs_io_info *fio)
{
    if (fio->type == DATA) {
-        struct inode *inode = fio->page->mapping->host;
+        struct inode *inode = fio_inode(fio);
        int type;

        if (is_inode_flag_set(inode, FI_ALIGNED_WRITE))
···
        fscrypt_finalize_bounce_page(&fio->encrypted_page);
        folio_end_writeback(folio);
        if (f2fs_in_warm_node_list(fio->sbi, folio))
-            f2fs_del_fsync_node_entry(fio->sbi, fio->page);
+            f2fs_del_fsync_node_entry(fio->sbi, folio);
        goto out;
    }
    if (GET_SEGNO(fio->sbi, fio->old_blkaddr) != NULL_SEGNO)
···
    if (!err) {
        f2fs_update_device_state(fio->sbi, fio->ino,
                    fio->new_blkaddr, 1);
-        f2fs_update_iostat(fio->sbi, fio->page->mapping->host,
+        f2fs_update_iostat(fio->sbi, fio_inode(fio),
                    fio->io_type, F2FS_BLKSIZE);
    }
···
    /* submit cached LFS IO */
    f2fs_submit_merged_write_cond(sbi, NULL, &folio->page, 0, type);
    /* submit cached IPU IO */
-    f2fs_submit_merged_ipu_write(sbi, NULL, &folio->page);
+    f2fs_submit_merged_ipu_write(sbi, NULL, folio);
    if (ordered) {
        folio_wait_writeback(folio);
        f2fs_bug_on(sbi, locked && folio_test_writeback(folio));
···
void f2fs_wait_on_block_writeback(struct inode *inode, block_t blkaddr)
{
    struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
-    struct page *cpage;
+    struct folio *cfolio;

    if (!f2fs_meta_inode_gc_required(inode))
        return;
···
    if (!__is_valid_data_blkaddr(blkaddr))
        return;

-    cpage = find_lock_page(META_MAPPING(sbi), blkaddr);
-    if (cpage) {
-        f2fs_wait_on_page_writeback(cpage, DATA, true, true);
-        f2fs_put_page(cpage, 1);
+    cfolio = filemap_lock_folio(META_MAPPING(sbi), blkaddr);
+    if (!IS_ERR(cfolio)) {
+        f2fs_folio_wait_writeback(cfolio, DATA, true, true);
+        f2fs_folio_put(cfolio, true);
    }
}
···
    struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
    struct curseg_info *seg_i;
    unsigned char *kaddr;
-    struct page *page;
+    struct folio *folio;
    block_t start;
    int i, j, offset;

    start = start_sum_block(sbi);

-    page = f2fs_get_meta_page(sbi, start++);
-    if (IS_ERR(page))
-        return PTR_ERR(page);
-    kaddr = (unsigned char *)page_address(page);
+    folio = f2fs_get_meta_folio(sbi, start++);
+    if (IS_ERR(folio))
+        return PTR_ERR(folio);
+    kaddr = folio_address(folio);

    /* Step 1: restore nat cache */
    seg_i = CURSEG_I(sbi, CURSEG_HOT_DATA);
···
                        SUM_FOOTER_SIZE)
                continue;

-            f2fs_put_page(page, 1);
-            page = NULL;
+            f2fs_folio_put(folio, true);

-            page = f2fs_get_meta_page(sbi, start++);
-            if (IS_ERR(page))
-                return PTR_ERR(page);
-            kaddr = (unsigned char *)page_address(page);
+            folio = f2fs_get_meta_folio(sbi, start++);
+            if (IS_ERR(folio))
+                return PTR_ERR(folio);
+            kaddr = folio_address(folio);
            offset = 0;
        }
    }
-    f2fs_put_page(page, 1);
+    f2fs_folio_put(folio, true);
    return 0;
}
···
    struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
    struct f2fs_summary_block *sum;
    struct curseg_info *curseg;
-    struct page *new;
+    struct folio *new;
    unsigned short blk_off;
    unsigned int segno = 0;
    block_t blk_addr = 0;
···
        blk_addr = GET_SUM_BLOCK(sbi, segno);
    }

-    new = f2fs_get_meta_page(sbi, blk_addr);
+    new = f2fs_get_meta_folio(sbi, blk_addr);
    if (IS_ERR(new))
        return PTR_ERR(new);
-    sum = (struct f2fs_summary_block *)page_address(new);
+    sum = folio_address(new);

    if (IS_NODESEG(type)) {
        if (__exist_node_summaries(sbi)) {
···
    curseg->next_blkoff = blk_off;
    mutex_unlock(&curseg->curseg_mutex);
out:
-    f2fs_put_page(new, 1);
+    f2fs_folio_put(new, true);
    return err;
}
···

static void write_compacted_summaries(struct f2fs_sb_info *sbi, block_t blkaddr)
{
-    struct page *page;
+    struct folio *folio;
    unsigned char *kaddr;
    struct f2fs_summary *summary;
    struct curseg_info *seg_i;
    int written_size = 0;
    int i, j;

-    page = f2fs_grab_meta_page(sbi, blkaddr++);
-    kaddr = (unsigned char *)page_address(page);
+    folio = f2fs_grab_meta_folio(sbi, blkaddr++);
+    kaddr = folio_address(folio);
    memset(kaddr, 0, PAGE_SIZE);

    /* Step 1: write nat cache */
···
    for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++) {
        seg_i = CURSEG_I(sbi, i);
        for (j = 0; j < f2fs_curseg_valid_blocks(sbi, i); j++) {
-            if (!page) {
-                page = f2fs_grab_meta_page(sbi, blkaddr++);
-                kaddr = (unsigned char *)page_address(page);
+            if (!folio) {
+                folio = f2fs_grab_meta_folio(sbi, blkaddr++);
+                kaddr = folio_address(folio);
                memset(kaddr, 0, PAGE_SIZE);
                written_size = 0;
            }
···
                        SUM_FOOTER_SIZE)
4459 4432 continue; 4460 4433 4461 - set_page_dirty(page); 4462 - f2fs_put_page(page, 1); 4463 - page = NULL; 4434 + folio_mark_dirty(folio); 4435 + f2fs_folio_put(folio, true); 4436 + folio = NULL; 4464 4437 } 4465 4438 } 4466 - if (page) { 4467 - set_page_dirty(page); 4468 - f2fs_put_page(page, 1); 4439 + if (folio) { 4440 + folio_mark_dirty(folio); 4441 + f2fs_folio_put(folio, true); 4469 4442 } 4470 4443 } 4471 4444 ··· 4518 4491 return -1; 4519 4492 } 4520 4493 4521 - static struct page *get_current_sit_page(struct f2fs_sb_info *sbi, 4494 + static struct folio *get_current_sit_folio(struct f2fs_sb_info *sbi, 4522 4495 unsigned int segno) 4523 4496 { 4524 - return f2fs_get_meta_page(sbi, current_sit_addr(sbi, segno)); 4497 + return f2fs_get_meta_folio(sbi, current_sit_addr(sbi, segno)); 4525 4498 } 4526 4499 4527 - static struct page *get_next_sit_page(struct f2fs_sb_info *sbi, 4500 + static struct folio *get_next_sit_folio(struct f2fs_sb_info *sbi, 4528 4501 unsigned int start) 4529 4502 { 4530 4503 struct sit_info *sit_i = SIT_I(sbi); 4531 - struct page *page; 4504 + struct folio *folio; 4532 4505 pgoff_t src_off, dst_off; 4533 4506 4534 4507 src_off = current_sit_addr(sbi, start); 4535 4508 dst_off = next_sit_addr(sbi, src_off); 4536 4509 4537 - page = f2fs_grab_meta_page(sbi, dst_off); 4538 - seg_info_to_sit_page(sbi, page, start); 4510 + folio = f2fs_grab_meta_folio(sbi, dst_off); 4511 + seg_info_to_sit_folio(sbi, folio, start); 4539 4512 4540 - set_page_dirty(page); 4513 + folio_mark_dirty(folio); 4541 4514 set_to_next_sit(sit_i, start); 4542 4515 4543 - return page; 4516 + return folio; 4544 4517 } 4545 4518 4546 4519 static struct sit_entry_set *grab_sit_entry_set(void) ··· 4670 4643 * #2, flush sit entries to sit page. 
4671 4644 */ 4672 4645 list_for_each_entry_safe(ses, tmp, head, set_list) { 4673 - struct page *page = NULL; 4646 + struct folio *folio = NULL; 4674 4647 struct f2fs_sit_block *raw_sit = NULL; 4675 4648 unsigned int start_segno = ses->start_segno; 4676 4649 unsigned int end = min(start_segno + SIT_ENTRY_PER_BLOCK, ··· 4684 4657 if (to_journal) { 4685 4658 down_write(&curseg->journal_rwsem); 4686 4659 } else { 4687 - page = get_next_sit_page(sbi, start_segno); 4688 - raw_sit = page_address(page); 4660 + folio = get_next_sit_folio(sbi, start_segno); 4661 + raw_sit = folio_address(folio); 4689 4662 } 4690 4663 4691 4664 /* flush dirty sit entries in region of current sit set */ ··· 4723 4696 &raw_sit->entries[sit_offset]); 4724 4697 } 4725 4698 4699 + /* update ckpt_valid_block */ 4700 + if (__is_large_section(sbi)) { 4701 + set_ckpt_valid_blocks(sbi, segno); 4702 + sanity_check_valid_blocks(sbi, segno); 4703 + } 4704 + 4726 4705 __clear_bit(segno, bitmap); 4727 4706 sit_i->dirty_sentries--; 4728 4707 ses->entry_cnt--; ··· 4737 4704 if (to_journal) 4738 4705 up_write(&curseg->journal_rwsem); 4739 4706 else 4740 - f2fs_put_page(page, 1); 4707 + f2fs_folio_put(folio, true); 4741 4708 4742 4709 f2fs_bug_on(sbi, ses->entry_cnt); 4743 4710 release_sit_entry_set(ses); ··· 4949 4916 4950 4917 for (; start < end && start < MAIN_SEGS(sbi); start++) { 4951 4918 struct f2fs_sit_block *sit_blk; 4952 - struct page *page; 4919 + struct folio *folio; 4953 4920 4954 4921 se = &sit_i->sentries[start]; 4955 - page = get_current_sit_page(sbi, start); 4956 - if (IS_ERR(page)) 4957 - return PTR_ERR(page); 4958 - sit_blk = (struct f2fs_sit_block *)page_address(page); 4922 + folio = get_current_sit_folio(sbi, start); 4923 + if (IS_ERR(folio)) 4924 + return PTR_ERR(folio); 4925 + sit_blk = folio_address(folio); 4959 4926 sit = sit_blk->entries[SIT_ENTRY_OFFSET(sit_i, start)]; 4960 - f2fs_put_page(page, 1); 4927 + f2fs_folio_put(folio, true); 4961 4928 4962 4929 err = check_block_count(sbi, 
start, &sit); 4963 4930 if (err) ··· 5049 5016 } 5050 5017 } 5051 5018 up_read(&curseg->journal_rwsem); 5019 + 5020 + /* update ckpt_valid_block */ 5021 + if (__is_large_section(sbi)) { 5022 + unsigned int segno; 5023 + 5024 + for (segno = 0; segno < MAIN_SEGS(sbi); segno += SEGS_PER_SEC(sbi)) { 5025 + set_ckpt_valid_blocks(sbi, segno); 5026 + sanity_check_valid_blocks(sbi, segno); 5027 + } 5028 + } 5052 5029 5053 5030 if (err) 5054 5031 return err;
fs/f2fs/segment.h  (+100 -36)
···
 #define CAP_SEGS_PER_SEC(sbi)					\
 	(SEGS_PER_SEC(sbi) -					\
 	BLKS_TO_SEGS(sbi, (sbi)->unusable_blocks_per_sec))
+#define GET_START_SEG_FROM_SEC(sbi, segno)			\
+	(rounddown(segno, SEGS_PER_SEC(sbi)))
 #define GET_SEC_FROM_SEG(sbi, segno)				\
 	(((segno) == -1) ? -1 : (segno) / SEGS_PER_SEC(sbi))
 #define GET_SEG_FROM_SEC(sbi, secno)				\
···
 struct sec_entry {
 	unsigned int valid_blocks;	/* # of valid blocks in a section */
+	unsigned int ckpt_valid_blocks; /* # of valid blocks last cp in a section */
 };
 
 #define MAX_SKIP_GC_COUNT			16
···
 static inline unsigned int get_ckpt_valid_blocks(struct f2fs_sb_info *sbi,
 				unsigned int segno, bool use_section)
 {
-	if (use_section && __is_large_section(sbi)) {
-		unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
-		unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
-		unsigned int blocks = 0;
-		int i;
-
-		for (i = 0; i < SEGS_PER_SEC(sbi); i++, start_segno++) {
-			struct seg_entry *se = get_seg_entry(sbi, start_segno);
-
-			blocks += se->ckpt_valid_blocks;
-		}
-		return blocks;
-	}
-	return get_seg_entry(sbi, segno)->ckpt_valid_blocks;
+	if (use_section && __is_large_section(sbi))
+		return get_sec_entry(sbi, segno)->ckpt_valid_blocks;
+	else
+		return get_seg_entry(sbi, segno)->ckpt_valid_blocks;
 }
 
+static inline void set_ckpt_valid_blocks(struct f2fs_sb_info *sbi,
+		unsigned int segno)
+{
+	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
+	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
+	unsigned int blocks = 0;
+	int i;
+
+	for (i = 0; i < SEGS_PER_SEC(sbi); i++, start_segno++) {
+		struct seg_entry *se = get_seg_entry(sbi, start_segno);
+
+		blocks += se->ckpt_valid_blocks;
+	}
+	get_sec_entry(sbi, segno)->ckpt_valid_blocks = blocks;
+}
+
+#ifdef CONFIG_F2FS_CHECK_FS
+static inline void sanity_check_valid_blocks(struct f2fs_sb_info *sbi,
+		unsigned int segno)
+{
+	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
+	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
+	unsigned int blocks = 0;
+	int i;
+
+	for (i = 0; i < SEGS_PER_SEC(sbi); i++, start_segno++) {
+		struct seg_entry *se = get_seg_entry(sbi, start_segno);
+
+		blocks += se->ckpt_valid_blocks;
+	}
+
+	if (blocks != get_sec_entry(sbi, segno)->ckpt_valid_blocks) {
+		f2fs_err(sbi,
+			"Inconsistent ckpt valid blocks: "
+			"seg entry(%d) vs sec entry(%d) at secno %d",
+			blocks, get_sec_entry(sbi, segno)->ckpt_valid_blocks, secno);
+		f2fs_bug_on(sbi, 1);
+	}
+}
+#else
+static inline void sanity_check_valid_blocks(struct f2fs_sb_info *sbi,
+		unsigned int segno)
+{
+}
+#endif
 static inline void seg_info_from_raw_sit(struct seg_entry *se,
 					struct f2fs_sit_entry *rs)
 {
···
 	rs->mtime = cpu_to_le64(se->mtime);
 }
 
-static inline void seg_info_to_sit_page(struct f2fs_sb_info *sbi,
-				struct page *page, unsigned int start)
+static inline void seg_info_to_sit_folio(struct f2fs_sb_info *sbi,
+				struct folio *folio, unsigned int start)
 {
 	struct f2fs_sit_block *raw_sit;
 	struct seg_entry *se;
···
 					(unsigned long)MAIN_SEGS(sbi));
 	int i;
 
-	raw_sit = (struct f2fs_sit_block *)page_address(page);
+	raw_sit = folio_address(folio);
 	memset(raw_sit, 0, PAGE_SIZE);
 	for (i = 0; i < end - start; i++) {
 		rs = &raw_sit->entries[i];
···
 	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
 	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
 	unsigned int next;
-	unsigned int usable_segs = f2fs_usable_segs_in_sec(sbi);
 
 	spin_lock(&free_i->segmap_lock);
 	clear_bit(segno, free_i->free_segmap);
···
 	next = find_next_bit(free_i->free_segmap,
 			start_segno + SEGS_PER_SEC(sbi), start_segno);
-	if (next >= start_segno + usable_segs) {
+	if (next >= start_segno + f2fs_usable_segs_in_sec(sbi)) {
 		clear_bit(secno, free_i->free_secmap);
 		free_i->free_sections++;
 	}
···
 	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
 	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
 	unsigned int next;
-	unsigned int usable_segs = f2fs_usable_segs_in_sec(sbi);
+	bool ret;
 
 	spin_lock(&free_i->segmap_lock);
-	if (test_and_clear_bit(segno, free_i->free_segmap)) {
-		free_i->free_segments++;
+	ret = test_and_clear_bit(segno, free_i->free_segmap);
+	if (!ret)
+		goto unlock_out;
 
-		if (!inmem && IS_CURSEC(sbi, secno))
-			goto skip_free;
-		next = find_next_bit(free_i->free_segmap,
-				start_segno + SEGS_PER_SEC(sbi), start_segno);
-		if (next >= start_segno + usable_segs) {
-			if (test_and_clear_bit(secno, free_i->free_secmap))
-				free_i->free_sections++;
-		}
-	}
-skip_free:
+	free_i->free_segments++;
+
+	if (!inmem && IS_CURSEC(sbi, secno))
+		goto unlock_out;
+
+	/* check large section */
+	next = find_next_bit(free_i->free_segmap,
+			start_segno + SEGS_PER_SEC(sbi), start_segno);
+	if (next < start_segno + f2fs_usable_segs_in_sec(sbi))
+		goto unlock_out;
+
+	ret = test_and_clear_bit(secno, free_i->free_secmap);
+	if (!ret)
+		goto unlock_out;
+
+	free_i->free_sections++;
+
+	if (GET_SEC_FROM_SEG(sbi, sbi->next_victim_seg[BG_GC]) == secno)
+		sbi->next_victim_seg[BG_GC] = NULL_SEGNO;
+	if (GET_SEC_FROM_SEG(sbi, sbi->next_victim_seg[FG_GC]) == secno)
+		sbi->next_victim_seg[FG_GC] = NULL_SEGNO;
+
+unlock_out:
 	spin_unlock(&free_i->segmap_lock);
 }
···
 	if (unlikely(segno == NULL_SEGNO))
 		return false;
 
-	left_blocks = CAP_BLKS_PER_SEC(sbi) -
-			get_ckpt_valid_blocks(sbi, segno, true);
+	if (f2fs_lfs_mode(sbi) && __is_large_section(sbi)) {
+		left_blocks = CAP_BLKS_PER_SEC(sbi) -
+			SEGS_TO_BLKS(sbi, (segno - GET_START_SEG_FROM_SEC(sbi, segno))) -
+			CURSEG_I(sbi, i)->next_blkoff;
+	} else {
+		left_blocks = CAP_BLKS_PER_SEC(sbi) -
+			get_ckpt_valid_blocks(sbi, segno, true);
+	}
 
 	blocks = i <= CURSEG_COLD_DATA ? data_blocks : node_blocks;
 	if (blocks > left_blocks)
···
 	if (unlikely(segno == NULL_SEGNO))
 		return false;
 
-	left_blocks = CAP_BLKS_PER_SEC(sbi) -
-			get_ckpt_valid_blocks(sbi, segno, true);
+	if (f2fs_lfs_mode(sbi) && __is_large_section(sbi)) {
+		left_blocks = CAP_BLKS_PER_SEC(sbi) -
+			SEGS_TO_BLKS(sbi, (segno - GET_START_SEG_FROM_SEC(sbi, segno))) -
+			CURSEG_I(sbi, CURSEG_HOT_DATA)->next_blkoff;
+	} else {
+		left_blocks = CAP_BLKS_PER_SEC(sbi) -
+			get_ckpt_valid_blocks(sbi, segno, true);
+	}
+
 	if (dent_blocks > left_blocks)
 		return false;
 	return true;
fs/f2fs/shrinker.c  (+10 -3)
···
 		if (!inode)
 			continue;
 
-		len = fi->donate_end - fi->donate_start + 1;
-		npages = npages < len ? 0 : npages - len;
-		invalidate_inode_pages2_range(inode->i_mapping,
+		inode_lock(inode);
+		if (!is_inode_flag_set(inode, FI_DONATE_FINISHED)) {
+			len = fi->donate_end - fi->donate_start + 1;
+			npages = npages < len ? 0 : npages - len;
+
+			invalidate_inode_pages2_range(inode->i_mapping,
 				fi->donate_start, fi->donate_end);
+			set_inode_flag(inode, FI_DONATE_FINISHED);
+		}
+		inode_unlock(inode);
+
 		iput(inode);
 		cond_resched();
 	}
fs/f2fs/super.c  (+109 -59)
···
 	[FAULT_KVMALLOC]		= "kvmalloc",
 	[FAULT_PAGE_ALLOC]		= "page alloc",
 	[FAULT_PAGE_GET]		= "page get",
+	[FAULT_ALLOC_BIO]		= "alloc bio(obsolete)",
 	[FAULT_ALLOC_NID]		= "alloc nid",
 	[FAULT_ORPHAN]			= "orphan",
 	[FAULT_BLOCK]			= "no more block",
···
 	[FAULT_BLKADDR_CONSISTENCE]	= "inconsistent blkaddr",
 	[FAULT_NO_SEGMENT]		= "no free segment",
 	[FAULT_INCONSISTENT_FOOTER]	= "inconsistent footer",
+	[FAULT_TIMEOUT]			= "timeout",
+	[FAULT_VMALLOC]			= "vmalloc",
 };
 
 int f2fs_build_fault_attr(struct f2fs_sb_info *sbi, unsigned long rate,
-						unsigned long type)
+			unsigned long type, enum fault_option fo)
 {
 	struct f2fs_fault_info *ffi = &F2FS_OPTION(sbi).fault_info;
 
-	if (rate) {
+	if (fo & FAULT_ALL) {
+		memset(ffi, 0, sizeof(struct f2fs_fault_info));
+		return 0;
+	}
+
+	if (fo & FAULT_RATE) {
 		if (rate > INT_MAX)
 			return -EINVAL;
 		atomic_set(&ffi->inject_ops, 0);
 		ffi->inject_rate = (int)rate;
+		f2fs_info(sbi, "build fault injection rate: %lu", rate);
 	}
 
-	if (type) {
+	if (fo & FAULT_TYPE) {
 		if (type >= BIT(FAULT_MAX))
 			return -EINVAL;
 		ffi->inject_type = (unsigned int)type;
+		f2fs_info(sbi, "build fault injection type: 0x%lx", type);
 	}
 
-	if (!rate && !type)
-		memset(ffi, 0, sizeof(struct f2fs_fault_info));
-	else
-		f2fs_info(sbi,
-			"build fault injection attr: rate: %lu, type: 0x%lx",
-								rate, type);
 	return 0;
 }
 #endif
···
 	case Opt_fault_injection:
 		if (args->from && match_int(args, &arg))
 			return -EINVAL;
-		if (f2fs_build_fault_attr(sbi, arg,
-				F2FS_ALL_FAULT_TYPE))
+		if (f2fs_build_fault_attr(sbi, arg, 0, FAULT_RATE))
 			return -EINVAL;
 		set_opt(sbi, FAULT_INJECTION);
 		break;
···
 	case Opt_fault_type:
 		if (args->from && match_int(args, &arg))
 			return -EINVAL;
-		if (f2fs_build_fault_attr(sbi, 0, arg))
+		if (f2fs_build_fault_attr(sbi, 0, arg, FAULT_TYPE))
 			return -EINVAL;
 		set_opt(sbi, FAULT_INJECTION);
 		break;
···
 	}
 	spin_unlock(&sbi->inode_lock[DIRTY_META]);
 
-	if (!ret && f2fs_is_atomic_file(inode))
+	/* if atomic write is not committed, set inode w/ atomic dirty */
+	if (!ret && f2fs_is_atomic_file(inode) &&
+	    !is_inode_flag_set(inode, FI_ATOMIC_COMMITTED))
 		set_inode_flag(inode, FI_ATOMIC_DIRTIED);
 
 	return ret;
···
 	limit = min_not_zero(dquot->dq_dqb.dqb_bsoftlimit,
 					dquot->dq_dqb.dqb_bhardlimit);
-	if (limit)
-		limit >>= sb->s_blocksize_bits;
+	limit >>= sb->s_blocksize_bits;
 
-	if (limit && buf->f_blocks > limit) {
+	if (limit) {
+		uint64_t remaining = 0;
+
 		curblock = (dquot->dq_dqb.dqb_curspace +
 			    dquot->dq_dqb.dqb_rsvspace) >> sb->s_blocksize_bits;
-		buf->f_blocks = limit;
-		buf->f_bfree = buf->f_bavail =
-			(buf->f_blocks > curblock) ?
-			 (buf->f_blocks - curblock) : 0;
+		if (limit > curblock)
+			remaining = limit - curblock;
+
+		buf->f_blocks = min(buf->f_blocks, limit);
+		buf->f_bfree = min(buf->f_bfree, remaining);
+		buf->f_bavail = min(buf->f_bavail, remaining);
 	}
 
 	limit = min_not_zero(dquot->dq_dqb.dqb_isoftlimit,
 					dquot->dq_dqb.dqb_ihardlimit);
 
-	if (limit && buf->f_files > limit) {
-		buf->f_files = limit;
-		buf->f_ffree =
-			(buf->f_files > dquot->dq_dqb.dqb_curinodes) ?
-			 (buf->f_files - dquot->dq_dqb.dqb_curinodes) : 0;
+	if (limit) {
+		uint64_t remaining = 0;
+
+		if (limit > dquot->dq_dqb.dqb_curinodes)
+			remaining = limit - dquot->dq_dqb.dqb_curinodes;
+
+		buf->f_files = min(buf->f_files, limit);
+		buf->f_ffree = min(buf->f_ffree, remaining);
 	}
 
 	spin_unlock(&dquot->dq_dqb_lock);
···
 	buf->f_fsid = u64_to_fsid(id);
 
 #ifdef CONFIG_QUOTA
-	if (is_inode_flag_set(dentry->d_inode, FI_PROJ_INHERIT) &&
+	if (is_inode_flag_set(d_inode(dentry), FI_PROJ_INHERIT) &&
 			sb_has_quota_limits_enabled(sb, PRJQUOTA)) {
-		f2fs_statfs_project(sb, F2FS_I(dentry->d_inode)->i_projid, buf);
+		f2fs_statfs_project(sb, F2FS_I(d_inode(dentry))->i_projid, buf);
 	}
 #endif
 	return 0;
···
 	set_opt(sbi, POSIX_ACL);
 #endif
 
-	f2fs_build_fault_attr(sbi, 0, 0);
+	f2fs_build_fault_attr(sbi, 0, 0, FAULT_ALL);
 }
 
 #ifdef CONFIG_QUOTA
···
 {
 	struct inode *inode = sb_dqopt(sb)->files[type];
 	struct address_space *mapping = inode->i_mapping;
-	block_t blkidx = F2FS_BYTES_TO_BLK(off);
-	int offset = off & (sb->s_blocksize - 1);
 	int tocopy;
 	size_t toread;
 	loff_t i_size = i_size_read(inode);
-	struct page *page;
 
 	if (off > i_size)
 		return 0;
···
 		len = i_size - off;
 	toread = len;
 	while (toread > 0) {
-		tocopy = min_t(unsigned long, sb->s_blocksize - offset, toread);
+		struct folio *folio;
+		size_t offset;
+
 repeat:
-		page = read_cache_page_gfp(mapping, blkidx, GFP_NOFS);
-		if (IS_ERR(page)) {
-			if (PTR_ERR(page) == -ENOMEM) {
+		folio = mapping_read_folio_gfp(mapping, off >> PAGE_SHIFT,
+				GFP_NOFS);
+		if (IS_ERR(folio)) {
+			if (PTR_ERR(folio) == -ENOMEM) {
 				memalloc_retry_wait(GFP_NOFS);
 				goto repeat;
 			}
 			set_sbi_flag(F2FS_SB(sb), SBI_QUOTA_NEED_REPAIR);
-			return PTR_ERR(page);
+			return PTR_ERR(folio);
 		}
+		offset = offset_in_folio(folio, off);
+		tocopy = min(folio_size(folio) - offset, toread);
 
-		lock_page(page);
+		folio_lock(folio);
 
-		if (unlikely(page->mapping != mapping)) {
-			f2fs_put_page(page, 1);
+		if (unlikely(folio->mapping != mapping)) {
+			f2fs_folio_put(folio, true);
 			goto repeat;
 		}
-		if (unlikely(!PageUptodate(page))) {
-			f2fs_put_page(page, 1);
-			set_sbi_flag(F2FS_SB(sb), SBI_QUOTA_NEED_REPAIR);
-			return -EIO;
-		}
 
-		memcpy_from_page(data, page, offset, tocopy);
-		f2fs_put_page(page, 1);
+		/*
+		 * should never happen, just leave f2fs_bug_on() here to catch
+		 * any potential bug.
+		 */
+		f2fs_bug_on(F2FS_SB(sb), !folio_test_uptodate(folio));
 
-		offset = 0;
+		memcpy_from_folio(data, folio, offset, tocopy);
+		f2fs_folio_put(folio, true);
+
 		toread -= tocopy;
 		data += tocopy;
-		blkidx++;
+		off += tocopy;
 	}
 	return len;
 }
···
 	bio = bio_alloc(sbi->sb->s_bdev, 1, opf, GFP_NOFS);
 
 	/* it doesn't need to set crypto context for superblock update */
-	bio->bi_iter.bi_sector = SECTOR_FROM_BLOCK(folio_index(folio));
+	bio->bi_iter.bi_sector = SECTOR_FROM_BLOCK(folio->index);
 
 	if (!bio_add_folio(bio, folio, folio_size(folio), 0))
 		f2fs_bug_on(sbi, 1);
···
 		return -EFSCORRUPTED;
 	}
 	crc = le32_to_cpu(raw_super->crc);
-	if (!f2fs_crc_valid(sbi, crc, raw_super, crc_offset)) {
+	if (crc != f2fs_crc32(raw_super, crc_offset)) {
 		f2fs_info(sbi, "Invalid SB checksum value: %u", crc);
 		return -EFSCORRUPTED;
 	}
···
 	block_t user_block_count, valid_user_blocks;
 	block_t avail_node_count, valid_node_count;
 	unsigned int nat_blocks, nat_bits_bytes, nat_bits_blocks;
+	unsigned int sit_blk_cnt;
 	int i, j;
 
 	total = le32_to_cpu(raw_super->segment_count);
···
 	    nat_bitmap_size != ((nat_segs / 2) << log_blocks_per_seg) / 8) {
 		f2fs_err(sbi, "Wrong bitmap size: sit: %u, nat:%u",
 			 sit_bitmap_size, nat_bitmap_size);
+		return 1;
+	}
+
+	sit_blk_cnt = DIV_ROUND_UP(main_segs, SIT_ENTRY_PER_BLOCK);
+	if (sit_bitmap_size * 8 < sit_blk_cnt) {
+		f2fs_err(sbi, "Wrong bitmap size: sit: %u, sit_blk_cnt:%u",
+			 sit_bitmap_size, sit_blk_cnt);
 		return 1;
 	}
···
 	/* we should update superblock crc here */
 	if (!recover && f2fs_sb_has_sb_chksum(sbi)) {
-		crc = f2fs_crc32(sbi, F2FS_RAW_SUPER(sbi),
+		crc = f2fs_crc32(F2FS_RAW_SUPER(sbi),
 				offsetof(struct f2fs_super_block, crc));
 		F2FS_RAW_SUPER(sbi)->crc = cpu_to_le32(crc);
 	}
···
 		f2fs_record_stop_reason(sbi);
 }
 
-static inline unsigned int get_first_zoned_segno(struct f2fs_sb_info *sbi)
+static inline unsigned int get_first_seq_zone_segno(struct f2fs_sb_info *sbi)
 {
+#ifdef CONFIG_BLK_DEV_ZONED
+	unsigned int zoneno, total_zones;
 	int devi;
 
-	for (devi = 0; devi < sbi->s_ndevs; devi++)
-		if (bdev_is_zoned(FDEV(devi).bdev))
-			return GET_SEGNO(sbi, FDEV(devi).start_blk);
-	return 0;
+	if (!f2fs_sb_has_blkzoned(sbi))
+		return NULL_SEGNO;
+
+	for (devi = 0; devi < sbi->s_ndevs; devi++) {
+		if (!bdev_is_zoned(FDEV(devi).bdev))
+			continue;
+
+		total_zones = GET_ZONE_FROM_SEG(sbi, FDEV(devi).total_segments);
+
+		for (zoneno = 0; zoneno < total_zones; zoneno++) {
+			unsigned int segs, blks;
+
+			if (!f2fs_zone_is_seq(sbi, devi, zoneno))
+				continue;
+
+			segs = GET_SEG_FROM_SEC(sbi,
+					zoneno * sbi->secs_per_zone);
+			blks = SEGS_TO_BLKS(sbi, segs);
+			return GET_SEGNO(sbi, FDEV(devi).start_blk + blks);
+		}
+	}
+#endif
+	return NULL_SEGNO;
 }
 
 static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
···
 #endif
 
 	for (i = 0; i < max_devices; i++) {
+		if (max_devices == 1) {
+			FDEV(i).total_segments =
+				le32_to_cpu(raw_super->segment_count_main);
+			FDEV(i).start_blk = 0;
+			FDEV(i).end_blk = FDEV(i).total_segments *
+						BLKS_PER_SEG(sbi);
+		}
+
 		if (i == 0)
 			FDEV(0).bdev_file = sbi->sb->s_bdev_file;
 		else if (!RDEV(i).path[0])
···
 	/* precompute checksum seed for metadata */
 	if (f2fs_sb_has_inode_chksum(sbi))
-		sbi->s_chksum_seed = f2fs_chksum(sbi, ~0, raw_super->uuid,
-						sizeof(raw_super->uuid));
+		sbi->s_chksum_seed = f2fs_chksum(~0, raw_super->uuid,
+						sizeof(raw_super->uuid));
 
 	default_options(sbi, false);
 	/* parse mount options */
···
 	sbi->sectors_written_start = f2fs_get_sectors_written(sbi);
 
 	/* get segno of first zoned block device */
-	sbi->first_zoned_segno = get_first_zoned_segno(sbi);
+	sbi->first_seq_zone_segno = get_first_seq_zone_segno(sbi);
 
 	/* Read accumulated write IO statistics if exists */
 	seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);
fs/f2fs/sysfs.c  (+39 -2)
···
 		return sysfs_emit(buf, "(none)\n");
 }
 
+static ssize_t encoding_flags_show(struct f2fs_attr *a,
+		struct f2fs_sb_info *sbi, char *buf)
+{
+	return sysfs_emit(buf, "%x\n",
+		le16_to_cpu(F2FS_RAW_SUPER(sbi)->s_encoding_flags));
+}
+
 static ssize_t mounted_time_sec_show(struct f2fs_attr *a,
 		struct f2fs_sb_info *sbi, char *buf)
 {
···
 		return ret;
 #ifdef CONFIG_F2FS_FAULT_INJECTION
 	if (a->struct_type == FAULT_INFO_TYPE) {
-		if (f2fs_build_fault_attr(sbi, 0, t))
+		if (f2fs_build_fault_attr(sbi, 0, t, FAULT_TYPE))
 			return -EINVAL;
 		return count;
 	}
 	if (a->struct_type == FAULT_INFO_RATE) {
-		if (f2fs_build_fault_attr(sbi, t, 0))
+		if (f2fs_build_fault_attr(sbi, t, 0, FAULT_RATE))
 			return -EINVAL;
 		return count;
 	}
···
 F2FS_GENERAL_RO_ATTR(current_reserved_blocks);
 F2FS_GENERAL_RO_ATTR(unusable);
 F2FS_GENERAL_RO_ATTR(encoding);
+F2FS_GENERAL_RO_ATTR(encoding_flags);
 F2FS_GENERAL_RO_ATTR(mounted_time_sec);
 F2FS_GENERAL_RO_ATTR(main_blkaddr);
 F2FS_GENERAL_RO_ATTR(pending_discard);
···
 F2FS_FEATURE_RO_ATTR(compression);
 #endif
 F2FS_FEATURE_RO_ATTR(pin_file);
+#ifdef CONFIG_UNICODE
+F2FS_FEATURE_RO_ATTR(linear_lookup);
+#endif
 
 #define ATTR_LIST(name) (&f2fs_attr_##name.attr)
 static struct attribute *f2fs_attrs[] = {
···
 	ATTR_LIST(reserved_blocks),
 	ATTR_LIST(current_reserved_blocks),
 	ATTR_LIST(encoding),
+	ATTR_LIST(encoding_flags),
 	ATTR_LIST(mounted_time_sec),
 #ifdef CONFIG_F2FS_STAT_FS
 	ATTR_LIST(cp_foreground_calls),
···
 	BASE_ATTR_LIST(compression),
 #endif
 	BASE_ATTR_LIST(pin_file),
+#ifdef CONFIG_UNICODE
+	BASE_ATTR_LIST(linear_lookup),
+#endif
 	NULL,
 };
 ATTRIBUTE_GROUPS(f2fs_feat);
···
 	return 0;
 }
 
+#ifdef CONFIG_F2FS_FAULT_INJECTION
+static int __maybe_unused inject_stats_seq_show(struct seq_file *seq,
+						void *offset)
+{
+	struct super_block *sb = seq->private;
+	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+	struct f2fs_fault_info *ffi = &F2FS_OPTION(sbi).fault_info;
+	int i;
+
+	seq_puts(seq, "fault_type		injected_count\n");
+
+	for (i = 0; i < FAULT_MAX; i++)
+		seq_printf(seq, "%-24s%-10u\n", f2fs_fault_name[i],
+						ffi->inject_count[i]);
+	return 0;
+}
+#endif
+
 int __init f2fs_init_sysfs(void)
 {
 	int ret;
···
 				discard_plist_seq_show, sb);
 	proc_create_single_data("disk_map", 0444, sbi->s_proc,
 				disk_map_seq_show, sb);
+#ifdef CONFIG_F2FS_FAULT_INJECTION
+	proc_create_single_data("inject_stats", 0444, sbi->s_proc,
+				inject_stats_seq_show, sb);
+#endif
 	return 0;
 put_feature_list_kobj:
 	kobject_put(&sbi->s_feature_list_kobj);
fs/f2fs/xattr.c  (+58 -58)
···
 #ifdef CONFIG_F2FS_FS_SECURITY
 static int f2fs_initxattrs(struct inode *inode, const struct xattr *xattr_array,
-		void *page)
+		void *folio)
 {
 	const struct xattr *xattr;
 	int err = 0;
···
 	for (xattr = xattr_array; xattr->name != NULL; xattr++) {
 		err = f2fs_setxattr(inode, F2FS_XATTR_INDEX_SECURITY,
 				xattr->name, xattr->value,
-				xattr->value_len, (struct page *)page, 0);
+				xattr->value_len, folio, 0);
 		if (err < 0)
 			break;
 	}
···
 }
 
 int f2fs_init_security(struct inode *inode, struct inode *dir,
-				const struct qstr *qstr, struct page *ipage)
+				const struct qstr *qstr, struct folio *ifolio)
 {
 	return security_inode_init_security(inode, dir, qstr,
-				&f2fs_initxattrs, ipage);
+				f2fs_initxattrs, ifolio);
 }
 #endif
···
 	return entry;
 }
 
-static int read_inline_xattr(struct inode *inode, struct page *ipage,
+static int read_inline_xattr(struct inode *inode, struct folio *ifolio,
 							void *txattr_addr)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	unsigned int inline_size = inline_xattr_size(inode);
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	void *inline_addr;
 
-	if (ipage) {
-		inline_addr = inline_xattr_addr(inode, ipage);
+	if (ifolio) {
+		inline_addr = inline_xattr_addr(inode, ifolio);
 	} else {
-		page = f2fs_get_inode_page(sbi, inode->i_ino);
-		if (IS_ERR(page))
-			return PTR_ERR(page);
+		folio = f2fs_get_inode_folio(sbi, inode->i_ino);
+		if (IS_ERR(folio))
+			return PTR_ERR(folio);
 
-		inline_addr = inline_xattr_addr(inode, page);
+		inline_addr = inline_xattr_addr(inode, folio);
 	}
 	memcpy(txattr_addr, inline_addr, inline_size);
-	f2fs_put_page(page, 1);
+	f2fs_folio_put(folio, true);
 
 	return 0;
 }
···
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	nid_t xnid = F2FS_I(inode)->i_xattr_nid;
 	unsigned int inline_size = inline_xattr_size(inode);
-	struct page *xpage;
+	struct folio *xfolio;
 	void *xattr_addr;
 
 	/* The inode already has an extended attribute block. */
-	xpage = f2fs_get_xnode_page(sbi, xnid);
-	if (IS_ERR(xpage))
-		return PTR_ERR(xpage);
+	xfolio = f2fs_get_xnode_folio(sbi, xnid);
+	if (IS_ERR(xfolio))
+		return PTR_ERR(xfolio);
 
-	xattr_addr = page_address(xpage);
+	xattr_addr = folio_address(xfolio);
 	memcpy(txattr_addr + inline_size, xattr_addr, VALID_XATTR_BLOCK_SIZE);
-	f2fs_put_page(xpage, 1);
+	f2fs_folio_put(xfolio, true);
 
 	return 0;
 }
 
-static int lookup_all_xattrs(struct inode *inode, struct page *ipage,
+static int lookup_all_xattrs(struct inode *inode, struct folio *ifolio,
 				unsigned int index, unsigned int len,
 				const char *name, struct f2fs_xattr_entry **xe,
 				void **base_addr, int *base_size,
···
 	/* read from inline xattr */
 	if (inline_size) {
-		err = read_inline_xattr(inode, ipage, txattr_addr);
+		err = read_inline_xattr(inode, ifolio, txattr_addr);
 		if (err)
 			goto out;
···
 	return err;
 }
 
-static int read_all_xattrs(struct inode *inode, struct page *ipage,
+static int read_all_xattrs(struct inode *inode, struct folio *ifolio,
 				void **base_addr)
 {
 	struct f2fs_xattr_header *header;
···
 	/* read from inline xattr */
 	if (inline_size) {
-		err = read_inline_xattr(inode, ipage, txattr_addr);
+		err = read_inline_xattr(inode, ifolio, txattr_addr);
 		if (err)
 			goto fail;
 	}
···
 }
 
 static inline int write_all_xattrs(struct inode *inode, __u32 hsize,
-			void *txattr_addr, struct page *ipage)
+			void *txattr_addr, struct folio *ifolio)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	size_t inline_size = inline_xattr_size(inode);
-	struct page *in_page = NULL;
+	struct folio *in_folio = NULL;
 	void *xattr_addr;
 	void *inline_addr = NULL;
-	struct page *xpage;
+	struct folio *xfolio;
 	nid_t new_nid = 0;
 	int err = 0;
···
 	/* write to inline xattr */
 	if (inline_size) {
-		if (ipage) {
-			inline_addr = inline_xattr_addr(inode, ipage);
+		if (ifolio) {
+			inline_addr = inline_xattr_addr(inode, ifolio);
 		} else {
-			in_page = f2fs_get_inode_page(sbi, inode->i_ino);
-			if (IS_ERR(in_page)) {
+			in_folio = f2fs_get_inode_folio(sbi, inode->i_ino);
+			if (IS_ERR(in_folio)) {
 				f2fs_alloc_nid_failed(sbi, new_nid);
-				return PTR_ERR(in_page);
+				return PTR_ERR(in_folio);
 			}
-			inline_addr = inline_xattr_addr(inode, in_page);
+			inline_addr = inline_xattr_addr(inode, in_folio);
 		}
 
-		f2fs_wait_on_page_writeback(ipage ? ipage : in_page,
+		f2fs_folio_wait_writeback(ifolio ? ifolio : in_folio,
 						NODE, true, true);
 		/* no need to use xattr node block */
 		if (hsize <= inline_size) {
 			err = f2fs_truncate_xattr_node(inode);
 			f2fs_alloc_nid_failed(sbi, new_nid);
 			if (err) {
-				f2fs_put_page(in_page, 1);
+				f2fs_folio_put(in_folio, true);
 				return err;
 			}
 			memcpy(inline_addr, txattr_addr, inline_size);
-			set_page_dirty(ipage ? ipage : in_page);
+			folio_mark_dirty(ifolio ? ifolio : in_folio);
 			goto in_page_out;
 		}
 	}
 
 	/* write to xattr node block */
 	if (F2FS_I(inode)->i_xattr_nid) {
-		xpage = f2fs_get_xnode_page(sbi, F2FS_I(inode)->i_xattr_nid);
-		if (IS_ERR(xpage)) {
-			err = PTR_ERR(xpage);
+		xfolio = f2fs_get_xnode_folio(sbi, F2FS_I(inode)->i_xattr_nid);
+		if (IS_ERR(xfolio)) {
+			err = PTR_ERR(xfolio);
 			f2fs_alloc_nid_failed(sbi, new_nid);
 			goto in_page_out;
 		}
 		f2fs_bug_on(sbi, new_nid);
-		f2fs_wait_on_page_writeback(xpage, NODE, true, true);
+		f2fs_folio_wait_writeback(xfolio, NODE, true, true);
 	} else {
 		struct dnode_of_data dn;
 
 		set_new_dnode(&dn, inode, NULL, NULL, new_nid);
-		xpage = f2fs_new_node_page(&dn, XATTR_NODE_OFFSET);
-		if (IS_ERR(xpage)) {
-			err = PTR_ERR(xpage);
+		xfolio = f2fs_new_node_folio(&dn, XATTR_NODE_OFFSET);
+		if (IS_ERR(xfolio)) {
+			err = PTR_ERR(xfolio);
 			f2fs_alloc_nid_failed(sbi, new_nid);
 			goto in_page_out;
 		}
 		f2fs_alloc_nid_done(sbi, new_nid);
 	}
-	xattr_addr = page_address(xpage);
+	xattr_addr = folio_address(xfolio);
 
 	if (inline_size)
 		memcpy(inline_addr, txattr_addr, inline_size);
 	memcpy(xattr_addr, txattr_addr + inline_size, VALID_XATTR_BLOCK_SIZE);
 
 	if (inline_size)
-		set_page_dirty(ipage ? ipage : in_page);
-	set_page_dirty(xpage);
+		folio_mark_dirty(ifolio ?
ifolio : in_folio); 506 + folio_mark_dirty(xfolio); 507 507 508 - f2fs_put_page(xpage, 1); 508 + f2fs_folio_put(xfolio, true); 509 509 in_page_out: 510 - f2fs_put_page(in_page, 1); 510 + f2fs_folio_put(in_folio, true); 511 511 return err; 512 512 } 513 513 514 514 int f2fs_getxattr(struct inode *inode, int index, const char *name, 515 - void *buffer, size_t buffer_size, struct page *ipage) 515 + void *buffer, size_t buffer_size, struct folio *ifolio) 516 516 { 517 517 struct f2fs_xattr_entry *entry = NULL; 518 518 int error; ··· 528 528 if (len > F2FS_NAME_LEN) 529 529 return -ERANGE; 530 530 531 - if (!ipage) 531 + if (!ifolio) 532 532 f2fs_down_read(&F2FS_I(inode)->i_xattr_sem); 533 - error = lookup_all_xattrs(inode, ipage, index, len, name, 533 + error = lookup_all_xattrs(inode, ifolio, index, len, name, 534 534 &entry, &base_addr, &base_size, &is_inline); 535 - if (!ipage) 535 + if (!ifolio) 536 536 f2fs_up_read(&F2FS_I(inode)->i_xattr_sem); 537 537 if (error) 538 538 return error; ··· 627 627 628 628 static int __f2fs_setxattr(struct inode *inode, int index, 629 629 const char *name, const void *value, size_t size, 630 - struct page *ipage, int flags) 630 + struct folio *ifolio, int flags) 631 631 { 632 632 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 633 633 struct f2fs_xattr_entry *here, *last; ··· 651 651 if (size > MAX_VALUE_LEN(inode)) 652 652 return -E2BIG; 653 653 retry: 654 - error = read_all_xattrs(inode, ipage, &base_addr); 654 + error = read_all_xattrs(inode, ifolio, &base_addr); 655 655 if (error) 656 656 return error; 657 657 ··· 766 766 *(u32 *)((u8 *)last + newsize) = 0; 767 767 } 768 768 769 - error = write_all_xattrs(inode, new_hsize, base_addr, ipage); 769 + error = write_all_xattrs(inode, new_hsize, base_addr, ifolio); 770 770 if (error) 771 771 goto exit; 772 772 ··· 800 800 801 801 int f2fs_setxattr(struct inode *inode, int index, const char *name, 802 802 const void *value, size_t size, 803 - struct page *ipage, int flags) 803 + struct 
folio *ifolio, int flags) 804 804 { 805 805 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 806 806 int err; ··· 815 815 return err; 816 816 817 817 /* this case is only from f2fs_init_inode_metadata */ 818 - if (ipage) 818 + if (ifolio) 819 819 return __f2fs_setxattr(inode, index, name, value, 820 - size, ipage, flags); 820 + size, ifolio, flags); 821 821 f2fs_balance_fs(sbi, true); 822 822 823 823 f2fs_lock_op(sbi); 824 824 f2fs_down_write(&F2FS_I(inode)->i_xattr_sem); 825 - err = __f2fs_setxattr(inode, index, name, value, size, ipage, flags); 825 + err = __f2fs_setxattr(inode, index, name, value, size, NULL, flags); 826 826 f2fs_up_write(&F2FS_I(inode)->i_xattr_sem); 827 827 f2fs_unlock_op(sbi); 828 828
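Every folio lookup in these hunks (`f2fs_get_inode_folio()`, `f2fs_get_xnode_folio()`) follows the kernel's error-pointer convention: the return value is either a valid pointer or an errno encoded into the top of the pointer range, checked with `IS_ERR()` and decoded with `PTR_ERR()`. A minimal userspace sketch of that convention, with hypothetical names (`get_inode_folio` is a stand-in, not the real f2fs function):

```c
#include <assert.h>
#include <errno.h>

/* Userspace re-creation of the kernel's ERR_PTR convention: errno
 * values live in the top MAX_ERRNO bytes of the address space, so one
 * return slot carries either a valid pointer or an error code. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

struct folio { char data[16]; };

/* Hypothetical stand-in for f2fs_get_inode_folio(): fails for a
 * reserved ino, otherwise hands back a static "folio". */
static struct folio *get_inode_folio(unsigned long ino)
{
	static struct folio f;

	if (ino == 0)
		return ERR_PTR(-ENOENT);
	return &f;
}
```

Callers then branch exactly as `read_inline_xattr()` does: `if (IS_ERR(folio)) return PTR_ERR(folio);`.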
fs/f2fs/xattr.h  +12 -12

```diff
@@
 extern const struct xattr_handler * const f2fs_xattr_handlers[];
 
-extern int f2fs_setxattr(struct inode *, int, const char *,
-				const void *, size_t, struct page *, int);
-extern int f2fs_getxattr(struct inode *, int, const char *, void *,
-				size_t, struct page *);
-extern ssize_t f2fs_listxattr(struct dentry *, char *, size_t);
-extern int f2fs_init_xattr_caches(struct f2fs_sb_info *);
-extern void f2fs_destroy_xattr_caches(struct f2fs_sb_info *);
+int f2fs_setxattr(struct inode *, int, const char *, const void *,
+		size_t, struct folio *, int);
+int f2fs_getxattr(struct inode *, int, const char *, void *,
+		size_t, struct folio *);
+ssize_t f2fs_listxattr(struct dentry *, char *, size_t);
+int f2fs_init_xattr_caches(struct f2fs_sb_info *);
+void f2fs_destroy_xattr_caches(struct f2fs_sb_info *);
 #else
 
 #define f2fs_xattr_handlers	NULL
 #define f2fs_listxattr		NULL
 static inline int f2fs_setxattr(struct inode *inode, int index,
 		const char *name, const void *value, size_t size,
-		struct page *page, int flags)
+		struct folio *folio, int flags)
 {
 	return -EOPNOTSUPP;
 }
 static inline int f2fs_getxattr(struct inode *inode, int index,
 			const char *name, void *buffer,
-			size_t buffer_size, struct page *dpage)
+			size_t buffer_size, struct folio *dfolio)
 {
 	return -EOPNOTSUPP;
 }
@@
 #endif
 
 #ifdef CONFIG_F2FS_FS_SECURITY
-extern int f2fs_init_security(struct inode *, struct inode *,
-				const struct qstr *, struct page *);
+int f2fs_init_security(struct inode *, struct inode *,
+		const struct qstr *, struct folio *);
 #else
 static inline int f2fs_init_security(struct inode *inode, struct inode *dir,
-			const struct qstr *qstr, struct page *ipage)
+			const struct qstr *qstr, struct folio *ifolio)
 {
 	return 0;
 }
```
include/linux/f2fs_fs.h  +1

```diff
@@
 	STOP_CP_REASON_UPDATE_INODE,
 	STOP_CP_REASON_FLUSH_FAIL,
 	STOP_CP_REASON_NO_SEGMENT,
+	STOP_CP_REASON_CORRUPTED_FREE_BITMAP,
 	STOP_CP_REASON_MAX,
 };
```
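The new reason is inserted just before `STOP_CP_REASON_MAX`, so the sentinel keeps counting the valid entries and anything sized by it grows automatically. A tiny sketch of that sentinel pattern, with hypothetical names standing in for the real enum:

```c
#include <assert.h>
#include <string.h>

/* Sentinel pattern from the hunk above: appending a new entry before
 * the *_MAX sentinel keeps the sentinel equal to the number of valid
 * entries, so lookup tables indexed by the enum stay in sync. */
enum stop_reason {
	REASON_SHUTDOWN,
	REASON_FLUSH_FAIL,
	REASON_CORRUPTED_BITMAP,	/* newly appended entry */
	REASON_MAX,			/* always last: count of reasons */
};

static const char *const reason_name[REASON_MAX] = {
	[REASON_SHUTDOWN]	 = "shutdown",
	[REASON_FLUSH_FAIL]	 = "flush_fail",
	[REASON_CORRUPTED_BITMAP] = "corrupted_bitmap",
};
```

Because the sentinel stays last, on-disk or ABI-visible values of the existing reasons are unchanged by the addition.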
include/linux/highmem.h  +27

```diff
@@
 	kunmap_local(dst);
 }
 
+static inline void memcpy_folio(struct folio *dst_folio, size_t dst_off,
+		struct folio *src_folio, size_t src_off, size_t len)
+{
+	VM_BUG_ON(dst_off + len > folio_size(dst_folio));
+	VM_BUG_ON(src_off + len > folio_size(src_folio));
+
+	do {
+		char *dst = kmap_local_folio(dst_folio, dst_off);
+		const char *src = kmap_local_folio(src_folio, src_off);
+		size_t chunk = len;
+
+		if (folio_test_highmem(dst_folio) &&
+		    chunk > PAGE_SIZE - offset_in_page(dst_off))
+			chunk = PAGE_SIZE - offset_in_page(dst_off);
+		if (folio_test_highmem(src_folio) &&
+		    chunk > PAGE_SIZE - offset_in_page(src_off))
+			chunk = PAGE_SIZE - offset_in_page(src_off);
+		memcpy(dst, src, chunk);
+		kunmap_local(src);
+		kunmap_local(dst);
+
+		dst_off += chunk;
+		src_off += chunk;
+		len -= chunk;
+	} while (len > 0);
+}
+
 static inline void memset_page(struct page *page, size_t offset, int val,
 			       size_t len)
 {
```
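The new `memcpy_folio()` clamps each copy so a single `memcpy()` never crosses a page boundary of a highmem folio, because `kmap_local_folio()` only maps one page at a time; both offsets advance in lockstep until `len` is exhausted. A userspace model of that chunking loop (page size shrunk to 8 bytes for testing, flat buffers standing in for mapped folios; names hypothetical):

```c
#include <string.h>
#include <stddef.h>

#define FAKE_PAGE_SIZE 8	/* stand-in for PAGE_SIZE */

/* Model of the memcpy_folio() loop above: clamp each chunk to the
 * distance to the next (fake) page boundary on either the source or
 * destination side, copy it, then advance both offsets in lockstep. */
static void chunked_copy(char *dst, size_t dst_off,
			 const char *src, size_t src_off, size_t len)
{
	do {
		size_t chunk = len;

		if (chunk > FAKE_PAGE_SIZE - dst_off % FAKE_PAGE_SIZE)
			chunk = FAKE_PAGE_SIZE - dst_off % FAKE_PAGE_SIZE;
		if (chunk > FAKE_PAGE_SIZE - src_off % FAKE_PAGE_SIZE)
			chunk = FAKE_PAGE_SIZE - src_off % FAKE_PAGE_SIZE;
		memcpy(dst + dst_off, src + src_off, chunk);

		dst_off += chunk;
		src_off += chunk;
		len -= chunk;
	} while (len > 0);
}
```

The result is byte-for-byte identical to one large `memcpy()`; the chunking only bounds how much is copied per mapping.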
include/trace/events/f2fs.h  +1 -4

```diff
@@
 		__field(char,	for_kupdate)
 		__field(char,	for_background)
 		__field(char,	tagged_writepages)
-		__field(char,	for_reclaim)
 		__field(char,	range_cyclic)
 		__field(char,	for_sync)
 	),
@@
 		__entry->for_kupdate	= wbc->for_kupdate;
 		__entry->for_background	= wbc->for_background;
 		__entry->tagged_writepages	= wbc->tagged_writepages;
-		__entry->for_reclaim	= wbc->for_reclaim;
 		__entry->range_cyclic	= wbc->range_cyclic;
 		__entry->for_sync	= wbc->for_sync;
 	),
 
 	TP_printk("dev = (%d,%d), ino = %lu, %s, %s, nr_to_write %ld, "
 		"skipped %ld, start %lld, end %lld, wb_idx %lu, sync_mode %d, "
-		"kupdate %u background %u tagged %u reclaim %u cyclic %u sync %u",
+		"kupdate %u background %u tagged %u cyclic %u sync %u",
 		show_dev_ino(__entry),
 		show_block_type(__entry->type),
 		show_file_type(__entry->dir),
@@
 		__entry->for_kupdate,
 		__entry->for_background,
 		__entry->tagged_writepages,
-		__entry->for_reclaim,
 		__entry->range_cyclic,
 		__entry->for_sync)
 );
```