Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ceph: remove error return from ceph_process_folio_batch()

Following an earlier commit, ceph_process_folio_batch() no longer
returns errors because the writeback loop cannot handle them.

Since this function already indicates failure to lock any pages by
leaving `ceph_wbc.locked_pages == 0`, and the writeback loop has no way
to handle abandonment of a locked batch, change the return type of
ceph_process_folio_batch() to `void` and remove the pathological goto in
the writeback loop. The lack of a return code emphasizes that
ceph_process_folio_batch() is designed to be abort-free: that is, once
it commits a folio for writeback, it will not later abandon it or
propagate an error for that folio. Any future changes requiring "abort"
logic should follow this invariant by cleaning up the ceph_wbc page
array and resetting ceph_wbc.locked_pages appropriately.

Signed-off-by: Sam Edwards <CFSworks@gmail.com>
Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>

Authored by Sam Edwards, committed by Ilya Dryomov

fa589aca cac190c7

+5 -12
fs/ceph/addr.c
```diff
@@ -1284,16 +1284,16 @@
 }
 
 static
-int ceph_process_folio_batch(struct address_space *mapping,
-			     struct writeback_control *wbc,
-			     struct ceph_writeback_ctl *ceph_wbc)
+void ceph_process_folio_batch(struct address_space *mapping,
+			      struct writeback_control *wbc,
+			      struct ceph_writeback_ctl *ceph_wbc)
 {
 	struct inode *inode = mapping->host;
 	struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
 	struct ceph_client *cl = fsc->client;
 	struct folio *folio = NULL;
 	unsigned i;
-	int rc = 0;
+	int rc;
 
 	for (i = 0; can_next_page_be_processed(ceph_wbc, i); i++) {
 		folio = ceph_wbc->fbatch.folios[i];
@@ -1323,12 +1323,10 @@
 		rc = ceph_check_page_before_write(mapping, wbc,
 						  ceph_wbc, folio);
 		if (rc == -ENODATA) {
-			rc = 0;
 			folio_unlock(folio);
 			ceph_wbc->fbatch.folios[i] = NULL;
 			continue;
 		} else if (rc == -E2BIG) {
-			rc = 0;
 			folio_unlock(folio);
 			ceph_wbc->fbatch.folios[i] = NULL;
 			break;
@@ -1368,7 +1370,6 @@
 		rc = move_dirty_folio_in_page_array(mapping, wbc, ceph_wbc,
 						    folio);
 		if (rc) {
-			rc = 0;
 			folio_redirty_for_writepage(wbc, folio);
 			folio_unlock(folio);
 			break;
@@ -1378,8 +1381,6 @@
 	}
 
 	ceph_wbc->processed_in_fbatch = i;
-
-	return rc;
 }
 
 static inline
@@ -1681,10 +1686,8 @@
 			break;
 
 process_folio_batch:
-		rc = ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
+		ceph_process_folio_batch(mapping, wbc, &ceph_wbc);
 		ceph_shift_unused_folios_left(&ceph_wbc.fbatch);
-		if (rc)
-			goto release_folios;
 
 		/* did we get anything? */
 		if (!ceph_wbc.locked_pages)
```