Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: make ref_unless functions unless_zero only

There are no users of (folio/page)_ref_add_unless(page, nr, u) with u != 0
[1] and all current users are "internal" for page refcounting API. This
allows us to safely drop this parameter and reduce function semantics to
the "unless zero" cases only.

If needed, these functions for the u != 0 cases can be trivially
reintroduced later using the same atomic_add_unless operations as before.

[1]: The last user was dropped in v5.18 kernel, commit 27674ef6c73f ("mm:
remove the extra ZONE_DEVICE struct page refcount"). There is no trace of
discussion as to why this cleanup wasn't done earlier.

Link: https://lkml.kernel.org/r/a0c89b49d38c671a0bdd35069d15ee13e08314d2.1772370066.git.gladyshev.ilya1@h-partners.com
Co-developed-by: Gorbunov Ivan <gorbunov.ivan@h-partners.com>
Signed-off-by: Gorbunov Ivan <gorbunov.ivan@h-partners.com>
Signed-off-by: Gladyshev Ilya <gladyshev.ilya1@h-partners.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Acked-by: Kiryl Shutsemau <kas@kernel.org>
Acked-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Gladyshev Ilya, committed by Andrew Morton
28266ac9 e9c01915

+7 -7

include/linux/mm.h (+1 -1)

@@ -1506,7 +1506,7 @@
  */
 static inline bool get_page_unless_zero(struct page *page)
 {
-	return page_ref_add_unless(page, 1, 0);
+	return page_ref_add_unless_zero(page, 1);
 }
 
 static inline struct folio *folio_get_nontail_page(struct page *page)
include/linux/page_ref.h (+6 -6)

@@ -228,16 +228,16 @@
 	return page_ref_dec_return(&folio->page);
 }
 
-static inline bool page_ref_add_unless(struct page *page, int nr, int u)
+static inline bool page_ref_add_unless_zero(struct page *page, int nr)
 {
-	bool ret = atomic_add_unless(&page->_refcount, nr, u);
+	bool ret = atomic_add_unless(&page->_refcount, nr, 0);
 
 	if (page_ref_tracepoint_active(page_ref_mod_unless))
 		__page_ref_mod_unless(page, nr, ret);
 	return ret;
 }
 
-static inline bool folio_ref_add_unless(struct folio *folio, int nr, int u)
+static inline bool folio_ref_add_unless_zero(struct folio *folio, int nr)
 {
-	return page_ref_add_unless(&folio->page, nr, u);
+	return page_ref_add_unless_zero(&folio->page, nr);
 }
 
@@ -255,12 +255,12 @@
  */
 static inline bool folio_try_get(struct folio *folio)
 {
-	return folio_ref_add_unless(folio, 1, 0);
+	return folio_ref_add_unless_zero(folio, 1);
 }
 
 static inline bool folio_ref_try_add(struct folio *folio, int count)
 {
-	return folio_ref_add_unless(folio, count, 0);
+	return folio_ref_add_unless_zero(folio, count);
 }
 
 static inline int page_ref_freeze(struct page *page, int count)