Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/memremap: add driver callback support for folio splitting

When a zone device page is split (via a huge pmd folio split), the
folio_split driver callback is invoked to let the device driver know that
the folio has been split into a smaller order.

For drivers that do not provide this callback, provide a default
implementation that copies the pgmap and mapping fields to the split
folios.

Update the HMM test driver to handle the split.

Link: https://lkml.kernel.org/r/20251001065707.920170-11-balbirs@nvidia.com
Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Nico Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Lyude Paul <lyude@redhat.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: David Airlie <airlied@gmail.com>
Cc: Simona Vetter <simona@ffwll.ch>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: Mika Penttilä <mpenttil@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Francois Dugast <francois.dugast@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Balbir Singh and committed by Andrew Morton (56ef3989 775465fd)

64 additions total

include/linux/memremap.h (+29)
···
 	 */
 	int (*memory_failure)(struct dev_pagemap *pgmap, unsigned long pfn,
 			unsigned long nr_pages, int mf_flags);
+
+	/*
+	 * Used for private (un-addressable) device memory only.
+	 * This callback is used when a folio is split into
+	 * a smaller folio
+	 */
+	void (*folio_split)(struct folio *head, struct folio *tail);
 };
 
 #define PGMAP_ALTMAP_VALID	(1 << 0)
···
 	folio_set_large_rmappable(folio);
 }
 
+static inline void zone_device_private_split_cb(struct folio *original_folio,
+						struct folio *new_folio)
+{
+	if (folio_is_device_private(original_folio)) {
+		if (!original_folio->pgmap->ops->folio_split) {
+			if (new_folio) {
+				new_folio->pgmap = original_folio->pgmap;
+				new_folio->page.mapping =
+					original_folio->page.mapping;
+			}
+		} else {
+			original_folio->pgmap->ops->folio_split(original_folio,
+								new_folio);
+		}
+	}
+}
+
 #else
 static inline void *devm_memremap_pages(struct device *dev,
 		struct dev_pagemap *pgmap)
···
 static inline unsigned long memremap_compat_align(void)
 {
 	return PAGE_SIZE;
+}
+
+static inline void zone_device_private_split_cb(struct folio *original_folio,
+						struct folio *new_folio)
+{
 }
 #endif /* CONFIG_ZONE_DEVICE */
lib/test_hmm.c (+35)
···
 	return ret;
 }
 
+static void dmirror_devmem_folio_split(struct folio *head, struct folio *tail)
+{
+	struct page *rpage = BACKING_PAGE(folio_page(head, 0));
+	struct page *rpage_tail;
+	struct folio *rfolio;
+	unsigned long offset = 0;
+
+	if (!rpage) {
+		tail->page.zone_device_data = NULL;
+		return;
+	}
+
+	rfolio = page_folio(rpage);
+
+	if (tail == NULL) {
+		folio_reset_order(rfolio);
+		rfolio->mapping = NULL;
+		folio_set_count(rfolio, 1);
+		return;
+	}
+
+	offset = folio_pfn(tail) - folio_pfn(head);
+
+	rpage_tail = folio_page(rfolio, offset);
+	tail->page.zone_device_data = rpage_tail;
+	rpage_tail->zone_device_data = rpage->zone_device_data;
+	clear_compound_head(rpage_tail);
+	rpage_tail->mapping = NULL;
+
+	folio_page(tail, 0)->mapping = folio_page(head, 0)->mapping;
+	tail->pgmap = head->pgmap;
+	folio_set_count(page_folio(rpage_tail), 1);
+}
+
 static const struct dev_pagemap_ops dmirror_devmem_ops = {
 	.folio_free = dmirror_devmem_free,
 	.migrate_to_ram = dmirror_devmem_fault,
+	.folio_split = dmirror_devmem_folio_split,
 };
 
 static int dmirror_device_init(struct dmirror_device *mdevice, int id)