Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/khugepaged: change collapse_pte_mapped_thp() to return void

The only external caller of collapse_pte_mapped_thp() is uprobe, which
ignores the return value. Change the external API to return void to
simplify the interface.

Introduce try_collapse_pte_mapped_thp() for internal use, which preserves
the return value. This prepares for a future patch that will convert the
return type to use enum scan_result.

Link: https://lkml.kernel.org/r/20260118192253.9263-10-shivankg@amd.com
Signed-off-by: Shivank Garg <shivankg@amd.com>
Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
Acked-by: Lance Yang <lance.yang@linux.dev>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Tested-by: Nico Pache <npache@redhat.com>
Reviewed-by: Nico Pache <npache@redhat.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Shivank Garg; committed by Andrew Morton
3ab981c1 7832e4d5

2 files changed, +27 -22

include/linux/khugepaged.h  +4 -5
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -17,8 +17,8 @@
 					vm_flags_t vm_flags);
 extern void khugepaged_min_free_kbytes_update(void);
 extern bool current_is_khugepaged(void);
-extern int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
-				   bool install_pmd);
+void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
+			     bool install_pmd);
 
 static inline void khugepaged_fork(struct mm_struct *mm, struct mm_struct *oldmm)
 {
@@ -42,10 +42,9 @@
 					vm_flags_t vm_flags)
 {
 }
-static inline int collapse_pte_mapped_thp(struct mm_struct *mm,
-					  unsigned long addr, bool install_pmd)
+static inline void collapse_pte_mapped_thp(struct mm_struct *mm,
+					   unsigned long addr, bool install_pmd)
 {
-	return 0;
 }
 
 static inline void khugepaged_min_free_kbytes_update(void)
mm/khugepaged.c  +23 -17
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1477,20 +1477,8 @@
 	return SCAN_SUCCEED;
 }
 
-/**
- * collapse_pte_mapped_thp - Try to collapse a pte-mapped THP for mm at
- *			     address haddr.
- *
- * @mm: process address space where collapse happens
- * @addr: THP collapse address
- * @install_pmd: If a huge PMD should be installed
- *
- * This function checks whether all the PTEs in the PMD are pointing to the
- * right THP. If so, retract the page table so the THP can refault in with
- * as pmd-mapped. Possibly install a huge PMD mapping the THP.
- */
-int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
-			    bool install_pmd)
+static int try_collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
+				       bool install_pmd)
 {
 	int nr_mapped_ptes = 0, result = SCAN_FAIL;
 	unsigned int nr_batch_ptes;
@@ -1697,6 +1709,24 @@
 	folio_unlock(folio);
 	folio_put(folio);
 	return result;
+}
+
+/**
+ * collapse_pte_mapped_thp - Try to collapse a pte-mapped THP for mm at
+ *			     address haddr.
+ *
+ * @mm: process address space where collapse happens
+ * @addr: THP collapse address
+ * @install_pmd: If a huge PMD should be installed
+ *
+ * This function checks whether all the PTEs in the PMD are pointing to the
+ * right THP. If so, retract the page table so the THP can refault in with
+ * as pmd-mapped. Possibly install a huge PMD mapping the THP.
+ */
+void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
+			     bool install_pmd)
+{
+	try_collapse_pte_mapped_thp(mm, addr, install_pmd);
 }
 
 /* Can we retract page tables for this file-backed VMA? */
@@ -2233,7 +2227,7 @@
 
 	/*
 	 * Remove pte page tables, so we can re-fault the page as huge.
-	 * If MADV_COLLAPSE, adjust result to call collapse_pte_mapped_thp().
+	 * If MADV_COLLAPSE, adjust result to call try_collapse_pte_mapped_thp().
 	 */
 	retract_page_tables(mapping, start);
 	if (cc && !cc->is_khugepaged)
@@ -2485,7 +2479,7 @@
 	mmap_read_lock(mm);
 	if (hpage_collapse_test_exit_or_disable(mm))
 		goto breakouterloop;
-	*result = collapse_pte_mapped_thp(mm,
+	*result = try_collapse_pte_mapped_thp(mm,
 			khugepaged_scan.address, false);
 	if (*result == SCAN_PMD_MAPPED)
 		*result = SCAN_SUCCEED;
@@ -2850,7 +2844,7 @@
 	case SCAN_PTE_MAPPED_HUGEPAGE:
 		BUG_ON(mmap_locked);
 		mmap_read_lock(mm);
-		result = collapse_pte_mapped_thp(mm, addr, true);
+		result = try_collapse_pte_mapped_thp(mm, addr, true);
 		mmap_read_unlock(mm);
 		goto handle_result;
 	/* Whitelisted set of results where continuing OK */