Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/userfaultfd: fix hugetlb fault mutex hash calculation

In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
expects the index in huge page units. This mismatch means that different
addresses within the same huge page can produce different hash values,
leading to the use of different mutexes for the same huge page. This can
cause races between faulting threads, which can corrupt the reservation
map and trigger the BUG_ON in resv_map_release().
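
To see why the granularity mismatch matters, here is a minimal stand-alone sketch (not kernel code). It assumes 4 KiB base pages (PAGE_SHIFT 12), 2 MiB huge pages (HPAGE_SHIFT 21), a hypothetical mapping start address, and vm_pgoff of zero for simplicity: two faulting addresses inside the same huge page get different PAGE_SIZE-granularity indices, but the same index once shifted by the huge page size.

#include <stdio.h>

#define PAGE_SHIFT	12	/* assumed 4 KiB base page */
#define HPAGE_SHIFT	21	/* assumed 2 MiB huge page */

int main(void)
{
	unsigned long vm_start = 0x7f0000000000UL;	/* hypothetical VMA start */
	unsigned long a = vm_start + 0x1000;		/* two faulting addresses ... */
	unsigned long b = vm_start + 0x5000;		/* ... within the same 2 MiB page */

	/* PAGE_SIZE units (linear_page_index()-style): indices differ */
	unsigned long idx_a = (a - vm_start) >> PAGE_SHIFT;
	unsigned long idx_b = (b - vm_start) >> PAGE_SHIFT;

	/* huge page units: both addresses collapse to the same index */
	unsigned long hidx_a = (a - vm_start) >> HPAGE_SHIFT;
	unsigned long hidx_b = (b - vm_start) >> HPAGE_SHIFT;

	printf("PAGE_SIZE units: %lu vs %lu\n", idx_a, idx_b);	/* 1 vs 5 */
	printf("huge page units: %lu vs %lu\n", hidx_a, hidx_b);	/* 0 vs 0 */
	return 0;
}

Since hugetlb_fault_mutex_hash() mixes the index into the hash, the differing PAGE_SIZE-granularity indices can select different mutexes for faults on the same huge page.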

Fix this by introducing hugetlb_linear_page_index(), which returns the
page index in huge page granularity, and using it in place of
linear_page_index().

Link: https://lkml.kernel.org/r/20260310110526.335749-1-jianhuizzzzz@gmail.com
Fixes: a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c")
Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
Acked-by: SeongJae Park <sj@kernel.org>
Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Cc: Jane Chu <jane.chu@oracle.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: JonasZhou <JonasZhou@zhaoxin.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Peter Xu <peterx@redhat.com>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Jianhui Zhou, committed by Andrew Morton
0217c7fb 8c6a765f

+18 -1

include/linux/hugetlb.h (+17)
@@ -792,6 +792,23 @@
 	return h->order + PAGE_SHIFT;
 }
 
+/**
+ * hugetlb_linear_page_index() - linear_page_index() but in hugetlb
+ * page size granularity.
+ * @vma: the hugetlb VMA
+ * @address: the virtual address within the VMA
+ *
+ * Return: the page offset within the mapping in huge page units.
+ */
+static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
+						unsigned long address)
+{
+	struct hstate *h = hstate_vma(vma);
+
+	return ((address - vma->vm_start) >> huge_page_shift(h)) +
+		(vma->vm_pgoff >> huge_page_order(h));
+}
+
 static inline bool order_is_gigantic(unsigned int order)
 {
 	return order > MAX_PAGE_ORDER;

mm/userfaultfd.c (+1 -1)
@@ -573,7 +573,7 @@
 	 * in the case of shared pmds. fault mutex prevents
 	 * races with other faulting threads.
 	 */
-	idx = linear_page_index(dst_vma, dst_addr);
+	idx = hugetlb_linear_page_index(dst_vma, dst_addr);
 	mapping = dst_vma->vm_file->f_mapping;
 	hash = hugetlb_fault_mutex_hash(mapping, idx);
 	mutex_lock(&hugetlb_fault_mutex_table[hash]);