Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: prevent droppable mappings from being locked

Droppable mappings must not be lockable. mlock_fixup() checks for VMAs with
VM_DROPPABLE set, alongside its checks for other types of unlockable VMAs,
which ensures this for the mlock()/mlock2() paths.

For mlockall(MCL_FUTURE), the check for unlockable VMAs is different. In
apply_mlockall_flags(), if the flags parameter has MCL_FUTURE set, VM_LOCKED
is applied to the current task's default VMA flag field, mm->def_flags
(VM_LOCKONFAULT is applied as well if MCL_ONFAULT is set). When these flags
are set as defaults in this manner, they are cleared in __mmap_complete()
for new mappings that do not support mlock. A check for VM_DROPPABLE is
missing from __mmap_complete(), resulting in droppable mappings being
created with VM_LOCKED set. To fix this, and to reduce the chance of
similar bugs in the future, introduce and use vma_supports_mlock().

Link: https://lkml.kernel.org/r/20260310155821.17869-1-anthony.yznaga@oracle.com
Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
Suggested-by: David Hildenbrand <david@kernel.org>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Reviewed-by: Pedro Falcato <pfalcato@suse.de>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Tested-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Cc: Jann Horn <jannh@google.com>
Cc: Jason A. Donenfeld <jason@zx2c4.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Anthony Yznaga, committed by Andrew Morton
d2394627 301f3922

5 files changed: +23 -8
include/linux/hugetlb_inline.h (+1 -1)

@@ -30,7 +30,7 @@
 
 #endif
 
-static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
+static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
 {
 	return is_vm_hugetlb_flags(vma->vm_flags);
 }
mm/internal.h (+10)

@@ -1243,6 +1243,16 @@
 	}
 	return fpin;
 }
+
+static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
+		return false;
+	if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
+		return false;
+	return vma != get_gate_vma(current->mm);
+}
+
 #else /* !CONFIG_MMU */
 static inline void unmap_mapping_folio(struct folio *folio) { }
 static inline void mlock_new_folio(struct folio *folio) { }
mm/mlock.c (+6 -4)

@@ -472,10 +472,12 @@
 	int ret = 0;
 	vm_flags_t oldflags = vma->vm_flags;
 
-	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
-	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
-	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
-		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
+	if (newflags == oldflags || vma_is_secretmem(vma) ||
+	    !vma_supports_mlock(vma))
+		/*
+		 * Don't set VM_LOCKED or VM_LOCKONFAULT and don't count.
+		 * For secretmem, don't allow the memory to be unlocked.
+		 */
 		goto out;
 
 	vma = vma_modify_flags(vmi, *prev, vma, start, end, &newflags);
mm/vma.c (+1 -3)

@@ -2589,9 +2589,7 @@
 
 	vm_stat_account(mm, vma->vm_flags, map->pglen);
 	if (vm_flags & VM_LOCKED) {
-		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
-		    is_vm_hugetlb_page(vma) ||
-		    vma == get_gate_vma(mm))
+		if (!vma_supports_mlock(vma))
 			vm_flags_clear(vma, VM_LOCKED_MASK);
 		else
 			mm->locked_vm += map->pglen;
tools/testing/vma/include/stubs.h (+5)

@@ -426,3 +426,8 @@
 }
 
 static inline void hugetlb_split(struct vm_area_struct *, unsigned long) {}
+
+static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
+{
+	return false;
+}