Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/vma: convert vma_modify_flags[_uffd]() to use vma_flags_t

Update the vma_modify_flags() and vma_modify_flags_uffd() functions to
accept a vma_flags_t parameter rather than a vm_flags_t one, and propagate
this change to all callers as needed.

Also add vma_flags_reset_once() to replace vm_flags_reset_once(). We still
need to be careful here to avoid tearing, so maintain the assumption that
only the first system word of flags requires protection from tearing, and
retain this behaviour.

The remainder of the VMA flags, above 64 bits, can be copied normally. But
hopefully, by the time such flags exist, we will have replaced the logic
that requires these WRITE_ONCE()s with something else.

We also replace instances of vm_flags_reset() with a simple write of the
VMA flags. We no longer perform a number of checks, most notably the VMA
flag asserts, because:

1. We might be operating on a VMA that is not yet added to the tree.

2. We might be operating on a VMA that is now detached.

3. Really in all but core code, you should be using vma_desc_xxx().

4. Other VMA fields are manipulated with no such checks.

5. It'd be egregious to have to add variants of the flag functions just to
account for cases such as the above, especially when we don't do so for
other VMA fields. Drivers were the problematic cases, and the reason these
checks were especially important (as well as for debugging when VMA locks
were introduced); the mmap_prepare work is solving this generally.

Additionally, we can fairly safely assume that by this point the soft dirty
flags are being set correctly, so it's reasonable to drop this check as
well.

Finally, update the VMA tests to reflect this.

Link: https://lkml.kernel.org/r/51afbb2b8c3681003cc7926647e37335d793836e.1774034900.git.ljs@kernel.org
Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Acked-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Alexandre Ghiti <alex@ghiti.fr>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Cc: "Borislav Petkov (AMD)" <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chengming Zhou <chengming.zhou@linux.dev>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Dinh Nguyen <dinguyen@kernel.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Kees Cook <kees@kernel.org>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Moore <paul@paul-moore.com>
Cc: Pedro Falcato <pfalcato@suse.de>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stephen Smalley <stephen.smalley.work@gmail.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: Will Deacon <will@kernel.org>
Cc: xu xin <xu.xin16@zte.com.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Lorenzo Stoakes (Oracle), committed by Andrew Morton
a06eb2f8 e2963f63

+93 -74
+10 -12
include/linux/mm.h
···
 	vm_flags_init(vma, flags);
 }
 
-static inline void vm_flags_reset_once(struct vm_area_struct *vma,
-				       vm_flags_t flags)
+static inline void vma_flags_reset_once(struct vm_area_struct *vma,
+					vma_flags_t *flags)
 {
-	vma_assert_write_locked(vma);
-	/*
-	 * If VMA flags exist beyond the first system word, also clear these. It
-	 * is assumed the write once behaviour is required only for the first
-	 * system word.
-	 */
+	const unsigned long word = flags->__vma_flags[0];
+
+	/* It is assumed only the first system word must be written once. */
+	vma_flags_overwrite_word_once(&vma->flags, word);
+	/* The remainder can be copied normally. */
 	if (NUM_VMA_FLAG_BITS > BITS_PER_LONG) {
-		unsigned long *bitmap = vma->flags.__vma_flags;
+		unsigned long *dst = &vma->flags.__vma_flags[1];
+		const unsigned long *src = &flags->__vma_flags[1];
 
-		bitmap_zero(&bitmap[1], NUM_VMA_FLAG_BITS - BITS_PER_LONG);
+		bitmap_copy(dst, src, NUM_VMA_FLAG_BITS - BITS_PER_LONG);
 	}
-
-	vma_flags_overwrite_word_once(&vma->flags, flags);
 }
 
 static inline void vm_flags_set(struct vm_area_struct *vma,
+3
include/linux/userfaultfd_k.h
···
 /* The set of all possible UFFD-related VM flags. */
 #define __VM_UFFD_FLAGS (VM_UFFD_MISSING | VM_UFFD_WP | VM_UFFD_MINOR)
 
+#define __VMA_UFFD_FLAGS mk_vma_flags(VMA_UFFD_MISSING_BIT, VMA_UFFD_WP_BIT, \
+				      VMA_UFFD_MINOR_BIT)
+
 /*
  * CAREFUL: Check include/uapi/asm-generic/fcntl.h when defining
  * new flags, since they might collide with O_* ones. We want
+6 -4
mm/madvise.c
···
 		struct madvise_behavior *madv_behavior)
 {
 	struct vm_area_struct *vma = madv_behavior->vma;
+	vma_flags_t new_vma_flags = legacy_to_vma_flags(new_flags);
 	struct madvise_behavior_range *range = &madv_behavior->range;
 	struct anon_vma_name *anon_name = madv_behavior->anon_name;
 	bool set_new_anon_name = madv_behavior->behavior == __MADV_SET_ANON_VMA_NAME;
 	VMA_ITERATOR(vmi, madv_behavior->mm, range->start);
 
-	if (new_flags == vma->vm_flags && (!set_new_anon_name ||
-			anon_vma_name_eq(anon_vma_name(vma), anon_name)))
+	if (vma_flags_same_mask(&vma->flags, new_vma_flags) &&
+	    (!set_new_anon_name ||
+	     anon_vma_name_eq(anon_vma_name(vma), anon_name)))
 		return 0;
 
 	if (set_new_anon_name)
···
 				range->start, range->end, anon_name);
 	else
 		vma = vma_modify_flags(&vmi, madv_behavior->prev, vma,
-				range->start, range->end, &new_flags);
+				range->start, range->end, &new_vma_flags);
 
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
···
 	/* vm_flags is protected by the mmap_lock held in write mode. */
 	vma_start_write(vma);
-	vm_flags_reset(vma, new_flags);
+	vma->flags = new_vma_flags;
 	if (set_new_anon_name)
 		return replace_anon_vma_name(vma, anon_name);
 
+21 -17
mm/mlock.c
···
  * @vma - vma containing range to be mlock()ed or munlock()ed
  * @start - start address in @vma of the range
  * @end - end of range in @vma
- * @newflags - the new set of flags for @vma.
+ * @new_vma_flags - the new set of flags for @vma.
  *
  * Called for mlock(), mlock2() and mlockall(), to set @vma VM_LOCKED;
  * called for munlock() and munlockall(), to clear VM_LOCKED from @vma.
  */
 static void mlock_vma_pages_range(struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, vm_flags_t newflags)
+		unsigned long start, unsigned long end,
+		vma_flags_t *new_vma_flags)
 {
 	static const struct mm_walk_ops mlock_walk_ops = {
 		.pmd_entry = mlock_pte_range,
···
 	 * combination should not be visible to other mmap_lock users;
 	 * but WRITE_ONCE so rmap walkers must see VM_IO if VM_LOCKED.
 	 */
-	if (newflags & VM_LOCKED)
-		newflags |= VM_IO;
+	if (vma_flags_test(new_vma_flags, VMA_LOCKED_BIT))
+		vma_flags_set(new_vma_flags, VMA_IO_BIT);
 	vma_start_write(vma);
-	vm_flags_reset_once(vma, newflags);
+	vma_flags_reset_once(vma, new_vma_flags);
 
 	lru_add_drain();
 	walk_page_range(vma->vm_mm, start, end, &mlock_walk_ops, NULL);
 	lru_add_drain();
 
-	if (newflags & VM_IO) {
-		newflags &= ~VM_IO;
-		vm_flags_reset_once(vma, newflags);
+	if (vma_flags_test(new_vma_flags, VMA_IO_BIT)) {
+		vma_flags_clear(new_vma_flags, VMA_IO_BIT);
+		vma_flags_reset_once(vma, new_vma_flags);
 	}
 }
···
 		struct vm_area_struct **prev, unsigned long start,
 		unsigned long end, vm_flags_t newflags)
 {
+	vma_flags_t new_vma_flags = legacy_to_vma_flags(newflags);
+	const vma_flags_t old_vma_flags = vma->flags;
 	struct mm_struct *mm = vma->vm_mm;
 	int nr_pages;
 	int ret = 0;
-	vm_flags_t oldflags = vma->vm_flags;
 
-	if (newflags == oldflags || vma_is_secretmem(vma) ||
-	    !vma_supports_mlock(vma))
+	if (vma_flags_same_pair(&old_vma_flags, &new_vma_flags) ||
+	    vma_is_secretmem(vma) || !vma_supports_mlock(vma)) {
 		/*
 		 * Don't set VM_LOCKED or VM_LOCKONFAULT and don't count.
 		 * For secretmem, don't allow the memory to be unlocked.
 		 */
 		goto out;
+	}
 
-	vma = vma_modify_flags(vmi, *prev, vma, start, end, &newflags);
+	vma = vma_modify_flags(vmi, *prev, vma, start, end, &new_vma_flags);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 		goto out;
···
 	 * Keep track of amount of locked VM.
 	 */
 	nr_pages = (end - start) >> PAGE_SHIFT;
-	if (!(newflags & VM_LOCKED))
+	if (!vma_flags_test(&new_vma_flags, VMA_LOCKED_BIT))
 		nr_pages = -nr_pages;
-	else if (oldflags & VM_LOCKED)
+	else if (vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT))
 		nr_pages = 0;
 	mm->locked_vm += nr_pages;
···
 	 * It's okay if try_to_unmap_one unmaps a page just after we
 	 * set VM_LOCKED, populate_vma_page_range will bring it back.
 	 */
-	if ((newflags & VM_LOCKED) && (oldflags & VM_LOCKED)) {
+	if (vma_flags_test(&new_vma_flags, VMA_LOCKED_BIT) &&
+	    vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT)) {
 		/* No work to do, and mlocking twice would be wrong */
 		vma_start_write(vma);
-		vm_flags_reset(vma, newflags);
+		vma->flags = new_vma_flags;
 	} else {
-		mlock_vma_pages_range(vma, start, end, newflags);
+		mlock_vma_pages_range(vma, start, end, &new_vma_flags);
 	}
 out:
 	*prev = vma;
+3 -4
mm/mprotect.c
···
 		vma_flags_clear(&new_vma_flags, VMA_ACCOUNT_BIT);
 	}
 
-	newflags = vma_flags_to_legacy(new_vma_flags);
-	vma = vma_modify_flags(vmi, *pprev, vma, start, end, &newflags);
+	vma = vma_modify_flags(vmi, *pprev, vma, start, end, &new_vma_flags);
 	if (IS_ERR(vma)) {
 		error = PTR_ERR(vma);
 		goto fail;
 	}
-	new_vma_flags = legacy_to_vma_flags(newflags);
 
 	*pprev = vma;
···
 	 * held in write mode.
 	 */
 	vma_start_write(vma);
-	vm_flags_reset_once(vma, newflags);
+	vma_flags_reset_once(vma, &new_vma_flags);
 	if (vma_wants_manual_pte_write_upgrade(vma))
 		mm_cp_flags |= MM_CP_TRY_CHANGE_WRITABLE;
 	vma_set_page_prot(vma);
···
 	}
 
 	vm_stat_account(mm, vma_flags_to_legacy(old_vma_flags), -nrpages);
+	newflags = vma_flags_to_legacy(new_vma_flags);
 	vm_stat_account(mm, newflags, nrpages);
 	perf_event_mmap(vma);
 	return 0;
+7 -4
mm/mseal.c
···
 		const unsigned long curr_start = MAX(vma->vm_start, start);
 		const unsigned long curr_end = MIN(vma->vm_end, end);
 
-		if (!(vma->vm_flags & VM_SEALED)) {
-			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
+		if (!vma_test(vma, VMA_SEALED_BIT)) {
+			vma_flags_t vma_flags = vma->flags;
+
+			vma_flags_set(&vma_flags, VMA_SEALED_BIT);
 
 			vma = vma_modify_flags(&vmi, prev, vma, curr_start,
-					       curr_end, &vm_flags);
+					       curr_end, &vma_flags);
 			if (IS_ERR(vma))
 				return PTR_ERR(vma);
-			vm_flags_set(vma, VM_SEALED);
+			vma_start_write(vma);
+			vma_set_flags(vma, VMA_SEALED_BIT);
 		}
 
 		prev = vma;
+14 -7
mm/userfaultfd.c
···
 {
 	struct vm_area_struct *ret;
 	bool give_up_on_oom = false;
+	vma_flags_t new_vma_flags = vma->flags;
+
+	vma_flags_clear_mask(&new_vma_flags, __VMA_UFFD_FLAGS);
 
 	/*
 	 * If we are modifying only and not splitting, just give up on the merge
···
 		uffd_wp_range(vma, start, end - start, false);
 
 	ret = vma_modify_flags_uffd(vmi, prev, vma, start, end,
-				    vma->vm_flags & ~__VM_UFFD_FLAGS,
-				    NULL_VM_UFFD_CTX, give_up_on_oom);
+				    &new_vma_flags, NULL_VM_UFFD_CTX,
+				    give_up_on_oom);
 
 	/*
 	 * In the vma_merge() successful mprotect-like case 8:
···
 		unsigned long start, unsigned long end,
 		bool wp_async)
 {
+	vma_flags_t vma_flags = legacy_to_vma_flags(vm_flags);
 	VMA_ITERATOR(vmi, ctx->mm, start);
 	struct vm_area_struct *prev = vma_prev(&vmi);
 	unsigned long vma_end;
-	vm_flags_t new_flags;
+	vma_flags_t new_vma_flags;
 
 	if (vma->vm_start < start)
 		prev = vma;
···
 		VM_WARN_ON_ONCE(!vma_can_userfault(vma, vm_flags, wp_async));
 		VM_WARN_ON_ONCE(vma->vm_userfaultfd_ctx.ctx &&
 				vma->vm_userfaultfd_ctx.ctx != ctx);
-		VM_WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE));
+		VM_WARN_ON_ONCE(!vma_test(vma, VMA_MAYWRITE_BIT));
 
 		/*
 		 * Nothing to do: this vma is already registered into this
 		 * userfaultfd and with the right tracking mode too.
 		 */
 		if (vma->vm_userfaultfd_ctx.ctx == ctx &&
-		    (vma->vm_flags & vm_flags) == vm_flags)
+		    vma_test_all_mask(vma, vma_flags))
 			goto skip;
 
 		if (vma->vm_start > start)
 			start = vma->vm_start;
 		vma_end = min(end, vma->vm_end);
 
-		new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags;
+		new_vma_flags = vma->flags;
+		vma_flags_clear_mask(&new_vma_flags, __VMA_UFFD_FLAGS);
+		vma_flags_set_mask(&new_vma_flags, vma_flags);
+
 		vma = vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end,
-					    new_flags,
+					    &new_vma_flags,
 					    (struct vm_userfaultfd_ctx){ctx},
 					    /* give_up_on_oom = */false);
 		if (IS_ERR(vma))
+8 -7
mm/vma.c
···
 struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi,
 		struct vm_area_struct *prev, struct vm_area_struct *vma,
 		unsigned long start, unsigned long end,
-		vm_flags_t *vm_flags_ptr)
+		vma_flags_t *vma_flags_ptr)
 {
 	VMG_VMA_STATE(vmg, vmi, prev, vma, start, end);
-	const vm_flags_t vm_flags = *vm_flags_ptr;
+	const vma_flags_t vma_flags = *vma_flags_ptr;
 	struct vm_area_struct *ret;
 
-	vmg.vm_flags = vm_flags;
+	vmg.vma_flags = vma_flags;
 
 	ret = vma_modify(&vmg);
 	if (IS_ERR(ret))
···
 	 * them to the caller.
 	 */
 	if (vmg.state == VMA_MERGE_SUCCESS)
-		*vm_flags_ptr = ret->vm_flags;
+		*vma_flags_ptr = ret->flags;
 	return ret;
 }
···
 struct vm_area_struct *vma_modify_flags_uffd(struct vma_iterator *vmi,
 		struct vm_area_struct *prev, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, vm_flags_t vm_flags,
-		struct vm_userfaultfd_ctx new_ctx, bool give_up_on_oom)
+		unsigned long start, unsigned long end,
+		const vma_flags_t *vma_flags, struct vm_userfaultfd_ctx new_ctx,
+		bool give_up_on_oom)
 {
 	VMG_VMA_STATE(vmg, vmi, prev, vma, start, end);
 
-	vmg.vm_flags = vm_flags;
+	vmg.vma_flags = *vma_flags;
 	vmg.uffd_ctx = new_ctx;
 	if (give_up_on_oom)
 		vmg.give_up_on_oom = true;
+7 -8
mm/vma.h
···
  * @vma: The VMA containing the range @start to @end to be updated.
  * @start: The start of the range to update. May be offset within @vma.
  * @end: The exclusive end of the range to update, may be offset within @vma.
- * @vm_flags_ptr: A pointer to the VMA flags that the @start to @end range is
+ * @vma_flags_ptr: A pointer to the VMA flags that the @start to @end range is
  * about to be set to. On merge, this will be updated to include sticky flags.
  *
  * IMPORTANT: The actual modification being requested here is NOT applied,
  * rather the VMA is perhaps split, perhaps merged to accommodate the change,
  * and the caller is expected to perform the actual modification.
  *
- * In order to account for sticky VMA flags, the @vm_flags_ptr parameter points
+ * In order to account for sticky VMA flags, the @vma_flags_ptr parameter points
  * to the requested flags which are then updated so the caller, should they
  * overwrite any existing flags, correctly retains these.
  *
  * Returns: A VMA which contains the range @start to @end ready to have its
- * flags altered to *@vm_flags.
+ * flags altered to *@vma_flags.
  */
 __must_check struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi,
 		struct vm_area_struct *prev, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end,
-		vm_flags_t *vm_flags_ptr);
+		unsigned long start, unsigned long end, vma_flags_t *vma_flags_ptr);
 
 /**
  * vma_modify_name() - Perform any necessary split/merge in preparation for
···
  * @vma: The VMA containing the range @start to @end to be updated.
  * @start: The start of the range to update. May be offset within @vma.
  * @end: The exclusive end of the range to update, may be offset within @vma.
- * @vm_flags: The VMA flags that the @start to @end range is about to be set to.
+ * @vma_flags: The VMA flags that the @start to @end range is about to be set to.
  * @new_ctx: The userfaultfd context that the @start to @end range is about to
  * be set to.
  * @give_up_on_oom: If an out of memory condition occurs on merge, simply give
···
  * and the caller is expected to perform the actual modification.
  *
  * Returns: A VMA which contains the range @start to @end ready to have its VMA
- * flags changed to @vm_flags and its userfaultfd context changed to @new_ctx.
+ * flags changed to @vma_flags and its userfaultfd context changed to @new_ctx.
  */
 __must_check struct vm_area_struct *vma_modify_flags_uffd(struct vma_iterator *vmi,
 		struct vm_area_struct *prev, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, vm_flags_t vm_flags,
+		unsigned long start, unsigned long end, const vma_flags_t *vma_flags,
 		struct vm_userfaultfd_ctx new_ctx, bool give_up_on_oom);
 
 __must_check struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg);
+13 -9
tools/testing/vma/include/dup.h
···
 	vm_flags_init(vma, flags);
 }
 
-static inline void vm_flags_reset_once(struct vm_area_struct *vma,
-				       vm_flags_t flags)
+static inline void vma_flags_reset_once(struct vm_area_struct *vma,
+					vma_flags_t *flags)
 {
-	vma_assert_write_locked(vma);
-	/*
-	 * The user should only be interested in avoiding reordering of
-	 * assignment to the first word.
-	 */
-	vma_flags_clear_all(&vma->flags);
-	vma_flags_overwrite_word_once(&vma->flags, flags);
+	const unsigned long word = flags->__vma_flags[0];
+
+	/* It is assumed only the first system word must be written once. */
+	vma_flags_overwrite_word_once(&vma->flags, word);
+	/* The remainder can be copied normally. */
+	if (NUM_VMA_FLAG_BITS > BITS_PER_LONG) {
+		unsigned long *dst = &vma->flags.__vma_flags[1];
+		const unsigned long *src = &flags->__vma_flags[1];
+
+		bitmap_copy(dst, src, NUM_VMA_FLAG_BITS - BITS_PER_LONG);
+	}
 }
 
 static inline void vm_flags_set(struct vm_area_struct *vma,
+1 -2
tools/testing/vma/tests/merge.c
···
 	struct vm_area_struct *vma;
 	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT,
 					     VMA_MAYWRITE_BIT);
-	vm_flags_t legacy_flags = VM_READ | VM_WRITE;
 	struct mm_struct mm = {};
 	struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, vma_flags);
 	VMA_ITERATOR(vmi, &mm, 0x1000);
···
 	 * performs the merge/split only.
 	 */
 	vma = vma_modify_flags(&vmi, init_vma, init_vma,
-			       0x1000, 0x2000, &legacy_flags);
+			       0x1000, 0x2000, &vma_flags);
 	ASSERT_NE(vma, NULL);
 	/* We modify the provided VMA, and on split allocate new VMAs. */
 	ASSERT_EQ(vma, init_vma);