Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: uninline the main body of vma_start_write()

vma_start_write() is used in many places and will grow in size very soon.
It is not used in performance-critical paths, and uninlining it should
limit future code size growth. No functional changes.

Link: https://lkml.kernel.org/r/20250213224655.1680278-10-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Tested-by: Shivank Garg <shivankg@amd.com>
Link: https://lkml.kernel.org/r/5e19ec93-8307-47c2-bb13-3ddf7150624e@amd.com
Cc: Christian Brauner <brauner@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Klara Modin <klarasmodin@gmail.com>
Cc: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Lokesh Gidra <lokeshgidra@google.com>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Sourav Panda <souravpanda@google.com>
Cc: Wei Yang <richard.weiyang@gmail.com>
Cc: Will Deacon <will@kernel.org>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Suren Baghdasaryan, committed by Andrew Morton (45ad9f52 ce085396)

Total: +17 -9

include/linux/mm.h (+3 -9)
···
 	return (vma->vm_lock_seq == *mm_lock_seq);
 }
 
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq);
+
 /*
  * Begin writing to a VMA.
  * Exclude concurrent readers under the per-VMA lock until the currently
···
 	if (__is_vma_write_locked(vma, &mm_lock_seq))
 		return;
 
-	down_write(&vma->vm_lock.lock);
-	/*
-	 * We should use WRITE_ONCE() here because we can have concurrent reads
-	 * from the early lockless pessimistic check in vma_start_read().
-	 * We don't really care about the correctness of that early check, but
-	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
-	 */
-	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
-	up_write(&vma->vm_lock.lock);
+	__vma_start_write(vma, mm_lock_seq);
 }
 
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
mm/memory.c (+14)
···
 #endif
 
 #ifdef CONFIG_PER_VMA_LOCK
+void __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq)
+{
+	down_write(&vma->vm_lock.lock);
+	/*
+	 * We should use WRITE_ONCE() here because we can have concurrent reads
+	 * from the early lockless pessimistic check in vma_start_read().
+	 * We don't really care about the correctness of that early check, but
+	 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
+	 */
+	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
+	up_write(&vma->vm_lock.lock);
+}
+EXPORT_SYMBOL_GPL(__vma_start_write);
+
 /*
  * Lookup and lock a VMA under RCU protection. Returned VMA is guaranteed to be
  * stable and not isolated. If the VMA is not found or is being modified the