Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm/vma: introduce helper struct + thread through exclusive lock fns

It is confusing to have __vma_start_exclude_readers() return 0, 1 or an
error (the error arising only when waiting for readers in TASK_KILLABLE
state), and storing the return value in a stack variable called 'locked'
only adds to the confusion.

More generally, we are doing a lot of rather finicky things while
acquiring the state in which readers are excluded and while moving out of
that state, including tracking whether the VMA is detached and whether an
error occurred.

We are implementing logic in __vma_start_exclude_readers() that
effectively acts as 'if one caller invokes us, do X; if another, do Y',
which is very confusing from a control flow perspective.

Introducing a shared helper state object avoids this, as we can now
handle the 'an error arose but we're detached' condition correctly in
both callers - warning if we are not detaching, and treating the
situation as if no error arose when a VMA is detaching.
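As an illustration, here is a minimal userspace sketch of the pattern (not the kernel code - struct exclude_readers_state and its plain-int refcnt are hypothetical stand-ins for vma_exclude_readers_state and vm_refcnt): threading shared state through the start/end pair lets each caller interpret the 'error arose but we're detached' outcome for itself.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the kernel's vma_exclude_readers_state:
 * inputs at the top, outputs below, so both callers read the same facts. */
struct exclude_readers_state {
	/* Input parameters. */
	int refcnt;		/* stands in for vma->vm_refcnt */
	bool detaching;

	/* Output parameters. */
	bool detached;
	bool exclusive;		/* are readers now excluded? */
};

/* Begin excluding readers; wait_err simulates a fatal signal during the
 * wait for readers (0 = no error). Returns 0 or the negative wait error. */
static int start_exclude_readers(struct exclude_readers_state *s, int wait_err)
{
	if (s->refcnt == 0) {
		s->detached = true;	/* already detached: nothing to do */
		return 0;
	}
	if (wait_err) {
		/* Wait failed: drop our hold; the drop itself may detach us. */
		if (--s->refcnt == 0)
			s->detached = true;
		return wait_err;
	}
	s->exclusive = true;		/* readers excluded */
	return 0;
}

/* End the exclusive state, recording whether the final drop detached us. */
static void end_exclude_readers(struct exclude_readers_state *s)
{
	if (--s->refcnt == 0)
		s->detached = true;
	s->exclusive = false;
}
```

With the outcome recorded in the state object, a detaching caller can treat error-but-detached as success while a write-locking caller can warn on detachment, with no return-value overloading.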

This also helps document what's going on and allows us to add some more
logical debug asserts.

Also update vma_mark_detached() to add a guard clause for the likely
'already detached' state (given we hold the mmap write lock), and add a
comment about ephemeral VMA read lock reference count increments to
clarify why we are entering/exiting an exclusive locked state here.
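The ephemeral read lock reference count increment mentioned above can be sketched as follows (a userspace illustration with hypothetical names; the seqcount comparison is simplified to a plain int check):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for vm_refcnt and vm_lock_seq. */
struct fake_vma {
	int refcnt;
	int lock_seq;	/* equals mm_seq when the VMA is write-locked */
};

/* Reader attempt, mirroring the pattern the comment describes: bump the
 * refcount, then check the lock sequence; if the VMA turns out to be
 * write-locked, drop the reference again - a brief, ephemeral bump. */
static bool try_read_lock(struct fake_vma *vma, int mm_seq)
{
	vma->refcnt++;			/* ephemeral increment */
	if (vma->lock_seq == mm_seq) {	/* write-locked: back off */
		vma->refcnt--;
		return false;
	}
	return true;			/* read lock held */
}
```

This is why a detaching thread (which holds the write lock) may still briefly observe a raised reference count and must enter the exclusive state to wait it out.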

Finally, separate vma_mark_detached() into its fast-path component and
make it inline, then place the slow path for excluding readers in
mmap_lock.c.
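The fast-path/slow-path split can be sketched in userspace C (names are hypothetical; a plain int stands in for the vm_refcnt refcount, and the wait for transient readers is elided):

```c
#include <assert.h>

static int slow_calls;	/* counts slow-path entries, for illustration only */

/* Out-of-line slow path: in the real code this waits out transient
 * readers before the count reaches zero (the waiting is elided here). */
static void mark_detached_slowpath(int *refcnt)
{
	slow_calls++;
	*refcnt = 0;
}

/* Inline fast path: dropping our own reference usually hits zero, because
 * the VMA is write-locked and readers only hold the count briefly. */
static inline void mark_detached(int *refcnt)
{
	if (--*refcnt == 0)	/* likely: no transient readers */
		return;
	mark_detached_slowpath(refcnt);
}
```

Keeping the common case inline while moving the rare waiting logic out of line keeps the hot path small at call sites.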

No functional change intended.

[akpm@linux-foundation.org: fix function naming in comments, add comment per Vlastimil per Lorenzo]
Link: https://lkml.kernel.org/r/7d3084d596c84da10dd374130a5055deba6439c0.1769198904.git.lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Waiman Long <longman@redhat.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

authored by Lorenzo Stoakes, committed by Andrew Morton
e28e575a 28f590f3

+112 -77
include/linux/mm_types.h (+7 -7)

···
  * decrementing it again.
  *
  * VM_REFCNT_EXCLUDE_READERS_FLAG - Detached, pending
- * __vma_exit_locked() completion which will decrement the reference
- * count to zero. IMPORTANT - at this stage no further readers can
- * increment the reference count. It can only be reduced.
+ * __vma_end_exclude_readers() completion which will decrement the
+ * reference count to zero. IMPORTANT - at this stage no further readers
+ * can increment the reference count. It can only be reduced.
  *
  * VM_REFCNT_EXCLUDE_READERS_FLAG + 1 - A thread is either write-locking
- * an attached VMA and has yet to invoke __vma_exit_locked(), OR a
- * thread is detaching a VMA and is waiting on a single spurious reader
- * in order to decrement the reference count. IMPORTANT - as above, no
- * further readers can increment the reference count.
+ * an attached VMA and has yet to invoke __vma_end_exclude_readers(),
+ * OR a thread is detaching a VMA and is waiting on a single spurious
+ * reader in order to decrement the reference count. IMPORTANT - as
+ * above, no further readers can increment the reference count.
  *
  * > VM_REFCNT_EXCLUDE_READERS_FLAG + 1 - A thread is either
  * write-locking or detaching a VMA is waiting on readers to
include/linux/mmap_lock.h (+22 -1)

···
 	refcount_set_release(&vma->vm_refcnt, 1);
 }
 
-void vma_mark_detached(struct vm_area_struct *vma);
+void __vma_exclude_readers_for_detach(struct vm_area_struct *vma);
+
+static inline void vma_mark_detached(struct vm_area_struct *vma)
+{
+	vma_assert_write_locked(vma);
+	vma_assert_attached(vma);
+
+	/*
+	 * The VMA still being attached (refcnt > 0) is unlikely, because the
+	 * vma has been already write-locked and readers can increment vm_refcnt
+	 * only temporarily before they check vm_lock_seq, realize the vma is
+	 * locked and drop back the vm_refcnt. That is a narrow window for
+	 * observing a raised vm_refcnt.
+	 *
+	 * See the comment describing the vm_area_struct->vm_refcnt field for
+	 * details of possible refcnt values.
+	 */
+	if (likely(!__vma_refcount_put_return(vma)))
+		return;
+
+	__vma_exclude_readers_for_detach(vma);
+}
 
 struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address);
mm/mmap_lock.c (+83 -69)

···
 #ifdef CONFIG_MMU
 #ifdef CONFIG_PER_VMA_LOCK
 
+/* State shared across __vma_[start, end]_exclude_readers. */
+struct vma_exclude_readers_state {
+	/* Input parameters. */
+	struct vm_area_struct *vma;
+	int state;	/* TASK_KILLABLE or TASK_UNINTERRUPTIBLE. */
+	bool detaching;
+
+	/* Output parameters. */
+	bool detached;
+	bool exclusive;	/* Are we exclusively locked? */
+};
+
 /*
  * Now that all readers have been evicted, mark the VMA as being out of the
  * 'exclude readers' state.
- *
- * Returns true if the VMA is now detached, otherwise false.
  */
-static bool __must_check __vma_end_exclude_readers(struct vm_area_struct *vma)
+static void __vma_end_exclude_readers(struct vma_exclude_readers_state *ves)
 {
-	bool detached;
+	struct vm_area_struct *vma = ves->vma;
 
-	detached = refcount_sub_and_test(VM_REFCNT_EXCLUDE_READERS_FLAG,
-					 &vma->vm_refcnt);
+	VM_WARN_ON_ONCE(ves->detached);
+
+	ves->detached = refcount_sub_and_test(VM_REFCNT_EXCLUDE_READERS_FLAG,
+					      &vma->vm_refcnt);
 	__vma_lockdep_release_exclusive(vma);
-	return detached;
+}
+
+static unsigned int get_target_refcnt(struct vma_exclude_readers_state *ves)
+{
+	const unsigned int tgt = ves->detaching ? 0 : 1;
+
+	return tgt | VM_REFCNT_EXCLUDE_READERS_FLAG;
 }
···
  * Note that this function pairs with vma_refcount_put() which will wake up this
  * thread when it detects that the last reader has released its lock.
  *
- * The state parameter ought to be set to TASK_UNINTERRUPTIBLE in cases where we
- * wish the thread to sleep uninterruptibly or TASK_KILLABLE if a fatal signal
- * is permitted to kill it.
+ * The ves->state parameter ought to be set to TASK_UNINTERRUPTIBLE in cases
+ * where we wish the thread to sleep uninterruptibly or TASK_KILLABLE if a fatal
+ * signal is permitted to kill it.
  *
- * The function will return 0 immediately if the VMA is detached, or wait for
- * readers and return 1 once they have all exited, leaving the VMA exclusively
- * locked.
+ * The function sets the ves->exclusive parameter to true if readers were
+ * excluded, or false if the VMA was detached or an error arose on wait.
  *
- * If the function returns 1, the caller is required to invoke
- * __vma_end_exclude_readers() once the exclusive state is no longer required.
+ * If the function indicates an exclusive lock was acquired via ves->exclusive
+ * the caller is required to invoke __vma_end_exclude_readers() once the
+ * exclusive state is no longer required.
  *
- * If state is set to something other than TASK_UNINTERRUPTIBLE, the function
- * may also return -EINTR to indicate a fatal signal was received while waiting.
+ * If ves->state is set to something other than TASK_UNINTERRUPTIBLE, the
+ * function may also return -EINTR to indicate a fatal signal was received while
+ * waiting. Otherwise, the function returns 0.
  */
-static int __vma_start_exclude_readers(struct vm_area_struct *vma,
-				       bool detaching, int state)
+static int __vma_start_exclude_readers(struct vma_exclude_readers_state *ves)
 {
-	int err;
-	unsigned int tgt_refcnt = VM_REFCNT_EXCLUDE_READERS_FLAG;
+	struct vm_area_struct *vma = ves->vma;
+	unsigned int tgt_refcnt = get_target_refcnt(ves);
+	int err = 0;
 
 	mmap_assert_write_locked(vma->vm_mm);
-
-	/* Additional refcnt if the vma is attached. */
-	if (!detaching)
-		tgt_refcnt++;
 
 	/*
 	 * If vma is detached then only vma_mark_attached() can raise the
···
 	 * See the comment describing the vm_area_struct->vm_refcnt field for
 	 * details of possible refcnt values.
 	 */
-	if (!refcount_add_not_zero(VM_REFCNT_EXCLUDE_READERS_FLAG, &vma->vm_refcnt))
+	if (!refcount_add_not_zero(VM_REFCNT_EXCLUDE_READERS_FLAG, &vma->vm_refcnt)) {
+		ves->detached = true;
 		return 0;
+	}
 
 	__vma_lockdep_acquire_exclusive(vma);
 	err = rcuwait_wait_event(&vma->vm_mm->vma_writer_wait,
 				 refcount_read(&vma->vm_refcnt) == tgt_refcnt,
-				 state);
+				 ves->state);
 	if (err) {
-		if (__vma_end_exclude_readers(vma)) {
-			/*
-			 * The wait failed, but the last reader went away
-			 * as well. Tell the caller the VMA is detached.
-			 */
-			WARN_ON_ONCE(!detaching);
-			err = 0;
-		}
+		__vma_end_exclude_readers(ves);
 		return err;
 	}
-	__vma_lockdep_stat_mark_acquired(vma);
 
-	return 1;
+	__vma_lockdep_stat_mark_acquired(vma);
+	ves->exclusive = true;
+	return 0;
 }
 
 int __vma_start_write(struct vm_area_struct *vma, unsigned int mm_lock_seq,
 		      int state)
 {
-	int locked;
+	int err;
+	struct vma_exclude_readers_state ves = {
+		.vma = vma,
+		.state = state,
+	};
 
-	locked = __vma_start_exclude_readers(vma, false, state);
-	if (locked < 0)
-		return locked;
+	err = __vma_start_exclude_readers(&ves);
+	if (err) {
+		WARN_ON_ONCE(ves.detached);
+		return err;
+	}
 
 	/*
 	 * We should use WRITE_ONCE() here because we can have concurrent reads
···
 	 */
 	WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
 
-	if (locked) {
-		bool detached = __vma_end_exclude_readers(vma);
-
-		/* The VMA should remain attached. */
-		WARN_ON_ONCE(detached);
+	if (ves.exclusive) {
+		__vma_end_exclude_readers(&ves);
+		/* VMA should remain attached. */
+		WARN_ON_ONCE(ves.detached);
 	}
 
 	return 0;
 }
 EXPORT_SYMBOL_GPL(__vma_start_write);
 
-void vma_mark_detached(struct vm_area_struct *vma)
+void __vma_exclude_readers_for_detach(struct vm_area_struct *vma)
 {
-	vma_assert_write_locked(vma);
-	vma_assert_attached(vma);
+	struct vma_exclude_readers_state ves = {
+		.vma = vma,
+		.state = TASK_UNINTERRUPTIBLE,
+		.detaching = true,
+	};
+	int err;
 
 	/*
-	 * This condition - that the VMA is still attached (refcnt > 0) - is
-	 * unlikely, because the vma has been already write-locked and readers
-	 * can increment vm_refcnt only temporarily before they check
-	 * vm_lock_seq, realize the vma is locked and drop back the
-	 * vm_refcnt. That is a narrow window for observing a raised vm_refcnt.
-	 *
-	 * See the comment describing the vm_area_struct->vm_refcnt field for
-	 * details of possible refcnt values.
+	 * Wait until the VMA is detached with no readers. Since we hold the VMA
+	 * write lock, the only read locks that might be present are those from
+	 * threads trying to acquire the read lock and incrementing the
+	 * reference count before realising the write lock is held and
+	 * decrementing it.
 	 */
-	if (unlikely(__vma_refcount_put_return(vma))) {
-		/* Wait until vma is detached with no readers. */
-		if (__vma_start_exclude_readers(vma, true, TASK_UNINTERRUPTIBLE)) {
-			bool detached;
-
-			/*
-			 * Once this is complete, no readers can increment the
-			 * reference count, and the VMA is marked detached.
-			 */
-			detached = __vma_end_exclude_readers(vma);
-			WARN_ON_ONCE(!detached);
-		}
+	err = __vma_start_exclude_readers(&ves);
+	if (!err && ves.exclusive) {
+		/*
+		 * Once this is complete, no readers can increment the
+		 * reference count, and the VMA is marked detached.
+		 */
+		__vma_end_exclude_readers(&ves);
 	}
+	/* If an error arose but we were detached anyway, we don't care. */
+	WARN_ON_ONCE(!ves.detached);
 }
 
 /*