Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

sched/mmcid: Optimize transitional CIDs when scheduling out

During the investigation of the various transition mode issues,
instrumentation revealed that the number of bitmap operations can be
significantly reduced when a task with a transitional CID schedules out
after the fixup function has completed and disabled the transition mode.

At that point the mode is stable, and therefore it is not required to drop
the transitional CID back into the pool. As the fixup is complete, the
potential exhaustion of the CID pool is no longer possible, so the CID can
be transferred to the scheduling-out task or to the CPU, depending on the
current ownership mode.

The racy snapshot of mm_cid::mode, which contains both the ownership state
and the transition bit, is valid because the runqueue lock is held and the
fixup function of a concurrent mode switch is serialized.
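The helpers referenced here can be illustrated with a hypothetical encoding in which the transition bit is folded into the value itself, so both the per-task CID and the mm-wide mode word can be tested the same way (the actual layout in kernel/sched/sched.h may differ):

```c
#include <stdbool.h>

/* Hypothetical: bit 31 marks a transitional value. This is an
 * illustration only, not the kernel's actual encoding. */
#define CID_TRANSIT_BIT	(1u << 31)

/* True for a transitional CID or a mode word with the transition bit set */
static inline bool cid_in_transit(unsigned int v)
{
	return v & CID_TRANSIT_BIT;
}

/* Strip the transition bit to recover the plain CID */
static inline unsigned int cid_from_transit_cid(unsigned int cid)
{
	return cid & ~CID_TRANSIT_BIT;
}
```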

Assigning the ownership right there not only spares the bitmap access for
dropping the CID, it also avoids it when the task is scheduled back in, as
it directly hits the fast path in both modes when the CID is within the
optimal range. If it is outside the range, the next schedule-in will need
to converge, so dropping it right away is sensible. In the good case this
also allows the next schedule-in operation to take the fast path.
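As a rough userspace sketch of the drop-versus-transfer decision described above, with stubbed state, a hypothetical bit encoding, and a drop counter standing in for the bitmap operation (the real code lives in kernel/sched/sched.h):

```c
#include <stdbool.h>

/* Hypothetical encoding: bit 31 marks a transitional value,
 * bit 30 marks per-CPU ownership. Illustration only. */
#define CID_TRANSIT	(1u << 31)
#define MODE_ON_CPU	(1u << 30)
#define CID_UNSET	(~0u >> 2)

/* Stand-in for the relevant mm and task state */
struct mm_cid_state {
	unsigned int mode;	/* ownership state + transition bit */
	unsigned int max_cids;	/* convergence range */
	unsigned int task_cid;	/* prev->mm_cid.cid */
	unsigned int dropped;	/* counts pool drops (the bitmap op) */
};

static bool cid_in_transit(unsigned int v)
{
	return v & CID_TRANSIT;
}

/* At schedule-out: if the transition is done and the CID is within
 * the convergence range, transfer ownership so the next schedule-in
 * hits the fast path; otherwise drop it back into the pool. */
static void mm_cid_schedout(struct mm_cid_state *s)
{
	unsigned int cid;

	if (!cid_in_transit(s->task_cid))
		return;

	cid = s->task_cid & ~CID_TRANSIT;

	if (!cid_in_transit(s->mode) && cid < s->max_cids) {
		/* Transfer ownership, tagged per-CPU if that mode is active */
		s->task_cid = (s->mode & MODE_ON_CPU) ? (cid | MODE_ON_CPU) : cid;
	} else {
		/* Still transitioning or out of range: drop into the pool */
		s->dropped++;
		s->task_cid = CID_UNSET;
	}
}
```

The sketch keeps only the control flow of the commit; locking, the per-CPU update, and the real CID pool are omitted.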

With a thread pool benchmark which is configured to cross the mode switch
boundaries frequently, this reduces the number of bitmap operations by
about 30% and increases fast-path utilization in the low single-digit
percentage range.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20260201192835.100194627@kernel.org

Authored by Thomas Gleixner and committed by Peter Zijlstra
4463c7aa 007d8428

+21 -2
kernel/sched/sched.h
@@ -3902,12 +3902,31 @@

 static __always_inline void mm_cid_schedout(struct task_struct *prev)
 {
+	struct mm_struct *mm = prev->mm;
+	unsigned int mode, cid;
+
 	/* During mode transitions CIDs are temporary and need to be dropped */
 	if (likely(!cid_in_transit(prev->mm_cid.cid)))
 		return;

-	mm_drop_cid(prev->mm, cid_from_transit_cid(prev->mm_cid.cid));
-	prev->mm_cid.cid = MM_CID_UNSET;
+	mode = READ_ONCE(mm->mm_cid.mode);
+	cid = cid_from_transit_cid(prev->mm_cid.cid);
+
+	/*
+	 * If transition mode is done, transfer ownership when the CID is
+	 * within the convergence range to optimize the next schedule in.
+	 */
+	if (!cid_in_transit(mode) && cid < READ_ONCE(mm->mm_cid.max_cids)) {
+		if (cid_on_cpu(mode))
+			cid = cid_to_cpu_cid(cid);
+
+		/* Update both so that the next schedule in goes into the fast path */
+		mm_cid_update_pcpu_cid(mm, cid);
+		prev->mm_cid.cid = cid;
+	} else {
+		mm_drop_cid(mm, cid);
+		prev->mm_cid.cid = MM_CID_UNSET;
+	}
 }

 static inline void mm_cid_switch_to(struct task_struct *prev, struct task_struct *next)