Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

sched_ext: Use READ_ONCE() for plain reads of scx_watchdog_timeout

scx_watchdog_timeout is written with WRITE_ONCE() in scx_enable():

WRITE_ONCE(scx_watchdog_timeout, timeout);

However, three read-side accesses use plain reads without the matching
READ_ONCE():

/* check_rq_for_timeouts() - L2824 */
last_runnable + scx_watchdog_timeout

/* scx_watchdog_workfn() - L2852 */
scx_watchdog_timeout / 2

/* scx_enable() - L5179 */
scx_watchdog_timeout / 2

The KCSAN documentation requires that once one side of a lock-free access
pair is annotated (here, the WRITE_ONCE() store), all concurrent accesses
to the same variable must use the matching marked accessor. Plain reads
alongside WRITE_ONCE() leave the pairing incomplete and can trigger KCSAN
data-race reports.

Note that scx_tick() already uses the correct READ_ONCE() annotation:

last_check + READ_ONCE(scx_watchdog_timeout)

Fix the three remaining plain reads to match, making all accesses to
scx_watchdog_timeout consistently annotated and KCSAN-clean.

Signed-off-by: zhidao su <suzhidao@xiaomi.com>
Signed-off-by: Tejun Heo <tj@kernel.org>

3f27958b 494eaf46

+3 -3
kernel/sched/ext.c

@@ -2739,7 +2739,7 @@
 	unsigned long last_runnable = p->scx.runnable_at;
 
 	if (unlikely(time_after(jiffies,
-				last_runnable + scx_watchdog_timeout))) {
+				last_runnable + READ_ONCE(scx_watchdog_timeout)))) {
 		u32 dur_ms = jiffies_to_msecs(jiffies - last_runnable);
 
 		scx_exit(sch, SCX_EXIT_ERROR_STALL, 0,
@@ -2767,7 +2767,7 @@
 		cond_resched();
 	}
 	queue_delayed_work(system_unbound_wq, to_delayed_work(work),
-			   scx_watchdog_timeout / 2);
+			   READ_ONCE(scx_watchdog_timeout) / 2);
 }
 
 void scx_tick(struct rq *rq)
@@ -5081,7 +5081,7 @@
 	WRITE_ONCE(scx_watchdog_timeout, timeout);
 	WRITE_ONCE(scx_watchdog_timestamp, jiffies);
 	queue_delayed_work(system_unbound_wq, &scx_watchdog_work,
-			   scx_watchdog_timeout / 2);
+			   READ_ONCE(scx_watchdog_timeout) / 2);
 
 	/*
 	 * Once __scx_enabled is set, %current can be switched to SCX anytime.