Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

workqueue: Better describe stall check

Try to be more explicit about why the workqueue watchdog does not take
pool->lock by default. Spin locks are full memory barriers, which could
delay anything; most importantly, they would primarily delay operations
on the related worker pools.

Explain why re-checking the timestamp under pool->lock is enough to
prevent false positives.

Finally, make it clear what the alternative solution would be in
__queue_work(), which is a hotter path.

Signed-off-by: Petr Mladek <pmladek@suse.com>
Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Tejun Heo <tj@kernel.org>

Authored by Petr Mladek, committed by Tejun Heo
e398978d c7f27a8a

+8 -7
kernel/workqueue.c
@@ -7702,13 +7702,14 @@
 		/*
 		 * Did we stall?
 		 *
-		 * Do a lockless check first. On weakly ordered
-		 * architectures, the lockless check can observe a
-		 * reordering between worklist insert_work() and
-		 * last_progress_ts update from __queue_work(). Since
-		 * __queue_work() is a much hotter path than the timer
-		 * function, we handle false positive here by reading
-		 * last_progress_ts again with pool->lock held.
+		 * Do a lockless check first to do not disturb the system.
+		 *
+		 * Prevent false positives by double checking the timestamp
+		 * under pool->lock. The lock makes sure that the check reads
+		 * an updated pool->last_progress_ts when this CPU saw
+		 * an already updated pool->worklist above. It seems better
+		 * than adding another barrier into __queue_work() which
+		 * is a hotter path.
 		 */
 		if (time_after(now, ts + thresh)) {
 			scoped_guard(raw_spinlock_irqsave, &pool->lock) {