Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

ipmi: Replace use of system_wq with system_percpu_wq

This patch continues the effort to refactor the workqueue APIs, which began
with the changes that introduced new workqueues and a new alloc_workqueue flag:

commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
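
Alongside the new system workqueues, the second commit above added a WQ_PERCPU
flag so that drivers allocating their own workqueue can state per-CPU placement
explicitly rather than relying on the historical default. A minimal sketch of
such an allocation (kernel-style C, not buildable outside a kernel tree; the
workqueue name "ipmi_example" is illustrative, not from this driver):

```c
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int example_init(void)
{
	/* Explicitly request per-CPU behaviour via WQ_PERCPU instead of
	 * depending on the default, which is planned to change.
	 */
	example_wq = alloc_workqueue("ipmi_example", WQ_PERCPU, 0);
	if (!example_wq)
		return -ENOMEM;
	return 0;
}
```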

The goal of the refactoring is to eventually make workqueues unbound by
default, so that their workload placement can be optimized by the scheduler.

Before that can happen, each individual case must be carefully reviewed and
its workqueue users converted to the better-named new workqueues, with no
intended behaviour change:

system_wq -> system_percpu_wq
system_unbound_wq -> system_dfl_wq

This way the old obsolete workqueues (system_wq, system_unbound_wq) can be
removed in the future.
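
The conversion is mechanical: each enqueue on the old default workqueue
switches to the explicitly per-CPU one, with identical semantics. A sketch of
what a converted call site looks like (kernel-style C, not buildable outside a
kernel tree; "my_work" and "my_handler" are hypothetical names, not taken from
the IPMI driver):

```c
#include <linux/workqueue.h>

static void my_handler(struct work_struct *work)
{
	/* Runs on a per-CPU worker, exactly as with the old system_wq. */
}

static DECLARE_WORK(my_work, my_handler);

static void my_trigger(void)
{
	/* Before: queue_work(system_wq, &my_work);
	 * After:  same behaviour, but the per-CPU intent is explicit.
	 */
	queue_work(system_percpu_wq, &my_work);
}
```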

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Message-ID: <20251224161301.135382-1-marco.crivellari@suse.com>
Signed-off-by: Corey Minyard <corey@minyard.net>

Authored by Marco Crivellari, committed by Corey Minyard
122d16da af4e9ef3

+5 -5
drivers/char/ipmi/ipmi_msghandler.c
@@ -987,7 +987,7 @@
 		mutex_lock(&intf->user_msgs_mutex);
 		list_add_tail(&msg->link, &intf->user_msgs);
 		mutex_unlock(&intf->user_msgs_mutex);
-		queue_work(system_wq, &intf->smi_work);
+		queue_work(system_percpu_wq, &intf->smi_work);
 	}

 	return rv;
@@ -4977,7 +4977,7 @@
 	if (run_to_completion)
 		smi_work(&intf->smi_work);
 	else
-		queue_work(system_wq, &intf->smi_work);
+		queue_work(system_percpu_wq, &intf->smi_work);
 }
 EXPORT_SYMBOL(ipmi_smi_msg_received);

@@ -4987,7 +4987,7 @@
 		return;

 	atomic_set(&intf->watchdog_pretimeouts_to_deliver, 1);
-	queue_work(system_wq, &intf->smi_work);
+	queue_work(system_percpu_wq, &intf->smi_work);
 }
 EXPORT_SYMBOL(ipmi_smi_watchdog_pretimeout);

@@ -5162,7 +5162,7 @@
 				       flags);
 	}

-	queue_work(system_wq, &intf->smi_work);
+	queue_work(system_percpu_wq, &intf->smi_work);

 	return need_timer;
 }
@@ -5218,7 +5218,7 @@
 	if (atomic_read(&stop_operation))
 		return;

-	queue_work(system_wq, &ipmi_timer_work);
+	queue_work(system_percpu_wq, &ipmi_timer_work);
 }

 static void need_waiter(struct ipmi_smi *intf)