Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

block: define alloc_sched_data and free_sched_data methods for kyber

Currently, the Kyber elevator allocates its private data dynamically in
->init_sched and frees it in ->exit_sched. However, since ->init_sched
is invoked during an elevator switch after acquiring both ->freeze_lock
and ->elevator_lock, it may trigger a lockdep splat [1] due to a
dependency on pcpu_alloc_mutex.

To resolve this, move the elevator data allocation and deallocation
logic from ->init_sched and ->exit_sched into the newly introduced
->alloc_sched_data and ->free_sched_data methods. These callbacks are
invoked before acquiring ->freeze_lock and ->elevator_lock, ensuring
that memory allocation happens safely without introducing additional
locking dependencies.

This change breaks the dependency chain involving pcpu_alloc_mutex and
prevents the reported lockdep warning.

[1] https://lore.kernel.org/all/CAGVVp+VNW4M-5DZMNoADp6o2VKFhi7KxWpTDkcnVyjO0=-D5+A@mail.gmail.com/

Reported-by: Changhui Zhong <czhong@redhat.com>
Reported-by: Yi Zhang <yi.zhang@redhat.com>
Closes: https://lore.kernel.org/all/CAGVVp+VNW4M-5DZMNoADp6o2VKFhi7KxWpTDkcnVyjO0=-D5+A@mail.gmail.com/
Tested-by: Yi Zhang <yi.zhang@redhat.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Yu Kuai <yukuai@fnnas.com>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>

Authored by Nilay Shroff, committed by Jens Axboe
d4c3ef56 0315476e

+22 -8
block/kyber-iosched.c
···
 static int kyber_init_sched(struct request_queue *q, struct elevator_queue *eq)
 {
-	struct kyber_queue_data *kqd;
-
-	kqd = kyber_queue_data_alloc(q);
-	if (IS_ERR(kqd))
-		return PTR_ERR(kqd);
-
 	blk_stat_enable_accounting(q);

 	blk_queue_flag_clear(QUEUE_FLAG_SQ_SCHED, q);

-	eq->elevator_data = kqd;
 	q->elevator = eq;
 	kyber_depth_updated(q);

 	return 0;
 }

+static void *kyber_alloc_sched_data(struct request_queue *q)
+{
+	struct kyber_queue_data *kqd;
+
+	kqd = kyber_queue_data_alloc(q);
+	if (IS_ERR(kqd))
+		return NULL;
+
+	return kqd;
+}
+
 static void kyber_exit_sched(struct elevator_queue *e)
 {
 	struct kyber_queue_data *kqd = e->elevator_data;
-	int i;

 	timer_shutdown_sync(&kqd->timer);
 	blk_stat_disable_accounting(kqd->q);
+}
+
+static void kyber_free_sched_data(void *elv_data)
+{
+	struct kyber_queue_data *kqd = elv_data;
+	int i;
+
+	if (!kqd)
+		return;

 	for (i = 0; i < KYBER_NUM_DOMAINS; i++)
 		sbitmap_queue_free(&kqd->domain_tokens[i]);
···
 	.exit_sched = kyber_exit_sched,
 	.init_hctx = kyber_init_hctx,
 	.exit_hctx = kyber_exit_hctx,
+	.alloc_sched_data = kyber_alloc_sched_data,
+	.free_sched_data = kyber_free_sched_data,
 	.limit_depth = kyber_limit_depth,
 	.bio_merge = kyber_bio_merge,
 	.prepare_request = kyber_prepare_request,
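Following the same split, the callback pair for any other elevator looks like
the sketch below. This is a hypothetical userspace example, not code from the
patch: toy_sched_data and the toy_* names are invented, and calloc()/free()
stand in for the kernel's kzalloc()/kfree(). The pattern it mirrors from the
diff is that ->alloc_sched_data returns the private data (or NULL to fail the
switch) and ->free_sched_data must tolerate a NULL pointer.

```c
#include <stdlib.h>

/* Opaque stand-in; a real elevator would include the blk-mq headers. */
struct request_queue;

/* Hypothetical per-queue private data for an imaginary "toy" elevator. */
struct toy_sched_data {
	struct request_queue *q;
	unsigned int depth;
};

/* Allocate in ->alloc_sched_data: may block, no queue locks held yet. */
static void *toy_alloc_sched_data(struct request_queue *q)
{
	struct toy_sched_data *sd = calloc(1, sizeof(*sd));

	if (!sd)
		return NULL;	/* NULL tells the core the switch must fail */
	sd->q = q;
	sd->depth = 64;
	return sd;
}

/* Free in ->free_sched_data: called outside the locks, must accept NULL. */
static void toy_free_sched_data(void *elv_data)
{
	struct toy_sched_data *sd = elv_data;

	if (!sd)
		return;
	free(sd);
}
```

With this split, ->init_sched only attaches the already-allocated data to the
queue, so nothing that can sleep on an allocator lock runs under ->freeze_lock
or ->elevator_lock.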