block: fix ordering of recursive split IO

Currently, the split bio is chained to the original bio, and the
original bio is resubmitted to the tail of current->bio_list, waiting
for the split bio to be issued. However, if the split bio gets split
again, the IO order will be messed up. On the one hand, this causes
performance degradation, especially for mdraid with large IO sizes; on
the other hand, it causes write errors for zoned block devices[1].

For example, in raid456 an IO will first be split by max_sectors in
md_submit_bio(), and then split again by chunksize for internal
handling. Assume max_sectors is 1M and chunksize is 512k:

1) issue a 2M IO:

bio issuing: 0+2M
current->bio_list: NULL

2) md_submit_bio() split by max_sectors:

bio issuing: 0+1M
current->bio_list: 1M+1M

3) chunk_aligned_read() split by chunksize:

bio issuing: 0+512k
current->bio_list: 1M+1M -> 512k+512k

4) after the first bio is issued, __submit_bio_noacct() will continue
issuing the next bio:

bio issuing: 1M+1M
current->bio_list: 512k+512k
bio issued: 0+512k

5) chunk_aligned_read() split by chunksize:

bio issuing: 1M+512k
current->bio_list: 512k+512k -> 1536k+512k
bio issued: 0+512k

6) no further splits; the final issue order is:

0+512k -> 1M+512k -> 512k+512k -> 1536k+512k
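
To make the reordering easy to reproduce outside the kernel, below is a
minimal user-space model of this walkthrough (a sketch, not kernel
code: a bio is reduced to an offset/length pair in KB, current->bio_list
to a plain linked list, and split() stands in for bio_split() plus
resubmission of the remainder). It replays the 2M read with the current
tail insertion and with the head insertion used by this fix:

/* User-space model of recursive bio splitting; not kernel code. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define MAX_SECTORS	1024	/* 1M, in KB units */
#define CHUNK_SIZE	512	/* 512k */

struct bio { unsigned off, len; struct bio *next; };
static struct bio *head, *tail;

static struct bio *bio_alloc(unsigned off, unsigned len)
{
	struct bio *b = malloc(sizeof(*b));
	b->off = off; b->len = len; b->next = NULL;
	return b;
}

static void list_add(struct bio *b, bool to_head)
{
	if (!head) { head = tail = b; return; }
	if (to_head) { b->next = head; head = b; }
	else { tail->next = b; tail = b; }
}

static struct bio *list_pop(void)
{
	struct bio *b = head;
	if (b && !(head = b->next))
		tail = NULL;
	return b;
}

/* Split off the first @boundary KB; resubmit the remainder. */
static void split(struct bio *b, unsigned boundary, bool to_head)
{
	list_add(bio_alloc(b->off + boundary, b->len - boundary), to_head);
	b->len = boundary;
}

static void run(bool split_to_head)
{
	struct bio *b;

	list_add(bio_alloc(0, 2048), false);	/* 1) issue a 2M IO */
	while ((b = list_pop())) {
		if (b->len > MAX_SECTORS)	/* md_submit_bio() split */
			split(b, MAX_SECTORS, split_to_head);
		if (b->len > CHUNK_SIZE)	/* chunk_aligned_read() split */
			split(b, CHUNK_SIZE, split_to_head);
		printf("%uk+%uk ", b->off, b->len);
		free(b);
	}
	printf("\n");
}

int main(void)
{
	run(false);	/* tail: 0k+512k 1024k+512k 512k+512k 1536k+512k */
	run(true);	/* head: 0k+512k 512k+512k 1024k+512k 1536k+512k */
	return 0;
}

Built with gcc and run, the first line shows the out-of-order sequence
from step 6), and the second the sequential order after the fix.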

This behaviour causes a large sequential read on raid456 to end up as
small discontinuous IOs on the underlying disks. Fix this problem by
placing the split bio at the head of current->bio_list.
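
For reference, the two list helpers differ only in where they link the
bio; simplified versions (modeled on the bio_list helpers in
include/linux/bio.h, with fields as in mainline) look like:

static inline void bio_list_add(struct bio_list *bl, struct bio *bio)
{
	/* FIFO append: the bio runs after everything already queued. */
	bio->bi_next = NULL;
	if (bl->tail)
		bl->tail->bi_next = bio;
	else
		bl->head = bio;
	bl->tail = bio;
}

static inline void bio_list_add_head(struct bio_list *bl, struct bio *bio)
{
	/* LIFO prepend: the bio is popped next, before its siblings. */
	bio->bi_next = bl->head;
	if (!bl->head)
		bl->tail = bio;
	bl->head = bio;
}

With head insertion, the remainder of a just-split bio is the next one
popped by __submit_bio_noacct(), so a bio that needs several rounds of
splitting is fully issued before the bios queued behind it.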

Test script (8-disk raid5 with 64k chunksize):
dd if=/dev/md0 of=/dev/null bs=4480k iflag=direct

Test results:
Before this patch:
1) iostat results:
Device         r/s      rMB/s   rrqm/s  %rrqm  r_await  rareq-sz  aqu-sz  %util
md0       52430.00    3276.87     0.00   0.00     0.62     64.00   32.60  80.10
sd*        4487.00     409.00  2054.00  31.40     0.82     93.34    3.68  71.20
2) blktrace G stage:
8,0 0 486445 11.357392936 843 G R 14071424 + 128 [dd]
8,0 0 486451 11.357466360 843 G R 14071168 + 128 [dd]
8,0 0 486454 11.357515868 843 G R 14071296 + 128 [dd]
8,0 0 486468 11.357968099 843 G R 14072192 + 128 [dd]
8,0 0 486474 11.358031320 843 G R 14071936 + 128 [dd]
8,0 0 486480 11.358096298 843 G R 14071552 + 128 [dd]
8,0 0 486490 11.358303858 843 G R 14071808 + 128 [dd]
3) io seek for sdx:
Note: IO seek is derived from the blktrace D stage, as the statistic of:
ABS((offset of next IO) - (offset + len of previous IO))
(a small helper that computes this is sketched after the results below)

Read|Write seek
cnt 55175, zero cnt 25079
>=(KB) .. <(KB)  :  count  ratio |distribution                            |
     0 .. 1      :  25079  45.5% |########################################|
     1 .. 2      :      0   0.0% |                                        |
     2 .. 4      :      0   0.0% |                                        |
     4 .. 8      :      0   0.0% |                                        |
     8 .. 16     :      0   0.0% |                                        |
    16 .. 32     :      0   0.0% |                                        |
    32 .. 64     :  12540  22.7% |#####################                   |
    64 .. 128    :   2508   4.5% |#####                                   |
   128 .. 256    :      0   0.0% |                                        |
   256 .. 512    :  10032  18.2% |#################                       |
   512 .. 1024   :   5016   9.1% |#########                               |

After this patch:
1) iostat results:
Device         r/s      rMB/s   rrqm/s  %rrqm  r_await  rareq-sz  aqu-sz  %util
md0       87965.00    5271.88     0.00   0.00     0.16     61.37   14.03  90.60
sd*        6020.00     658.44  5117.00  45.95     0.44    112.00    2.68  86.50
2) blktrace G stage:
8,0 0 206296 5.354894072 664 G R 7156992 + 128 [dd]
8,0 0 206305 5.355018179 664 G R 7157248 + 128 [dd]
8,0 0 206316 5.355204438 664 G R 7157504 + 128 [dd]
8,0 0 206319 5.355241048 664 G R 7157760 + 128 [dd]
8,0 0 206333 5.355500923 664 G R 7158016 + 128 [dd]
8,0 0 206344 5.355837806 664 G R 7158272 + 128 [dd]
8,0 0 206353 5.355960395 664 G R 7158528 + 128 [dd]
8,0 0 206357 5.356020772 664 G R 7158784 + 128 [dd]
3) io seek for sdx:
Read|Write seek
cnt 28644, zero cnt 21483
>=(KB) .. <(KB)  :  count  ratio |distribution                            |
     0 .. 1      :  21483  75.0% |########################################|
     1 .. 2      :      0   0.0% |                                        |
     2 .. 4      :      0   0.0% |                                        |
     4 .. 8      :      0   0.0% |                                        |
     8 .. 16     :      0   0.0% |                                        |
    16 .. 32     :      0   0.0% |                                        |
    32 .. 64     :   7161  25.0% |##############                          |
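
The seek statistic above can be reproduced with a small helper along
these lines (a sketch: it assumes blkparse output has already been
reduced to one "sector nr_sectors" pair per D event, which is an
assumption about your post-processing; sectors are 512 bytes, hence the
divide-by-2 to get KB):

/* Sketch: seek distances from pre-filtered blktrace D-stage events. */
#include <stdio.h>

int main(void)
{
	unsigned long long sec, len, prev_end = 0;
	int first = 1;

	while (scanf("%llu %llu", &sec, &len) == 2) {
		if (!first) {
			/* ABS(next offset - (prev offset + prev len)) */
			unsigned long long seek =
				sec > prev_end ? sec - prev_end
					       : prev_end - sec;
			printf("%llu\n", seek / 2);	/* sectors -> KB */
		}
		prev_end = sec + len;
		first = 0;
	}
	return 0;
}

Feeding the printed values into a histogram tool gives distributions
like the ones shown above.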

BTW, this looks like a long-standing problem from day one, and large
sequential IO reads are a pretty common case, e.g. video playback.

Even with this patch, IO in this test case is merged to at most 128k,
due to the block layer plug limit BLK_PLUG_FLUSH_SIZE; raising that
limit can yield even better performance. However, we'll figure out how
to do this properly later.
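
For context, the plug limit referenced above is a compile-time
constant; as of current mainline it is defined in include/linux/blkdev.h
roughly as follows (worth re-checking against your tree):

#define BLK_PLUG_FLUSH_SIZE (128 * 1024)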

[1] https://lore.kernel.org/all/e40b076d-583d-406b-b223-005910a9f46f@acm.org/

Fixes: d89d87965dcb ("When stacked block devices are in-use (e.g. md or dm), the recursive calls")
Reported-by: Tie Ren <tieren@fnnas.com>
Closes: https://lore.kernel.org/all/7dro5o7u5t64d6bgiansesjavxcuvkq5p2pok7dtwkav7b7ape@3isfr44b6352/
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>


---
 block/blk-core.c     | 16 ++++++++++------
 block/blk-merge.c    |  2 +-
 block/blk-throttle.c |  2 +-
 block/blk.h          |  2 +-
 4 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -725,7 +725,7 @@
 	current->bio_list = NULL;
 }
 
-void submit_bio_noacct_nocheck(struct bio *bio)
+void submit_bio_noacct_nocheck(struct bio *bio, bool split)
 {
 	blk_cgroup_bio_start(bio);
 
@@ -744,12 +744,16 @@
 	 * to collect a list of requests submited by a ->submit_bio method while
 	 * it is active, and then process them after it returned.
 	 */
-	if (current->bio_list)
-		bio_list_add(&current->bio_list[0], bio);
-	else if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO))
+	if (current->bio_list) {
+		if (split)
+			bio_list_add_head(&current->bio_list[0], bio);
+		else
+			bio_list_add(&current->bio_list[0], bio);
+	} else if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO)) {
 		__submit_bio_noacct_mq(bio);
-	else
+	} else {
 		__submit_bio_noacct(bio);
+	}
 }
 
 static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
@@ -870,7 +874,7 @@
 
 	if (blk_throtl_bio(bio))
 		return;
-	submit_bio_noacct_nocheck(bio);
+	submit_bio_noacct_nocheck(bio, false);
 	return;
 
 not_supported:

diff --git a/block/blk-merge.c b/block/blk-merge.c
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -134,7 +134,7 @@
 	if (should_fail_bio(bio))
 		bio_io_error(bio);
 	else if (!blk_throtl_bio(bio))
-		submit_bio_noacct_nocheck(bio);
+		submit_bio_noacct_nocheck(bio, true);
 
 	return split;
 }

diff --git a/block/blk-throttle.c b/block/blk-throttle.c
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -1224,7 +1224,7 @@
 	if (!bio_list_empty(&bio_list_on_stack)) {
 		blk_start_plug(&plug);
 		while ((bio = bio_list_pop(&bio_list_on_stack)))
-			submit_bio_noacct_nocheck(bio);
+			submit_bio_noacct_nocheck(bio, false);
 		blk_finish_plug(&plug);
 	}
 }

diff --git a/block/blk.h b/block/blk.h
--- a/block/blk.h
+++ b/block/blk.h
@@ -55,7 +55,7 @@
 bool __blk_freeze_queue_start(struct request_queue *q,
 		struct task_struct *owner);
 int __bio_queue_enter(struct request_queue *q, struct bio *bio);
-void submit_bio_noacct_nocheck(struct bio *bio);
+void submit_bio_noacct_nocheck(struct bio *bio, bool split);
 void bio_await_chain(struct bio *bio);
 
 static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)