md/raid5: limit request size according to implementation limits

The current implementation employs a 16-bit counter of active stripes in the
lower bits of bio->bi_phys_segments. If a request is big enough to overflow
this counter, the bio will be completed and freed too early.
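
As a sketch of the packing (modeled on the raid5_*_bi_active_stripes()
helpers in drivers/md/raid5.h of this era; illustrative, not a verbatim
quote):

	static inline void raid5_inc_bi_active_stripes(struct bio *bio)
	{
		atomic_t *segments = (atomic_t *)&bio->bi_phys_segments;
		/* past 0xffff this carries into the upper 16 bits */
		atomic_inc(segments);
	}

	static inline int raid5_dec_bi_active_stripes(struct bio *bio)
	{
		atomic_t *segments = (atomic_t *)&bio->bi_phys_segments;
		/* the bio is completed once the low 16 bits drain to zero */
		return atomic_sub_return(1, segments) & 0xffff;
	}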

Fortunately this does not happen in the default configuration, because
several other limits prevent it: stripe_cache_size * nr_disks effectively
bounds the count of active stripes, and the small max_sectors_kb on the
lower disks prevents it during normal read/write operations.
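
For illustration (the array size here is hypothetical): with the default
stripe_cache_size of 256 on an 8-disk array, at most 256 * 8 = 2048 stripes
are active at once, far below the 16-bit limit of 0xffff.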

The overflow happens easily with discard if it is enabled via the module
parameter "devices_handle_discard_safely" and stripe_cache_size is set big
enough.

This patch limits the request size to 256MiB - 8KiB to prevent such overflows.
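
With 4KiB pages, STRIPE_SECTORS is 8 sectors of 512 bytes, so the new cap
works out to

	0xfffe * 8 * 512 = 268427264 bytes = 256MiB - 8KiB

matching the 0xfffe * STRIPE_SECTORS value used in the code below.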

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Shaohua Li <shli@kernel.org>
Cc: Neil Brown <neilb@suse.com>
Cc: stable@vger.kernel.org
Signed-off-by: Shaohua Li <shli@fb.com>
commit e8d7c33232 (parent 1a0ec5c30c)
Author: Konstantin Khlebnikov, 2016-11-27 19:32:32 +03:00
Committed by: Shaohua Li

@@ -7108,6 +7108,15 @@ static int raid5_run(struct mddev *mddev)
 		stripe = (stripe | (stripe-1)) + 1;
 		mddev->queue->limits.discard_alignment = stripe;
 		mddev->queue->limits.discard_granularity = stripe;
+		/*
+		 * We use 16-bit counter of active stripes in bi_phys_segments
+		 * (minus one for over-loaded initialization)
+		 */
+		blk_queue_max_hw_sectors(mddev->queue, 0xfffe * STRIPE_SECTORS);
+		blk_queue_max_discard_sectors(mddev->queue,
+					      0xfffe * STRIPE_SECTORS);
 		/*
 		 * unaligned part of discard request will be ignored, so can't
 		 * guarantee discard_zeroes_data