Block IO Controller
===================
Overview
========
cgroup subsys "blkio" implements the block IO controller. There seems to be
a need for various kinds of IO control policies (like proportional BW, max
BW) both at leaf nodes as well as at intermediate nodes in a storage
hierarchy. The plan is to use the same cgroup based management interface
for the blkio controller and, based on user options, switch IO policies in
the background.

Currently two IO control policies are implemented. The first one is a
proportional weight time based division of disk policy. It is implemented
in CFQ. Hence this policy takes effect only on leaf nodes when CFQ is being
used. The second one is a throttling policy which can be used to specify
upper IO rate limits on devices. This policy is implemented in the generic
block layer and can be used on leaf nodes as well as on higher level
logical devices like device mapper.

HOWTO
=====
Proportional Weight division of bandwidth
-----------------------------------------
You can do a very simple test by running two dd threads in two different
cgroups. Here is what you can do.

- Enable Block IO controller
        CONFIG_BLK_CGROUP=y

- Enable group scheduling in CFQ
        CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into the kernel and mount the IO controller (blkio);
  see cgroups.txt, "Why are cgroups needed?".

        mount -t tmpfs cgroup_root /sys/fs/cgroup
        mkdir /sys/fs/cgroup/blkio
        mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Create two cgroups

        mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2

- Set weights of group test1 and test2

        echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
        echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight

- Create two files of the same size (say 512MB each) on the same disk
  (file1, file2) and launch two dd threads in different cgroups to read
  those files.

        sync
        echo 3 > /proc/sys/vm/drop_caches

        dd if=/mnt/sdb/zerofile1 of=/dev/null &
        echo $! > /sys/fs/cgroup/blkio/test1/tasks
        cat /sys/fs/cgroup/blkio/test1/tasks

        dd if=/mnt/sdb/zerofile2 of=/dev/null &
        echo $! > /sys/fs/cgroup/blkio/test2/tasks
        cat /sys/fs/cgroup/blkio/test2/tasks

- At a macro level, the first dd should finish first. To get more precise
  data, keep looking (with the help of a script such as the sketch below)
  at the blkio.time and blkio.sectors files of both the test1 and test2
  groups. This will tell how much disk time (in milliseconds) each group
  got and how many sectors each group dispatched to the disk. We provide
  fairness in terms of disk time, so ideally blkio.time of the cgroups
  should be in proportion to the weight.
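
  A minimal monitoring sketch (it assumes the test1 and test2 groups set
  up above; grep prefixes each line with the file name so the two groups
  can be told apart):

        while true; do
                # Sample disk time and sectors for both groups every second.
                grep . /sys/fs/cgroup/blkio/test1/blkio.time \
                       /sys/fs/cgroup/blkio/test2/blkio.time
                grep . /sys/fs/cgroup/blkio/test1/blkio.sectors \
                       /sys/fs/cgroup/blkio/test2/blkio.sectors
                sleep 1
        done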

Throttling/Upper Limit policy
-----------------------------
- Enable Block IO controller
        CONFIG_BLK_CGROUP=y

- Enable throttling in block layer
        CONFIG_BLK_DEV_THROTTLING=y

- Mount blkio controller (see cgroups.txt, "Why are cgroups needed?")

        mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Specify a bandwidth rate on a particular device for the root group. The
  format for the policy is "<major>:<minor> <bytes_per_second>".

        echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

  The above will put a limit of 1MB/second on reads happening for the root
  group on the device having major/minor number 8:16.

- Run dd to read a file and see if the rate is throttled to 1MB/s or not.

        # dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024 iflag=direct
        1024+0 records in
        1024+0 records out
        4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s

  Limits for writes can be set using the blkio.throttle.write_bps_device
  file.
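
  For example, the following (a sketch reusing the 8:16 device from above)
  caps writes on that device to 2MB/second:

        echo "8:16 2097152" > /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device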

Hierarchical Cgroups
====================

Both CFQ and throttling implement hierarchy support; however,
throttling's hierarchy support is enabled only if "sane_behavior" is
enabled from the cgroup side, which currently is a development option
and not publicly available.

Suppose somebody creates a hierarchy as follows.

                  root
                 /    \
              test1  test2
                |
              test3

CFQ by default, and throttling with "sane_behavior", will handle the
hierarchy correctly. For details on CFQ hierarchy support, refer to
Documentation/block/cfq-iosched.txt. For throttling, all limits apply
to the whole subtree while all statistics are local to the IOs
directly generated by tasks in that cgroup.

Throttling without "sane_behavior" enabled from the cgroup side will
practically treat all groups at the same level, as if the hierarchy
looked like the following.

                      pivot
                   /  /  \  \
               root test1 test2 test3

Various user visible config options
===================================
CONFIG_BLK_CGROUP
        - Block IO controller.

CONFIG_DEBUG_BLK_CGROUP
        - Debug help. Right now some additional stats files show up in the
          cgroup if this option is enabled.

CONFIG_CFQ_GROUP_IOSCHED
        - Enables group scheduling in CFQ. Currently only 1 level of group
          creation is allowed.

CONFIG_BLK_DEV_THROTTLING
        - Enable block device throttling support in the block layer.

Details of cgroup files
=======================
Proportional weight policy files
--------------------------------
- blkio.weight
        - Specifies per cgroup weight. This is the default weight of the
          group on all the devices until and unless overridden by a per
          device rule (see blkio.weight_device).
          Currently allowed range of weights is from 10 to 1000.

- blkio.weight_device
        - One can specify per cgroup per device rules using this interface.
          These rules override the default value of group weight as
          specified by blkio.weight.

          Following is the format.

          # echo dev_maj:dev_minor weight > blkio.weight_device
          Configure weight=300 on /dev/sdb (8:16) in this cgroup
          # echo 8:16 300 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:16    300

          Configure weight=500 on /dev/sda (8:0) in this cgroup
          # echo 8:0 500 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:0     500
          8:16    300

          Remove specific weight for /dev/sda in this cgroup
          # echo 8:0 0 > blkio.weight_device
          # cat blkio.weight_device
          dev     weight
          8:16    300

- blkio.leaf_weight[_device]
        - Equivalents of blkio.weight[_device] for the purpose of
          deciding how much weight tasks in the given cgroup have while
          competing with the cgroup's child cgroups. For details,
          please refer to Documentation/block/cfq-iosched.txt.
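
          For example (a sketch; the interface mirrors blkio.weight and
          blkio.weight_device as described above):

          # echo 500 > blkio.leaf_weight
          # echo 8:16 300 > blkio.leaf_weight_device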

- blkio.time
        - Disk time allocated to the cgroup per device in milliseconds.
          The first two fields specify the major and minor number of the
          device and the third field specifies the disk time allocated to
          the group in milliseconds.

- blkio.sectors
        - Number of sectors transferred to/from disk by the group. The
          first two fields specify the major and minor number of the
          device and the third field specifies the number of sectors
          transferred by the group to/from the device.

- blkio.io_service_bytes
        - Number of bytes transferred to/from the disk by the group. These
          are further divided by the type of operation - read or write,
          sync or async. The first two fields specify the major and minor
          number of the device, the third field specifies the operation
          type and the fourth field specifies the number of bytes.

- blkio.io_serviced
        - Number of IOs (bios) issued to the disk by the group. These
          are further divided by the type of operation - read or write,
          sync or async. The first two fields specify the major and minor
          number of the device, the third field specifies the operation
          type and the fourth field specifies the number of IOs.
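
          For example, output lines look like the following (a sketch; the
          values are illustrative):

          # cat blkio.io_serviced
          8:16 Read 320
          8:16 Write 0
          8:16 Sync 320
          8:16 Async 0
          8:16 Total 320
          Total 320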

- blkio.io_service_time
        - Total amount of time between request dispatch and request
          completion for the IOs done by this cgroup. This is in
          nanoseconds to make it meaningful for flash devices too. For
          devices with a queue depth of 1, this time represents the actual
          service time. When queue_depth > 1, that is no longer true as
          requests may be served out of order. This may cause the service
          time for a given IO to include the service time of multiple IOs
          when served out of order, which may result in total
          io_service_time > actual time elapsed. This time is further
          divided by the type of operation - read or write, sync or async.
          The first two fields specify the major and minor number of the
          device, the third field specifies the operation type and the
          fourth field specifies the io_service_time in ns.

- blkio.io_wait_time
        - Total amount of time the IOs for this cgroup spent waiting in
          the scheduler queues for service. This can be greater than the
          total time elapsed since it is the cumulative io_wait_time for
          all IOs. It is not a measure of total time the cgroup spent
          waiting but rather a measure of the wait_time of its individual
          IOs. For devices with queue_depth > 1, this metric does not
          include the time spent waiting for service once the IO is
          dispatched to the device but before it actually gets serviced
          (there might be a time lag here due to re-ordering of requests
          by the device). This is in nanoseconds to make it meaningful for
          flash devices too. This time is further divided by the type of
          operation - read or write, sync or async. The first two fields
          specify the major and minor number of the device, the third
          field specifies the operation type and the fourth field
          specifies the io_wait_time in ns.

- blkio.io_merged
        - Total number of bios/requests merged into requests belonging to
          this cgroup. This is further divided by the type of operation -
          read or write, sync or async.

- blkio.io_queued
        - Total number of requests queued up at any given instant for this
          cgroup. This is further divided by the type of operation - read
          or write, sync or async.

- blkio.avg_queue_size
        - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          The average queue size for this cgroup over the entire time of
          this cgroup's existence. Queue size samples are taken each time
          one of the queues of this cgroup gets a timeslice.

- blkio.group_wait_time
        - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time the cgroup had to wait since it
          became busy (i.e., went from 0 to 1 request queued) to get a
          timeslice for one of its queues. This is different from
          io_wait_time, which is the cumulative total of the amount of
          time spent by each IO in that cgroup waiting in the scheduler
          queue. This is in nanoseconds. If this is read when the cgroup
          is in a waiting (for timeslice) state, the stat will only report
          the group_wait_time accumulated till the last time it got a
          timeslice and will not include the current delta.

- blkio.empty_time
        - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time a cgroup spends without any pending
          requests when not being served, i.e., it does not include any
          time spent idling for one of the queues of the cgroup. This is
          in nanoseconds. If this is read when the cgroup is in an empty
          state, the stat will only report the empty_time accumulated till
          the last time it had a pending request and will not include the
          current delta.

- blkio.idle_time
        - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
          This is the amount of time spent by the IO scheduler idling for
          a given cgroup in anticipation of a better request than the
          existing ones from other queues/cgroups. This is in nanoseconds.
          If this is read when the cgroup is in an idling state, the stat
          will only report the idle_time accumulated till the last idle
          period and will not include the current delta.

- blkio.dequeue
        - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
          gives statistics about how many times a group was dequeued from
          the service tree of the device. The first two fields specify the
          major and minor number of the device and the third field
          specifies the number of times a group was dequeued from a
          particular device.

- blkio.*_recursive
        - Recursive version of various stats. These files show the
          same information as their non-recursive counterparts but
          include stats from all the descendant cgroups.

Throttling/Upper limit policy files
-----------------------------------
- blkio.throttle.read_bps_device
        - Specifies upper limit on READ rate from the device. IO rate is
          specified in bytes per second. Rules are per device. Following
          is the format.

          echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.read_bps_device

- blkio.throttle.write_bps_device
        - Specifies upper limit on WRITE rate to the device. IO rate is
          specified in bytes per second. Rules are per device. Following
          is the format.

          echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.write_bps_device

- blkio.throttle.read_iops_device
        - Specifies upper limit on READ rate from the device. IO rate is
          specified in IOs per second. Rules are per device. Following
          is the format.

          echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.read_iops_device

- blkio.throttle.write_iops_device
        - Specifies upper limit on WRITE rate to the device. IO rate is
          specified in IOs per second. Rules are per device. Following
          is the format.

          echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.write_iops_device

Note: If both BW and IOPS rules are specified for a device, then IO is
      subjected to both the constraints.
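
      For example (a sketch; /cgrp stands for the cgroup's directory as
      above), the following limits reads on device 8:16 to 2MB/second and
      1000 IOs/second at the same time:

        echo "8:16 2097152" > /cgrp/blkio.throttle.read_bps_device
        echo "8:16 1000" > /cgrp/blkio.throttle.read_iops_device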

- blkio.throttle.io_serviced
        - Number of IOs (bios) issued to the disk by the group. These
          are further divided by the type of operation - read or write,
          sync or async. The first two fields specify the major and minor
          number of the device, the third field specifies the operation
          type and the fourth field specifies the number of IOs.

- blkio.throttle.io_service_bytes
        - Number of bytes transferred to/from the disk by the group. These
          are further divided by the type of operation - read or write,
          sync or async. The first two fields specify the major and minor
          number of the device, the third field specifies the operation
          type and the fourth field specifies the number of bytes.

Common files among various policies
-----------------------------------
- blkio.reset_stats
        - Writing an int to this file will result in resetting all the
          stats for that cgroup.
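
          For example (any integer value works):

          # echo 1 > blkio.reset_stats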

CFQ sysfs tunable
=================
/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On faster hardware CFQ can be slow, especially with sequential workloads.
This happens because CFQ idles on a single queue, and a single queue might
not drive deep enough request queue depths to keep the storage busy. In
such scenarios one can try setting slice_idle=0; that switches CFQ to IOPS
(IO operations per second) mode on NCQ-supporting hardware.

That means CFQ will not idle between cfq queues of a cfq group and hence
be able to drive higher queue depths and achieve better throughput. That
also means that cfq provides fairness among groups in terms of IOPS and
not in terms of disk time.
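
For example (a sketch; replace sdb with the disk in question):

        echo 0 > /sys/block/sdb/queue/iosched/slice_idle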

/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.

By default group_idle is the same as slice_idle and does not do anything
if slice_idle is enabled.

One can experience an overall throughput drop if one has created multiple
groups and put applications in groups which are not driving enough IO to
keep the disk busy. In that case set group_idle=0, and CFQ will not idle
on individual groups and throughput should improve.
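
For example (a sketch; replace sdb with the disk in question):

        echo 0 > /sys/block/sdb/queue/iosched/group_idle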

Writeback
=========

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

On traditional cgroup hierarchies, relationships between different
controllers cannot be established, making it impossible for writeback
to operate while accounting for cgroup resource restrictions; all
writeback IOs are attributed to the root cgroup.

If both the blkio and memory controllers are used on the v2 hierarchy
and the filesystem supports cgroup writeback, writeback operations
correctly follow the resource restrictions imposed by both the memory
and blkio controllers.
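
A minimal sketch of such a setup (assuming a kernel where the v2
hierarchy is available as the cgroup2 filesystem and the block
controller shows up there as "io"):

        mount -t cgroup2 none /sys/fs/cgroup/unified
        echo "+memory +io" > /sys/fs/cgroup/unified/cgroup.subtree_control
        mkdir /sys/fs/cgroup/unified/test1
        echo $$ > /sys/fs/cgroup/unified/test1/cgroup.procs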

Writeback examines both system-wide and per-cgroup dirty memory status
and enforces the more restrictive of the two. Also, writeback control
parameters which are absolute values - vm.dirty_bytes and
vm.dirty_background_bytes - are distributed across cgroups according
to their current writeback bandwidth.

There's a peculiarity stemming from the discrepancy in ownership
granularity between the memory controller and writeback. While the
memory controller tracks ownership per page, writeback operates on a
per-inode basis. cgroup writeback bridges the gap by tracking ownership
per inode but migrating ownership if too many foreign pages (pages
which don't match the current inode ownership) have been encountered
while writing back the inode.

This is a conscious design choice, as writeback operations are
inherently tied to inodes, making strictly following page ownership
complicated and inefficient. The only use case which suffers from
this compromise is multiple cgroups concurrently dirtying disjoint
regions of the same inode, which is an unlikely use case and was
decided to be unsupported. Note that as the memory controller assigns
page ownership on first use and doesn't update it until the page is
released, even if cgroup writeback strictly followed page ownership,
multiple cgroups dirtying overlapping areas wouldn't work as expected.
In general, write-sharing an inode across multiple cgroups is not well
supported.

Filesystem support for cgroup writeback
---------------------------------------

A filesystem can make writeback IOs cgroup-aware by updating
address_space_operations->writepage[s]() to annotate bios using the
following two functions.

* wbc_init_bio(@wbc, @bio)

  Should be called for each bio carrying writeback data; it associates
  the bio with the inode's owner cgroup. Can be called any time
  between bio allocation and submission.

* wbc_account_io(@wbc, @page, @bytes)

  Should be called for each data segment being written out. While
  this function doesn't care exactly when it's called during the
  writeback session, it's easiest and most natural to call it as
  data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting MS_CGROUPWB in ->s_flags. This allows for
selective disabling of cgroup writeback support, which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and, if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion. There is no one easy solution
for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() or using bio_associate_blkcg()
directly.