Lines matching "in":
7 both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
9 and, based on user options, switch IO policies in the background.
12 weight time based division of disk policy. It is implemented in CFQ. Hence
15 on devices. This policy is implemented in the generic block layer and can be
22 You can do a very simple test by running two dd threads in two different
28 - Enable group scheduling in CFQ
46 launch two dd threads in different cgroups to read those files.
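A minimal sketch of that test, assuming a v1 blkio hierarchy mounted at /cgroup and two pre-created files on /mnt/sdb (the mount point, group names and file names are only examples):

	mkdir -p /cgroup
	mount -t cgroup -o blkio none /cgroup
	mkdir -p /cgroup/test1 /cgroup/test2

	# give test1 twice the weight of test2
	echo 1000 > /cgroup/test1/blkio.weight
	echo 500  > /cgroup/test2/blkio.weight

	# drop the page cache so both readers actually hit the disk
	sync
	echo 3 > /proc/sys/vm/drop_caches

	# start one reader per group and move it into that group's tasks file
	dd if=/mnt/sdb/file1 of=/dev/null &
	echo $! > /cgroup/test1/tasks

	dd if=/mnt/sdb/file2 of=/dev/null &
	echo $! > /cgroup/test2/tasks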
62 much disk time (in milliseconds) each group got and how many sectors each
63 group dispatched to the disk. We provide fairness in terms of disk time, so
64 ideally blkio.time of cgroups should be in proportion to their weights.
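While the readers run, the per-device counters can be polled; a sketch assuming the same /cgroup mount as above (blkio.time reports disk time in milliseconds, blkio.sectors the sectors dispatched):

	# disk time (ms) and sectors dispatched, one line per device
	cat /cgroup/test1/blkio.time /cgroup/test1/blkio.sectors
	cat /cgroup/test2/blkio.time /cgroup/test2/blkio.sectors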
71 - Enable throttling in block layer
89 1024+0 records in
115 directly generated by tasks in that cgroup.
131 - Debug help. Right now some additional stats files show up in the cgroup
135 - Enables group scheduling in CFQ. Currently only 1 level of group
139 - Enable block device throttling support in the block layer.
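The build options referred to above map to these Kconfig symbols; a .config fragment might look like this (the debug option only adds the extra stat files):

	CONFIG_BLK_CGROUP=y
	# group scheduling in CFQ
	CONFIG_CFQ_GROUP_IOSCHED=y
	# throttling in the block layer
	CONFIG_BLK_DEV_THROTTLING=y
	# optional debug stats
	CONFIG_DEBUG_BLK_CGROUP=y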
159 Configure weight=300 on /dev/sdb (8:16) in this cgroup
165 Configure weight=500 on /dev/sda (8:0) in this cgroup
172 Remove specific weight for /dev/sda in this cgroup
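A sketch of those per-device weight rules via blkio.weight_device, run from inside the cgroup's directory (writing a weight of 0 removes the per-device rule so the group falls back to blkio.weight):

	# format: <major>:<minor> <weight>
	echo 8:16 300 > blkio.weight_device    # /dev/sdb
	echo 8:0 500  > blkio.weight_device    # /dev/sda
	cat blkio.weight_device

	# remove the /dev/sda rule again
	echo 8:0 0 > blkio.weight_device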
180 deciding how much weight tasks in the given cgroup have while
185 - disk time allocated to cgroup per device in milliseconds. First
187 third field specifies the disk time allocated to group in
212 for the IOs done by this cgroup. This is in nanoseconds to make it
217 of multiple IOs when served out of order which may result in total
222 io_service_time in ns.
225 - Total amount of time the IOs for this cgroup spent waiting in the
233 device). This is in nanoseconds to make it meaningful for flash
237 and the fourth field specifies the io_wait_time in ns.
260 cumulative total of the amount of time spent by each IO in that cgroup
261 waiting in the scheduler queue. This is in nanoseconds. If this is
262 read when the cgroup is in a waiting (for timeslice) state, the stat
270 spent idling for one of the queues of the cgroup. This is in
271 nanoseconds. If this is read when the cgroup is in an empty state,
278 given cgroup in anticipation of a better request than the existing ones
279 from other queues/cgroups. This is in nanoseconds. If this is read
280 when the cgroup is in an idling state, the stat will only report the
300 specified in bytes per second. Rules are per device. Following is
307 specified in bytes per second. Rules are per device. Following is
314 specified in IO per second. Rules are per device. Following is
321 specified in IO per second. Rules are per device. Following is
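A sketch of the four throttling rules, assuming the blkio hierarchy is mounted at /cgroup/blkio and using 8:16 (/dev/sdb) as the example device; writing a limit of 0 removes a rule:

	# cap reads from 8:16 at 1 MB/s and writes at 2 MB/s
	echo "8:16 1048576" > /cgroup/blkio/blkio.throttle.read_bps_device
	echo "8:16 2097152" > /cgroup/blkio/blkio.throttle.write_bps_device

	# or cap the IO rate instead, in IOs per second
	echo "8:16 100" > /cgroup/blkio/blkio.throttle.read_iops_device
	echo "8:16 100" > /cgroup/blkio/blkio.throttle.write_iops_device

	# remove the read bandwidth rule again
	echo "8:16 0" > /cgroup/blkio/blkio.throttle.read_bps_device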
336 blkio.io_serviced does accounting as seen by CFQ and counts are in
338 blkio.throttle.io_serviced counts the number of IOs in terms of the number
358 - Writing an int to this file will result in resetting all the stats
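For example, run from inside the cgroup's directory (any integer value works, it is only a trigger):

	echo 1 > blkio.reset_stats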
373 means that CFQ provides fairness among groups in terms of IOPS and not in
379 setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
380 on the group in an attempt to provide fairness among groups.
386 groups and put applications in that group which are not driving enough
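Both knobs are CFQ iosched tunables under the device's queue directory; a sketch, assuming CFQ is the active scheduler for sdb:

	# disable per-queue idling, keep group idling for inter-group fairness
	cat /sys/block/sdb/queue/iosched/slice_idle
	echo 0 > /sys/block/sdb/queue/iosched/slice_idle
	cat /sys/block/sdb/queue/iosched/group_idle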