
There seems to be a need of various kinds of IO control policies (like
proportional BW, max BW) both at leaf nodes as well as at intermediate
nodes in a storage hierarchy. Currently two IO control policies are
implemented. The first one is a proportional weight time based division
of disk policy. It is implemented in CFQ; hence this policy takes effect
only when CFQ is being used.
Proportional Weight division of bandwidth
-----------------------------------------
You can do a very simple test of running two dd threads in two different
cgroups. Here is what you can do.
- Set weights of group test1 and test2
  dd if=/mnt/sdb/zerofile1 of=/dev/null &
  dd if=/mnt/sdb/zerofile2 of=/dev/null &
- At a macro level, the first dd should finish first. To get more precise
  data, keep looking (with the help of a script) at the blkio.disk_time and
  blkio.disk_sectors files of both test1 and test2 groups. This will tell how
  much disk time (in milliseconds) each group got and how many sectors each
  group dispatched to the disk. We provide fairness in terms of disk time, so
  ideally io.disk_time of cgroups should be in proportion to the weight.
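The test above can be sketched end to end as a shell session. This is a
hedged sketch, not part of the original document: it assumes the blkio
controller is mounted at /sys/fs/cgroup/blkio, that /dev/sdb is mounted
on /mnt/sdb with zerofile1/zerofile2 already created, and that you run it
as root. The stat file names blkio.disk_time/blkio.disk_sectors follow the
text above; some kernel versions expose them as blkio.time/blkio.sectors.

```shell
# Create the two groups and give test1 twice the weight of test2.
mkdir -p /sys/fs/cgroup/blkio/test1 /sys/fs/cgroup/blkio/test2
echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
echo 500  > /sys/fs/cgroup/blkio/test2/blkio.weight

# Drop caches so both dd threads really hit the disk.
sync; echo 3 > /proc/sys/vm/drop_caches

# Writing this shell's PID ($$) to "tasks" moves it (and subsequently
# spawned children) into the group, so each dd lands in its own cgroup.
echo $$ > /sys/fs/cgroup/blkio/test1/tasks
dd if=/mnt/sdb/zerofile1 of=/dev/null &
echo $$ > /sys/fs/cgroup/blkio/test2/tasks
dd if=/mnt/sdb/zerofile2 of=/dev/null &
wait

# Compare disk time received; ideally in roughly 2:1 proportion.
cat /sys/fs/cgroup/blkio/test1/blkio.disk_time \
    /sys/fs/cgroup/blkio/test2/blkio.disk_time
```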
Above will put a limit of 1MB/second on reads happening for the root group
on the specified device.

# dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024
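A fuller sketch of the throttling test, with assumptions not in the text
above: 8:16 stands in for your device's actual major:minor numbers (check
with `ls -l /dev/sdb`), and the file lives on that device. Reading with
`iflag=direct` bypasses the page cache so the limit is actually observed.

```shell
# Cap reads for the root group at 1MB/s on device 8:16 (assumed numbers).
echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

# 4MB of direct reads should now take roughly 4 seconds.
dd iflag=direct if=/mnt/common/zerofile of=/dev/null bs=4K count=1024
```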
- Enables group scheduling in CFQ. Currently only 1 level of group
  creation is allowed.
Details of cgroup files
=======================
- blkio.weight
  - Specifies per cgroup weight. This is the default weight of the group
    on all the devices until and unless overridden by a per device rule
    (see blkio.weight_device). Currently the allowed range of weights is
    from 10 to 1000.
- blkio.weight_device
  - One can specify per cgroup per device rules using this interface.
    These rules override the default value of group weight as specified
    by blkio.weight.
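A hedged sketch of using the per-device interface; 8:16 is an assumed
major:minor pair and test1 an assumed cgroup, both illustrative only.

```shell
# Give cgroup test1 weight 300 on device 8:16 only; the default
# blkio.weight continues to apply on all other devices.
echo "8:16 300" > /sys/fs/cgroup/blkio/test1/blkio.weight_device

# Inspect the active per-device rules.
cat /sys/fs/cgroup/blkio/test1/blkio.weight_device

# Writing weight 0 for a device removes its rule.
echo "8:16 0" > /sys/fs/cgroup/blkio/test1/blkio.weight_device
```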
- blkio.leaf_weight[_device]
  - Equivalents of blkio.weight[_device] for the purpose of deciding how
    much weight tasks in the given cgroup have while competing with the
    cgroup's child cgroups.
- blkio.time
  - Disk time allocated to the cgroup per device in milliseconds. First
    two fields specify the major and minor number of the device and the
    third field specifies the disk time allocated to the group in
    milliseconds.
- blkio.sectors
  - Number of sectors transferred to/from disk by the group. First
    two fields specify the major and minor number of the device and the
    third field specifies the number of sectors transferred by the
    group to/from the specified device.
- blkio.io_service_bytes
  - Number of bytes transferred to/from the disk by the group. These
    are further divided by the type of operation - read or write, sync
    or async. First two fields specify the major and minor number of the
    device, the third field specifies the operation type and the fourth
    field specifies the number of bytes.
- blkio.io_serviced
  - Number of IOs (bio) issued to the disk by the group. These
    are further divided by the type of operation - read or write, sync
    or async. First two fields specify the major and minor number of the
    device, the third field specifies the operation type and the fourth
    field specifies the number of IOs.
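The per-device stat files described above share one row layout:
major:minor, operation type, value, usually followed by a Total line. A
short awk filter can aggregate such rows per operation; the sample values
piped in below are invented for illustration, not real output.

```shell
# Sum "major:minor  operation  value" rows per operation type, skipping
# the trailing "Total" row. Sample numbers are made up.
printf '8:16 Read 1048576\n8:16 Write 524288\n8:0 Read 4096\nTotal 1576960\n' \
  | awk '$1 != "Total" { sum[$2] += $3 } END { for (op in sum) print op, sum[op] }' \
  | sort
# Prints:
#   Read 1052672
#   Write 524288
```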
- blkio.io_service_time
  - Total amount of time between request dispatch and request completion
    for the IOs done by this cgroup. This is in nanoseconds to make it
    meaningful for flash devices too. For devices with a queue depth of 1,
    this time represents the actual service time. When queue_depth > 1,
    that is no longer true as requests may be served out of order. This
    may cause the service time for a given IO to include the service time
    of multiple IOs when served out of order, which may result in total
    io_service_time > actual time elapsed. This time is further divided by
    the type of operation - read or write, sync or async. First two fields
    specify the major and minor number of the device, the third field
    specifies the operation type and the fourth field specifies the
    io_service_time in nanoseconds.
- blkio.io_wait_time
  - Total amount of time the IOs for this cgroup spent waiting in the
    scheduler queues for service. This can be greater than the total time
    elapsed since it is cumulative over all IOs. It is not a direct
    measure of total time the cgroup spent waiting but rather a measure of
    the wait_time of its individual IOs. For devices with queue_depth > 1
    this metric does not include the time spent waiting for service once
    the IO is dispatched to the device but not yet serviced
    (there might be a time lag here due to re-ordering of requests by the
    device). This is in nanoseconds to make it meaningful for flash
    devices too. This time is further divided by the type of operation -
    read or write, sync or async. First two fields specify the major and
    minor number of the device, the third field specifies the operation type
    and the fourth field specifies the io_wait_time in nanoseconds.
- blkio.io_merged
  - Total number of bios/requests merged into requests belonging to this
    cgroup. This is further divided by the type of operation - read or
    write, sync or async.
- blkio.io_queued
  - Total number of requests queued up at any given instant for this
    cgroup. This is further divided by the type of operation - read or
    write, sync or async.
- blkio.avg_queue_size
  - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
    The average queue size for this cgroup over the entire time of this
    cgroup's existence. Queue size samples are taken each time one of the
    queues of this cgroup gets a timeslice.
- blkio.group_wait_time
  - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
    This is the amount of time the cgroup had to wait since it became busy
    (i.e., went from 0 to 1 request queued) to get a timeslice for one of
    its queues. This is different from io_wait_time, which is the
    cumulative total of the amount of time spent by each IO in that cgroup
    waiting in the scheduler queue.
- blkio.empty_time
  - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
    This is the amount of time a cgroup spends without any pending
    requests when not being served, i.e., it does not include any time
    spent idling for one of the queues of the cgroup. This is in
    nanoseconds.
- blkio.idle_time
  - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
    This is the amount of time spent by the IO scheduler idling for a
    given cgroup in anticipation of a better request than the existing ones
    from other queues/cgroups. This is in nanoseconds.
- blkio.dequeue
  - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This gives
    the statistics about how many times a group was dequeued
    from the service tree of the device. First two fields specify the major
    and minor number of the device and the third field specifies the number
    of times a group was dequeued from a particular device.
- blkio.*_recursive
  - Recursive versions of various stats. These files show the
    same information as their non-recursive counterparts but include
    stats from all the descendant cgroups.
- blkio.throttle.io_serviced
  - Number of IOs (bio) issued to the disk by the group. These
    are further divided by the type of operation - read or write, sync
    or async. First two fields specify the major and minor number of the
    device, the third field specifies the operation type and the fourth
    field specifies the number of IOs.
- blkio.throttle.io_service_bytes
  - Number of bytes transferred to/from the disk by the group. These
    are further divided by the type of operation - read or write, sync
    or async. First two fields specify the major and minor number of the
    device, the third field specifies the operation type and the fourth
    field specifies the number of bytes.
That means CFQ will not idle between the cfq queues of a cfq group and hence
will be able to drive a higher queue depth and achieve better throughput.
That also means that cfq provides fairness among groups in terms of IOPS
and not in terms of disk time.
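The tunables involved can be sketched as below. This is a hedged example,
assuming CFQ is the active scheduler for an example device sdb; the
iosched sysfs paths are standard for CFQ but worth verifying on your
kernel.

```shell
# Confirm CFQ is the scheduler for the device.
cat /sys/block/sdb/queue/scheduler

# Disable idling between individual cfq queues. CFQ then idles only at
# the group level (group_idle, non-zero by default), preserving group
# fairness but measured in IOPS rather than disk time.
echo 0 > /sys/block/sdb/queue/iosched/slice_idle
cat /sys/block/sdb/queue/iosched/group_idle
```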
Writeback sits between the memory and IO domains and regulates the
proportion of dirty memory by balancing dirtying and write IOs.
and enforces the more restrictive of the two. Also, writeback control
regions of the same inode, which is an unlikely use case and decided
selective disabling of cgroup writeback support which is helpful when