Lines Matching refs:cgroup
5 cgroup subsys "blkio" implements the block io controller. There seems to be
8 Plan is to use the same cgroup based management interface for blkio controller
34 mount -t tmpfs cgroup_root /sys/fs/cgroup
35 mkdir /sys/fs/cgroup/blkio
36 mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
39 mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2
42 echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
43 echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight
46 launch two dd threads in different cgroups to read those files.
52 echo $! > /sys/fs/cgroup/blkio/test1/tasks
53 cat /sys/fs/cgroup/blkio/test1/tasks
56 echo $! > /sys/fs/cgroup/blkio/test2/tasks
57 cat /sys/fs/cgroup/blkio/test2/tasks
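Pulling the matched lines above together, a minimal end-to-end sketch of the proportional-weight example; the dd input paths (/mnt/sdb/zerofile1, /mnt/sdb/zerofile2) are assumed test files on the disk being exercised, everything else follows the excerpted commands:

    # Mount the legacy (v1) blkio hierarchy and create two groups
    mount -t tmpfs cgroup_root /sys/fs/cgroup
    mkdir /sys/fs/cgroup/blkio
    mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
    mkdir -p /sys/fs/cgroup/blkio/test1 /sys/fs/cgroup/blkio/test2

    # test1 gets twice the weight of test2
    echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
    echo 500  > /sys/fs/cgroup/blkio/test2/blkio.weight

    # One reader per group; move each dd into its group via the tasks file
    dd if=/mnt/sdb/zerofile1 of=/dev/null &   # assumed test file
    echo $! > /sys/fs/cgroup/blkio/test1/tasks
    cat /sys/fs/cgroup/blkio/test1/tasks

    dd if=/mnt/sdb/zerofile2 of=/dev/null &   # assumed test file
    echo $! > /sys/fs/cgroup/blkio/test2/tasks
    cat /sys/fs/cgroup/blkio/test2/tasks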
75 mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
80 echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device
100 enabled from cgroup side, which currently is a development option and
115 directly generated by tasks in that cgroup.
117 Throttling without "sane_behavior" enabled from cgroup side will
131 - Debug help. Right now some additional stats files show up in cgroup
141 Details of cgroup files
146 - Specifies per cgroup weight. This is the default weight of the group
152 - One can specify per cgroup per device rules using this interface.
159 Configure weight=300 on /dev/sdb (8:16) in this cgroup
165 Configure weight=500 on /dev/sda (8:0) in this cgroup
172 Remove specific weight for /dev/sda in this cgroup
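The three per-device weight operations above map onto writes of "major:minor weight" pairs; a sketch assuming the file is blkio.weight_device (the name is not in the matched lines) and that a weight of 0 drops the rule:

    # weight=300 on /dev/sdb (8:16) and weight=500 on /dev/sda (8:0)
    echo "8:16 300" > /sys/fs/cgroup/blkio/test1/blkio.weight_device
    echo "8:0 500"  > /sys/fs/cgroup/blkio/test1/blkio.weight_device
    cat /sys/fs/cgroup/blkio/test1/blkio.weight_device

    # Remove the /dev/sda rule; the group falls back to blkio.weight
    echo "8:0 0" > /sys/fs/cgroup/blkio/test1/blkio.weight_device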
180 deciding how much weight tasks in the given cgroup have while
181 competing with the cgroup's child cgroups. For details,
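The two lines above describe the leaf weight used when a group's own tasks compete with its child cgroups; a sketch assuming the conventional file names blkio.leaf_weight and blkio.leaf_weight_device:

    # Weight of test1's own tasks relative to test1's children (names assumed)
    echo 500        > /sys/fs/cgroup/blkio/test1/blkio.leaf_weight
    echo "8:16 200" > /sys/fs/cgroup/blkio/test1/blkio.leaf_weight_device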
185 - disk time allocated to cgroup per device in milliseconds. First
212 for the IOs done by this cgroup. This is in nanoseconds to make it
225 - Total amount of time the IOs for this cgroup spent waiting in the
228 measure of total time the cgroup spent waiting but rather a measure of
241 cgroup. This is further divided by the type of operation - read or
246 cgroup. This is further divided by the type of operation - read or
251 The average queue size for this cgroup over the entire time of this
252 cgroup's existence. Queue size samples are taken each time one of the
253 queues of this cgroup gets a timeslice.
257 This is the amount of time the cgroup had to wait since it became busy
260 cumulative total of the amount of time spent by each IO in that cgroup
262 read when the cgroup is in a waiting (for timeslice) state, the stat
268 This is the amount of time a cgroup spends without any pending
270 spent idling for one of the queues of the cgroup. This is in
271 nanoseconds. If this is read when the cgroup is in an empty state,
278 given cgroup in anticipation of a better request than the existing ones
280 when the cgroup is in an idling state, the stat will only report the
347 for that cgroup.
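The statistics fragments above (disk time, service/wait time, queue size, group wait, empty and idle time) are plain read-only files; a sketch assuming the conventional blkio.* stat names, with the queue/wait/empty/idle files only present when CONFIG_DEBUG_BLK_CGROUP is enabled, and assuming the last match refers to the stats-reset file:

    # Per-device stats; entries are "major:minor value" or "major:minor op value"
    cat /sys/fs/cgroup/blkio/test1/blkio.time
    cat /sys/fs/cgroup/blkio/test1/blkio.io_service_time
    cat /sys/fs/cgroup/blkio/test1/blkio.io_wait_time

    # Debug-only stats (require CONFIG_DEBUG_BLK_CGROUP)
    cat /sys/fs/cgroup/blkio/test1/blkio.avg_queue_size
    cat /sys/fs/cgroup/blkio/test1/blkio.group_wait_time
    cat /sys/fs/cgroup/blkio/test1/blkio.empty_time
    cat /sys/fs/cgroup/blkio/test1/blkio.idle_time

    # Writing an int resets all stats for the group (file name assumed)
    echo 1 > /sys/fs/cgroup/blkio/test1/blkio.reset_stats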
387 On traditional cgroup hierarchies, relationships between different
389 to operate accounting for cgroup resource restrictions and all
390 writeback IOs are attributed to the root cgroup.
393 and the filesystem supports cgroup writeback, writeback operations
397 Writeback examines both system-wide and per-cgroup dirty memory status
406 basis. cgroup writeback bridges the gap by tracking ownership by
418 released, even if cgroup writeback strictly follows page ownership,
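Since per-cgroup writeback only engages on a cgroup2 hierarchy with both the memory and io controllers enabled (and a filesystem that supports it), a minimal sketch; the mount point and group name are assumptions:

    # Mount a cgroup2 hierarchy and enable the controllers writeback needs
    mount -t cgroup2 none /mnt/cgroup2
    echo "+memory +io" > /mnt/cgroup2/cgroup.subtree_control
    mkdir /mnt/cgroup2/wbtest
    echo $$ > /mnt/cgroup2/wbtest/cgroup.procs   # pages dirtied from here are attributed to wbtest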
423 Filesystem support for cgroup writeback
426 A filesystem can make writeback IOs cgroup-aware by updating
433 the bio with the inode's owner cgroup. Can be called anytime
443 With writeback bios annotated, cgroup support can be enabled per
445 selective disabling of cgroup writeback support which is helpful when
449 wbc_init_bio() binds the specified bio to its cgroup. Depending on