
In 2.4, the disk statistics described below are found as additional
fields in /proc/partitions.  In 2.6, the same information is found in
two places: one is in the file /proc/diskstats, and the other is within
the sysfs file system, which must be mounted in order to obtain it.
Throughout this document we'll assume that sysfs is mounted on /sys,
although of course it may be mounted anywhere.  Both sources report the
same underlying counters and so should not differ.

These different formats are compared below.
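Roughly, the layouts differ only in the columns that precede the
statistics fields (counter values are omitted in this sketch):

   2.4  /proc/partitions:    major minor  #blocks  name  <statistics fields>
   2.6  /sys/block/hda/stat:                             <statistics fields>
   2.6  /proc/diskstats:     major minor           name  <statistics fields>

In the example referred to below, the first statistics field for hda is
446216 in every format.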
On 2.4 you might execute "grep 'hda ' /proc/partitions".  On 2.6, you have
a choice of "cat /sys/block/hda/stat" or "grep 'hda ' /proc/diskstats".
The advantage of one over the other is that the sysfs choice works well
if you are watching a known, small set of disks.  /proc/diskstats may
be a better choice if you are watching a large number of disks because
you'll avoid the overhead of 50, 100, or 500 or more opens/closes with
each snapshot of your disk statistics.
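In case you are rolling your own tool, here is a minimal sketch (not
from the original text) of the two approaches.  It assumes a 2.6 kernel
with sysfs mounted on /sys and uses "hda" purely as an example device
name.

#!/usr/bin/env python3
# Sketch: two ways of taking one snapshot of the statistics.

def snapshot_sysfs(devices):
    # One open/close per watched device: fine for a small, known set.
    stats = {}
    for dev in devices:
        with open("/sys/block/%s/stat" % dev) as f:
            stats[dev] = [int(x) for x in f.read().split()]
    return stats

def snapshot_diskstats(devices=None):
    # A single open/close of /proc/diskstats: cheaper when watching
    # hundreds of devices.
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            name = fields[2]
            if devices is None or name in devices:
                stats[name] = [int(x) for x in fields[3:]]
    return stats

if __name__ == "__main__":
    print(snapshot_sysfs(["hda"]))
    print(snapshot_diskstats())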
In 2.4, the statistics fields are those after the device name.  In
the above example, the first field of statistics would be 446216.
By contrast, in 2.6 if you look at /sys/block/hda/stat, you'll find
just the eleven fields, beginning with 446216.  If you look at
/proc/diskstats, the eleven fields will be preceded by the major and
minor device numbers and the device name.  Each of these formats
provides eleven fields of statistics, each meaning exactly the same
things.
All fields except field 9 are cumulative since boot.  Field 9 should
go to zero as I/Os complete; all others only increase.  These are
32-bit unsigned numbers, and on a very busy or long-lived system they
may wrap.  Applications should be prepared to deal with that; unless
your observations are measured in large numbers of minutes or hours,
they should not wrap twice before you notice them.

Each set of stats only applies to the indicated device; if you want
system-wide stats you'll have to find all the devices and sum them all
up.
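If you do sum devices yourself, a small wrap-aware helper keeps the
arithmetic honest.  This sketch is mine, not part of the original
text; it assumes the snapshot_diskstats() helper from the previous
example and whole-disk entries carrying all eleven fields (described
below).

WRAP = 1 << 32                  # the counters are unsigned 32-bit

def delta(old, new):
    # Cumulative fields may wrap at most once between snapshots.
    # Field 9 (index 8) is the exception: it is a gauge, so keep its
    # current value instead of a difference.
    d = [(n - o) % WRAP for o, n in zip(old, new)]
    d[8] = new[8]
    return d

def system_wide(prev, curr):
    # Sum the per-device deltas of every device present in both
    # snapshots (prev and curr map device name -> list of 11 counters).
    total = [0] * 11
    for dev in prev.keys() & curr.keys():
        for i, value in enumerate(delta(prev[dev], curr[dev])):
            total[i] += value
    return total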
Field  1 -- # of reads completed
    This is the total number of reads completed successfully.
Field  2 -- # of reads merged, field 6 -- # of writes merged
    Reads and writes which are adjacent to each other may be merged for
    efficiency.  Thus two 4K reads may become one 8K read before it is
    ultimately handed to the disk, and so it will be counted (and
    queued) as only one I/O.  This field lets you know how often this
    was done.
Field  3 -- # of sectors read
    This is the total number of sectors read successfully.
Field  4 -- # of milliseconds spent reading
    This is the total number of milliseconds spent by all reads (as
    measured from __make_request() to end_that_request_last()).
Field  5 -- # of writes completed
    This is the total number of writes completed successfully.
Field  6 -- # of writes merged
    See the description of field 2.
Field  7 -- # of sectors written
    This is the total number of sectors written successfully.
Field  8 -- # of milliseconds spent writing
    This is the total number of milliseconds spent by all writes (as
    measured from __make_request() to end_that_request_last()).
Field  9 -- # of I/Os currently in progress
    The only field that should go to zero.  Incremented as requests are
    given to the request queue and decremented as they finish.
Field 10 -- # of milliseconds spent doing I/Os
    This field increases so long as field 9 is nonzero.
Field 11 -- weighted # of milliseconds spent doing I/Os
    This field is incremented at each I/O start, I/O completion, I/O
    merge, or read of these stats by the number of I/Os in progress
    (field 9) times the number of milliseconds spent doing I/O since the
    last update of this field.  This can provide an easy measure of both
    I/O completion time and the backlog that may be accumulating.
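As a sketch of how these counters are typically consumed (this example
is mine, not part of the original text), here is how two snapshots of
the eleven fields, taken dt seconds apart, can be turned into derived
figures of the kind tools such as iostat report.  Index 0 corresponds
to field 1 in the list above; "sectors" are the usual 512-byte units.

def derive(prev, curr, dt):
    d = [c - p for p, c in zip(prev, curr)]        # simple deltas
    reads, writes = d[0], d[4]                     # fields 1 and 5
    return {
        "read_kB_per_s":  d[2] * 512 / 1024.0 / dt,     # field 3
        "write_kB_per_s": d[6] * 512 / 1024.0 / dt,     # field 7
        "avg_read_ms":  d[3] / reads if reads else 0.0,    # field 4 / 1
        "avg_write_ms": d[7] / writes if writes else 0.0,  # field 8 / 5
        "utilization":    d[9]  / (dt * 1000.0),   # field 10: busy time
        "avg_queue_size": d[10] / (dt * 1000.0),   # field 11: backlog
    }

Dividing field 10's delta by the wall-clock interval gives the fraction
of time the device was busy, and dividing field 11's delta by the same
interval gives the average number of requests in flight, which is the
"easy measure" referred to above.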
To avoid introducing performance bottlenecks, no locks are held while
modifying these counters.  This implies that minor inaccuracies may be
introduced when changes collide, so (for instance) adding up all the
read I/Os issued per partition should equal those made to the disks,
but due to the lack of locking it may only be very close.

In 2.6, there are counters for each CPU, which make the lack of locking
almost a non-issue.
In 2.6, partitions are handled differently from whole disks: only four
fields are available for a partition, and reads and writes are counted
as issued rather than as completed.

Field  1 -- # of reads issued
    This is the total number of reads issued to this partition.
Field  2 -- # of sectors read
    This is the total number of sectors requested to be read from this
    partition.
Field  3 -- # of writes issued
    This is the total number of writes issued to this partition.
Field  4 -- # of sectors written
    This is the total number of sectors requested to be written to
    this partition.
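On such a kernel /proc/diskstats therefore mixes eleven-field disk
lines with four-field partition lines.  A sketch (mine, not from the
original text) of how a tool might tell them apart; the label names
below are chosen for the example and are not kernel-defined.

DISK_FIELDS = ("reads", "reads_merged", "sectors_read", "read_ms",
               "writes", "writes_merged", "sectors_written", "write_ms",
               "in_flight", "io_ms", "weighted_io_ms")
PART_FIELDS = ("reads_issued", "sectors_read",
               "writes_issued", "sectors_written")

def parse_diskstats_line(line):
    f = line.split()
    name, counters = f[2], [int(x) for x in f[3:]]
    labels = DISK_FIELDS if len(counters) == 11 else PART_FIELDS
    return name, dict(zip(labels, counters))

with open("/proc/diskstats") as fh:
    for entry in fh:
        print(parse_diskstats_line(entry))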
Note that since the address is translated to a disk-relative one, and
no record of the partition-relative address is kept, the subsequent
success or failure of the read cannot be attributed to the partition.
In other words, the number of reads for partitions is counted slightly
before the time of queuing for partitions, and at completion for whole
disks.  This is a subtle distinction that is probably uninteresting for
most cases.
More significant is the error induced by counting the numbers of
reads/writes before merges for partitions and after for disks.  Since a
typical workload usually contains a lot of successive and adjacent
requests, the number of reads/writes issued can be several times higher
than the number of reads/writes completed.
In 2.6.25, the full statistic set is again available for partitions,
and disks and partitions are handled in the same way.  Since we still
don't keep record of the partition-relative address, an operation is
attributed to the partition which contains the first sector of the
request after the eventual merges.  As requests can be merged across
partitions, this could lead to some (probably insignificant)
inaccuracy.
In 2.6, sysfs is not mounted by default.  If your distribution of
Linux hasn't added it already, here's the line you'll want to add to
your /etc/fstab:

none /sys sysfs defaults 0 0