Lines Matching refs:of
9 It's designed around the performance characteristics of SSDs - it only allocates
18 doesn't even have a notion of a clean shutdown; bcache simply doesn't return
21 Writeback caching can use most of the cache for buffering writes - writing
23 start to the end of the index.
27 it also keeps a rolling average of the IO sizes per task, and as long as the
28 average is above the cutoff it will skip all IO from that task - instead of
32 In the event of a data IO error on the flash it will try to recover by reading
77 but will allow for mirroring of metadata and dirty data in the future. Your new
84 device to a cache set is done as follows, with the UUID of the cache set in
128 read some of the dirty data, though.
132 Bcache has a bunch of config options and tunables. The defaults are intended to
139 running in writeback mode, which isn't the default (not due to a lack of
149 gigabyte file you probably don't want that pushing 10 gigabytes of randomly
150 accessed data out of your cache.
179 - Still getting cache misses, of the same data
189 nodes are huge and index large regions of the device). But when you're
190 benchmarking, if you're trying to warm the cache by reading a bunch of data
202 Echo the UUID of a cache set to this file to enable caching.
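The attach step described above can be sketched as a short shell session. This is a hedged example: the device names and the UUID are placeholders, and the paths follow the standard bcache sysfs layout.

```shell
# Format a backing device and a cache device (make-bcache is from bcache-tools).
make-bcache -B /dev/sdb    # backing device; will appear as /dev/bcache0
make-bcache -C /dev/sdc    # cache device; creates a cache set with a UUID

# Enable caching by echoing the cache set's UUID into the backing
# device's attach file.
echo <CSET-UUID> > /sys/block/bcache0/bcache/attach
```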
205 Can be one of writethrough, writeback, writearound or none.
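The cache mode can be inspected and changed at runtime through sysfs. A minimal sketch, assuming the backing device registered as bcache0:

```shell
# Show the current mode (the active mode is displayed in brackets).
cat /sys/block/bcache0/bcache/cache_mode

# Switch to writeback caching at runtime.
echo writeback > /sys/block/bcache0/bcache/cache_mode
```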
216 Amount of dirty data for this backing device in the cache. Continuously
220 Name of underlying device.
223 Size of readahead that should be performed. Defaults to 0. If set to e.g.
237 If nonzero, bcache keeps a list of the last 128 requests submitted to compare
239 continuations of previous requests for the purpose of determining sequential
244 The backing device can be in one of four different states:
248 clean: Part of a cache set, and there is no cached dirty data.
250 dirty: Part of a cache set, and there is cached dirty data.
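The backing device's current state can be read back from sysfs. A hedged example, again assuming the device registered as bcache0:

```shell
# Prints one of the backing device states (e.g. clean or dirty).
cat /sys/block/bcache0/bcache/state
```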
262 any, waits some number of seconds before initiating writeback. Defaults to
266 If nonzero, bcache tries to keep around this percentage of the cache dirty by
276 If off, writeback of dirty data will not take place at all. Dirty data will
287 Amount of IO (both reads and writes) that has bypassed the cache
306 Count of times readahead occurred.
316 Symlink to each of the attached backing devices.
319 Block size of the cache devices.
322 Amount of memory currently used by the btree cache
325 Size of buckets
328 Symlink to each of the cache devices comprising this cache set.
331 Percentage of cache device which doesn't contain dirty data, and could
340 Amount of dirty data in the cache (updated when garbage collection runs).
357 Percentage of the root btree node in use. If this gets too high the node
365 Depth of the btree (a single node btree has depth 0).
373 This directory also exposes timings for a number of internal operations, with
378 Number of journal entries that are newer than the index.
384 Average fraction of btree in use.
405 Minimum granularity of writes - should match hardware sector size.
408 Sum of all btree writes, in (kilo/mega/giga) bytes
411 Size of buckets
414 One of lru, fifo or random.
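The replacement policy is a per-cache attribute and can be set through sysfs. This sketch assumes the attribute lives under the cache set's directory in /sys/fs/bcache; the UUID and cache index are placeholders:

```shell
# Select the bucket replacement policy for this cache device.
echo lru > /sys/fs/bcache/<CSET-UUID>/cache0/cache_replacement_policy
```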
422 Size of the freelist as a percentage of nbuckets. Can be written to
423 increase the number of buckets kept on the freelist, which lets you
424 artificially reduce the size of the cache at runtime. Mostly for testing
431 Number of errors that have occurred, decayed by io_error_halflife.
434 Sum of all non data writes (btree writes and all other metadata).
441 This can reveal your working set size. Unused is the percentage of
443 metadata overhead. Average is the average priority of cache buckets.
444 Next is a list of quantiles with the priority threshold of each.
447 Sum of all data that has been written to the cache; comparison with
448 btree_written gives the amount of write inflation in bcache.