
The largest scalability problem facing XFS is not one of algorithmic
scalability, but of verification of the filesystem structure. Scalability of
the structures and indexes on disk, and of the algorithms for iterating them,
is adequate for supporting PB scale filesystems with billions of inodes;
however, it is this very scalability that causes the verification problem.
Almost all XFS metadata must be discovered by walking the filesystem
structure. Userspace tools already do this to validate and repair the
structure, but there are limits to what they can verify, and this in turn
limits the supportable size of an XFS filesystem.
For example, it is entirely possible to manually use xfs_db and a bit of
scripting to analyse the structure of a 100TB filesystem when trying to
determine the root cause of a corruption problem, but it is still mainly a
manual task of verifying that things like single bit errors or misplaced
writes weren't the ultimate cause of a corruption event. It may take a few
hours to a few days to perform such forensic analysis, so at this scale root
cause analysis is entirely possible.
However, scale the filesystem up to 1PB and there is now 10x as much metadata
to analyse, and so that analysis blows out towards weeks or months of forensic
work. Most of the analysis work is slow and tedious, so as the amount of
analysis goes up, the more likely it becomes that the root cause is lost in
the noise. Hence the primary concern for supporting PB scale filesystems is
minimising the time and effort required for basic forensic analysis of the
filesystem structure.
One of the problems with the current metadata format is that, apart from the
magic number in the metadata block, we have no other way of identifying what
it is supposed to be. We can't even tell whether it is in the right place. Put
simply, you can't look at a single metadata block in isolation and say "yes,
it is supposed to be there and its contents are valid".

Hence most of the time spent on forensic analysis is spent doing basic
verification of metadata values, looking for values that are in range (and
hence not detected by automated verification checks) but are nonetheless
incorrect.
We therefore need to record more information in the metadata so that we can
quickly determine whether a given object is intact and can be ignored for the
purpose of analysis. We can't protect against every possible type of error,
but we can ensure that common types of errors are easily detectable. Hence the
concept of self describing metadata.
The first, fundamental requirement of self describing metadata is that the
metadata object contains some form of unique identifier in a well known
location. This allows us to identify the expected contents of the block and
hence parse and verify the metadata object. If we can't independently identify
the type of metadata in the object, then the metadata doesn't describe itself
very well at all!
Luckily, almost all XFS metadata already has magic numbers embedded; only the
AGFL, remote symlinks and remote attribute blocks lack identifying magic
numbers. Hence we can change the on-disk format of all these objects to add
more identifying information, and we can detect the change simply by switching
the magic numbers in the metadata objects: an object with the current magic
number is not self identifying, while one with a new magic number is self
identifying and we can do much more expansive automated verification of the
metadata object at runtime, during forensic analysis, or during repair.
As a primary concern, self describing metadata needs some form of overall
integrity checking. We cannot trust the metadata if we cannot verify that it
has not been changed as a result of external influences. Hence we need some
form of integrity check, and this is done by adding CRC32c validation to the
metadata block. If we can verify that the block contains the metadata it was
intended to contain, a large amount of the manual verification work can be
skipped.
CRC32c was selected because XFS metadata cannot be more than 64k in length, so
a 32 bit CRC is more than sufficient to detect multi-bit errors in metadata
blocks, and because CRC32c is hardware accelerated on common CPUs, so it is
fast. So while CRC32c is not the strongest of the possible integrity checks
that could be used, it is more than sufficient for our needs and has
relatively little overhead. Adding support for larger integrity fields and/or
stronger algorithms doesn't really provide any extra value over CRC32c, but it
does add a lot of complexity, and so there is no provision for changing the
integrity checking mechanism.
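
To make the scheme concrete, here is a minimal userspace sketch of CRC32c
protection of a metadata block. Everything here is an illustrative assumption
for the example - the bitwise crc32c() implementation, the header layout and
the names - not the kernel implementation, which uses its own hardware
accelerated crc32c() and per-object layouts:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Software CRC32c (Castagnoli), bitwise, reflected polynomial 0x82F63B78. */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
	const uint8_t *p = buf;

	crc = ~crc;
	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
	}
	return ~crc;
}

/* Assumed header layout - just the fields the CRC scheme needs. */
struct ondisk_hdr {
	uint32_t magic;
	uint32_t crc;		/* CRC of the whole block, crc field zeroed */
};

union metadata_block {
	struct ondisk_hdr hdr;
	uint8_t bytes[512];
};

/* Before write: zero the CRC field, then store the CRC of the whole block. */
static void block_update_cksum(union metadata_block *b)
{
	b->hdr.crc = 0;
	b->hdr.crc = crc32c(0, b, sizeof(*b));
}

/* After read: recompute with the stored CRC zeroed out and compare. */
static int block_verify_cksum(union metadata_block *b)
{
	uint32_t want = b->hdr.crc;
	int ok;

	b->hdr.crc = 0;
	ok = crc32c(0, b, sizeof(*b)) == want;
	b->hdr.crc = want;
	return ok;
}

int main(void)
{
	union metadata_block b;

	memset(&b, 0, sizeof(b));
	b.hdr.magic = 0x58464f4f;	/* arbitrary example magic */

	block_update_cksum(&b);
	printf("clean block verifies:   %s\n",
	       block_verify_cksum(&b) ? "yes" : "no");

	b.bytes[100] ^= 0x04;		/* inject a single bit error */
	printf("corrupt block verifies: %s\n",
	       block_verify_cksum(&b) ? "yes" : "no");
	return 0;
}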
Location information is equally necessary: the metadata must be verifiable as
being in the correct place without reference to any other metadata. Just
adding a block number is not sufficient to protect against misdirected writes,
because a write misdirected to the wrong LUN would still land in the "correct
block" of the wrong filesystem. Hence location information must contain a
filesystem identifier as well as a block number.
Knowing the owner of the block is also important, as it allows us to find
other related metadata to determine the scope of a corruption. For example, if
we have an extent btree block in isolation, we don't know which inode it
belongs to and hence have to walk the entire filesystem to find the owner of
the block. Worse, the corruption could mean that no owner can be found (i.e.
it's an orphan block), and so without an owner field in the metadata we have
no idea of the scope of the corruption. If we have an owner field in the
metadata object, we can immediately do top-down validation to determine the
scope of the problem.
Different types of metadata have different owner identifiers. For example,
directory, attribute and extent tree blocks are all owned by an inode, whilst
freespace btree blocks are owned by an allocation group. Hence the size and
contents of the owner field are determined by the type of metadata object we
are looking at. The owner information can also identify misplaced writes, such
as a freespace btree block written to the wrong AG.
Self describing metadata also needs to contain some indication of when it was
written to the filesystem. One of the key information points when doing
forensic analysis is how recently the block was modified. Correlating a set of
corrupted metadata blocks by modification time is important, as it can
indicate whether the corruptions are related, whether there have been multiple
corruption events, or whether the corruptions are the result of a single
common event.
XFS metadata has no concept of last-write timestamps, but it does have the
filesystem-wide Log Sequence Number (LSN): each metadata object has the LSN of
the most recent transaction that modified it written into it. This number will
always increase over the life of the filesystem, and the only thing that
resets it is running xfs_repair on the filesystem. Further, by use of the LSN
we can tell whether the corrupted metadata all belonged to the same log
checkpoint, and hence have some idea of how much modification occurred between
the first and last instances of corrupt metadata on disk and, further, how
much modification occurred between the corruption being written and it being
detected.
Validation of self-describing metadata takes place at runtime in two places:

	- immediately after a successful read from disk
	- immediately prior to write IO submission
The verification is completely stateless - it is done independently of the
modification process, and seeks only to check that the metadata is what it
says it is and that its fields are within bounds and internally consistent. As
such, we cannot catch all types of corruption that can occur within a block:
there may be limitations that operational state enforces on the metadata, or
there may be corruption of interblock relationships (e.g. corrupted sibling
pointer lists). Hence we still need stateful checking in the main code body,
but in general most of the per-field validation is handled by the verifiers.
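
Mechanically, the two verification points are a pair of callbacks attached to
each buffer. A sketch of the core shape (the structure in current kernels has
grown additional fields, such as a name and a magic number table):

struct xfs_buf_ops {
	/* runs at IO completion, immediately after a successful read */
	void (*verify_read)(struct xfs_buf *);
	/* runs immediately prior to write IO submission */
	void (*verify_write)(struct xfs_buf *);
};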
For read verification, the caller needs to specify the expected type of
metadata that it should see, and the IO completion process verifies that the
metadata object matches what was expected. If the verification process fails,
then it marks the object being read as EFSCORRUPTED. The caller needs to catch
this error (just as for IO errors) and can take special action for a
verification failure, as distinct from a normal IO error, by checking for the
EFSCORRUPTED error value. If we need more discrimination of error type at
higher levels, we can define new error numbers for different errors as
necessary.
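
A hypothetical call site might classify the failure like this sketch;
read_foo_buffer() is an illustrative wrapper, not a real XFS function, and the
error sign convention varies between kernel eras:

static int
xfs_foo_read(
	struct xfs_mount	*mp,
	xfs_daddr_t		blkno,
	struct xfs_buf		**bpp)
{
	int			error;

	error = read_foo_buffer(mp, blkno, bpp);
	if (error == -EFSCORRUPTED) {
		/* the read verifier rejected the block contents */
		xfs_warn(mp, "foo block 0x%llx failed verification",
			 (unsigned long long)blkno);
	}
	/* any other non-zero error is a normal IO error */
	return error;
}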
The first step in read verification is checking the magic number and
determining whether CRC validation is necessary. If it is, the CRC32c is
calculated and compared against the value stored in the object itself. Once
that passes, further checks are made against the location information,
followed by the object specific metadata validation. If any of these checks
fail, then the buffer is flagged as corrupt and the EFSCORRUPTED error is set
appropriately.
Write verification is the opposite of the read verification: first the object
is extensively verified, and only if that passes are the LSN and CRC updated
and the write IO allowed to proceed.
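
A typical self-describing metadata object therefore needs to carry the
following information. This header is a sketch with illustrative names; as
described next, the real on-disk layouts vary by object type:

struct xfs_ondisk_hdr {
	__be32	magic;		/* magic number: identifies object type */
	__be32	crc;		/* CRC32c of the block, not logged */
	uuid_t	uuid;		/* filesystem identifier */
	__be64	owner;		/* parent object */
	__be64	blkno;		/* location on disk */
	__be64	lsn;		/* LSN of last modification */
};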
Depending on the metadata, this information may be part of a header structure
separate from the metadata contents, or it may be distributed through an
existing structure. The latter occurs with metadata that already contains some
of this information, such as the superblock and AG headers.
Other metadata may carry the information in different formats, but the same
level of information is generally provided. For example:

	- short form btree blocks have a 32 bit owner (AG number) and a 32 bit
	  block number for location. The two of these combined provide the
	  same information as the owner and blkno fields above, but using 8
	  bytes less space on disk.

	- directory/attribute node blocks have a 16 bit magic number, and the
	  header that contains the magic number has other information in it as
	  well, so the additional fields are added to that existing header
	  rather than a separate header being placed at the start of the
	  metadata.
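
A typical buffer read verifier might then be structured as follows. This is a
sketch for a hypothetical "foo" object in the style of the XFS buffer
verifiers; helper names such as xfs_verify_cksum() and XFS_CORRUPTION_ERROR()
follow older kernels and have since been renamed:

#define XFS_FOO_CRC_OFF	offsetof(struct xfs_ondisk_hdr, crc)

static void
xfs_foo_read_verify(
	struct xfs_buf	*bp)
{
	struct xfs_mount *mp = bp->b_target->bt_mount;

	/*
	 * Check the CRC first (only if the filesystem has CRCs enabled),
	 * then run the structural checks in xfs_foo_verify().
	 */
	if ((xfs_sb_version_hascrc(&mp->m_sb) &&
	     !xfs_verify_cksum(bp->b_addr, BBTOB(bp->b_length),
			       XFS_FOO_CRC_OFF)) ||
	    !xfs_foo_verify(bp)) {
		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW,
				     mp, bp->b_addr);
		xfs_buf_ioerror(bp, EFSCORRUPTED);
	}
}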
The code ensures that the CRC is only checked if the filesystem has CRCs
enabled, by checking the superblock for the feature bit, and then, if the CRC
verifies OK (or is not needed), it verifies the actual contents of the block.
The verifier function will take a couple of different forms, depending on
whether the magic number can be used to determine the format of the block. In
the case where it can't, the code is structured as follows:
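
(Again a sketch with illustrative names: because the magic number is unchanged
between the old and the CRC-enabled formats, the superblock feature bit
decides whether the self-describing fields exist and can be checked.)

static bool
xfs_foo_verify(
	struct xfs_buf		*bp)
{
	struct xfs_mount	*mp = bp->b_target->bt_mount;
	struct xfs_ondisk_hdr	*hdr = bp->b_addr;

	/* identification: is this the type of metadata we expected? */
	if (hdr->magic != cpu_to_be32(XFS_FOO_MAGIC))
		return false;

	if (xfs_sb_version_hascrc(&mp->m_sb)) {
		/* location: right filesystem, right block? */
		if (!uuid_equal(&hdr->uuid, &mp->m_sb.sb_uuid))
			return false;
		if (bp->b_bn != be64_to_cpu(hdr->blkno))
			return false;
	}

	/* object specific verification checks here */

	return true;
}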
On the write side, the verifier runs as the buffer is submitted for IO. This
will verify the internal structure of the metadata before we go any further,
detecting corruptions that have occurred as the metadata was modified in
memory.
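
A matching write verifier sketch, again with illustrative names (bp->b_fspriv
and xfs_update_cksum() follow older kernels): the object is verified first,
then the LSN of the last modification is stamped from the buffer log item and
the CRC recalculated before the write is issued:

static void
xfs_foo_write_verify(
	struct xfs_buf	*bp)
{
	struct xfs_mount	*mp = bp->b_target->bt_mount;
	struct xfs_buf_log_item	*bip = bp->b_fspriv;

	/* verify before anything else - don't write garbage to disk */
	if (!xfs_foo_verify(bp)) {
		XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW,
				     mp, bp->b_addr);
		xfs_buf_ioerror(bp, EFSCORRUPTED);
		return;
	}

	if (!xfs_sb_version_hascrc(&mp->m_sb))
		return;

	if (bip) {
		struct xfs_ondisk_hdr	*hdr = bp->b_addr;

		/* stamp the LSN of the last modification */
		hdr->lsn = cpu_to_be64(bip->bli_item.li_lsn);
	}
	/* the CRC is calculated last, over the final block contents */
	xfs_update_cksum(bp->b_addr, BBTOB(bp->b_length), XFS_FOO_CRC_OFF);
}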
Inodes and dquots are special: they carry per-object CRCs and self-identifying
information, but multiple objects are packed into each buffer. Hence we do not
use per-buffer verifiers to do the work of per-object verification and CRC
calculation. The per-buffer verifiers simply perform basic identification of
the buffer - that it contains inodes or dquots, and that each object carries
the expected magic number - while the remaining verification is done per
object, as sketched below.
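
A sketch of what that basic per-buffer identification might look like for an
inode buffer; the function name is hypothetical, and only the per-object magic
numbers are checked here:

static bool
xfs_inode_buf_identify(
	struct xfs_buf	*bp)
{
	struct xfs_mount *mp = bp->b_target->bt_mount;
	int		ninodes = BBTOB(bp->b_length) >> mp->m_sb.sb_inodelog;
	int		i;

	/*
	 * Identification only: per-object CRCs and full verification are
	 * done as each inode is read from or written back to the buffer.
	 */
	for (i = 0; i < ninodes; i++) {
		struct xfs_dinode *dip =
			bp->b_addr + (i << mp->m_sb.sb_inodelog);

		if (dip->di_magic != cpu_to_be16(XFS_DINODE_MAGIC))
			return false;
	}
	return true;
}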
The structure of the verifiers and the identifier checks is very similar to
the buffer code described above; the difference is where they are called from.
For example, inode read verification is done when the inode is first read out
of the buffer and the struct xfs_inode is instantiated. The inode is already
extensively verified during writeback in xfs_iflush_int(), so the only
addition on the write side is to make sure the LSN and CRC are updated as the
inode is copied back into the buffer.

XXX: inode unlinked list modification doesn't recalculate the inode CRC! None
of the unlinked list modifications check or update the CRC, neither at unlink
time nor during log recovery, so this needs to be fixed.