----------------------------------------------------------------------
1. INTRODUCTION

Modern filesystems feature checksumming of data and metadata to
protect against data corruption.  However, the detection of the
corruption is done at read time, which could potentially be months
after the data was written.  At that point the original data that the
application tried to write is most likely lost.

The solution is to ensure that the disk actually stores what the
application meant it to.  Recent additions to both the SCSI family of
protocols (SBC Data Integrity Field, SCC protection proposal) as well
as SATA/T13 (External Path Protection) try to remedy this by adding
support for appending integrity metadata to an I/O.  The integrity
metadata (or protection information in SCSI terminology) includes a
checksum for each sector as well as an incrementing counter that
ensures the individual sectors are written in the right order.  Some
protection schemes additionally verify that the I/O is written to the
right place on disk.

Current storage controllers and devices implement various protective
measures such as checksumming and scrubbing.  But these technologies
work in their own isolated domains or, at best, between adjacent nodes
in the I/O path.  The interesting thing about DIF and the other
integrity extensions is that the protection format is well defined,
and every node in the I/O path can verify the integrity of the I/O and
reject it if corruption is detected.  This allows not only corruption
prevention but also isolation of the point of failure.

----------------------------------------------------------------------
2. THE DATA INTEGRITY EXTENSIONS

As written, the protocol extensions only protect the path between
controller and storage device.  However, many controllers actually
allow the operating system to interact with the integrity metadata
(IMD).  We have been working with several FC/SAS HBA vendors to enable
the protection information to be transferred to and from their
controllers.

The SCSI Data Integrity Field works by appending 8 bytes of protection
information to each sector.  The data + integrity metadata is stored
in 520-byte sectors on disk.  Data + IMD are interleaved when
transferred between the controller and target.  The T13 proposal is
similar.

Because it is highly inconvenient for operating systems to deal with
520- (and 4104-) byte sectors, we approached several HBA vendors and
encouraged them to allow separation of the data and integrity metadata
scatter-gather lists.

The controller will interleave the buffers on write and split them on
read.  This means that Linux can DMA the data buffers to and from
host memory without changes to the page cache.

Also, the 16-bit CRC checksum mandated by both the SCSI and SATA specs
is somewhat heavy to compute in software.  Benchmarks found that
calculating this checksum had a significant impact on system
performance for a number of workloads.  Some controllers allow a
lighter-weight checksum to be used when interfacing with the operating
system.  Emulex, for instance, supports the TCP/IP checksum instead.
The IP checksum received from the OS is converted to the 16-bit CRC
when writing, and vice versa when reading.  This allows the integrity
metadata to be generated by Linux or the application at very low cost
(comparable to software RAID5).

The IP checksum is weaker than the CRC in terms of detecting bit
errors.  However, the real strength lies in the separation of the data
buffers and the integrity metadata.  These two distinct buffers must
match up for an I/O to complete.

The separation of the data and integrity metadata buffers as well as
the choice in checksums is referred to as the Data Integrity
Extensions.  As these extensions are outside the scope of the protocol
bodies (T10, T13), Oracle and its partners are trying to standardize
them within the Storage Networking Industry Association.

----------------------------------------------------------------------
3. KERNEL CHANGES

The data integrity framework in Linux enables protection information
to be pinned to I/Os and sent to/received from controllers that
support it.

The advantage of the integrity extensions in SCSI and SATA is that
they enable us to protect the entire path from application to storage
device.  However, at the same time this is also the biggest
disadvantage: it means that the protection information must be in a
format that can be understood by the disk.

Generally, Linux/POSIX applications are agnostic to the intricacies of
the storage devices they are accessing.  The virtual filesystem switch
and the block layer make things like hardware sector size and
transport protocols completely transparent to the application.

However, this level of detail is required when preparing the
protection information to send to a disk.  Consequently, the very
concept of an end-to-end protection scheme is a layering violation.
It is completely unreasonable for an application to be aware of
whether it is accessing a SCSI or SATA disk.

The data integrity support implemented in Linux attempts to hide this
from the application.  As far as the application (and to some extent
the kernel) is concerned, the integrity metadata is opaque information
that's attached to the I/O.

The current implementation allows the block layer to automatically
generate the protection information for any I/O.  Eventually the
intent is to move the integrity metadata calculation to userspace for
user data.  Metadata and other I/O that originates within the kernel
will still use the automatic generation interface.

Some storage devices allow each hardware sector to be tagged with a
16-bit value.  The owner of this tag space is the owner of the block
device, i.e. the filesystem in most cases.  The filesystem can use
this extra space to tag sectors as it sees fit.  Because the tag
space is limited, the block interface allows tagging bigger chunks by
way of interleaving.  This way, 8*16 bits of information can be
attached to a typical 4KB filesystem block.

This also means that applications such as fsck and mkfs will need
access to manipulate the tags from user space.  A passthrough
interface for this is being worked on.


----------------------------------------------------------------------
4. BLOCK LAYER IMPLEMENTATION DETAILS

4.1 BIO

The data integrity patches add a new field to struct bio when
CONFIG_BLK_DEV_INTEGRITY is enabled.  bio_integrity(bio) returns a
pointer to a struct bip which contains the bio integrity payload.
Essentially a bip is a trimmed-down struct bio which holds a bio_vec
containing the integrity metadata and the required housekeeping
information (bvec pool, vector count, etc.).

A kernel subsystem can enable data integrity protection on a bio by
calling bio_integrity_alloc(bio).  This will allocate and attach the
bip to the bio.

Individual pages containing integrity metadata can subsequently be
attached using bio_integrity_add_page().

bio_free() will automatically free the bip.


4.2 BLOCK DEVICE

Because the format of the protection data is tied to the physical
disk, each block device has been extended with a block integrity
profile (struct blk_integrity).  This optional profile is registered
with the block layer using blk_integrity_register().

The profile contains callback functions for generating and verifying
the protection data, as well as getting and setting application tags.
The profile also contains a few constants to aid in completing,
merging and splitting the integrity metadata.

Layered block devices will need to pick a profile that's appropriate
for all subdevices.  blk_integrity_compare() can help with that.  DM
and MD linear, RAID0 and RAID1 are currently supported.  RAID4/5/6
will require extra work due to the application tag.


----------------------------------------------------------------------
5. BLOCK LAYER INTEGRITY API

5.1 NORMAL FILESYSTEM

    The normal filesystem is unaware that the underlying block device
    is capable of sending/receiving integrity metadata.  The IMD will
    be automatically generated by the block layer at submit_bio() time
    in case of a WRITE.  A READ request will cause the I/O integrity
    to be verified upon completion.

    IMD generation and verification can be toggled using the

      /sys/block/<bdev>/integrity/write_generate

    and

      /sys/block/<bdev>/integrity/read_verify

    flags.
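
    For example, write-time generation could be turned off like this
    (the device name sdb is a placeholder, and the flags only exist
    when the device has a registered integrity profile):

```shell
# Illustrative sysfs usage; "sdb" is a placeholder device name.
echo 0 > /sys/block/sdb/integrity/write_generate  # stop generating IMD on WRITE
cat /sys/block/sdb/integrity/read_verify          # 1 = verify IMD on READ completion
```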


5.2 INTEGRITY-AWARE FILESYSTEM

    A filesystem that is integrity-aware can prepare I/Os with IMD
    attached.  It can also use the application tag space if this is
    supported by the block device.


    int bio_integrity_prep(bio);

      To generate IMD for a WRITE and to set up buffers for a READ,
      the filesystem must call bio_integrity_prep(bio).

      Prior to calling this function, the bio data direction and start
      sector must be set, and the bio should have all data pages
      added.  It is up to the caller to ensure that the bio does not
      change while I/O is in progress.

      bio_integrity_prep() should only be called if
      bio_integrity_enabled() returned 1.


5.3 PASSING EXISTING INTEGRITY METADATA

    Filesystems that either generate their own integrity metadata or
    are capable of transferring IMD from user space can use the
    following calls:


    struct bip * bio_integrity_alloc(bio, gfp_mask, nr_pages);

      Allocates the bio integrity payload and hangs it off of the bio.
      nr_pages indicates how many pages of protection data need to be
      stored in the integrity bio_vec list (similar to bio_alloc()).

      The integrity payload will be freed at bio_free() time.


    int bio_integrity_add_page(bio, page, len, offset);

      Attaches a page containing integrity metadata to an existing
      bio.  The bio must have an existing bip,
      i.e. bio_integrity_alloc() must have been called.  For a WRITE,
      the integrity metadata in the pages must be in a format
      understood by the target device, with the notable exception that
      the sector numbers will be remapped as the request traverses the
      I/O stack.  This implies that the pages added using this call
      will be modified during I/O!  The first reference tag in the
      integrity metadata must have a value of bip->bip_sector.

      Pages can be added using bio_integrity_add_page() as long as
      there is room in the bip bio_vec array (nr_pages).

      Upon completion of a READ operation, the attached pages will
      contain the integrity metadata received from the storage device.
      It is up to the receiver to process them and verify data
      integrity upon completion.


5.4 REGISTERING A BLOCK DEVICE AS CAPABLE OF EXCHANGING INTEGRITY
    METADATA

    To enable integrity exchange on a block device the gendisk must be
    registered as capable:

    int blk_integrity_register(gendisk, blk_integrity);

      The blk_integrity struct is a template and should contain the
      following:

        static struct blk_integrity my_profile = {
            .name        = "STANDARDSBODY-TYPE-VARIANT-CSUM",
            .generate_fn = my_generate_fn,
            .verify_fn   = my_verify_fn,
            .tuple_size  = sizeof(struct my_tuple_size),
            .tag_size    = <tag bytes per hw sector>,
        };

      'name' is a text string which will be visible in sysfs.  This is
      part of the userland API, so choose it carefully and never
      change it.  The format is standards body-type-variant.
      E.g. T10-DIF-TYPE1-IP or T13-EPP-0-CRC.

      'generate_fn' generates appropriate integrity metadata (for a
      WRITE).

      'verify_fn' verifies that the data buffer matches the integrity
      metadata.

      'tuple_size' must be set to match the size of the integrity
      metadata per sector, i.e. 8 for DIF and EPP.

      'tag_size' must be set to identify how many bytes of tag space
      are available per hardware sector.  For DIF this is either 2 or
      0 depending on the value of the Control Mode Page ATO bit.

----------------------------------------------------------------------
2007-12-24 Martin K. Petersen <martin.petersen@oracle.com>
