Introduction
============

This document describes a collection of device-mapper targets that
between them implement thin-provisioning and snapshots.

The main highlight of this implementation, compared to the previous
implementation of snapshots, is that it allows many virtual devices to
be stored on the same data volume. This simplifies administration and
allows the sharing of data between volumes, thus reducing disk usage.

Another significant feature is support for an arbitrary depth of
recursive snapshots (snapshots of snapshots of snapshots ...). The
previous implementation of snapshots did this by chaining together
lookup tables, and so performance was O(depth). This new
implementation uses a single data structure to avoid this degradation
with depth. Fragmentation may still be an issue, however, in some
scenarios.

Metadata is stored on a separate device from data, giving the
administrator some freedom, for example to:

- Improve metadata resilience by storing metadata on a mirrored volume
  but data on a non-mirrored one.

- Improve performance by storing the metadata on SSD.

Status
======

These targets are very much still in the EXPERIMENTAL state. Please
do not yet rely on them in production. But do experiment and offer us
feedback. Different use cases will have different performance
characteristics, for example due to fragmentation of the data volume.

If you find this software is not performing as expected please mail
dm-devel@redhat.com with details and we'll try our best to improve
things for you.

Userspace tools for checking and repairing the metadata are under
development.

Cookbook
========

This section describes some quick recipes for using thin provisioning.
They use the dmsetup program to control the device-mapper driver
directly. End users will be advised to use a higher-level volume
manager such as LVM2 once support has been added.

Pool device
-----------

The pool device ties together the metadata volume and the data volume.
It maps I/O linearly to the data volume and updates the metadata via
two mechanisms:

- Function calls from the thin targets

- Device-mapper 'messages' from userspace which control the creation of new
  virtual devices amongst other things.

Setting up a fresh pool device
------------------------------

Setting up a pool device requires a valid metadata device and a
data device. If you do not have an existing metadata device you can
make one by zeroing the first 4k to indicate empty metadata.

    dd if=/dev/zero of=$metadata_dev bs=4096 count=1

The amount of metadata you need will vary according to how many blocks
are shared between thin devices (i.e. through snapshots). If you have
less sharing than average you'll need a larger-than-average metadata device.

As a guide, we suggest you calculate the number of bytes to use in the
metadata device as 48 * $data_dev_size / $data_block_size but round it up
to 2MB if the answer is smaller. If you're creating large numbers of
snapshots which are recording large amounts of change, you may find you
need to increase this.

The largest size supported is 16GB: if the device is larger,
a warning will be issued and the excess space will not be used.
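
As a minimal shell sketch of this calculation (assuming, as elsewhere
in this document, that both $data_dev_size and $data_block_size are
expressed in 512-byte sectors, and that $data_dev and
$data_block_size are already set):

    # Estimate the metadata device size in bytes, rounding up to 2MB.
    data_dev_size=$(blockdev --getsz "$data_dev")   # 512-byte sectors
    meta_bytes=$(( 48 * data_dev_size / data_block_size ))
    min_bytes=$(( 2 * 1024 * 1024 ))
    [ "$meta_bytes" -lt "$min_bytes" ] && meta_bytes=$min_bytes
    echo "suggested metadata device size: $meta_bytes bytes"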

Reloading a pool table
----------------------

You may reload a pool's table; indeed, this is how the pool is resized
if it runs out of space. (N.B. While specifying a different metadata
device when reloading is not forbidden at the moment, things will go
wrong if it does not route I/O to exactly the same on-disk location as
previously.)

Using an existing pool device
-----------------------------

    dmsetup create pool \
        --table "0 20971520 thin-pool $metadata_dev $data_dev \
                 $data_block_size $low_water_mark"

$data_block_size gives the smallest unit of disk space that can be
allocated at a time, expressed in units of 512-byte sectors.
$data_block_size must be between 128 (64KB) and 2097152 (1GB) and a
multiple of 128 (64KB). $data_block_size cannot be changed after the
thin-pool is created. People primarily interested in thin provisioning
may want to use a value such as 1024 (512KB). People doing lots of
snapshotting may want a smaller value such as 128 (64KB). If you are
not zeroing newly-allocated data, a larger $data_block_size in the
region of 256000 (128MB) is suggested.

$low_water_mark is expressed in blocks of size $data_block_size. If
free space on the data device drops below this level then a dm event
will be triggered, which a userspace daemon should catch, allowing it
to extend the pool device. Only one such event will be sent.
Resuming a device with a new table itself triggers an event, so the
userspace daemon can use this to detect a situation where a new table
already exceeds the threshold.

A low water mark for the metadata device is maintained in the kernel and
will trigger a dm event if free space on the metadata device drops below
it.
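
A userspace daemon might watch for these events with a loop like the
following sketch (simplified; the status checking and resizing steps
are only hinted at and would need filling in):

    # Sketch: wait for dm events on the pool and react to them.
    while true; do
        event_nr=$(dmsetup info -c --noheadings -o events pool)
        dmsetup wait pool "$event_nr"    # blocks until the next event
        # ... inspect 'dmsetup status pool' and, if free space is
        # low, reload the pool table with a larger data device ...
    done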

Updating on-disk metadata
-------------------------

On-disk metadata is committed every time a FLUSH or FUA bio is written.
If no such requests are made then commits will occur every second. This
means the thin-provisioning target behaves like a physical disk that has
a volatile write cache. If power is lost you may lose some recent
writes. The metadata should always be consistent in spite of any crash.

If data space is exhausted the pool will either error or queue IO
according to the configuration (see: error_if_no_space). If metadata
space is exhausted or a metadata operation fails, the pool will error IO
until the pool is taken offline and repair is performed to 1) fix any
potential inconsistencies and 2) clear the flag that imposes repair.
Once the pool's metadata device is repaired it may be resized, which
will allow the pool to return to normal operation. Note that if a pool
is flagged as needing repair, the pool's data and metadata devices
cannot be resized until repair is performed. It should also be noted
that when the pool's metadata space is exhausted the current metadata
transaction is aborted. Given that the pool will cache IO whose
completion may have already been acknowledged to upper IO layers
(e.g. filesystem), it is strongly suggested that consistency checks
(e.g. fsck) be performed on those layers when repair of the pool is
required.

Thin provisioning
-----------------

i) Creating a new thinly-provisioned volume.

   To create a new thinly-provisioned volume you must send a message to an
   active pool device, /dev/mapper/pool in this example.

       dmsetup message /dev/mapper/pool 0 "create_thin 0"

   Here '0' is an identifier for the volume, a 24-bit number. It's up
   to the caller to allocate and manage these identifiers. If the
   identifier is already in use, the message will fail with -EEXIST.

ii) Using a thinly-provisioned volume.

   Thinly-provisioned volumes are activated using the 'thin' target:

       dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"

   The last parameter is the identifier for the thinp device.

Internal snapshots
------------------

i) Creating an internal snapshot.

   Snapshots are created with another message to the pool.

   N.B. If the origin device that you wish to snapshot is active, you
   must suspend it before creating the snapshot to avoid corruption.
   This is NOT enforced at the moment, so please be careful!

       dmsetup suspend /dev/mapper/thin
       dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
       dmsetup resume /dev/mapper/thin

   Here '1' is the identifier for the volume, a 24-bit number. '0' is the
   identifier for the origin device.

ii) Using an internal snapshot.

   Once created, the user doesn't have to worry about any connection
   between the origin and the snapshot. Indeed the snapshot is no
   different from any other thinly-provisioned device and can be
   snapshotted itself via the same method. It's perfectly legal to
   have only one of them active, and there's no ordering requirement on
   activating or removing them both. (This differs from conventional
   device-mapper snapshots.)

   Activate it exactly the same way as any other thinly-provisioned volume:

       dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 1"

External snapshots
------------------

You can use an external _read only_ device as an origin for a
thinly-provisioned volume. Any read to an unprovisioned area of the
thin device will be passed through to the origin. Writes trigger
the allocation of new blocks as usual.

One use case for this is VM hosts that want to run guests on
thinly-provisioned volumes but have the base image on another device
(possibly shared between many VMs).

You must not write to the origin device if you use this technique!
Of course, you may write to the thin device and take internal snapshots
of the thin volume.

i) Creating a snapshot of an external device

   This is the same as creating a thin device.
   You don't mention the origin at this stage.

       dmsetup message /dev/mapper/pool 0 "create_thin 0"

ii) Using a snapshot of an external device.

   Append an extra parameter to the thin target specifying the origin:

       dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 0 /dev/image"

   N.B. All descendants (internal snapshots) of this snapshot require the
   same extra origin parameter.

Deactivation
------------

All devices using a pool must be deactivated before the pool itself
can be.

    dmsetup remove thin
    dmsetup remove snap
    dmsetup remove pool

Reference
=========

'thin-pool' target
------------------

i) Constructor

    thin-pool <metadata dev> <data dev> <data block size (sectors)> \
              <low water mark (blocks)> [<number of feature args> [<arg>]*]

    Optional feature arguments:

      skip_block_zeroing: Skip the zeroing of newly-provisioned blocks.

      ignore_discard: Disable discard support.

      no_discard_passdown: Don't pass discards down to the underlying
                           data device, but just remove the mapping.

      read_only: Don't allow any changes to be made to the pool
                 metadata.

      error_if_no_space: Error IOs, instead of queueing, if no space.

    Data block size must be between 64KB (128 sectors) and 1GB
    (2097152 sectors) inclusive.
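
    For example (the sizes are hypothetical; $metadata_dev and
    $data_dev are as in the Cookbook above), a pool that skips block
    zeroing and does not pass discards down could be loaded with:

        dmsetup create pool \
            --table "0 20971520 thin-pool $metadata_dev $data_dev \
                     128 16384 2 skip_block_zeroing no_discard_passdown"

    Here '2' is the number of feature arguments that follow it.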

ii) Status

    <transaction id> <used metadata blocks>/<total metadata blocks>
    <used data blocks>/<total data blocks> <held metadata root>
    [no_]discard_passdown ro|rw

    transaction id:
        A 64-bit number used by userspace to help synchronise with metadata
        from volume managers.

    used data blocks / total data blocks:
        If the number of free blocks drops below the pool's low water mark a
        dm event will be sent to userspace. This event is edge-triggered and
        it will occur only once after each resume so volume manager writers
        should register for the event and then check the target's status.

    held metadata root:
        The location, in blocks, of the metadata root that has been
        'held' for userspace read access. '-' indicates there is no
        held root.

    discard_passdown|no_discard_passdown:
        Whether or not discards are actually being passed down to the
        underlying device. Even if this is enabled when the table is
        loaded, it can get disabled if the underlying device doesn't
        support it.

    ro|rw:
        If the pool encounters certain types of device failures it will
        drop into a read-only metadata mode in which no changes to
        the pool metadata (like allocating new blocks) are permitted.

        In serious cases where even a read-only mode is deemed unsafe
        no further I/O will be permitted and the status will just
        contain the string 'Fail'. The userspace recovery tools
        should then be used.

    error_if_no_space|queue_if_no_space:
        If the pool runs out of data or metadata space, the pool will
        either queue or error the IO destined to the data device. The
        default is to queue the IO until more space is added or the
        'no_space_timeout' expires. The 'no_space_timeout' dm-thin-pool
        module parameter can be used to change this timeout -- it
        defaults to 60 seconds but may be disabled using a value of 0.

iii) Messages

    create_thin <dev id>

        Create a new thinly-provisioned device.
        <dev id> is an arbitrary unique 24-bit identifier chosen by
        the caller.

    create_snap <dev id> <origin id>

        Create a new snapshot of another thinly-provisioned device.
        <dev id> is an arbitrary unique 24-bit identifier chosen by
        the caller.
        <origin id> is the identifier of the thinly-provisioned device
        of which the new device will be a snapshot.

    delete <dev id>

        Deletes a thin device. Irreversible.

    set_transaction_id <current id> <new id>

        Userland volume managers, such as LVM, need a way to
        synchronise their external metadata with the internal metadata of
        the pool target. The thin-pool target offers to store an
        arbitrary 64-bit transaction id and return it on the target's
        status line. To avoid races you must provide what you think
        the current transaction id is when you change it with this
        compare-and-swap message.

    reserve_metadata_snap

        Reserve a copy of the data mapping btree for use by userland.
        This allows userland to inspect the mappings as they were when
        this message was executed. Use the pool's status command to
        get the root block associated with the metadata snapshot.

    release_metadata_snap

        Release a previously reserved copy of the data mapping btree.
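
    For instance (a sketch using the pool device from the Cookbook),
    a volume manager could bump the transaction id and hold a
    metadata snapshot while it inspects the mappings:

        # Compare-and-swap the transaction id from 0 to 1.
        dmsetup message /dev/mapper/pool 0 "set_transaction_id 0 1"

        # Hold a metadata snapshot; its root block then appears in
        # the <held metadata root> field of the pool's status.
        dmsetup message /dev/mapper/pool 0 "reserve_metadata_snap"
        dmsetup status pool
        # ... inspect the mappings via the held root ...
        dmsetup message /dev/mapper/pool 0 "release_metadata_snap"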

'thin' target
-------------

i) Constructor

    thin <pool dev> <dev id> [<external origin dev>]

    pool dev:
        the thin-pool device, e.g. /dev/mapper/my_pool or 253:0

    dev id:
        the internal device identifier of the device to be
        activated.

    external origin dev:
        an optional block device outside the pool to be treated as a
        read-only snapshot origin: reads to unprovisioned areas of the
        thin target will be mapped to this device.

The pool doesn't store any size against the thin devices. If you
load a thin target that is smaller than you've been using previously,
then you'll have no access to blocks mapped beyond the end. If you
load a target that is bigger than before, then extra blocks will be
provisioned as and when needed.

ii) Status

    <nr mapped sectors> <highest mapped sector>

    If the pool has encountered device errors and failed, the status
    will just contain the string 'Fail'. The userspace recovery
    tools should then be used.
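
    For example, querying a thin device's status (the numbers shown
    are purely illustrative):

        dmsetup status thin
        # -> 0 2097152 thin 2048 2047
        # i.e. 2048 sectors are mapped and the highest mapped
        # sector is 2047.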