                     Dynamic DMA mapping Guide
                     =========================

                 David S. Miller <davem@redhat.com>
                 Richard Henderson <rth@cygnus.com>
                  Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.

                       CPU and DMA addresses

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses.  Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a "void *".

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t".  The kernel manages device resources like registers as
physical addresses.  These are the addresses in /proc/iomem.  The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address".  If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses.  In some
systems, bus addresses are identical to CPU physical addresses, but in
general they are not.  IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space.  For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples:

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system.  For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B).  The address B
is stored in a struct resource and usually exposed via /proc/iomem.  When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C).  It can then use, e.g., ioread32(C), to access
the device registers at bus address A.
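To make the A/B/C chain concrete, here is a minimal sketch for a
hypothetical PCI device.  pci_resource_start(), pci_resource_len(),
ioremap() and ioread32() are real interfaces, but the device, its use of
BAR 0, and the MYDEV_REG_STATUS register offset are assumptions made only
for illustration:

    void __iomem *regs;     /* C: kernel virtual address */
    resource_size_t phys;   /* B: CPU physical address */
    u32 status;

    /* B was discovered during enumeration; for PCI it is the start
     * of the struct resource for (here) BAR 0.
     */
    phys = pci_resource_start(pdev, 0);

    /* Map physical address B to a virtual address C. */
    regs = ioremap(phys, pci_resource_len(pdev, 0));
    if (!regs)
        goto err;

    /* Accesses through C reach the registers at bus address A. */
    status = ioread32(regs + MYDEV_REG_STATUS);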
If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y.  But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y.  This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z.  The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

For Linux to use dynamic DMA mapping, it needs some help from the
drivers: DMA addresses must be mapped only for the time they are actually
used, and unmapped once the DMA transfer is complete.

Of course, the following API works even on platforms where no such
translation hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture.  You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure

    #include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.

                     What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different than the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.
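To make these rules concrete, here is a short, hypothetical sketch;
BUF_SIZE and the surrounding function are assumptions made only for
illustration:

    void *buf, *vbuf;
    char stack_buf[64];

    /* DMA-safe: memory from the page allocator or kmalloc() */
    buf = kmalloc(BUF_SIZE, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;
    /* buf may later be handed to dma_map_single() (see below) */

    /* NOT DMA-safe: never hand addresses like these to the DMA
     * mapping interfaces.
     */
    vbuf = vmalloc(BUF_SIZE);   /* vmalloc() address */
    /* likewise stack_buf (stack) and kmap() return values */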
                    DMA addressing limitations

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low order 24-bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32-bits.  For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices
to support 64-bit addressing (DAC) for all transactions.  And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has.  It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues with respect
to your device.

The query is performed via a call to dma_set_mask_and_coherent():

    int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will query the mask for both streaming and coherent APIs together.
If you have some special requirements, then the following two separate
queries can be used instead:

    The query for streaming mappings is performed via a call to
    dma_set_mask():

        int dma_set_mask(struct device *dev, u64 mask);

    The query for consistent allocations is performed via a call
    to dma_set_coherent_mask():

        int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports.  It returns zero if your card can perform DMA properly on
the machine given the address mask you provided.  In general, the
device struct of your device is embedded in the bus-specific device
struct of your device.  For example, &pdev->dev is a pointer to the
device struct of a PCI device (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior.  You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a KERN_WARNING message
when you end up performing either #2 or #3.  In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing device would do something like this:

    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
        dev_warn(dev, "mydev: No suitable DMA available\n");
        goto ignore_this_device;
    }

Another common scenario is a 64-bit capable device.  The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail.  The kernel may fail the 64-bit mask not because the
platform is not capable of 64-bit addressing.  Rather, it may fail in
this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing.  For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.
Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA:

    int using_dac;

    if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
        using_dac = 1;
    } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
        using_dac = 0;
    } else {
        dev_warn(dev, "mydev: No suitable DMA available\n");
        goto ignore_this_device;
    }

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:

    int using_dac, consistent_using_dac;

    if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
        using_dac = 1;
        consistent_using_dac = 1;
    } else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
        using_dac = 0;
        consistent_using_dac = 0;
    } else {
        dev_warn(dev, "mydev: No suitable DMA available\n");
        goto ignore_this_device;
    }

The coherent mask can always be set to the same or a smaller mask than
the streaming mask.  However, for the rare case that a device driver
only uses consistent allocations, one would have to check the return
value from dma_set_coherent_mask().

Finally, if your device can only drive the low 24-bits of
address you might do something like:

    if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
        dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
        goto ignore_this_device;
    }

When dma_set_mask() or dma_set_mask_and_coherent() is successful, and
returns zero, the kernel saves away this mask you have provided.  The
kernel will use this information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation.  If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle.  It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done:

    #define PLAYBACK_ADDRESS_BITS   DMA_BIT_MASK(32)
    #define RECORD_ADDRESS_BITS     DMA_BIT_MASK(24)

    struct my_sound_card *card;
    struct device *dev;

    ...
    if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
        card->playback_enabled = 1;
    } else {
        card->playback_enabled = 0;
        dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                 card->name);
    }
    if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
        card->record_enabled = 1;
    } else {
        card->record_enabled = 0;
        dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                 card->name);
    }

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

                    Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the DMA space.
  However, for future compatibility you should set the consistent mask
  even if this default is fine for your driver.

  Good examples of what to use consistent mappings for are:

    - Network card DMA ring descriptors.
    - SCSI adapter mailbox command data structures.
    - Device firmware microcode executed out of
      main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  IMPORTANT: Consistent DMA memory does not preclude the usage of
             proper memory barriers.  The CPU may reorder stores to
             consistent memory just as it may normal memory.  Example:
             if it is important for the device to see the first word
             of a descriptor updated before the second, you must do
             something like:

                desc->word0 = address;
                wmb();
                desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

             Also, on some platforms your driver may need to flush CPU
             write buffers in much the same way as it needs to flush
             write buffers found in PCI bridges (such as by reading a
             register's value after writing it).

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

    - Networking buffers transmitted/received by a device.
    - Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.


                Using Consistent DMA mappings

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

    dma_addr_t dma_handle;

    cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device *.  This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable.  Even if the
device indicates (via the DMA mask) that it may address the upper
32-bits, consistent allocation will only return > 32-bit addresses for
DMA if the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask().  This is true of the dma_pool interface as
well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.
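Here is a hedged sketch of how the two values might be used together;
the descriptor ring layout, RING_BYTES, regs, and the MYDEV_RING_BASE
register are assumptions made only for illustration:

    struct mydev_desc *ring;    /* hypothetical descriptor layout */
    dma_addr_t ring_dma;

    ring = dma_alloc_coherent(dev, RING_BYTES, &ring_dma, GFP_KERNEL);
    if (!ring)
        goto err;

    /* The CPU uses the virtual address... */
    memset(ring, 0, RING_BYTES);

    /* ...while the device is told the DMA address. */
    iowrite32(lower_32_bits(ring_dma), regs + MYDEV_RING_BASE);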
The CPU virtual address and the DMA address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call:

    dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev and size are the same as in the above call, and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this:

    struct dma_pool *pool;

    pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this:

    cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this:

    dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned.  This function
may be called in interrupt context.

Destroy a dma_pool by calling:

    dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool.  This function may not
be called in interrupt context.
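Putting the four calls together, a hedged sketch of a full dma_pool
lifetime might look like this (the pool name, block size, and alignment
are arbitrary choices made only for illustration):

    struct dma_pool *pool;
    void *desc;
    dma_addr_t desc_dma;

    /* Pool of 64-byte blocks, aligned to 16 bytes, no boundary rule. */
    pool = dma_pool_create("mydev_desc", dev, 64, 16, 0);
    if (!pool)
        goto err;

    desc = dma_pool_alloc(pool, GFP_KERNEL, &desc_dma);
    if (!desc)
        goto err_destroy;

    /* ... use desc from the CPU and desc_dma from the device ... */

    dma_pool_free(pool, desc, desc_dma);
    dma_pool_destroy(pool);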
                        DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

    DMA_BIDIRECTIONAL
    DMA_TO_DEVICE
    DMA_FROM_DEVICE
    DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device";
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.

The value DMA_NONE is to be used for debugging.  One can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.

                  Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do:

    struct device *dev = &my_dev->dev;
    dma_addr_t dma_handle;
    void *addr = buffer->ptr;
    size_t size = buffer->len;

    dma_handle = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling;
    }

and to unmap it:

    dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() because dma_map_single() could fail
and return an error.  Not all DMA implementations support the
dma_mapping_error() interface, but it is good practice to call it anyway:
it invokes the generic mapping error check, so your code will work
correctly on all DMA implementations without depending on the specifics
of the underlying one.  Using the returned address without checking for
errors could result in failures ranging from panics to silent data
corruption.  A couple of examples of incorrect ways to check for errors
that make assumptions about the underlying DMA implementation follow
(these apply to dma_map_page() as well).

Incorrect example 1:
    dma_addr_t dma_handle;

    dma_handle = dma_map_single(dev, addr, size, direction);
    if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
        goto map_error;
    }

Incorrect example 2:
    dma_addr_t dma_handle;

    dma_handle = dma_map_single(dev, addr, size, direction);
    if (dma_handle == DMA_ERROR_CODE) {
        goto map_error;
    }

You should call dma_unmap_single() when the DMA activity is finished,
e.g., from the interrupt which told you that the DMA transfer is done.
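As an illustration of that map/use/unmap-in-interrupt pattern, here is a
hedged sketch of a transmit path; the cp card structure, the skb-based
buffer, and give_tx_buf_to_card() are assumptions made only for
illustration:

    dma_addr_t mapping;

    /* At submit time: map the buffer and hand the DMA address
     * to the device.
     */
    mapping = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, mapping))
        goto drop;
    give_tx_buf_to_card(cp, mapping, skb->len);

    ...

    /* Later, in the "transmit done" interrupt handler: */
    dma_unmap_single(dev, mapping, skb->len, DMA_TO_DEVICE);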
Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically:

    struct device *dev = &my_dev->dev;
    dma_addr_t dma_handle;
    struct page *page = buffer->page;
    unsigned long offset = buffer->offset;
    size_t size = buffer->len;

    dma_handle = dma_map_page(dev, page, offset, size, direction);
    if (dma_mapping_error(dev, dma_handle)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling;
    }

    ...

    dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() because dma_map_page() could fail and
return an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by:

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

    dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

PLEASE NOTE:  The 'nents' argument to the dma_unmap_sg call must be
              the _same_ one you passed into the dma_map_sg call,
              it should _NOT_ be the 'count' value _returned_ from the
              dma_map_sg call.

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the DMA address space is a shared resource and
you could render the machine unusable by consuming all DMA addresses.
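For completeness, here is a hedged sketch of building and mapping a
scatterlist over kmalloc'ed buffers using sg_init_table() and
sg_set_buf(); NENTS, bufs[] and buf_len are assumptions made only for
illustration:

    struct scatterlist sgl[NENTS];
    int i, count;

    sg_init_table(sgl, NENTS);
    for (i = 0; i < NENTS; i++)
        sg_set_buf(&sgl[i], bufs[i], buf_len);

    count = dma_map_sg(dev, sgl, NENTS, DMA_TO_DEVICE);
    if (count == 0)
        goto map_failed;

    /* Program the device using sg_dma_address()/sg_dma_len() of
     * the 'count' mapped entries, as shown above.
     */

    /* When the DMA is done, unmap with the original nents: */
    dma_unmap_sg(dev, sgl, NENTS, DMA_TO_DEVICE);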
If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either:

    dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

    dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either:

    dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

    dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}().  If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces:

    my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
    {
        dma_addr_t mapping;

        mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(cp->dev, mapping)) {
            /*
             * reduce current DMA mapping usage,
             * delay and try again later or
             * reset driver.
             */
            goto map_error_handling;
        }

        cp->rx_buf = buffer;
        cp->rx_len = len;
        cp->rx_dma = mapping;

        give_rx_buf_to_card(cp);
    }

    ...

    my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
    {
        struct my_card *cp = devid;

        ...
        if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
            struct my_card_header *hp;

            /* Examine the header to see if we wish
             * to accept the data.  But synchronize
             * the DMA transfer with the CPU first
             * so that we see updated contents.
             */
            dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                    cp->rx_len,
                                    DMA_FROM_DEVICE);

            /* Now it is safe to examine the buffer. */
            hp = (struct my_card_header *) cp->rx_buf;
            if (header_is_ok(hp)) {
                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                 DMA_FROM_DEVICE);
                pass_to_upper_layers(cp->rx_buf);
                make_and_setup_new_rx_buf(cp);
            } else {
                /* CPU should not write to
                 * DMA_FROM_DEVICE-mapped area,
                 * so dma_sync_single_for_device() is
                 * not needed here.  It would be required
                 * for DMA_BIDIRECTIONAL mapping if
                 * the memory was modified.
                 */
                give_rx_buf_to_card(cp);
            }
        }
    }

Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt().  Some drivers have to be changed
a little bit, because there is no longer an equivalent to bus_to_virt() in
the dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions.  It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.

                        Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error():

    dma_addr_t dma_handle;

    dma_handle = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling;
    }

- unmapping pages that are already mapped, when a mapping error occurs
  in the middle of a multiple page mapping attempt.
The following examples show how to do that; they are applicable to
dma_map_page() as well.

Example 1:
    dma_addr_t dma_handle1;
    dma_addr_t dma_handle2;

    dma_handle1 = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle1)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling1;
    }
    dma_handle2 = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle2)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling2;
    }

    ...

    map_error_handling2:
        dma_unmap_single(dev, dma_handle1, size, direction);
    map_error_handling1:

Example 2: (if buffers are allocated in a loop, unmap all mapped buffers
            when a mapping error is detected in the middle)

    dma_addr_t dma_addr;
    dma_addr_t array[DMA_BUFFERS];
    int save_index = 0;

    for (i = 0; i < DMA_BUFFERS; i++) {

        ...

        dma_addr = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_addr)) {
            /*
             * reduce current DMA mapping usage,
             * delay and try again later or
             * reset driver.
             */
            goto map_error_handling;
        }
        array[i] = dma_addr;
        save_index++;
    }

    ...

    map_error_handling:

    for (i = 0; i < save_index; i++) {

        ...

        dma_unmap_single(dev, array[i], size, direction);
    }

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit).  This means that the socket buffer is just dropped in
the failure case.

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook.  This means that the SCSI subsystem
passes the command to the driver again later.

            Optimizing Unmap State Space Consumption

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

    struct ring_state {
        struct sk_buff *skb;
        dma_addr_t mapping;
        __u32 len;
    };

   after:

    struct ring_state {
        struct sk_buff *skb;
        DEFINE_DMA_UNMAP_ADDR(mapping);
        DEFINE_DMA_UNMAP_LEN(len);
    };

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before:

    ringp->mapping = FOO;
    ringp->len = BAR;

   after:

    dma_unmap_addr_set(ringp, mapping, FOO);
    dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before:

    dma_unmap_single(dev, ringp->mapping, ringp->len,
                     DMA_FROM_DEVICE);

   after:

    dma_unmap_single(dev,
                     dma_unmap_addr(ringp, mapping),
                     dma_unmap_len(ringp, len),
                     DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.
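Pulling the three steps together, a hedged sketch of the whole cycle
might look like this (ringp, buf and len are assumptions made only for
illustration):

    struct ring_state *ringp = ...;
    dma_addr_t mapping;

    mapping = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
    if (dma_mapping_error(dev, mapping))
        goto map_error_handling;

    /* Save only what this platform's unmap actually needs. */
    dma_unmap_addr_set(ringp, mapping, mapping);
    dma_unmap_len_set(ringp, len, len);

    ...

    /* Later, at unmap time: */
    dma_unmap_single(dev,
                     dma_unmap_addr(ringp, mapping),
                     dma_unmap_len(ringp, len),
                     DMA_FROM_DEVICE);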
                        Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   Don't invent the architecture specific struct scatterlist; just use
   <asm-generic/scatterlist.h>.  You need to enable
   CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
   (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe.  Drivers and subsystems depend on it.  If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that kmalloc'ed buffers don't share cache lines with
   others.  See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints.  You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).

3) Supporting multiple types of IOMMUs

   If your architecture needs to support multiple types of IOMMUs, you
   can use include/asm-generic/dma-mapping-common.h.  It's a
   library to support the DMA API with multiple types of IOMMUs.  Lots
   of architectures (x86, powerpc, sh, alpha, ia64, microblaze and
   sparc) use it.  Choose one to see how it can be used.  If you need to
   support multiple types of IOMMUs in a single system, the example of
   x86 or powerpc helps.

                           Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

    Russell King <rmk@arm.linux.org.uk>
    Leo Dagum <dagum@barrel.engr.sgi.com>
    Ralf Baechle <ralf@oss.sgi.com>
    Grant Grundler <grundler@cup.hp.com>
    Jay Estabrook <Jay.Estabrook@compaq.com>
    Thomas Sailer <sailer@ife.ee.ethz.ch>
    Andrea Arcangeli <andrea@suse.de>
    Jens Axboe <jens.axboe@oracle.com>
    David Mosberger-Tang <davidm@hpl.hp.com>