=========================
Dynamic DMA mapping Guide
=========================

:Author: David S. Miller <davem@redhat.com>
:Author: Richard Henderson <rth@cygnus.com>
:Author: Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code. For a concise description of the API, see
Documentation/core-api/dma-api.rst.

CPU and DMA addresses
=====================

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses. Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a ``void *``.

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t". The kernel manages device resources like registers as
physical addresses. These are the addresses in /proc/iomem. The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address". If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses. In some
systems, bus addresses are identical to CPU physical addresses, but in
general they are not. IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space. For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples::

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system. For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B). The address B
is stored in a struct resource and usually exposed via /proc/iomem. When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C). It can then use, e.g., ioread32(C), to access
the device registers at bus address A.

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X). The virtual
memory system maps X to a physical address (Y) in system RAM. The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y. But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y. This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z. The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.
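
For example, the driver side of that last step might look like this (a
sketch only; the ``card->regs`` MMIO mapping and the MYDEV_DMA_ADDR_*
register offsets are hypothetical, and the interfaces used here are
described in detail in the rest of this document)::

        dma_addr_t Z;

        Z = dma_map_single(dev, X, size, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, Z))
                goto map_error_handling;

        /*
         * Hand the DMA address Z to the device; the IOMMU (if any)
         * translates it back to the buffer at physical address Y.
         */
        writel(lower_32_bits(Z), card->regs + MYDEV_DMA_ADDR_LO);
        writel(upper_32_bits(Z), card->regs + MYDEV_DMA_ADDR_HI);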

For Linux to use dynamic DMA mapping, it needs some help from the
drivers: DMA addresses should be mapped only for the time they are
actually used and unmapped after the DMA transfer.

The following API will, of course, work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure::

        #include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t. This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.

What memory is DMA'able?
========================

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities. There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA. It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va(). [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]
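
For a buffer that stays within one page of a vmalloc() area, such a walk
might look like this (a sketch only; it relies on the page/offset mapping
interface described later, and the length must not cross a page boundary)::

        struct page *page = vmalloc_to_page(vaddr);    /* vaddr from vmalloc() */
        unsigned long offset = offset_in_page(vaddr);
        dma_addr_t handle;

        handle = dma_map_page(dev, page, offset, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, handle))
                goto map_error_handling;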

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA. These could all be mapped somewhere entirely
different than the rest of physical memory. Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned. Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that. This is similar to vmalloc().

What about block I/O and networking buffers? The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

__dma_from_device_group_begin/end annotations
=============================================

As explained previously, when a structure contains a DMA_FROM_DEVICE /
DMA_BIDIRECTIONAL buffer (device writes to memory) alongside fields that the
CPU writes to, cache line sharing between the DMA buffer and CPU-written fields
can cause data corruption on CPUs with DMA-incoherent caches.

The ``__dma_from_device_group_begin(GROUP)/__dma_from_device_group_end(GROUP)``
macros ensure proper alignment to prevent this::

        struct my_device {
                spinlock_t lock1;
                __dma_from_device_group_begin();
                char dma_buffer1[16];
                char dma_buffer2[16];
                __dma_from_device_group_end();
                spinlock_t lock2;
        };

To isolate a DMA buffer from adjacent fields, use
``__dma_from_device_group_begin(GROUP)`` before the first DMA buffer
field and ``__dma_from_device_group_end(GROUP)`` after the last DMA
buffer field (with the same GROUP name). This protects both the head
and tail of the buffer from cache line sharing.

The GROUP parameter is an optional identifier that names the DMA buffer group
(in case you have several in the same structure)::

        struct my_device {
                spinlock_t lock1;
                __dma_from_device_group_begin(buffer1);
                char dma_buffer1[16];
                __dma_from_device_group_end(buffer1);
                spinlock_t lock2;
                __dma_from_device_group_begin(buffer2);
                char dma_buffer2[16];
                __dma_from_device_group_end(buffer2);
        };

On cache-coherent platforms these macros expand to zero-length array markers.
On non-coherent platforms, they also ensure the minimal DMA alignment, which
can be as large as 128 bytes.

.. note::

    It is allowed (though somewhat fragile) to include extra fields, not
    intended for DMA from the device, within the group (in order to pack the
    structure tightly) - but only as long as the CPU does not write these
    fields while any fields in the group are mapped for DMA_FROM_DEVICE or
    DMA_BIDIRECTIONAL.

DMA addressing capabilities
===========================

By default, the kernel assumes that your device can address 32 bits of DMA
address space. For a 64-bit capable device, this needs to be increased, and
for a device with limitations, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions. And at least one
platform (SGI SN2) requires 64-bit coherent allocations to operate correctly
when the IO bus is in PCI-X mode.

For correct operation, you must set the DMA mask to inform the kernel about
your device's DMA addressing capabilities.

This is performed via a call to dma_set_mask_and_coherent()::

        int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will set the mask for both streaming and coherent APIs together. If you
have some special requirements, then the following two separate calls can be
used instead:

        The setup for streaming mappings is performed via a call to
        dma_set_mask()::

                int dma_set_mask(struct device *dev, u64 mask);

        The setup for coherent allocations is performed via a call
        to dma_set_coherent_mask()::

                int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask is a bit
mask describing which bits of an address your device supports. Often the
device struct of your device is embedded in the bus-specific device struct of
your device. For example, &pdev->dev is a pointer to the device struct of a
PCI device (pdev is a pointer to the PCI device struct of your device).

These calls usually return zero to indicate your device can perform DMA
properly on the machine given the address mask you provided, but they might
return an error if the mask is too small to be supportable on the given
system. If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined behavior.
You must not use DMA on this device unless the dma_set_mask family of
functions has returned success.

This means that in the failure case, you have two options:

1) Use some non-DMA mode for data transfer, if possible.
2) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message when
setting the DMA mask fails. In this manner, if a user of your driver reports
that performance is bad or that the device is not even detected, you can ask
them for the kernel messages to find out exactly why.

The 24-bit addressing device would do something like this::

        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24))) {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

The standard 64-bit addressing device would do something like this::

        dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));

dma_set_mask_and_coherent() never returns failure when given
DMA_BIT_MASK(64), so error-handling code like the following is
unnecessary::

        /* Wrong code */
        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
                dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

dma_set_mask_and_coherent() never returns failure when the mask is bigger
than 32 bits, so typical code looks like this::

        /* Recommended code */
        if (support_64bit)
                dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
        else
                dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

If the device only supports 32-bit addressing for descriptors in the
coherent allocations, but supports full 64-bits for streaming mappings,
it would look like this::

        if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

The coherent mask can always be set to the same or a smaller mask than
the streaming mask. However, for the rare case that a device driver only
uses coherent allocations, one would have to check the return value from
dma_set_coherent_mask().
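
Such a coherent-allocations-only driver might do (a sketch in the same
style as the examples above)::

        if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
                dev_warn(dev, "mydev: No suitable coherent DMA available\n");
                goto ignore_this_device;
        }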

Finally, if your device can only drive the low 24 bits of
address, you might do something like::

        if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
                dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
                goto ignore_this_device;
        }

When dma_set_mask() or dma_set_mask_and_coherent() is successful, and
returns zero, the kernel saves away this mask you have provided. The
kernel will use this information later when you make DMA mappings.

There is one case worth mentioning here. If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle. It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done::

        #define PLAYBACK_ADDRESS_BITS   DMA_BIT_MASK(32)
        #define RECORD_ADDRESS_BITS     DMA_BIT_MASK(24)

        struct my_sound_card *card;
        struct device *dev;

        ...
        if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
                card->playback_enabled = 1;
        } else {
                card->playback_enabled = 0;
                dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                         card->name);
        }
        if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
                card->record_enabled = 1;
        } else {
                card->record_enabled = 0;
                dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                         card->name);
        }

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

Types of DMA mappings
=====================

There are two types of DMA mappings:

- Coherent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "coherent" as "synchronous".

  The current default is to return coherent memory in the low 32
  bits of the DMA space. However, for future compatibility you should
  set the coherent mask even if this default is fine for your
  driver.

  Good examples of what to use coherent mappings for are:

  - Network card DMA ring descriptors.
  - SCSI adapter mailbox command data structures.
  - Device firmware microcode executed out of
    main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa. Coherent mappings guarantee this.

  .. important::

     Coherent DMA memory does not preclude the usage of
     proper memory barriers. The CPU may reorder stores to
     coherent memory just as it may normal memory. Example:
     if it is important for the device to see the first word
     of a descriptor updated before the second, you must do
     something like::

        desc->word0 = address;
        wmb();
        desc->word1 = DESC_VALID;

     in order to get correct behavior on all platforms.

     Also, on some platforms your driver may need to flush CPU write
     buffers in much the same way as it needs to flush write buffers
     found in PCI bridges (such as by reading a register's value
     after writing it).

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

  - Networking buffers transmitted/received by a device.
  - Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows. To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.

Using Coherent DMA mappings
===========================

To allocate and map large (PAGE_SIZE or so) coherent DMA regions,
you should do::

        dma_addr_t dma_handle;

        cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a ``struct device *``. This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order). If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The coherent DMA mapping interfaces will by default return a DMA address
which is 32-bit addressable. Even if the device indicates (via the DMA mask)
that it may address the upper 32-bits, coherent allocation will only
return > 32-bit addresses for DMA if the coherent DMA mask has been
explicitly changed via dma_set_coherent_mask(). This is true of the
dma_pool interface as well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size. This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call::

        dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.
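
For example, a driver might allocate a descriptor ring at probe time and
free it at remove time (a sketch; ``struct mydev_desc`` and NUM_DESC are
hypothetical)::

        struct mydev_desc *ring;
        dma_addr_t ring_dma;
        size_t ring_size = NUM_DESC * sizeof(*ring);

        ring = dma_alloc_coherent(dev, ring_size, &ring_dma, GFP_KERNEL);
        if (!ring)
                goto err_no_ring;

        /* ... program ring_dma into the device and use 'ring' from the CPU ... */

        dma_free_coherent(dev, ring_size, ring, ring_dma);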

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that. A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this::

        struct dma_pool *pool;

        pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this::

        cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this::

        dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned. This function
may be called in interrupt context.

Destroy a dma_pool by calling::

        dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.
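
Putting the pieces together, the lifetime of a pool might look like this
(a sketch; the pool name, block size and alignment are arbitrary)::

        struct dma_pool *pool;
        void *cpu_addr;
        dma_addr_t dma_handle;

        pool = dma_pool_create("mydev_cmd", dev, 64, 8, 0);
        if (!pool)
                goto err_no_pool;

        cpu_addr = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
        if (!cpu_addr)
                goto err_destroy_pool;

        /* ... use cpu_addr from the CPU and dma_handle from the device ... */

        dma_pool_free(pool, cpu_addr, dma_handle);
        dma_pool_destroy(pool);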

DMA Direction
=============

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values::

        DMA_BIDIRECTIONAL
        DMA_TO_DEVICE
        DMA_FROM_DEVICE
        DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device".
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL. It means that the DMA can go in
either direction. The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance, for example.

The value DMA_NONE is to be used for debugging. You can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space. Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; coherent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For networking drivers, it's a rather simple affair. For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier. For receive packets, just the opposite: map/unmap them
with the DMA_FROM_DEVICE direction specifier.

Using Streaming DMA mappings
============================

The streaming DMA mapping routines can be called from interrupt
context. There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        void *addr = buffer->ptr;
        size_t size = buffer->len;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

and to unmap it::

        dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and
return an error. Doing so will ensure that the mapping code will work
correctly on all DMA implementations without any dependency on the
specifics of the underlying implementation. Using the returned address
without checking for errors could result in failures ranging from panics
to silent data corruption. The same applies to dma_map_page() as well.

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way. Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single(). These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        struct page *page = buffer->page;
        unsigned long offset = buffer->offset;
        size_t size = buffer->len;

        dma_handle = dma_map_page(dev, page, offset, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

        ...

        dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and return
an error as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by::

        int i, count = dma_map_sg(dev, sglist, nents, direction);
        struct scatterlist *sg;

        for_each_sg(sglist, sg, count, i) {
                hw_address[i] = sg_dma_address(sg);
                hw_len[i] = sg_dma_len(sg);
        }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to. On failure 0 is returned.
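
Since dma_map_sg() can fail, the mapping above would normally check the
returned count before using it; for example (a sketch, reusing the
hypothetical map_error_handling label from the earlier examples)::

        int i, count = dma_map_sg(dev, sglist, nents, direction);
        struct scatterlist *sg;

        if (count == 0) {
                /* Reduce DMA mapping usage, retry later or reset the driver. */
                goto map_error_handling;
        }

        for_each_sg(sglist, sg, count, i) {
                hw_address[i] = sg_dma_address(sg);
                hw_len[i] = sg_dma_len(sg);
        }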

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call::

        dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

.. note::

    The 'nents' argument to the dma_unmap_sg call must be
    the _same_ one you passed into the dma_map_sg call;
    it should _NOT_ be the 'count' value _returned_ from the
    dma_map_sg call.

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the DMA address space is a shared resource and
you could render the machine unusable by consuming all DMA addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::

        dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either::

        dma_sync_single_for_device(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

.. note::

    The 'nents' argument to dma_sync_sg_for_cpu() and
    dma_sync_sg_for_device() must be the same passed to
    dma_map_sg(). It is _NOT_ the count returned by
    dma_map_sg().

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}(). If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces::

        my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
        {
                dma_addr_t mapping;

                mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
                if (dma_mapping_error(cp->dev, mapping)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }

                cp->rx_buf = buffer;
                cp->rx_len = len;
                cp->rx_dma = mapping;

                give_rx_buf_to_card(cp);
        }

        ...

        my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
        {
                struct my_card *cp = devid;

                ...
                if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
                        struct my_card_header *hp;

                        /* Examine the header to see if we wish
                         * to accept the data. But synchronize
                         * the DMA transfer with the CPU first
                         * so that we see updated contents.
                         */
                        dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                                cp->rx_len,
                                                DMA_FROM_DEVICE);

                        /* Now it is safe to examine the buffer. */
                        hp = (struct my_card_header *) cp->rx_buf;
                        if (header_is_ok(hp)) {
                                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                                 DMA_FROM_DEVICE);
                                pass_to_upper_layers(cp->rx_buf);
                                make_and_setup_new_rx_buf(cp);
                        } else {
                                /* CPU should not write to
                                 * DMA_FROM_DEVICE-mapped area,
                                 * so dma_sync_single_for_device() is
                                 * not needed here. It would be required
                                 * for DMA_BIDIRECTIONAL mapping if
                                 * the memory was modified.
                                 */
                                give_rx_buf_to_card(cp);
                        }
                }
        }

Handling Errors
===============

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()::

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

- unmapping pages that are already mapped, when a mapping error occurs in
  the middle of a multiple page mapping attempt. These examples are
  applicable to dma_map_page() as well.

Example 1::

        dma_addr_t dma_handle1;
        dma_addr_t dma_handle2;

        dma_handle1 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle1)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling1;
        }
        dma_handle2 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle2)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling2;
        }

        ...

        map_error_handling2:
                dma_unmap_single(dev, dma_handle1, size, direction);
        map_error_handling1:

Example 2::

        /*
         * if buffers are allocated in a loop, unmap all mapped buffers when
         * mapping error is detected in the middle
         */

        dma_addr_t dma_addr;
        dma_addr_t array[DMA_BUFFERS];
        int save_index = 0;

        for (i = 0; i < DMA_BUFFERS; i++) {

                ...

                dma_addr = dma_map_single(dev, addr, size, direction);
                if (dma_mapping_error(dev, dma_addr)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }
                array[i] = dma_addr;
                save_index++;
        }

        ...

        map_error_handling:

        for (i = 0; i < save_index; i++) {

                ...

                dma_unmap_single(dev, array[i], size, direction);
        }

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.
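
A transmit hook following this rule might end like this (a sketch; the
mydev_* names and struct mydev_priv are hypothetical, and
dev_kfree_skb_any() is the variant of dev_kfree_skb() that is safe in any
context)::

        static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
                                            struct net_device *netdev)
        {
                struct mydev_priv *priv = netdev_priv(netdev);
                dma_addr_t mapping;

                mapping = dma_map_single(priv->dev, skb->data, skb->len,
                                         DMA_TO_DEVICE);
                if (dma_mapping_error(priv->dev, mapping)) {
                        /* Drop the packet; do not return NETDEV_TX_BUSY. */
                        dev_kfree_skb_any(skb);
                        return NETDEV_TX_OK;
                }

                /* ... hand 'mapping' to the hardware and kick the queue ... */

                return NETDEV_TX_OK;
        }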

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.
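
A queuecommand hook using the SCSI midlayer's mapping helper might handle
this as follows (a sketch; scsi_dma_map() returns a negative value when
the underlying DMA mapping fails)::

        static int mydev_queuecommand(struct Scsi_Host *host,
                                      struct scsi_cmnd *cmd)
        {
                int nseg = scsi_dma_map(cmd);

                if (nseg < 0)
                        return SCSI_MLQUEUE_HOST_BUSY;

                /* ... build and issue the request from scsi_sglist(cmd) ... */

                return 0;
        }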

Optimizing Unmap State Space Consumption
========================================

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space. Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before::

        struct ring_state {
                struct sk_buff *skb;
                dma_addr_t mapping;
                __u32 len;
        };

   after::

        struct ring_state {
                struct sk_buff *skb;
                DEFINE_DMA_UNMAP_ADDR(mapping);
                DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before::

        ringp->mapping = FOO;
        ringp->len = BAR;

   after::

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before::

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after::

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

Platform Issues
===============

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
   supports IOMMUs (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe. Drivers and subsystems depend on it. If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that a kmalloc'ed buffer doesn't share a cache line with
   the others. See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints. You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).

Closing
=======

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people::

        Russell King <rmk@arm.linux.org.uk>
        Leo Dagum <dagum@barrel.engr.sgi.com>
        Ralf Baechle <ralf@oss.sgi.com>
        Grant Grundler <grundler@cup.hp.com>
        Jay Estabrook <Jay.Estabrook@compaq.com>
        Thomas Sailer <sailer@ife.ee.ethz.ch>
        Andrea Arcangeli <andrea@suse.de>
        Jens Axboe <jens.axboe@oracle.com>
        David Mosberger-Tang <davidm@hpl.hp.com>