		 Asynchronous Transfers/Transforms API

1 INTRODUCTION

2 GENEALOGY

3 USAGE
3.1 General format of the API
3.2 Supported operations
3.3 Descriptor management
3.4 When does the operation execute?
3.5 When does the operation complete?
3.6 Constraints
3.7 Example

4 DMAENGINE DRIVER DEVELOPER NOTES
4.1 Conformance points
4.2 "My application needs exclusive control of hardware channels"

5 SOURCE

---

1 INTRODUCTION

The async_tx API provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies.  It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations.  Code
that is written to the API can optimize for asynchronous operation and
the API will fit the chain of operations to the available offload
resources.

2 GENEALOGY

The API was initially designed to offload the memory copy and
xor-parity-calculations of the md-raid5 driver using the offload engines
present in the Intel(R) Xscale series of I/O processors.  It also built
on the 'dmaengine' layer developed for offloading memory copies in the
network stack using Intel(R) I/OAT engines.  The following design
features surfaced as a result:
1/ implicit synchronous path: users of the API do not need to know if
   the platform they are running on has offload capabilities.  The
   operation will be offloaded when an engine is available and carried out
   in software otherwise.
2/ cross channel dependency chains: the API allows a chain of dependent
   operations to be submitted, like xor->copy->xor in the raid5 case.  The
   API automatically handles cases where the transition from one operation
   to another implies a hardware channel switch.
3/ dmaengine extensions to support multiple clients and operation types
   beyond 'memcpy'

3 USAGE

3.1 General format of the API:
struct dma_async_tx_descriptor *
async_<operation>(<op specific parameters>, struct async_submit_ctl *submit)
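
For example, a minimal sketch of a single standalone copy following this
format, assuming 'dest', 'src' and 'len' have been set up by the caller:

struct dma_async_tx_descriptor *tx;
struct async_submit_ctl submit;

/* standalone operation: ack up front, no dependency, no callback,
 * no scribble region (see 3.3 for the meaning of ASYNC_TX_ACK) */
init_async_submit(&submit, ASYNC_TX_ACK, NULL, NULL, NULL, NULL);
tx = async_memcpy(dest, src, 0, 0, len, &submit);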

3.2 Supported operations:
memcpy  - memory copy between a source and a destination buffer
memset  - fill a destination buffer with a byte value
xor     - xor a series of source buffers and write the result to a
	  destination buffer
xor_val - xor a series of source buffers and set a flag if the
	  result is zero.  The implementation attempts to prevent
	  writes to memory
pq	- generate the p+q (raid6 syndrome) from a series of source buffers
pq_val  - validate that a p and/or q buffer is in sync with a given series of
	  sources
datap	- (raid6_datap_recov) recover a raid6 data block and the p block
	  from the given sources
2data	- (raid6_2data_recov) recover 2 raid6 data blocks from the given
	  sources
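
As a concrete illustration of one of these, a hedged sketch of xor_val
checking whether 'dest' still matches the xor of its sources (buffer
setup and the async_submit_ctl initialization are assumed, as in the
other examples):

enum sum_check_flags result = 0;

tx = async_xor_val(dest, srcs, 0, src_cnt, len, &result, &submit);

/* once the operation has completed, 'result' holds the flag */
if (result & SUM_CHECK_P_RESULT)
	pr_debug("parity mismatch detected\n");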

3.3 Descriptor management:
The return value is non-NULL and points to a 'descriptor' when the operation
has been queued to execute asynchronously.  Descriptors are recycled
resources, under control of the offload engine driver, to be reused as
operations complete.  When an application needs to submit a chain of
operations it must guarantee that the descriptor is not automatically recycled
before the dependency is submitted.  This requires that all descriptors be
acknowledged by the application before the offload engine driver is allowed to
recycle (or free) the descriptor.  A descriptor can be acked by one of the
following methods:
1/ setting the ASYNC_TX_ACK flag if no child operations are to be submitted
2/ submitting an unacknowledged descriptor as a dependency to another
   async_tx call, which implicitly sets the acknowledged state
3/ calling async_tx_ack() on the descriptor, as sketched below
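
A short sketch of method 3, assuming 'tx' was returned by a prior
async_<operation> call.  Note that a synchronous software fallback
returns NULL, so the descriptor pointer must be checked first:

if (tx)
	async_tx_ack(tx);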

3.4 When does the operation execute?
Operations do not immediately issue after return from the
async_<operation> call.  Offload engine drivers batch operations to
improve performance by reducing the number of mmio cycles needed to
manage the channel.  Once a driver-specific threshold is met the driver
automatically issues pending operations.  An application can force this
event by calling async_tx_issue_pending_all().  This operates on all
channels since the application has no knowledge of channel to operation
mapping.

3.5 When does the operation complete?
There are two methods for an application to learn about the completion
of an operation.
1/ Call dma_wait_for_async_tx().  This call causes the CPU to spin while
   it polls for the completion of the operation.  It handles dependency
   chains and issuing pending operations.
2/ Specify a completion callback.  The callback routine runs in tasklet
   context if the offload engine driver supports interrupts, or it is
   called in application context if the operation is carried out
   synchronously in software.  The callback can be set in the call to
   async_<operation>, or when the application needs to submit a chain of
   unknown length it can use the async_trigger_callback() routine to set a
   completion interrupt/callback at the end of the chain, as sketched
   below.
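
A hedged sketch of terminating a chain of unknown length with
async_trigger_callback(); 'tx' is the last descriptor submitted so far,
and 'callback'/'cmp' follow the pattern of the example in section 3.7:

struct async_submit_ctl submit;
struct completion cmp;

init_completion(&cmp);
/* attach an interrupt/callback to the end of the chain */
init_async_submit(&submit, ASYNC_TX_ACK, tx, callback, &cmp, NULL);
tx = async_trigger_callback(&submit);

async_tx_issue_pending_all();
wait_for_completion(&cmp);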

3.6 Constraints:
1/ Calls to async_<operation> are not permitted in IRQ context.  Other
   contexts are permitted provided constraint #2 is not violated.
2/ Completion callback routines cannot submit new operations.  This
   results in recursion in the synchronous case and spin_locks being
   acquired twice in the asynchronous case.

3.7 Example:
Perform a xor->copy->xor operation where each operation depends on the
result from the previous operation:

void callback(void *param)
{
	struct completion *cmp = param;

	complete(cmp);
}

void run_xor_copy_xor(struct page **xor_srcs,
		      int xor_src_cnt,
		      struct page *xor_dest,
		      size_t xor_len,
		      struct page *copy_src,
		      struct page *copy_dest,
		      size_t copy_len)
{
	struct dma_async_tx_descriptor *tx;
	addr_conv_t addr_conv[xor_src_cnt];
	struct async_submit_ctl submit;
	struct completion cmp;

	init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL,
			  addr_conv);
	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len, &submit);

	/* the copy depends on the first xor completing */
	submit.depend_tx = tx;
	tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len, &submit);

	init_completion(&cmp);
	/* last operation in the chain: ack it and request a callback */
	init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST | ASYNC_TX_ACK, tx,
			  callback, &cmp, addr_conv);
	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len, &submit);

	async_tx_issue_pending_all();

	wait_for_completion(&cmp);
}

See include/linux/async_tx.h for more information on the flags.  See the
ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
implementation examples.

4 DMAENGINE DRIVER DEVELOPER NOTES

4.1 Conformance points:
There are a few conformance points required in dmaengine drivers to
accommodate assumptions made by applications using the async_tx API:
1/ Completion callbacks are expected to happen in tasklet context
2/ dma_async_tx_descriptor fields are never manipulated in IRQ context
3/ Use async_tx_run_dependencies() in the descriptor clean up path to
   handle submission of dependent operations, as in the sketch below
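
A hedged sketch of a descriptor cleanup path honoring these points; the
driver-private 'my_chan'/'my_desc' structures and list handling are
hypothetical placeholders, only the dma_async_tx_descriptor usage
follows the API:

static void my_cleanup_tasklet(unsigned long data)
{
	struct my_chan *chan = (struct my_chan *)data;
	struct my_desc *desc, *tmp;

	/* runs in tasklet context per conformance point 1 */
	list_for_each_entry_safe(desc, tmp, &chan->complete_list, node) {
		struct dma_async_tx_descriptor *tx = &desc->txd;

		if (tx->callback)
			tx->callback(tx->callback_param);

		/* conformance point 3: submit dependent operations */
		async_tx_run_dependencies(tx);

		/* only recycle descriptors the client has acked */
		if (async_tx_test_ack(tx))
			list_move_tail(&desc->node, &chan->free_list);
	}
}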

4.2 "My application needs exclusive control of hardware channels"
Primarily this requirement arises from cases where a DMA engine driver
is being used to support device-to-memory operations.  A channel that is
performing these operations cannot, for many platform specific reasons,
be shared.  For these cases the dma_request_channel() interface is
provided.

The interface is:
struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
				     dma_filter_fn filter_fn,
				     void *filter_param);

Where dma_filter_fn is defined as:
typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

When the optional 'filter_fn' parameter is set to NULL
dma_request_channel simply returns the first channel that satisfies the
capability mask.  Otherwise, when the mask parameter is insufficient for
specifying the necessary channel, the filter_fn routine can be used to
select from the available channels in the system.  The filter_fn routine
is called once for each free channel in the system.  Upon seeing a
suitable channel filter_fn returns true, which flags that channel as the
return value from dma_request_channel.  A channel allocated via this
interface is exclusive to the caller, until dma_release_channel() is
called.
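
For example, a hedged sketch of privately allocating a memcpy-capable
channel with a filter that matches a specific device ('my_dev' is a
hypothetical placeholder for the caller's struct device pointer):

static bool my_filter(struct dma_chan *chan, void *filter_param)
{
	/* accept only channels belonging to the device we care about */
	return chan->device->dev == filter_param;
}

dma_cap_mask_t mask;
struct dma_chan *chan;

dma_cap_zero(mask);
dma_cap_set(DMA_MEMCPY, mask);
chan = dma_request_channel(mask, my_filter, my_dev);
if (chan) {
	/* ... perform device-to-memory operations on 'chan' ... */
	dma_release_channel(chan);
}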

The DMA_PRIVATE capability flag is used to tag dma devices that should
not be used by the general-purpose allocator.  It can be set at
initialization time if it is known that a channel will always be
private.  Alternatively, it is set when dma_request_channel() finds an
unused "public" channel.

A couple of caveats to note when implementing a driver and consumer:
1/ Once a channel has been privately allocated it will no longer be
   considered by the general-purpose allocator even after a call to
   dma_release_channel().
2/ Since capabilities are specified at the device level a dma_device
   with multiple channels will either have all channels public, or all
   channels private.

5 SOURCE

include/linux/dmaengine.h: core header file for DMA drivers and api users
drivers/dma/dmaengine.c: offload engine channel management routines
drivers/dma/: location for offload engine drivers
include/linux/async_tx.h: core header file for the async_tx api
crypto/async_tx/async_tx.c: async_tx interface to dmaengine and common code
crypto/async_tx/async_memcpy.c: copy offload
crypto/async_tx/async_xor.c: xor and xor zero sum offload