1<html><head><meta http-equiv="Content-Type" content="text/html; charset=ANSI_X3.4-1968"><title>Memory management</title><meta name="generator" content="DocBook XSL Stylesheets V1.78.1"><link rel="home" href="index.html" title="Linux DRM Developer's Guide"><link rel="up" href="drmInternals.html" title="Chapter&#160;2.&#160;DRM Internals"><link rel="prev" href="API-drm-dev-set-unique.html" title="drm_dev_set_unique"><link rel="next" href="API-drm-gem-object-init.html" title="drm_gem_object_init"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="3" align="center">Memory management</th></tr><tr><td width="20%" align="left"><a accesskey="p" href="API-drm-dev-set-unique.html">Prev</a>&#160;</td><th width="60%" align="center">Chapter&#160;2.&#160;DRM Internals</th><td width="20%" align="right">&#160;<a accesskey="n" href="API-drm-gem-object-init.html">Next</a></td></tr></table><hr></div><div class="sect1"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="drm-memory-management"></a>Memory management</h2></div></div></div><div class="toc"><dl class="toc"><dt><span class="sect2"><a href="drm-memory-management.html#idp1122551924">The Translation Table Manager (TTM)</a></span></dt><dt><span class="sect2"><a href="drm-memory-management.html#drm-gem">The Graphics Execution Manager (GEM)</a></span></dt><dt><span class="sect2"><a href="drm-memory-management.html#idp1122718668">VMA Offset Manager</a></span></dt><dt><span class="sect2"><a href="drm-memory-management.html#drm-prime-support">PRIME Buffer Sharing</a></span></dt><dt><span class="sect2"><a href="drm-memory-management.html#idp1122911796">PRIME Function References</a></span></dt><dt><span class="sect2"><a href="drm-memory-management.html#idp1122987828">DRM MM Range Allocator</a></span></dt><dt><span class="sect2"><a href="drm-memory-management.html#idp1119448692">DRM MM Range Allocator Function References</a></span></dt><dt><span class="sect2"><a href="drm-memory-management.html#idp1123226860">CMA Helper Functions Reference</a></span></dt></dl></div><p>
      Modern Linux systems require large amounts of graphics memory to store
      frame buffers, textures, vertices and other graphics-related data. Given
      the very dynamic nature of much of that data, managing graphics memory
      efficiently is crucial for the graphics stack and plays a central
      role in the DRM infrastructure.
    </p><p>
      The DRM core includes two memory managers, namely Translation Table Maps
      (TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
      manager to be developed and tried to be a one-size-fits-all
      solution. It provides a single userspace API to accommodate the needs of
      all hardware, supporting both Unified Memory Architecture (UMA) devices
      and devices with dedicated video RAM (i.e. most discrete video cards).
      This resulted in a large, complex piece of code that turned out to be
      hard to use for driver development.
    </p><p>
      GEM started as an Intel-sponsored project in reaction to TTM's
      complexity. Its design philosophy is completely different: instead of
      providing a solution to every graphics memory-related problem, GEM
      identified common code between drivers and created a support library to
      share it. GEM has simpler initialization and execution requirements than
      TTM, but has no video RAM management capabilities and is thus limited to
      UMA devices.
24    </p><div class="sect2"><div class="titlepage"><div><div><h3 class="title"><a name="idp1122551924"></a>The Translation Table Manager (TTM)</h3></div></div></div><div class="toc"><dl class="toc"><dt><span class="sect3"><a href="drm-memory-management.html#idp1122552508">TTM initialization</a></span></dt></dl></div><p>
        TTM design background and information belongs here.
26      </p><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1122552508"></a>TTM initialization</h4></div></div></div><div class="warning" style="margin-left: 0.5in; margin-right: 0.5in;"><h3 class="title">Warning</h3><p>This section is outdated.</p></div><p>
          Drivers wishing to support TTM must fill out a drm_bo_driver
          structure. The structure contains several fields with function
          pointers for initializing the TTM, allocating and freeing memory,
          waiting for command completion and fence synchronization, and memory
          migration. See the radeon_ttm.c file for an example of usage.
        </p><p>
          The ttm_global_reference structure is made up of several fields:
        </p><pre class="programlisting">
          struct ttm_global_reference {
                  enum ttm_global_types global_type;
                  size_t size;
                  void *object;
                  int (*init) (struct ttm_global_reference *);
                  void (*release) (struct ttm_global_reference *);
          };
        </pre><p>
          There should be one global reference structure for your memory
          manager as a whole, and there will be others for each object
          created by the memory manager at runtime.  Your global TTM should
          have a type of TTM_GLOBAL_TTM_MEM.  The size field for the global
          object should be sizeof(struct ttm_mem_global), and the init and
          release hooks should point at your driver-specific init and
          release routines, which probably eventually call
          ttm_mem_global_init and ttm_mem_global_release, respectively.
        </p><p>
          Once your global TTM accounting structure is set up and initialized
          by calling ttm_global_item_ref() on it,
          you need to create a buffer object TTM to
          provide a pool for buffer object allocation by clients and the
          kernel itself.  The type of this object should be TTM_GLOBAL_TTM_BO,
          and its size should be sizeof(struct ttm_bo_global).  Again,
          driver-specific init and release functions may be provided,
          likely eventually calling ttm_bo_global_init() and
          ttm_bo_global_release(), respectively.  Also, like the previous
          object, ttm_global_item_ref() is used to create an initial reference
          count for the TTM, which will call your initialization function.
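        </p><p>
          As a hedged illustration of the flow described above (this section is
          outdated, so treat this as a sketch only; the foo_ names are
          hypothetical and radeon_ttm.c remains the reference implementation),
          setting up the global memory accounting reference could look like this:
        </p><pre class="programlisting">
static int foo_ttm_mem_global_init(struct ttm_global_reference *ref)
{
        return ttm_mem_global_init(ref-&gt;object);
}

static void foo_ttm_mem_global_release(struct ttm_global_reference *ref)
{
        ttm_mem_global_release(ref-&gt;object);
}

static int foo_ttm_global_init(struct foo_device *fdev)
{
        struct ttm_global_reference *global_ref = &amp;fdev-&gt;mem_global_ref;
        int ret;

        global_ref-&gt;global_type = TTM_GLOBAL_TTM_MEM;
        global_ref-&gt;size = sizeof(struct ttm_mem_global);
        global_ref-&gt;init = &amp;foo_ttm_mem_global_init;
        global_ref-&gt;release = &amp;foo_ttm_mem_global_release;

        ret = ttm_global_item_ref(global_ref);
        if (ret)
                return ret;

        /*
         * The TTM_GLOBAL_TTM_BO reference is set up the same way, with
         * size = sizeof(struct ttm_bo_global) and init/release hooks that
         * eventually call ttm_bo_global_init()/ttm_bo_global_release().
         */
        return 0;
}
        </pre><p>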
63        </p></div></div><div class="sect2"><div class="titlepage"><div><div><h3 class="title"><a name="drm-gem"></a>The Graphics Execution Manager (GEM)</h3></div></div></div><div class="toc"><dl class="toc"><dt><span class="sect3"><a href="drm-memory-management.html#idp1122562004">GEM Initialization</a></span></dt><dt><span class="sect3"><a href="drm-memory-management.html#idp1122564252">GEM Objects Creation</a></span></dt><dt><span class="sect3"><a href="drm-memory-management.html#idp1122572524">GEM Objects Lifetime</a></span></dt><dt><span class="sect3"><a href="drm-memory-management.html#idp1122578276">GEM Objects Naming</a></span></dt><dt><span class="sect3"><a href="drm-memory-management.html#drm-gem-objects-mapping">GEM Objects Mapping</a></span></dt><dt><span class="sect3"><a href="drm-memory-management.html#idp1122595332">Memory Coherency</a></span></dt><dt><span class="sect3"><a href="drm-memory-management.html#idp1122598748">Command Execution</a></span></dt><dt><span class="sect3"><a href="drm-memory-management.html#idp1122600572">GEM Function Reference</a></span></dt></dl></div><p>
        The GEM design approach has resulted in a memory manager that doesn't
        provide full coverage of all (or even all common) use cases in its
        userspace or kernel API. GEM exposes a set of standard memory-related
        operations to userspace and a set of helper functions to drivers, and lets
        drivers implement hardware-specific operations with their own private API.
      </p><p>
        The GEM userspace API is described in the
        <a class="ulink" href="http://lwn.net/Articles/283798/" target="_top"><em class="citetitle">GEM - the Graphics
        Execution Manager</em></a> article on LWN. While slightly
        outdated, the document provides a good overview of the GEM API principles.
        Buffer allocation and read and write operations, described as part of the
        common GEM API, are currently implemented using driver-specific ioctls.
      </p><p>
        GEM is data-agnostic. It manages abstract buffer objects without knowing
        what individual buffers contain. APIs that require knowledge of buffer
        contents or purpose, such as buffer allocation or synchronization
        primitives, are thus outside of the scope of GEM and must be implemented
        using driver-specific ioctls.
      </p><p>
        On a fundamental level, GEM involves several operations:
        </p><div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem">Memory allocation and freeing</li><li class="listitem">Command execution</li><li class="listitem">Aperture management at command execution time</li></ul></div><p>
        Buffer object allocation is relatively straightforward and largely
        provided by Linux's shmem layer, which provides memory to back each
        object.
      </p><p>
        Device-specific operations, such as command execution, pinning, buffer
        read &amp; write, mapping, and domain ownership transfers are left to
        driver-specific ioctls.
92      </p><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1122562004"></a>GEM Initialization</h4></div></div></div><p>
          Drivers that use GEM must set the DRIVER_GEM bit in the struct
          <span class="structname">drm_driver</span>
          <em class="structfield"><code>driver_features</code></em> field. The DRM core will
          then automatically initialize the GEM core before calling the
          <code class="methodname">load</code> operation. Behind the scenes, this will
          create a DRM Memory Manager object which provides an address space
          pool for object allocation.
        </p><p>
          In a KMS configuration, drivers need to allocate and initialize a
          command ring buffer following core GEM initialization if required by
          the hardware. UMA devices usually have what is called a "stolen"
          memory region, which provides space for the initial framebuffer and
          large, contiguous memory regions required by the device. This space is
          typically not managed by GEM, and must be initialized separately into
          its own DRM MM object.
108        </p></div><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1122564252"></a>GEM Objects Creation</h4></div></div></div><p>
          GEM splits creation of GEM objects and allocation of the memory that
          backs them into two distinct operations.
        </p><p>
          GEM objects are represented by an instance of struct
          <span class="structname">drm_gem_object</span>. Drivers usually need to extend
          GEM objects with private information and thus create a driver-specific
          GEM object structure type that embeds an instance of struct
          <span class="structname">drm_gem_object</span>.
        </p><p>
          To create a GEM object, a driver allocates memory for an instance of its
          specific GEM object type and initializes the embedded struct
          <span class="structname">drm_gem_object</span> with a call to
          <code class="function">drm_gem_object_init</code>. The function takes a pointer to
          the DRM device, a pointer to the GEM object and the buffer object size
          in bytes.
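        </p><p>
          For illustration only, a driver-specific GEM object type and its
          creation helper could look as follows. The foo_ names and the pages
          field are hypothetical and only serve to show the embedding and the
          <code class="function">drm_gem_object_init</code> call.
        </p><pre class="programlisting">
struct foo_gem_object {
        struct drm_gem_object base;
        struct page **pages;            /* driver-specific bookkeeping */
};

static struct foo_gem_object *foo_gem_create(struct drm_device *dev,
                                             size_t size)
{
        struct foo_gem_object *obj;
        int ret;

        size = PAGE_ALIGN(size);

        obj = kzalloc(sizeof(*obj), GFP_KERNEL);
        if (!obj)
                return ERR_PTR(-ENOMEM);

        ret = drm_gem_object_init(dev, &amp;obj-&gt;base, size);
        if (ret) {
                kfree(obj);
                return ERR_PTR(ret);
        }

        return obj;
}
        </pre><p>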
        </p><p>
          GEM uses shmem to allocate anonymous pageable memory.
          <code class="function">drm_gem_object_init</code> will create an shmfs file of
          the requested size and store it into the struct
          <span class="structname">drm_gem_object</span> <em class="structfield"><code>filp</code></em>
          field. The memory is used as either main storage for the object when the
          graphics hardware uses system memory directly or as a backing store
          otherwise.
        </p><p>
          Drivers are responsible for the actual physical pages allocation by
          calling <code class="function">shmem_read_mapping_page_gfp</code> for each page.
          Note that they can decide to allocate pages when initializing the GEM
          object, or to delay allocation until the memory is needed (for instance
          when a page fault occurs as a result of a userspace memory access or
          when the driver needs to start a DMA transfer involving the memory).
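        </p><p>
          A minimal sketch of pinning all backing pages at once, reusing the
          hypothetical foo_gem_object wrapper introduced above, could look like
          this:
        </p><pre class="programlisting">
static int foo_gem_get_pages(struct foo_gem_object *obj)
{
        struct drm_gem_object *gem_obj = &amp;obj-&gt;base;
        struct address_space *mapping = file_inode(gem_obj-&gt;filp)-&gt;i_mapping;
        unsigned long i, npages = gem_obj-&gt;size &gt;&gt; PAGE_SHIFT;
        struct page *page;

        obj-&gt;pages = drm_malloc_ab(npages, sizeof(*obj-&gt;pages));
        if (!obj-&gt;pages)
                return -ENOMEM;

        for (i = 0; i &lt; npages; i++) {
                page = shmem_read_mapping_page_gfp(mapping, i, GFP_KERNEL);
                if (IS_ERR(page))
                        goto err;
                obj-&gt;pages[i] = page;
        }

        return 0;

err:
        /* Unwind: release the pages pinned so far. */
        while (i--)
                put_page(obj-&gt;pages[i]);
        drm_free_large(obj-&gt;pages);
        obj-&gt;pages = NULL;
        return PTR_ERR(page);
}
        </pre><p>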
        </p><p>
          Anonymous pageable memory allocation is not always desired, for instance
          when the hardware requires physically contiguous system memory as is
          often the case in embedded devices. Drivers can create GEM objects with
          no shmfs backing (called private GEM objects) by initializing them with
          a call to <code class="function">drm_gem_private_object_init</code> instead of
          <code class="function">drm_gem_object_init</code>. Storage for private GEM
          objects must be managed by drivers.
        </p><p>
          Drivers that do not need to extend GEM objects with private information
          can call the <code class="function">drm_gem_object_alloc</code> function to
          allocate and initialize a struct <span class="structname">drm_gem_object</span>
          instance. The GEM core will call the optional driver
          <code class="methodname">gem_init_object</code> operation after initializing
          the GEM object with <code class="function">drm_gem_object_init</code>.
          </p><pre class="synopsis">int (*gem_init_object) (struct drm_gem_object *obj);</pre><p>
        </p><p>
          No alloc-and-init function exists for private GEM objects.
157        </p></div><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1122572524"></a>GEM Objects Lifetime</h4></div></div></div><p>
          All GEM objects are reference-counted by the GEM core. References can be
          acquired and released by calling <code class="function">drm_gem_object_reference</code>
          and <code class="function">drm_gem_object_unreference</code> respectively. The
          caller must hold the <span class="structname">drm_device</span>
          <em class="structfield"><code>struct_mutex</code></em> lock. As a convenience, GEM
          provides the <code class="function">drm_gem_object_reference_unlocked</code> and
          <code class="function">drm_gem_object_unreference_unlocked</code> functions that
          can be called without holding the lock.
        </p><p>
          When the last reference to a GEM object is released the GEM core calls
          the <span class="structname">drm_driver</span>
          <code class="methodname">gem_free_object</code> operation. That operation is
          mandatory for GEM-enabled drivers and must free the GEM object and all
          associated resources.
        </p><p>
          </p><pre class="synopsis">void (*gem_free_object) (struct drm_gem_object *obj);</pre><p>
          Drivers are responsible for freeing all GEM object resources, including
          the resources created by the GEM core. If an mmap offset has been
          created for the object (in which case
          <span class="structname">drm_gem_object</span>::<em class="structfield"><code>map_list</code></em>::<em class="structfield"><code>map</code></em>
          is not NULL) it must be freed by a call to
          <code class="function">drm_gem_free_mmap_offset</code>. The shmfs backing store
          must be released by calling <code class="function">drm_gem_object_release</code>
          (that function can safely be called if no shmfs backing store has been
          created).
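        </p><p>
          A hedged sketch of such a <code class="methodname">gem_free_object</code>
          implementation for a shmem-backed driver, reusing the hypothetical
          foo_gem_object wrapper from above, could be:
        </p><pre class="programlisting">
static void foo_gem_free_object(struct drm_gem_object *gem_obj)
{
        struct foo_gem_object *obj =
                container_of(gem_obj, struct foo_gem_object, base);

        /* Pages pinned by foo_gem_get_pages() would be released here too. */

        /* Release the fake mmap offset, if one was created. */
        if (gem_obj-&gt;map_list.map)
                drm_gem_free_mmap_offset(gem_obj);

        /* Safe to call even if no shmfs backing store was created. */
        drm_gem_object_release(gem_obj);

        kfree(obj);
}
        </pre><p>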
183        </p></div><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1122578276"></a>GEM Objects Naming</h4></div></div></div><p>
          Communication between userspace and the kernel refers to GEM objects
          using local handles, global names or, more recently, file descriptors.
          All of those are 32-bit integer values; the usual Linux kernel limits
          apply to the file descriptors.
        </p><p>
          GEM handles are local to a DRM file. Applications get a handle to a GEM
          object through a driver-specific ioctl, and can use that handle to refer
          to the GEM object in other standard or driver-specific ioctls. Closing a
          DRM file handle frees all its GEM handles and dereferences the
          associated GEM objects.
        </p><p>
          To create a handle for a GEM object drivers call
          <code class="function">drm_gem_handle_create</code>. The function takes a pointer
          to the DRM file and the GEM object and returns a locally unique handle.
          When the handle is no longer needed drivers delete it with a call to
          <code class="function">drm_gem_handle_delete</code>. Finally the GEM object
          associated with a handle can be retrieved by a call to
          <code class="function">drm_gem_object_lookup</code>.
        </p><p>
          Handles don't take ownership of GEM objects, they only take a reference
          to the object that will be dropped when the handle is destroyed. To
          avoid leaking GEM objects, drivers must make sure they drop the
          reference(s) they own (such as the initial reference taken at object
          creation time) as appropriate, without any special consideration for the
          handle. For example, in the particular case of combined GEM object and
          handle creation in the implementation of the
          <code class="methodname">dumb_create</code> operation, drivers must drop the
          initial reference to the GEM object before returning the handle.
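        </p><p>
          As an illustration, a hypothetical <code class="methodname">dumb_create</code>
          implementation (the foo_ helpers are the sketches introduced earlier,
          and the pitch alignment is an arbitrary example value) could look like
          this:
        </p><pre class="programlisting">
static int foo_dumb_create(struct drm_file *file_priv, struct drm_device *dev,
                           struct drm_mode_create_dumb *args)
{
        struct foo_gem_object *obj;
        int ret;

        args-&gt;pitch = ALIGN(args-&gt;width * DIV_ROUND_UP(args-&gt;bpp, 8), 64);
        args-&gt;size = (u64)args-&gt;pitch * args-&gt;height;

        obj = foo_gem_create(dev, args-&gt;size);
        if (IS_ERR(obj))
                return PTR_ERR(obj);

        ret = drm_gem_handle_create(file_priv, &amp;obj-&gt;base, &amp;args-&gt;handle);
        /* Drop the initial reference; on success the handle keeps its own. */
        drm_gem_object_unreference_unlocked(&amp;obj-&gt;base);

        return ret;
}
        </pre><p>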
        </p><p>
          GEM names are similar in purpose to handles but are not local to DRM
          files. They can be passed between processes to reference a GEM object
          globally. Names can't be used directly to refer to objects in the DRM
          API; applications must convert handles to names and names to handles
          using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
          respectively. The conversion is handled by the DRM core without any
          driver-specific support.
        </p><p>
          GEM also supports buffer sharing with dma-buf file descriptors through
          PRIME. GEM-based drivers must use the provided helper functions to
          implement the exporting and importing correctly. See <a class="xref" href="drm-memory-management.html#drm-prime-support" title="PRIME Buffer Sharing">the section called &#8220;PRIME Buffer Sharing&#8221;</a>.
          Since sharing file descriptors is inherently more secure than the
          easily guessable and global GEM names, it is the preferred buffer
          sharing mechanism. Sharing buffers through GEM names is only supported
          for legacy userspace. Furthermore, PRIME also allows cross-device
          buffer sharing since it is based on dma-bufs.
229        </p></div><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="drm-gem-objects-mapping"></a>GEM Objects Mapping</h4></div></div></div><p>
          Because mapping operations are fairly heavyweight, GEM favours
          read/write-like access to buffers, implemented through driver-specific
          ioctls, over mapping buffers to userspace. However, when random access
          to the buffer is needed (to perform software rendering for instance),
          direct access to the object can be more efficient.
        </p><p>
          The mmap system call can't be used directly to map GEM objects, as they
          don't have their own file handle. Two alternative methods currently
          co-exist to map GEM objects to userspace. The first method uses a
          driver-specific ioctl to perform the mapping operation, calling
          <code class="function">do_mmap</code> under the hood. This is often considered
          dubious, seems to be discouraged for new GEM-enabled drivers, and will
          thus not be described here.
        </p><p>
          The second method uses the mmap system call on the DRM file handle.
          </p><pre class="synopsis">void *mmap(void *addr, size_t length, int prot, int flags, int fd,
           off_t offset);</pre><p>
          DRM identifies the GEM object to be mapped by a fake offset passed
          through the mmap offset argument. Prior to being mapped, a GEM object
          must thus be associated with a fake offset. To do so, drivers must call
          <code class="function">drm_gem_create_mmap_offset</code> on the object. The
          function allocates a fake offset range from a pool and stores the
          offset divided by PAGE_SIZE in
          <code class="literal">obj-&gt;map_list.hash.key</code>. Care must be taken not to
          call <code class="function">drm_gem_create_mmap_offset</code> if a fake offset
          has already been allocated for the object. This can be tested by
          <code class="literal">obj-&gt;map_list.map</code> being non-NULL.
        </p><p>
          Once allocated, the fake offset value
          (<code class="literal">obj-&gt;map_list.hash.key &lt;&lt; PAGE_SHIFT</code>)
          must be passed to the application in a driver-specific way and can then
          be used as the mmap offset argument.
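        </p><p>
          As a hedged example, a driver-specific ioctl returning the fake offset
          for a handle might look as follows; the args structure is a
          hypothetical piece of driver UAPI.
        </p><pre class="programlisting">
static int foo_gem_map_offset_ioctl(struct drm_device *dev, void *data,
                                    struct drm_file *file_priv)
{
        struct drm_foo_gem_map_offset *args = data;     /* hypothetical UAPI */
        struct drm_gem_object *obj;
        int ret = 0;

        obj = drm_gem_object_lookup(dev, file_priv, args-&gt;handle);
        if (!obj)
                return -ENOENT;

        if (!obj-&gt;map_list.map)
                ret = drm_gem_create_mmap_offset(obj);

        if (!ret)
                args-&gt;offset = (__u64)obj-&gt;map_list.hash.key &lt;&lt; PAGE_SHIFT;

        drm_gem_object_unreference_unlocked(obj);
        return ret;
}
        </pre><p>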
        </p><p>
          The GEM core provides a helper method <code class="function">drm_gem_mmap</code>
          to handle object mapping. The method can be set directly as the mmap
          file operation handler. It will look up the GEM object based on the
          offset value and set the VMA operations to the
          <span class="structname">drm_driver</span> <em class="structfield"><code>gem_vm_ops</code></em>
          field. Note that <code class="function">drm_gem_mmap</code> doesn't map memory to
          userspace, but relies on the driver-provided fault handler to map pages
          individually.
        </p><p>
          To use <code class="function">drm_gem_mmap</code>, drivers must fill the struct
          <span class="structname">drm_driver</span> <em class="structfield"><code>gem_vm_ops</code></em>
          field with a pointer to VM operations.
        </p><p>
          </p><pre class="synopsis">struct vm_operations_struct *gem_vm_ops

struct vm_operations_struct {
        void (*open)(struct vm_area_struct * area);
        void (*close)(struct vm_area_struct * area);
        int (*fault)(struct vm_area_struct *vma, struct vm_fault *vmf);
};</pre><p>
        </p><p>
          The <code class="methodname">open</code> and <code class="methodname">close</code>
          operations must update the GEM object reference count. Drivers can use
          the <code class="function">drm_gem_vm_open</code> and
          <code class="function">drm_gem_vm_close</code> helper functions directly as open
          and close handlers.
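        </p><p>
          For instance, a driver could point <em class="structfield"><code>gem_vm_ops</code></em>
          at the following table; foo_gem_fault is the hypothetical fault handler
          sketched below.
        </p><pre class="programlisting">
static const struct vm_operations_struct foo_gem_vm_ops = {
        .fault = foo_gem_fault,
        .open  = drm_gem_vm_open,
        .close = drm_gem_vm_close,
};
        </pre><p>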
        </p><p>
          The fault operation handler is responsible for mapping individual pages
          to userspace when a page fault occurs. Depending on the memory
          allocation scheme, drivers can allocate pages at fault time, or can
          decide to allocate memory for the GEM object at the time the object is
          created.
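        </p><p>
          A minimal sketch of such a fault handler, assuming the driver pinned
          all pages at object creation time into the hypothetical
          foo_gem_object pages array, could be:
        </p><pre class="programlisting">
static int foo_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
        struct drm_gem_object *gem_obj = vma-&gt;vm_private_data;
        struct foo_gem_object *obj =
                container_of(gem_obj, struct foo_gem_object, base);
        unsigned long address = (unsigned long)vmf-&gt;virtual_address;
        pgoff_t pgoff = (address - vma-&gt;vm_start) &gt;&gt; PAGE_SHIFT;
        int ret;

        /* Map a single page into the faulting address. */
        ret = vm_insert_pfn(vma, address, page_to_pfn(obj-&gt;pages[pgoff]));

        switch (ret) {
        case 0:
        case -EBUSY:
                return VM_FAULT_NOPAGE;
        case -ENOMEM:
                return VM_FAULT_OOM;
        default:
                return VM_FAULT_SIGBUS;
        }
}
        </pre><p>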
        </p><p>
          Drivers that want to map the GEM object upfront instead of handling page
          faults can implement their own mmap file operation handler.
298        </p></div><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1122595332"></a>Memory Coherency</h4></div></div></div><p>
          When mapped to the device or used in a command buffer, backing pages
          for an object are flushed to memory and marked write combined so as to
          be coherent with the GPU. Likewise, if the CPU accesses an object
          after the GPU has finished rendering to the object, then the object
          must be made coherent with the CPU's view of memory, usually involving
          GPU cache flushing of various kinds. This core CPU&lt;-&gt;GPU
          coherency management is provided by a device-specific ioctl, which
          evaluates an object's current domain and performs any necessary
          flushing or synchronization to put the object into the desired
          coherency domain (note that the object may be busy, i.e. an active
          render target; in that case, setting the domain blocks the client and
          waits for rendering to complete before performing any necessary
          flushing operations).
312        </p></div><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1122598748"></a>Command Execution</h4></div></div></div><p>
          Perhaps the most important GEM function for GPU devices is providing a
          command execution interface to clients. Client programs construct
          command buffers containing references to previously allocated memory
          objects, and then submit them to GEM. At that point, GEM takes care to
          bind all the objects into the GTT, execute the buffer, and provide
          necessary synchronization between clients accessing the same buffers.
          This often involves evicting some objects from the GTT and re-binding
          others (a fairly expensive operation), and providing relocation
          support which hides fixed GTT offsets from clients. Clients must take
          care not to submit command buffers that reference more objects than
          can fit in the GTT; otherwise, GEM will reject them and no rendering
          will occur. Similarly, if several objects in the buffer require fence
          registers to be allocated for correct rendering (e.g. 2D blits on
          pre-965 chips), care must be taken not to require more fence registers
          than are available to the client. Such resource management should be
          abstracted from the client in libdrm.
329        </p></div><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1122600572"></a>GEM Function Reference</h4></div></div></div></div></div><div class="sect2"><div class="titlepage"><div><div><h3 class="title"><a name="idp1122718668"></a>VMA Offset Manager</h3></div></div></div><div class="toc"><dl class="toc"><dt><span class="refentrytitle"><a href="API-drm-vma-offset-manager-init.html"><span class="phrase">drm_vma_offset_manager_init</span></a></span><span class="refpurpose"> &#8212; 
330  Initialize new offset-manager
331 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-offset-manager-destroy.html"><span class="phrase">drm_vma_offset_manager_destroy</span></a></span><span class="refpurpose"> &#8212; 
332     Destroy offset manager
333 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-offset-lookup.html"><span class="phrase">drm_vma_offset_lookup</span></a></span><span class="refpurpose"> &#8212; 
334     Find node in offset space
335 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-offset-lookup-locked.html"><span class="phrase">drm_vma_offset_lookup_locked</span></a></span><span class="refpurpose"> &#8212; 
336     Find node in offset space
337 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-offset-add.html"><span class="phrase">drm_vma_offset_add</span></a></span><span class="refpurpose"> &#8212; 
338     Add offset node to manager
339 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-offset-remove.html"><span class="phrase">drm_vma_offset_remove</span></a></span><span class="refpurpose"> &#8212; 
340     Remove offset node from manager
341 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-node-allow.html"><span class="phrase">drm_vma_node_allow</span></a></span><span class="refpurpose"> &#8212; 
342     Add open-file to list of allowed users
343 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-node-revoke.html"><span class="phrase">drm_vma_node_revoke</span></a></span><span class="refpurpose"> &#8212; 
344     Remove open-file from list of allowed users
345 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-node-is-allowed.html"><span class="phrase">drm_vma_node_is_allowed</span></a></span><span class="refpurpose"> &#8212; 
346     Check whether an open-file is granted access
347 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-offset-exact-lookup.html"><span class="phrase">drm_vma_offset_exact_lookup</span></a></span><span class="refpurpose"> &#8212; 
348  Look up node by exact address
349 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-offset-lock-lookup.html"><span class="phrase">drm_vma_offset_lock_lookup</span></a></span><span class="refpurpose"> &#8212; 
350     Lock lookup for extended private use
351 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-offset-unlock-lookup.html"><span class="phrase">drm_vma_offset_unlock_lookup</span></a></span><span class="refpurpose"> &#8212; 
352     Unlock lookup for extended private use
353 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-node-reset.html"><span class="phrase">drm_vma_node_reset</span></a></span><span class="refpurpose"> &#8212; 
354     Initialize or reset node object
355 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-node-start.html"><span class="phrase">drm_vma_node_start</span></a></span><span class="refpurpose"> &#8212; 
356     Return start address for page-based addressing
357 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-node-size.html"><span class="phrase">drm_vma_node_size</span></a></span><span class="refpurpose"> &#8212; 
358     Return size (page-based)
359 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-node-has-offset.html"><span class="phrase">drm_vma_node_has_offset</span></a></span><span class="refpurpose"> &#8212; 
360     Check whether node is added to offset manager
361 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-node-offset-addr.html"><span class="phrase">drm_vma_node_offset_addr</span></a></span><span class="refpurpose"> &#8212; 
362     Return sanitized offset for user-space mmaps
363 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-node-unmap.html"><span class="phrase">drm_vma_node_unmap</span></a></span><span class="refpurpose"> &#8212; 
364     Unmap offset node
365 </span></dt><dt><span class="refentrytitle"><a href="API-drm-vma-node-verify-access.html"><span class="phrase">drm_vma_node_verify_access</span></a></span><span class="refpurpose"> &#8212; 
366     Access verification helper for TTM
367 </span></dt></dl></div><p>
   </p><p>
   The vma-manager is responsible for mapping arbitrary driver-dependent memory
   regions into the linear user address-space. It provides offsets to the
   caller which can then be used on the address_space of the drm-device. It
   takes care not to overlap regions, sizes them appropriately and does not
   confuse mm-core with inconsistent fake vm_pgoff fields.
   Drivers shouldn't use this for object placement in VMEM. This manager should
   only be used to manage mappings into linear user-space VMs.
   </p><p>
   We use drm_mm as the backend to manage object allocations. But it is highly
   optimized for alloc/free calls, not lookups. Hence, we use an rb-tree to
   speed up offset lookups.
   </p><p>
   You must not use multiple offset managers on a single address_space.
   Otherwise, mm-core will be unable to tear down memory mappings as the VM will
   no longer be linear.
   </p><p>
   This offset manager works on page-based addresses. That is, every argument
   and return code (with the exception of <code class="function">drm_vma_node_offset_addr</code>) is given
   in number of pages, not number of bytes. That means, object sizes and offsets
   must always be page-aligned (as usual).
   If you want to get a valid byte-based user-space address for a given offset,
   please see <code class="function">drm_vma_node_offset_addr</code>.
   </p><p>
   In addition to offset management, the vma offset manager also handles access
   management. For every open-file context that is allowed to access a given
   node, you must call <code class="function">drm_vma_node_allow</code>. Otherwise, an <code class="function">mmap</code> call on this
   open-file with the offset of the node will fail with -EACCES. To revoke
   access again, use <code class="function">drm_vma_node_revoke</code>. However, the caller is responsible
   for destroying already existing mappings, if required.
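   </p><p>
   A hedged sketch of how a driver might wire an object into an offset manager
   follows; the foo_ names are hypothetical, and note that in this kernel
   version <code class="function">drm_vma_node_allow</code> takes the struct file of the
   DRM open-file.
   </p><pre class="programlisting">
/* One offset manager per address_space, typically embedded in the device. */
static struct drm_vma_offset_manager foo_vma_manager;

static void foo_vma_manager_setup(void)
{
        drm_vma_offset_manager_init(&amp;foo_vma_manager,
                                    DRM_FILE_PAGE_OFFSET_START,
                                    DRM_FILE_PAGE_OFFSET_SIZE);
}

static int foo_object_map_offset(struct drm_vma_offset_node *node,
                                 struct drm_file *file_priv,
                                 unsigned long npages, __u64 *offset)
{
        int ret;

        /* Allocate a page-based offset range for the object. */
        ret = drm_vma_offset_add(&amp;foo_vma_manager, node, npages);
        if (ret)
                return ret;

        /* Grant this open-file access to the node. */
        ret = drm_vma_node_allow(node, file_priv-&gt;filp);
        if (ret) {
                drm_vma_offset_remove(&amp;foo_vma_manager, node);
                return ret;
        }

        /* Byte-based offset that userspace passes to mmap(). */
        *offset = drm_vma_node_offset_addr(node);
        return 0;
}
   </pre><p>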
398</p></div><div class="sect2"><div class="titlepage"><div><div><h3 class="title"><a name="drm-prime-support"></a>PRIME Buffer Sharing</h3></div></div></div><div class="toc"><dl class="toc"><dt><span class="sect3"><a href="drm-memory-management.html#idp1122900948">Overview and Driver Interface</a></span></dt><dt><span class="sect3"><a href="drm-memory-management.html#idp1122906844">PRIME Helper Functions</a></span></dt></dl></div><p>
        PRIME is the cross-device buffer sharing framework in drm, originally
        created for the OPTIMUS range of multi-gpu platforms. To userspace
        PRIME buffers are dma-buf based file descriptors.
402      </p><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1122900948"></a>Overview and Driver Interface</h4></div></div></div><p>
          Similar to GEM global names, PRIME file descriptors are
          also used to share buffer objects across processes. They offer
          additional security: as file descriptors must be explicitly sent over
          UNIX domain sockets to be shared between applications, they can't be
          guessed like the globally unique GEM names.
        </p><p>
          Drivers that support the PRIME
          API must set the DRIVER_PRIME bit in the struct
          <span class="structname">drm_driver</span>
          <em class="structfield"><code>driver_features</code></em> field, and implement the
          <code class="methodname">prime_handle_to_fd</code> and
          <code class="methodname">prime_fd_to_handle</code> operations.
        </p><p>
          </p><pre class="synopsis">int (*prime_handle_to_fd)(struct drm_device *dev,
                          struct drm_file *file_priv, uint32_t handle,
                          uint32_t flags, int *prime_fd);
int (*prime_fd_to_handle)(struct drm_device *dev,
                          struct drm_file *file_priv, int prime_fd,
                          uint32_t *handle);</pre><p>
            Those two operations convert a handle to a PRIME file descriptor and
            vice versa. Drivers must use the kernel dma-buf buffer sharing framework
            to manage the PRIME file descriptors. Similar to the mode setting
            API, PRIME is agnostic to the underlying buffer object manager, as
            long as handles are 32-bit unsigned integers.
          </p><p>
            While non-GEM drivers must implement the operations themselves, GEM
            drivers must use the <code class="function">drm_gem_prime_handle_to_fd</code>
            and <code class="function">drm_gem_prime_fd_to_handle</code> helper functions.
            Those helpers rely on the driver
            <code class="methodname">gem_prime_export</code> and
            <code class="methodname">gem_prime_import</code> operations to create a dma-buf
            instance from a GEM object (dma-buf exporter role) and to create a GEM
            object from a dma-buf instance (dma-buf importer role).
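          </p><p>
            As a sketch of how these pieces typically fit together in a GEM
            driver (the foo_ name is hypothetical):
          </p><pre class="programlisting">
static struct drm_driver foo_drm_driver = {
        .driver_features    = DRIVER_GEM | DRIVER_PRIME,
        /* PRIME export/import entry points, implemented by the GEM helpers. */
        .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
        .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
        .gem_prime_export   = drm_gem_prime_export,
        .gem_prime_import   = drm_gem_prime_import,
        /* ... remaining driver operations ... */
};
          </pre><p>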
          </p><p>
            </p><pre class="synopsis">struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
                             struct drm_gem_object *obj,
                             int flags);
struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
                                            struct dma_buf *dma_buf);</pre><p>
            These two operations are mandatory for GEM drivers that support
            PRIME.
444          </p></div><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1122906844"></a>PRIME Helper Functions</h4></div></div></div><p>
   </p><p>
   Drivers can implement <em class="parameter"><code>gem_prime_export</code></em> and <em class="parameter"><code>gem_prime_import</code></em> in terms of
   simpler APIs by using the helper functions <em class="parameter"><code>drm_gem_prime_export</code></em> and
   <em class="parameter"><code>drm_gem_prime_import</code></em>.  These functions implement dma-buf support in terms of
   five lower-level driver callbacks:
   </p><p>
   Export callbacks:
   </p><p>
   - <em class="parameter"><code>gem_prime_pin</code></em> (optional): prepare a GEM object for exporting
   </p><p>
   - <em class="parameter"><code>gem_prime_get_sg_table</code></em>: provide a scatter/gather table of pinned pages
   </p><p>
   - <em class="parameter"><code>gem_prime_vmap</code></em>: vmap a buffer exported by your driver
   </p><p>
   - <em class="parameter"><code>gem_prime_vunmap</code></em>: vunmap a buffer exported by your driver
   </p><p>
   Import callback:
   </p><p>
   - <em class="parameter"><code>gem_prime_import_sg_table</code></em> (import): produce a GEM object from another
   driver's scatter/gather table
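   </p><p>
   For example, a driver built on the GEM/CMA helpers documented at the end of
   this chapter could fill these lower-level callbacks entirely with library
   functions (continuing the hypothetical foo_drm_driver sketched above;
   gem_prime_pin is optional and not needed in that case):
   </p><pre class="programlisting">
        .gem_prime_get_sg_table    = drm_gem_cma_prime_get_sg_table,
        .gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table,
        .gem_prime_vmap            = drm_gem_cma_prime_vmap,
        .gem_prime_vunmap          = drm_gem_cma_prime_vunmap,
   </pre><p>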
465</p></div></div><div class="sect2"><div class="titlepage"><div><div><h3 class="title"><a name="idp1122911796"></a>PRIME Function References</h3></div></div></div><div class="toc"><dl class="toc"><dt><span class="refentrytitle"><a href="API-drm-gem-dmabuf-release.html"><span class="phrase">drm_gem_dmabuf_release</span></a></span><span class="refpurpose"> &#8212; 
466  dma_buf release implementation for GEM
467 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-prime-export.html"><span class="phrase">drm_gem_prime_export</span></a></span><span class="refpurpose"> &#8212; 
468     helper library implementation of the export callback
469 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-prime-handle-to-fd.html"><span class="phrase">drm_gem_prime_handle_to_fd</span></a></span><span class="refpurpose"> &#8212; 
470     PRIME export function for GEM drivers
471 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-prime-import.html"><span class="phrase">drm_gem_prime_import</span></a></span><span class="refpurpose"> &#8212; 
472     helper library implementation of the import callback
473 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-prime-fd-to-handle.html"><span class="phrase">drm_gem_prime_fd_to_handle</span></a></span><span class="refpurpose"> &#8212; 
474     PRIME import function for GEM drivers
475 </span></dt><dt><span class="refentrytitle"><a href="API-drm-prime-pages-to-sg.html"><span class="phrase">drm_prime_pages_to_sg</span></a></span><span class="refpurpose"> &#8212; 
476     converts a page array into an sg list
477 </span></dt><dt><span class="refentrytitle"><a href="API-drm-prime-sg-to-page-addr-arrays.html"><span class="phrase">drm_prime_sg_to_page_addr_arrays</span></a></span><span class="refpurpose"> &#8212; 
478     convert an sg table into a page array
479 </span></dt><dt><span class="refentrytitle"><a href="API-drm-prime-gem-destroy.html"><span class="phrase">drm_prime_gem_destroy</span></a></span><span class="refpurpose"> &#8212; 
480     helper to clean up a PRIME-imported GEM object
481 </span></dt></dl></div></div><div class="sect2"><div class="titlepage"><div><div><h3 class="title"><a name="idp1122987828"></a>DRM MM Range Allocator</h3></div></div></div><div class="toc"><dl class="toc"><dt><span class="sect3"><a href="drm-memory-management.html#idp1122988148">Overview</a></span></dt><dt><span class="sect3"><a href="drm-memory-management.html#idp1119445468">LRU Scan/Eviction Support</a></span></dt></dl></div><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1122988148"></a>Overview</h4></div></div></div><p>
   </p><p>
   drm_mm provides a simple range allocator. The drivers are free to use the
   resource allocator from the Linux core if it suits them; the upside of drm_mm
   is that it's in the DRM core, which means that it's easier to extend for
   some of the crazier special purpose needs of GPUs.
   </p><p>
   The main data struct is <span class="structname">drm_mm</span>; allocations are tracked in <span class="structname">drm_mm_node</span>.
   Drivers are free to embed either of them into their own suitable
   datastructures. drm_mm itself will not do any allocations of its own, so if
   drivers choose not to embed nodes they still need to allocate them
   themselves.
   </p><p>
   The range allocator also supports reservation of preallocated blocks. This is
   useful for taking over initial mode setting configurations from the firmware,
   where an object needs to be created which exactly matches the firmware's
   scanout target. As long as the range is still free it can be inserted anytime
   after the allocator is initialized, which helps with avoiding looped
   dependencies in the driver load sequence.
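   </p><p>
   A hedged sketch of such a reservation (the firmware-provided start and size
   values are hypothetical, and their unit must match whatever the allocator
   manages):
   </p><pre class="programlisting">
static int foo_reserve_firmware_fb(struct drm_mm *mm,
                                   unsigned long start, unsigned long size)
{
        struct drm_mm_node *node;
        int ret;

        node = kzalloc(sizeof(*node), GFP_KERNEL);
        if (!node)
                return -ENOMEM;

        node-&gt;start = start;
        node-&gt;size = size;

        /* Fails if the range is no longer free. */
        ret = drm_mm_reserve_node(mm, node);
        if (ret)
                kfree(node);

        return ret;
}
   </pre><p>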
   </p><p>
   drm_mm maintains a stack of most recently freed holes, which of all
   simplistic datastructures seems to be a fairly decent approach to clustering
   allocations and avoiding too much fragmentation. This means free space
   searches are O(num_holes). Given all the fancy features drm_mm supports,
   something better would be fairly complex, and since gfx thrashing is a fairly
   steep cliff this is not a real concern. Removing a node again is O(1).
   </p><p>
   drm_mm supports a few features: Alignment and range restrictions can be
   supplied. Furthermore, every <span class="structname">drm_mm_node</span> has a color value (which is just an
   opaque unsigned long) which in conjunction with a driver callback can be used
   to implement sophisticated placement restrictions. The i915 DRM driver uses
   this to implement guard pages between incompatible caching domains in the
   graphics TT.
   </p><p>
   Two behaviors are supported for searching and allocating: bottom-up and top-down.
   The default is bottom-up. Top-down allocation can be used if the memory area
   has different restrictions, or just to reduce fragmentation.
   </p><p>
   Finally iteration helpers to walk all nodes and all holes are provided as are
   some basic allocator dumpers for debugging.
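   </p><p>
   A minimal usage sketch (hypothetical foo_ helpers; the exact argument types
   of the drm_mm functions vary slightly between kernel versions):
   </p><pre class="programlisting">
static struct drm_mm foo_mm;

static void foo_mm_setup(unsigned long start, unsigned long size)
{
        drm_mm_init(&amp;foo_mm, start, size);
}

static int foo_mm_alloc(struct drm_mm_node *node, unsigned long size)
{
        /* Bottom-up search for a hole of the requested size, page aligned. */
        return drm_mm_insert_node(&amp;foo_mm, node, size, PAGE_SIZE,
                                  DRM_MM_SEARCH_DEFAULT);
}

static void foo_mm_free(struct drm_mm_node *node)
{
        drm_mm_remove_node(node);
}
   </pre><p>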
521</p></div><div class="sect3"><div class="titlepage"><div><div><h4 class="title"><a name="idp1119445468"></a>LRU Scan/Eviction Support</h4></div></div></div><p>
   </p><p>
   Very often GPUs need to have contiguous allocations for a given object. When
   evicting objects to make space for a new one it is therefore not very
   efficient to simply select objects from the tail of an LRU
   until there's a suitable hole: especially for big objects or nodes that
   otherwise have special allocation constraints there's a good chance we evict
   lots of (smaller) objects unnecessarily.
   </p><p>
   The DRM range allocator supports this use-case through the scanning
   interfaces. First a scan operation needs to be initialized with
   <code class="function">drm_mm_init_scan</code> or <code class="function">drm_mm_init_scan_with_range</code>. The driver then adds
   objects to the roster (probably by walking an LRU list, but this can be
   freely implemented) until a suitable hole is found or there's no further
   evictable object.
   </p><p>
   The driver must then walk through all objects again in exactly the reverse
   order to restore the allocator state. Note that while the allocator is used
   in the scan mode no other operation is allowed.
   </p><p>
   Finally the driver evicts all objects selected in the scan. Adding and
   removing an object is O(1), and since freeing a node is also O(1) the overall
   complexity is O(scanned_objects). So like the free stack which needs to be
   walked before a scan operation even begins, this is linear in the number of
   objects. It doesn't seem to hurt badly.
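   </p><p>
   A hedged sketch of the scan workflow (foo_obj, its list links and
   foo_evict() are hypothetical driver constructs):
   </p><pre class="programlisting">
struct foo_obj {
        struct drm_mm_node mm_node;
        struct list_head lru_link;      /* on the driver's LRU list */
        struct list_head scan_link;     /* temporary, used during a scan */
};

static int foo_evict_something(struct drm_mm *mm, struct list_head *lru,
                               unsigned long size, unsigned alignment)
{
        struct foo_obj *obj, *tmp;
        LIST_HEAD(scan_list);
        LIST_HEAD(eviction_list);
        bool found = false;

        drm_mm_init_scan(mm, size, alignment, 0);

        /* Add objects from the LRU until a suitable hole can be formed. */
        list_for_each_entry(obj, lru, lru_link) {
                list_add(&amp;obj-&gt;scan_link, &amp;scan_list);
                if (drm_mm_scan_add_block(&amp;obj-&gt;mm_node)) {
                        found = true;
                        break;
                }
        }

        /*
         * All scanned blocks must be removed again, in exactly the reverse
         * order of addition, before any other allocator operation is allowed.
         * drm_mm_scan_remove_block() returns true for the blocks that have to
         * be evicted to create the hole.
         */
        list_for_each_entry_safe(obj, tmp, &amp;scan_list, scan_link) {
                bool evict = drm_mm_scan_remove_block(&amp;obj-&gt;mm_node);

                if (found &amp;&amp; evict)
                        list_move(&amp;obj-&gt;scan_link, &amp;eviction_list);
                else
                        list_del_init(&amp;obj-&gt;scan_link);
        }

        if (!found)
                return -ENOSPC;

        list_for_each_entry_safe(obj, tmp, &amp;eviction_list, scan_link) {
                list_del_init(&amp;obj-&gt;scan_link);
                foo_evict(obj);         /* unbind, then drm_mm_remove_node() */
        }

        return 0;
}
   </pre><p>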
546</p></div></div><div class="sect2"><div class="titlepage"><div><div><h3 class="title"><a name="idp1119448692"></a>DRM MM Range Allocator Function References</h3></div></div></div><div class="toc"><dl class="toc"><dt><span class="refentrytitle"><a href="API-drm-mm-reserve-node.html"><span class="phrase">drm_mm_reserve_node</span></a></span><span class="refpurpose"> &#8212; 
  insert a pre-initialized node
548 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-insert-node-generic.html"><span class="phrase">drm_mm_insert_node_generic</span></a></span><span class="refpurpose"> &#8212; 
549     search for space and insert <em class="parameter"><code>node</code></em>
550 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-insert-node-in-range-generic.html"><span class="phrase">drm_mm_insert_node_in_range_generic</span></a></span><span class="refpurpose"> &#8212; 
551     ranged search for space and insert <em class="parameter"><code>node</code></em>
552 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-remove-node.html"><span class="phrase">drm_mm_remove_node</span></a></span><span class="refpurpose"> &#8212; 
553     Remove a memory node from the allocator.
554 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-replace-node.html"><span class="phrase">drm_mm_replace_node</span></a></span><span class="refpurpose"> &#8212; 
555     move an allocation from <em class="parameter"><code>old</code></em> to <em class="parameter"><code>new</code></em>
556 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-init-scan.html"><span class="phrase">drm_mm_init_scan</span></a></span><span class="refpurpose"> &#8212; 
557     initialize lru scanning
558 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-init-scan-with-range.html"><span class="phrase">drm_mm_init_scan_with_range</span></a></span><span class="refpurpose"> &#8212; 
559     initialize range-restricted lru scanning
560 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-scan-add-block.html"><span class="phrase">drm_mm_scan_add_block</span></a></span><span class="refpurpose"> &#8212; 
561     add a node to the scan list
562 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-scan-remove-block.html"><span class="phrase">drm_mm_scan_remove_block</span></a></span><span class="refpurpose"> &#8212; 
563     remove a node from the scan list
564 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-clean.html"><span class="phrase">drm_mm_clean</span></a></span><span class="refpurpose"> &#8212; 
565     checks whether an allocator is clean
566 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-init.html"><span class="phrase">drm_mm_init</span></a></span><span class="refpurpose"> &#8212; 
567     initialize a drm-mm allocator
568 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-takedown.html"><span class="phrase">drm_mm_takedown</span></a></span><span class="refpurpose"> &#8212; 
569     clean up a drm_mm allocator
570 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-debug-table.html"><span class="phrase">drm_mm_debug_table</span></a></span><span class="refpurpose"> &#8212; 
571     dump allocator state to dmesg
572 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-dump-table.html"><span class="phrase">drm_mm_dump_table</span></a></span><span class="refpurpose"> &#8212; 
573     dump allocator state to a seq_file
574 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-node-allocated.html"><span class="phrase">drm_mm_node_allocated</span></a></span><span class="refpurpose"> &#8212; 
575  checks whether a node is allocated
576 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-initialized.html"><span class="phrase">drm_mm_initialized</span></a></span><span class="refpurpose"> &#8212; 
577     checks whether an allocator is initialized
578 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-hole-node-start.html"><span class="phrase">drm_mm_hole_node_start</span></a></span><span class="refpurpose"> &#8212; 
579     computes the start of the hole following <em class="parameter"><code>node</code></em>
580 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-hole-node-end.html"><span class="phrase">drm_mm_hole_node_end</span></a></span><span class="refpurpose"> &#8212; 
581     computes the end of the hole following <em class="parameter"><code>node</code></em>
582 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-for-each-node.html"><span class="phrase">drm_mm_for_each_node</span></a></span><span class="refpurpose"> &#8212; 
583     iterator to walk over all allocated nodes
584 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-for-each-hole.html"><span class="phrase">drm_mm_for_each_hole</span></a></span><span class="refpurpose"> &#8212; 
585     iterator to walk over all holes
586 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-insert-node.html"><span class="phrase">drm_mm_insert_node</span></a></span><span class="refpurpose"> &#8212; 
587     search for space and insert <em class="parameter"><code>node</code></em>
588 </span></dt><dt><span class="refentrytitle"><a href="API-drm-mm-insert-node-in-range.html"><span class="phrase">drm_mm_insert_node_in_range</span></a></span><span class="refpurpose"> &#8212; 
589     ranged search for space and insert <em class="parameter"><code>node</code></em>
590 </span></dt></dl></div></div><div class="sect2"><div class="titlepage"><div><div><h3 class="title"><a name="idp1123226860"></a>CMA Helper Functions Reference</h3></div></div></div><div class="toc"><dl class="toc"><dt><span class="refentrytitle"><a href="API-drm-gem-cma-create.html"><span class="phrase">drm_gem_cma_create</span></a></span><span class="refpurpose"> &#8212; 
591  allocate an object with the given size
592 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-cma-free-object.html"><span class="phrase">drm_gem_cma_free_object</span></a></span><span class="refpurpose"> &#8212; 
593     free resources associated with a CMA GEM object
594 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-cma-dumb-create-internal.html"><span class="phrase">drm_gem_cma_dumb_create_internal</span></a></span><span class="refpurpose"> &#8212; 
595     create a dumb buffer object
596 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-cma-dumb-create.html"><span class="phrase">drm_gem_cma_dumb_create</span></a></span><span class="refpurpose"> &#8212; 
597     create a dumb buffer object
598 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-cma-dumb-map-offset.html"><span class="phrase">drm_gem_cma_dumb_map_offset</span></a></span><span class="refpurpose"> &#8212; 
599     return the fake mmap offset for a CMA GEM object
600 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-cma-mmap.html"><span class="phrase">drm_gem_cma_mmap</span></a></span><span class="refpurpose"> &#8212; 
601     memory-map a CMA GEM object
602 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-cma-describe.html"><span class="phrase">drm_gem_cma_describe</span></a></span><span class="refpurpose"> &#8212; 
603     describe a CMA GEM object for debugfs
604 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-cma-prime-get-sg-table.html"><span class="phrase">drm_gem_cma_prime_get_sg_table</span></a></span><span class="refpurpose"> &#8212; 
605     provide a scatter/gather table of pinned pages for a CMA GEM object
606 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-cma-prime-import-sg-table.html"><span class="phrase">drm_gem_cma_prime_import_sg_table</span></a></span><span class="refpurpose"> &#8212; 
607     produce a CMA GEM object from another driver's scatter/gather table of pinned pages
608 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-cma-prime-mmap.html"><span class="phrase">drm_gem_cma_prime_mmap</span></a></span><span class="refpurpose"> &#8212; 
609     memory-map an exported CMA GEM object
610 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-cma-prime-vmap.html"><span class="phrase">drm_gem_cma_prime_vmap</span></a></span><span class="refpurpose"> &#8212; 
611     map a CMA GEM object into the kernel's virtual address space
612 </span></dt><dt><span class="refentrytitle"><a href="API-drm-gem-cma-prime-vunmap.html"><span class="phrase">drm_gem_cma_prime_vunmap</span></a></span><span class="refpurpose"> &#8212; 
613     unmap a CMA GEM object from the kernel's virtual address space
614 </span></dt><dt><span class="refentrytitle"><a href="API-struct-drm-gem-cma-object.html"><span class="phrase">struct drm_gem_cma_object</span></a></span><span class="refpurpose"> &#8212; 
615  GEM object backed by CMA memory allocations
616 </span></dt></dl></div><p>
   </p><p>
   The Contiguous Memory Allocator reserves a pool of memory at early boot
   that is used to service requests for large blocks of contiguous memory.
   </p><p>
   The DRM GEM/CMA helpers use this allocator as a means to provide buffer
   objects that are physically contiguous in memory. This is useful for
   display drivers that are unable to map scattered buffers via an IOMMU.
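   </p><p>
   A short, hedged example of allocating a scanout buffer with these helpers
   (the foo_ wrapper function is hypothetical):
   </p><pre class="programlisting">
static int foo_create_scanout_buffer(struct drm_device *drm, size_t size,
                                     dma_addr_t *paddr)
{
        struct drm_gem_cma_object *cma_obj;

        cma_obj = drm_gem_cma_create(drm, size);
        if (IS_ERR(cma_obj))
                return PTR_ERR(cma_obj);

        /*
         * The buffer is physically contiguous: its physical address can be
         * programmed directly into the display controller.
         */
        *paddr = cma_obj-&gt;paddr;
        return 0;
}
   </pre><p>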
624</p></div></div><div class="navfooter"><hr><table width="100%" summary="Navigation footer"><tr><td width="40%" align="left"><a accesskey="p" href="API-drm-dev-set-unique.html">Prev</a>&#160;</td><td width="20%" align="center"><a accesskey="u" href="drmInternals.html">Up</a></td><td width="40%" align="right">&#160;<a accesskey="n" href="API-drm-gem-object-init.html">Next</a></td></tr><tr><td width="40%" align="left" valign="top"><span class="phrase">drm_dev_set_unique</span>&#160;</td><td width="20%" align="center"><a accesskey="h" href="index.html">Home</a></td><td width="40%" align="right" valign="top">&#160;<span class="phrase">drm_gem_object_init</span></td></tr></table></div></body></html>