Chapter 2. DRM Internals

Table of Contents

Driver Initialization
Driver Information
Device Registration
Driver Load
Memory management
The Translation Table Manager (TTM)
The Graphics Execution Manager (GEM)
VMA Offset Manager
PRIME Buffer Sharing
PRIME Function References
DRM MM Range Allocator
DRM MM Range Allocator Function References
CMA Helper Functions Reference
Mode Setting
Display Modes Function Reference
Atomic Mode Setting Function Reference
Frame Buffer Creation
Dumb Buffer Objects
Output Polling
Locking
KMS Initialization and Cleanup
CRTCs (struct drm_crtc)
Planes (struct drm_plane)
Encoders (struct drm_encoder)
Connectors (struct drm_connector)
Cleanup
Output discovery and initialization example
KMS API Functions
KMS Data Structures
KMS Locking
Mode Setting Helper Functions
Helper Functions
CRTC Helper Operations
Encoder Helper Operations
Connector Helper Operations
Atomic Modeset Helper Functions Reference
Modeset Helper Functions Reference
Output Probing Helper Functions Reference
fbdev Helper Functions Reference
Display Port Helper Functions Reference
Display Port MST Helper Functions Reference
MIPI DSI Helper Functions Reference
EDID Helper Functions Reference
Rectangle Utilities Reference
Flip-work Helper Reference
HDMI Infoframes Helper Reference
Plane Helper Reference
Tile group
KMS Properties
Existing KMS Properties
Vertical Blanking
Vertical Blanking and Interrupt Handling Functions Reference
Open/Close, File Operations and IOCTLs
Open and Close
File Operations
IOCTLs
Legacy Support Code
Legacy Suspend/Resume
Legacy DMA Services

This chapter documents DRM internals relevant to driver authors and developers working to add support for the latest features to existing drivers.

First, we go over some typical driver initialization requirements, like setting up command buffers, creating an initial output configuration, and initializing core services. Subsequent sections cover core internals in more detail, providing implementation notes and examples.

The DRM layer provides several services to graphics drivers, many of them driven by the application interfaces it provides through libdrm, the library that wraps most of the DRM ioctls. These include vblank event handling, memory management, output management, framebuffer management, command submission & fencing, suspend/resume support, and DMA services.

Driver Initialization

At the core of every DRM driver is a drm_driver structure. Drivers typically statically initialize a drm_driver structure, and then pass it to one of the drm_*_init() functions to register it with the DRM subsystem.

Newer drivers that no longer require a drm_bus structure can alternatively use the low-level device initialization and registration functions such as drm_dev_alloc() and drm_dev_register() directly.

The drm_driver structure contains static information that describes the driver and the features it supports, as well as pointers to methods that the DRM core will call to implement the DRM API. We will first go through the drm_driver static information fields, and then describe individual operations in detail as they get used in later sections.
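
As an illustration, a minimal KMS/GEM driver could statically initialize its drm_driver along the following lines. This is only a sketch: the foo_* identifiers are hypothetical, and the exact set of fields and helper functions varies between kernel versions.

static const struct file_operations foo_fops = {
	.owner		= THIS_MODULE,
	.open		= drm_open,
	.release	= drm_release,
	.unlocked_ioctl	= drm_ioctl,
	.mmap		= drm_gem_mmap,
	.poll		= drm_poll,
	.read		= drm_read,
};

static struct drm_driver foo_driver = {
	.driver_features	= DRIVER_GEM | DRIVER_MODESET,
	.load			= foo_load,
	.unload			= foo_unload,
	.fops			= &foo_fops,
	.name			= "foo",
	.desc			= "Foo display controller",
	.date			= "20150101",
	.major			= 1,
	.minor			= 0,
	.patchlevel		= 0,
};

The feature flags, as well as the name, desc, date and version fields, are described in the sections below.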

Driver Information

Driver Features

Drivers inform the DRM core about their requirements and supported features by setting appropriate flags in the driver_features field. Because those flags influence DRM core behaviour from registration time onwards, most of them must be set before registering the drm_driver instance.

u32 driver_features;

Driver Feature Flags

DRIVER_USE_AGP

Driver uses the AGP interface; the DRM core will manage AGP resources.

DRIVER_REQUIRE_AGP

Driver needs the AGP interface to function. AGP initialization failure will become a fatal error.

DRIVER_PCI_DMA

Driver is capable of PCI DMA; mapping of PCI DMA buffers to userspace will be enabled. Deprecated.

DRIVER_SG

Driver can perform scatter/gather DMA; allocation and mapping of scatter/gather buffers will be enabled. Deprecated.

DRIVER_HAVE_DMA

Driver supports DMA; the userspace DMA API will be supported. Deprecated.

DRIVER_HAVE_IRQ, DRIVER_IRQ_SHARED

DRIVER_HAVE_IRQ indicates whether the driver has an IRQ handler managed by the DRM Core. The core will support simple IRQ handler installation when the flag is set. The installation process is described in the section called “IRQ Registration”.

DRIVER_IRQ_SHARED indicates whether the device & handler support shared IRQs (note that this is required of PCI drivers).

DRIVER_GEM

Driver uses the GEM memory manager.

DRIVER_MODESET

Driver supports mode setting interfaces (KMS).

DRIVER_PRIME

Driver implements DRM PRIME buffer sharing.

DRIVER_RENDER

Driver supports dedicated render nodes.

DRIVER_ATOMIC

Driver supports atomic properties. In this case the driver must implement appropriate obj->atomic_get_property() vfuncs for any modeset objects with driver-specific properties.

Major, Minor and Patchlevel

int major;
int minor;
int patchlevel;

The DRM core identifies driver versions by a major, minor and patch level triplet. The information is printed to the kernel log at initialization time and passed to userspace through the DRM_IOCTL_VERSION ioctl.

The major and minor numbers are also used to verify the requested driver API version passed to DRM_IOCTL_SET_VERSION. When the driver API changes between minor versions, applications can call DRM_IOCTL_SET_VERSION to select a specific version of the API. If the requested major isn't equal to the driver major, or the requested minor is larger than the driver minor, the DRM_IOCTL_SET_VERSION call will return an error. Otherwise the driver's set_version() method will be called with the requested version.
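
From userspace, version negotiation goes through the drm_set_version structure defined in the DRM uAPI headers; a component set to -1 is left unchanged, and on return the structure holds the versions in effect. The following sketch uses libdrm's drmIoctl() wrapper, with purely illustrative version numbers.

#include <xf86drm.h>	/* drmIoctl() */
#include <drm/drm.h>	/* struct drm_set_version, DRM_IOCTL_SET_VERSION */

int select_driver_api(int fd)
{
	struct drm_set_version sv = {
		.drm_di_major = -1,	/* keep the core interface version */
		.drm_di_minor = -1,
		.drm_dd_major = 1,	/* request driver API 1.2 (illustrative) */
		.drm_dd_minor = 2,
	};

	/* Fails if the major doesn't match or the minor is too large. */
	return drmIoctl(fd, DRM_IOCTL_SET_VERSION, &sv);
}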

Name, Description and Date

char *name;
char *desc;
char *date;

The driver name is printed to the kernel log at initialization time, used for IRQ registration and passed to userspace through DRM_IOCTL_VERSION.

The driver description is a purely informative string passed to userspace through the DRM_IOCTL_VERSION ioctl and otherwise unused by the kernel.

The driver date, formatted as YYYYMMDD, is meant to identify the date of the latest modification to the driver. However, as most drivers fail to update it, its value is mostly useless. The DRM core prints it to the kernel log at initialization time and passes it to userspace through the DRM_IOCTL_VERSION ioctl.

Device Registration

drm_pci_alloc — Allocate a PCI consistent memory block for DMA
drm_pci_free — Free a PCI consistent memory block
drm_get_pci_dev — Register a PCI device with the DRM subsystem
drm_pci_init — Register matching PCI devices with the DRM subsystem
drm_pci_exit — Unregister matching PCI devices from the DRM subsystem
drm_platform_init — Register a platform device with the DRM subsystem
drm_put_dev — Unregister and release a DRM device
drm_dev_alloc — Allocate new DRM device
drm_dev_ref — Take reference of a DRM device
drm_dev_unref — Drop reference of a DRM device
drm_dev_register — Register DRM device
drm_dev_unregister — Unregister DRM device
drm_dev_set_unique — Set the unique name of a DRM device

A number of functions are provided to help with device registration; separate sets of functions deal with PCI and platform devices.

New drivers that no longer rely on the services provided by the drm_bus structure can call the low-level device registration functions directly. The drm_dev_alloc() function can be used to allocate and initialize a new drm_device structure. Drivers will typically want to perform some additional setup on this structure, such as allocating driver-specific data and storing a pointer to it in the DRM device's dev_private field. Drivers should also set the device's unique name using the drm_dev_set_unique() function. After it has been set up, a device can be registered with the DRM subsystem by calling drm_dev_register(). This will cause the device to be exposed to userspace and will call the driver's .load() implementation. When a device is removed, the DRM device can safely be unregistered and freed by calling drm_dev_unregister() followed by a call to drm_dev_unref().
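
For a platform device, the resulting probe routine might look roughly like the sketch below, reusing the hypothetical foo_driver structure shown earlier. Note that the return convention of drm_dev_alloc() and the signature of drm_dev_set_unique() have changed across kernel versions; this sketch assumes the variadic drm_dev_set_unique() and a NULL return on allocation failure.

static int foo_probe(struct platform_device *pdev)
{
	struct drm_device *ddev;
	int ret;

	ddev = drm_dev_alloc(&foo_driver, &pdev->dev);
	if (!ddev)
		return -ENOMEM;

	ret = drm_dev_set_unique(ddev, "%s", dev_name(&pdev->dev));
	if (ret)
		goto err_unref;

	ret = drm_dev_register(ddev, 0);	/* invokes foo_driver.load() */
	if (ret)
		goto err_unref;

	return 0;

err_unref:
	drm_dev_unref(ddev);
	return ret;
}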

Driver Load

The load method is the driver and device initialization entry point. The method is responsible for allocating and initializing driver private data, performing resource allocation and mapping (e.g. acquiring clocks, mapping registers or allocating command buffers), initializing the memory manager (the section called “Memory management”), installing the IRQ handler (the section called “IRQ Registration”), setting up vertical blanking handling (the section called “Vertical Blanking”), mode setting (the section called “Mode Setting”) and initial output configuration (the section called “KMS Initialization and Cleanup”).

Note

If compatibility is a concern (e.g. with drivers converted over from User Mode Setting to Kernel Mode Setting), care must be taken to prevent device initialization and control that is incompatible with currently active userspace drivers. For instance, if user level mode setting drivers are in use, it would be problematic to perform output discovery & configuration at load time. Likewise, if user-level drivers unaware of memory management are in use, memory management and command buffer setup may need to be omitted. These requirements are driver-specific, and care needs to be taken to keep both old and new applications and libraries working.

int (*load) (struct drm_device *, unsigned long flags);

The method takes two arguments, a pointer to the newly created drm_device and a flags value. The flags are used to pass the driver_data field of the device id corresponding to the device passed to drm_*_init(). Only PCI devices currently use this; USB and platform DRM drivers have their load method called with flags set to 0.

Driver Private Data

The driver private data hangs off the main drm_device structure and can be used for tracking various device-specific bits of information, such as register offsets, command buffer status, register state for suspend/resume, etc. At load time, a driver may simply allocate one and set drm_device.dev_private appropriately; it should be freed, and drm_device.dev_private set to NULL, when the driver is unloaded.
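
Putting this together, a load method might allocate the private structure first thing, as in this hypothetical sketch (the foo_device layout is made up for the example):

struct foo_device {
	void __iomem *mmio;	/* register window */
	/* ... command buffer status, saved register state, ... */
};

static int foo_load(struct drm_device *ddev, unsigned long flags)
{
	struct foo_device *foo;

	foo = kzalloc(sizeof(*foo), GFP_KERNEL);
	if (!foo)
		return -ENOMEM;

	ddev->dev_private = foo;

	/* ... map registers, initialize the memory manager, install the
	 * IRQ handler, set up vblank handling and mode setting ... */

	return 0;
}

static int foo_unload(struct drm_device *ddev)
{
	kfree(ddev->dev_private);
	ddev->dev_private = NULL;
	return 0;
}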

IRQ Registration

The DRM core tries to facilitate IRQ handler registration and unregistration by providing the drm_irq_install and drm_irq_uninstall functions. Those functions only support a single interrupt per device; devices that use more than one IRQ need to be handled manually.

Managed IRQ Registration

drm_irq_install starts by calling the irq_preinstall driver operation. The operation is optional; when provided, it must make sure that no interrupt will fire, by clearing all pending interrupt flags or by disabling the interrupt.

The passed-in IRQ will then be requested by a call to request_irq. If the DRIVER_IRQ_SHARED driver feature flag is set, a shared (IRQF_SHARED) IRQ handler will be requested.

The IRQ handler function must be provided as the mandatory irq_handler driver operation. It will get passed directly to request_irq and thus has the same prototype as all IRQ handlers. It will get called with a pointer to the DRM device as the second argument.

Finally the function calls the optional irq_postinstall driver operation. The operation usually enables interrupts (excluding the vblank interrupt, which is enabled separately), but drivers may choose to enable/disable interrupts at a different time.

drm_irq_uninstall is similarly used to uninstall an IRQ handler. It starts by waking up all processes waiting on a vblank interrupt to make sure they don't hang, and then calls the optional irq_uninstall driver operation. The operation must disable all hardware interrupts. Finally the function frees the IRQ by calling free_irq.
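
The three install-time operations could be implemented along the following lines for a hypothetical device. The FOO_* register names are made up for the example, and foo_device is the private structure sketched in the previous section.

static void foo_irq_preinstall(struct drm_device *ddev)
{
	struct foo_device *foo = ddev->dev_private;

	/* Disable and acknowledge everything so that no interrupt can
	 * fire before the handler is installed. */
	writel(0, foo->mmio + FOO_IRQ_ENABLE);
	writel(~0u, foo->mmio + FOO_IRQ_STATUS);
}

static irqreturn_t foo_irq_handler(int irq, void *arg)
{
	struct drm_device *ddev = arg;	/* DRM device as second argument */
	struct foo_device *foo = ddev->dev_private;
	u32 status = readl(foo->mmio + FOO_IRQ_STATUS);

	if (!status)
		return IRQ_NONE;

	writel(status, foo->mmio + FOO_IRQ_STATUS);	/* acknowledge */
	/* ... handle vblank, flip completion, errors, ... */
	return IRQ_HANDLED;
}

static int foo_irq_postinstall(struct drm_device *ddev)
{
	struct foo_device *foo = ddev->dev_private;

	/* Enable everything except vblank, which the core enables
	 * separately through the vblank machinery. */
	writel(FOO_IRQ_ALL & ~FOO_IRQ_VBLANK, foo->mmio + FOO_IRQ_ENABLE);
	return 0;
}

Hooked up through the corresponding drm_driver fields, these operations are invoked when the driver calls drm_irq_install from its load method; an irq_uninstall operation that disables all hardware interrupts would complete the set.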

Manual IRQ Registration

Drivers that require multiple interrupt handlers can't use the managed IRQ registration functions. In that case IRQs must be registered and unregistered manually (usually with the request_irq and free_irq functions, or their devm_* equivalents).

When manually registering IRQs, drivers must not set the DRIVER_HAVE_IRQ driver feature flag, and must not provide the irq_handler driver operation. They must set the drm_device irq_enabled field to 1 upon registration of the IRQs, and clear it to 0 after unregistering the IRQs.
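
For example, a device with separate display and DMA interrupts might register them as in the sketch below; the foo_* handlers and the IRQ number fields are hypothetical.

static int foo_register_irqs(struct drm_device *ddev)
{
	struct foo_device *foo = ddev->dev_private;
	int ret;

	/* foo->display_irq and foo->dma_irq would have been looked up
	 * earlier, e.g. from platform resources. */
	ret = devm_request_irq(ddev->dev, foo->display_irq,
			       foo_display_irq_handler, 0, "foo-display", foo);
	if (ret)
		return ret;

	ret = devm_request_irq(ddev->dev, foo->dma_irq,
			       foo_dma_irq_handler, 0, "foo-dma", foo);
	if (ret)
		return ret;

	/* Tell the DRM core that interrupt delivery is working. */
	ddev->irq_enabled = 1;
	return 0;
}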

Memory Manager Initialization

Every DRM driver requires a memory manager which must be initialized at load time. DRM currently contains two memory managers, the Translation Table Manager (TTM) and the Graphics Execution Manager (GEM). This document describes the use of the GEM memory manager only. See the section called “Memory management” for details.

Miscellaneous Device Configuration

Another task that may be necessary for PCI devices during configuration is mapping the video BIOS. On many devices, the VBIOS describes device configuration, LCD panel timings (if any), and contains flags indicating device state. Mapping the BIOS can be done using the pci_map_rom() call, a convenience function that takes care of mapping the actual ROM, whether it has been shadowed into memory (typically at address 0xc0000) or exists on the PCI device in the ROM BAR. Note that after the ROM has been mapped and any necessary information has been extracted, it should be unmapped; on many devices, the ROM address decoder is shared with other BARs, so leaving it mapped could cause undesired behaviour like hangs or memory corruption.
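
A typical map-parse-unmap sequence might look like the following sketch, where foo_parse_vbios() is a hypothetical driver helper:

static int foo_get_vbios(struct foo_device *foo, struct pci_dev *pdev)
{
	void __iomem *rom;
	size_t size;
	int ret;

	rom = pci_map_rom(pdev, &size);	/* shadowed copy or ROM BAR */
	if (!rom)
		return -ENODEV;

	ret = foo_parse_vbios(foo, rom, size);

	/* Unmap immediately: the ROM address decoder may be shared with
	 * other BARs, so leaving it mapped risks hangs or corruption. */
	pci_unmap_rom(pdev, rom);
	return ret;
}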