This chapter documents DRM internals relevant to driver authors and developers working to add support for the latest features to existing drivers.
First, we go over some typical driver initialization requirements, like setting up command buffers, creating an initial output configuration, and initializing core services. Subsequent sections cover core internals in more detail, providing implementation notes and examples.
The DRM layer provides several services to graphics drivers, many of them driven by the application interfaces it provides through libdrm, the library that wraps most of the DRM ioctls. These include vblank event handling, memory management, output management, framebuffer management, command submission & fencing, suspend/resume support, and DMA services.
At the core of every DRM driver is a drm_driver structure. Drivers typically statically initialize a drm_driver structure and then pass it to one of the drm_*_init() functions to register it with the DRM subsystem.
Newer drivers that no longer require a drm_bus structure can alternatively use the low-level device initialization and registration functions such as drm_dev_alloc() and drm_dev_register() directly.
The drm_driver structure contains static information that describes the driver and the features it supports, and pointers to methods that the DRM core will call to implement the DRM API. We will first go through the drm_driver static information fields, and will then describe individual operations in detail as they get used in later sections.
Drivers inform the DRM core about their requirements and supported features by setting appropriate flags in the driver_features field. Since those flags influence the DRM core's behaviour from registration time onwards, most of them must be set before registering the drm_driver instance.
u32 driver_features;
Driver Feature Flags
DRIVER_USE_AGP: Driver uses the AGP interface; the DRM core will manage AGP resources.
DRIVER_REQUIRE_AGP: Driver needs the AGP interface to function; AGP initialization failure will become a fatal error.
DRIVER_PCI_DMA: Driver is capable of PCI DMA; mapping of PCI DMA buffers to userspace will be enabled. Deprecated.
DRIVER_SG: Driver can perform scatter/gather DMA; allocation and mapping of scatter/gather buffers will be enabled. Deprecated.
DRIVER_HAVE_DMA: Driver supports DMA; the userspace DMA API will be supported. Deprecated.
DRIVER_HAVE_IRQ: Driver has an IRQ handler managed by the DRM core. The core will support simple IRQ handler installation when the flag is set. The installation process is described in the section called “IRQ Registration”.
DRIVER_IRQ_SHARED: Device and handler support shared IRQs (note that this is required of PCI drivers).
DRIVER_GEM: Driver uses the GEM memory manager.
DRIVER_MODESET: Driver supports mode setting interfaces (KMS).
DRIVER_PRIME: Driver implements DRM PRIME buffer sharing.
DRIVER_RENDER: Driver supports dedicated render nodes.
DRIVER_ATOMIC: Driver supports atomic properties. In this case the driver must implement appropriate obj->atomic_get_property() vfuncs for any modeset objects with driver-specific properties.
int major; int minor; int patchlevel;
The DRM core identifies driver versions by a major, minor and patch level triplet. The information is printed to the kernel log at initialization time and passed to userspace through the DRM_IOCTL_VERSION ioctl.
The major and minor numbers are also used to verify the requested driver API version passed to DRM_IOCTL_SET_VERSION. When the driver API changes between minor versions, applications can call DRM_IOCTL_SET_VERSION to select a specific version of the API. If the requested major isn't equal to the driver major, or the requested minor is larger than the driver minor, the DRM_IOCTL_SET_VERSION call will return an error. Otherwise the driver's set_version() method will be called with the requested version.
char *name; char *desc; char *date;
The driver name is printed to the kernel log at initialization time, used for IRQ registration and passed to userspace through DRM_IOCTL_VERSION.
The driver description is a purely informative string passed to userspace through the DRM_IOCTL_VERSION ioctl and otherwise unused by the kernel.
The driver date, formatted as YYYYMMDD, is meant to identify the date of the latest modification to the driver. However, as most drivers fail to update it, its value is mostly useless. The DRM core prints it to the kernel log at initialization time and passes it to userspace through the DRM_IOCTL_VERSION ioctl.
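As an illustration, the following sketch shows how the feature flags and the static information fields might be filled in for a hypothetical "foo" KMS driver using GEM and PRIME. All names are made up for this example, and the exact feature flags and file operations depend on the hardware.

#include <drm/drmP.h>

static int foo_drm_load(struct drm_device *dev, unsigned long flags);

static const struct file_operations foo_drm_fops = {
	.owner		= THIS_MODULE,
	.open		= drm_open,
	.release	= drm_release,
	.unlocked_ioctl	= drm_ioctl,
	.mmap		= drm_gem_mmap,
	.poll		= drm_poll,
	.read		= drm_read,
};

static struct drm_driver foo_drm_driver = {
	/* Feature flags: a GEM-based KMS driver with PRIME buffer sharing. */
	.driver_features	= DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME,
	.load			= foo_drm_load,
	.fops			= &foo_drm_fops,
	/* Static information reported through DRM_IOCTL_VERSION. */
	.name			= "foo",
	.desc			= "Foo display controller",
	.date			= "20150101",
	.major			= 1,
	.minor			= 0,
	.patchlevel		= 0,
};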
A number of helper functions are provided to assist with device registration; separate sets of functions deal with PCI and platform devices, respectively.
New drivers that no longer rely on the services provided by the drm_bus structure can call the low-level device registration functions directly. The drm_dev_alloc() function can be used to allocate and initialize a new drm_device structure. Drivers will typically want to perform some additional setup on this structure, such as allocating driver-specific data and storing a pointer to it in the DRM device's dev_private field. Drivers should also set the device's unique name using the drm_dev_set_unique() function. After it has been set up, a device can be registered with the DRM subsystem by calling drm_dev_register(). This will cause the device to be exposed to userspace and will call the driver's .load() implementation. When a device is removed, the DRM device can safely be unregistered and freed by calling drm_dev_unregister() followed by a call to drm_dev_unref().
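A minimal sketch of this registration path for a platform device is shown below, reusing the hypothetical foo_drm_driver from the previous example. Note that the exact conventions of these helpers have varied slightly between kernel versions (for instance whether drm_dev_alloc() reports failure with NULL or an ERR_PTR), so the details should be checked against the kernel being targeted.

static int foo_probe(struct platform_device *pdev)
{
	struct drm_device *ddev;
	int ret;

	/* Allocate and initialize the drm_device structure. */
	ddev = drm_dev_alloc(&foo_drm_driver, &pdev->dev);
	if (!ddev)
		return -ENOMEM;

	/* Set the device's unique name. */
	ret = drm_dev_set_unique(ddev, dev_name(&pdev->dev));
	if (ret)
		goto err_unref;

	/* Expose the device to userspace; this calls foo_drm_driver.load(). */
	ret = drm_dev_register(ddev, 0);
	if (ret)
		goto err_unref;

	platform_set_drvdata(pdev, ddev);
	return 0;

err_unref:
	drm_dev_unref(ddev);
	return ret;
}

static int foo_remove(struct platform_device *pdev)
{
	struct drm_device *ddev = platform_get_drvdata(pdev);

	/* Unregister and drop the last reference when the device goes away. */
	drm_dev_unregister(ddev);
	drm_dev_unref(ddev);
	return 0;
}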
The load method is the driver and device initialization entry point. The method is responsible for allocating and initializing driver private data, performing resource allocation and mapping (e.g. acquiring clocks, mapping registers or allocating command buffers), initializing the memory manager (the section called “Memory management”), installing the IRQ handler (the section called “IRQ Registration”), setting up vertical blanking handling (the section called “Vertical Blanking”), mode setting (the section called “Mode Setting”) and initial output configuration (the section called “KMS Initialization and Cleanup”).
If compatibility is a concern (e.g. with drivers converted over from User Mode Setting to Kernel Mode Setting), care must be taken to prevent device initialization and control that is incompatible with currently active userspace drivers. For instance, if user level mode setting drivers are in use, it would be problematic to perform output discovery & configuration at load time. Likewise, if user-level drivers unaware of memory management are in use, memory management and command buffer setup may need to be omitted. These requirements are driver-specific, and care needs to be taken to keep both old and new applications and libraries working.
int (*load) (struct drm_device *, unsigned long flags);
The method takes two arguments, a pointer to the newly created drm_device and flags. The flags are used to pass the driver_data field of the device id corresponding to the device passed to drm_*_init(). Only PCI devices currently use this; USB and platform DRM drivers have their load method called with flags set to 0.
The driver private data hangs off the main drm_device structure and can be used for tracking various device-specific bits of information, like register offsets, command buffer status, register state for suspend/resume, etc. At load time, a driver may simply allocate one and set drm_device.dev_private appropriately; it should be freed and drm_device.dev_private set to NULL when the driver is unloaded.
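For instance, a load() implementation for the hypothetical foo driver might look as follows; the contents of the private structure are entirely driver-specific, and using devm_kzalloc() here ties its lifetime to the underlying device so no explicit free is needed.

struct foo_device {
	void __iomem *mmio;	/* register mapping */
	/* command buffer status, suspend/resume state, ... */
};

static int foo_drm_load(struct drm_device *dev, unsigned long flags)
{
	struct foo_device *foo;

	/* Allocate the driver private data and hang it off the DRM device. */
	foo = devm_kzalloc(dev->dev, sizeof(*foo), GFP_KERNEL);
	if (!foo)
		return -ENOMEM;

	dev->dev_private = foo;

	/* Map registers, initialize the memory manager, install the IRQ
	 * handler, set up vertical blanking, mode setting and the initial
	 * output configuration here. */

	return 0;
}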
The DRM core tries to facilitate IRQ handler registration and unregistration by providing the drm_irq_install and drm_irq_uninstall functions. Those functions only support a single interrupt per device; devices that use more than one IRQ need to be handled manually.
drm_irq_install starts by calling the irq_preinstall driver operation. The operation is optional and must make sure that the interrupt will not get fired by clearing all pending interrupt flags or disabling the interrupt. The passed-in IRQ will then be requested by a call to request_irq. If the DRIVER_IRQ_SHARED driver feature flag is set, a shared (IRQF_SHARED) IRQ handler will be requested.
The IRQ handler function must be provided as the mandatory irq_handler driver operation. It will get passed directly to request_irq and thus has the same prototype as all IRQ handlers. It will get called with a pointer to the DRM device as the second argument.
Finally the function calls the optional irq_postinstall driver operation. The operation usually enables interrupts (excluding the vblank interrupt, which is enabled separately), but drivers may choose to enable/disable interrupts at a different time.
drm_irq_uninstall is similarly used to uninstall an IRQ handler. It starts by waking up all processes waiting on a vblank interrupt to make sure they don't hang, and then calls the optional irq_uninstall driver operation. The operation must disable all hardware interrupts. Finally the function frees the IRQ by calling free_irq.
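Put together, the managed IRQ path for the hypothetical foo driver could look like the sketch below; the interrupt register accesses are omitted since they are entirely device-specific.

static irqreturn_t foo_irq_handler(int irq, void *arg)
{
	/* arg is the struct drm_device pointer passed by the DRM core. */
	/* Read and acknowledge the interrupt status, handle vblank, etc. */
	return IRQ_HANDLED;
}

static void foo_irq_preinstall(struct drm_device *dev)
{
	/* Clear pending interrupts and mask all sources so nothing fires
	 * before request_irq() has completed. */
}

static int foo_irq_postinstall(struct drm_device *dev)
{
	/* Enable the interrupts the driver needs; vblank interrupts are
	 * enabled separately through the vblank handling code. */
	return 0;
}

static void foo_irq_uninstall(struct drm_device *dev)
{
	/* Disable all hardware interrupts. */
}

With the drm_driver's .irq_handler, .irq_preinstall, .irq_postinstall and .irq_uninstall operations pointing at these functions, and DRIVER_HAVE_IRQ set in driver_features, the handler is installed from load() with drm_irq_install() (passing the IRQ number on kernels where it takes one) and removed with drm_irq_uninstall().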
Drivers that require multiple interrupt handlers can't use the managed IRQ registration functions. In that case IRQs must be registered and unregistered manually (usually with the request_irq and free_irq functions, or their devm_* equivalents). When manually registering IRQs, drivers must not set the DRIVER_HAVE_IRQ driver feature flag, and must not provide the irq_handler driver operation. They must set the drm_device irq_enabled field to 1 upon registration of the IRQs, and clear it to 0 after unregistering the IRQs.
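As a sketch, a driver with two interrupt lines might register them along these lines (all names are hypothetical); note that DRIVER_HAVE_IRQ is not set and no irq_handler operation is provided.

static irqreturn_t foo_display_irq(int irq, void *arg)
{
	/* Handle display controller interrupts. */
	return IRQ_HANDLED;
}

static irqreturn_t foo_dma_irq(int irq, void *arg)
{
	/* Handle DMA engine interrupts. */
	return IRQ_HANDLED;
}

static int foo_register_irqs(struct drm_device *dev, int display_irq, int dma_irq)
{
	int ret;

	ret = devm_request_irq(dev->dev, display_irq, foo_display_irq, 0,
			       "foo-display", dev);
	if (ret)
		return ret;

	ret = devm_request_irq(dev->dev, dma_irq, foo_dma_irq, 0,
			       "foo-dma", dev);
	if (ret)
		return ret;

	/* Tell the DRM core that interrupt handling is set up. */
	dev->irq_enabled = true;
	return 0;
}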
Every DRM driver requires a memory manager which must be initialized at load time. DRM currently contains two memory managers, the Translation Table Manager (TTM) and the Graphics Execution Manager (GEM). This document describes the use of the GEM memory manager only. See the section called “Memory management” for details.
Another task that may be necessary for PCI devices during configuration is mapping the video BIOS. On many devices, the VBIOS describes device configuration, LCD panel timings (if any), and contains flags indicating device state. Mapping the BIOS can be done using the pci_map_rom() call, a convenience function that takes care of mapping the actual ROM, whether it has been shadowed into memory (typically at address 0xc0000) or exists on the PCI device in the ROM BAR. Note that after the ROM has been mapped and any necessary information has been extracted, it should be unmapped; on many devices, the ROM address decoder is shared with other BARs, so leaving it mapped could cause undesired behaviour like hangs or memory corruption.
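A sketch of this, for a hypothetical PCI driver, could look as follows; what the driver actually extracts from the ROM image is entirely device-specific.

#include <linux/pci.h>

static int foo_parse_vbios(struct pci_dev *pdev)
{
	void __iomem *rom;
	size_t size;

	/* Map the ROM, whether shadowed in memory or exposed in the ROM BAR. */
	rom = pci_map_rom(pdev, &size);
	if (!rom)
		return -ENODEV;

	/* Parse up to 'size' bytes of the image here: panel timings,
	 * configuration flags, device state, etc. */

	/* Unmap as soon as possible: the ROM address decoder may be shared
	 * with other BARs on some devices. */
	pci_unmap_rom(pdev, rom);
	return 0;
}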