This chapter documents DRM internals relevant to driver authors and developers working to add support for the latest features to existing drivers.
First, we go over some typical driver initialization requirements, like setting up command buffers, creating an initial output configuration, and initializing core services. Subsequent sections cover core internals in more detail, providing implementation notes and examples.
The DRM layer provides several services to graphics drivers, many of them driven by the application interfaces it provides through libdrm, the library that wraps most of the DRM ioctls. These include vblank event handling, memory management, output management, framebuffer management, command submission & fencing, suspend/resume support, and DMA services.
At the core of every DRM driver is a drm_driver structure. Drivers typically statically initialize a drm_driver structure, and then pass it to drm_dev_alloc() to allocate a device instance. After the device instance is fully initialized it can be registered (which makes it accessible from userspace) using drm_dev_register().
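For orientation, here is a minimal sketch of such a static initialization. The foo name, feature flags and version numbers are illustrative placeholders; a real driver also fills in its file operations and ioctl tables.

/* Hypothetical example driver; all values are placeholders. */
static struct drm_driver foo_drm_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET,
	.name = "foo",
	.desc = "Foo Graphics",
	.date = "20160101",
	.major = 1,
	.minor = 0,
	.patchlevel = 0,
};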
The drm_driver structure contains static information that describes the driver and the features it supports, and pointers to methods that the DRM core will call to implement the DRM API. We will first go through the drm_driver static information fields, and will then describe individual operations in detail as they get used in later sections.
Drivers inform the DRM core about their requirements and supported features by setting appropriate flags in the driver_features field. Since those flags influence the DRM core behaviour from registration time onwards, most of them must be set before registering the drm_driver instance.
u32 driver_features;
Driver Feature Flags
DRIVER_USE_AGP: Driver uses the AGP interface; the DRM core will manage AGP resources.
DRIVER_REQUIRE_AGP: Driver needs the AGP interface to function. AGP initialization failure will become a fatal error.
DRIVER_PCI_DMA: Driver is capable of PCI DMA; mapping of PCI DMA buffers to userspace will be enabled. Deprecated.
DRIVER_SG: Driver can perform scatter/gather DMA; allocation and mapping of scatter/gather buffers will be enabled. Deprecated.
DRIVER_HAVE_DMA: Driver supports DMA; the userspace DMA API will be supported. Deprecated.
DRIVER_HAVE_IRQ / DRIVER_IRQ_SHARED: DRIVER_HAVE_IRQ indicates that the driver has an IRQ handler managed by the DRM core. The core will support simple IRQ handler installation when the flag is set. The installation process is described in the section called “IRQ Registration”. DRIVER_IRQ_SHARED indicates whether the device and handler support shared IRQs (note that this is required of PCI drivers).
DRIVER_GEM: Driver uses the GEM memory manager.
DRIVER_MODESET: Driver supports mode setting interfaces (KMS).
DRIVER_PRIME: Driver implements DRM PRIME buffer sharing.
DRIVER_RENDER: Driver supports dedicated render nodes.
DRIVER_ATOMIC: Driver supports atomic properties. In this case the driver must implement appropriate obj->atomic_get_property() vfuncs for any modeset objects with driver-specific properties.
int major; int minor; int patchlevel;
The DRM core identifies driver versions by a major, minor and patch level triplet. The information is printed to the kernel log at initialization time and passed to userspace through the DRM_IOCTL_VERSION ioctl.
The major and minor numbers are also used to verify the requested driver API version passed to DRM_IOCTL_SET_VERSION. When the driver API changes between minor versions, applications can call DRM_IOCTL_SET_VERSION to select a specific version of the API. If the requested major isn't equal to the driver major, or the requested minor is larger than the driver minor, the DRM_IOCTL_SET_VERSION call will return an error. Otherwise the driver's set_version() method will be called with the requested version.
char *name; char *desc; char *date;
The driver name is printed to the kernel log at initialization time, used for IRQ registration and passed to userspace through DRM_IOCTL_VERSION.
The driver description is a purely informative string passed to userspace through the DRM_IOCTL_VERSION ioctl and otherwise unused by the kernel.
The driver date, formatted as YYYYMMDD, is meant to identify the date of the latest modification to the driver. However, as most drivers fail to update it, its value is mostly useless. The DRM core prints it to the kernel log at initialization time and passes it to userspace through the DRM_IOCTL_VERSION ioctl.
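As an aside, all of the above can be retrieved from userspace through libdrm, which wraps the DRM_IOCTL_VERSION ioctl in drmGetVersion(). A small sketch, with the device path chosen purely as an example:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
	/* Example path; use the node corresponding to your device. */
	int fd = open("/dev/dri/card0", O_RDWR);
	drmVersionPtr v;

	if (fd < 0)
		return 1;

	v = drmGetVersion(fd);	/* wraps DRM_IOCTL_VERSION */
	if (v) {
		printf("%s %d.%d.%d (%s): %s\n", v->name, v->version_major,
		       v->version_minor, v->version_patchlevel, v->date, v->desc);
		drmFreeVersion(v);
	}
	close(fd);
	return 0;
}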
A device instance for a drm driver is represented by struct drm_device. This is allocated with drm_dev_alloc(), usually from bus-specific ->probe callbacks implemented by the driver. The driver then needs to initialize all the various subsystems for the drm device like memory management, vblank handling, modesetting support and initial output configuration, plus obviously initialize all the corresponding hardware bits. An important part of this is also calling drm_dev_set_unique() to set the userspace-visible unique name of this device instance. Finally, when everything is up and running and ready for userspace, the device instance can be published using drm_dev_register().
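A sketch of this flow for a platform device follows. The foo names are hypothetical, and note that the failure convention of drm_dev_alloc() has varied across kernel versions (NULL in older kernels, ERR_PTR() in newer ones).

/* Hypothetical ->probe implementation using the drm_driver shown earlier. */
static int foo_platform_probe(struct platform_device *pdev)
{
	struct drm_device *drm;
	int ret;

	drm = drm_dev_alloc(&foo_drm_driver, &pdev->dev);
	if (IS_ERR(drm))	/* older kernels returned NULL on failure */
		return PTR_ERR(drm);

	/* Initialize memory management, vblank handling, modesetting
	 * support and the hardware itself here. */

	ret = drm_dev_set_unique(drm, dev_name(&pdev->dev));
	if (ret)
		goto err_unref;

	ret = drm_dev_register(drm, 0);
	if (ret)
		goto err_unref;

	platform_set_drvdata(pdev, drm);
	return 0;

err_unref:
	drm_dev_unref(drm);
	return ret;
}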
There is also deprecated support for initializing device instances using bus-specific helpers and the ->load callback. But due to backwards-compatibility needs the device instance has to be published too early, which requires unpretty global locking to make safe and is therefore only supported for existing drivers not yet converted to the new scheme.
When cleaning up a device instance everything needs to be done in reverse: First unpublish the device instance with drm_dev_unregister(). Then clean up any other resources allocated at device initialization and drop the driver's reference to drm_device using drm_dev_unref().
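A matching teardown sketch, under the same assumptions as the probe example above:

/* Hypothetical ->remove implementation mirroring the probe flow. */
static int foo_platform_remove(struct platform_device *pdev)
{
	struct drm_device *drm = platform_get_drvdata(pdev);

	drm_dev_unregister(drm);	/* unpublish first */
	/* ... release resources allocated at device initialization ... */
	drm_dev_unref(drm);		/* drop the driver's reference */
	return 0;
}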
Note that the lifetime rules for drm_device instances still carry a lot of historical baggage. Hence use the reference counting provided by drm_dev_ref() and drm_dev_unref() only carefully.
Also note that embedding of drm_device is currently not (yet) supported (but it would be easy to add). Drivers can store driver-private data in the dev_priv field of drm_device.
The DRM core tries to facilitate IRQ handler registration and unregistration by providing the drm_irq_install() and drm_irq_uninstall() functions. Those functions only support a single interrupt per device; devices that use more than one IRQ need to be handled manually.
drm_irq_install() starts by calling the irq_preinstall driver operation. The operation is optional and must make sure that the interrupt will not get fired by clearing all pending interrupt flags or disabling the interrupt.
The passed-in IRQ will then be requested by a call to request_irq(). If the DRIVER_IRQ_SHARED driver feature flag is set, a shared (IRQF_SHARED) IRQ handler will be requested.
The IRQ handler function must be provided as the mandatory irq_handler driver operation. It will get passed directly to request_irq() and thus has the same prototype as all IRQ handlers. It will get called with a pointer to the DRM device as the second argument.
Finally the function calls the optional irq_postinstall driver operation. The operation usually enables interrupts (excluding the vblank interrupt, which is enabled separately), but drivers may choose to enable/disable interrupts at a different time.
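The following sketch illustrates the three hooks for a hypothetical foo device; the FOO_* register names are invented for illustration.

/* Hypothetical IRQ hooks; the register layout is made up. */
static void foo_irq_preinstall(struct drm_device *drm)
{
	struct foo_device *foo = drm->dev_private;

	writel(0, foo->mmio + FOO_IRQ_ENABLE);	/* mask all interrupts */
	writel(~0, foo->mmio + FOO_IRQ_STATUS);	/* clear pending flags */
}

static irqreturn_t foo_irq_handler(int irq, void *arg)
{
	struct drm_device *drm = arg;	/* DRM device is the second argument */
	struct foo_device *foo = drm->dev_private;
	u32 status = readl(foo->mmio + FOO_IRQ_STATUS);

	if (!status)
		return IRQ_NONE;	/* not ours (shared IRQ case) */

	writel(status, foo->mmio + FOO_IRQ_STATUS);	/* acknowledge */
	/* ... dispatch to vblank handling and the like ... */
	return IRQ_HANDLED;
}

static int foo_irq_postinstall(struct drm_device *drm)
{
	struct foo_device *foo = drm->dev_private;

	/* Enable everything except vblank, which is enabled separately. */
	writel(FOO_IRQ_ALL & ~FOO_IRQ_VBLANK, foo->mmio + FOO_IRQ_ENABLE);
	return 0;
}

With these hooks set in the drm_driver structure, the driver would call drm_irq_install(drm, irq) during initialization.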
drm_irq_uninstall() is similarly used to uninstall an IRQ handler. It starts by waking up all processes waiting on a vblank interrupt to make sure they don't hang, and then calls the optional irq_uninstall driver operation. The operation must disable all hardware interrupts. Finally the function frees the IRQ by calling free_irq().
Drivers that require multiple interrupt handlers can't use the managed IRQ registration functions. In that case IRQs must be registered and unregistered manually (usually with the request_irq() and free_irq() functions, or their devm_* equivalents).
When manually registering IRQs, drivers must not set the DRIVER_HAVE_IRQ driver feature flag, and must not provide the irq_handler driver operation. They must set the drm_device irq_enabled field to 1 upon registration of the IRQs, and clear it to 0 after unregistering the IRQs.
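For instance, a minimal sketch of manual registration for a hypothetical device with two interrupt lines (the foo handlers are assumed to exist):

/* Hypothetical manual IRQ setup; foo_display_irq/foo_render_irq are assumed. */
static int foo_setup_irqs(struct drm_device *drm, struct platform_device *pdev)
{
	int ret;

	ret = devm_request_irq(&pdev->dev, platform_get_irq(pdev, 0),
			       foo_display_irq, 0, "foo-display", drm);
	if (ret)
		return ret;

	ret = devm_request_irq(&pdev->dev, platform_get_irq(pdev, 1),
			       foo_render_irq, 0, "foo-render", drm);
	if (ret)
		return ret;

	drm->irq_enabled = true;	/* must be cleared again on teardown */
	return 0;
}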
Every DRM driver requires a memory manager which must be initialized at load time. DRM currently contains two memory managers, the Translation Table Manager (TTM) and the Graphics Execution Manager (GEM). This document describes the use of the GEM memory manager only. See the section called “Memory management” for details.
Another task that may be necessary for PCI devices during configuration is mapping the video BIOS. On many devices, the VBIOS describes device configuration, LCD panel timings (if any), and contains flags indicating device state. Mapping the BIOS can be done using the pci_map_rom() call, a convenience function that takes care of mapping the actual ROM, whether it has been shadowed into memory (typically at address 0xc0000) or exists on the PCI device in the ROM BAR. Note that after the ROM has been mapped and any necessary information has been extracted, it should be unmapped; on many devices, the ROM address decoder is shared with other BARs, so leaving it mapped could cause undesired behaviour like hangs or memory corruption.
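A brief sketch of that pattern, with the actual parsing left as a placeholder:

/* Map the video BIOS, extract what is needed, then unmap promptly. */
static int foo_parse_vbios(struct pci_dev *pdev)
{
	size_t size;
	void __iomem *rom = pci_map_rom(pdev, &size);

	if (!rom)
		return -ENODEV;

	/* ... read out panel timings, configuration flags, etc. ... */

	/* Unmap as soon as possible: the ROM address decoder may be
	 * shared with other BARs on many devices. */
	pci_unmap_rom(pdev, rom);
	return 0;
}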
A number of functions are provided to help with device registration. The functions deal with PCI and platform devices respectively and are only provided for historical reasons. These are all deprecated and shouldn't be used in new drivers. Besides that, there are a few helpers for PCI drivers.