The frame buffer device provides an abstraction for the graphics hardware. It
represents the frame buffer of some video hardware and allows application
software to access the graphics hardware through a well-defined interface, so
the software doesn't need to know anything about the low-level (hardware
register) stuff.

The device is accessed through special device nodes, usually located in the
/dev directory, i.e. /dev/fb*.
From the user's point of view, the frame buffer device looks just like any
other device in /dev. It's a character device using major 29; the minor
specifies the frame buffer number.
By convention, the device nodes /dev/fb0, /dev/fb1, and so on are used (the
numbers indicate the device minors).
For backwards compatibility, you may want to create symbolic links such as
/dev/fb0current -> fb0.
You may have more than one frame buffer, e.g. a graphics card in addition to
the built-in hardware. The corresponding frame buffer devices (/dev/fb0,
/dev/fb1, etc.) work independently.
Application software that uses the frame buffer device (e.g. the X server) will
use /dev/fb0 by default. You can specify an alternative frame buffer device by
setting the environment variable $FRAMEBUFFER to the path name of a frame
buffer device, e.g. (for sh/bash users):

    export FRAMEBUFFER=/dev/fb1

After this the X server will use the second frame buffer.
Like /dev/mem, the frame buffer device is a memory device and has the same
features. You can read it, write it, seek to some location in it, and mmap()
it (the main usage). The difference is just that the memory that appears in
the special file is not all memory, but the frame buffer of some video
hardware.
Frame buffer devices also accept ioctls, through which the video mode
properties of the hardware can be queried and set. The color map handling
works via ioctls, too. In particular:
- You can request unchangeable information about the hardware, like name,
  organization of the screen memory (planes, packed pixels, ...) and address
  and length of the screen memory.
- You can request and change variable information about the hardware, like
  visible and virtual geometry, depth, color map format, timing, and so on.
  If you try to change that information, the driver may round up some
  values to meet the hardware's capabilities (or return EINVAL if that isn't
  possible).
- You can get and set parts of the color map. Communication is done with 16
  bits per color part (red, green, blue, transparency) to support all
  existing hardware. The driver does all the computations needed to apply
  the values to the hardware (round them down to fewer bits, maybe throw
  away transparency).
All this hardware abstraction makes the implementation of application programs
easier and more portable. E.g. the X server works completely on /dev/fb* and
thus doesn't need to know, for example, how the color registers of the concrete
hardware are organized. The only thing that has to be built into application
programs is the screen organization (bitplanes or chunky pixels etc.), because
they work on the frame buffer image data directly.
For the future it is planned that frame buffer drivers for graphics cards and
the like can be implemented as kernel modules that are loaded at runtime. Such
a driver just has to call register_framebuffer() and supply some functions.
Writing and distributing such drivers independently from the kernel will save
much trouble.
Frame buffer resolutions are maintained using the utility `fbset'. It can
change the video mode properties of a frame buffer device. Its main usage is
to change the current video mode, e.g. during boot up in one of your /etc/rc.*
or /etc/init.d/* files.
The X server (XF68_FBDev) is the most notable application program for the frame
buffer device. Starting with XFree86 release 3.2, the X server is part of
XFree86 and has two modes of operation:
- If the `Display' subsection for the `fbdev' driver in the /etc/XF86Config
  file contains a

      Modes "default"

  line, the X server will use the scheme discussed above, i.e. it will start
  up in the resolution determined by /dev/fb0 (or $FRAMEBUFFER, if set). You
  still have to specify the color depth (using the Depth keyword) and virtual
  resolution (using the Virtual keyword) though. This is the default for the
  configuration file supplied with XFree86. It's the most simple
  configuration, but it has some limitations.
- Therefore it's also possible to specify resolutions in the /etc/XF86Config
  file. This allows for on-the-fly resolution switching while retaining the
  same virtual desktop size. The frame buffer device that's used is still
  /dev/fb0current (or $FRAMEBUFFER), but the available resolutions are
  defined by /etc/XF86Config now. The disadvantage is that you have to
  specify the timings in a different format (but `fbset -x' may help).
Note that the xvidtune tool doesn't work 100% with XF68_FBDev: the reported
clock values are always incorrect.
A monitor draws an image on the screen by using an electron beam (3 electron
beams for color models, 1 electron beam for monochrome monitors). The front of
the screen is covered by a pattern of colored phosphors (pixels). If a phosphor
is hit by an electron, it emits a photon and thus becomes visible.

The electron beam draws horizontal lines (scanlines) from left to right, and
from the top to the bottom of the screen. By modifying the intensity of the
electron beam, pixels with various colors and intensities can be shown.

After each scanline the electron beam has to move back to the left side of the
screen and to the next line: this is called the horizontal retrace. After the
whole screen (frame) has been painted, the beam moves back to the upper left
corner: this is called the vertical retrace. During both the horizontal and
vertical retrace, the electron beam is turned off (blanked).
The speed at which the electron beam paints the pixels is determined by the
dotclock in the graphics board. For a dotclock of e.g. 28.37516 MHz (millions
of cycles per second), each pixel is 35242 ps (picoseconds) long:

    1/(28.37516E6 Hz) = 35.242E-9 s
If the screen resolution is 640x480, it will take

    640*35.242E-9 s = 22.555E-6 s

to paint the 640 (xres) pixels on one scanline. But the horizontal retrace
also takes time (e.g. 272 `pixels'), so a full scanline takes

    (640+272)*35.242E-9 s = 32.141E-6 s

We'll say that the horizontal scanrate is about 31 kHz:

    1/(32.141E-6 s) = 31.113E3 Hz
A full screen counts 480 (yres) lines, but we have to consider the vertical
retrace too (e.g. 49 `lines'). So a full screen will take

    (480+49)*32.141E-6 s = 17.002E-3 s

and the vertical scanrate is about 59 Hz:

    1/(17.002E-3 s) = 58.815 Hz

This means the screen data is refreshed about 59 times per second. To have a
stable picture without visible flicker, VESA recommends a vertical scanrate of
at least 72 Hz. But the perceived flicker is very human dependent: some people
can use 50 Hz without any trouble, while others will notice flicker at much
higher refresh rates.
Since the monitor doesn't know when a new scanline starts, the graphics board
supplies a synchronization pulse (horizontal sync or hsync) for each scanline.
Similarly it supplies a synchronization pulse (vertical sync or vsync) for
each new frame. The position of the image on the screen is influenced by the
moments at which the synchronization pulses occur.
The horizontal retrace time is the sum of the left margin, the right margin
and the hsync length, while the vertical retrace time is the sum of the upper
margin, the lower margin and the vsync length.
An XFree86 mode line consists of the following fields:

    "800x600"     50      800  856  976 1040    600  637  643  666
    < name >     DCF       HR  SH1  SH2  HFL     VR  SV1  SV2  VFL
The frame buffer device uses the following fields:

  - pixclock: pixel clock in ps (pico seconds)
  - left_margin: time from sync to picture
  - right_margin: time from picture to sync
  - hsync_len: length of horizontal sync
  - upper_margin: time from sync to picture
  - lower_margin: time from picture to sync
  - vsync_len: length of vertical sync
Good examples for VESA timings can be found in the XFree86 source tree.
For more specific information about the frame buffer device and its
applications, please refer to the Linux-fbdev website and to the following
documentation:

  - the manual pages for fbset: fbset(8), fb.modes(5)
This readme was written by Geert Uytterhoeven, partly based on an earlier
document.