Frontswap provides a "transcendent memory" interface for swap pages.
In some environments, dramatic performance savings may be obtained because
swapped pages are saved in RAM (or a RAM-like device) instead of a swap disk.
See the LWN.net article "Transcendent memory in a nutshell" for a detailed
overview of frontswap and related kernel parts.
Frontswap is so named because it can be thought of as the opposite of
a "backing" store for a swap device. The storage is assumed to be
a synchronous concurrency-safe page-oriented "pseudo-RAM device" conforming
to the requirements of transcendent memory; this pseudo-RAM device is not
directly accessible or addressable by the kernel and is of unknown and
possibly time-varying size.
Once a page is successfully stored, a matching load on the page will normally
succeed. So when the kernel finds itself in a situation where it needs
to swap out a page, it first attempts to use frontswap. If the store returns
success, the data has been successfully saved to transcendent memory and
a disk write and, if the data is later read back, a disk read are avoided.
If a store returns failure, transcendent memory has rejected the data, and the
page can be written to swap as usual.
If a backend chooses, frontswap can be configured as a "writethrough
cache". In this mode the reduction
in swap device writes is lost (and also a non-trivial performance advantage)
in order to allow the backend to arbitrarily "reclaim" the space used to
store frontswap pages.
Note that if a page is stored and the page already exists in transcendent memory
(a "duplicate" store), either the store succeeds and the data is overwritten,
or the store fails AND the page is invalidated. This ensures stale data may
never be obtained from frontswap.
1) Where's the value?

When a workload starts swapping, performance falls through the floor.
Frontswap significantly increases performance in many such workloads by
providing a clean, dynamic interface to read and write swap pages to
"transcendent memory" that is otherwise not directly addressable by the kernel.
This interface is ideal when data is transformed to a different form
and size (such as with compression). Swap pages (and
evicted page-cache pages) are a great use for this kind of slower-than-RAM-
but-much-faster-than-disk "pseudo-RAM device", and the frontswap (and
cleancache) interface to transcendent memory provides a nice way to read
and write -- and indirectly "name" -- the pages.
Frontswap -- and cleancache -- with a fairly small impact on the kernel,
provide a huge amount of flexibility for more dynamic, flexible RAM
utilization in various system configurations.
In the single-kernel case, aka "zcache", pages are compressed and stored in
local memory. Benchmarks have shown little or no impact when memory pressure is
low, while providing a significant performance improvement (25%+)
on some workloads under high memory pressure.
"RAMster" builds on zcache by adding "peer-to-peer" transcendent memory:
frontswap pages are compressed locally but stored in another system's RAM,
so RAM can be load-balanced back and forth as needed, i.e. when system A is
overcommitted, it can swap to system B, and
vice versa. RAMster can also be configured as a memory server so
many servers in a cluster can swap, dynamically as needed, to a single
server configured with a large amount of RAM, without pre-configuring
how much of that RAM is available for each of the clients.
And, using frontswap, investigation is also underway on the use of NVM as
a memory extension technology.
2) Sure there may be performance advantages in some situations, but
   what's the space/time overhead of frontswap?

If CONFIG_FRONTSWAP is disabled, every frontswap hook compiles into
nothingness and the only overhead is a few extra bytes per swapon'd
swap device.
If CONFIG_FRONTSWAP is enabled
AND a frontswap backend registers AND the backend fails every "store"
request, CPU overhead is still negligible -- and since every frontswap fail
precedes a swap page write-to-disk, the system is highly likely
to be I/O bound, so using a small fraction of a percent of a CPU
will be irrelevant anyway.
As for space, if CONFIG_FRONTSWAP is enabled AND a frontswap backend
registers, one bit is allocated for every swap page of every swap device
that is swapon'd. For very large swap disks (which are rare) on a standard
4K pagesize, this is 1MB per 32GB of swap.
When swap pages are stored in transcendent memory instead of written
out to disk, there is a side effect that this may create more memory
pressure that can potentially outweigh the other advantages; a backend
must implement policies to carefully (but dynamically) manage memory
limits to ensure this doesn't happen.
3) OK, how about a quick overview of what this frontswap patch does
   in terms that a kernel hacker can grok?
Let's assume that a frontswap "backend" has registered during
kernel initialization, indicating that it has access to some "memory"
that is not directly accessible or addressable by the kernel.
Whenever a swap device is swapon'd, frontswap_init() is called,
passing the swap device number (aka "type") as a parameter.
This notifies frontswap to expect attempts to "store" swap pages
associated with that number.
Whenever the swap subsystem is readying a page to write to a swap
device (swap_writepage()), frontswap_store() is called. If the backend
refuses the page, the kernel swaps it to the swap device as normal. Note
that the response from the frontswap
backend is unpredictable to the kernel; it may choose to never accept a
page, it may accept every ninth page, or it might accept every
page. But if the backend does accept a page, the data from the page
has already been copied and associated with the type and offset,
and the backend guarantees the persistence of the data. In this case,
frontswap sets a bit in the "frontswap_map" for the swap device,
corresponding to the page offset on the swap device to which it would
otherwise have written the data.
When the swap subsystem needs to swap-in a page (swap_readpage()),
it first calls frontswap_load(), which checks the frontswap_map to
see if the page was earlier stored in the frontswap backend; if so,
the page of data is filled from the backend and the swap-in is complete.
If not, the normal swap-in code runs to obtain the page from the real
swap device.
So every time the frontswap backend accepts a page, a swap device read
and (potentially) a swap device write are replaced by a "frontswap backend
store" and (possibly) a "frontswap backend load", which are presumably much
faster.
4) Can't frontswap be configured as a "special" swap device that is
   just a higher priority than any real swap device?
No. First, the existing swap subsystem doesn't allow for any kind of
swap hierarchy. Perhaps it could be rewritten to accommodate a hierarchy,
but this would require fairly drastic changes. Even if it were rewritten,
the existing swap subsystem has a strict dependency on the block I/O layer,
which assumes a swap device is fixed size and any page in it is linearly
addressable. Frontswap barely touches the existing swap subsystem and works
around the constraints of the block I/O subsystem, which together provide
a great deal of flexibility and dynamicity.
For example, the acceptance of any swap page by the frontswap backend is
entirely unpredictable; this grants completely dynamic discretion to the
backend. In zcache, one cannot know a priori how compressible a page is.
Further, frontswap is entirely synchronous whereas a real swap
device is, by definition, asynchronous and uses block I/O. The block I/O
layer is not only unnecessary here, but may perform "optimizations"
that are inappropriate for a RAM-oriented device, including delaying
the write of some pages for a significant amount of time. Synchrony is
required to ensure the dynamicity of the backend and to avoid thorny race
conditions. That said, only the initial "store" and "load" operations
need be synchronous; a separate asynchronous thread is free to manipulate
the pages stored by frontswap.
For example, the "remotification" thread in RAMster uses standard asynchronous
kernel sockets to move compressed frontswap pages to a remote machine.
Similarly, a KVM guest-side implementation could do in-guest compression
and use "batched" hypercalls.
In a virtualized environment, the dynamicity allows the hypervisor
(or host OS) to do "intelligent overcommit": for example, it can accept
pages only until host swapping becomes imminent, then force guests to do
their own swapping.
There is a downside to the transcendent memory specifications for
frontswap: since any "store" might fail, there must always be a real
slot on a real swap device to swap the page. Thus frontswap must be
implemented as a "shadow" to every swapon'd device, with the potential
capability of holding every page the swap device might have held and the
possibility of holding no pages at all. If no swap device is configured,
frontswap is useless. Swapless portable devices
can still use frontswap, but a backend for such devices must configure
some kind of "ghost" swap device and ensure it is never used.
5) Why this weird definition about "duplicate stores"? If a page
   has been previously successfully stored, can't it always be
   successfully overwritten?

Nearly always it can, but sometimes it cannot: consider, for example, a
compressing backend where the original 4K page compressed to 1K, and the
overwriting data is incompressible and needs the full 4K that the backend
no longer has. Whenever
frontswap rejects a store that would overwrite, it also must invalidate
the old data and ensure it is no longer accessible, because the swap
subsystem will then write the new data to the real swap device; this is
the correct course of action to ensure coherency.
6) What is frontswap_shrink for?

When the (non-frontswap) swap subsystem swaps out a page to a real
swap device, that page is only taking up low-value pre-allocated disk
space. But if frontswap has placed a page in transcendent memory, that
page may be taking up valuable real estate. The frontswap_shrink routine
allows code outside of the swap subsystem to force pages out of the
memory managed by frontswap and back into kernel-addressable memory.
For example, in RAMster, a "suction driver" thread will attempt
to "repatriate" pages sent to a remote machine back to the local machine;
this is driven using the frontswap_shrink mechanism when memory pressure
subsides.
7) Why does the frontswap patch create the new include file swapfile.h?

The frontswap code depends on some swap-subsystem-internal data structures
that have, over the years, moved back and forth between
static and global. This seemed a reasonable compromise: define
them as global but declare them in a new include file that isn't
included by the large number of source files that include swap.h.