Lines Matching refs:and
5 (Note, frontswap -- and cleancache (merged at 3.0) -- are the "frontends"
6 and the only necessary changes to the core kernel for transcendent memory;
9 overview of frontswap and related kernel parts:
18 kernel and is of unknown and possibly time-varying size. The driver
20 frontswap_ops funcs appropriately and the functions it provides must
25 copy the page to transcendent memory and associate it with the type and
29 from transcendent memory and an "invalidate_area" will remove ALL pages
30 associated with the swap type (e.g., like swapoff) and notify the "device"
36 success, the data has been successfully saved to transcendent memory and
37 a disk write and, if the data is later read back, a disk read are avoided.
38 If a store returns failure, transcendent memory has rejected the data, and the
43 in swap device writes is lost (and also a non-trivial performance advantage)
47 Note that if a page is stored and the page already exists in transcendent memory
48 (a "duplicate" store), either the store succeeds and the data is overwritten,
69 providing a clean, dynamic interface to read and write swap pages to
72 and size (such as with compression) or secretly moved (as might be
73 useful for write-balancing for some RAM-like devices). Swap pages (and
75 but-much-faster-than-disk "pseudo-RAM device" and the frontswap (and
77 and write -- and indirectly "name" -- the pages.
79 Frontswap -- and cleancache -- with a fairly small impact on the kernel,
83 In the single kernel case, aka "zcache", pages are compressed and
94 allows RAM to be dynamically load-balanced back-and-forth as needed,
95 i.e. when system A is overcommitted, it can swap to system B, and
103 virtual machines. This is really hard to do with RAM and efforts to do
108 virtual machines, but the pages can be compressed and deduplicated to
112 to be swapped to and from hypervisor RAM (if overall host system memory
116 A KVM implementation is underway and has been RFC'ed to lkml. And,
124 nothingness and the only overhead is a few extra bytes per swapon'ed
130 CPU overhead is still negligible -- and since every frontswap fail
132 to be I/O bound and using a small fraction of a percent of a CPU
158 entirely dynamic and random.
167 consults with the frontswap backend and if the backend says it does NOT
168 have room, frontswap_store returns -1 and the kernel swaps the page
173 has already been copied and associated with the type and offset,
174 and the backend guarantees the persistence of the data. In this case,
182 it was, the page of data is filled from the frontswap backend and
187 and (potentially) a swap device write are replaced by a "frontswap backend
188 store" and (possibly) a "frontswap backend loads", which are presumably much
199 assumes a swap device is fixed size and any page in it is linearly
201 and works around the constraints of the block I/O subsystem to provide
202 a great deal of flexibility and dynamicity.
208 "Poorly" compressible pages can be rejected, and "poorly" can itself be
212 device is, by definition, asynchronous and uses block I/O. The
216 required to ensure the dynamicity of the backend and to avoid thorny race
217 conditions that would unnecessarily and greatly complicate frontswap
218 and/or the block I/O subsystem. That said, only the initial "store"
219 and "load" operations need be synchronous. A separate asynchronous thread
224 and use "batched" hypercalls.
236 and the possibility that it might hold no pages at all. This means
241 some kind of "ghost" swap device and ensure that it is never used.
248 where data is compressed and the original 4K page has been compressed
250 is non-compressible and so would take the entire 4K. But the backend
253 the old data and ensure that it is no longer accessible. Since the
264 of the memory managed by frontswap and back into kernel-addressable memory.
273 structures that have, over the years, moved back and forth between
274 static and global. This seemed a reasonable compromise: Define
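
Several of the entries above quote the backend contract from the original document: a driver registers a set of frontswap_ops callbacks, and "init", "store", "load", "invalidate_page" and "invalidate_area" must follow the stated policies (a store copies the page into transcendent memory and associates it with a swap type and offset, a load copies it back out WITHOUT removing it, and invalidate_area drops every page of a swap type, e.g. at swapoff). The sketch below is a minimal user-space C model of those policies only; it is not kernel code, the pool layout and the tmem_* names are invented for the example, and the real frontswap_ops structure and signatures vary between kernel releases.

/*
 * Toy user-space model of the frontswap backend policies quoted above.
 * NOT kernel code: types, names and signatures are illustrative only.
 */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define MAX_PAGES 64                    /* tiny "transcendent memory" pool */

struct tmem_page {
    int used;
    unsigned type;                      /* swap device number ("type") */
    unsigned long offset;               /* page offset in that device  */
    char data[PAGE_SIZE];
};

static struct tmem_page pool[MAX_PAGES];

static struct tmem_page *lookup(unsigned type, unsigned long offset)
{
    for (int i = 0; i < MAX_PAGES; i++)
        if (pool[i].used && pool[i].type == type && pool[i].offset == offset)
            return &pool[i];
    return NULL;
}

/* "init": prepare to accept pages for this swap type (nothing to do here). */
static void tmem_init(unsigned type) { (void)type; }

/* "store": copy the page in and associate it with (type, offset); may fail. */
static int tmem_store(unsigned type, unsigned long offset, const void *page)
{
    struct tmem_page *p = lookup(type, offset);

    if (!p)
        for (int i = 0; i < MAX_PAGES && !p; i++)
            if (!pool[i].used)
                p = &pool[i];
    if (!p)
        return -1;                      /* no room: the backend may reject */
    p->used = 1;
    p->type = type;
    p->offset = offset;
    memcpy(p->data, page, PAGE_SIZE);
    return 0;
}

/* "load": copy the page out if present, but do NOT remove it. */
static int tmem_load(unsigned type, unsigned long offset, void *page)
{
    struct tmem_page *p = lookup(type, offset);

    if (!p)
        return -1;
    memcpy(page, p->data, PAGE_SIZE);
    return 0;
}

/* "invalidate_page": remove one page from transcendent memory. */
static void tmem_invalidate_page(unsigned type, unsigned long offset)
{
    struct tmem_page *p = lookup(type, offset);

    if (p)
        p->used = 0;
}

/* "invalidate_area": remove ALL pages of a swap type (e.g. like swapoff). */
static void tmem_invalidate_area(unsigned type)
{
    for (int i = 0; i < MAX_PAGES; i++)
        if (pool[i].used && pool[i].type == type)
            pool[i].used = 0;
}

int main(void)
{
    char out[PAGE_SIZE], in[PAGE_SIZE] = "swapped-out page contents";

    tmem_init(0);
    if (tmem_store(0, 42, in) == 0 && tmem_load(0, 42, out) == 0)
        printf("loaded back: %s\n", out);
    tmem_invalidate_page(0, 42);
    tmem_invalidate_area(0);
    return 0;
}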
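
Other entries above describe the core swap-out/swap-in flow: frontswap_store consults the backend, and if the backend says it has no room it returns -1 and the kernel writes the page to the swap device as usual; on success a bit is set in the per-device "frontswap_map" at the page's offset, and the swap-in path checks that map to decide whether to fill the page from the backend or read the disk. The following sketch models only that bookkeeping in user space; swap_out/swap_in are simplified stand-ins rather than the kernel functions, and the fake backend that rejects every other offset exists purely to exercise the fallback path.

/*
 * User-space model of the swap-out / swap-in decision described above:
 * a successful backend store sets a bit in frontswap_map; swap-in checks
 * that bit before touching the (fake) swap device.  Illustrative only.
 */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   4096
#define SWAP_PAGES  128                          /* size of the fake swap device */

static char swap_device[SWAP_PAGES][PAGE_SIZE];  /* stands in for the disk       */
static char backend_copy[SWAP_PAGES][PAGE_SIZE]; /* stands in for the backend    */
static unsigned long frontswap_map[(SWAP_PAGES + 63) / 64];

/* Fake backend: rejects every other offset, to exercise the fallback path. */
static int backend_store(unsigned long offset, const void *page)
{
    if (offset & 1)
        return -1;
    memcpy(backend_copy[offset], page, PAGE_SIZE);
    return 0;
}

static int backend_load(unsigned long offset, void *page)
{
    memcpy(page, backend_copy[offset], PAGE_SIZE);
    return 0;
}

static void map_set(unsigned long offset)  { frontswap_map[offset / 64] |= 1UL << (offset % 64); }
static int  map_test(unsigned long offset) { return (frontswap_map[offset / 64] >> (offset % 64)) & 1; }

/* Swap-out: try the backend first, fall back to the swap device on -1. */
static void swap_out(unsigned long offset, const void *page)
{
    if (backend_store(offset, page) == 0)
        map_set(offset);                         /* saved in transcendent memory */
    else
        memcpy(swap_device[offset], page, PAGE_SIZE);   /* disk write as usual   */
}

/* Swap-in: the map tells us whether the backend or the disk holds the page. */
static void swap_in(unsigned long offset, void *page)
{
    if (map_test(offset))
        backend_load(offset, page);              /* disk read avoided            */
    else
        memcpy(page, swap_device[offset], PAGE_SIZE);
}

int main(void)
{
    char a[PAGE_SIZE] = "accepted by the backend";
    char b[PAGE_SIZE] = "rejected, written to disk";
    char out[PAGE_SIZE];

    swap_out(2, a);                              /* even offset: backend accepts */
    swap_out(3, b);                              /* odd offset: falls back       */
    swap_in(2, out); printf("%s\n", out);
    swap_in(3, out); printf("%s\n", out);
    return 0;
}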
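
Finally, the "duplicate store" entries above state the coherency rule: when a store targets a (type, offset) the backend already holds, either the store succeeds and the old data is overwritten, or the store fails AND the old copy is invalidated, so a later load can never return stale data. The quoted compression case (a 4K page that earlier compressed to 1K being overwritten by incompressible data when no space remains) is the motivating example. Below is a tiny, purely illustrative sketch of that rule; the pool size, the rejection condition and the store_duplicate name are invented for the example.

/*
 * Illustration of the "duplicate store" rule quoted above: overwrite on
 * success, or invalidate the old copy when the new data must be rejected.
 * Purely a model; sizes and the rejection condition are invented here.
 */
#include <stdio.h>
#include <string.h>

#define POOL_BYTES 2048                 /* pretend the backend has 2K left   */

struct slot {
    int    valid;
    size_t compressed_len;              /* e.g. a 4K page compressed to 1K   */
    char   data[4096];
};

static struct slot slot;                /* one (type, offset) for the demo   */

/* Returns 0 on success, -1 on rejection.  On rejection the old copy is
 * invalidated so that a later load cannot return stale data. */
static int store_duplicate(const char *page, size_t compressed_len)
{
    if (compressed_len > POOL_BYTES) {  /* new data does not fit: reject...  */
        slot.valid = 0;                 /* ...AND invalidate the old copy    */
        return -1;
    }
    memcpy(slot.data, page, compressed_len);    /* fits: overwrite old data  */
    slot.compressed_len = compressed_len;
    slot.valid = 1;
    return 0;
}

int main(void)
{
    char compressible[4096]   = {0};    /* compresses well, say to 1K        */
    char incompressible[4096] = {1};    /* would need the entire 4K          */

    printf("first store:  %d\n", store_duplicate(compressible, 1024));
    printf("second store: %d (slot valid=%d)\n",
           store_duplicate(incompressible, 4096), slot.valid);
    /* The second store fails and the 1K copy is gone, so the swap subsystem
     * writes the new page to the real swap device and coherency is kept.    */
    return 0;
}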