
Frontswap and cleancache are the "frontends" and the only necessary changes to the core kernel for transcendent memory; all other supporting code is implemented as backend drivers. Frontswap is so named because it can be thought of as the opposite of a "backing" store for a swap device. The storage is assumed to be a synchronous, concurrency-safe, page-oriented pseudo-RAM device conforming to the requirements of transcendent memory (such as Xen's "tmem", or in-kernel compressed memory, aka "zcache"); it is not directly accessible or addressable by the kernel, and it is of unknown and possibly time-varying size. A backend driver links itself to frontswap by calling frontswap_register_ops to set the frontswap ops appropriately, and the functions it provides must conform to certain policies as follows:

An "init" prepares the device to receive frontswap pages associated with the specified swap device number (aka "type"). A "store" will copy the page to transcendent memory and associate it with the type and offset of the page. A "load" will copy the page, if found, from transcendent memory into kernel memory, but will not remove it from transcendent memory. An "invalidate_page" removes a single page, and an "invalidate_area" removes all pages associated with the swap type (e.g. at swapoff time) and notifies the "device" to refuse further stores with that swap type.
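
The ops table and its policies can be illustrated with a small userspace model. Everything below is an assumption for illustration only: the real struct frontswap_ops lives in the kernel and has different signatures, and the toy_* names, page size, and capacity are hypothetical.

```c
#include <stddef.h>
#include <string.h>

#define TOY_PAGE_SIZE 4096
#define TOY_MAX_PAGES 16

/* Hypothetical ops table mirroring the policies described above. */
struct toy_ops {
    void (*init)(unsigned type);              /* prepare for a swap "type" */
    int  (*store)(unsigned type, unsigned long offset,
                  const char *page);          /* 0 = accepted, -1 = rejected */
    int  (*load)(unsigned type, unsigned long offset,
                 char *page);                 /* 0 = found, -1 = not present */
    void (*invalidate_page)(unsigned type, unsigned long offset);
    void (*invalidate_area)(unsigned type);   /* e.g. at swapoff */
};

/* A trivial in-RAM backend: one swap type, a fixed-size page array. */
static char toy_mem[TOY_MAX_PAGES][TOY_PAGE_SIZE];
static int  toy_present[TOY_MAX_PAGES];
static int  toy_enabled;

static void toy_init(unsigned type) { (void)type; toy_enabled = 1; }

static int toy_store(unsigned type, unsigned long offset, const char *page)
{
    (void)type;
    if (!toy_enabled || offset >= TOY_MAX_PAGES)
        return -1;                            /* a backend may reject any store */
    memcpy(toy_mem[offset], page, TOY_PAGE_SIZE);
    toy_present[offset] = 1;
    return 0;
}

static int toy_load(unsigned type, unsigned long offset, char *page)
{
    (void)type;
    if (offset >= TOY_MAX_PAGES || !toy_present[offset])
        return -1;
    memcpy(page, toy_mem[offset], TOY_PAGE_SIZE); /* page is NOT removed */
    return 0;
}

static void toy_invalidate_page(unsigned type, unsigned long offset)
{
    (void)type;
    if (offset < TOY_MAX_PAGES)
        toy_present[offset] = 0;
}

static void toy_invalidate_area(unsigned type)
{
    (void)type;
    memset(toy_present, 0, sizeof(toy_present));
    toy_enabled = 0;             /* refuse further stores for this type */
}

static const struct toy_ops toy_backend = {
    .init            = toy_init,
    .store           = toy_store,
    .load            = toy_load,
    .invalidate_page = toy_invalidate_page,
    .invalidate_area = toy_invalidate_area,
};
```

Note how load deliberately leaves the page in place, and how invalidate_area both drops all pages and disables further stores, matching the policies above.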

When the kernel needs to swap out a page, it first attempts to use frontswap. If the store returns success, the data has been successfully saved to transcendent memory and a disk write (and, if the data is later read back, a disk read) is avoided. If the store returns failure, transcendent memory has rejected the data, and the page can be written to swap as usual.

A backend may instead configure frontswap as a writethrough cache, giving up the reduction in swap device writes in order to allow the backend to arbitrarily "reclaim" space used to store frontswap pages and so more completely manage its memory usage.

Frontswap significantly increases performance in many swapping workloads by providing a clean, dynamic interface to read and write swap pages to "transcendent memory" that is otherwise not directly addressable by the kernel. This interface is ideal when data is transformed to a different form and size (such as with compression) or secretly moved elsewhere. Swap pages are a good match for this kind of slower-than-RAM-but-much-faster-than-disk storage, and the frontswap (and cleancache) interface to transcendent memory provides a convenient way to read and write, and indirectly "name", such pages.

"RAMster" builds on zcache by adding "peer-to-peer" transcendent memory support for clustered systems. Pages are locally compressed as in zcache, but then "remotified" to another system's RAM. This allows RAM to be dynamically load-balanced back-and-forth as needed, i.e. when system A is overcommitted, it can swap to system B, and vice versa. RAMster can also be configured as a memory server, so that many servers in a cluster can swap, dynamically as needed, to a single server configured with a large amount of RAM.

In the virtual case, the whole point of virtualization is to statistically multiplex physical resources across the varying demands of multiple virtual machines. This is really hard to do with RAM, and efforts to do it well with no kernel changes have essentially failed. The Xen transcendent memory backend allows otherwise "fallow" hypervisor-owned RAM not only to be "time-shared" between multiple virtual machines, but the pages can be compressed and deduplicated to optimize RAM utilization. And when guest OS's are induced to surrender underutilized RAM (e.g. with "selfballooning"), sudden unexpected memory pressure may result in swapping; frontswap allows those pages to be swapped to and from hypervisor RAM (if overall host system memory conditions allow), mitigating the potentially awful performance impact of unplanned swapping. A KVM implementation is underway and has been RFC'ed to lkml.

If CONFIG_FRONTSWAP is enabled but no frontswap backend registers, there is one extra compare of a global variable against zero for every swap page read or written. Even if a backend registers and then fails every store, the CPU overhead is still negligible: since every frontswap fail precedes a swap page write-to-disk, the system is highly likely to be I/O bound, and using a small fraction of a percent of a CPU will be irrelevant. As for space, one extra bit is allocated for every swap page for every swap device that is swapon'd. This is added to the EIGHT bits (which was sixteen until about 2.6.34) that the kernel already allocates for every swap page of every swapon'd device.
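
The space cost of that extra bit is easy to compute; here is a minimal sketch, assuming 4K pages and counting only the one frontswap bit per swap page (the helper name is hypothetical):

```c
#include <stdint.h>

/* Size in bytes of a one-bit-per-swap-page bitmap for a swap device of
 * `swap_bytes` bytes with `page_size`-byte pages. */
static uint64_t frontswap_map_bytes(uint64_t swap_bytes, uint64_t page_size)
{
    uint64_t pages = swap_bytes / page_size;  /* one bit per swap page */
    return (pages + 7) / 8;                   /* round up to whole bytes */
}
```

For a 32GB swap device with 4K pages this works out to 8M pages, i.e. a 1MB bitmap per 32GB of swap.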

When swap pages are stored in transcendent memory instead of written out to disk, there is a side effect that this may create more memory pressure, which can potentially outweigh the other advantages. A backend, such as zcache, must implement policies to carefully (but dynamically) manage memory limits to ensure this doesn't happen.

A frontswap backend has access to some "memory" that is not directly accessible or addressable by the kernel. Whenever a swap device is swapon'd, an init is called with the swap device number ("type") as a parameter. This notifies frontswap to expect attempts to "store" swap pages associated with that type. Whenever the swap subsystem is readying a page to write to a swap device (cf. swap_writepage()), frontswap_store() is called; if it returns failure, the data is written to the swap device as normal. Note that the response from the frontswap backend is unpredictable to the kernel; it may choose to never accept a page, it may accept every ninth page, or it may accept every page. If the backend does accept a page, frontswap sets a bit in the "frontswap_map" for the swap device, corresponding to the page offset on the swap device to which it would otherwise have written the data.

When the swap subsystem needs to swap in a page (swap_readpage()), it first calls frontswap_load(), which checks the frontswap_map to see if the page was earlier accepted by the frontswap backend. If it was, the page of data is filled from the frontswap backend and the swap-in is complete. If not, the normal swap-in code is executed to obtain the page of data from the real swap device.
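
The store and load paths above can be sketched in userspace, with a boolean array standing in for the per-device frontswap_map bitmap. The sim_* names are hypothetical, the backend's discretion is reduced to a single flag, and 8-byte "pages" are used only for brevity:

```c
#include <stdbool.h>
#include <string.h>

#define SIM_PAGES 64

/* Simulated backing stores: the "real" swap device and the frontswap
 * backend, plus one bit (here: bool) per swap page. */
static char sim_disk[SIM_PAGES][8];
static char sim_tmem[SIM_PAGES][8];
static bool sim_map[SIM_PAGES];        /* stands in for frontswap_map */
static bool backend_accepts;           /* backend discretion knob */

/* swap_writepage(): try frontswap first, fall back to a disk write. */
static void sim_swap_writepage(unsigned long offset, const char *data)
{
    if (backend_accepts) {             /* "frontswap_store" succeeded */
        memcpy(sim_tmem[offset], data, 8);
        sim_map[offset] = true;        /* remember who holds the page */
    } else {                           /* rejected: write to disk as normal */
        memcpy(sim_disk[offset], data, 8);
        sim_map[offset] = false;
    }
}

/* swap_readpage(): consult the map, load from whichever store has it. */
static void sim_swap_readpage(unsigned long offset, char *data)
{
    if (sim_map[offset])               /* "frontswap_load" path */
        memcpy(data, sim_tmem[offset], 8);
    else                               /* normal swap-in path */
        memcpy(data, sim_disk[offset], 8);
}
```

The point of the map is visible here: the kernel never asks the backend whether it has a page; the bit set at store time already answers that question.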

The existing swap subsystem doesn't allow for any kind of swap hierarchy. Perhaps it could be rewritten to accommodate a hierarchy, but that would require fairly drastic changes. Frontswap instead barely touches the existing swap subsystem and works around the constraints of the block I/O subsystem to provide a great deal of flexibility and dynamicity.

For example, the acceptance of any swap page by the frontswap backend is entirely unpredictable. This is critical to the definition of frontswap backends because it grants completely dynamic discretion to the backend; in zcache, for instance, one cannot know in advance how compressible a page is, and poorly compressible pages can simply be rejected.

Frontswap is also entirely synchronous, whereas a real swap device is, by definition, asynchronous and uses block I/O. Synchrony is required to ensure the dynamicity of the backend and to avoid thorny race conditions that would unnecessarily and greatly complicate frontswap and/or the block I/O subsystem. That said, only the initial "store" and "load" operations need be synchronous; a separate asynchronous thread is free to manipulate the pages stored by frontswap. For example, the "remotification" thread in RAMster uses standard asynchronous kernel sockets to move compressed frontswap pages to a remote machine.

In a virtualized environment, the dynamicity allows the hypervisor (or host OS) to do "intelligent overcommit". For example, it can choose to accept pages only until host-swapping might be imminent, then force guests to do their own swapping.

There is a downside to the transcendent memory specifications for frontswap: because any "store" might fail, there must always be a real slot on a real swap device to swap the page. Thus frontswap must be implemented as a "shadow" to every swapon'd device, with the potential capability to hold every page that the swap device might have held, and the possibility that it might hold no pages at all.

Consider a backend that compresses pages: an original 4K page may have been compressed to 1K. Now an attempt is made to overwrite the page with data that is incompressible and would need the full 4K, and the backend has no space left, so the store must be rejected and the old copy invalidated. Since the swap subsystem then writes the new data to the real swap device, this is the correct course of action to ensure coherency.
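
The duplicate-store rule can be modeled with a toy single-slot store under an assumed hard space budget; dup_store and dup_load are hypothetical names. The key property shown is that a rejected overwrite also invalidates the old copy, so a later load can never return stale data:

```c
#include <string.h>

#define POOL_BYTES 1024

/* Toy "compressed" store with one slot and a hard space budget.
 * dup_len == 0 means "no page present". */
static char   dup_buf[POOL_BYTES];
static size_t dup_len;

/* Returns 0 on success; on a store that no longer fits, rejects the
 * store AND invalidates the old copy (the coherency rule above). */
static int dup_store(const char *data, size_t len)
{
    if (len > POOL_BYTES) {
        dup_len = 0;               /* invalidate old data on rejection */
        return -1;                 /* kernel will write to the real device */
    }
    memcpy(dup_buf, data, len);
    dup_len = len;
    return 0;
}

static int dup_load(char *data, size_t max)
{
    if (dup_len == 0 || dup_len > max)
        return -1;                 /* miss: stale data can never leak */
    memcpy(data, dup_buf, dup_len);
    return 0;
}
```

If dup_store merely returned failure without clearing dup_len, a subsequent dup_load would hand back the old 512-byte page while the real swap device holds the new data, which is exactly the incoherency the rule forbids.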

When the (non-frontswap) swap subsystem swaps out a page to a real swap device, that page is only taking up low-value, pre-allocated disk space; but a page placed in transcendent memory by frontswap may be occupying valuable RAM. The frontswap_shrink routine allows code outside of the swap subsystem to force pages out of the memory managed by frontswap and back into kernel-addressable memory. For example, in RAMster, a "suction driver" thread attempts to "repatriate" pages sent to a remote machine back to the local machine; this is driven using the frontswap_shrink mechanism when memory pressure subsides.
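
The repatriation idea can be sketched as a pass that forces every page held by a toy backend back into caller-owned memory; toy_shrink is a hypothetical userspace stand-in for the real frontswap_shrink mechanism, not its actual interface:

```c
#include <string.h>

#define SHRINK_PAGES 8

static char shrink_mem[SHRINK_PAGES][16];   /* "transcendent" copies */
static int  shrink_present[SHRINK_PAGES];

/* Hypothetical shrink pass: move every stored page back into the
 * caller-supplied (kernel-addressable) buffer and drop it from the
 * backend, returning how many pages were repatriated. */
static int toy_shrink(char pages_out[SHRINK_PAGES][16])
{
    int moved = 0;
    for (int i = 0; i < SHRINK_PAGES; i++) {
        if (!shrink_present[i])
            continue;
        memcpy(pages_out[i], shrink_mem[i], 16);
        shrink_present[i] = 0;        /* page no longer held by backend */
        moved++;
    }
    return moved;
}
```

After such a pass the backend holds no pages, so the RAM it was occupying can be released or reused.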