MOTIVATION

Cleancache is a new optional feature provided by the VFS layer that
potentially dramatically increases page cache effectiveness for many
workloads in many environments at a negligible cost.

Cleancache can be thought of as a page-granularity victim cache for
clean pages that the kernel's pageframe replacement algorithm (PFRA)
would like to keep around, but can't since there isn't enough memory.
So when the PFRA "evicts" a page, it first attempts to use cleancache
code to put the data contained in that page into "transcendent memory",
memory that is not directly accessible or addressable by the kernel and
is of unknown and possibly time-varying size.

Later, when a cleancache-enabled filesystem wishes to access a page in
a file on disk, it first checks cleancache to see if it already
contains it; if it does, the page of data is copied into the kernel and
a disk access is avoided.

Transcendent memory "drivers" for cleancache are currently implemented
in Xen (using hypervisor memory) and zcache (using in-kernel compressed
memory), and other implementations are in development.

FAQs are included below.

IMPLEMENTATION OVERVIEW

A cleancache "backend" that provides transcendent memory registers
itself with the kernel's cleancache "frontend" by calling
cleancache_register_ops, passing a pointer to a cleancache_ops
structure with its function pointers set appropriately.  The functions
provided must conform to certain semantics as follows:

Most importantly, cleancache is "ephemeral".  Pages which are copied
into cleancache have an indefinite lifetime which is completely
unknowable by the kernel and so may or may not still be in cleancache
at any later time.  Thus, as its name implies, cleancache is not
suitable for dirty pages.  Cleancache has complete discretion over what
pages to preserve and what pages to discard and when.

When a cleancache-enabled filesystem is mounted, it should call
"init_fs" to obtain a pool id which, if positive, must be saved in the
filesystem's superblock; a negative return value indicates failure.  A
"put_page" will copy a (presumably about-to-be-evicted) page into
cleancache and associate it with the pool id, a file key, and a page
index into the file.  (The combination of a pool id, a file key, and an
index is sometimes called a "handle".)  A "get_page" will copy the
page, if found, from cleancache into kernel memory.  An
"invalidate_page" will ensure the page is no longer present in
cleancache; an "invalidate_inode" will invalidate all pages associated
with the specified file; and, when a filesystem is unmounted, an
"invalidate_fs" will invalidate all pages in all files specified by the
given pool id and also surrender the pool id.

An "init_shared_fs", like init_fs, obtains a pool id but tells
cleancache to treat the pool as shared using a 128-bit UUID as a key.
On systems that may run multiple kernels (such as hard partitioned or
virtualized systems) that may share a clustered filesystem, and where
cleancache may be shared among those kernels, calls to init_shared_fs
that specify the same UUID will receive the same pool id, thus allowing
the pages to be shared.  Note that any security requirements must be
imposed outside of the kernel (e.g. by "tools" that control
cleancache).  Or a cleancache implementation can simply disable
init_shared_fs by always returning a negative value.
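
To make the registration flow concrete, below is a minimal sketch of a
no-op backend.  The skel_* names are hypothetical; the ops signatures
follow the cleancache_ops declaration in include/linux/cleancache.h as
of this writing and should be verified against your tree.

    /*
     * Minimal no-op cleancache backend (illustrative sketch only).
     */
    #include <linux/module.h>
    #include <linux/cleancache.h>

    /* Obtain a pool id for a newly mounted fs; negative means failure. */
    static int skel_init_fs(size_t pagesize)
    {
            return 0;               /* one pool, id 0, in this sketch */
    }

    /* Returning a negative value disables shared pools entirely. */
    static int skel_init_shared_fs(char *uuid, size_t pagesize)
    {
            return -1;
    }

    /* Copy the page out of transcendent memory if present; 0 == hit. */
    static int skel_get_page(int pool, struct cleancache_filekey key,
                             pgoff_t index, struct page *page)
    {
            return -1;              /* always miss in this sketch */
    }

    /*
     * Copy an about-to-be-evicted clean page into transcendent memory.
     * The backend may also silently drop it; cleancache is ephemeral.
     */
    static void skel_put_page(int pool, struct cleancache_filekey key,
                              pgoff_t index, struct page *page)
    {
    }

    static void skel_invalidate_page(int pool, struct cleancache_filekey key,
                                     pgoff_t index)
    {
    }

    static void skel_invalidate_inode(int pool, struct cleancache_filekey key)
    {
    }

    static void skel_invalidate_fs(int pool)
    {
    }

    static struct cleancache_ops skel_ops = {
            .init_fs          = skel_init_fs,
            .init_shared_fs   = skel_init_shared_fs,
            .get_page         = skel_get_page,
            .put_page         = skel_put_page,
            .invalidate_page  = skel_invalidate_page,
            .invalidate_inode = skel_invalidate_inode,
            .invalidate_fs    = skel_invalidate_fs,
    };

    static int __init skel_init(void)
    {
            cleancache_register_ops(&skel_ops);
            return 0;
    }
    module_init(skel_init);
    MODULE_LICENSE("GPL");
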
If a get_page is successful on a non-shared pool, the page is
invalidated (thus making cleancache an "exclusive" cache).  On a shared
pool, the page is NOT invalidated on a successful get_page so that it
remains accessible to other sharers.  The kernel is responsible for
ensuring coherency between cleancache (shared or not), the page cache,
and the filesystem, using cleancache invalidate operations as required.

Note that cleancache must enforce put-put-get coherency and get-get
coherency.  For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA).  For get-get
coherency, if a get for a given handle fails, subsequent gets for that
handle will never succeed unless preceded by a successful put with that
handle.

Last, cleancache provides no SMP serialization guarantees; if two
different Linux threads are simultaneously putting and invalidating a
page with the same handle, the results are indeterminate.  Callers must
lock the page to ensure serial behavior.

CLEANCACHE PERFORMANCE METRICS

If the kernel is properly configured, cleancache is monitored via
debugfs files in the /sys/kernel/debug/cleancache directory.  The
effectiveness of cleancache can be measured (across all filesystems)
with:

succ_gets	- number of gets that were successful
failed_gets	- number of gets that failed
puts		- number of puts attempted (all "succeed")
invalidates	- number of invalidates attempted

A backend implementation may provide additional metrics.
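
As an example, the hit rate can be computed from those counters with a
small userspace helper like the hypothetical one sketched below (it
assumes only the four debugfs files named above):

    /*
     * Hypothetical userspace helper: computes the cleancache hit rate
     * from the debugfs counters.  Run as root with debugfs mounted.
     */
    #include <stdio.h>

    static long read_stat(const char *name)
    {
            char path[256];
            long val = -1;
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/kernel/debug/cleancache/%s", name);
            f = fopen(path, "r");
            if (!f)
                    return -1;
            if (fscanf(f, "%ld", &val) != 1)
                    val = -1;
            fclose(f);
            return val;
    }

    int main(void)
    {
            long succ = read_stat("succ_gets");
            long fail = read_stat("failed_gets");

            if (succ < 0 || fail < 0) {
                    fprintf(stderr, "cleancache stats not available\n");
                    return 1;
            }
            printf("gets: %ld hit, %ld missed (%.1f%% hit rate)\n",
                   succ, fail,
                   succ + fail ? 100.0 * succ / (succ + fail) : 0.0);
            return 0;
    }
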
FAQ

1) Where's the value? (Andrew Morton)

Cleancache provides a significant performance benefit to many workloads
in many environments with negligible overhead by improving the
effectiveness of the pagecache.  Clean pagecache pages are saved in
transcendent memory (RAM that is otherwise not directly addressable by
the kernel); fetching those pages later avoids "refaults" and thus disk
reads.

Cleancache (and its sister code "frontswap") provides an interface for
this transcendent memory (aka "tmem"), which conceptually lies between
fast kernel-directly-addressable RAM and slower DMA/asynchronous
devices.  Disallowing direct kernel or userland reads/writes to tmem is
ideal when data is transformed to a different form and size (such as
with compression) or secretly moved (as might be useful for
write-balancing for some RAM-like devices).  Evicted page-cache pages
(and swap pages) are a great use for this kind of
slower-than-RAM-but-much-faster-than-disk transcendent memory, and the
cleancache (and frontswap) "page-object-oriented" specification
provides a nice way to read and write -- and indirectly "name" -- the
pages.

In the virtual case, the whole point of virtualization is to
statistically multiplex physical resources across the varying demands
of multiple virtual machines.  This is really hard to do with RAM, and
efforts to do it well with no kernel changes have essentially failed
(except in some well-publicized special-case workloads).  Cleancache --
and frontswap -- with a fairly small impact on the kernel, provide a
great deal of flexibility for more dynamic RAM multiplexing.
Specifically, the Xen Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM not only to be "time-shared" between
multiple virtual machines, but the pages can be compressed and
deduplicated to optimize RAM utilization.  And when guest OSes are
induced to surrender underutilized RAM (e.g. with "self-ballooning"),
page cache pages are the first to go, and cleancache allows those pages
to be saved and reclaimed if overall host system memory conditions
allow.

And the identical interface used for cleancache can be used in physical
systems as well.  The zcache driver acts as a memory-hungry device that
stores pages of data in a compressed state.  And the proposed "RAMster"
driver shares RAM across multiple physical systems.

2) Why does cleancache have its sticky fingers so deep inside the
   filesystems and VFS? (Andrew Morton and Christoph Hellwig)

The core hooks for cleancache in VFS are in most cases a single line,
and the minimum set are placed precisely where needed to maintain
coherency (via cleancache_invalidate operations) between cleancache,
the page cache, and disk.  All hooks compile into nothingness if
cleancache is config'ed off and turn into a function-pointer-
compare-to-NULL if config'ed on but no backend claims the ops
functions, or to a compare-struct-element-to-negative if a backend
claims the ops functions but a filesystem doesn't enable cleancache.

Some filesystems are built entirely on top of VFS and the hooks in VFS
are sufficient, so they don't require an fs-specific "init_fs" hook;
the initial implementation of cleancache didn't provide this hook.  But
for some filesystems (such as btrfs), the VFS hooks are incomplete and
one or more hooks in fs-specific code are required.  And for some other
filesystems, such as tmpfs, cleancache may be counterproductive.  So it
seemed prudent to require a filesystem to "opt in" to use cleancache,
which requires adding a hook in each filesystem.  Some filesystems are
unsupported by cleancache only because they haven't been tested.  The
existing set should be sufficient to validate the concept, the opt-in
approach means that untested filesystems are not affected, and the
hooks in the existing filesystems should make it very easy to add more
filesystems in the future.

The total impact of the hooks on existing fs and mm files is only about
40 lines added (not counting comments and blank lines).

3) Why not make cleancache asynchronous and batched so it can more
   easily interface with real devices with DMA instead of copying each
   individual page? (Minchan Kim)

The one-page-at-a-time copy semantics simplify the implementation on
both the frontend and the backend and also allow the backend to do
fancy things on-the-fly like page compression and page deduplication.
And since the data is "gone" (copied into/out of the pageframe) before
the cleancache get/put call returns, many race conditions and potential
coherency issues are avoided.  While the interface seems odd for a
"real device" or for real kernel-addressable RAM, it makes perfect
sense for transcendent memory.
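
The sketch below shows what those synchronous semantics look like from
the caller's side; read_one_page() is a hypothetical helper for
illustration, not actual kernel code:

    /*
     * Hypothetical helper: because the copy is synchronous, a
     * successful get means the page is already filled with data
     * when the call returns, so no disk I/O is needed.
     */
    static int read_one_page(struct address_space *mapping,
                             struct page *page)
    {
            if (cleancache_get_page(page) == 0) {
                    SetPageUptodate(page);
                    unlock_page(page);
                    return 0;       /* hit: the disk read is avoided */
            }
            /* miss: issue real I/O via the fs's readpage method */
            return mapping->a_ops->readpage(NULL, page);
    }
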
4) Why is non-shared cleancache "exclusive"?  And where is the page
   "invalidated" after a "get"? (Minchan Kim)

The main reason is to free up space in transcendent memory and to avoid
unnecessary cleancache_invalidate calls.  If inclusive behavior is
desired, the page can be "put" immediately following the "get".  If
put-after-get for inclusive behavior becomes common, the interface
could easily be extended to add a "get_no_invalidate" call.

The invalidate is done by the cleancache backend implementation.

5) What's the performance impact?

Performance analysis has been presented at OLS'09 and LCA'10.  Briefly,
performance gains can be significant on most workloads, especially when
memory pressure is high (e.g. when RAM is overcommitted in a virtual
workload); and because the hooks are invoked primarily in place of or
in addition to a disk read/write, overhead is negligible even in
worst-case workloads.  Basically, cleancache replaces disk I/O with
memory-copy CPU overhead; on older single-core systems with slow
memory-copy speeds, cleancache has little value, but on newer multicore
machines, especially consolidated/virtualized machines, it has great
value.

6) How do I add cleancache support for filesystem X? (Boaz Harrosh)

Filesystems that are well-behaved and conform to certain restrictions
can utilize cleancache simply by making a call to cleancache_init_fs at
mount time.  Unusual, misbehaving, or poorly layered filesystems must
either add additional hooks and/or undergo extensive additional
testing... or should just not enable the optional cleancache.

Some points for a filesystem to consider:

- The FS should be block-device-based (e.g. a ram-based FS such as
  tmpfs should not enable cleancache).
- To ensure coherency/correctness, the FS must ensure that all file
  removal or truncation operations either go through VFS or add hooks
  to do the equivalent cleancache "invalidate" operations.
- To ensure coherency/correctness, either inode numbers must be unique
  across the lifetime of the on-disk file OR the FS must provide an
  "encode_fh" function.
- The FS must call the VFS superblock alloc and deactivate routines or
  add hooks to do the equivalent cleancache calls done there.
- To maximize performance, all pages fetched from the FS should go
  through the do_mpage_readpage routine or the FS should add hooks to
  do the equivalent (cf. btrfs).
- Currently, the FS blocksize must be the same as PAGE_SIZE.  This is
  not an architectural restriction, but no backends currently support
  anything different.
- A clustered FS should invoke the "init_shared_fs" cleancache hook to
  get best performance from some backends.

7) Why not use the KVA of the inode as the key? (Christoph Hellwig)

If cleancache used the inode virtual address instead of
inode/filehandle, the pool id could be eliminated.  But this won't work
because cleancache retains pagecache data pages persistently even when
the inode has been pruned from the inode unused list, and only
invalidates the data pages if the file gets removed/truncated.  So if
cleancache used the inode kva, there would be potential coherency
issues if/when the inode kva is reused for a different file.
Alternately, if cleancache invalidated the pages when the inode kva was
freed, much of the value of cleancache would be lost because the cache
of pages in cleancache is potentially much larger than the kernel
pagecache and is most useful if the pages survive inode cache removal.

8) Why is a global variable required?

The cleancache_enabled flag is checked in all of the frequently-used
cleancache hooks.  The alternative is a function call to check a static
variable.  Since cleancache is enabled dynamically at runtime, systems
that don't enable cleancache would suffer thousands (possibly
tens-of-thousands) of unnecessary function calls per second.  So the
global variable allows cleancache to be enabled by default at compile
time, but have insignificant performance impact when cleancache remains
disabled at runtime.
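
The hook pattern looks roughly like the sketch below (simplified from
include/linux/cleancache.h; details may differ in your tree).  When no
backend is registered, each hook costs only a test of the global flag:

    /* Simplified from include/linux/cleancache.h; details may vary. */
    extern int cleancache_enabled;          /* the global flag */

    static inline int cleancache_get_page(struct page *page)
    {
            int ret = -1;

            /*
             * The global flag is tested first, inline: when cleancache
             * is disabled, the hook costs one cheap comparison rather
             * than a function call.
             */
            if (cleancache_enabled && cleancache_fs_enabled(page))
                    ret = __cleancache_get_page(page);
            return ret;
    }
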
9) Does cleancache work with KVM?

The memory model of KVM is sufficiently different that a cleancache
backend may have less value for KVM.  This remains to be tested,
especially in an overcommitted system.

10) Does cleancache work in userspace?  It sounds useful for
    memory-hungry caches like web browsers. (Jamie Lokier)

No plans yet, though we agree it sounds useful, at least for apps that
bypass the page cache (e.g. O_DIRECT).

Last updated: Dan Magenheimer, April 13 2011