= Transparent Hugepage Support =

== Objective ==

Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support is an alternative means of
backing virtual memory with huge pages, one that supports the
automatic promotion and demotion of page sizes and doesn't have the
shortcomings of hugetlbfs.

Currently it only works for anonymous memory mappings but in the
future it can expand over the pagecache layer starting with tmpfs.

Applications run faster because of two factors. The first factor is
almost completely irrelevant and not of significant interest, because
it also has the downside of requiring larger clear-page and copy-page
operations in page faults, which is a potentially negative effect. The
first factor consists in taking a single page fault for each 2M
virtual region touched by userland (thus reducing the enter/exit
kernel frequency by a factor of 512). This only matters the first time
the memory is accessed for the lifetime of a memory mapping. The
second, long lasting and much more important factor affects all
subsequent accesses to the memory for the whole runtime of the
application. The second factor consists of two components: 1) the TLB
miss will run faster (especially with virtualization using nested
pagetables but almost always also on bare metal without
virtualization) and 2) a single TLB entry will be mapping a much
larger amount of virtual memory, in turn reducing the number of TLB
misses. With virtualization and nested pagetables the TLB can map
entries of larger size only if both KVM and the Linux guest are using
hugepages, but a significant speedup already happens if only one of
the two is using hugepages, just because the TLB miss runs faster.

== Design ==

38- "graceful fallback": mm components which don't have transparent
39  hugepage knowledge fall back to breaking a transparent hugepage and
40  working on the regular pages and their respective regular pmd/pte
41  mappings
42
43- if a hugepage allocation fails because of memory fragmentation,
44  regular pages should be gracefully allocated instead and mixed in
45  the same vma without any failure or significant delay and without
46  userland noticing
47
48- if some task quits and more hugepages become available (either
49  immediately in the buddy or through the VM), guest physical memory
50  backed by regular pages should be relocated on hugepages
51  automatically (with khugepaged)
52
53- it doesn't require memory reservation and in turn it uses hugepages
54  whenever possible (the only possible reservation here is kernelcore=
55  to avoid unmovable pages to fragment all the memory but such a tweak
56  is not specific to transparent hugepage support and it's a generic
57  feature that applies to all dynamic high order allocations in the
58  kernel)
59
60- this initial support only offers the feature in the anonymous memory
61  regions but it'd be ideal to move it to tmpfs and the pagecache
62  later
63
Transparent Hugepage Support maximizes the usefulness of free memory
compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or other movable (or even unmovable)
entities. It doesn't require reservation to prevent hugepage
allocation failures from being noticeable from userland. It allows
paging and all other advanced VM features to be available on the
hugepages. It requires no modifications for applications to take
advantage of it.

Applications however can be further optimized to take advantage of
this feature, like for example they've been optimized before to avoid
a flood of mmap system calls for every malloc(4k). Optimizing userland
is by no means mandatory, and khugepaged can already take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.

In certain cases, when hugepages are enabled system wide, applications
may end up allocating more memory resources. An application may mmap a
large region but only touch 1 byte of it; in that case a 2M page might
be allocated instead of a 4k page for no good reason. This is why it's
possible to disable hugepages system-wide and to only have them inside
MADV_HUGEPAGE madvise regions.

Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting precious bytes of memory and to only
run faster.

Applications that get a large benefit from hugepages and that don't
risk losing memory by using them should use madvise(MADV_HUGEPAGE) on
their critical mmapped regions.
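
For example, a minimal sketch of such an application; the region size
and the error handling are illustrative only:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 256UL * 1024 * 1024;	/* hypothetical 256M working set */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	/* mark the region as a hugepage candidate even when the
	   system-wide policy is "madvise" */
	if (madvise(p, len, MADV_HUGEPAGE))
		perror("madvise");
	memset(p, 0, len);	/* first touch populates the memory */
	return 0;
}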

== sysfs ==

Transparent Hugepage Support can be entirely disabled (mostly for
debugging purposes), enabled only inside MADV_HUGEPAGE regions (to
avoid the risk of consuming more memory resources) or enabled system
wide. This can be achieved with one of:

echo always >/sys/kernel/mm/transparent_hugepage/enabled
echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
echo never >/sys/kernel/mm/transparent_hugepage/enabled
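
Reading the file back shows the active policy in brackets, for example
with "madvise" selected:

cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never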

It's also possible to limit the VM's defrag efforts to generating
hugepages only for madvise regions in case hugepages are not
immediately free, or to never try to defrag memory and simply fall
back to regular pages unless hugepages are immediately
available. Clearly if we spend CPU time to defrag memory, we would
expect to gain even more by the fact we use hugepages later instead of
regular pages. This isn't always guaranteed, but it may be more likely
in case the allocation is for a MADV_HUGEPAGE region.

echo always >/sys/kernel/mm/transparent_hugepage/defrag
echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
echo never >/sys/kernel/mm/transparent_hugepage/defrag

By default the kernel tries to use the huge zero page on read page
faults. It's possible to disable the huge zero page by writing 0 or
enable it back by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page

khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it'll
be automatically shut down if it's set to "never".

khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during the page faults, it
should be worth invoking defrag at least in khugepaged. However it's
also possible to disable defrag in khugepaged by writing 0 or enable
defrag in khugepaged by writing 1:

echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

You can also control how many pages khugepaged should scan at each
pass:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core):

/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure, to throttle the next allocation attempt:

/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs

The khugepaged progress can be seen in the number of pages collapsed:

/sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

and in the number of completed scan passes:

/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans

max_ptes_none specifies how many extra small pages (that are
not already mapped) can be allocated when collapsing a group
of small pages into one large page:

/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none

A higher value may cause programs to use additional memory, while a
lower value reduces the THP performance gain because fewer ranges
qualify for collapsing. The CPU cost of max_ptes_none itself is
negligible, so you can ignore it.
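
For example, a tuning sketch; the values are purely illustrative, not
recommendations:

echo 4096 >/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
echo 1000 >/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
echo 60000 >/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs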

== Boot parameter ==

You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter "transparent_hugepage=always",
"transparent_hugepage=madvise" or "transparent_hugepage=never"
(without the quotes) on the kernel command line.

== Need of application restart ==

The transparent_hugepage/enabled values only affect future
behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to
the regions registered in khugepaged.

== Monitoring usage ==

The number of transparent huge pages currently used by the system is
available by reading the AnonHugePages field in /proc/meminfo. To
identify which applications are using transparent huge pages, it is
necessary to read /proc/PID/smaps and count the AnonHugePages fields
for each mapping. Note that reading the smaps file is expensive and
reading it frequently will incur overhead.
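
For example, a rough per-process total can be obtained with (the PID
is hypothetical):

grep AnonHugePages /proc/1234/smaps | awk '{ sum += $2 } END { print sum " kB" }'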

There are a number of counters in /proc/vmstat that may be used to
monitor how successfully the system is providing huge pages for use.

thp_fault_alloc is incremented every time a huge page is successfully
	allocated to handle a page fault. This applies both to the
	first time a page is faulted and to COW faults.

thp_collapse_alloc is incremented by khugepaged when it has found
	a range of pages to collapse into one huge page and has
	successfully allocated a new huge page to store the data.

thp_fault_fallback is incremented if a page fault fails to allocate
	a huge page and instead falls back to using small pages.

thp_collapse_alloc_failed is incremented if khugepaged found a range
	of pages that should be collapsed into one huge page but failed
	the allocation.

thp_split is incremented every time a huge page is split into base
	pages. This can happen for a variety of reasons but a common
	reason is that a huge page is old and is being reclaimed.

thp_zero_page_alloc is incremented every time a huge zero page is
	successfully allocated. It includes allocations which were
	dropped due to a race with another allocation. Note, it doesn't
	count every map of the huge zero page, only its allocation.

thp_zero_page_alloc_failed is incremented if the kernel fails to
	allocate a huge zero page and falls back to using small pages.

As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in /proc/vmstat to help
monitor this overhead.

compact_stall is incremented every time a process stalls to run
	memory compaction so that a huge page is free for use.

compact_success is incremented if the system compacted memory and
	freed a huge page for use.

compact_fail is incremented if the system tries to compact memory
	but fails.

compact_pages_moved is incremented each time a page is moved. If
	this value is increasing rapidly, it implies that the system
	is copying a lot of data to satisfy the huge page allocation.
	It is possible that the cost of copying exceeds any savings
	from reduced TLB misses.

compact_pagemigrate_failed is incremented when the underlying mechanism
	for moving a page failed.

compact_blocks_moved is incremented each time memory compaction examines
	a huge page aligned range of pages.

It is possible to establish how long the stalls were by using the
function tracer to record how much time was spent in
__alloc_pages_nodemask, and by using the mm_page_alloc tracepoint to
identify which allocations were for huge pages.
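
A sketch of such a tracing session, assuming debugfs is mounted in the
usual location:

cd /sys/kernel/debug/tracing
echo __alloc_pages_nodemask >set_graph_function
echo function_graph >current_tracer
echo 1 >events/kmem/mm_page_alloc/enable
cat trace_pipe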

== get_user_pages and follow_page ==

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most gup users will only care about the actual physical
address of the page and its temporary pinning to be released after the
I/O is complete, so they won't ever notice the fact that the page is
huge. But if any driver is going to mangle the page structure of the
tail page (like checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to
check the head page instead (while serializing properly against
split_huge_page() to avoid the head and tail pages disappearing from
under it; see the futex code for an example of that, hugetlbfs also
needed special handling in the futex code for similar reasons).
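
A minimal sketch of the head page lookup, with the split_huge_page()
serialization left out because it is caller specific; the helper name
is hypothetical:

#include <linux/mm.h>

/* tail pages don't carry meaningful ->mapping etc.: inspect the head
 * page instead (a real driver must also serialize against
 * split_huge_page(), e.g. like the futex code does) */
static struct address_space *gup_page_mapping(struct page *page)
{
	return compound_head(page)->mapping;
}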

NOTE: these aren't new constraints to the GUP API, and they match the
same constraints that apply to hugetlbfs too, so any driver capable
of handling GUP on hugetlbfs will also work fine on transparent
hugepage backed mappings.

In case you can't handle compound pages returned by follow_page, the
FOLL_SPLIT bit can be specified as a parameter to follow_page, so that
it will split the hugepages before returning them. Migration for
example passes FOLL_SPLIT as a parameter to follow_page because it's
not hugepage aware and in fact it can't work at all on hugetlbfs (but
it instead works fine on transparent hugepages thanks to
FOLL_SPLIT). Migration simply can't deal with hugepages being returned
(as it's not only checking the pfn of the page and pinning it during
the copy, but it pretends to migrate the memory in regular page sizes
and with regular pte/pmd mappings).
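
For example, a follow_page caller that can only deal with regular
pages might do (vma and addr are assumed to come from the surrounding
code):

/* FOLL_GET pins the returned page, FOLL_SPLIT guarantees any
 * transparent hugepage is split before the page is returned */
page = follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);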

== Optimizing the applications ==

To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmap region has to be hugepage naturally
aligned. posix_memalign() can provide that guarantee.
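
A minimal sketch of such an allocation (the region size is
illustrative only):

#include <stdlib.h>
#include <string.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)

int main(void)
{
	void *buf;

	/* a 2M aligned region whose size is a 2M multiple can be
	   mapped with huge pmds from the very first page fault */
	if (posix_memalign(&buf, HPAGE_SIZE, 8 * HPAGE_SIZE))
		return 1;
	memset(buf, 0, 8 * HPAGE_SIZE);	/* touch the memory */
	free(buf);
	return 0;
}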

== Hugetlbfs ==

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.

== Graceful fallback ==

Code walking pagetables but unaware of huge pmds can simply call
split_huge_page_pmd(vma, addr, pmd) where the pmd is the one returned
by pmd_offset. It's trivial to make the code transparent hugepage
aware by just grepping for "pmd_offset" and adding split_huge_page_pmd
where missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swapout the hugepage for example.

Example to make mremap.c transparent hugepage aware with a one liner
change:

diff --git a/mm/mremap.c b/mm/mremap.c
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
		return NULL;

	pmd = pmd_offset(pud, addr);
+	split_huge_page_pmd(vma, addr, pmd);
	if (pmd_none_or_clear_bad(pmd))
		return NULL;

== Locking in hugepage aware code ==

We want as much code as possible to be hugepage aware, as calling
split_huge_page() or split_huge_page_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
mm->page_table_lock and re-run pmd_trans_huge. Taking the
page_table_lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_page can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page_table_lock and fall back to the old code as
before. Otherwise you should run pmd_trans_splitting on the pmd. In
case pmd_trans_splitting returns true, it means split_huge_page is
already in the middle of splitting the page. So if pmd_trans_splitting
returns true it's enough to drop the page_table_lock and call
wait_split_huge_page and then fall back to the old code paths. You are
guaranteed that by the time wait_split_huge_page returns, the pmd
isn't huge anymore. If pmd_trans_splitting returns false, you can
proceed to process the huge pmd and the hugepage natively. Once
finished you can drop the page_table_lock.
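
A sketch of that sequence in code; process_huge_pmd is a hypothetical
helper standing in for the native huge pmd processing, and mm, vma,
addr and the pud come from the caller, which holds the mmap_sem:

pmd = pmd_offset(pud, addr);
if (pmd_trans_huge(*pmd)) {
	spin_lock(&mm->page_table_lock);
	if (pmd_trans_huge(*pmd)) {
		if (pmd_trans_splitting(*pmd)) {
			/* split_huge_page is splitting it: wait, then
			 * the pmd is guaranteed not to be huge */
			spin_unlock(&mm->page_table_lock);
			wait_split_huge_page(vma->anon_vma, pmd);
		} else {
			/* stable huge pmd: handle it natively */
			process_huge_pmd(mm, pmd);
			spin_unlock(&mm->page_table_lock);
			return;
		}
	} else {
		/* it was split before we took the lock */
		spin_unlock(&mm->page_table_lock);
	}
}
/* fall back to the regular pte walk as in the old code paths */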

== compound_lock, get_user_pages and put_page ==

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the
page structures. It can do that easily for refcounts taken by huge pmd
mappings. But the GUP API as created by hugetlbfs (that returns head
and tail pages if running get_user_pages on an address backed by any
hugepage) requires the refcount to be accounted on the tail pages and
not only on the head pages, if we want to be able to run
split_huge_page while there are gup pins established on any tail
page. Failing to be able to run split_huge_page if there's any gup pin
on any tail page would mean having to split all hugepages upfront in
get_user_pages, which is unacceptable as too many gup users are
performance critical and they must work natively on hugepages like
they work natively on hugetlbfs already (hugetlbfs is simpler because
hugetlbfs pages cannot be split, so there is no requirement to account
the pins on the tail pages for hugetlbfs). If we didn't account the
gup refcounts on the tail pages during gup, we wouldn't know anymore
which tail page is pinned by gup and which is not while we run
split_huge_page. But we still have to add the gup pin to the head page
too, to know when we can free the compound page in case it's never
split during its lifetime. That requires changing not just get_page,
but put_page as well, so that when put_page runs on a tail page (and
only on a tail page) it will find its respective head page and then
decrease the head page refcount in addition to the tail page
refcount. To obtain a head page reliably and to decrease its refcount
without race conditions, put_page has to serialize against
__split_huge_page_refcount using a special per-page lock called
compound_lock.
