Lines Matching refs:can

13 future it can expand over the pagecache layer starting with tmpfs.
31 TLB can be mapped with a larger size only if both KVM and the Linux guest
73 Applications, however, can be further optimized to take advantage of
76 is far from mandatory and khugepaged can already take care of long
97 Transparent Hugepage Support can be entirely disabled (mostly for
100 wide. This can be achieved with one of:
139 You can also control how many pages khugepaged should scan at each
145 can set this to 0 to run khugepaged at 100% utilization of one core):
154 The khugepaged progress can be seen in the number of pages collapsed:
163 not already mapped) can be allocated when collapsing a group
170 max_ptes_none can waste very little cpu time, so you can
175 You can change the sysfs boot time defaults of Transparent Hugepage
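
The runtime knobs referenced above all live under /sys/kernel/mm/transparent_hugepage/. As a minimal sketch, written in C for consistency with the kernel-side examples further down (the full document uses shell echo commands), and assuming the standard sysfs paths plus root privileges for the write:

    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define THP_DIR "/sys/kernel/mm/transparent_hugepage/"

    /* Set the system-wide policy: "always", "madvise" or "never". */
    static int thp_set_enabled(const char *policy)
    {
            int fd = open(THP_DIR "enabled", O_WRONLY);
            ssize_t n;

            if (fd < 0)
                    return -1;
            n = write(fd, policy, strlen(policy));
            close(fd);
            return n < 0 ? -1 : 0;
    }

    /* Read khugepaged's progress counter mentioned at line 154. */
    static long thp_pages_collapsed(void)
    {
            char buf[64];
            ssize_t n;
            int fd = open(THP_DIR "khugepaged/pages_collapsed", O_RDONLY);

            if (fd < 0)
                    return -1;
            n = read(fd, buf, sizeof(buf) - 1);
            close(fd);
            if (n <= 0)
                    return -1;
            buf[n] = '\0';
            return atol(buf);
    }
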
215 pages. This can happen for a variety of reasons but a common
277 In case you can't handle compound pages when they're returned by
278 follow_page, the FOLL_SPLIT bit can be specified as a parameter to
281 follow_page because it's not hugepage aware and in fact it can't work
283 hugepages thanks to FOLL_SPLIT). Migration simply can't deal with
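
The fragments at lines 277-283 are about FOLL_SPLIT. A minimal kernel-side sketch of that idea, for kernels of the era this document describes (FOLL_SPLIT has since been removed), assuming the caller already holds mmap_sem for read; the helper name get_nonhuge_page() is invented here for illustration:

    #include <linux/err.h>
    #include <linux/mm.h>

    /*
     * Return a pinned, regular (never compound) page: FOLL_SPLIT asks
     * follow_page() to split a transparent hugepage before returning it,
     * so callers that cannot handle compound pages never see one.
     * Must be called with mmap_sem held for read.
     */
    static struct page *get_nonhuge_page(struct vm_area_struct *vma,
                                         unsigned long addr)
    {
            struct page *page = follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);

            if (IS_ERR_OR_NULL(page))
                    return NULL;
            return page;    /* drop with put_page() when done */
    }
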
292 aligned. posix_memalign() can provide that guarantee.
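
Line 292 concerns the alignment of regions passed to madvise(MADV_HUGEPAGE). A small userspace sketch, assuming a 2 MB hugepage size (architecture dependent):

    #include <stdlib.h>
    #include <sys/mman.h>

    #define HPAGE_SIZE (2UL * 1024 * 1024)  /* assumed x86 hugepage size */

    static void *alloc_thp_hint(size_t len)
    {
            void *buf = NULL;

            /* posix_memalign() provides the hugepage-aligned start address */
            if (posix_memalign(&buf, HPAGE_SIZE, len))
                    return NULL;

            /* advisory only: the kernel may or may not back this with THP */
            madvise(buf, len, MADV_HUGEPAGE);
            return buf;
    }
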
296 You can use hugetlbfs on a kernel that has transparent hugepage
297 support enabled just fine as always. No difference can be noted in
304 Code walking pagetables but unaware of huge pmds can simply call
309 fallback design, with a one-liner change, you can avoid writing
314 but you can't handle it natively in your code, you can split it by
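
Lines 304-314 describe the graceful-fallback "one liner": a pagetable walker that does not handle huge pmds natively can simply split them first, via split_huge_page_pmd(). A sketch, assuming mmap_sem is held and the old (mm, pmd) signature; later kernels changed it to take the vma and address:

    #include <linux/huge_mm.h>
    #include <linux/mm.h>

    static pte_t *walk_to_pte(struct mm_struct *mm, pud_t *pud,
                              unsigned long addr)
    {
            pmd_t *pmd = pmd_offset(pud, addr);

            /* the one liner: a huge pmd, if any, is demoted to regular ptes */
            split_huge_page_pmd(mm, pmd);

            return pte_offset_map(pmd, addr);
    }
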
346 regular pmd from under you (split_huge_page can run in parallel to the
355 huge anymore. If pmd_trans_splitting returns false, you can proceed to
356 process the huge pmd and the hugepage natively. Once finished you can
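
Lines 346-356 refer to the old THP splitting protocol (before the refcounting rework): take the page_table_lock, re-check pmd_trans_huge(), and back off if a parallel split_huge_page() has marked the pmd as splitting. A sketch of that check, assuming the old pmd_trans_splitting()/wait_split_huge_page() interface and leaving the actual huge-pmd and regular-pmd handling to the caller:

    #include <linux/huge_mm.h>
    #include <linux/mm.h>

    static void handle_pmd(struct vm_area_struct *vma, pmd_t *pmd)
    {
            struct mm_struct *mm = vma->vm_mm;

            spin_lock(&mm->page_table_lock);
            if (pmd_trans_huge(*pmd)) {
                    if (unlikely(pmd_trans_splitting(*pmd))) {
                            /* split in progress: wait, then redo the pte walk */
                            spin_unlock(&mm->page_table_lock);
                            wait_split_huge_page(vma->anon_vma, pmd);
                    } else {
                            /* process the huge pmd and the hugepage natively */
                            spin_unlock(&mm->page_table_lock);
                    }
            } else {
                    /* regular pmd: fall through to the normal pte handling */
                    spin_unlock(&mm->page_table_lock);
            }
    }
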
363 page structures. It can do that easily for refcounts taken by huge pmd
379 too, to know when we can free the compound page in case it's never