Lines Matching refs:by

20 single page fault for each 2M virtual region touched by userland (so
21 reducing the enter/exit kernel frequency by a factor of 512). This
50 backed by regular pages should be relocated on hugepages
65 if compared to the reservation approach of hugetlbfs by allowing all
76 is by far not mandatory, and khugepaged can already take care of long
92 risk losing memory by using hugepages, should use
110 time to defrag memory, we would expect to gain even more by the fact
120 It's possible to disable the huge zero page by writing 0 or enable it
121 back by writing 1:
133 also possible to disable defrag in khugepaged by writing 0 or enable
134 defrag in khugepaged by writing 1:
176 Support by passing the parameter "transparent_hugepage=always" or
189 The number of transparent huge pages currently used by the system is
190 available by reading the AnonHugePages field in /proc/meminfo. To
203 thp_collapse_alloc is incremented by khugepaged when it has found
277 In case you can't handle compound pages if they're returned by
305 split_huge_page_pmd(vma, addr, pmd) where the pmd is the one returned by
307 by just grepping for "pmd_offset" and adding split_huge_page_pmd where
314 but you can't handle it natively in your code, you can split it by
338 pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
340 created from under you by khugepaged (khugepaged collapse_huge_page
354 guaranteed by the time wait_split_huge_page returns, the pmd isn't
363 page structures. It can do that easily for refcounts taken by huge pmd
364 mappings. But the GUP API as created by hugetlbfs (that returns head
365 and tail pages if running get_user_pages on an address backed by any
377 anymore which tail page is pinned by gup and which is not while we run
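
The runtime knobs referenced in the matches above (the huge zero page toggle at lines 120-121, khugepaged defrag at lines 133-134, and the AnonHugePages counter at lines 189-190) can be exercised from userspace. The sketch below is illustrative only: the sysfs file names use_zero_page and khugepaged/defrag are the usual locations for those toggles but are assumptions here and may differ between kernel versions; writing 0 disables a knob and 1 enables it again, and the AnonHugePages field in /proc/meminfo reports how much memory is currently backed by transparent huge pages.

  #include <stdio.h>
  #include <string.h>

  /* write a short value into a sysfs file; returns 0 on success */
  static int write_knob(const char *path, const char *val)
  {
      FILE *f = fopen(path, "w");
      if (!f)
          return -1;
      fputs(val, f);
      return fclose(f);
  }

  int main(void)
  {
      char line[128];
      FILE *mi;

      /* disable the huge zero page (write "1" to enable it back) */
      write_knob("/sys/kernel/mm/transparent_hugepage/use_zero_page", "0");
      /* disable defrag in khugepaged (write "1" to enable it back) */
      write_knob("/sys/kernel/mm/transparent_hugepage/khugepaged/defrag", "0");

      /* print the transparent huge pages currently used by the system */
      mi = fopen("/proc/meminfo", "r");
      if (!mi)
          return 1;
      while (fgets(line, sizeof(line), mi))
          if (!strncmp(line, "AnonHugePages:", 14))
              fputs(line, stdout);
      fclose(mi);
      return 0;
  }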
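The matches at lines 305-314 describe the simplest way to cope with code that can't handle compound pages: split a possibly huge pmd before the pte-level code runs, typically by adding split_huge_page_pmd() right after each pmd_offset(). The sketch below is a hedged illustration, not a drop-in patch: the function name example_walk_one_pmd and the surrounding walk are invented for the example; it assumes the split_huge_page_pmd(vma, addr, pmd) helper named at line 305 and the usual mm headers (linux/mm.h, linux/huge_mm.h, asm/pgtable.h), and that the caller holds mmap_sem.

  /* Split any huge pmd covering addr so the code below only ever
   * sees regular ptes. */
  static void example_walk_one_pmd(struct mm_struct *mm,
                                   struct vm_area_struct *vma,
                                   unsigned long addr)
  {
      pgd_t *pgd = pgd_offset(mm, addr);
      pud_t *pud;
      pmd_t *pmd;

      if (pgd_none_or_clear_bad(pgd))
          return;
      pud = pud_offset(pgd, addr);
      if (pud_none_or_clear_bad(pud))
          return;
      pmd = pmd_offset(pud, addr);
      /* split a possibly huge pmd back to regular ptes */
      split_huge_page_pmd(vma, addr, pmd);
      if (pmd_none_or_clear_bad(pmd))
          return;
      /* ... existing pte-level code that only understands regular
       * pages continues unchanged from here ... */
  }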
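The matches at lines 338-354 describe the opposite approach: handling the huge pmd natively by checking pmd_trans_huge() under mm->page_table_lock, so that khugepaged cannot collapse a huge pmd from under the caller, and waiting with wait_split_huge_page() if a split is in progress. A hedged sketch of that pattern follows; the helper name example_handle_pmd and the return convention are invented for the example, and the caller is assumed to hold mmap_sem.

  /* Returns 1 if the pmd was handled as a huge pmd, 0 if the caller
   * should fall back to the regular pte path. */
  static int example_handle_pmd(struct mm_struct *mm,
                                struct vm_area_struct *vma,
                                pmd_t *pmd)
  {
      spin_lock(&mm->page_table_lock);
      if (pmd_trans_huge(*pmd)) {
          if (unlikely(pmd_trans_splitting(*pmd))) {
              /* another CPU is splitting it: drop the lock, wait
               * for the split to complete, then use the pte path */
              spin_unlock(&mm->page_table_lock);
              wait_split_huge_page(vma->anon_vma, pmd);
              return 0;
          }
          /* stable huge pmd: safe to operate on it while the
           * page_table_lock is held */
          /* ... native huge pmd handling goes here ... */
          spin_unlock(&mm->page_table_lock);
          return 1;
      }
      spin_unlock(&mm->page_table_lock);
      return 0;
  }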