Lines Matching refs:by
20 single page fault for each 2M virtual region touched by userland (so
21 reducing the enter/exit kernel frequency by a factor of 512). This
50 backed by regular pages should be relocated onto hugepages
65 compared to the reservation approach of hugetlbfs by allowing all
76 is by no means mandatory, and khugepaged can already take care of long
92 risk of losing memory by using hugepages, should use
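The sentence is cut off by the line match here; in the full transhuge.txt text the advice for such applications is to use madvise(MADV_HUGEPAGE) on their critical mmapped regions. A minimal userspace sketch of that call, assuming an anonymous mapping (the region size is arbitrary and only for illustration):

    #define _GNU_SOURCE
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 16UL << 20;        /* 16MB region, size is arbitrary */
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (buf == MAP_FAILED)
                    return 1;

            /* hint: back this region with transparent hugepages if possible */
            if (madvise(buf, len, MADV_HUGEPAGE))
                    return 1;
            return 0;
    }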
110 time to defrag memory, we would expect to gain even more by the fact
120 It's possible to disable huge zero page by writing 0 or enable it
121 back by writing 1:
133 also possible to disable defrag in khugepaged by writing 0 or enable
134 defrag in khugepaged by writing 1:
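Both of the knobs referred to above are single sysfs files, use_zero_page for the huge zero page and khugepaged/defrag for defrag in khugepaged, each taking 0 or 1. A small C sketch of flipping them (write_flag is just an illustrative helper, not a kernel or libc API, and root privileges are needed):

    #include <stdio.h>

    /* illustrative helper, not a kernel or libc API */
    static int write_flag(const char *path, const char *val)
    {
            FILE *f = fopen(path, "w");

            if (!f)
                    return -1;
            fputs(val, f);
            return fclose(f);
    }

    int main(void)
    {
            /* 0 disables the huge zero page, 1 enables it back */
            write_flag("/sys/kernel/mm/transparent_hugepage/use_zero_page", "0");

            /* 0 disables defrag in khugepaged, 1 enables it */
            write_flag("/sys/kernel/mm/transparent_hugepage/khugepaged/defrag", "1");
            return 0;
    }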
186 Support by passing the parameter "transparent_hugepage=always" or
199 The number of transparent huge pages currently used by the system is
200 available by reading the AnonHugePages field in /proc/meminfo. To
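Both facts are visible from procfs: /proc/cmdline shows whether a transparent_hugepage= parameter was passed at boot, and the AnonHugePages field of /proc/meminfo gives current usage. A short sketch that dumps both (the scanning helper is illustrative):

    #include <stdio.h>
    #include <string.h>

    /* print every line of a proc file that starts with the given prefix;
     * an empty prefix prints the whole file */
    static void show_matching(const char *path, const char *prefix)
    {
            char line[512];
            FILE *f = fopen(path, "r");

            if (!f)
                    return;
            while (fgets(line, sizeof(line), f))
                    if (!strncmp(line, prefix, strlen(prefix)))
                            fputs(line, stdout);
            fclose(f);
    }

    int main(void)
    {
            /* kernel command line, e.g. "... transparent_hugepage=always" */
            show_matching("/proc/cmdline", "");

            /* anonymous memory currently backed by transparent hugepages */
            show_matching("/proc/meminfo", "AnonHugePages");
            return 0;
    }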
213 thp_collapse_alloc is incremented by khugepaged when it has found
287 In case you can't handle compound pages if they're returned by
315 split_huge_page_pmd(vma, addr, pmd) where the pmd is the one returned by
317 by just grepping for "pmd_offset" and adding split_huge_page_pmd where
324 but you can't handle it natively in your code, you can split it by
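A sketch of the conversion described in the lines above, assuming the 4-level page table walk of this document's era (pgd -> pud -> pmd); the walker itself is hypothetical, only split_huge_page_pmd(vma, addr, pmd) and the *_offset helpers come from the text, and this is kernel-internal code shown for illustration only:

    #include <linux/mm.h>
    #include <linux/huge_mm.h>

    static void example_walk(struct vm_area_struct *vma, unsigned long addr)
    {
            pgd_t *pgd = pgd_offset(vma->vm_mm, addr);
            pud_t *pud;
            pmd_t *pmd;

            if (pgd_none_or_clear_bad(pgd))
                    return;
            pud = pud_offset(pgd, addr);
            if (pud_none_or_clear_bad(pud))
                    return;
            pmd = pmd_offset(pud, addr);

            /* split a transparent huge pmd back to regular ptes before the
             * pte-only legacy code runs; this is a no-op on a regular pmd */
            split_huge_page_pmd(vma, addr, pmd);

            /* ... pre-existing pte-level processing continues here ... */
    }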
348 pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
350 created from under you by khugepaged (khugepaged collapse_huge_page
364 guaranteed that, by the time wait_split_huge_page returns, the pmd isn't
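The check-and-lock sequence those lines describe can be sketched as follows. This is kernel-internal, illustrative code: the function name is made up, the caller is assumed to hold mmap_sem as required above, and it assumes the era of this document in which mm->page_table_lock still covers huge pmds and pmd_trans_splitting()/wait_split_huge_page() exist:

    #include <linux/mm.h>
    #include <linux/huge_mm.h>

    /* returns 1 if the huge pmd was handled natively, 0 if the caller
     * should fall back to the old pte-level code paths */
    static int maybe_handle_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd)
    {
            struct mm_struct *mm = vma->vm_mm;

            if (!pmd_trans_huge(*pmd))
                    return 0;                       /* regular pmd */

            spin_lock(&mm->page_table_lock);
            if (!pmd_trans_huge(*pmd)) {
                    /* split_huge_page ran in parallel: fall back */
                    spin_unlock(&mm->page_table_lock);
                    return 0;
            }
            if (pmd_trans_splitting(*pmd)) {
                    /* hugepage is under splitting: wait, after which the
                     * pmd is guaranteed not to be huge anymore */
                    spin_unlock(&mm->page_table_lock);
                    wait_split_huge_page(vma->anon_vma, pmd);
                    return 0;
            }

            /* ... process the huge pmd and the hugepage natively here;
             * page_table_lock keeps it from being converted under us ... */
            spin_unlock(&mm->page_table_lock);
            return 1;
    }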
373 page structures. It can do that easily for refcounts taken by huge pmd
374 mappings. But the GUP API as created by hugetlbfs (that returns head
375 and tail pages if running get_user_pages on an address backed by any
387 anymore which tail page is pinned by gup and which is not while we run