Lines matching "in":

30      - Page reclaim in shrink_*_list().
43 implementation. The latter design rationale is discussed in the context of an
56 reclaim in Linux. The problems have been observed at customer sites on large
60 main memory will have over 32 million 4k pages in a single zone. When a large
63 of pages that are evictable. This can result in a situation where all CPUs are
64 spending 100% of their time in vmscan for hours or days on end, with the system
76 unevictable, either by definition or by circumstance, in the future.
87 PG_active flag in that it indicates on which LRU list a page resides when
93 (1) We get to "treat unevictable pages just like we treat other pages in the
109 in fact, evictable.
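
As an illustration of that "rescue", here is a minimal sketch of the decision made when an isolated page is released back to the LRU, loosely modeled on putback_lru_page() in mm/vmscan.c. The _sketch name is illustrative, page_evictable() is shown in its modern one-argument form, and the real function also handles a re-check race this sketch omits:

    /*
     * Sketch: on putback, recheck evictability so a page that has
     * become evictable again lands on a normal LRU list.
     */
    void putback_lru_page_sketch(struct page *page)
    {
            if (page_evictable(page))
                    lru_cache_add(page);            /* normal LRU */
            else
                    add_page_to_unevictable_list(page);
            put_page(page);         /* drop the isolation reference */
    }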
142 are unevictable, the evictable portion of the working set of the tasks in
168 These are currently used in two places in the kernel:
175 Note that SHM_LOCK is not required to page in the locked pages if they're
177 ensure they're in memory.
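
A small userspace illustration of that point (a hedged example; error handling trimmed for brevity):

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
            /* create a 1 MiB SysV shm segment and lock it */
            int id = shmget(IPC_PRIVATE, 1 << 20, IPC_CREAT | 0600);
            char *p = shmat(id, NULL, 0);

            if (shmctl(id, SHM_LOCK, NULL) != 0)
                    perror("shmctl(SHM_LOCK)");

            /*
             * SHM_LOCK does not fault the pages in; touch them to
             * ensure they're resident (unevictable once allocated).
             */
            memset(p, 0, 1 << 20);

            shmdt(p);
            shmctl(id, IPC_RMID, NULL);
            return 0;
    }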
183 The function page_evictable() in vmscan.c determines whether a page is
190 any special effort to push any pages in the SHM_LOCK'd area to the unevictable
195 the pages in the region and "rescue" them from the unevictable list if no other
197 the pages are also "rescued" from the unevictable list in the process of
202 faulted into a VM_LOCKED VMA, or found in a VMA being VM_LOCKED.
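
The test itself is small; a sketch of the logic (current kernels' page_evictable() in mm/vmscan.c is essentially this check, minus locking; older versions also took a vma argument):

    /*
     * A page is evictable unless its mapping is marked unevictable
     * (e.g. SHM_LOCK'd shmem, ramfs) or the page carries PG_mlocked.
     */
    static int page_evictable_sketch(struct page *page)
    {
            return !mapping_unevictable(page_mapping(page)) &&
                   !PageMlocked(page);
    }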
208 If unevictable pages are culled in the fault path, or moved to the unevictable
214 pages in all of the shrink_{active|inactive|page}_list() functions and will
221 map in try_to_unmap(). If try_to_unmap() returns SWAP_MLOCK,
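
A sketch of that handling, modeled on the switch in shrink_page_list() from kernels of this era (labels abbreviated; TTU_UNMAP per the era's enum ttu_flags):

    /* in shrink_page_list(), for a page found mapped: */
    switch (try_to_unmap(page, TTU_UNMAP)) {
    case SWAP_MLOCK:
            goto cull_mlocked;      /* divert to the unevictable list */
    case SWAP_FAIL:
            goto activate_locked;
    case SWAP_AGAIN:
            goto keep_locked;
    case SWAP_SUCCESS:
            ;                       /* fall through: try to free it */
    }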
232 extra evictability checks should not occur in the majority of calls to
240 The unevictable page list is also useful for mlock(), in addition to ramfs and
241 SYSV SHM. Note that mlock() is only available in CONFIG_MMU=y situations; in
249 posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
275 the LRU. Such pages can be "noticed" by memory management in several places:
277 (1) in the mlock()/mlockall() system call handlers;
279 (2) in the mmap() system call handler when mmapping a region with the
282 (3) mmapping a region in a task that has called mlockall() with the MCL_FUTURE
285 (4) in the fault path, if mlocked pages are "culled" in the fault path,
288 (5) as mentioned above, in vmscan:shrink_page_list() when attempting to
289 reclaim a page in a VM_LOCKED VMA via try_to_unmap()
291 all of which result in the VM_LOCKED flag being set for the VMA if it doesn't
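
All of those paths funnel into mlock_vma_page(); a sketch close to the era's mm/mlock.c (simplified; assumes the caller holds the page lock):

    /*
     * Called, with the page locked, when a page is faulted into or
     * found in a VM_LOCKED VMA.
     */
    void mlock_vma_page_sketch(struct page *page)
    {
            BUG_ON(!PageLocked(page));

            if (!TestSetPageMlocked(page)) {
                    inc_zone_page_state(page, NR_MLOCK);
                    count_vm_event(UNEVICTABLE_PGMLOCKED);
                    if (!isolate_lru_page(page))
                            putback_lru_page(page); /* to unevictable list */
            }
    }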
296 (1) mapped in a range unlocked via the munlock()/munlockall() system calls;
304 (4) before a page is COW'd in a VM_LOCKED VMA.
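
The reverse operation looks similar; a sketch along the lines of the era's munlock_vma_page() (simplified; event counting omitted):

    void munlock_vma_page_sketch(struct page *page)
    {
            if (TestClearPageMlocked(page)) {
                    dec_zone_page_state(page, NR_MLOCK);
                    if (!isolate_lru_page(page)) {
                            /*
                             * try_to_munlock() re-mlocks the page if any
                             * other VM_LOCKED VMA still maps it; putback
                             * then routes it to the appropriate list.
                             */
                            try_to_munlock(page);
                            putback_lru_page(page);
                    }
                    /* if isolation failed, vmscan rescues the page later */
            }
    }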
311 for each VMA in the range specified by the call. In the case of mlockall(),
317 If the VMA passes some filtering as described in "Filtering Special Vmas"
321 populate_vma_page_range() to fault in the pages via get_user_pages() and to
325 get_user_pages() will be unable to fault in the pages. That's okay. If pages
326 do end up getting faulted into this VM_LOCKED VMA, we'll handle them in the
327 fault path or in vmscan.
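
Putting those pieces together, a hedged sketch of this sequence (populate_vma_page_range() is the helper in later kernels, older ones used __mlock_vma_pages_range(); the _sketch wrapper is illustrative):

    /*
     * After filtering: mark the VMA VM_LOCKED, then fault in and mlock
     * what can be faulted now via get_user_pages() (done inside
     * populate_vma_page_range()); stragglers are caught in the fault
     * path or by vmscan.
     */
    static long mlock_vma_range_sketch(struct vm_area_struct *vma,
                                       unsigned long start,
                                       unsigned long end)
    {
            vma->vm_flags |= VM_LOCKED;
            return populate_vma_page_range(vma, start, end, NULL);
    }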
334 In the worst case, this will result in a page mapped in a VM_LOCKED VMA
341 especially do not want to count an mlocked page more than once in the
361 mlocked. In any case, most of the pages have no struct page in which to so
363 so there is no sense in attempting to visit them.
368 mlock_fixup() will call make_pages_present() in the hugetlbfs VMA range to
391 VM_LOCKED will not be set in any "special" VMAs. So, these VMAs will be
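
The filtering itself is a few flag tests; a sketch of the checks in mlock_fixup() (flag names from kernels of this era; VM_RESERVED has since been retired, and get_gate_vma()'s argument differs across versions):

    if (vma->vm_flags & (VM_IO | VM_PFNMAP))
            goto out;               /* pages may lack struct page; skip */

    if ((vma->vm_flags & (VM_DONTEXPAND | VM_RESERVED)) ||
        is_vm_hugetlb_page(vma) ||
        vma == get_gate_vma(current)) {
            make_pages_present(start, end); /* populate, but... */
            goto out;               /* ...don't set VM_LOCKED */
    }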
400 faulting in and mlocking pages, get_user_pages() was unreliable for visiting
419 isolate_lru_page() could fail, in which case we couldn't try_to_munlock(). So,
440 This has been discussed from the mlock/munlock perspective in the respective
461 the page migration code and the same work flow as described in MIGRATING
472 in the newly mapped memory being mlocked. Before the unevictable/mlock
485 attempting to fault in a VMA with PROT_NONE access. In this case, we leave the
496 Before the unevictable/mlock changes, mlocking did not mark the pages in any
520 in the process of munlocking the page could not isolate the page from the LRU.
522 in section "vmscan's handling of unevictable pages". To handle this situation,
533 To unmap anonymous pages, each VMA in the list anchored in the anon_vma
539 try_to_unmap_anon() attempts to acquire in read mode the mmap semaphore of
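
A sketch of that anon rmap walk (heavily simplified: the mmap-semaphore trylock described above is omitted, argument lists vary across versions, and newer kernels replace the anon_vma list with an interval tree and rwsem):

    static int try_to_unmap_anon_sketch(struct page *page)
    {
            struct anon_vma *anon_vma = page_lock_anon_vma(page);
            struct vm_area_struct *vma;
            int ret = SWAP_AGAIN;

            if (!anon_vma)
                    return ret;
            list_for_each_entry(vma, &anon_vma->head, anon_vma_node) {
                    ret = try_to_unmap_one(page, vma, TTU_UNMAP);
                    if (ret == SWAP_MLOCK || !page_mapped(page))
                            break;  /* mlocked, or fully unmapped */
            }
            page_unlock_anon_vma(anon_vma);
            return ret;
    }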
553 in the page's mapping's reverse map priority search tree. It also visits
554 each VMA in the page's mapping's non-linear list, if the list is
565 try_to_un{map|lock}() must also visit each VMA in that list to determine
566 whether the page is mapped in a VM_LOCKED VMA. Again, the scan must visit
567 all VMAs in the non-linear list to ensure that the page is not/should not
570 If a VM_LOCKED VMA is found in the list, the scan could terminate.
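
The text says the scan "could" terminate there; a sketch of that variant against the era's priority-tree walk (vma_prio_tree_foreach() details and locking elided; newer kernels use an interval tree):

    vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff, pgoff) {
            if (vma->vm_flags & VM_LOCKED) {
                    ret = SWAP_MLOCK;       /* page must stay resident */
                    break;                  /* scan can stop early */
            }
            ret = try_to_unmap_one(page, vma, TTU_UNMAP);
            if (ret != SWAP_AGAIN || !page_mapped(page))
                    break;
    }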
572 mapped in a given VMA - either for unmapping or testing whether the
576 number of pages - a "cluster" - in each non-linear VMA associated with the
581 recirculate this page. We take advantage of the cluster scan in
591 Then, for each page in the cluster, if we're holding the mmap semaphore
594 but will mlock any pages in the non-linear mapping that happen to be
597 If one of the pages so mlocked is the page passed in to try_to_unmap(),
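
Inside try_to_unmap_cluster(), that case is handled roughly like this (per the era's mm/rmap.c):

    /* cluster scan, mmap semaphore held, VM_LOCKED observed: */
    if (locked_vma) {
            mlock_vma_page(page);   /* no-op if already mlocked */
            if (page == check_page)
                    ret = SWAP_MLOCK;       /* target page is mlocked */
            continue;               /* don't unmap the pte */
    }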
623 pages mapped in linear VMAs, as in the try_to_unmap() case, the functions
639 Note that try_to_munlock()'s reverse map walk must visit every VMA in a page's
663 allocate or fault in the pages in the shared memory region. This happens
668 unevictable list in mlock_vma_page().
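
The net effect of these paths is visible from userspace via /proc/meminfo (the field names are real; the values below are illustrative):

    $ grep -E 'Unevictable|Mlocked' /proc/meminfo
    Unevictable:        2048 kB
    Mlocked:            2048 kB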