Lines Matching refs:pages
15 - vmscan's handling of unevictable pages.
17 (*) mlock()'d pages.
24 - Migrating mlocked pages.
25 - Compacting mlocked pages.
39 pages.
54 pages and to hide these pages from vmscan. This mechanism is based on a patch
60 main memory will have over 32 million 4k pages in a single zone. When a large
61 fraction of these pages are not evictable for any reason [see below], vmscan
63 of pages that are evictable. This can result in a situation where all CPUs are
67 The unevictable list addresses the following classes of unevictable pages:
75 The infrastructure may also be able to handle other conditions that make pages
90 The Unevictable LRU infrastructure maintains unevictable pages on an additional
93 (1) We get to "treat unevictable pages just like we treat other pages in the
98 (2) We want to be able to migrate unevictable pages between nodes for memory
100 can only migrate pages that it can successfully isolate from the LRU
101 lists. If we were to maintain pages elsewhere than on an LRU-like list,
103 migration, unless we reworked migration code to find the unevictable pages
108 swap-backed pages. This differentiation is only important while the pages are,
115 unevictable pages are placed directly on the page's zone's unevictable list
116 under the zone lru_lock. This allows us to prevent the stranding of pages on
130 lru_list enum element). The memory controller tracks the movement of pages to
134 not attempt to reclaim pages on the unevictable list. This has a couple of
137 (1) Because the pages are "hidden" from reclaim on the unevictable list, the
138 reclaim process can be more efficient, dealing only with pages that have a
141 (2) On the other hand, if too many of the pages charged to the control group
150 For facilities such as ramfs none of the pages attached to the address space
151 may be evicted. To prevent eviction of any such pages, the AS_UNEVICTABLE
175 Note that SHM_LOCK is not required to page in the locked pages if they're
176 swapped out; the application must touch the pages manually if it wants to
190 any special effort to push any pages in the SHM_LOCK'd area to the unevictable
191 list. Instead, vmscan will do this if and when it encounters the pages during
195 the pages in the region and "rescue" them from the unevictable list if no other
197 the pages are also "rescued" from the unevictable list in the process of
200 page_evictable() also checks for mlocked pages by testing an additional page
208 If unevictable pages are culled in the fault path, or moved to the unevictable
209 list at mlock() or mmap() time, vmscan will not encounter the pages until they
214 pages in all of the shrink_{active|inactive|page}_list() functions and will
215 "cull" such pages that it encounters: that is, it diverts those pages to the
219 page is not marked as PG_mlocked. Such pages will make it all the way to
231 event and movement of pages onto the unevictable list should be rare, these
249 posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
251 to achieve the same objective: hiding mlocked pages from vmscan.
255 prevented the management of the pages on an LRU list, and thus mlocked pages
259 Nick resolved this by putting mlocked pages back on the lru list before
269 mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
270 pages. When such a page has been "noticed" by the memory management subsystem,
275 the LRU. Such pages can be "noticed" by memory management in several places:
285 (4) in the fault path, if mlocked pages are "culled" in the fault path,
294 mlocked pages become unlocked and rescued from the unevictable list when:
321 populate_vma_page_range() to fault in the pages via get_user_pages() and to
322 mark the pages as mlocked via mlock_vma_page().
325 get_user_pages() will be unable to fault in the pages. That's okay. If pages
336 detect and cull such pages.
359 1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely. The pages behind
361 mlocked. In any case, most of the pages have no struct page in which to so
366 neither need nor want to mlock() these pages. However, to preserve the
369 allocate the huge pages and populate the ptes.
371 3) VMAs with VM_DONTEXPAND are generally userspace mappings of kernel pages,
372 such as the VDSO page, relay channel pages, etc. These pages
400 faulting in and mlocking pages, get_user_pages() was unreliable for visiting
401 these pages for munlocking. Because we don't want to leave pages mlocked,
403 fetching the pages - all of which should be resident as a result of previous
406 For munlock(), populate_vma_page_range() unlocks individual pages by calling
412 mlocked pages. Note, however, that at this point we haven't checked whether
436 of mlocked pages and other unevictable pages. This involves simply moving the
444 can skip these pages by testing the page mapping under page lock.
446 To complete page migration, we place the new and old pages back onto the LRU
449 process is released. To ensure that we don't strand pages on the unevictable
451 putback_lru_page() function to add migrated pages back to the LRU.
473 area will still have properties of the locked area - i.e. pages will not get
479 changes, the kernel simply called make_pages_present() to allocate pages and
488 populate_vma_page_range() returns the number of pages NOT mlocked. All of the
493 and pages allocated into that region.
501 munlock the pages if we're removing the last VM_LOCKED VMA that maps the pages.
502 Before the unevictable/mlock changes, mlocking did not mark the pages in any
510 actually contain mlocked pages will be passed to munlock_vma_pages_all().
528 in section "vmscan's handling of unevictable pages". To handle this situation,
534 functions handle anonymous and mapped file and KSM pages, as these types of
535 pages have different reverse map lookup mechanisms, with different locking.
547 holepunching, and truncation of file pages and their anonymous COWed pages.
563 mapped file and KSM pages with a flag argument specifying unlock versus unmap
580 shrink_active_list() culls any obviously unevictable pages - i.e.
582 However, shrink_active_list() only sees unevictable pages that made it onto the
583 active/inactive lru lists. Note that these pages do not have PageUnevictable
587 Some examples of these unevictable pages on the LRU lists are:
589 (1) ramfs pages that have been placed on the LRU lists when first allocated.
591 (2) SHM_LOCK'd shared memory pages. shmctl(SHM_LOCK) does not attempt to
592 allocate or fault in the pages in the shared memory region. This happens
596 (3) mlocked pages that could not be isolated from the LRU and moved to the
599 shrink_inactive_list() also diverts any unevictable pages that it finds on the
602 shrink_inactive_list() should only see SHM_LOCK'd pages that became SHM_LOCK'd
603 after shrink_active_list() had moved them to the inactive list, or pages mapped
608 shrink_page_list() again culls obviously unevictable pages that it could