Searched refs: do_swap_page (Results 1 - 9 of 9) sorted by relevance
/linux-4.4.14/include/linux/ksm.h
    51  * When do_swap_page() first faults in from swap what used to be a KSM page,
    54  * offset in the same anon_vma). do_swap_page() cannot do all the locking
/linux-4.4.14/arch/alpha/include/asm/cacheflush.h
    66  /* This is used only in __do_fault and do_swap_page. */
/linux-4.4.14/arch/avr32/mm/cache.c
    117  * This one is called from __do_fault() and do_swap_page().
/linux-4.4.14/mm/swapfile.c  (in try_to_unuse)
    1456  * Wait for and lock page. When do_swap_page races with
    1457  * try_to_unuse, do_swap_page can handle the fault much
    1460  * defer to do_swap_page in such a case - in some tests,
    1461  * do_swap_page and try_to_unuse repeatedly compete.
/linux-4.4.14/mm/rmap.c
    1147  * Special version of the above for do_swap_page, which often runs
    1387  * pte. do_swap_page() will wait until the migration  (in try_to_unmap_one)
/linux-4.4.14/mm/memory.c
    1944  * parts, do_swap_page must check under lock before unmapping the pte and
    2486  static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,  (function definition)
    3338  return do_swap_page(mm, vma, address,  (in handle_pte_fault)
/linux-4.4.14/mm/memory-failure.c
    699  * interception code in do_swap_page to catch it).
/linux-4.4.14/mm/ksm.c
    1898  return page; /* let do_swap_page report the error */  (in ksm_might_need_to_copy)
/linux-4.4.14/mm/shmem.c
    973  * NUMA mempolicy, and applied also to anonymous pages in do_swap_page();
Completed in 296 milliseconds