= Userfaultfd =

== Objective ==

Userfaults allow the implementation of on-demand paging from userland
and more generally they allow userland to take control of various
memory page faults, something otherwise only the kernel code could do.

For example userfaults allow a proper and more optimal implementation
of the PROT_NONE+SIGSEGV trick.
== Design ==

Userfaults are delivered and resolved through the userfaultfd syscall.
The userfaultfd, aside from registering and unregistering virtual
memory ranges, provides two primary functionalities:

1) a read/POLLIN protocol to notify a userland thread of the faults
   happening

2) various UFFDIO_* ioctls that can manage the virtual memory regions
   registered in the userfaultfd, allowing userland to efficiently
   resolve the userfaults it receives via 1) or to manage the virtual
   memory in the background
The real advantage of userfaults compared to regular virtual memory
management via mremap/mprotect is that userfaults never involve
heavyweight structures like vmas in any of their operations (in fact
the userfaultfd runtime load never takes the mmap_sem for writing).
Once opened by invoking the syscall, the userfaultfd can also be
passed to a manager process using unix domain sockets, so the same
manager process could handle the userfaults of a multitude of
different processes without them being aware of what is going on
(well, of course, unless they later try to use the userfaultfd
themselves on the same region the manager is already tracking, which
is a corner case that would currently return -EBUSY).
== API ==

When first opened, the userfaultfd must be enabled by invoking the
UFFDIO_API ioctl with a uffdio_api.api value set to UFFD_API (or a
later API version), which specifies the read/POLLIN protocol userland
intends to speak on the UFFD and the uffdio_api.features userland
requires. If successful (i.e. if the requested uffdio_api.api is
spoken also by the running kernel and the requested features are going
to be enabled), the UFFDIO_API ioctl returns into uffdio_api.features
and uffdio_api.ioctls two 64bit bitmasks of, respectively, all the
available features of the read(2) protocol and the generic ioctls
available.
Once the userfaultfd has been enabled, the UFFDIO_REGISTER ioctl
should be invoked (if present in the returned uffdio_api.ioctls
bitmask) to register a memory range in the userfaultfd by setting the
uffdio_register structure accordingly. The uffdio_register.mode
bitmask specifies to the kernel which kinds of faults to track for
the range (UFFDIO_REGISTER_MODE_MISSING would track missing
pages). The UFFDIO_REGISTER ioctl returns the uffdio_register.ioctls
bitmask of the ioctls that are suitable to resolve userfaults on the
registered range. Not all ioctls will necessarily be supported for all
memory types, depending on the underlying virtual memory backend
(anonymous memory vs tmpfs vs real filebacked mappings).
Userland can use the uffdio_register.ioctls to manage the virtual
address space in the background (to add or potentially also remove
memory from the userfaultfd registered range). This means a userfault
could be triggering just before userland maps in the background the
user-faulted page.
The primary ioctl to resolve userfaults is UFFDIO_COPY. It atomically
copies a page into the userfault registered range and wakes up the
blocked userfaults (unless uffdio_copy.mode &
UFFDIO_COPY_MODE_DONTWAKE is set). The other ioctls work similarly to
UFFDIO_COPY. They're atomic in the sense that nothing can ever see a
half-copied page, since the faulting thread keeps userfaulting until
the copy has finished.
== QEMU/KVM ==

QEMU/KVM uses the userfaultfd syscall to implement postcopy live
migration. Postcopy live migration is one form of memory
externalization consisting of a virtual machine running with part or
all of its memory residing on a different node in the cloud. The
userfaultfd abstraction is generic enough that not a single line of
KVM kernel code had to be modified in order to add postcopy live
migration to QEMU.

Guest async page faults, FOLL_NOWAIT and all other GUP features work
just fine in combination with userfaults. Userfaults trigger async
page faults in the guest scheduler so those guest processes that
aren't waiting for userfaults can keep running in the guest vcpus.
The implementation of postcopy live migration currently uses one
single bidirectional socket, but in the future two different sockets
will be used (to reduce the latency of the userfaults to the minimum
possible without having to decrease /proc/sys/net/ipv4/tcp_wmem).
The QEMU in the source node writes all pages that it knows are missing
in the destination node into the socket, and the migration thread of
the QEMU running in the destination node runs UFFDIO_COPY|ZEROPAGE
ioctls on the userfaultfd in order to map the received pages into the
guest (UFFDIO_ZEROPAGE is used if the source page was a zero page).
A different postcopy thread in the destination node listens on the
userfaultfd in parallel with poll(). When a POLLIN event is generated
after a userfault triggers, the postcopy thread read()s from the
userfaultfd and receives the fault address (or -EAGAIN in case the
userfault was already resolved and waken by a UFFDIO_COPY|ZEROPAGE run
by the parallel QEMU migration thread).
After the QEMU postcopy thread (running in the destination node) gets
the userfault address, it writes the information about the missing
page into the socket. The QEMU source node receives the information
and roughly "seeks" to that page address, then continues sending all
the remaining missing pages from that new page offset. Soon after that
(just the time to flush the tcp_wmem queue through the network) the
migration thread in the QEMU running in the destination node will
receive the page that triggered the userfault and map it as usual with
UFFDIO_COPY|ZEROPAGE (without actually knowing if it was spontaneously
sent by the source or if it was an urgent page requested through a
userfault).
By the time the userfaults start, the QEMU in the destination node
doesn't need to keep any per-page state bitmap relative to the live
migration around, and a single per-page bitmap has to be maintained in
the QEMU running in the source node to know which pages are still
missing in the destination node. The bitmap in the source node is
updated accordingly as the postcopy migration proceeds. It's also
useful to avoid sending the same page twice (in case the userfault is
read by the postcopy thread just before UFFDIO_COPY|ZEROPAGE runs in
the migration thread).