This document attempts to describe the concepts and APIs of the 2.6 memory policy
support.
Memory policies should not be confused with cpusets, which is an administrative
mechanism for restricting the nodes from which memory may be allocated by a set
of processes. Memory policies are a programming interface that a NUMA-aware
application can take advantage of. When both cpusets and policies are applied
to a task, the restrictions of the cpuset take priority.
Scope of Memory Policies
The Linux kernel supports _scopes_ of memory policy, described here from
most general to most specific:
System Default Policy: this policy is "hard coded" into the kernel. It
governs all page allocations that aren't controlled by one of the more
specific policy scopes discussed below. When the system is "up and
running", the system default policy will use "local allocation".

Task/Process Policy: this is an optional, per-task policy. When defined
for a specific task, this policy controls all page allocations made by or
on behalf of the task that aren't controlled by a more specific scope.
The task policy applies to the entire address space of a task. Thus, it
is inheritable, and indeed is inherited, across both fork() [clone() w/o
the CLONE_VM flag] and exec*(). This allows a parent task to establish
the task policy for a child task exec()'d from an
executable image that has no awareness of memory policy. See the
MEMORY POLICY APIS section, below, for an overview of the system call
used to set or change the task/process policy.
VMA Policy: A "VMA" or "Virtual Memory Area" refers to a range of a task's
virtual address space. A task may define a specific policy for a range
of its virtual address space. See the MEMORY POLICIES APIS section,
below, for an overview of the mbind() system call used to set a VMA
policy.
A VMA policy will govern the allocation of pages that back this region of
the address space. Any regions of the task's address space that don't
have an explicit VMA policy will fall back to the task policy, which may
itself fall back to the system default policy.

VMA policies apply ONLY to anonymous pages. These include pages
allocated for anonymous segments, such as the task stack and heap, and
any regions of the address space mmap()ed with the MAP_ANONYMOUS flag.
VMA policies are shared between all tasks that share a virtual address
space--a.k.a. threads--independent of when the policy is installed; and
they are inherited across fork(). However, because VMA policies refer
to a specific region of a task's address space, and because the address
space is discarded and recreated on exec*(), VMA policies are NOT
inheritable across exec().
A task may install a new VMA policy on a sub-range of a previously
mbind()ed virtual address range. This causes the existing virtual
memory area to be split into 2 or 3 VMAs, each with its own policy.
An application installs shared policies the same way as VMA policies--using
the mbind() system call specifying a range of virtual addresses that map
the shared object. However, unlike VMA policies, which can be considered
to be an attribute of a range of a task's address space, shared policies
apply directly to the shared object.

As of 2.6.22, only shared memory segments, created by shmget() or
mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy.
As mentioned above [re: VMA policies], allocations of page cache
pages for regular files mmap()ed with MAP_SHARED ignore any VMA policy
installed on the virtual address range backed by the shared file mapping.
The shared policy infrastructure supports different policies on subset
ranges of the shared object. However, Linux still splits the VMA of
the task that installs the policy for each range of distinct policy.
Thus, different tasks that attach to the shared object can end up with
different VMA configurations mapping it. This can be seen by examining
the /proc/<pid>/numa_maps of tasks sharing the region, when one task has
installed shared policy on one or more ranges of the region.
Components of Memory Policies
A Linux memory policy consists of a "mode", optional mode flags, and an
optional set of nodes. The mode determines the behavior of the policy,
the optional mode flags determine the behavior of the mode, and the
optional set of nodes can be viewed as the arguments to the policy
behavior.
Internally, memory policies are implemented by a reference counted
structure, struct mempolicy. Details of this structure will be discussed
in context, below, as required to explain the behavior.
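This document names several pieces of struct mempolicy (the mode, mode flags, node set, and, later, a preferred_node member). The following is an illustrative C sketch of those fields only -- the names mempolicy_sketch and nodemask_sketch_t are mine, and the real kernel structure adds reference counting, unions, and other details:

```c
#include <assert.h>

/* Illustrative sketch only: the components this document names, NOT the
 * kernel's actual struct mempolicy layout. nodemask_sketch_t stands in
 * for the kernel's nodemask_t (one bit per memory node). */
typedef unsigned long nodemask_sketch_t;

struct mempolicy_sketch {
	unsigned short mode;     /* MPOL_DEFAULT, MPOL_BIND, MPOL_PREFERRED,
				  * MPOL_INTERLEAVE */
	unsigned short flags;    /* MPOL_F_STATIC_NODES, MPOL_F_RELATIVE_NODES */
	nodemask_sketch_t nodes; /* optional set of nodes: the "arguments"
				  * to the policy behavior */
	int preferred_node;      /* single node used by the Preferred policy */
};
```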
When specified in one of the memory policy APIs, the Default mode
does not use the optional set of nodes.

It is an error for the set of nodes specified for this policy to
be non-empty.
MPOL_BIND: this mode specifies that memory must come from the
set of nodes specified by the policy. Memory will be allocated from
the node in the set with the lowest numeric node id.
MPOL_PREFERRED: this mode specifies that the allocation should be
attempted from the single node specified in the policy. If that
allocation fails, the kernel will search other nodes, in order of
increasing distance from the preferred node, based on information
provided by the platform firmware.
Internally, the Preferred policy uses a single node--the
preferred_node member of struct mempolicy. When the internal
preferred_node value is '-1', the policy specifies "local allocation":
allocation from the node containing the cpu where the allocation occurs.
For allocation of anonymous pages and shared memory pages,
Interleave mode indexes the set of nodes specified by the policy
using the page offset of the faulting address into the segment
[VMA] containing the address modulo the number of nodes specified
by the policy. It then attempts to allocate a page, starting at the
selected node, as if the node had been specified by a Preferred policy.
For allocation of page cache pages, Interleave mode indexes the set
of nodes specified by the policy using a node counter maintained
per task. This counter wraps around to the lowest specified node
after it reaches the highest specified node.
MPOL_F_STATIC_NODES: this flag specifies that the nodemask passed by
the user should not be remapped if the task or VMA's set of allowed
nodes changes after the memory policy has been defined.
Without this flag, anytime a mempolicy is rebound because of a
change in the set of allowed nodes, the node (Preferred) or
nodemask (Bind, Interleave) is remapped to the new set of
allowed nodes.

With this flag, if the user-specified nodes overlap with the
nodes allowed by the task's cpuset, then the memory policy is
applied to their intersection. If the two sets of nodes do not
overlap, the Default policy is used.
MPOL_F_RELATIVE_NODES: this flag specifies that the nodemask passed
by the user will be mapped relative to the task or VMA's
set of allowed nodes. The kernel stores the user-passed nodemask,
and if the set of allowed nodes changes, that original nodemask will
be remapped relative to the new set of allowed nodes.
Without this flag (and without MPOL_F_STATIC_NODES), anytime a
mempolicy is rebound because of a change in the set of allowed
nodes, the node (Preferred) or nodemask (Bind, Interleave) is
remapped to the new set of allowed nodes. That remap may not
preserve the relative nature of the user's passed nodemask to its
set of allowed nodes upon successive rebinds: a nodemask of
1,3,5 may be remapped to 7-9 and then to 1-3 as the set of
allowed nodes changes with each rebind.
With this flag, the remap is done so that the node numbers from
the user's passed nodemask are relative to the set of allowed
nodes. In other words, if nodes 0, 2, and 4 are set in the user's
nodemask, the policy will be effected over the first (and in the
Bind or Interleave case, the third and fifth) nodes in the set of
allowed nodes. The nodemask passed by the user represents nodes
relative to the task or VMA's set of allowed nodes.
If the user's nodemask includes nodes that are outside the range
of the new set of allowed nodes (for example, node 5 is set in
the user's nodemask when the set of allowed nodes is only 0-3),
then the remap wraps around to the beginning of the nodemask and,
if not already set, sets the node in the mempolicy nodemask.
To remain NUMA-aware across cpuset changes, applications should pass
a nodemask of memory nodes 0 to N-1, where N is the number of memory
nodes the policy is intended to manage, rather than the current
set of memory nodes allowed by the task's cpuset, as that may
change over time.
Any extra reference taken while installing a policy is dropped
on completion of the policy installation.
During run-time "usage" of the policy, we attempt to minimize atomic operations
on the reference count, as this can lead to cache lines bouncing between cpus
and NUMA nodes. "Usage" here means one of the following:

1) querying of the policy, either by the task itself [using the get_mempolicy()
API discussed below] or by another task using the /proc/<pid>/numa_maps
interface.

2) examination of the policy to determine the policy mode and associated node
or node lists, if any, for page allocation. This is considered a "hot path".
An extra reference protects against the possibility
of a task or thread freeing a policy while another task or thread is
querying it.
3) Page allocation usage of task or vma policy occurs in the fault path where
we hold the mmap_sem for read. Because replacement of these policies requires
the mmap_sem held for write, the policy can't be freed out from under us
while we're using it for page allocation.
Because of this extra reference counting, and because we must lookup
shared policies in a tree structure under spinlock, shared policies are
more expensive to use in the page allocation path than task or vma
policies.
The headers that define these APIs and the parameter data types
for user space applications reside in a package that is not part of
the Linux kernel: the numactl package, discussed below.
Set [Task] Memory Policy:

	long set_mempolicy(int mode, const unsigned long *nmask,
			   unsigned long maxnode);

Sets the calling task's "task/process memory policy" to the mode
specified by the 'mode' argument and the set of nodes defined
by 'nmask'. 'nmask' points to a bit mask of node ids containing
at least 'maxnode' ids.
Get [Task] Memory Policy or Related Information:

	long get_mempolicy(int *mode, unsigned long *nmask,
			   unsigned long maxnode, void *addr, int flags);

Queries the "task/process memory policy" of the calling task, or
the policy or location of a specified virtual address, depending
on the 'flags' argument.
Install VMA/Shared Policy for a Range of Task's Address Space:

	long mbind(void *start, unsigned long len, int mode,
		   const unsigned long *nmask, unsigned long maxnode,
		   unsigned flags);

mbind() installs the policy specified by (mode, nmask, maxnode) as
a VMA policy for the range of the calling task's address space
specified by the 'start' and 'len' arguments.
Although not strictly part of the Linux implementation of memory policy,
a command line tool, numactl(8), exists that allows one to set a task's
policy before running it, and to set the shared policy of a shared memory
segment.

The numactl(8) tool is packaged with the run-time version of the library
containing the memory policy system call wrappers.
Memory policies work within cpusets as described above. For memory policies
that require a node or set of nodes, the nodes are restricted to the set of
nodes whose memories are allowed by the cpuset constraints. If the nodemask
specified for the policy contains nodes that are not allowed by the cpuset and
MPOL_F_RELATIVE_NODES is not used, the intersection of the set of nodes
specified for the policy and the set of nodes with memory is used. If the
result is the empty set, the policy is considered invalid and cannot be
installed. If MPOL_F_RELATIVE_NODES is used, the policy's nodes are mapped
onto and folded into the task's set of allowed nodes as previously described.
The interaction of memory policies and cpusets can be problematic when tasks
in two cpusets share access to a memory region. If the region is shared, e.g.
created by shmget() or mmap() with the MAP_ANONYMOUS and MAP_SHARED flags, and
any of the tasks install shared policy on the region, only nodes whose
memories are allowed in both cpusets may be used in the policies.