
What is Linux Memory Policy?

In the Linux kernel, "memory policy" determines from which node the kernel will
allocate memory in a NUMA system or in an emulated NUMA system.  Linux has
supported platforms with Non-Uniform Memory Access architectures since the 2.4
kernel series.  The current memory policy support was added to Linux 2.6
around May 2004.  This document attempts to describe the concepts and APIs of
the 2.6 memory policy support.

Memory policies should not be confused with cpusets
(Documentation/cgroups/cpusets.txt), which is an administrative mechanism for
restricting the nodes from which memory may be allocated by a set of
processes.  Memory policies are a programming interface that a NUMA-aware
application can take advantage of.  When both cpusets and policies are applied
to a task, the restrictions of the cpuset take priority.  See "MEMORY POLICIES
AND CPUSETS" below for more details.

MEMORY POLICY CONCEPTS

Scope of Memory Policies

The Linux kernel supports _scopes_ of memory policy, described here from
most general to most specific:

    System Default Policy:  this policy is "hard coded" into the kernel.  It
    is the policy that governs all page allocations that aren't controlled
    by one of the more specific policy scopes discussed below.  When the
    system is "up and running", the system default policy will use "local
    allocation" described below.  However, during boot up, the system
    default policy will be set to interleave allocations across all nodes
    with "sufficient" memory, so as not to overload the initial boot node
    with boot-time allocations.

    Task/Process Policy:  this is an optional, per-task policy.  When defined
    for a specific task, this policy controls all page allocations made by or
    on behalf of the task that aren't controlled by a more specific scope.
    If a task does not define a task policy, then all page allocations that
    would have been controlled by the task policy "fall back" to the System
    Default Policy.

	The task policy applies to the entire address space of a task.  Thus,
	it is inheritable, and indeed is inherited, across both fork()
	[clone() w/o the CLONE_VM flag] and exec*().  This allows a parent task
	to establish the task policy for a child task exec()'d from an
	executable image that has no awareness of memory policy.  See the
	MEMORY POLICY APIs section, below, for an overview of the system call
	that a task may use to set/change its task/process policy.

	In a multi-threaded task, task policies apply only to the thread
	[Linux kernel task] that installs the policy and any threads
	subsequently created by that thread.  Any sibling threads existing
	at the time a new task policy is installed retain their current
	policy.

	A task policy applies only to pages allocated after the policy is
	installed.  Any pages already faulted in by the task when the task
	changes its task policy remain where they were allocated based on
	the policy at the time they were allocated.

    VMA Policy:  A "VMA" or "Virtual Memory Area" refers to a range of a task's
    virtual address space.  A task may define a specific policy for a range
    of its virtual address space.  See the MEMORY POLICY APIs section,
    below, for an overview of the mbind() system call used to set a VMA
    policy.

    A VMA policy will govern the allocation of pages that back this region of
    the address space.  Any regions of the task's address space that don't
    have an explicit VMA policy will fall back to the task policy, which may
    itself fall back to the System Default Policy.

    VMA policies have a few complicating details:


	VMA policy applies ONLY to anonymous pages.  These include pages
	allocated for anonymous segments, such as the task stack and heap, and
	any regions of the address space mmap()ed with the MAP_ANONYMOUS flag.
	If a VMA policy is applied to a file mapping, it will be ignored if
	the mapping used the MAP_SHARED flag.  If the file mapping used the
	MAP_PRIVATE flag, the VMA policy will only be applied when an
	anonymous page is allocated on an attempt to write to the mapping--
	i.e., at Copy-On-Write.

	VMA policies are shared between all tasks that share a virtual address
	space--a.k.a. threads--independent of when the policy is installed; and
	they are inherited across fork().  However, because VMA policies refer
	to a specific region of a task's address space, and because the address
	space is discarded and recreated on exec*(), VMA policies are NOT
	inheritable across exec().  Thus, only NUMA-aware applications may
	use VMA policies.

	A task may install a new VMA policy on a sub-range of a previously
	mmap()ed region.  When this happens, Linux splits the existing virtual
	memory area into 2 or 3 VMAs, each with its own policy.

	By default, VMA policy applies only to pages allocated after the policy
	is installed.  Any pages already faulted into the VMA range remain
	where they were allocated based on the policy at the time they were
	allocated.  However, since 2.6.16, Linux supports page migration via
	the mbind() system call, so that page contents can be moved to match
	a newly installed policy.
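
	For instance, a task might install a new policy on an existing range
	and ask for its pages to be migrated to match (a hedged sketch, not
	an authoritative recipe; it assumes the numaif.h header from the
	numactl package, and that 'buf' and 'len' describe an existing,
	page-aligned mapping):

		#include <stdio.h>
		#include <numaif.h>	/* from the numactl package */

		unsigned long nodemask = 1UL << 1;	/* node 1 only */

		/* Rebind [buf, buf+len) to node 1 and migrate pages
		 * already allocated in the range (2.6.16 and later). */
		if (mbind(buf, len, MPOL_BIND, &nodemask,
			  sizeof(nodemask) * 8, MPOL_MF_MOVE) == -1)
			perror("mbind");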

    Shared Policy:  Conceptually, shared policies apply to "memory objects"
    mapped shared into one or more tasks' distinct address spaces.  An
    application installs a shared policy the same way as VMA policies--using
    the mbind() system call specifying a range of virtual addresses that map
    the shared object.  However, unlike VMA policies, which can be considered
    to be an attribute of a range of a task's address space, shared policies
    apply directly to the shared object.  Thus, all tasks that attach to the
    object share the policy, and all pages allocated for the shared object,
    by any task, will obey the shared policy.

	As of 2.6.22, only shared memory segments, created by shmget() or
	mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy.  When shared
	policy support was added to Linux, the associated data structures were
	added to hugetlbfs shmem segments.  At the time, hugetlbfs did not
	support allocation at fault time--a.k.a. lazy allocation--so hugetlbfs
	shmem segments were never "hooked up" to the shared policy support.
	Although hugetlbfs segments now support lazy allocation, their support
	for shared policy has not been completed.

	As mentioned above [re: VMA policies], allocations of page cache
	pages for regular files mmap()ed with MAP_SHARED ignore any VMA
	policy installed on the virtual address range backed by the shared
	file mapping.  Rather, shared page cache pages, including pages backing
	private mappings that have not yet been written by the task, follow
	task policy, if any, else System Default Policy.

	The shared policy infrastructure supports different policies on subset
	ranges of the shared object.  However, Linux still splits the VMA of
	the task that installs the policy for each range of distinct policy.
	Thus, different tasks that attach to a shared memory segment can have
	different VMA configurations mapping that one shared object.  This
	can be seen by examining the /proc/<pid>/numa_maps of tasks sharing
	a shared memory region, when one task has installed shared policy on
	one or more ranges of the region.
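
	To illustrate, installing a shared policy on a System V shared
	memory segment might look like the following sketch (a hedged
	example with error handling omitted; numaif.h comes from the
	numactl package):

		#include <sys/shm.h>
		#include <numaif.h>

		size_t len = 16 * 1024 * 1024;
		int shmid = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
		char *p = shmat(shmid, NULL, 0);

		/* Interleave the segment across nodes 0 and 1.  Because
		 * this is a shmem segment, the policy attaches to the
		 * object itself and is seen by every attaching task. */
		unsigned long nodemask = 0x3;	/* nodes 0-1 */
		mbind(p, len, MPOL_INTERLEAVE, &nodemask,
		      sizeof(nodemask) * 8, 0);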

Components of Memory Policies

    A Linux memory policy consists of a "mode", optional mode flags, and an
    optional set of nodes.  The mode determines the behavior of the policy,
    the optional mode flags determine the behavior of the mode, and the
    optional set of nodes can be viewed as the arguments to the policy
    behavior.

    Internally, memory policies are implemented by a reference counted
    structure, struct mempolicy.  Details of this structure will be discussed
    in context, below, as required to explain the behavior.
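
    A simplified sketch of the structure (field names and layout vary by
    kernel version; see include/linux/mempolicy.h for the authoritative
    definition):

	struct mempolicy {
		atomic_t refcnt;       /* see REFERENCE COUNTING, below */
		unsigned short mode;   /* MPOL_DEFAULT, MPOL_BIND, ... */
		unsigned short flags;  /* optional MPOL_F_* mode flags */
		union {
			short	   preferred_node;  /* MPOL_PREFERRED */
			nodemask_t nodes;   /* Bind/Interleave node set */
		} v;
	};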

    Linux memory policy supports the following 4 behavioral modes:

	Default Mode--MPOL_DEFAULT:  This mode is only used in the memory
	policy APIs.  Internally, MPOL_DEFAULT is converted to the NULL
	memory policy in all policy scopes.  Any existing non-default policy
	will simply be removed when MPOL_DEFAULT is specified.  As a result,
	MPOL_DEFAULT means "fall back to the next most specific policy scope."

	    For example, a NULL or default task policy will fall back to the
	    system default policy.  A NULL or default vma policy will fall
	    back to the task policy.

	    When specified in one of the memory policy APIs, the Default mode
	    does not use the optional set of nodes.

	    It is an error for the set of nodes specified for this policy to
	    be non-empty.

	MPOL_BIND:  This mode specifies that memory must come from the
	set of nodes specified by the policy.  Memory will be allocated from
	the node in the set with sufficient free memory that is closest to
	the node where the allocation takes place.

	MPOL_PREFERRED:  This mode specifies that the allocation should be
	attempted from the single node specified in the policy.  If that
	allocation fails, the kernel will search other nodes, in order of
	increasing distance from the preferred node based on information
	provided by the platform firmware.

	    Internally, the Preferred policy uses a single node--the
	    preferred_node member of struct mempolicy.  When the internal
	    mode flag MPOL_F_LOCAL is set, the preferred_node is ignored and
	    the policy is interpreted as local allocation.  "Local" allocation
	    policy can be viewed as a Preferred policy that starts at the node
	    containing the cpu where the allocation takes place.

	    It is possible for the user to specify that local allocation is
	    always preferred by passing an empty nodemask with this mode.
	    If an empty nodemask is passed, the policy cannot use the
	    MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES flags described
	    below.

	MPOL_INTERLEAVED:  This mode specifies that page allocations be
	interleaved, on a page granularity, across the nodes specified in
	the policy.  This mode also behaves slightly differently, based on
	the context where it is used:

	    For allocation of anonymous pages and shared memory pages,
	    Interleave mode indexes the set of nodes specified by the policy
	    using the page offset of the faulting address into the segment
	    [VMA] containing the address modulo the number of nodes specified
	    by the policy.  It then attempts to allocate a page, starting at
	    the selected node, as if the node had been specified by a Preferred
	    policy or had been selected by a local allocation.  That is,
	    allocation will follow the per node zonelist.

	    For allocation of page cache pages, Interleave mode indexes the set
	    of nodes specified by the policy using a node counter maintained
	    per task.  This counter wraps around to the lowest specified node
	    after it reaches the highest specified node.  This will tend to
	    spread the pages out over the nodes specified by the policy based
	    on the order in which they are allocated, rather than based on any
	    page offset into an address range or file.  During system boot up,
	    the temporary interleaved system default policy works in this
	    mode.
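
	    The offset-based calculation for anonymous pages can be modeled
	    roughly as follows (an illustrative user-space sketch, not the
	    kernel's code; the kernel also folds the VMA's file offset into
	    the page offset):

		/* Pick the interleave node for a faulting address.
		 * 'nodes' is the policy's node set expanded into an
		 * array of 'nr' node ids. */
		static int interleave_node(unsigned long addr,
					   unsigned long vma_start,
					   const int *nodes, int nr)
		{
			/* page offset, assuming 4KB pages */
			unsigned long pgoff = (addr - vma_start) >> 12;
			return nodes[pgoff % nr];
		}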

    Linux memory policy supports the following optional mode flags:

	MPOL_F_STATIC_NODES:  This flag specifies that the nodemask passed by
	the user should not be remapped if the task or VMA's set of allowed
	nodes changes after the memory policy has been defined.

	    Without this flag, anytime a mempolicy is rebound because of a
	    change in the set of allowed nodes, the node (Preferred) or
	    nodemask (Bind, Interleave) is remapped to the new set of
	    allowed nodes.  This may result in nodes being used that were
	    previously undesired.

	    With this flag, if the user-specified nodes overlap with the
	    nodes allowed by the task's cpuset, then the memory policy is
	    applied to their intersection.  If the two sets of nodes do not
	    overlap, the Default policy is used.

	    For example, consider a task that is attached to a cpuset with
	    mems 1-3 that sets an Interleave policy over the same set.  If
	    the cpuset's mems change to 3-5, the Interleave will now occur
	    over nodes 3, 4, and 5.  With this flag, however, since only node
	    3 is allowed from the user's nodemask, the "interleave" only
	    occurs over that node.  If no nodes from the user's nodemask are
	    now allowed, the Default behavior is used.

	    MPOL_F_STATIC_NODES cannot be combined with the
	    MPOL_F_RELATIVE_NODES flag.  It also cannot be used for
	    MPOL_PREFERRED policies that were created with an empty nodemask
	    (local allocation).

	MPOL_F_RELATIVE_NODES:  This flag specifies that the nodemask passed
	by the user will be mapped relative to the task or VMA's set of
	allowed nodes.  The kernel stores the user-passed nodemask, and if
	the set of allowed nodes changes, then that original nodemask will
	be remapped relative to the new set of allowed nodes.

	    Without this flag (and without MPOL_F_STATIC_NODES), anytime a
	    mempolicy is rebound because of a change in the set of allowed
	    nodes, the node (Preferred) or nodemask (Bind, Interleave) is
	    remapped to the new set of allowed nodes.  That remap may not
	    preserve the relative nature of the user's passed nodemask to its
	    set of allowed nodes upon successive rebinds: a nodemask of
	    1,3,5 may be remapped to 7-9 and then to 1-3 if the set of
	    allowed nodes is restored to its original state.

	    With this flag, the remap is done so that the node numbers from
	    the user's passed nodemask are relative to the set of allowed
	    nodes.  In other words, if nodes 0, 2, and 4 are set in the user's
	    nodemask, the policy will be effected over the first (and in the
	    Bind or Interleave case, the third and fifth) nodes in the set of
	    allowed nodes.  The nodemask passed by the user represents nodes
	    relative to the task or VMA's set of allowed nodes.

	    If the user's nodemask includes nodes that are outside the range
	    of the new set of allowed nodes (for example, node 5 is set in
	    the user's nodemask when the set of allowed nodes is only 0-3),
	    then the remap wraps around to the beginning of the nodemask and,
	    if not already set, sets the node in the mempolicy nodemask.

	    For example, consider a task that is attached to a cpuset with
	    mems 2-5 that sets an Interleave policy over the same set with
	    MPOL_F_RELATIVE_NODES.  If the cpuset's mems change to 3-7, the
	    interleave now occurs over nodes 3,5-7.  If the cpuset's mems
	    then change to 0,2-3,5, then the interleave occurs over nodes
	    0,2-3,5.
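
	    This remap can be modeled as follows (a user-space sketch with
	    a plain bitmask standing in for nodemask_t; not the kernel's
	    implementation):

		/* Map each bit b of the user's relative nodemask to
		 * the (b % weight)'th set bit of the allowed mask. */
		static unsigned long remap_relative(unsigned long user,
						    unsigned long allowed)
		{
			unsigned long out = 0;
			int weight = __builtin_popcountl(allowed);

			for (int b = 0; b < 64; b++) {
				if (!(user & (1UL << b)))
					continue;
				/* wrap around the allowed set */
				int pos = b % weight;
				for (int i = 0, seen = 0; i < 64; i++)
					if ((allowed & (1UL << i)) &&
					    seen++ == pos) {
						out |= 1UL << i;
						break;
					}
			}
			return out;
		}

	    With a user nodemask of 2-5 (0x3c) and allowed nodes 3-7 (0xf8),
	    this yields nodes 3,5-7 (0xe8), matching the example above.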

	    Thanks to the consistent remapping, applications preparing
	    nodemasks to specify memory policies using this flag should
	    disregard their current, actual cpuset-imposed memory placement
	    and prepare the nodemask as if they were always located on
	    memory nodes 0 to N-1, where N is the number of memory nodes the
	    policy is intended to manage.  Let the kernel then remap to the
	    set of memory nodes allowed by the task's cpuset, as that may
	    change over time.

	    MPOL_F_RELATIVE_NODES cannot be combined with the
	    MPOL_F_STATIC_NODES flag.  It also cannot be used for
	    MPOL_PREFERRED policies that were created with an empty nodemask
	    (local allocation).

MEMORY POLICY REFERENCE COUNTING

To resolve use/free races, struct mempolicy contains an atomic reference
count field.  The internal interfaces mpol_get() and mpol_put() increment
and decrement this reference count, respectively.  mpol_put() will only free
the structure back to the mempolicy kmem cache when the reference count
goes to zero.
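
In outline (a simplified model of the kernel's pattern, not the verbatim
source; see include/linux/mempolicy.h and mm/mempolicy.c):

	static inline void mpol_get(struct mempolicy *pol)
	{
		if (pol)
			atomic_inc(&pol->refcnt);
	}

	static inline void mpol_put(struct mempolicy *pol)
	{
		/* free back to the mempolicy kmem cache on last put */
		if (pol && atomic_dec_and_test(&pol->refcnt))
			kmem_cache_free(policy_cache, pol);
	}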

When a new memory policy is allocated, its reference count is initialized
to '1', representing the reference held by the task that is installing the
new policy.  When a pointer to a memory policy structure is stored in another
structure, another reference is added, as the task's reference will be dropped
on completion of the policy installation.

During run-time "usage" of the policy, we attempt to minimize atomic operations
on the reference count, as this can lead to cache lines bouncing between cpus
and NUMA nodes.  "Usage" here means one of the following:

1) querying of the policy, either by the task itself [using the get_mempolicy()
   API discussed below] or by another task using the /proc/<pid>/numa_maps
   interface.

2) examination of the policy to determine the policy mode and associated node
   or node lists, if any, for page allocation.  This is considered a "hot
   path".  Note that for MPOL_BIND, the "usage" extends across the entire
   allocation process, which may sleep during page reclamation, because the
   BIND policy nodemask is used, by reference, to filter ineligible nodes.

We can avoid taking an extra reference during the usages listed above as
follows:

1) we never need to get/free the system default policy as this is never
   changed nor freed, once the system is up and running.

2) for querying the policy, we do not need to take an extra reference on the
   target task's task policy nor vma policies because we always acquire the
   task's mm's mmap_sem for read during the query.  The set_mempolicy() and
   mbind() APIs [see below] always acquire the mmap_sem for write when
   installing or replacing task or vma policies.  Thus, there is no possibility
   of a task or thread freeing a policy while another task or thread is
   querying it.

3) Page allocation usage of task or vma policy occurs in the fault path where
   we hold the mmap_sem for read.  Again, because replacing the task or vma
   policy requires that the mmap_sem be held for write, the policy can't be
   freed out from under us while we're using it for page allocation.

4) Shared policies require special consideration.  One task can replace a
   shared memory policy while another task, with a distinct mmap_sem, is
   querying or allocating a page based on the policy.  To resolve this
   potential race, the shared policy infrastructure adds an extra reference
   to the shared policy during lookup while holding a spin lock on the shared
   policy management structure.  This requires that we drop this extra
   reference when we're finished "using" the policy.  We must drop the
   extra reference on shared policies in the same query/allocation paths
   used for non-shared policies.  For this reason, shared policies are marked
   as such, and the extra reference is dropped "conditionally"--i.e., only
   for shared policies.

   Because of this extra reference counting, and because we must lookup
   shared policies in a tree structure under spinlock, shared policies are
   more expensive to use in the page allocation path.  This is especially
   true for shared policies on shared memory regions shared by tasks running
   on different NUMA nodes.  This extra overhead can be avoided by always
   falling back to task or system default policy for shared memory regions,
   or by prefaulting the entire shared memory region into memory and locking
   it down.  However, this might not be appropriate for all applications.

MEMORY POLICY APIs

Linux supports 3 system calls for controlling memory policy.  These APIs
always affect only the calling task, the calling task's address space, or
some shared object mapped into the calling task's address space.

	Note:  the headers that define these APIs and the parameter data types
	for user space applications reside in a package that is not part of
	the Linux kernel.  The kernel system call interfaces, with the 'sys_'
	prefix, are defined in <linux/syscalls.h>; the mode and flag
	definitions are defined in <linux/mempolicy.h>.

Set [Task] Memory Policy:

	long set_mempolicy(int mode, const unsigned long *nmask,
					unsigned long maxnode);

	Sets the calling task's "task/process memory policy" to the mode
	specified by the 'mode' argument and the set of nodes defined
	by 'nmask'.  'nmask' points to a bit mask of node ids containing
	at least 'maxnode' ids.  Optional mode flags may be passed by
	combining the 'mode' argument with the flag (for example:
	MPOL_INTERLEAVE | MPOL_F_STATIC_NODES).

	See the set_mempolicy(2) man page for more details.
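
	For example (a hedged sketch; the user-space wrapper and MPOL_*
	definitions come from numaif.h in the numactl package):

		#include <stdio.h>
		#include <numaif.h>

		/* Interleave all future allocations across nodes 0 and 1,
		 * keeping the user's nodemask static across rebinds. */
		unsigned long nodemask = 0x3;
		if (set_mempolicy(MPOL_INTERLEAVE | MPOL_F_STATIC_NODES,
				  &nodemask, sizeof(nodemask) * 8) == -1)
			perror("set_mempolicy");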


Get [Task] Memory Policy or Related Information

	long get_mempolicy(int *mode,
			   const unsigned long *nmask, unsigned long maxnode,
			   void *addr, int flags);

	Queries the "task/process memory policy" of the calling task, or
	the policy or location of a specified virtual address, depending
	on the 'flags' argument.

	See the get_mempolicy(2) man page for more details.
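
	For example, MPOL_F_NODE and MPOL_F_ADDR may be combined to ask on
	which node the page containing a given address resides (a hedged
	sketch; 'addr' is assumed to point into a valid mapping):

		#include <stdio.h>
		#include <numaif.h>

		int node;
		if (get_mempolicy(&node, NULL, 0, addr,
				  MPOL_F_NODE | MPOL_F_ADDR) == 0)
			printf("page at %p is on node %d\n", addr, node);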


Install VMA/Shared Policy for a Range of Task's Address Space

	long mbind(void *start, unsigned long len, int mode,
		   const unsigned long *nmask, unsigned long maxnode,
		   unsigned flags);

	mbind() installs the policy specified by (mode, nmask, maxnode) as
	a VMA policy for the range of the calling task's address space
	specified by the 'start' and 'len' arguments.  Additional actions
	may be requested via the 'flags' argument.

	See the mbind(2) man page for more details.
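
	For example, a region can be bound to a node before first touch, so
	that its pages are allocated there when faulted in (a hedged sketch;
	numaif.h comes from the numactl package):

		#include <stdio.h>
		#include <sys/mman.h>
		#include <numaif.h>

		size_t len = 8 * 1024 * 1024;
		void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		/* Bind [buf, buf+len) to node 0 only. */
		unsigned long nodemask = 0x1;
		if (mbind(buf, len, MPOL_BIND, &nodemask,
			  sizeof(nodemask) * 8, 0) == -1)
			perror("mbind");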

MEMORY POLICY COMMAND LINE INTERFACE

Although not strictly part of the Linux implementation of memory policy,
a command line tool, numactl(8), exists that allows one to:

+ set the task policy for a specified program via set_mempolicy(2), fork(2) and
  exec(2)

+ set the shared policy for a shared memory segment via mbind(2)
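
For example (illustrative invocations; see the numactl(8) man page for the
full option list):

	numactl --interleave=all ./memory_hog
	numactl --membind=0,1 ./server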

The numactl(8) tool is packaged with the run-time version of the library
containing the memory policy system call wrappers.  Some distributions
package the headers and compile-time libraries in a separate development
package.


MEMORY POLICIES AND CPUSETS

Memory policies work within cpusets as described above.  For memory policies
that require a node or set of nodes, the nodes are restricted to the set of
nodes whose memories are allowed by the cpuset constraints.  If the nodemask
specified for the policy contains nodes that are not allowed by the cpuset and
MPOL_F_RELATIVE_NODES is not used, the intersection of the set of nodes
specified for the policy and the set of nodes with memory is used.  If the
result is the empty set, the policy is considered invalid and cannot be
installed.  If MPOL_F_RELATIVE_NODES is used, the policy's nodes are mapped
onto and folded into the task's set of allowed nodes as previously described.

The interaction of memory policies and cpusets can be problematic when tasks
in two cpusets share access to a memory region, such as shared memory segments
created by shmget() or mmap() with the MAP_ANONYMOUS and MAP_SHARED flags.
If any of the tasks install shared policy on the region, only nodes whose
memories are allowed in both cpusets may be used in the policies.  Obtaining
this information requires "stepping outside" the memory policy APIs to use the
cpuset information and requires that one know in what cpusets other tasks might
be attaching to the shared region.  Furthermore, if the cpusets' allowed
memory sets are disjoint, "local" allocation is the only valid policy.