The intent of this file is to give a brief summary of hugetlbpage support in
the Linux kernel. This support is built on top of multiple page size support
that is provided by most modern architectures. For example, x86 CPUs normally
support 4K and 2M (1G if architecturally supported) page sizes, ia64 supports
multiple page sizes 4K, 8K, 64K, 256K, 1M, 4M, 16M, 256M and ppc64 supports
4K and 16M. A TLB is a cache of virtual-to-physical translations, and is
typically a very scarce resource on a processor. Operating systems try to
make the best use of the limited number of TLB resources.

The /proc/meminfo file provides information about the total number of
persistent hugetlb pages in the kernel's huge page pool. It also displays
information about the number of free, reserved and surplus huge pages and the
default huge page size. The huge page size is needed for generating the
proper alignment and size of the arguments to system calls that map huge page
regions.

The output of "cat /proc/meminfo" will include lines like:

	HugePages_Total: uuu
	HugePages_Free:  vvv
	HugePages_Rsvd:  www
	HugePages_Surp:  xxx
	Hugepagesize:    yyy kB

where:
HugePages_Total is the size of the pool of huge pages.
HugePages_Free  is the number of huge pages in the pool that are not yet
                allocated.
HugePages_Rsvd  is short for "reserved," and is the number of huge pages for
                which a commitment to allocate from the pool has been made,
                but no allocation has yet been made. Reserved huge pages
                guarantee that an application will be able to allocate a
                huge page from the pool of huge pages at fault time.
HugePages_Surp  is short for "surplus," and is the number of huge pages in
                the pool above the value in /proc/sys/vm/nr_hugepages. The
                maximum number of surplus huge pages is controlled by
                /proc/sys/vm/nr_overcommit_hugepages.

/proc/filesystems should also show a filesystem of type "hugetlbfs" configured
in the kernel.

/proc/sys/vm/nr_hugepages indicates the current number of "persistent" huge
pages in the kernel's huge page pool. "Persistent" huge pages will be
returned to the huge page pool when freed by a task. A user with root
privileges can dynamically allocate more or free some persistent huge pages
by increasing or decreasing the value of 'nr_hugepages'.

Once a number of huge pages have been pre-allocated to the kernel huge page
pool, a user with appropriate privilege can use either the mmap system call
or shared memory system calls to use the huge pages. See the discussion of
Using Huge Pages, below.

The administrator can allocate persistent huge pages on the kernel boot
command line by specifying the "hugepages=N" parameter, where 'N' = the
number of huge pages requested. This is the most reliable method of
allocating huge pages as memory has not yet become fragmented.

Some platforms support multiple huge page sizes. To allocate huge pages
of a specific size, one must precede the huge pages boot command parameters
with a huge page size selection parameter "hugepagesz=<size>". <size> must
be specified in bytes with optional scale suffix [kKmMgG]. The default huge
page size may be selected with the "default_hugepagesz=<size>" boot
parameter.

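For example, on an x86 system that supports both sizes, a boot command line
like the following (the counts are arbitrary, chosen only for illustration)
requests two 1G pages and 256 2M pages:

	hugepagesz=1G hugepages=2 hugepagesz=2M hugepages=256
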
When multiple huge page sizes are supported, /proc/sys/vm/nr_hugepages
indicates the current number of pre-allocated huge pages of the default size.
Thus, one can use the following command to dynamically allocate/deallocate
default sized persistent huge pages:

	echo 20 > /proc/sys/vm/nr_hugepages

This command will try to adjust the number of default sized huge pages in the
huge page pool to 20, allocating or freeing huge pages, as required.

On a NUMA platform, the kernel will attempt to distribute the huge page pool
over the set of allowed nodes specified by the NUMA memory policy of the
task that modifies nr_hugepages. Allowed nodes with insufficient available,
contiguous memory for a huge page will be silently skipped when allocating
persistent huge pages. See the discussion below of the interaction of task
memory policy, cpusets and per node attributes with the allocation and
freeing of persistent huge pages.

The success or failure of huge page allocation depends on the amount of
physically contiguous memory that is present in the system at the time of the
allocation attempt. If the kernel is unable to allocate huge pages from some
nodes in a NUMA system, it will attempt to make up the difference by
allocating extra pages on other nodes with sufficient available contiguous
memory, if any.

System administrators may want to put this command in one of the local rc
init files. This will enable the kernel to allocate huge pages early in
the boot process, when the possibility of getting physically contiguous
pages is still very high. Administrators can verify the number of huge pages
actually allocated by checking the sysctl or meminfo. To check the per node
distribution of huge pages in a NUMA system, use:

	cat /sys/devices/system/node/node*/meminfo | fgrep Huge

/proc/sys/vm/nr_overcommit_hugepages specifies how large the pool of huge
pages can grow, if more huge pages than /proc/sys/vm/nr_hugepages are
requested by applications. Writing any non-zero value into this file
indicates that the hugetlb subsystem is allowed to try to obtain that
number of "surplus" huge pages from the kernel's normal page pool, when the
persistent huge page pool is exhausted. As these surplus huge pages become
unused, they are freed back to the kernel's normal page pool.

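For example (the value 20 is arbitrary), the following allows the pool to
grow by up to 20 surplus huge pages beyond the persistent pool size:

	echo 20 > /proc/sys/vm/nr_overcommit_hugepages
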
The administrator may shrink the pool of persistent huge pages for the
default huge page size by setting the nr_hugepages sysctl to a
smaller value. The kernel will attempt to balance the freeing of huge pages
across all nodes in the memory policy of the task modifying nr_hugepages.
Any free huge pages on the selected nodes will be freed back to the kernel's
normal page pool.

Caveat: Shrinking the persistent huge page pool via nr_hugepages such that
it becomes less than the number of huge pages in use will convert the balance
of the in-use huge pages to surplus huge pages. This will occur even if it
causes the number of surplus pages to exceed the overcommit value. As long as
this condition holds--that is, until nr_hugepages+nr_overcommit_hugepages is
increased sufficiently, or the surplus huge pages go out of use and are
freed--no more surplus huge pages will be allowed to be allocated.

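As an illustration (the numbers are arbitrary): with nr_hugepages at 50,
nr_overcommit_hugepages at 0 and 40 huge pages in use, writing 30 to
nr_hugepages frees the 10 unused pages and converts 10 of the in-use pages
to surplus pages, even though the overcommit value is 0. No new surplus
pages can be allocated until those 10 go out of use or the limits are
raised.
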
With support for multiple huge page pools at run-time available, much of
the huge page userspace interface in /proc/sys/vm has been duplicated in
sysfs. The root huge page control directory in sysfs is:

	/sys/kernel/mm/hugepages

For each huge page size supported by the running kernel, a subdirectory
will exist, of the form:

	hugepages-${size}kB

Inside each of these directories, the same set of files will exist:

	nr_hugepages
	nr_hugepages_mempolicy
	nr_overcommit_hugepages
	free_hugepages
	resv_hugepages
	surplus_hugepages

which function as described above for the default huge page-sized case.

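For example (assuming the platform supports 2M huge pages; the count of 4 is
arbitrary), the 2M pool can be resized directly through its per-size
directory:

	echo 4 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
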
Interaction of Task Memory Policy with Huge Page Allocation/Freeing
===================================================================

Whether huge pages are allocated and freed via the /proc interface or the
/sysfs interface using the nr_hugepages_mempolicy attribute, the NUMA nodes
from which huge pages are allocated or freed are controlled by the
NUMA memory policy of the task that modifies the nr_hugepages_mempolicy
sysctl or attribute. When the nr_hugepages attribute is used, mempolicy is
ignored.

The recommended method to allocate or free huge pages to/from the kernel
huge page pool, using the nr_hugepages example above, is:

	numactl --interleave <node-list> echo 20 \
		>/proc/sys/vm/nr_hugepages_mempolicy

This will allocate or free abs(20 - nr_hugepages) to or from the nodes
specified in <node-list>, depending on whether the number of persistent huge
pages is initially less than or greater than 20, respectively. No huge pages
will be allocated nor freed on any node not included in the specified
<node-list>.

When adjusting the persistent hugepage count via nr_hugepages_mempolicy, any
memory policy mode--bind, preferred, local or interleave--may be used. The
resulting effect on persistent huge page allocation is as follows:

1) Regardless of mempolicy mode [see Documentation/vm/numa_memory_policy.txt],
   persistent huge pages will be distributed across the node or nodes
   specified in the mempolicy as if "interleave" had been specified.
   However, if a node in the policy does not contain sufficient contiguous
   memory for a huge page, the allocation will not "fallback" to the nearest
   neighbor node with sufficient contiguous memory. To do this would cause
   undesirable imbalance in the distribution of the huge page pool, or
   possibly, allocation of persistent huge pages on nodes not allowed by
   the task's memory policy.

2) One or more nodes may be specified with the bind or interleave policy.
   If more than one node is specified with the preferred policy, only the
   lowest numeric id will be used. Local policy will select the node where
   the task is running at the time the nodes_allowed mask is constructed;
   for this to be deterministic, the task must be bound to a cpu or cpus in
   a single node. Any of the other mempolicy modes may be used to specify
   a single node.

3) The nodes allowed mask will be derived from any non-default task
   mempolicy, whether this policy was set explicitly by the task itself or
   one of its ancestors, such as numactl. This means that if the task is
   invoked from a shell with non-default policy, that policy will be used.
   One can specify a node list of "all" with numactl --interleave or
   --membind [-m] to achieve interleaving over all nodes in the system or
   cpuset.

4) Any task mempolicy specified--e.g., using numactl--will be constrained by
   the resource limits of any cpuset in which the task runs. Thus, there will
   be no way for a task with non-default policy running in a cpuset with a
   subset of the system nodes to allocate huge pages outside the cpuset
   without first moving to a cpuset that contains all of the desired nodes.

5) Boot-time huge page allocation attempts to distribute the requested
   number of huge pages over all on-line nodes with memory.

Per Node Hugepages Attributes
=============================

A subset of the contents of the root huge page control directory in sysfs,
described above, will be replicated under the system device of each
NUMA node with memory in:

	/sys/devices/system/node/node[0-9]*/hugepages/

Under this directory, the subdirectory for each supported huge page size
contains the following attribute files:

	nr_hugepages
	free_hugepages
	surplus_hugepages

The free_ and surplus_ attribute files are read-only. They return the number
of free and surplus [overcommitted] huge pages, respectively, on the parent
node.

The nr_hugepages attribute returns the total number of huge pages on the
specified node. When this attribute is written, the number of persistent huge
pages on the parent node will be adjusted to the specified value, if sufficient
resources exist, regardless of the task's mempolicy or cpuset constraints.

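For example (the node number and count are arbitrary, and a 2M default huge
page size is assumed), to set aside 8 huge pages on node 1:

	echo 8 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
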
Note that the number of overcommit and reserve pages remain global
quantities, as we don't know until fault time, when the faulting task's
mempolicy is applied, from which node the huge page allocation will be
attempted.

Using Huge Pages
================

If the user applications are going to request huge pages using the mmap system
call, then it is required that the system administrator mount a file system of
type hugetlbfs:

  mount -t hugetlbfs \
	-o uid=<value>,gid=<value>,mode=<value>,pagesize=<value>,size=<value>,\
	min_size=<value>,nr_inodes=<value> none /mnt/huge

This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
/mnt/huge. Any file created on /mnt/huge uses huge pages. The uid and gid
options set the owner and group of the root of the file system. By default
the uid and gid of the current process are taken. The mode option sets the
mode of the root of the file system to value & 01777. This value is given in
octal. By default the value 0755 is picked.

If the platform supports multiple huge page sizes, the pagesize option can be
used to specify the huge page size and associated pool; pagesize is specified
in bytes. If pagesize is not specified, the platform's default huge page size
and associated pool will be used. The size option sets the maximum amount
of memory (huge pages) allowed for that filesystem (/mnt/huge). The size
option can be specified in bytes, or as a percentage of the specified huge
page pool (nr_hugepages). The size is rounded down to HPAGE_SIZE boundary.
The min_size option sets the minimum amount of memory (huge pages) allowed
for the filesystem. min_size can be specified in the same way as size,
either bytes or a percentage of the huge page pool. At mount time, the
number of huge pages specified by min_size are reserved for use by the
filesystem. If there are not enough free huge pages available, the mount
will fail. As huge pages are allocated to the filesystem and freed, the
reserve count is adjusted so that the sum of allocated and reserved huge
pages is always at least min_size. The option nr_inodes sets the maximum
number of inodes that /mnt/huge can use. If the nr_inodes option is not
provided, then no limits are set.

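As a concrete illustration (the sizes and mount point are arbitrary, and the
option values are assumed to accept the usual [kKmMgG] suffixes), the
following mounts an instance backed by 2M pages, capped at 1G, with at least
one huge page reserved:

	mount -t hugetlbfs -o pagesize=2M,size=1G,min_size=2M none /mnt/huge
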
Also, it is important to note that no such mount command is required if
applications are going to use only shmat/shmget system calls or mmap with
MAP_HUGETLB. For an example of how to use mmap with MAP_HUGETLB, see
map_hugetlb below.

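Ahead of that full example, here is a minimal sketch of the idea (not the
map_hugetlb example itself). It assumes the C library exposes MAP_HUGETLB,
that the huge page pool is not empty, and that the arbitrary 256MB length is
a multiple of the huge page size:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define LENGTH (256UL * 1024 * 1024)	/* arbitrary; multiple of huge page size */

int main(void)
{
	/* No hugetlbfs mount is needed for a MAP_HUGETLB mapping. */
	void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");		/* fails if no huge pages are available */
		exit(1);
	}

	((char *)addr)[0] = 1;		/* first touch faults in a huge page */

	/* The munmap length must span whole huge pages (see below). */
	if (munmap(addr, LENGTH))
		perror("munmap");
	return 0;
}
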
Users who wish to use hugetlb memory via shared memory segments should be a
member of a supplementary group, and the system admin needs to configure that
gid into /proc/sys/vm/hugetlb_shm_group. It is possible for the same or
different applications to use any combination of mmaps and shm* calls, though
the mount of the filesystem will be required for using mmap calls without
MAP_HUGETLB.

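A minimal sketch of the shared memory path, under the same assumptions as
above (and assuming the caller is in the hugetlb_shm_group group, or has
CAP_IPC_LOCK), might be:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define LENGTH (256UL * 1024 * 1024)	/* arbitrary; multiple of huge page size */

int main(void)
{
	/* SHM_HUGETLB requests huge page backing for the segment. */
	int shmid = shmget(IPC_PRIVATE, LENGTH,
			   SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W);
	if (shmid < 0) {
		perror("shmget");
		exit(1);
	}

	char *shmaddr = shmat(shmid, NULL, 0);
	if (shmaddr == (char *)-1) {
		perror("shmat");
		shmctl(shmid, IPC_RMID, NULL);
		exit(1);
	}

	shmaddr[0] = 1;			/* fault in a huge page */

	shmdt(shmaddr);
	shmctl(shmid, IPC_RMID, NULL);	/* mark the segment for removal */
	return 0;
}
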
Syscalls that operate on memory backed by hugetlb pages only have their
lengths aligned to the native page size of the processor; they will normally
fail with errno set to EINVAL, or exclude hugetlb pages that extend beyond
the length, if not hugepage aligned. For example, munmap(2) will fail if the
memory is backed by a hugetlb page and the length is smaller than the
hugepage size.

Examples
========

1) map_hugetlb: see tools/testing/selftests/vm/map_hugetlb.c

2) hugepage-shm: see tools/testing/selftests/vm/hugepage-shm.c

3) hugepage-mmap: see tools/testing/selftests/vm/hugepage-mmap.c

4) The libhugetlbfs (https://github.com/libhugetlbfs/libhugetlbfs) library
   provides a wide range of userspace tools to help with huge page usability,
   environment setup, and control.

Kernel development regression testing
=====================================

The most complete set of hugetlb tests is in the libhugetlbfs repository.
If you modify any hugetlb related code, use the libhugetlbfs test suite to
check for regressions.