The files in this directory can be used to tune the operation of the virtual memory (VM) subsystem of the Linux kernel and the writeout of dirty data to disk.
Default values and initialization routines for most of these files can be found in mm/swap.c.
The amount of free memory in the system that should be reserved for users with the capability cap_sys_admin.

admin_reserve_kbytes defaults to min(3% of free pages, 8MB).

Systems running under overcommit 'never' should increase this to account for the full Virtual Memory Size of programs used to recover (e.g. sshd or login, plus a shell, plus top, ps or kill). Otherwise, root may not be able to log in to recover the system.

For overcommit 'never', we can take the max of those programs' virtual sizes (VSZ) and add the sum of their RSS.
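As a rough illustration of that estimate, a minimal C sketch (all VSZ and RSS figures below are assumed, not measured):

  /* Hypothetical sizing of admin_reserve_kbytes under overcommit
   * 'never': max of the recovery programs' VSZ plus the sum of
   * their RSS. All figures (in kB) are assumed for illustration. */
  #include <stdio.h>

  int main(void)
  {
      unsigned long vsz[] = { 110000, 25000, 20000 };  /* sshd, bash, top */
      unsigned long rss[] = { 5000, 3000, 2000 };
      unsigned long max_vsz = 0, sum_rss = 0;

      for (int i = 0; i < 3; i++) {
          if (vsz[i] > max_vsz)
              max_vsz = vsz[i];
          sum_rss += rss[i];
      }
      printf("suggested admin_reserve_kbytes: %lu kB\n", max_vsz + sum_rss);
      return 0;
  }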
When 1 is written to the file, all zones are compacted such that free memory is available in contiguous blocks where possible. This can be important, for example, in the allocation of huge pages, although processes will also directly compact memory as required.
Contains the amount of dirty memory at which the background kernel flusher threads will start writeback.

Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only one of them may be specified at a time. When one sysctl is written it is immediately taken into account to evaluate the dirty memory limits and the other appears as 0 when read.
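A minimal C sketch of that behaviour (assuming root privileges and the /proc paths named in this document): after writing dirty_background_bytes, dirty_background_ratio should read back as 0.

  /* Write a byte threshold, then show that the ratio knob reads 0. */
  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/proc/sys/vm/dirty_background_bytes", "w");
      if (!f) { perror("open"); return 1; }
      fprintf(f, "%lu\n", 64UL * 1024 * 1024);  /* 64MB threshold */
      fclose(f);

      f = fopen("/proc/sys/vm/dirty_background_ratio", "r");
      if (!f) { perror("open"); return 1; }
      unsigned long ratio;
      if (fscanf(f, "%lu", &ratio) == 1)
          printf("dirty_background_ratio now reads %lu\n", ratio);
      fclose(f);
      return 0;
  }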
Contains, as a percentage of total available memory (that is, free pages plus reclaimable pages), the number of pages at which the background kernel flusher threads will start writing out dirty data.
Contains the amount of dirty memory at which a process generating disk writes will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be specified at a time. When one sysctl is written it is immediately taken into account to evaluate the dirty memory limits and the other appears as 0 when read.
It is expressed in 100ths of a second. Data which has been dirty in-memory for longer than this interval will be written out next time a flusher thread wakes up.
Contains, as a percentage of total available memory (free pages plus reclaimable pages), the number of pages at which a process which is generating disk writes will itself start writing out dirty data.
This tunable expresses the interval between the kernel flusher threads' periodic wakeups, in 100ths of a second.
To increase the number of objects freed by this operation, the user may run `sync' prior to writing to /proc/sys/vm/drop_caches. This will minimize the number of dirty objects on the system and create more candidates to be dropped.

This file is not a means to control the growth of the various kernel caches (inodes, dentries, pagecache, etc.); these objects are automatically reclaimed by the kernel when memory is needed elsewhere on the system.

Use of this file can cause performance problems. Since it discards cached objects, it may cost a significant amount of I/O and CPU to recreate the dropped objects, especially if they were under heavy use. Because of this, use outside of a testing or debugging environment is not recommended.
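A minimal usage sketch in C (assuming root privileges); it calls sync(2) first, then writes 3 to drop both the pagecache and the reclaimable slab objects (1 drops only the pagecache, 2 only dentries and inodes):

  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      sync();  /* flush dirty data first so more objects become freeable */

      FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
      if (!f) { perror("open"); return 1; }
      fputs("3\n", f);  /* 3 = pagecache + dentries and inodes */
      fclose(f);
      return 0;
  }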
Values towards 0 imply allocations would fail due to lack of memory, values towards 1000 imply failures are due to fragmentation, and -1 implies that the allocation will succeed as long as watermarks are met.
If a hugepage supports migration, allocation from ZONE_MOVABLE is always enabled for it regardless of the value of this parameter.

Assuming that hugepages are not migratable on your system, one use case of this parameter is to make the hugepage pool more extensible by allowing allocations from ZONE_MOVABLE.
This is because that memory could then be pinned via the mlock() system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory can be fatal.

This means that a certain amount of lowmem is defended from the possibility of being captured into pinned user memory.
Note: the number of these elements is one fewer than the number of zones, because the highest zone's value is not needed for the calculation below.

These values are not used directly, however. The kernel calculates the number of protection pages for each zone from them, and these are shown as an array of protection pages in /proc/zoneinfo, as in the following example from an x86-64 box. Each zone has an array of protection pages like this.
= (total sum of managed_pages from zone[i+1] to zone[j] on the node)
    / lowmem_reserve_ratio[i]
The default values of lowmem_reserve_ratio[i] are 256 (if zone[i] means the DMA or DMA32 zone) and 32 (others).
As the expression above shows, these values are the reciprocals of the ratios: 256 means 1/256, so the number of protection pages becomes about 0.39% of the total managed pages of the higher zones on the node.
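A minimal C sketch of this computation, using assumed managed_pages values for a hypothetical three-zone (DMA, DMA32, Normal) node; the array contents are illustrative only:

  #include <stdio.h>

  #define NZONES 3

  int main(void)
  {
      /* assumed managed_pages per zone: DMA, DMA32, Normal */
      unsigned long managed[NZONES] = { 3977, 715347, 134172 };
      /* lowmem_reserve_ratio has NZONES - 1 entries */
      unsigned long ratio[NZONES - 1] = { 256, 256 };

      for (int i = 0; i < NZONES - 1; i++) {
          unsigned long sum = 0;

          printf("zone[%d] protection:", i);
          for (int j = 0; j < NZONES; j++) {
              unsigned long prot = 0;

              if (j > i) {              /* protection[j] for j <= i is 0 */
                  sum += managed[j];
                  prot = sum / ratio[i];
              }
              printf(" %lu", prot);
          }
          printf("\n");
      }
      return 0;
  }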
This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries.

While most applications need less than a thousand maps, certain programs, particularly malloc debuggers, may consume lots of them, e.g., up to one or two maps per allocation.
But if there is no other uptodate copy of the data, it will kill to prevent any data corruptions from propagating.

Note this is not supported for a few types of pages, like kernel internally allocated data or the swap cache, but it works for the majority of user pages.
This is used to force the Linux VM to keep a minimum number of kilobytes free. The VM uses this number to compute a watermark[WMARK_MIN] value for each lowmem zone in the system. Each lowmem zone gets a number of reserved free pages based proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC allocations; if you set this to lower than 1024KB, your system will become subtly broken, and prone to deadlock under high loads.
A percentage of the total pages in each zone. On zone reclaim (fallback from the local zone occurring), slab reclaim will occur only if more than this percentage of pages in a zone are reclaimable slab pages.

The process of reclaiming slab memory is currently not node specific and may not be fast.
This is a percentage of the total pages in each zone. Zone reclaim will only occur if more than this percentage of pages are in a state that zone_reclaim_mode allows to be reclaimed.
This file indicates the amount of address space which a user process will be restricted from mmapping. Since kernel null dereference bugs could accidentally operate based on the information in the first couple of pages of memory, userspace processes should not be allowed to write to them. By default this value is set to 0 and no protections will be enforced by the security module. Setting this value to something like 64k will allow the vast majority of applications to work correctly and provide defense in depth against future potential kernel bugs.
nr_hugepages changes the minimum size of the hugepage pool.

nr_overcommit_hugepages changes the maximum size of the hugepage pool; the maximum is nr_hugepages + nr_overcommit_hugepages.
This value adjusts the excess page trimming behaviour of power-of-2 aligned NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1 trims excess pages aggressively. Any value >= 1 acts as the watermark at which trimming of allocations is initiated.
In the NUMA case, you can think of the following two types of order. Assume a 2-node NUMA system, and that the following is the zonelist of Node(0)'s GFP_KERNEL allocations:

(A) Node(0) ZONE_DMA -> Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA

Type (A) offers the best locality for processes on Node(0), but ZONE_DMA will be used before ZONE_NORMAL is exhausted. This increases the possibility of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA tends to be small.

Type (B) cannot offer the best locality but is more robust against OOM of the DMA zone.
(2) if the DMA zone comprises greater than 50% of the available memory or
(3) if any node's DMA zone comprises greater than 70% of its local memory and
    the amount of local memory is big enough.
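A minimal C sketch of checks (2) and (3) above, with assumed per-node memory figures; the "big enough" qualifier of (3) is left out for brevity:

  #include <stdio.h>

  #define NR_NODES 2

  int main(void)
  {
      unsigned long dma_pages[NR_NODES]   = { 4000, 0 };        /* assumed */
      unsigned long local_pages[NR_NODES] = { 500000, 500000 }; /* assumed */
      unsigned long total = 0, total_dma = 0;
      int zone_order = 0;

      for (int n = 0; n < NR_NODES; n++) {
          total += local_pages[n];
          total_dma += dma_pages[n];
          /* (3): a node's DMA zone exceeds 70% of its local memory */
          if (local_pages[n] && dma_pages[n] * 100 > local_pages[n] * 70)
              zone_order = 1;
      }
      /* (2): the DMA zone exceeds 50% of the available memory */
      if (total_dma * 100 > total * 50)
          zone_order = 1;

      printf("selected order: %s\n", zone_order ? "zone" : "node");
      return 0;
  }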
On very large systems with thousands of tasks it may not be feasible to dump the memory state information for each one.

oom_kill_allocating_task enables or disables killing the OOM-triggering task in out-of-memory situations.
If this is set to zero, the OOM killer will scan through the entire tasklist and select a task based on heuristics to kill. This normally selects a rogue memory-hogging task that frees up a large amount of memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that triggered the out-of-memory condition. This avoids the expensive tasklist scan.
When overcommit_memory is set to 2, the committed address space is not permitted to exceed swap plus this amount of physical RAM. See below.

Note: overcommit_kbytes is the counterpart of overcommit_ratio. Only one of them may be specified at a time. Setting one disables the other (which then appears as 0 when read).
When this flag is 0, the kernel attempts to estimate the amount of free memory left when userspace requests more memory.

When this flag is 1, the kernel pretends there is always enough memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit" policy that attempts to prevent any overcommit of memory.

This feature can be very useful because there are a lot of programs that malloc() huge amounts of memory "just-in-case" and don't use much of it.
When overcommit_memory is set to 2, the committed address space is not permitted to exceed swap plus this percentage of physical RAM. See above.
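For illustration, a minimal C sketch of the resulting commit limit, assuming the classic formula CommitLimit = swap + RAM * overcommit_ratio / 100 and made-up machine sizes:

  #include <stdio.h>

  int main(void)
  {
      unsigned long ram_kb  = 16UL * 1024 * 1024;  /* assumed 16GB RAM */
      unsigned long swap_kb = 4UL * 1024 * 1024;   /* assumed 4GB swap */
      unsigned long overcommit_ratio = 50;         /* the default */
      unsigned long limit_kb = swap_kb + ram_kb * overcommit_ratio / 100;

      printf("CommitLimit: %lu kB\n", limit_kb);   /* 12582912 kB here */
      return 0;
  }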
page-cluster controls the number of pages up to which consecutive pages are read in from swap in a single attempt. It is a logarithmic value: setting it to zero means one page, setting it to 1 means two pages, setting it to 2 means four pages, and so on.

The consecutiveness here is not in terms of virtual or physical addresses, but consecutive in swap space, meaning that the pages were swapped out together.

Lower values mean lower latencies for initial faults, but at the same time extra faults and I/O delays for following faults if they would have been part of that consecutive-pages readahead.
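A minimal C sketch of that logarithmic mapping, printing the readahead size for a few settings:

  #include <stdio.h>

  int main(void)
  {
      for (int page_cluster = 0; page_cluster <= 3; page_cluster++)
          printf("page-cluster %d -> %d page(s) per swap readahead\n",
                 page_cluster, 1 << page_cluster);  /* 2^page-cluster */
      return 0;
  }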
This enables or disables the panic-on-out-of-memory feature.

If this is set to 1, the kernel panics when out-of-memory happens.
Values 1 and 2 are intended for clustering failover; select one or the other according to your failover policy.
This is the fraction of pages (at most, the high mark pcp->high) in each zone that are allocated for each per-cpu page list. The minimum value for this is 8, which means that we don't allow more than 1/8th of the pages in each zone to be allocated in any single per-cpu pagelist. This entry only changes the value of hot per-cpu pagelists. A user can specify a number like 100 to allocate 1/100th of each zone to each per-cpu page list.
The batch value of each per-cpu pagelist is also updated as a result. It is set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8).
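A minimal C sketch of this sizing, with an assumed zone size and PAGE_SHIFT:

  #include <stdio.h>

  #define PAGE_SHIFT 12  /* assumed 4 KiB pages */

  int main(void)
  {
      unsigned long managed_pages = 1000000UL;  /* assumed zone size */
      unsigned long fraction = 8;               /* the minimum: 1/8th */
      unsigned long high = managed_pages / fraction;
      unsigned long batch = high / 4;

      if (batch > PAGE_SHIFT * 8)               /* upper limit on batch */
          batch = PAGE_SHIFT * 8;
      printf("pcp->high = %lu, pcp->batch = %lu\n", high, batch);
      return 0;
  }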
Higher values will increase aggressiveness, lower values decrease the amount of swap. A value of 0 instructs the kernel not to initiate swap until the amount of free and file-backed pages is less than the high water mark in a zone.
When overcommit_memory is set to 2 ("never overcommit" mode), the kernel reserves min(3% of current process size, user_reserve_kbytes) of free memory.

user_reserve_kbytes defaults to min(3% of the current process size, 128MB).
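A minimal C sketch of that reserve, with an assumed process size:

  #include <stdio.h>

  int main(void)
  {
      unsigned long process_kb = 2UL * 1024 * 1024;  /* assumed 2GB process */
      unsigned long user_reserve_kb = 128UL * 1024;  /* the 128MB default */
      unsigned long three_pct = process_kb * 3 / 100;
      unsigned long reserve = three_pct < user_reserve_kb ?
                              three_pct : user_reserve_kb;

      printf("free memory required: %lu kB\n", reserve);
      return 0;
  }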
This percentage value controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to reclaim dentries and inodes at a "fair" rate with respect to pagecache and swapcache reclaim. When vfs_cache_pressure=0, the kernel will never reclaim dentries and inodes due to memory pressure, which can easily lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100 causes the kernel to prefer to reclaim dentries and inodes.
zone_reclaim_mode allows someone to set more or less aggressive approaches to reclaim memory when a zone runs out of memory. If it is set to zero then no zone reclaim occurs; allocations will be satisfied from other zones or nodes in the system.

This is a value formed by ORing together:

1 = Zone reclaim on
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages
Allowing zone reclaim to write out pages stops processes that are writing large amounts of data from dirtying pages on other nodes. Zone reclaim will write out the pages if a zone fills up, and so effectively throttles the process. This may decrease the performance of a single process, since it cannot use all of system memory to buffer the outgoing writes anymore, but it preserves the memory on other nodes so that the performance of other processes running on other nodes will not be affected.
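A minimal C sketch of composing a mode value from these bits (the macro names are illustrative, not the kernel's):

  #include <stdio.h>

  #define RECLAIM_ZONE  1  /* zone reclaim on */
  #define RECLAIM_WRITE 2  /* zone reclaim writes dirty pages out */
  #define RECLAIM_SWAP  4  /* zone reclaim swaps pages */

  int main(void)
  {
      int mode = RECLAIM_ZONE | RECLAIM_WRITE;  /* reclaim and write: 3 */

      printf("echo %d > /proc/sys/vm/zone_reclaim_mode\n", mode);
      return 0;
  }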
815 ============ End of Document =================================