What is NUMA?

From the hardware perspective, a NUMA system is a computer platform comprising
multiple components or assemblies, each of which may contain zero or more
CPUs, local memory, and/or IO buses.  For brevity, and to distinguish the
hardware view of these physical components/assemblies from the software
abstraction thereof, this document calls them "cells".

The cells of a NUMA system are connected by some sort of system
interconnect--e.g., a crossbar or a point-to-point link are common types of
NUMA system interconnects.  Both of these types of interconnect can be
aggregated to create NUMA platforms with cells at multiple distances from
other cells.

For Linux, the NUMA platforms of interest are primarily Cache Coherent NUMA
(ccNUMA) systems.  In a ccNUMA system, all memory is visible to and accessible
from any CPU attached to any cell, and cache coherency is handled in hardware
by the processor caches and/or the system interconnect.

Memory access time and effective memory bandwidth vary with how far the cell
containing the CPU or IO bus making the access is from the cell containing the
target memory.  For example, accesses to memory by CPUs attached to the same
cell see lower latency and higher bandwidth than accesses to memory on other,
remote cells, and a platform can have cells at several different distances
from any given cell.
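
The kernel exports these relative distances to user space.  As a small
illustration, assuming libnuma is installed (compile with -lnuma), the
following program prints the distance between every pair of nodes; the
smallest value (a node's distance to itself) means "local", and larger
values mean more remote cells:

  /* Print the distance between every pair of NUMA nodes as reported by
   * the kernel.  A node's distance to itself is the smallest value;
   * larger values mean the target cell is more remote. */
  #include <numa.h>
  #include <stdio.h>

  int main(void)
  {
          if (numa_available() < 0) {
                  fprintf(stderr, "NUMA is not available on this system\n");
                  return 1;
          }

          int max = numa_max_node();
          for (int from = 0; from <= max; from++) {
                  for (int to = 0; to <= max; to++)
                          printf("%4d", numa_distance(from, to));
                  printf("\n");
          }
          return 0;
  }
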
Platform vendors don't build NUMA systems just to make software developers'
lives interesting.  Rather, this architecture is a means to provide scalable
memory bandwidth.  However, to achieve scalable memory bandwidth, system and
application software must arrange for a large majority of memory references
[cache misses] to be to "local" memory--memory on the same cell, if any--or
to the closest cell that has memory.

This leads to the Linux software view of a NUMA system:
Linux divides the system's hardware resources into software abstractions
called "nodes" and maps each node onto a physical cell of the platform,
abstracting away some details on some architectures.  As with physical cells,
software nodes may contain zero or more CPUs, memory and/or IO buses.  And,
again, accesses to memory on "closer" nodes--nodes that map to closer
cells--will generally see faster access times and higher effective bandwidth
than accesses to memory on more remote nodes.

For some architectures, such as x86, Linux will "hide" any node representing a
physical cell that has no memory attached, and reassign any CPUs attached to
that cell to a node representing a cell that does have memory.  Thus, on these
architectures, one cannot assume that all of the CPUs Linux associates with a
given node will see the same local memory access times and bandwidth.
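
One way to inspect the topology the kernel actually presents, after any such
hiding, is to query it from user space.  The sketch below assumes libnuma
(numa_node_of_cpu() and friends, compile with -lnuma) and simply reports which
node each configured CPU was assigned to:

  /* Report the node Linux associates with each configured CPU.  On
   * architectures that hide memoryless nodes, CPUs attached to a
   * memoryless cell will show up under a memory-bearing node. */
  #include <numa.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          if (numa_available() < 0) {
                  fprintf(stderr, "NUMA is not available\n");
                  return 1;
          }

          printf("%d configured node(s)\n", numa_num_configured_nodes());

          long ncpus = sysconf(_SC_NPROCESSORS_CONF);
          for (long cpu = 0; cpu < ncpus; cpu++) {
                  int node = numa_node_of_cpu((int)cpu);
                  if (node >= 0)
                          printf("cpu %ld -> node %d\n", cpu, node);
          }
          return 0;
  }
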
For each node with memory, Linux constructs an independent memory management
subsystem, complete with its own free page lists, in-use page lists, usage
statistics and locks to mediate access.  In addition, for each memory zone
[one or more of DMA, DMA32, NORMAL, HIGH_MEMORY, MOVABLE], Linux constructs an
ordered "zonelist".  A zonelist specifies the zones/nodes to visit when the
selected zone/node cannot satisfy an allocation request.  The situation in
which a zone has no available memory to satisfy a request is called "overflow"
or "fallback".

Because some nodes contain multiple zones holding different types of memory,
Linux must decide whether to order the zonelists such that allocations fall
back to the same zone type on a different node, or to a different zone type on
the same node.  This is an important consideration because some zones, such as
DMA or DMA32, represent relatively scarce resources.  By default, Linux
chooses a zonelist order based on the sizes of the various zone types relative
to the total memory of the node and the total memory of the system.  The
default order may be overridden using the numa_zonelist_order boot parameter
or sysctl.
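
Purely as a conceptual illustration (this is not kernel code, and
struct zone_entry is a simplified stand-in for the kernel's internal
structures), the fallback walk over a zonelist amounts to trying each entry
in order:

  /* Simplified model of zonelist fallback: visit each (node, zone) entry
   * in the ordered list until one can satisfy the request. */
  #include <stddef.h>

  struct zone_entry {
          int node;          /* node this zone belongs to */
          int zone_type;     /* e.g. DMA, NORMAL, MOVABLE */
          size_t free_pages; /* pages currently available */
  };

  /* Returns the first entry that can hold 'pages' pages, or NULL if every
   * zone in the list has overflowed.  Entry 0 is the preferred zone on the
   * preferred (usually local) node; later entries are fallback targets. */
  static const struct zone_entry *
  walk_zonelist(const struct zone_entry *zl, size_t nr, size_t pages)
  {
          for (size_t i = 0; i < nr; i++)
                  if (zl[i].free_pages >= pages)
                          return &zl[i];
          return NULL;
  }
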
By default, Linux will attempt to satisfy memory allocation requests from the
node to which the CPU that executes the request is assigned.  Specifically,
Linux will attempt to allocate from the first node in the appropriate zonelist
for the node where the request originates; this is called "local allocation".
If the "local" node cannot satisfy the request, the kernel examines the other
zones in the selected zonelist, looking for the first one that can.

Local allocation tends to keep subsequent accesses to the allocated memory
"local" to the underlying physical resources and off the system
interconnect--as long as the task on whose behalf the kernel allocated the
memory does not later migrate away from that memory.
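
A user-space way to observe local allocation is to touch a freshly mapped
page and then ask the kernel which node backs it, using get_mempolicy(2) with
MPOL_F_NODE | MPOL_F_ADDR (link with -lnuma for the syscall wrapper).  On an
otherwise idle system the reported node will normally be the node of the CPU
that performed the first touch, though that is not guaranteed:

  #include <numaif.h>     /* get_mempolicy(), MPOL_F_NODE, MPOL_F_ADDR */
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          long page = sysconf(_SC_PAGESIZE);
          char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (buf == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }

          /* First touch: the kernel allocates the backing page now, by
           * default from the local node of the CPU running this code. */
          memset(buf, 0, page);

          int node = -1;
          if (get_mempolicy(&node, NULL, 0, buf,
                            MPOL_F_NODE | MPOL_F_ADDR) == 0)
                  printf("page was placed on node %d\n", node);
          else
                  perror("get_mempolicy");
          return 0;
  }
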
The Linux scheduler is aware of the NUMA topology of the platform--embodied in
the "scheduling domains" data structures--and attempts to minimize task
migration to distant scheduling domains.  However, the scheduler does not take
a task's NUMA footprint directly into account, so under sufficient imbalance a
task may end up running on a node remote from its initial node and its kernel
data structures.

System administrators and application designers can restrict a task's
migration to improve NUMA locality using various CPU affinity command line
interfaces, such as taskset(1) and numactl(1), and program interfaces such as
sched_setaffinity(2).  Further, one can modify the kernel's default local
allocation behavior using Linux NUMA memory policy.
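
For example, "numactl --cpunodebind=0 --membind=0 ./app" constrains both where
an application runs and where its memory comes from.  A program can do much
the same for itself, as in the sketch below; it simply assumes, for
illustration only, that node 0 exists and that CPU 0 belongs to it (link with
-lnuma):

  #define _GNU_SOURCE
  #include <numaif.h>     /* set_mempolicy(), MPOL_PREFERRED */
  #include <sched.h>      /* sched_setaffinity(), cpu_set_t */
  #include <stdio.h>

  int main(void)
  {
          /* Restrict the calling thread to CPU 0 ... */
          cpu_set_t cpus;
          CPU_ZERO(&cpus);
          CPU_SET(0, &cpus);
          if (sched_setaffinity(0, sizeof(cpus), &cpus) != 0)
                  perror("sched_setaffinity");

          /* ... and ask the kernel to prefer node 0 for its allocations. */
          unsigned long nodemask = 1UL;       /* bit 0 => node 0 */
          if (set_mempolicy(MPOL_PREFERRED, &nodemask,
                            sizeof(nodemask) * 8) != 0)
                  perror("set_mempolicy");

          /* Page allocations for this task now favor node 0. */
          return 0;
  }
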
On architectures that do not hide memoryless nodes, Linux includes only nodes
with memory in the zonelists.  For a memoryless node, the "local memory
node"--the node of the first zone in the CPU's node's zonelist--is therefore
not the node itself, but the node the kernel selected as the local node for
memory allocations while building the zonelists.  Default, local allocations
thus still succeed, with the kernel supplying the closest available memory.
This is a consequence of the same mechanism that allows such allocations to
fall back to other nearby nodes when a node that does contain memory
overflows.

Some kernel allocations do not want, or cannot tolerate, this fallback
behavior.  Rather, they want to be sure they get memory from the specified
node or to be notified that the node has no free memory.  This is usually the
case when a subsystem allocates per-CPU memory resources, for example.

A typical model for making such an allocation is to obtain the node id of the
node to which the "current CPU" is attached, using one of the kernel's
numa_node_id() or cpu_to_node() functions, and then to request memory from
only that node.  When such an allocation fails, the requesting subsystem may
revert to its own fallback path; the slab kernel memory allocator is one
example.  Alternatively, the subsystem may choose to disable, or simply not
enable, per-node allocation support when initialized on a memoryless node.
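
A minimal kernel-style sketch of this model, under the assumption that the
caller wants the allocation to come from the CPU's own node or not at all
(alloc_cpu_buffer() is a hypothetical helper, and __GFP_THISNODE is used to
suppress zonelist fallback):

  /* Hypothetical per-CPU setup: allocate strictly on the CPU's own node,
   * then fall back via the subsystem's own path if that node cannot
   * satisfy the request (e.g. it is memoryless or out of memory). */
  #include <linux/slab.h>
  #include <linux/topology.h>

  static void *alloc_cpu_buffer(int cpu, size_t size)
  {
          int nid = cpu_to_node(cpu);     /* node this CPU is attached to */
          void *buf;

          /* __GFP_THISNODE forbids fallback to other nodes' zones. */
          buf = kmalloc_node(size, GFP_KERNEL | __GFP_THISNODE, nid);
          if (!buf)
                  buf = kmalloc(size, GFP_KERNEL);  /* subsystem fallback */
          return buf;
  }
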
If the architecture supports--does not hide--memoryless nodes, then CPUs
attached to memoryless nodes would always incur this fallback path overhead,
or some subsystems would fail to initialize if they attempted to allocate
memory exclusively from a node without memory.  To support such architectures
transparently, kernel subsystems can use the numa_mem_id() or cpu_to_mem()
functions to locate the "local memory node" for the calling or specified CPU.
Again, this is the same node from which default, local page allocations will
be attempted.
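
Continuing the hypothetical helper above, a memoryless-node-aware variant asks
for the nearest node that actually has memory rather than the node the CPU is
attached to:

  #include <linux/slab.h>
  #include <linux/topology.h>

  static void *alloc_cpu_buffer_mem_aware(int cpu, size_t size)
  {
          /* cpu_to_mem() returns the CPU's "local memory node": the same
           * as cpu_to_node(cpu) unless that node is memoryless, in which
           * case it is the nearest node that does have memory. */
          int nid = cpu_to_mem(cpu);

          return kmalloc_node(size, GFP_KERNEL | __GFP_THISNODE, nid);
  }
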