Lines Matching refs:or
9 comprises multiple components or assemblies, each of which may contain 0
10 or more CPUs, local memory, and/or IO buses. For brevity and to
18 connected together with some sort of system interconnect--e.g., a crossbar or
24 Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
26 is handled in hardware by the processor caches and/or the system interconnect.
29 away the cell containing the CPU or IO bus making the memory access is from the
39 [cache misses] to be to "local" memory--memory on the same cell, if any--or
47 architectures. As with physical cells, software nodes may contain 0 or more
48 CPUs, memory and/or IO buses. And, again, memory accesses to memory on
61 the existing nodes--or the system memory for non-NUMA platforms--into multiple
71 each memory zone [one or more of DMA, DMA32, NORMAL, HIGH_MEMORY, MOVABLE],
75 "overflow" or "fallback".
79 fall back to the same zone type on a different node, or to a different zone
81 such as DMA or DMA32, represent relatively scarce resources. Linux chooses
85 boot parameter or sysctl. [see Documentation/kernel-parameters.txt and
115 privileged user can specify in the scheduling or NUMA commands and functions
128 Some kernel allocations do not want or cannot tolerate this allocation fallback
130 or get notified that the node has no free memory. This is usually the case when
135 numa_node_id() or cpu_to_node() functions and then request memory from only
138 example of this. Or, the subsystem may choose to disable or not to enable
144 or some subsystems would fail to initialize if they attempted to allocate
147 or cpu_to_mem() function to locate the "local memory node" for the calling or