Lines matching "in" (the number on each line is its line number in the source document):
46 hierarchy visible in a virtual file system. These are the essential
50 Cpusets use the generic cgroup subsystem described in
54 include CPUs in its CPU affinity mask, and using the mbind(2) and
55 set_mempolicy(2) system calls to include Memory Nodes in its memory
57 CPUs or Memory Nodes not in that cpuset. The scheduler will not
58 schedule a task on a CPU that is not allowed in its cpus_allowed
60 node that is not allowed in the requesting task's mems_allowed vector.
62 User level code may create and destroy cpusets by name in the cgroup
102 leverages existing CPU and Memory Placement facilities in the Linux
121 - Each task in the system is attached to a cpuset, via a pointer
122 in the task structure to a reference counted cgroup structure.
124 allowed in that task's cpuset.
126 those Memory Nodes allowed in that task's cpuset.
139 into the rest of the kernel, none in performance critical paths:
141 - in init/main.c, to initialize the root cpuset at system boot.
142 - in fork and exit, to attach and detach a task from its cpuset.
143 - in sched_setaffinity, to mask the requested CPUs by what's
144 allowed in that task's cpuset (see the sketch after this list).
145 - in sched.c migrate_live_tasks(), to keep migrating tasks within
147 - in the mbind and set_mempolicy system calls, to mask the requested
148 Memory Nodes by what's allowed in that task's cpuset.
149 - in page_alloc.c, to restrict memory to allowed nodes.
150 - in vmscan.c, to restrict page recovery to the current cpuset.
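The sched_setaffinity hook listed above can be observed from user space: a requested CPU mask is masked down to the CPUs the task's cpuset allows. A minimal sketch, using taskset(1) (which calls sched_setaffinity(2)) and assuming the cpuset hierarchy is mounted at /sys/fs/cgroup/cpuset with an existing cpuset named "small" that allows only CPUs 0-1 (the path, name and CPU numbers are illustrative)::

  # move the current shell into the "small" cpuset
  /bin/echo $$ > /sys/fs/cgroup/cpuset/small/tasks

  # request CPUs 0-7; the kernel masks the request by what the
  # cpuset allows, so the effective affinity stays within 0-1
  taskset -pc 0-7 $$
  grep Cpus_allowed_list /proc/self/status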
152 You should mount the "cgroup" filesystem type in order to enable
160 in the two formats seen in the following example:
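If the example refers to the mask and list forms reported in /proc/<pid>/status for each task, those two formats can be seen with a simple grep; the values below are illustrative::

  $ grep -E 'Cpus_allowed|Mems_allowed' /proc/self/status
  Cpus_allowed:        ff
  Cpus_allowed_list:   0-7
  Mems_allowed:        00000000,00000003
  Mems_allowed_list:   0-1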
167 Each cpuset is represented by a directory in the cgroup file system
171 - cpuset.cpus: list of CPUs in that cpuset
172 - cpuset.mems: list of Memory Nodes in that cpuset
177 - cpuset.memory_pressure: measure of how much paging pressure there is in the cpuset
189 to the appropriate file in that cpuset's directory, as listed above.
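For example, the control files of an existing cpuset can be read and rewritten directly; a brief sketch, assuming the hierarchy is mounted at /sys/fs/cgroup/cpuset and already contains a child cpuset named "Alpha" (both the path and the name are illustrative)::

  cd /sys/fs/cgroup/cpuset/Alpha
  cat cpuset.cpus                  # e.g. 0-3
  cat cpuset.mems                  # e.g. 0
  /bin/echo 2-3 > cpuset.cpus      # shrink the cpuset to CPUs 2 and 3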
201 Such management of a system "in the large" integrates smoothly with
218 The cpus and mems files in the root (top_cpuset) cpuset are
238 isolating each job's user allocation in its own cpuset. To do this,
249 of the rate that the tasks in a cpuset are attempting to free up in
253 This enables batch managers monitoring jobs running in dedicated
270 /dev/cpuset/memory_pressure_enabled, the hook in the rebalance
289 pressure in a cpuset, with a single read, rather than having to
291 set of tasks in the cpuset.
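A batch manager can therefore poll one file per cpuset, as sketched below. This assumes the hierarchy is mounted at /sys/fs/cgroup/cpuset and the job runs in a cpuset named "job1" (both illustrative), and that metering has been enabled system-wide first, since it is off by default::

  # enable the inexpensive per-cpuset memory pressure metering
  /bin/echo 1 > /sys/fs/cgroup/cpuset/cpuset.memory_pressure_enabled

  # one read per cpuset, instead of querying every task in the job
  cat /sys/fs/cgroup/cpuset/job1/cpuset.memory_pressure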
299 the tasks in the cpuset, in units of reclaims attempted per second,
306 kernel allocates pages for the file system buffers and related in
335 mempolicies will not notice any change in these calls as a result of
348 PFA_SPREAD_PAGE for each task that is in that cpuset or subsequently
360 node in the current task's mems_allowed to prefer for the allocation.
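When the per-cpuset memory spread flags are turned on instead of this node-local default, page cache and kernel slab cache allocations are spread evenly over the nodes of the task's cpuset. A minimal sketch, run from the job's cpuset directory (assumed to be the current directory)::

  /bin/echo 1 > cpuset.memory_spread_page   # spread page cache over the cpuset's mems
  /bin/echo 1 > cpuset.memory_spread_slab   # spread kernel slab caches the same way
  cat cpuset.memory_spread_page             # reads back 1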
362 This memory placement policy is also known (in other contexts) as
368 the several nodes in the job's cpuset in order to fit. Without this
369 policy, especially for jobs that might have one thread reading in the
370 data set, the memory allocation across the nodes in the job's cpuset
387 Each sched domain covers some subset of the CPUs in the system;
388 no two sched domains overlap; some CPUs might not be in any sched
392 than one big one, but doing so means that overloads in one of the
397 the isolated CPUs will not participate in load balancing, and will not
410 setting), it requests that all the CPUs in that cpuset's allowed 'cpuset.cpus'
411 be contained in a single sched domain, ensuring that load balancing
413 from any CPU in that cpuset to any other.
416 scheduler will avoid load balancing across the CPUs in that cpuset,
417 --except-- in so far as is necessary because some overlapping cpuset
422 CPUs, and the setting of the "cpuset.sched_load_balance" flag in any other
425 Therefore in the above two situations, the top cpuset flag
429 When doing this, you don't usually want to leave any unpinned tasks in
432 the particulars of this flag setting in descendant cpusets. Even if
433 such a task could use spare CPU cycles in some other CPUs, the kernel
437 Of course, tasks pinned to a particular CPU can be left in a cpuset
443 overlap and each CPU is in at most one sched domain.
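The usual way to obtain such a partition is to turn the flag off in the top cpuset and on only in non-overlapping child cpusets. A sketch, assuming the hierarchy is mounted at /sys/fs/cgroup/cpuset and child cpusets "batch" and "rt" with disjoint 'cpuset.cpus' already exist (the names are illustrative)::

  cd /sys/fs/cgroup/cpuset
  /bin/echo 0 > cpuset.sched_load_balance        # no balancing across the whole machine
  /bin/echo 1 > batch/cpuset.sched_load_balance  # one sched domain per child cpuset
  /bin/echo 1 > rt/cpuset.sched_load_balance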
464 paragraphs above. In the general case, as in the top cpuset case,
465 don't leave tasks that might use non-trivial amounts of CPU in
470 CPUs in "cpuset.isolcpus" were excluded from load balancing by the
472 of the value of "cpuset.sched_load_balance" in any cpuset.
479 ensure that it can load balance across all the CPUs in that cpuset
480 (makes sure that all the CPUs in the cpus_allowed of that cpuset are
481 in the same sched domain.)
484 then they will be (must be) both in the same sched domain.
497 CPUs in the system. This partition is a set of subsets (represented
512 setup - one sched domain for each element (struct cpumask) in the
525 Within a sched domain, the scheduler migrates tasks in two ways: periodic load
534 And if a CPU runs out of tasks in its runqueue, the CPU tries to pull
539 idle CPUs, the scheduler might not search all CPUs in the domain
540 every time. In fact, on some architectures, the searching ranges on
541 events are limited to the same socket or node where the CPU is located,
548 on the next tick. For some applications in special situations, waiting
553 indicates the size of the searching range in levels, ideally as follows,
558 1 : search siblings (hyperthreads in a core).
559 2 : search cores in a package.
560 3 : search CPUs in a node [= system-wide on a non-NUMA system]
561 4 : search nodes in a chunk of nodes [on a NUMA system]
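The request is made through the per-cpuset 'cpuset.sched_relax_domain_level' file; for example (the value 1, limiting the search to sibling hyperthreads, is illustrative, and -1 means the cpuset makes no request)::

  /bin/echo 1 > cpuset.sched_relax_domain_level
  cat cpuset.sched_relax_domain_level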
603 in the task's cpuset, and update its per-task memory placement to
607 of MPOL_BIND nodes are still allowed in the new cpuset. If the task
609 in the new cpuset, then the task will be essentially treated as if it
616 If a cpuset has its 'cpuset.cpus' modified, then each task in that cpuset
621 the task will be allowed to run on any CPU allowed in its new cpuset,
634 allocated to it on nodes in its previous cpuset are migrated
641 'cpuset.mems' file is modified, pages allocated to tasks in that
642 cpuset, that were on nodes in the previous setting of 'cpuset.mems',
643 will be moved to nodes in the new setting of 'cpuset.mems'.
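A sketch of that sequence, run from the cpuset's directory (assumed to be the current directory; the node numbers are illustrative)::

  /bin/echo 1 > cpuset.memory_migrate   # migrate pages when 'cpuset.mems' changes
  cat cpuset.mems                       # e.g. 0
  /bin/echo 1 > cpuset.mems             # pages on the old nodes are moved to node 1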
644 Pages that were not in the task's prior cpuset, or in the cpuset's
649 then all the tasks in that cpuset will be moved to the nearest ancestor
653 in the original cpuset, and the kernel will automatically update
662 The kernel may drop some requests, in rare cases even panic, if a
672 3) Create the new cpuset by doing mkdir's and write's (or echo's) in
681 and then start a subshell 'sh' in that cpuset:
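A sketch of that sequence, assuming the cgroup filesystem with the cpuset controller is mounted at /sys/fs/cgroup/cpuset (skip the mount if it is already in place) and using illustrative CPU and Memory Node numbers::

  mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset
  cd /sys/fs/cgroup/cpuset
  mkdir Charlie
  cd Charlie
  /bin/echo 2-3 > cpuset.cpus
  /bin/echo 1 > cpuset.mems
  /bin/echo $$ > tasks
  sh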
691 # The subshell 'sh' is now running in cpuset Charlie
722 tree of the cpusets in the system. For instance, /sys/fs/cgroup/cpuset
760 You can also create cpusets inside your cpuset by using mkdir in this
766 This will fail if the cpuset is in use (has cpusets inside, or has
784 This is the syntax to use when writing in the cpus or mems files
785 in cpuset directories:
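Both ranges and explicit comma-separated lists are accepted, and the two can be mixed; a sketch with illustrative values::

  /bin/echo 1-4 > cpuset.cpus       # set cpus list to CPUs 1,2,3,4
  /bin/echo 1,2,3,4 > cpuset.cpus   # the same set, written as an explicit list
  /bin/echo 0-2,7 > cpuset.cpus     # ranges and single CPUs can be mixed
  /bin/echo 0 > cpuset.mems         # mems uses the same syntax, with node numbers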
829 errors. If you use it in the cpuset file system, you won't be