Lines Matching refs:that
42 an on-line node that contains memory.
56 policy, are both filtered through that task's cpuset, excluding any
57 CPUs or Memory Nodes not in that cpuset. The scheduler will not
58 schedule a task on a CPU that is not allowed in its cpus_allowed
60 node that is not allowed in the requesting task's mems_allowed vector.
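
One way to see these vectors from userspace is the task's /proc status
file; a minimal check (the Cpus_allowed/Mems_allowed fields are the
standard ones there):

    # Print the CPU and memory-node masks the kernel enforces for this shell
    grep -E 'Cpus_allowed|Mems_allowed' /proc/self/status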
124 allowed in that task's cpuset.
126 those Memory Nodes allowed in that task's cpuset.
133 - A cpuset may be marked exclusive, which ensures that no other
144 allowed in that task's cpuset.
148 Memory Nodes by what's allowed in that task's cpuset.
169 files describing that cpuset:
171 - cpuset.cpus: list of CPUs in that cpuset
172 - cpuset.mems: list of Memory Nodes in that cpuset
180 - cpuset.sched_load_balance flag: if set, load balance across the CPUs in that cpuset
189 to the appropriate file in that cpuset's directory, as listed above.
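
For example, shrinking a cpuset to CPUs 2-3 and memory node 1 is just
two writes (paths assume the cgroup-v1 hierarchy is mounted at
/sys/fs/cgroup/cpuset; "Charlie" is this document's example cpuset):

    cd /sys/fs/cgroup/cpuset/Charlie
    /bin/echo 2-3 > cpuset.cpus    # CPUs 2 and 3 only
    /bin/echo 1 > cpuset.mems      # memory node 1 only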
195 children of that task, to a cpuset allows organizing the workload
196 on a system into related sets of tasks such that each set is constrained
232 A cpuset that is cpuset.mem_exclusive *or* cpuset.mem_hardwall is "hardwalled",
236 space. This enables configuring a system so that several independent
249 of the rate at which the tasks in a cpuset are attempting to free up in
254 cpusets to efficiently detect what level of memory pressure that job
258 submitted jobs, which may choose to terminate or re-prioritize jobs that
261 computing jobs that will dramatically fail to meet required performance
272 that the cpuset_memory_pressure_enabled flag is zero. So only
273 systems that enable this feature will compute the metric.
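
As a sketch, the metric is switched on once, system wide, and then read
per cpuset:

    # Enable the per-cpuset memory_pressure computation (root cpuset only)
    /bin/echo 1 > /sys/fs/cgroup/cpuset/cpuset.memory_pressure_enabled
    # Read the recent reclaim-attempt rate for one cpuset
    cat /sys/fs/cgroup/cpuset/Charlie/cpuset.memory_pressure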
294 of data per-cpuset) is kept, and updated by any task attached to that
305 There are two boolean flag files per cpuset that control where the
312 over all the nodes that the faulting task is allowed to use, instead
317 such as for inodes and dentries evenly over all the nodes that the
341 files. By default they contain "0", meaning that the feature is off
342 for that cpuset. Writing a "1" to that file turns
348 PFA_SPREAD_PAGE for each task that is in that cpuset or subsequently
349 joins that cpuset. The page allocation calls for the page cache
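
A sketch of enabling both spread flags for one cpuset:

    # Spread page cache pages evenly over the cpuset's memory nodes
    /bin/echo 1 > /sys/fs/cgroup/cpuset/Charlie/cpuset.memory_spread_page
    # Do the same for kernel slab caches (inodes, dentries)
    /bin/echo 1 > /sys/fs/cgroup/cpuset/Charlie/cpuset.memory_spread_slab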
365 This policy can provide substantial improvements for jobs that need
366 to place thread-local data on the corresponding node, but must also
367 access large file system data sets that have to be spread across
369 policy, especially for jobs that might have one thread reading in the
377 tasks. If one CPU is underutilized, kernel code running on that
386 domains such that it only load balances within each sched domain.
392 than one big one, but doing so means that overloads in one of the
407 balancing if that is not needed.
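
For example, a system partitioned by cpusets typically turns the flag
off at the top, so the kernel stops building one sched domain spanning
every CPU:

    # Disable load balancing across the whole machine (top cpuset)
    /bin/echo 0 > /sys/fs/cgroup/cpuset/cpuset.sched_load_balance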
410 setting), it requests that all the CPUs in that cpuset's allowed 'cpuset.cpus'
411 be contained in a single sched domain, ensuring that load balancing
413 from any CPU in that cpuset to any other.
416 scheduler will avoid load balancing across the CPUs in that cpuset,
430 the top cpuset that might use non-trivial amounts of CPU, as such tasks
434 scheduler might not consider the possibility of load balancing that
435 task to that underused CPU.
438 that disables "cpuset.sched_load_balance" as those tasks aren't going anywhere
447 that would be beyond our understanding. So if each of two partially
449 form a single sched domain that is a superset of both. We won't move
451 code might waste some compute cycles considering that possibility.
465 don't leave tasks that might use non-trivial amounts of CPU in
479 ensure that it can load balance across all the CPUs in that cpuset
480 (makes sure that all the CPUs in the cpus_allowed of that cpuset are
487 then by the above that means there is a single sched domain covering
490 The kernel commits to user space that it will avoid load balancing
498 as an array of struct cpumask) of CPUs, pairwise disjoint, that cover
499 all the CPUs that must be load balanced.
531 then the scheduler may migrate task B to CPU Y so that task B can start on
554 otherwise the initial value -1, which indicates that the cpuset has no request.
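
Writing a small non-negative level into the per-cpuset file overrides
the system default; a sketch:

    # Request that waking tasks search sibling CPUs on the same core (level 1)
    /bin/echo 1 > /sys/fs/cgroup/cpuset/Charlie/cpuset.sched_relax_domain_level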
576 Note that modifying this file will have both good and bad effects,
594 code, such as the scheduler, and because the kernel
601 to that cpuset, the next time that the kernel attempts to allocate
602 a page of memory for that task, the kernel will notice the change
613 memory placement, as above, the next time that the kernel attempts
614 to allocate a page of memory for that task.
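
So rebinding a cpuset's memory, with migration of already-allocated
pages, can be sketched as:

    cd /sys/fs/cgroup/cpuset/Charlie
    /bin/echo 1 > cpuset.memory_migrate   # migrate pages on the next mems change
    /bin/echo 2 > cpuset.mems             # tasks' pages follow them to node 2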
616 If a cpuset has its 'cpuset.cpus' modified, then each task in that cpuset
625 updated by the kernel, on the next allocation of a page for that task,
629 of main memory) then that page stays on whatever node it
633 a task is attached to that cpuset, any pages that task had
640 Also if 'cpuset.memory_migrate' is set true, then if that cpuset's
641 'cpuset.mems' file is modified, pages allocated to tasks in that
642 cpuset, that were on nodes in the previous setting of 'cpuset.mems',
644 Pages that were not in the task's prior cpuset, or in the cpuset's
648 to remove all the CPUs that are currently assigned to a cpuset,
649 then all the tasks in that cpuset will be moved to the nearest ancestor
657 violate cpuset placement, over starving a task that has had all
661 kernel internal allocations that must be satisfied immediately.
668 To start a new job that is to be contained within a cpuset, the steps are:
674 4) Start a task that will be the "founding father" of the new job.
675 5) Attach that task to the new cpuset by writing its pid to the
676 /sys/fs/cgroup/cpuset tasks file for that cpuset.
681 and then start a subshell 'sh' in that cpuset:
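
The command sequence (reconstructed here; the mount step is only needed
if the hierarchy is not already mounted) is essentially:

    mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset
    cd /sys/fs/cgroup/cpuset
    mkdir Charlie
    cd Charlie
    /bin/echo 2-3 > cpuset.cpus
    /bin/echo 1 > cpuset.mems
    /bin/echo $$ > tasks
    sh
    # The subshell 'sh' is now running in cpuset Charlie.
    # The next line should display '/Charlie'
    cat /proc/self/cpuset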
721 Then under /sys/fs/cgroup/cpuset you can find a tree that corresponds to the
723 is the cpuset that holds the whole system.
744 the CPUs and Memory Nodes it can use, the processes that are using
769 Note that for legacy reasons, the "cpuset" filesystem exists as a
815 Note that it is PID, not PIDs. You can only attach ONE task at a time.
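
Attaching a whole set of tasks is therefore a loop of single-PID
writes; a sketch (the PIDs below are hypothetical):

    # One write per PID; the tasks file accepts exactly one PID at a time
    for pid in 1234 1235 1236; do
        /bin/echo $pid > /sys/fs/cgroup/cpuset/Charlie/tasks
    done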