Lines Matching refs:task
45 the resources within a task's current cpuset. They form a nested
53 Requests by a task, using the sched_setaffinity(2) system call to
56 policy, are both filtered through that task's cpuset, filtering out any
58 schedule a task on a CPU that is not allowed in its cpus_allowed
60 node that is not allowed in the requesting task's mems_allowed vector.
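For example, a request for more CPUs than the cpuset allows is cut down to the allowed set. A sketch, assuming the cpuset hierarchy is mounted at /sys/fs/cgroup/cpuset and a cpuset 'small' limited to CPUs 0-1 already exists; taskset(1) wraps sched_setaffinity(2):

  # /bin/echo $$ > /sys/fs/cgroup/cpuset/small/tasks
  # taskset -cp 0-7 $$     # ask for CPUs 0-7 via sched_setaffinity
  # taskset -cp $$         # the mask actually granted is filtered to 0-1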
65 specify and query to which cpuset a task is assigned, and list the
66 task pids assigned to a cpuset.
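A sketch of that interface (same assumed mount point; /proc/<pid>/cpuset reports the cpuset a task is assigned to):

  # cat /proc/self/cpuset                              # query this shell's cpuset
  # /bin/echo $$ > /sys/fs/cgroup/cpuset/small/tasks   # assign it to 'small'
  # cat /sys/fs/cgroup/cpuset/small/tasks              # list pids in 'small'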
114 CPUs a task may be scheduled (sched_setaffinity) and on which Memory
121 - Each task in the system is attached to a cpuset, via a pointer
122 in the task structure to a reference counted cgroup structure.
124 allowed in that task's cpuset.
126 those Memory Nodes allowed in that task's cpuset.
142 - in fork and exit, to attach and detach a task from its cpuset.
144 allowed in that task's cpuset.
148 Memory Nodes by what's allowed in that task's cpuset.
157 The /proc/<pid>/status file for each task has four added lines,
158 displaying the task's cpus_allowed (on which CPUs it may be scheduled)
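These lines can be read directly; the values below are illustrative:

  # grep -E 'Cpus_allowed|Mems_allowed' /proc/self/status
  Cpus_allowed:      ff
  Cpus_allowed_list: 0-7
  Mems_allowed:      00000001
  Mems_allowed_list: 0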
194 The attachment of each task, automatically inherited at fork by any
195 children of that task, to a cpuset allows organizing the work load
197 to using the CPUs and Memory Nodes of a particular cpuset. A task
277 Because this meter is per-cpuset, rather than per-task or mm,
287 Because this meter is per-cpuset rather than per-task or mm,
294 of data per-cpuset) is kept, and updated by any task attached to that
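A sketch of using the meter, assuming the per-cpuset file 'cpuset.memory_pressure' and the top-level enable flag 'cpuset.memory_pressure_enabled':

  # /bin/echo 1 > /sys/fs/cgroup/cpuset/cpuset.memory_pressure_enabled
  # cat /sys/fs/cgroup/cpuset/small/cpuset.memory_pressure   # recent reclaim rate; 0 when idle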
312 over all the nodes that the faulting task is allowed to use, instead
313 of preferring to put those pages on the node where the task is running.
318 faulting task is allowed to use, instead of preferring to put those
319 pages on the node where the task is running.
322 stack segment pages of a task.
325 pages are allocated on the node local to where the task is running,
326 except perhaps as modified by the task's NUMA mempolicy or cpuset
333 or slab caches to ignore the task's NUMA mempolicy and be spread
336 their containing task's memory spread settings. If memory spreading
348 PFA_SPREAD_PAGE for each task that is in that cpuset or subsequently
350 is modified to perform an inline check for this PFA_SPREAD_PAGE task
359 value of a per-task rotor cpuset_mem_spread_rotor to select the next
360 node in the current task's mems_allowed to prefer for the allocation.
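Both spread policies are per-cpuset boolean files; a sketch, reusing the assumed 'small' cpuset:

  # /bin/echo 1 > /sys/fs/cgroup/cpuset/small/cpuset.memory_spread_page   # spread page cache
  # /bin/echo 1 > /sys/fs/cgroup/cpuset/small/cpuset.memory_spread_slab   # spread slab caches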
383 kernel data structures such as the task list increases more than
406 system overhead on those CPUs, including avoiding task load
412 can move a task (not otherwise pinned, as by sched_setaffinity)
433 such a task could use spare CPU cycles on some other CPUs, the kernel
435 task to that underused CPU.
450 a task to a CPU outside its cpuset, but the scheduler load balancing
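A sketch of turning balancing off for a cpuset's CPUs; since the kernel balances over any overlapping cpuset that still has the flag set, the top cpuset's flag has to be cleared as well:

  # /bin/echo 0 > /sys/fs/cgroup/cpuset/cpuset.sched_load_balance
  # /bin/echo 0 > /sys/fs/cgroup/cpuset/small/cpuset.sched_load_balance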
528 When a task is woken up, the scheduler tries to move it onto an idle CPU.
529 For example, if task A running on CPU X activates another task B
531 then the scheduler migrates task B to CPU Y so that task B can start on
532 CPU Y without waiting for task A on CPU X.
546 woken task B from X to Z, since it is outside its search range.
547 As a result, task B on CPU X needs to wait for task A or for load balancing
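The width of this wakeup search can be tuned per cpuset via 'cpuset.sched_relax_domain_level'; a sketch (-1 selects the system default, and increasing values request a wider search, e.g. 1 covers sibling hyperthreads):

  # cat /sys/fs/cgroup/cpuset/small/cpuset.sched_relax_domain_level
  -1
  # /bin/echo 1 > /sys/fs/cgroup/cpuset/small/cpuset.sched_relax_domain_level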
595 does not support one task updating the memory placement of another
596 task directly, the impact on a task of changing its cpuset CPU
597 or Memory Node placement, or of changing to which cpuset a task
600 If a cpuset has its Memory Nodes modified, then for each task attached
602 a page of memory for that task, the kernel will notice the change
603 in the task's cpuset, and update its per-task memory placement to
604 remain within the new cpuset's memory placement. If the task was using
606 its new cpuset, then the task will continue to use whatever subset
607 of MPOL_BIND nodes are still allowed in the new cpuset. If the task
609 in the new cpuset, then the task will be essentially treated as if it
611 as queried by get_mempolicy(), doesn't change). If a task is moved
612 from one cpuset to another, then the kernel will adjust the task's
614 to allocate a page of memory for that task.
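A sketch of both cases (the 'other' cpuset is assumed to exist):

  # /bin/echo 0-1 > /sys/fs/cgroup/cpuset/small/cpuset.mems   # tasks in 'small' adopt this on their next allocation
  # /bin/echo $$ > /sys/fs/cgroup/cpuset/other/tasks          # a moved task is likewise updated on its next allocation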
616 If a cpuset has its 'cpuset.cpus' modified, then each task in that cpuset
618 if a task's pid is written to another cpuset's 'tasks' file, then its
619 allowed CPU placement is changed immediately. If such a task had been
621 the task will be allowed to run on any CPU allowed in its new cpuset,
624 In summary, the memory placement of a task whose cpuset is changed is
625 updated by the kernel, on the next allocation of a page for that task,
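Unlike memory placement, which waits for the next allocation, the CPU change is visible at once; a sketch for a shell attached to 'small':

  # /bin/echo 2-3 > /sys/fs/cgroup/cpuset/small/cpuset.cpus
  # grep Cpus_allowed_list /proc/self/status   # already reports 2-3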
633 tasks are attached to that cpuset, any pages a task had
635 to the task's new cpuset. The relative placement of the page within
644 Pages that were not in the task's prior cpuset, or in the cpuset's
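A sketch of enabling migration before moving a task in:

  # /bin/echo 1 > /sys/fs/cgroup/cpuset/small/cpuset.memory_migrate
  # /bin/echo $$ > /sys/fs/cgroup/cpuset/small/tasks   # this task's pages follow it into 'small'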
652 on task attaching. In this failing case, those tasks will stay
657 violate cpuset placement, over starving a task that has had all
664 the current task's cpuset, then we relax the cpuset, and look for
674 4) Start a task that will be the "founding father" of the new job.
675 5) Attach that task to the new cpuset by writing its pid to the
677 6) fork, exec or clone the job tasks from this founding father task.
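Put together, these steps might look like the following sketch (the cpuset name and the CPU and node numbers are illustrative):

  # cd /sys/fs/cgroup/cpuset
  # mkdir Charlie
  # cd Charlie
  # /bin/echo 2-3 > cpuset.cpus
  # /bin/echo 1 > cpuset.mems
  # /bin/echo $$ > tasks      # attach the founding father (this shell)
  # sh                        # a subshell inherits the cpuset
  # cat /proc/self/cpuset     # should display '/Charlie'; fork/exec job tasks from here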
815 Note that it is PID, not PIDs. You can only attach ONE task at a time.
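So if several tasks must be attached, it has to be done one pid per write:

  # /bin/echo PID1 > tasks
  # /bin/echo PID2 > tasks
    ...
  # /bin/echo PIDn > tasks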