Searched refs:tasks (Results 1 - 200 of 450) sorted by relevance

/linux-4.1.27/include/linux/sched/
deadline.h:5 * SCHED_DEADLINE tasks has negative priorities, reflecting
7 * NORMAL/BATCH tasks.
rt.h:55 * default timeslice is 100 msecs (used only for SCHED_RR tasks).
prio.h:11 * tasks are in the range MAX_RT_PRIO..MAX_PRIO-1. Priority
/linux-4.1.27/include/uapi/linux/
cgroupstats.h:32 __u64 nr_sleeping; /* Number of tasks sleeping */
33 __u64 nr_running; /* Number of tasks running */
34 __u64 nr_stopped; /* Number of tasks in stopped state */
35 __u64 nr_uninterruptible; /* Number of tasks in uninterruptible */
37 __u64 nr_io_wait; /* Number of tasks waiting on IO */
msg.h:63 * MSGMNB is the default size of a new message queue. Non-root tasks can
64 * decrease the size with msgctl(IPC_SET), root tasks
/linux-4.1.27/kernel/sched/
idle_task.c:6 * (NOTE: these are not related to SCHED_IDLE tasks which are
14 return task_cpu(p); /* IDLE tasks as never migrated */ select_task_rq_idle()
19 * Idle tasks are unconditionally rescheduled:
83 * Simple, special scheduling class for the per-CPU idle tasks:
87 /* no enqueue/yield_task for idle tasks */
stop_task.c:16 return task_cpu(p); /* stop tasks as never migrate */ select_task_rq_stop()
110 * Simple, special scheduling class for the per-CPU stop tasks:
auto_group.c:34 /* We've redirected RT tasks to the root task group... */ autogroup_destroy()
86 * Autogroup RT tasks are redirected to the root task group autogroup_create()
87 * so we don't have to move tasks around upon policy change, autogroup_create()
fair.c:39 * Targeted preemption latency for CPU-bound tasks:
66 * Minimal preemption granularity for CPU-bound tasks:
348 * both tasks until we find their ancestors who are siblings of common find_matching_se()
612 * When there are too many tasks (sched_nr_latency) we have to stretch
747 * Are we enqueueing a waiting task? (for current tasks update_stats_enqueue()
801 * calculated based on the tasks virtual memory size and
874 spinlock_t lock; /* nr_tasks, tasks */
970 * of nodes, and move tasks towards the group with the most for_each_online_node()
1007 * larger multiplier, in order to group tasks together that are almost
1125 /* Approximate capacity in terms of runnable tasks on a node */
1255 * be improved if the source tasks was migrated to the target dst_cpu taking
1298 * be incurred if the tasks were swapped. task_numa_compare()
1306 * If dst and source tasks are in the same NUMA group, or not task_numa_compare()
1314 * tasks within a group over tiny differences. task_numa_compare()
1361 * better than swapping tasks around, check if a move is task_numa_compare()
1433 * imbalance and would be the first to start moving tasks about. task_numa_migrate()
1435 * And we want to avoid any moving of tasks about, as that would create task_numa_migrate()
1436 * random movement of tasks -- counter the numa conditions we're trying task_numa_migrate()
1525 * alternative node to recheck if the tasks is now properly placed.
1738 * tasks from numa_groups near each other in the system, and
1834 * Normalize the faults_from, so all tasks in a group for_each_online_node()
2853 * Newly forked tasks are enqueued with se->avg.decay_count == 0, they enqueue_entity_load_avg()
2879 /* migrated tasks did not contribute to our blocked load */ enqueue_entity_load_avg()
3036 * The 'current' period is already promised to the current tasks, place_entity()
3252 * when there are only lesser-weight tasks around): set_next_entity()
4165 * CFS operations on tasks: unthrottle_offline_cfs_rqs()
4445 * has 7 equal weight tasks, distributed as below (rw_i), with the resulting
4574 * If we wake multiple tasks be careful to not bounce wake_affine()
4790 * tasks. The unit of the return value must be the one of capacity so we can
4793 * cfs.utilization_load_avg is the sum of running time of runnable tasks on a
4799 * after migrating tasks until the average stabilizes with the new running
4923 * be negative here since on-rq tasks have decay-count == 0. migrate_task_rq_fair()
4945 * By using 'se' instead of 'curr' we penalize light tasks, so wakeup_gran()
5051 /* Idle tasks are by definition preempted by non-idle tasks. */ check_preempt_wakeup()
5057 * Batch and idle tasks do not preempt non-idle tasks (their preemption check_preempt_wakeup()
5319 * We them move tasks around to minimize the imbalance. In the continuous
5348 * Coupled with a limit on how many tasks we can migrate every balance pass,
5436 struct list_head tasks; member in struct:lb_env
5571 * We do not migrate tasks that are: can_migrate_task()
5590 * meet load balance goals by pulling other tasks on src_cpu. can_migrate_task()
5689 * Returns number of detached tasks if successful and 0 otherwise.
5693 struct list_head *tasks = &env->src_rq->cfs_tasks; detach_tasks() local
5703 while (!list_empty(tasks)) { detach_tasks()
5704 p = list_first_entry(tasks, struct task_struct, se.group_node); detach_tasks()
5711 /* take a breather every nr_migrate tasks */ detach_tasks()
5730 list_add(&p->se.group_node, &env->tasks); detach_tasks()
5754 list_move_tail(&p->se.group_node, tasks); detach_tasks()
5792 * attach_tasks() -- attaches all tasks detached by detach_tasks() to their
5797 struct list_head *tasks = &env->tasks; attach_tasks() local
5802 while (!list_empty(tasks)) { attach_tasks()
5803 p = list_first_entry(tasks, struct task_struct, se.group_node); attach_tasks()
5941 unsigned long sum_weighted_load; /* Weighted load of group's tasks */
5945 unsigned int sum_nr_running; /* Nr tasks running in the group */
6163 * Imagine a situation of two groups of 4 cpus each and 4 tasks each with a
6170 * If we were to balance group-wise we'd place two tasks in the first group and
6171 * two tasks in the second group. Clearly this is undesired as it will overload
6176 * moving tasks due to affinity constraints.
6195 * be used by some tasks.
6198 * capacity for CFS tasks.
6200 * account the variance of the tasks' load and to return true if the available
6219 * group_is_overloaded returns true if the group has more tasks than it can
6222 * with the exact right number of tasks, has no more spare capacity but is not
6427 * In case the child domain prefers tasks go to siblings update_sd_lb_stats()
6429 * and move all the excess tasks away. We lower the capacity update_sd_lb_stats()
6431 * these excess tasks. The extra check prevents the case where update_sd_lb_stats()
6434 * the tasks on the system). update_sd_lb_stats()
6545 * OK, we don't have enough imbalance to justify moving tasks, fix_small_imbalance()
6646 * there is no guarantee that any tasks will be moved so we'll have calculate_imbalance()
6660 * CPUs can be put to idle by rebalancing those tasks elsewhere, if
6671 * put to idle by rebalancing its tasks onto our group.
6693 /* There is no busy sibling group to pull tasks from */ find_busiest_group()
6715 * don't try and pull any tasks. find_busiest_group()
6721 * Don't pull any tasks if this group is already above the domain find_busiest_group()
6777 * - regular: there are !numa tasks for_each_cpu_and()
6778 * - remote: there are numa tasks that run on the 'wrong' node for_each_cpu_and()
6781 * In order to avoid migrating ideally placed numa tasks, for_each_cpu_and()
6786 * queue by moving tasks around inside the node. for_each_cpu_and()
6790 * allow migration of more tasks. for_each_cpu_and()
6832 * Max backoff if we encounter pinned tasks. Pretty arbitrary value, but
6847 * ASYM_PACKING needs to force migrate tasks from busy but need_active_balance()
6848 * higher numbered CPUs in order to pack all tasks in the need_active_balance()
6909 * tasks if there is an imbalance.
6931 .tasks = LIST_HEAD_INIT(env.tasks), load_balance()
6973 * Attempt to move tasks. If find_busiest_group has found load_balance()
6991 * We've detached some tasks from busiest_rq. Every load_balance()
6994 * that nobody can manipulate the tasks in parallel. load_balance()
7013 * Revisit (affine) tasks on src_cpu that couldn't be moved to load_balance()
7059 /* All tasks on this runqueue were pinned by CPU affinity */ load_balance()
7154 * We reach balance because all tasks are pinned at this level so load_balance()
7203 * idle. Attempts to pull tasks from other CPUs.
7268 * Stop searching for tasks to pull if there are for_each_domain()
7269 * now runnable tasks on this rq. for_each_domain()
7308 * running tasks off the busiest CPU onto idle CPUs. It requires at
7593 * state even if we migrated tasks. Update it. for_each_domain()
7681 * significantly reduced because of RT tasks or IRQs.
8004 * it must have been asleep, sleeping tasks keep their ->vruntime task_move_group_fair()
8213 * Time slice is 0 for SCHED_OTHER tasks that are on an otherwise get_rr_interval_fair()
cpuacct.c:20 /* Time spent by the tasks of the cpu accounting group executing in ... */
28 /* track cpu usage of a group of tasks and its child groups */
rt.c:94 /* We start is dequeued state, because no RT tasks are queued */ init_rt_rq()
263 /* Try to pull RT tasks here if we lower this rq's prio */ need_pull_rt_task()
933 * Update the current task's runtime statistics. Skip current tasks that
1342 * For equal prio tasks, we just let the scheduler sort it out. select_task_rq_rt()
1567 * lowest priority tasks in the system. Now we want to elect find_lowest_rq()
1639 * Target rq has tasks of equal or higher priority, find_lock_lowest_rq()
1760 /* No more tasks, just exit */ push_rt_task()
2048 * try to push tasks away now
2125 * that we might want to pull RT tasks from other runqueues.
2130 * If there are other RT tasks then we will reschedule switched_from_rt()
2131 * and the scheduling of the other RT tasks will handle switched_from_rt()
2133 * we may need to handle the pulling of RT tasks switched_from_rt()
2156 * with RT tasks. In this case we try to push them off to
2196 * may need to pull tasks to this runqueue. prio_changed_rt()
2255 * RR tasks need a special form of timeslice management. task_tick_rt()
2256 * FIFO tasks have no timeslices. task_tick_rt()
2292 * Time slice is 0 for SCHED_FIFO tasks get_rr_interval_rt()
deadline.c:77 /* zero means no -deadline tasks */ init_dl_rq()
155 * sched_rt.c, it is an rb-tree with tasks ordered by deadline.
417 * result in breaking guarantees promised to other tasks (refer to
665 * observed by schedulable tasks (excluding time spent update_curr_dl()
699 * account our runtime there too, otherwise actual rt tasks update_curr_dl()
713 * We'll let actual RT tasks worry about the overflow here, we update_curr_dl()
744 * If the dl_rq had no -deadline tasks, or if the new task inc_dl_deadline()
949 * Yield task semantic for -deadline tasks is:
1051 * tasks.
1167 * SCHED_DEADLINE tasks cannot fork and this is achieved through task_fork_dl()
1261 * This is, among the runqueues where the current tasks find_later_rq()
1388 * See if the non running -deadline tasks on this rq
1446 /* No more tasks */ push_dl_task()
1511 * If there are no more pullable tasks on the pull_dl_task()
1688 * Note, p may migrate OR new deadline tasks cancel_dl_timer()
sched.h:139 * To keep the bandwidth of -deadline tasks and groups under control
155 * Moreover, groups consume bandwidth on each CPU, while tasks only
399 * leaf cfs_rqs are those that hold tasks (lowest schedulable entity in
498 * an rb-tree, ordered by tasks' deadlines, with caching
530 * than one runnable -deadline task (as it is below for RT tasks).
579 /* capture load from *all* tasks on this cpu: */
894 * Return the group to which this tasks belongs. sched_ttwu_pending()
1099 * of tasks with abnormal "nice" values across CPUs the contribution that
1101 * scheduling class and "nice" value. For SCHED_NORMAL tasks this is just a
1179 * tasks.
completion.c:27 * changing the task state if and only if any tasks are woken up.
47 * changing the task state if and only if any tasks are woken up.
core.c:282 * Number of tasks to iterate in a single balance run.
304 * part of the period that we allow rt tasks to run in us.
697 * tasks are of a lower priority. The scheduler tick does nothing. sched_can_stop_tick()
703 * Round-robin realtime tasks time slice with other tasks at the same sched_can_stop_tick()
793 * SCHED_IDLE tasks get minimal weight: set_load_weight()
950 * be boosted by RT tasks, or might be boosted by
958 * If we are RT tasks or we were boosted to RT priority, effective_prio()
1033 * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks. set_task_cpu()
1123 * Cross migrate two tasks
1362 * Don't tell them about moving exiting tasks or
1758 * changing the task state if and only if any tasks are woken up.
2163 * prepare_task_switch - prepare to switch tasks
2670 * Optimization: we know that if all tasks are in pick_next_task()
2709 * To drive preemption between tasks, the scheduler sets the flag in timer
3183 * RT tasks are offset by -200. Normal tasks are centered
3272 * not on the safe side. It does however guarantee tasks will never __setparam_dl()
3301 * getparam()/getattr() don't report silly values for !rt tasks. __setscheduler_params()
3460 * Allow unprivileged RT tasks to decrease priority: __sched_setscheduler()
3486 * unprivileged DL tasks to increase their relative deadline __sched_setscheduler()
3554 * Do not allow realtime tasks into groups that have no runtime __sched_setscheduler()
3569 * Don't allow tasks with an affinity mask smaller than __sched_setscheduler()
3603 * Take priority boosted tasks into account. If the new __sched_setscheduler()
4068 * tasks allowed to run on all the CPUs in the task's sched_setaffinity()
4207 * This function yields the current CPU to other tasks. If there are no
4584 * Only show locks if all tasks are dumped:
4640 * The idle tasks have their own, simple scheduling class: init_idle()
4685 * success of set_cpus_allowed_ptr() on all attached tasks task_can_attach()
4894 * tasks on the runqueues
4936 * We need to explicitly wake pending tasks before running migration_cpu_stop()
4996 * Migrate all tasks from the rq, sleeping tasks will be migrated by
7185 * system cpu resource is divided among the tasks of for_each_possible_cpu()
7190 * In other words, if root_task_group has 10 tasks of weight for_each_possible_cpu()
7196 * We achieve this by letting root_task_group's tasks sit for_each_possible_cpu()
7377 * Only normalize user tasks: for_each_process_thread()
7392 * tasks back to 0: for_each_process_thread()
7595 * Autogroups do not have RT tasks; see autogroup_create(). tg_has_rt_tasks()
7636 * Ensure we don't starve existing RT tasks. tg_rt_schedulable()
7788 /* Don't accept realtime tasks when there is no way for them to run */ sched_rt_can_attach()
8020 /* We don't support RT-tasks being in separate groups */ cgroup_taskset_for_each()
wait.c:59 * number) then we wake all the non-exclusive tasks and one exclusive task.
87 * changing the task state if and only if any tasks are woken up.
130 * changing the task state if and only if any tasks are woken up.
cpudeadline.c:152 * called for a CPU without -dl tasks running. cpudl_set()
cpupri.c:20 * searches). For tasks with affinity restrictions, the algorithm has a
stats.h:126 * Called when tasks are switched involuntarily due, typically, to expiring
cputime.c:283 * Accumulate raw cputime values of dead tasks (sig->[us]time) and live
284 * tasks (sum on group iteration) belonging to @tsk's group.
/linux-4.1.27/tools/perf/bench/
futex.h:49 * futex_wake() - wake one or more tasks blocked on uaddr
50 * @nr_wake: wake up to this many tasks
59 * futex_cmp_requeue() - requeue tasks from uaddr to uaddr2
60 * @nr_wake: wake up to this many tasks
61 * @nr_requeue: requeue up to this many tasks
futex-requeue.c:8 * requeues without waking up any tasks -- thus mimicking a regular futex_wait.
27 * How many tasks to requeue at a time.
169 * Do not wakeup any tasks blocked on futex1, allowing bench_futex_requeue()
190 warnx("couldn't wakeup all tasks (%d/%d)", nrequeued, nthreads); bench_futex_requeue()
numa.c:188 "bind the first N tasks to these specific cpus (the rest is unbound)",
191 "bind the first N tasks to these specific memory nodes (the rest is unbound)",
448 tprintf("# binding tasks to CPUs:\n"); parse_setup_cpu_list()
545 printf("# NOTE: %d tasks bound, %d tasks unbound\n", t, g->p.nr_tasks - t); parse_setup_cpu_list()
585 tprintf("# binding tasks to NODEs:\n"); parse_setup_node_list()
660 printf("# NOTE: %d tasks mem-bound, %d tasks unbound\n", t, g->p.nr_tasks - t); parse_setup_node_list()
1273 g->p.nr_tasks, g->p.nr_tasks == 1 ? "task" : "tasks", g->p.nr_nodes, g->p.nr_cpus); print_summary()
futex-wake.c:8 * one or more tasks, and thus the waitqueue is never empty.
/linux-4.1.27/kernel/power/
process.c:72 * We need to retry, but first give the freezing tasks some
88 pr_err("Freezing of tasks %s after %d.%03d seconds "
89 "(%d tasks refusing to freeze, wq_busy=%d):\n",
146 * killable tasks. freeze_processes()
161 * (if any) before thawing the userspace tasks. So, it is the responsibility
162 * of the caller to thaw the userspace tasks, when the time is right.
168 pr_info("Freezing remaining freezable tasks ... "); freeze_kernel_threads()
196 pr_info("Restarting tasks ... "); thaw_processes()
power.h:249 * failure. So we have to thaw the userspace tasks ourselves. suspend_freeze_processes()
/linux-4.1.27/arch/xtensa/include/asm/
switch_to.h:12 /* * switch_to(n) should switch tasks to task nr n, first
/linux-4.1.27/scripts/gdb/linux/
tasks.py:36 t = g = utils.container_of(g['tasks']['next'],
37 task_ptr_type, "tasks")
51 $lx_task_by_pid(PID): Given PID, iterate over all tasks of the target and
cpus.py:16 from linux import tasks, utils
30 return tasks.get_thread_info(tasks.get_task_by_pid(tid))['cpu']
/linux-4.1.27/tools/perf/scripts/python/
sched-migration.py:98 def __init__(self, tasks = [0], event = RunqueueEventUnknown()):
99 self.tasks = tuple(tasks)
105 if taskState(prev_state) == "R" and next in self.tasks \
106 and prev in self.tasks:
112 next_tasks = list(self.tasks[:])
113 if prev in self.tasks:
125 if old not in self.tasks:
127 next_tasks = [task for task in self.tasks if task != old]
132 if new in self.tasks:
135 next_tasks = self.tasks[:] + tuple([new])
149 """ Provide the number of tasks on the runqueue.
151 return len(self.tasks) - 1
154 ret = self.tasks.__repr__()
275 for t in rq.tasks:
/linux-4.1.27/kernel/
hung_task.c:4 * kernel/hung_task.c - kernel thread for detecting tasks stuck in D state
22 * The number of tasks checked:
27 * Limit number of tasks checked in a batch.
126 panic("hung_task: blocked tasks"); check_hung_task()
166 * do not report extra hung tasks: check_hung_uninterruptible_tasks()
180 /* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */ for_each_process_thread()
223 * kthread which checks for tasks stuck in D state
cgroup_freezer.c:151 * current state. freezer_attach() is responsible for making new tasks
155 * @freezer->lock. freezer_attach() makes the new tasks conform to the
156 * current state and all following state changes can see the new tasks.
168 * Make the new tasks conform to the current state of @new_css. freezer_attach()
174 * current state before executing the following - !frozen tasks may freezer_attach()
175 * be visible in a FROZEN cgroup and frozen tasks in a THAWED one. freezer_attach()
237 * this function checks whether all tasks of this cgroup and the descendant
243 * Task states and freezer state might disagree while tasks are being
273 /* are all tasks frozen? */
cpuset.c:4 * Processor and Memory placement constraints for sets of tasks.
87 * The effective masks is the real masks that apply to the tasks
100 /* user-configured CPUs and Memory Nodes allow to tasks */
104 /* effective CPUs and Memory Nodes allow to tasks */
109 * This is old Memory Nodes tasks took on.
115 * cpuset.mems_allowed and have tasks' nodemask updated, and
496 * Cpusets with tasks - existing or newly being attached - can't
511 * tasks.
836 * update_tasks_cpumask - Update the cpumasks of tasks in the cpuset.
855 * update_cpumasks_hier - Update effective cpumasks and tasks in the subtree
922 * update_cpumask - update the cpus_allowed mask of a cpuset and all tasks in it
937 * An empty cpus_allowed is ok only if the cpuset has no tasks. update_cpumask()
940 * with tasks have cpus. update_cpumask()
976 * Temporarilly set tasks mems_allowed to target nodes of migration,
1014 * Allow tasks that have access to memory reserves because they have cpuset_change_task_nodemask()
1054 * update_tasks_nodemask - Update the nodemasks of tasks in the cpuset.
1102 * All the tasks' nodemasks have been updated, update update_tasks_nodemask()
1112 * update_nodemasks_hier - Update effective nodemasks and tasks in the subtree
1172 * migrate the tasks pages to the new memory.
1175 * Will take tasklist_lock, scan tasklist for tasks in cpuset cs,
1176 * lock each such tasks mm->mmap_sem, scan its vma's and rebind
1194 * An empty mems_allowed is ok iff there are no tasks in the cpuset. update_nodemask()
1197 * with tasks have memory. update_nodemask()
1260 * update_tasks_flags - update the spread flags of tasks in the cpuset.
1441 /* allow moving tasks into an empty cpuset if on default hierarchy */ cpuset_can_attach()
1654 * configuration and transfers all tasks to the nearest ancestor cpuset_write_resmask()
1659 * proceeding, so that we don't end up keep removing tasks added cpuset_write_resmask()
2096 * last CPU or node from a cpuset, then move the tasks in the empty
2113 pr_err("cpuset: failed to transfer tasks out of empty cpuset "); remove_tasks_in_empty_cpuset()
2135 * as the tasks will be migratecd to an ancestor. hotplug_update_tasks_legacy()
2148 * Move tasks to the nearest ancestor with execution resources, hotplug_update_tasks_legacy()
2180 * cpuset_hotplug_update_tasks - update tasks in a cpuset for hotunplug
2185 * all its tasks are moved to the nearest ancestor with both resources.
2262 /* we don't mess with cpumasks of tasks in top_cpuset */ cpuset_hotplug_workfn()
2309 * otherwise, the scheduler will get confused and put tasks to the cpuset_update_active_cpus()
2352 * cpuset_cpus_allowed - return cpus_allowed mask from a tasks cpuset.
2359 * tasks cpuset.
2404 * cpuset_mems_allowed - return mems_allowed mask from a tasks cpuset.
2410 * tasks cpuset.
2463 * and do not allow allocations outside the current tasks cpuset
2471 * current tasks mems_allowed came up empty on the first pass over
2489 * GFP_USER - only nodes in current tasks mems allowed ok.
2502 * Allow tasks that have access to memory reserves because they have __cpuset_node_allowed()
2530 * tasks in a cpuset with is_spread_page or is_spread_slab set),
2536 * node around the tasks mems_allowed nodes.
2633 * page reclaim efforts initiated by tasks in each cpuset.
2657 * - Print tasks cpuset path into seq_file.
pid.c:8 * pid-structures are backing objects for tasks sharing a given ID to chain
10 * parking tasks using given ID's on a list.
332 INIT_HLIST_HEAD(&pid->tasks[type]); alloc_pid()
392 hlist_add_head_rcu(&link->node, &link->pid->tasks[type]); attach_pid()
409 if (!hlist_empty(&pid->tasks[tmp])) __change_pid()
440 first = rcu_dereference_check(hlist_first_rcu(&pid->tasks[type]), pid_task()
cgroup.c:79 * objects, and the chain of tasks off each css_set.
141 * unattached - it never has more than a single cgroup, and all tasks are
178 /* This flag indicates whether tasks in the fork and exit paths should
428 * A cgroup can be associated with multiple css_sets as different tasks may
457 .tasks = LIST_HEAD_INIT(init_css_set.tasks),
814 INIT_LIST_HEAD(&cset->tasks); find_css_set()
982 * means that no tasks are currently attached, therefore there is no
990 * A cgroup can only be deleted if both its 'count' of using tasks
992 * tasks in the system use _some_ cgroup, and since there is always at
994 * always has either children cgroups and/or using tasks. So we don't
998 * update of a tasks cgroup pointer by cgroup_attach_task()
1556 * each css_set to its tasks until we see the list actually used - in other
1595 list_add(&p->cg_list, &cset->tasks); do_each_thread()
1752 * linking each css_set to its tasks and fix up all existing tasks. cgroup_mount()
1968 /* used to track tasks and other necessary states during migration */
1977 * Before migration is committed, the target migration tasks are on
2167 * with tasks so that child cgroups don't compete against tasks. cgroup_migrate_prepare_dst()
2244 * Prevent freeing of tasks while we take a snapshot. Tasks that are cgroup_migrate()
2297 * Now that we're guaranteed success, proceed to move all tasks to
2310 * Migration is committed, all target tasks are now on dst_csets.
2334 list_splice_tail_init(&cset->mg_tasks, &cset->tasks);
2380 * function to attach either it or all tasks in its threadgroup. Will lock
2409 * even if we're attaching all tasks in the thread group, we __cgroup_procs_write()
2597 * updated css_sets and migrates the tasks to the new ones.
2636 * All tasks in src_cset need to be migrated to the
2638 * walk tasks but migrate processes. The leader might even
2645 task = list_first_entry_or_null(&src_cset->tasks,
2767 * with tasks so that child cgroups don't compete against tasks.
2844 * css associations of all tasks in the subtree.
2851 * All tasks are migrated out of disabled csses. Kill or hide
3332 * cgroup_task_count - count the number of tasks in a cgroup.
3335 * Return the number of tasks in the cgroup.
3603 } while (list_empty(&cset->tasks) && list_empty(&cset->mg_tasks)); css_advance_task_iter()
3607 if (!list_empty(&cset->tasks)) css_advance_task_iter()
3608 it->task_pos = cset->tasks.next; css_advance_task_iter()
3612 it->tasks_head = &cset->tasks; css_advance_task_iter()
3618 * @css: the css to walk tasks of
3621 * Initiate iteration through the tasks of @css. The caller can call
3622 * css_task_iter_next() to walk through the tasks until the function
3664 /* If the iterator cg is NULL, we have no tasks */ css_task_iter_next()
3670 * Advance iterator to find next entry. cset->tasks is consumed css_task_iter_next()
3700 * cgroup_trasnsfer_tasks - move tasks from one cgroup to another
3701 * @to: cgroup to which the tasks will be moved
3702 * @from: cgroup in which the tasks currently reside
3720 /* all tasks in @from are being moved, all csets are source */ cgroup_transfer_tasks()
3731 * Migrate tasks one-by-one until @form is empty. This fails iff cgroup_transfer_tasks()
3753 * Stuff for reading the 'tasks'/'procs' files.
3756 * *lots* of attached tasks. So it may need several calls to read(),
3770 * of the cgroup files ("procs" or "tasks"). We keep a list of such pidlists,
3771 * a pair (one each for procs, tasks) for each pid namespace that's relevant
3886 * making it impossible to use, for example, single rbtree of member tasks
3942 * find the appropriate pidlist for our purpose (given procs vs tasks)
3973 * Load a cgroup's pidarray with either procs' tgids or tasks' pids
4002 /* get tgid or pid for procs or tasks file respectively */ pidlist_array_load()
4100 * seq_file methods for the tasks/procs files. The seq_file position is the
4293 .name = "tasks",
4941 * newly registered, all tasks and hence the cgroup_init_subsys()
4948 * registered, no tasks have been forked, so we don't cgroup_init_subsys()
4950 BUG_ON(!list_empty(&init_task.tasks)); cgroup_init_subsys()
5229 * when implementing operations which need to migrate all tasks of cgroup_post_fork()
5233 * will remain in init_css_set. This is safe because all tasks are cgroup_post_fork()
5235 * operation which transfers all tasks out of init_css_set. cgroup_post_fork()
5244 list_add(&child->cg_list, &cset->tasks); cgroup_post_fork()
5274 * We set the exiting tasks cgroup to the root cgroup (top_cgroup). We
5550 list_for_each_entry(task, &cset->tasks, cg_list) { cgroup_css_links_read()
torture.c:275 static int shuffle_idle_cpu; /* Force all torture tasks off this CPU */
300 * Unregister all tasks, for example, at the end of the torture run.
315 /* Shuffle tasks such that we allow shuffle_idle_cpu to become idle.
317 * the tasks to run on all CPUs.
347 /* Shuffle tasks across CPUs, with the intent of allowing each CPU in the
cpu.c:261 * clear_tasks_mm_cpumask - Safely clear tasks' mm_cpumask for a CPU
278 * offline, so its not like new tasks will ever get this cpu set in clear_tasks_mm_cpumask()
409 * runnable tasks from the cpu, there's only the idle task left now _cpu_down()
661 * ensure that the state of the system with respect to the tasks being frozen
freezer.c:38 * target tasks see the updated state.
taskstats.c:216 * Add additional stats from live tasks except zombie thread group fill_stats_for_tgid()
217 * leaders who are already counted with the dead tasks fill_stats_for_tgid()
pid_namespace.c:248 * But this ns can also have other tasks injected by setns()+fork(). zap_pid_ns_processes()
futex.c:90 * In futex wake up scenarios where no tasks are blocked on a futex, taking
208 * @list: priority-sorted list of tasks waiting on this futex
220 * A futex_q has a woken state, just like tasks have TASK_RUNNING.
1240 /* Make sure we really have tasks to wakeup */ futex_wake()
1504 * >=0 - on success, the number of tasks requeued or woken;
1532 * requeue_pi must wake as many tasks as it can, up to nr_wake futex_requeue()
/linux-4.1.27/drivers/isdn/hardware/eicon/
os_4bri.c:154 int tasks = _4bri_is_rev_2_bri_card(a->CardOrdinal) ? 1 : MQ_INSTANCE_COUNT; diva_4bri_init_card() local
155 int factor = (tasks == 1) ? 1 : 2; diva_4bri_init_card()
168 DBG_TRC(("SDRAM_LENGTH=%08x, tasks=%d, factor=%d", diva_4bri_init_card()
169 bar_length[2], tasks, factor)) diva_4bri_init_card()
260 if (tasks > 1) { diva_4bri_init_card()
301 for (i = 0; i < (tasks - 1); i++) { diva_4bri_init_card()
314 for (i = 0; i < tasks; i++) { diva_4bri_init_card()
316 adapter_list[i]->xdi_adapter.tasks = tasks; diva_4bri_init_card()
321 for (i = 0; i < tasks; i++) { diva_4bri_init_card()
346 for (i = 1; i < (tasks - 1); i++) { diva_4bri_init_card()
357 for (i = 1; i < (tasks - 1); i++) { diva_4bri_init_card()
365 for (i = 1; i < (tasks - 1); i++) { diva_4bri_init_card()
377 for (i = 1; i < (tasks - 1); i++) { diva_4bri_init_card()
396 for (i = 0; i < tasks; i++) { diva_4bri_init_card()
410 for (i = 0; i < tasks; i++) { diva_4bri_init_card()
441 for (i = 1; i < (tasks - 1); i++) { diva_4bri_init_card()
452 if (tasks > 1) { diva_4bri_init_card()
592 for (i = 0; i < a->xdi_adapter.tasks; i++) { diva_4bri_cleanup_slave_adapters()
874 for (i = 0; ((i < IoAdapter->tasks) && IoAdapter->QuadroList); i++) { diva_4bri_reset_adapter()
962 for (i = 1; i < IoAdapter->tasks; i++) { diva_4bri_start_adapter()
980 for (i = 0; i < IoAdapter->tasks; i++) { diva_4bri_start_adapter()
988 for (i = 0; i < IoAdapter->tasks; i++) { diva_4bri_start_adapter()
998 for (i = 0; i < IoAdapter->tasks; i++) { diva_4bri_start_adapter()
1001 (IoAdapter->tasks == 1) ? "BRI 2.0" : "4BRI")) diva_4bri_start_adapter()
1094 for (i = 0; i < IoAdapter->tasks; i++) { diva_4bri_stop_adapter()
1101 for (i = 0; i < IoAdapter->tasks; i++) { diva_4bri_stop_adapter()
s_4bri.c:52 int factor = (IoAdapter->tasks == 1) ? 1 : 2; qBri_cpu_trapped()
394 for (i = 0; i < IoAdapter->tasks; ++i) qBri_ISR()
468 if (!IoAdapter->tasks) { set_qBri_functions()
469 IoAdapter->tasks = MQ_INSTANCE_COUNT; set_qBri_functions()
477 if (!IoAdapter->tasks) { set_qBri2_functions()
478 IoAdapter->tasks = MQ_INSTANCE_COUNT; set_qBri2_functions()
480 IoAdapter->MemorySize = (IoAdapter->tasks == 1) ? BRI2_MEMORY_SIZE : MQ2_MEMORY_SIZE; set_qBri2_functions()
497 if (!IoAdapter->tasks) { prepare_qBri2_functions()
498 IoAdapter->tasks = MQ_INSTANCE_COUNT; prepare_qBri2_functions()
502 if (IoAdapter->tasks > 1) { prepare_qBri2_functions()
io.h:220 dword tasks; member in struct:_ISDN_ADAPTER
/linux-4.1.27/include/linux/
nsproxy.h:21 * 'count' is the number of tasks holding a reference.
23 * of nsproxies pointing to it, not the number of tasks.
25 * The nsproxy is shared by tasks which share all namespaces.
mutex.h:48 * locks and tasks (and only those tasks)
70 * This is the control structure for tasks blocked on mutex,
init_task.h:75 .tasks = { \
213 .tasks = LIST_HEAD_INIT(tsk.tasks), \
cgroup-defs.h:147 * Lists running through all tasks using this cgroup group.
148 * mg_tasks lists tasks which belong to this cset but are in the
150 * css_set_rwsem, but, during migration, once tasks are moved to
153 struct list_head tasks; member in struct:css_set
183 * target tasks on this cset should be migrated to. Protected by
219 * If this cgroup contains any tasks, it contributes one to
246 * List of cgrp_cset_links pointing at css_sets with tasks in this
262 * for tasks); created on demand.
pid.h:18 * It refers to individual tasks, process groups, and sessions. While
61 /* lists of tasks that use this pid */
62 struct hlist_head tasks[PIDTYPE_MAX]; member in struct:pid
179 &(pid)->tasks[type], pids[type].node) {
writeback.h:18 * Further beyond, all dirtier tasks will enter a loop waiting (possibly long
24 * dirty limit will follow down slowly to prevent livelocking all dirtier tasks.
sched.h:74 * the tasks may be useful for a wide variety of application fields, e.g.,
103 * and policies, that can be used to ensure all the tasks will make their
349 * Only dump TASK_* tasks. (0 for all tasks)
739 * group_rwsem prevents new tasks from entering the threadgroup and
740 * member tasks from exiting,a more specifically, setting of
918 #define SD_PREFER_SIBLING 0x1000 /* Prefer to place tasks in a sibling domain */
964 unsigned int cache_nice_tries; /* Leave cache hot tasks for # tries */
1353 struct list_head tasks; member in struct:task_struct
1416 * ptraced is the list of tasks this task is using ptrace on.
1996 * tasks can access tsk->flags in readonly mode for example
2490 /* Remove the current tasks stale references to the old mm_struct */
2537 list_entry_rcu((p)->tasks.next, struct task_struct, tasks)
2659 * and member tasks aren't allowed to exit (as indicated by PF_EXITING) or
backing-dev.h:82 * All the bdi tasks' dirty rate will be curbed under it.
cpu.h:104 /* Used for CPU hotplug events occurring while tasks are frozen due to a suspend
freezer.h:85 * appropriately in case the child has exited before the freezing of tasks is
cgroup.h:147 * - "tasks" is removed. Everything should be at process granularity. Use
163 * - cpuset: tasks will be kept in empty cpusets when hotplug happens and
platform_device.h:126 * enumeration tasks, they don't fully conform to the Linux driver model.
rcupdate.h:239 * to determine that all tasks have passed through a safe state, not so
358 * Note a voluntary context switch for RCU-tasks benefit. This is a
379 * report potential quiescent states to RCU-tasks even if the cond_resched()
/linux-4.1.27/arch/sh/include/asm/
switch_to_64.h:17 * switch_to() should switch tasks to task nr n, first
switch_to_32.h:67 * switch_to() should switch tasks to task nr n, first
/linux-4.1.27/arch/nios2/include/asm/
switch_to.h:12 * switch_to(n) should switch tasks to task ptr, first checking that
/linux-4.1.27/arch/blackfin/include/asm/
switch_to.h:14 * switch_to(n) should switch tasks to task ptr, first checking that
ptrace.h:32 * ptracing these tasks will fail.
/linux-4.1.27/drivers/gpu/drm/
drm_flip_work.c:114 struct list_head tasks; flip_worker() local
120 INIT_LIST_HEAD(&tasks); flip_worker()
122 list_splice_tail(&work->commited, &tasks); flip_worker()
126 if (list_empty(&tasks)) flip_worker()
129 list_for_each_entry_safe(task, tmp, &tasks, node) { flip_worker()
/linux-4.1.27/security/apparmor/include/
context.h:60 * struct aa_task_cxt - primary label for confined tasks
129 * __aa_current_profile - find the current tasks confining profile
133 * This fn will not update the tasks cred to the most up to date version
142 * aa_current_profile - find the current tasks confining profile and do updates
146 * This fn will update the tasks cred structure if the profile has been
policy.h:186 * used to determine profile attachment against unconfined tasks. All other
196 * determining profile attachment on "unconfined" tasks.
/linux-4.1.27/include/linux/sunrpc/
sched.h:35 struct list_head links; /* Links to related tasks */
45 struct list_head tk_task; /* global list of tasks */
56 * action next procedure for async tasks
77 pid_t tk_owner; /* Process id for batching tasks */
183 struct list_head tasks[RPC_NR_PRIORITY]; /* task queue for each priority level */ member in struct:rpc_wait_queue
187 unsigned char nr; /* # tasks remaining for cookie */
188 unsigned short qlen; /* total # tasks waiting in queue */
clnt.h:38 struct list_head cl_tasks; /* List of tasks */
/linux-4.1.27/kernel/locking/
semaphore.c:24 * The ->count variable represents how many more tasks can acquire this
25 * semaphore. If it's zero, there may be tasks waiting on the wait_list.
46 * Acquires the semaphore. If no more tasks are allowed to acquire the
70 * Attempts to acquire the semaphore. If no more tasks are allowed to
95 * Attempts to acquire the semaphore. If no more tasks are allowed to
150 * Attempts to acquire the semaphore. If no more tasks are allowed to
176 * context and even by tasks which have never called down().
rtmutex_common.h:19 * call schedule_rt_mutex_test() instead of schedule() for the tasks which
40 * This is the control structure for tasks blocked on a rt_mutex,
rtmutex.c:156 * associated tasks. rt_mutex_waiter_less()
665 * in the owner tasks pi waiters list with this waiter rt_mutex_adjust_prio_chain()
676 * the owner tasks pi waiters list with the new top rt_mutex_adjust_prio_chain()
761 * other tasks which try to modify @lock into the slow path try_to_take_rt_mutex()
960 * Remove the top waiter from the current tasks pi waiter list and
986 * the added benefit of forcing all new tasks into the wakeup_next_waiter()
/linux-4.1.27/arch/arm64/include/asm/
shmparam.h:20 * For IPC syscalls from compat tasks, we need to use the legacy 16k
stat.h:26 * struct stat64 is needed for compat tasks only. Its definition is different
/linux-4.1.27/net/sunrpc/
sched.c:50 * RPC tasks sit here while waiting for conditions to improve.
103 struct list_head *q = &queue->tasks[queue->priority]; rpc_rotate_queue_owner()
149 q = &queue->tasks[queue_priority]; list_for_each_entry()
162 * Swapper tasks always get inserted at the head of the queue.
178 list_add(&task->u.tk_wait.list, &queue->tasks[0]); __rpc_add_wait_queue()
180 list_add_tail(&task->u.tk_wait.list, &queue->tasks[0]); __rpc_add_wait_queue()
225 for (i = 0; i < ARRAY_SIZE(queue->tasks); i++) __rpc_init_priority_wait_queue()
226 INIT_LIST_HEAD(&queue->tasks[i]); __rpc_init_priority_wait_queue()
284 * and then waking up all tasks that were sleeping.
348 * By always appending tasks to the list we ensure FIFO behavior.
468 * Service a batch of tasks from a single owner. __rpc_find_next_queued_priority()
470 q = &queue->tasks[queue->priority]; __rpc_find_next_queued_priority()
488 if (q == &queue->tasks[0]) __rpc_find_next_queued_priority()
489 q = &queue->tasks[queue->maxpriority]; __rpc_find_next_queued_priority()
496 } while (q != &queue->tasks[queue->priority]); __rpc_find_next_queued_priority()
502 rpc_set_waitqueue_priority(queue, (unsigned int)(q - &queue->tasks[0])); __rpc_find_next_queued_priority()
513 if (!list_empty(&queue->tasks[0])) __rpc_find_next_queued()
514 return list_first_entry(&queue->tasks[0], struct rpc_task, u.tk_wait.list); __rpc_find_next_queued()
558 * @queue: rpc_wait_queue on which the tasks are sleeping
567 head = &queue->tasks[queue->maxpriority]; rpc_wake_up()
576 if (head == &queue->tasks[0]) rpc_wake_up()
586 * @queue: rpc_wait_queue on which the tasks are sleeping
596 head = &queue->tasks[queue->maxpriority]; rpc_wake_up_status()
606 if (head == &queue->tasks[0]) rpc_wake_up_status()
909 /* Initialize workqueue for async tasks */ rpc_init_task()
debugfs.c:152 /* make tasks file */ rpc_clnt_debugfs_register()
153 if (!debugfs_create_file("tasks", S_IFREG | S_IRUSR, clnt->cl_debugfs, rpc_clnt_debugfs_register()
254 /* make tasks file */ rpc_xprt_debugfs_register()
sysctl.c:113 /* Display the RPC tasks on writing to rpc_debug */ proc_dodebug()
xprt.c:33 * tasks that rely on callbacks.
340 * @xprt: transport with other tasks potentially waiting
357 * @xprt: transport with other tasks potentially waiting
469 * xprt_wake_pending_tasks - wake all tasks on a transport's pending queue
470 * @xprt: transport with waiting tasks
504 * @xprt: transport with waiting tasks
clnt.c:487 * this behavior so asynchronous tasks can also use rpc_create.
648 * there are no active RPC tasks by using some form of locking.
715 * Kill all tasks for the given client.
725 dprintk("RPC: killing all tasks for client %p\n", clnt); rpc_killall_tasks()
903 /* Add to the client's list of all tasks */ rpc_task_set_client()
/linux-4.1.27/include/linux/fsl/bestcomm/
gen_bd.h:2 * Header for Bestcomm General Buffer Descriptor tasks driver
fec.h:2 * Header for Bestcomm FEC tasks driver
bestcomm.h:95 /* BD based tasks helpers */
/linux-4.1.27/drivers/xen/
preempt.c:19 * seconds. Allow tasks running hypercalls via the privcmd driver to
/linux-4.1.27/arch/tile/include/asm/
switch_to.h:21 * switch_to(n) should switch tasks to task nr n, first
49 /* Address that switched-away from tasks are at. */
mmu_context.h:70 * that much time in kernel tasks in general, so just leaving the
/linux-4.1.27/net/irda/
irda_device.c:58 static hashbin_t *tasks = NULL; variable
71 tasks = hashbin_new(HB_LOCK); irda_device_init()
72 if (tasks == NULL) { irda_device_init()
73 net_warn_ratelimited("IrDA: Can't allocate tasks hashbin!\n"); irda_device_init()
93 hashbin_delete(tasks, (FREE_FUNC) __irda_task_delete); irda_device_cleanup()
172 hashbin_remove(tasks, (long) task, NULL); irda_task_delete()
/linux-4.1.27/ipc/
sem.c:52 * sleeping tasks and completes any pending operations that can be fulfilled.
53 * Semaphores are actively given to waiting tasks (necessary for FIFO).
708 * @pt: list of tasks to be woken up
712 * could be destroyed already and the tasks can disappear as soon as the
773 * wake_const_ops - wake up non-alter tasks
776 * @pt: list head for the tasks that must be woken up.
782 * The tasks that must be woken up are added to @pt. The return code
822 * do_smart_wakeup_zero - wakeup all wait for zero tasks
826 * @pt: list head of the tasks that must be woken up.
873 * update_queue - look for tasks that can be completed.
876 * @pt: list head for the tasks that must be woken up.
882 * The tasks that must be woken up are added to @pt. The return code
965 * @pt: list head of the tasks that must be woken up.
1044 * semncnt number of tasks waiting on semval being nonzero
1045 * semzcnt number of tasks waiting on semval being zero
1092 struct list_head tasks; freeary() local
1107 INIT_LIST_HEAD(&tasks); freeary()
1110 wake_up_sem_queue_prepare(&tasks, q, -EIDRM); freeary()
1115 wake_up_sem_queue_prepare(&tasks, q, -EIDRM); freeary()
1121 wake_up_sem_queue_prepare(&tasks, q, -EIDRM); freeary()
1125 wake_up_sem_queue_prepare(&tasks, q, -EIDRM); freeary()
1134 wake_up_sem_queue_do(&tasks); freeary()
1274 struct list_head tasks; semctl_setval() local
1287 INIT_LIST_HEAD(&tasks); semctl_setval()
1331 do_smart_update(sma, NULL, 0, 0, &tasks); semctl_setval()
1334 wake_up_sem_queue_do(&tasks); semctl_setval()
1346 struct list_head tasks; semctl_main() local
1348 INIT_LIST_HEAD(&tasks); semctl_main()
1457 do_smart_update(sma, NULL, 0, 0, &tasks); semctl_main()
1493 wake_up_sem_queue_do(&tasks); semctl_main()
1804 struct list_head tasks; SYSCALL_DEFINE4() local
1844 INIT_LIST_HEAD(&tasks); SYSCALL_DEFINE4()
1911 do_smart_update(sma, sops, nsops, 1, &tasks); SYSCALL_DEFINE4()
2020 wake_up_sem_queue_do(&tasks); SYSCALL_DEFINE4()
2034 * parent and child tasks.
2081 struct list_head tasks; exit_sem() local
2166 INIT_LIST_HEAD(&tasks); exit_sem()
2167 do_smart_update(sma, NULL, 0, 1, &tasks); exit_sem()
2170 wake_up_sem_queue_do(&tasks); exit_sem()
/linux-4.1.27/drivers/dma/bestcomm/
fec.c:2 * Bestcomm FEC tasks driver
28 /* fec tasks images */
160 /* Nothing special for the FEC tasks */ bcom_fec_rx_release()
261 /* Nothing special for the FEC tasks */ bcom_fec_tx_release()
267 MODULE_DESCRIPTION("BestComm FEC tasks driver");
gen_bd.c:33 /* gen_bd tasks images */
167 /* Nothing special for the GenBD tasks */ bcom_gen_bd_rx_release()
251 /* Nothing special for the GenBD tasks */ bcom_gen_bd_tx_release()
264 * specific parameters to bestcomm tasks.
351 MODULE_DESCRIPTION("BestComm General Buffer Descriptor tasks driver");
ata.c:148 /* Nothing special for the ATA tasks */ bcom_ata_release()
bestcomm.c:346 /* Stop all tasks */ bcom_engine_cleanup()
/linux-4.1.27/kernel/rcu/
update.c:458 /* Track exiting tasks in order to allow them to be waited for. */
468 * Post an RCU-tasks callback. First call must be from process context
491 * synchronize_rcu_tasks - wait until an rcu-tasks grace period has elapsed.
493 * Control will return to the caller some time after a full rcu-tasks
495 * executing rcu-tasks read-side critical sections have elapsed. These
508 * end of its last RCU-tasks read-side critical section whose beginning
510 * having an RCU-tasks read-side critical section that extends beyond
513 * and before the beginning of that RCU-tasks read-side critical section.
547 /* See if tasks are still holding out, complain if so. */ check_holdout_task()
566 pr_err("INFO: rcu_tasks detected stalls on tasks:\n"); check_holdout_task()
578 /* RCU-tasks kthread that detects grace periods and invokes callbacks. */ rcu_tasks_kthread()
594 * one RCU-tasks grace period and then invokes the callbacks. rcu_tasks_kthread()
635 * RCU-tasks grace period. Start off by scanning rcu_tasks_kthread()
636 * the task list for tasks that are not already rcu_tasks_kthread()
637 * voluntarily blocked. Mark these tasks and make rcu_tasks_kthread()
654 * Wait for tasks that are in the process of exiting.
656 * tasks that were previously exiting reach the point
664 * of holdout tasks, removing any that are no longer
693 * cause their RCU-tasks read-side critical sections to
705 * tasks to complete their final preempt_disable() region
tree_plugin.h:111 * not in a quiescent state. There might be any number of tasks blocked
171 * in unnecessarily waiting on tasks that started very rcu_preempt_note_context_switch()
246 * Return true if the specified rcu_node structure has tasks that were
380 * Dump detailed information for all tasks blocking the current RCU
401 * Dump detailed information for all tasks blocking the current RCU
439 * Scan the current list of tasks blocked within RCU read-side critical
461 * Check that the list of blocked tasks for the newly completed grace
467 * Also, if there are blocked tasks on the list, they automatically
551 * Return non-zero if there are any tasks in RCU read-side critical
564 * tasks covered by the specified rcu_node structure have done their bit
617 * Snapshot the tasks blocking the newly started preemptible-RCU expedited
619 * are such tasks, set the ->expmask bits up the rcu_node tree and also
637 /* No blocked tasks, nothing to do. */ sync_rcu_preempt_exp_init1()
658 * Snapshot the tasks blocking the newly started preemptible-RCU expedited
660 * leaf rcu_node structure has its ->expmask field set, check for tasks.
685 * If there are still blocked tasks, set up ->exp_tasks so that sync_rcu_preempt_exp_init2()
694 /* No longer any blocked tasks, so undo bit setting. */ sync_rcu_preempt_exp_init2()
703 * idea is to invoke synchronize_sched_expedited() to push all the tasks to
724 * operation that finds an rcu_node structure with tasks in the synchronize_rcu_expedited()
725 * process of being boosted will know that all tasks blocking synchronize_rcu_expedited()
727 * being boosted. This simplifies the process of moving tasks synchronize_rcu_expedited()
872 * tasks blocked within RCU read-side critical sections.
880 * tasks blocked within RCU read-side critical sections.
889 * so there is no need to check for blocked tasks. So check only for
933 * Because preemptible RCU does not exist, tasks cannot possibly exit
989 * Returns 1 if there are more tasks needing to be boosted.
1005 * Recheck under the lock: all tasks in need of boosting rcu_boost()
1014 * Preferentially boost tasks blocking expedited grace periods. rcu_boost()
1016 * expedited grace period must boost all blocked tasks, including rcu_boost()
tree.h:154 bool wait_blkd_tasks;/* Necessary to wait for blocked tasks to */
178 /* structure. If there are no tasks */
193 /* Total number of tasks boosted. */
195 /* Number of tasks boosted for expedited GP. */
197 /* Number of tasks boosted for normal GP. */
199 /* Refused to boost: no blocked tasks. */
tree.c:144 * is capable of creating new tasks. So RCU processing (for example,
145 * creating tasks for RCU priority boosting) must be delayed until after
1075 * in-kernel CPU-bound tasks cannot advance grace periods. rcu_implicit_dynticks_qs()
1143 * Dump stacks of all tasks running on stalled CPUs.
1189 pr_err("INFO: %s detected stalls on CPUs/tasks:", print_other_cpu_stall()
1230 /* Complain about tasks blocking the grace period. */
1793 else if (rcu_preempt_has_tasks(rnp)) /* blocked tasks */ rcu_for_each_leaf_node()
1800 * If all waited-on tasks from prior grace period are rcu_for_each_leaf_node()
2209 * Record a quiescent state for all tasks that were previously queued
2463 * and all tasks that were preempted within an RCU read-side critical
2473 * all CPUs offline and no blocked tasks, so it is OK to invoke it
3915 * idle tasks are prohibited from containing RCU read-side critical
/linux-4.1.27/arch/x86/include/asm/
switch_to.h:28 * Saving eflags is important. It switches not only IOPL between tasks,
29 * it also protects other tasks from NT leaking through sysenter etc.
syscall.h:230 * TIF_IA32 tasks should always have TS_COMPAT set at syscall_get_arch()
233 * x32 tasks should be considered AUDIT_ARCH_X86_64. syscall_get_arch()
/linux-4.1.27/include/drm/
drm_flip_work.h:69 * @queued: queued tasks
70 * @commited: commited tasks
/linux-4.1.27/include/uapi/asm-generic/
resource.h:47 #define RLIMIT_RTTIME 15 /* timeout for RT tasks in us */
/linux-4.1.27/arch/parisc/lib/
delay.c:44 /* Allow RT tasks to run */ __cr16_delay()
/linux-4.1.27/arch/m68k/include/asm/
switch_to.h:5 * switch_to(n) should switch tasks to task ptr, first checking that
/linux-4.1.27/arch/arm/include/asm/
syscall.h:108 /* ARM tasks don't change audit architectures on the fly. */ syscall_get_arch()
/linux-4.1.27/lib/
is_single_threaded.c:47 * will see other CLONE_VM tasks which might be for_each_process()
/linux-4.1.27/samples/bpf/
test_maps.c:186 static void run_parallel(int tasks, void (*fn)(int i, void *data), void *data) run_parallel() argument
188 pid_t pid[tasks]; run_parallel()
191 for (i = 0; i < tasks; i++) { run_parallel()
201 for (i = 0; i < tasks; i++) { run_parallel()
/linux-4.1.27/security/apparmor/
context.c:89 * aa_replace_current_profile - replace the current tasks profiles
127 * aa_set_current_onexec - set the tasks change_profile to happen onexec
149 * aa_set_current_hat - set the current tasks hat
153 * Do switch of tasks hat. If the task is currently in a hat
resource.c:128 * to the less of the tasks hard limit and the init tasks soft limit __aa_transition_rlimits()
/linux-4.1.27/arch/ia64/kernel/
sys_ia64.c:51 * For 64-bit tasks, align shared segments to 1MB to avoid potential arch_get_unmapped_area()
53 * tasks, we prefer to avoid exhausting the address space too quickly by arch_get_unmapped_area()
/linux-4.1.27/arch/arm/kernel/
reboot.c:100 * activity (executing tasks, handling interrupts). smp_send_stop()
114 * activity (executing tasks, handling interrupts). smp_send_stop()
/linux-4.1.27/kernel/debug/kdb/
kdb_bt.c:124 /* Run the active tasks first */ for_each_online_cpu()
130 /* Now the inactive tasks */ kdb_do_each_thread()
/linux-4.1.27/drivers/iio/
industrialio-triggered-buffer.c:33 * This function combines some common tasks which will normally be performed
/linux-4.1.27/drivers/staging/comedi/drivers/
amplc_pc236.c:37 * used to wake up tasks. This is like the comedi_parport device, but the
amplc_pci236.c:39 * external trigger, which can be used to wake up tasks. This is like
comedi_parport.c:64 * as a external trigger, which can be used to wake up tasks.
/linux-4.1.27/arch/x86/lib/
delay.c:65 /* Allow RT tasks to run */ delay_tsc()
/linux-4.1.27/include/video/
uvesafb.h:94 /* Max number of concurrent tasks */
/linux-4.1.27/include/trace/events/
kmem.h:154 * it has no impact on the condition since tasks can migrate
173 * it has no impact on the condition since tasks can migrate
290 * it has no impact on the condition since tasks can migrate
sched.h:402 * Tracepoint for showing priority inheritance modifying a tasks
482 * Tracks migration of tasks from one runqueue to another. Can be used to
rcu.h:218 * Tracepoint for tasks blocking within preemptible-RCU read-side
246 * Tracepoint for tasks that blocked within a given preemptible-RCU
276 * whether there are any blocked tasks blocking the current grace period.
/linux-4.1.27/drivers/oprofile/
buffer_sync.c:60 list_add(&task->tasks, &dying_tasks); task_free_notify()
427 /* Move tasks along towards death. Any tasks on dead_tasks
447 list_for_each_entry_safe(task, ttask, &local_dead_tasks, tasks) { process_task_mortuary()
448 list_del(&task->tasks); process_task_mortuary()
/linux-4.1.27/tools/perf/
builtin-sched.c:129 struct task_desc **tasks; member in struct:perf_sched
362 sched->tasks = realloc(sched->tasks, sched->nr_tasks * sizeof(struct task_desc *)); register_pid()
363 BUG_ON(!sched->tasks); register_pid()
364 sched->tasks[task->nr] = task; register_pid()
379 task = sched->tasks[i]; print_task_traces()
391 task1 = sched->tasks[i]; add_cross_task_wakeups()
395 task2 = sched->tasks[j]; add_cross_task_wakeups()
563 parms->task = task = sched->tasks[i]; create_tasks()
586 task = sched->tasks[i]; wait_for_tasks()
599 task = sched->tasks[i]; wait_for_tasks()
622 task = sched->tasks[i]; wait_for_tasks()
builtin-timechart.c:1461 /* We'd like to show at least proc_num tasks; write_svg_file()
1939 "highlight tasks. Pass duration in ns or process name.", cmd_timechart()
1942 OPT_BOOLEAN('T', "tasks-only", &tchart.tasks_only, cmd_timechart()
1950 "min. number of tasks to print"), cmd_timechart()
1972 OPT_BOOLEAN('T', "tasks-only", &tchart.tasks_only, cmd_timechart()
builtin-inject.c:441 "where and how long tasks slept"), cmd_inject()
/linux-4.1.27/drivers/scsi/bnx2i/
bnx2i_sysfs.c:55 * because of how libiscsi preallocates tasks.
/linux-4.1.27/arch/powerpc/mm/
mmu_context_hash32.c:37 * at most around 30,000 tasks in the system anyway, and it means
H A Dmmu_context_nohash.c426 * task switch. A better way would be to keep track of tasks that mmu_context_init()
428 * tasks don't always have to pay the TLB reload overhead. The mmu_context_init()
/linux-4.1.27/arch/arc/kernel/
ctx_sw.c:11 * backtrace out of it (e.g. tasks sleeping in kernel).
/linux-4.1.27/mm/
vmacache.c:23 * Single threaded tasks need not iterate the entire vmacache_flush_all()
page-writeback.c:260 * real-time tasks.
529 * conditions, or when there are 1000 dd tasks writing to a slow 10MB/s USB key.
530 * In the other normal situations, it acts more gently by throttling the tasks
947 * Normal bdi tasks will be curbed at or below it in long term.
948 * Obviously it should be around (write_bw / N) when there are N dd tasks.
988 * if there are N dd tasks, each throttled at task_ratelimit, the bdi's bdi_update_dirty_ratelimit()
1228 * (N * 10ms) on 2^N concurrent tasks. bdi_min_pause()
1512 * to go through, so that tasks on them still remain responsive. balance_dirty_pages()
1550 * Normal tasks are throttled by
1559 * randomly into the running tasks. This works well for the above worst case,
1595 * 1000+ tasks, all of them start dirtying pages at exactly the same balance_dirty_pages_ratelimited()
1606 * Pick up the dirtied pages by the exited tasks. This avoids lots of balance_dirty_pages_ratelimited()
1607 * short-lived tasks (eg. gcc invocations in a kernel build) escaping balance_dirty_pages_ratelimited()
oom_kill.c:339 * dump_tasks - dump current memory state of all system tasks
343 * Dumps the current memory state of all eligible tasks. Tasks not in the same
427 * that TIF_MEMDIE tasks should be ignored. mark_tsk_oom_victim()
459 * The function cannot be called when there are runnable user tasks because
821 * There shouldn't be any user tasks runnable while the pagefault_out_of_memory()
H A Dmmu_notifier.c47 * runs with mm_users == 0. Other tasks may still invoke mmu notifiers
/linux-4.1.27/sound/pci/cs46xx/
H A Ddsp_spos.c564 snd_iprintf(buffer,"\n%04x %s:\n",ins->tasks[i].address,ins->tasks[i].task_name); cs46xx_dsp_proc_task_tree_read()
566 for (col = 0,j = 0;j < ins->tasks[i].size; j++,col++) { cs46xx_dsp_proc_task_tree_read()
572 val = readl(dst + (ins->tasks[i].address + j) * sizeof(u32)); cs46xx_dsp_proc_task_tree_read()
1022 strcpy(ins->tasks[ins->ntask].task_name, name); _map_task_tree()
1024 strcpy(ins->tasks[ins->ntask].task_name, "(NULL)"); _map_task_tree()
1025 ins->tasks[ins->ntask].address = dest; _map_task_tree()
1026 ins->tasks[ins->ntask].size = size; _map_task_tree()
1029 ins->tasks[ins->ntask].index = ins->ntask; _map_task_tree()
1030 desc = (ins->tasks + ins->ntask); _map_task_tree()
2015 struct dsp_task_descriptor *t = &ins->tasks[i]; cs46xx_dsp_resume()
H A Dcs46xx_dsp_spos.h194 struct dsp_task_descriptor tasks[DSP_MAX_TASK_DESC]; member in struct:dsp_spos_instance
H A Dcs46xx_dsp_scb_types.h93 may be freed for use by other tasks, but the pointer to the SCB must
118 /* Pointer to this task's parameter block & stream function pointer
/linux-4.1.27/drivers/misc/mic/card/
H A Dmic_device.c244 * mic_driver_init - MIC driver initialization tasks.
290 * mic_driver_uninit - MIC driver uninitialization tasks.
/linux-4.1.27/drivers/net/ethernet/smsc/
H A Dsmc9194.h50 . that is needed for simple run time tasks.
199 . or slightly complicated, repeated tasks.
/linux-4.1.27/net/mac80211/
H A Docb.c32 * enum ocb_deferred_task_flags - mac80211 OCB deferred tasks
33 * @OCB_WORK_HOUSEKEEPING: run the periodic OCB housekeeping tasks
H A Dmesh.h49 * enum mesh_deferred_task_flags - mac80211 mesh deferred tasks
53 * @MESH_WORK_HOUSEKEEPING: run the periodic mesh housekeeping tasks
/linux-4.1.27/fs/
H A Dcoredump.c326 * We should find and kill all tasks which use this mm, and we should zap_threads()
334 * process to the tail of init_task.tasks list, and lock/unlock zap_threads()
343 * It does list_replace_rcu(&leader->tasks, &current->tasks), zap_threads()
671 * Using user namespaces, normal user tasks can change do_coredump()
H A Ddcookies.c10 * kernel until released by the tasks needing the persistent
/linux-4.1.27/tools/perf/util/
H A Devent.c387 DIR *tasks; __event__synthesize_thread() local
410 tasks = opendir(filename); __event__synthesize_thread()
411 if (tasks == NULL) { __event__synthesize_thread()
416 while (!readdir_r(tasks, &dirent, &next) && next) { __event__synthesize_thread()
448 closedir(tasks); __event__synthesize_thread()
/linux-4.1.27/arch/x86/kernel/cpu/
H A Dperf_event_intel_cqm.c67 * rmid 0 is reserved by the hardware for all non-monitored tasks, which
223 * tasks that are not monitored. intel_cqm_setup_rmid_cache()
242 * Determine if @a and @b measure the same set of tasks.
244 * If @a and @b measure the same set of tasks then we want to share a
289 * Determine if @a's tasks intersect with @b's tasks
838 /* All tasks in a group share an RMID */ intel_cqm_setup_event()
/linux-4.1.27/kernel/events/
H A Dhw_breakpoint.c55 /* tsk_pinned[n] is the number of tasks having n+1 breakpoints */
69 /* Keep track of the breakpoints attached to tasks */
261 * bp for every cpu and we keep the max one. Same for the per tasks
/linux-4.1.27/drivers/pwm/
H A Dpwm-jz4740.c56 * Timers 0 and 1 are used for system tasks, so they are unavailable jz4740_pwm_request()
/linux-4.1.27/arch/x86/kernel/
H A Dprocess_64.c264 * switch_to(x,y) should switch tasks from x to y.
404 * preempt_count of all tasks was equal here and this would not be __switch_to()
492 /* Mark the associated mm as containing 32-bit tasks. */ set_personality_ia32()
543 * The task's stack pointer points at the location where the get_wchan()
H A Dprocess_32.c215 * switch_to(x,y) should switch tasks from x to y.
282 * preempt_count of all tasks was equal here and this would not be __switch_to()
/linux-4.1.27/arch/sh/kernel/
H A Dprocess_32.c177 * switch_to(x,y) should switch tasks from x to y.
/linux-4.1.27/drivers/tty/
H A Dsysrq.c290 .help_msg = "show-blocked-tasks(w)",
349 .help_msg = "terminate-all-tasks(e)",
394 .help_msg = "kill-all-tasks(i)",
405 .help_msg = "nice-all-RT-tasks(n)",
/linux-4.1.27/arch/arm64/kernel/
H A Dvdso.c54 * Create and map the vectors page for AArch32 tasks.
H A Dprocess.c113 * activity (executing tasks, handling interrupts). smp_send_stop()
125 * activity (executing tasks, handling interrupts). smp_send_stop()
H A Dfpsimd.c53 * when switching between tasks. Instead, we can defer this check to userland
/linux-4.1.27/arch/cris/arch-v10/kernel/
H A Dptrace.c32 * in the task's thread struct get_reg()
/linux-4.1.27/arch/frv/mm/
H A Dinit.c73 /* allocate some pages for kernel housekeeping tasks */ paging_init()
/linux-4.1.27/arch/arm/mach-pxa/
H A Dpalmte2.c80 GPIO7_GPIO, /* tasks */
/linux-4.1.27/kernel/trace/
H A Dtrace_sched_wakeup.c533 * - wakeup tracer handles all tasks in the system, independently probe_wakeup()
535 * - wakeup_rt tracer handles tasks belonging to sched_dl and probe_wakeup()
537 * - wakeup_dl handles tasks belonging to sched_dl class only. probe_wakeup()
713 /* make sure we put back any tasks we are tracing */ wakeup_tracer_reset()
H A Dtrace_syscalls.c51 * the 32bit tasks the same as they do for 64bit tasks.
H A Dring_buffer_benchmark.c439 * Run them as low-prio background tasks by default: ring_buffer_benchmark_init()
/linux-4.1.27/include/asm-generic/
H A Dmutex-xchg.h31 * to ensure that any waiting tasks are woken up by the __mutex_fastpath_lock()
/linux-4.1.27/include/drm/ttm/
H A Dttm_memory.h60 * for the GPU, and this will otherwise block other workqueue tasks(?)
/linux-4.1.27/drivers/net/ethernet/mellanox/mlx4/
H A Den_main.c288 /* Create our own workqueue for reset/multicast tasks mlx4_en_add()
295 /* At this stage all non-port specific tasks are complete: mlx4_en_add()
/linux-4.1.27/drivers/target/
H A Dtarget_core_tmr.c413 * tasks shall be terminated by the device server without any response core_tmr_lun_reset()
414 * to the application client. A TAS bit set to one specifies that tasks core_tmr_lun_reset()
/linux-4.1.27/drivers/pci/hotplug/
H A Dacpiphp_core.c131 * Actual tasks are done in acpiphp_enable_slot()
148 * Actual tasks are done in acpiphp_disable_slot()
/linux-4.1.27/drivers/acpi/acpica/
H A Dhwxfsleep.c283 * various OS-specific tasks between the two steps.
394 * various OS-specific tasks between the two steps. ACPI_EXPORT_SYMBOL()
H A Devxface.c291 /* Make sure all deferred notify tasks are completed */ ACPI_EXPORT_SYMBOL()
351 /* Make sure all deferred notify tasks are completed */ ACPI_EXPORT_SYMBOL()
1015 /* Make sure all deferred GPE tasks are completed */ ACPI_EXPORT_SYMBOL()
/linux-4.1.27/arch/mips/kernel/
H A Dprocess.c159 * New tasks lose permission to use the fpu. This accelerates context copy_thread()
621 * progress when FP is first used in a task's time slice. Pretty much all mips_set_process_fp_mode()
/linux-4.1.27/sound/soc/fsl/
H A Dmpc5200_dma.c414 * DMA tasks */ mpc5200_audio_dma_create()
422 dev_err(&op->dev, "Could not allocate bestcomm tasks\n"); mpc5200_audio_dma_create()
/linux-4.1.27/drivers/staging/wlan-ng/
H A Dprism2usb.c162 * might have some tasks or tasklets that must be prism2sta_disconnect_usb()
/linux-4.1.27/drivers/infiniband/ulp/ipoib/
H A Dipoib_verbs.c161 * the various IPoIB tasks assume they will never race against ipoib_transport_dev_init()
/linux-4.1.27/arch/sh/kernel/cpu/sh5/
H A Dswitchto.S64 to allow unwinding switched tasks in show_state() */
/linux-4.1.27/arch/sparc/include/asm/
H A Dprocessor_64.h68 * Used with spin lock debugging to catch tasks
/linux-4.1.27/arch/um/kernel/
H A Dphysmem.c78 * of all user space processes/kernel tasks.
/linux-4.1.27/arch/mips/include/asm/octeon/
H A Dcvmx-helper.h30 * Helper functions for common, but complicated tasks.
/linux-4.1.27/drivers/uio/
H A Duio_dmem_genirq.c131 * Serialize this operation to support multiple tasks. uio_dmem_genirq_irqcontrol()
H A Duio_pdrv_genirq.c88 * Serialize this operation to support multiple tasks and concurrency uio_pdrv_genirq_irqcontrol()
/linux-4.1.27/arch/powerpc/platforms/52xx/
H A Dlite5200_pm.c179 /* restore tasks */ lite5200_restore_regs()
/linux-4.1.27/arch/s390/include/asm/
H A Delf.h161 tasks are aligned to 4GB. */
/linux-4.1.27/arch/blackfin/mach-common/
H A Dinterrupt.S282 * procedure, since we may not switch tasks when IRQ levels are
/linux-4.1.27/arch/arm/nwfpe/
H A Dentry.S52 so that the emulator startup cost can be avoided for tasks that don't
/linux-4.1.27/arch/alpha/mm/
H A Dinit.c178 forking other tasks. */ callback_init()
/linux-4.1.27/arch/arc/include/asm/
H A Dmmu_context.h29 * with same vaddr (different tasks) to co-exist. This provides for
/linux-4.1.27/drivers/md/bcache/
H A Dalloc.c520 * The idea is if you've got multiple tasks pulling data into the cache at the
529 * Both of those tasks will be doing fairly random IO so we can't rely on
/linux-4.1.27/drivers/scsi/isci/
H A Dtask.c491 * in the device, because tasks driving resets may land here isci_task_abort_task()
760 * primary duty of this function is to cleanup tasks, so that is the isci_reset_device()
/linux-4.1.27/drivers/scsi/
H A Dlibiscsi.c482 * up mgmt tasks then returns the task to the pool.
669 * we should start checking the cmdsn numbers for mgmt tasks. iscsi_prep_mgmt_task()
1119 * This should be used for mgmt tasks like login and nops, or if
1344 * This should be used for cmd tasks.
2021 * too many tasks or the LU is bad. iscsi_eh_cmd_timed_out()
2712 * tasks they support. The iscsi layer reserves ISCSI_MGMT_CMDS_MAX tasks
2737 * The iscsi layer needs some tasks for nop handling and tmfs, iscsi_session_setup()
/linux-4.1.27/drivers/s390/cio/
H A Dchp.c38 /* Map for pending configure tasks. */
50 /* Workqueue to perform pending configure tasks. */
/linux-4.1.27/arch/tile/kernel/
H A Dprocess.c113 * calling schedule_tail(), etc., and (for userspace tasks) copy_thread()
540 * Remove the task from the list of tasks that are associated exit_thread()
H A Dhardwall.c103 struct list_head task_head; /* head of tasks in this hardwall */
862 * Deactivate any remaining tasks. It's possible to race with hardwall_destroy()
865 * deactivate any remaining tasks before freeing the hardwall_destroy()
/linux-4.1.27/arch/s390/kernel/
H A Dcompat_signal.c400 * gprs_high are always present for 31-bit compat tasks. setup_frame32()
487 * gprs_high are always present for 31-bit compat tasks. setup_rt_frame32()
/linux-4.1.27/arch/powerpc/include/asm/
H A Dimmap_qe.h70 __be32 cevter; /* QE virtual tasks event register */
71 __be32 cevtmr; /* QE virtual tasks mask register */
/linux-4.1.27/arch/arm/common/
H A DbL_switcher.c96 * Fancy under cover tasks could be performed here. For now bL_do_switch()
204 * tasks to be scheduled in the mean time. bL_switch_to()
/linux-4.1.27/tools/testing/selftests/mqueue/
H A Dmq_perf_tests.c109 "other tasks on the system. This test is intended "
131 "system level tasks as this would free up resources on "
/linux-4.1.27/drivers/mtd/
H A Dmtdblock.c134 * means. Let's declare it empty and leave buffering tasks to write_cached_data()
/linux-4.1.27/arch/xtensa/kernel/
H A Dprocess.c63 /* Make sure we don't switch tasks during this operation. */ coprocessor_release_all()
/linux-4.1.27/drivers/char/
H A Dmspec.c89 * protect in fork case where multiple tasks share the vma_data.
/linux-4.1.27/drivers/acpi/
H A Dacpi_pad.c194 * scheduled out for 5% CPU time to not starve other tasks. But power_saving_thread()
/linux-4.1.27/fs/hfs/
H A Dsuper.c377 * filesystem. It performs all the tasks necessary to get enough data
/linux-4.1.27/include/net/irda/
H A Dirda_device.h77 IRDA_TASK_INIT, /* All tasks are initialized with this state */
/linux-4.1.27/arch/parisc/kernel/
H A Dptrace.c93 * disable interrupts in the task's PSW here also, to avoid user_enable_single_step()
/linux-4.1.27/arch/blackfin/kernel/
H A Dprocess.c31 /* The number of tasks currently using a L1 stack area. The SRAM is
/linux-4.1.27/arch/cris/arch-v32/kernel/
H A Dptrace.c42 * in the task's thread struct get_reg()
/linux-4.1.27/kernel/irq/
H A Dproc.c336 * tasks might try to register at the same time. register_irq_proc()
/linux-4.1.27/security/yama/
H A Dyama_lsm.c108 * yama_ptracer_del - remove exceptions related to the given tasks
/linux-4.1.27/drivers/scsi/sym53c8xx_2/
H A Dsym_hipd.h106 * Number of tasks per device we want to handle.
114 * Do not use more tasks than we can handle.
541 * Set when we want to clear all tasks.
/linux-4.1.27/drivers/media/i2c/
H A Dsaa7115.c337 R_80_GLOBAL_CNTL_1, 0x0, /* No tasks enabled at init */
357 R_80_GLOBAL_CNTL_1, 0x00, /* reset tasks */
510 R_80_GLOBAL_CNTL_1, 0x00, /* reset tasks */
512 R_80_GLOBAL_CNTL_1, 0x30, /* Activate both tasks */
520 R_80_GLOBAL_CNTL_1, 0x00, /* reset tasks */
