Lines Matching refs:task
99 structure holds a pointer to the task, as well as the mutex that
100 the task is blocked on. It also has the plist node structures to
101 place the task in the waiter_list of a mutex as well as the
102 pi_list of a mutex owner task (described below).
104 waiter is sometimes used in reference to the task that is waiting
105 on a mutex. This is the same as waiter->task.
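To make the layout above concrete, here is a minimal C sketch of such a
waiter.  The type and field names are illustrative stand-ins, not the
kernel's actual definitions (the real structure is struct rt_mutex_waiter,
which uses the kernel's plist and task_struct types):

    /* Illustrative sketch only -- not the kernel's definitions. */
    struct task_struct;                     /* the blocked task (opaque here) */
    struct rt_mutex;                        /* the mutex (opaque here)        */

    struct plist_node_sketch {              /* stand-in for a plist node      */
            int prio;                       /* sort key: lower = higher prio  */
            struct plist_node_sketch *next; /* simplified single link         */
    };

    struct rt_mutex_waiter_sketch {
            struct plist_node_sketch list_entry;    /* on the mutex's wait_list */
            struct plist_node_sketch pi_list_entry; /* on the owner's pi_list   */
            struct task_struct *task;       /* the waiting task (waiter->task)  */
            struct rt_mutex *lock;          /* the mutex the task is blocked on */
    };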
114 Note: task and process are used interchangeably in this document, mostly to
222 The top of the task's PI list is always the highest priority task that
223 is waiting on a mutex that is owned by the task. So if the task has
224 inherited a priority, it will always be the priority of the task that is
227 This list is stored in the task structure of a process as a plist called
228 pi_list. This list is protected by a spin lock also in the task structure,
315 have the task structure on at least a four-byte alignment (and if this is
365 The functions implementing the task adjustments are rt_mutex_adjust_prio,
366 __rt_mutex_adjust_prio (same as the former, but expects the task's pi_lock
371 rt_mutex_getprio returns the priority that the task should have. Either the
372 task's own normal priority, or if a process of a higher priority is waiting on
373 a mutex owned by the task, then that higher priority should be returned.
374 Since the pi_list of a task holds a priority-ordered list of all the top
375 waiters of all the mutexes that the task owns, rt_mutex_getprio simply needs
380 prio is returned. This is because the prio field in the task structure
385 result does not equal the task's current priority, then rt_mutex_setprio
386 is called to adjust the priority of the task to the new priority.
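As a rough illustration of the comparison just described, the sketch below
uses made-up types and helpers (sketch_task, sketch_getprio,
sketch_adjust_prio) rather than the kernel's own.  Per the note above, a
numerically lower prio means a higher priority:

    /* Illustrative sketch of the described logic -- not kernel code. */
    struct sketch_task {
            int prio;               /* current (possibly boosted) priority    */
            int normal_prio;        /* the task's own, un-boosted priority    */
            int top_pi_waiter_prio; /* prio of the top waiter on its pi_list  */
            int has_pi_waiters;     /* non-zero if the pi_list is not empty   */
    };

    /* The priority the task *should* have: its normal priority, or its top
     * pi waiter's priority if that one is higher (numerically lower). */
    static int sketch_getprio(const struct sketch_task *task)
    {
            if (!task->has_pi_waiters)
                    return task->normal_prio;
            return task->top_pi_waiter_prio < task->normal_prio ?
                   task->top_pi_waiter_prio : task->normal_prio;
    }

    /* Apply the result only when it differs from the current priority; the
     * kernel performs the real change through the scheduler. */
    static void sketch_adjust_prio(struct sketch_task *task)
    {
            int prio = sketch_getprio(task);

            if (prio != task->prio)
                    task->prio = prio;
    }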
391 or decrease the priority of the task. In the case that a higher priority
392 process has just blocked on a mutex owned by the task, __rt_mutex_adjust_prio
393 would increase/boost the task's priority. But if a higher priority task
395 would decrease/unboost the priority of the task. That is because the pi_list
396 always contains the highest priority task that is waiting on a mutex owned
397 by the task, so we only need to compare the priority of that top pi waiter
398 to the normal priority of the given task.
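Continuing the illustrative sketch above (and assuming those helpers are in
scope), the same adjustment call both boosts and unboosts, driven only by
what the top pi waiter currently is:

    /* A prio-50 owner gains a prio-10 waiter (boost), which then leaves
     * because of a signal or timeout (unboost back to normal). */
    void sketch_boost_unboost_demo(void)
    {
            struct sketch_task owner = {
                    .prio = 50, .normal_prio = 50,
                    .top_pi_waiter_prio = 0, .has_pi_waiters = 0,
            };

            owner.top_pi_waiter_prio = 10;  /* high-prio task blocks on us   */
            owner.has_pi_waiters = 1;
            sketch_adjust_prio(&owner);     /* owner.prio becomes 10         */

            owner.has_pi_waiters = 0;       /* that waiter gives up and goes */
            sketch_adjust_prio(&owner);     /* owner.prio returns to 50      */
    }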
413 rt_mutex_adjust_prio_chain is called with a task to be checked for PI
415 check for deadlocking, the mutex that the task owns, and a pointer to a waiter
425 Before this function is called, the task has already had rt_mutex_adjust_prio
426 performed on it. This means that the task is set to the priority that it
427 should be at, but the plist nodes of the task's waiter have not been updated
428 with the new priorities, and this task may not be in the proper locations
429 in the pi_lists and wait_lists that the task is blocked on. This function
432 A loop is entered, where task is the owner to be checked for PI changes that
433 was passed by parameter (for the first iteration). The pi_lock of this task is
434 taken to prevent any more changes to the pi_list of the task. This also
436 task.
438 If the task is not blocked on a mutex then the loop is exited. We are at
442 on the current mutex) is the top pi waiter of the task; that is, whether this
443 waiter is at the top of the task's pi_list. If it is not, it either means that
445 mutexes that the task owns, or that the waiter has just woken up via a signal
447 we don't need to make any more changes to the priority of the current task, or of any
448 task that owns a mutex that this current task is waiting on. A priority chain
449 walk is only needed when a task gains a new top pi waiter.
451 The next check sees if the task's waiter plist node has a priority equal to
452 the priority the task is set at. If they are equal, then we are done with
454 task adjusted, but the plist nodes that hold the task in other processes'
457 Next, we look at the mutex that the task is blocked on. The mutex's wait_lock
462 Now that we have both the pi_lock of the task and the wait_lock of
463 the mutex the task is blocked on, we update the task's waiter's plist node
466 Now we release the pi_lock of the task.
469 task's entry in the owner's pi_list. If the task is the highest priority
471 from the owner's pi_list, and replace it with the task.
473 Note: It is possible that the task was the current top waiter on the mutex,
474 in which case the task is not yet on the pi_list of the waiter. This
478 If the task was not the top waiter of the mutex, but it was before we
480 task. In this case, the task is removed from the pi_list of the owner,
483 Lastly, we unlock both the pi_lock of the task and the mutex's
485 loop, the previous owner of the mutex will be the task that will be
491 become the task that is being processed in the PI chain, since
492 we have taken that task's pi_lock at the beginning of the loop.
497 end of the PI chain is when the task isn't blocked on anything or the
498 task's waiter structure "task" element is NULL. This check is
499 protected only by the task's pi_lock. But the code to unlock the mutex
500 sets the task's waiter structure "task" element to NULL with only
502 Isn't this a race condition if the task becomes the new owner?
506 task and continue the loop, doing the end of PI chain check again.
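The loop just described can be summarized with the structural sketch below.
Every sketch_* name is a hypothetical placeholder, and details such as
deadlock detection, reference counting, and interrupt-safe locking are
omitted; only the ordering of the steps follows the text:

    /* Structural sketch of the walk -- placeholders, not kernel code. */
    struct s_task;
    struct s_mutex;
    struct s_waiter;

    void             sketch_pi_lock(struct s_task *t);
    void             sketch_pi_unlock(struct s_task *t);
    int              sketch_wait_trylock(struct s_mutex *m);  /* 0 on failure */
    void             sketch_wait_unlock(struct s_mutex *m);
    struct s_mutex  *sketch_blocked_on(struct s_task *t);     /* NULL if none */
    struct s_waiter *sketch_waiter_of(struct s_task *t);      /* t's waiter   */
    struct s_waiter *sketch_top_pi_waiter(struct s_task *t);
    struct s_task   *sketch_owner(struct s_mutex *m);
    int              sketch_waiter_prio_matches(struct s_waiter *w,
                                                struct s_task *t);
    void             sketch_requeue_on_wait_list(struct s_waiter *w,
                                                 struct s_mutex *m);
    void             sketch_update_owner_pi_list(struct s_mutex *m,
                                                 struct s_task *owner);

    void sketch_adjust_prio_chain(struct s_task *task,
                                  struct s_waiter *orig_waiter)
    {
            for (;;) {
                    struct s_mutex *lock;
                    struct s_task *owner;
                    struct s_waiter *waiter;

                    sketch_pi_lock(task);

                    lock = sketch_blocked_on(task);
                    if (!lock) {                       /* top of the PI chain  */
                            sketch_pi_unlock(task);
                            return;
                    }

                    if (sketch_top_pi_waiter(task) != orig_waiter) {
                            sketch_pi_unlock(task);    /* no new top pi waiter */
                            return;
                    }

                    waiter = sketch_waiter_of(task);
                    if (sketch_waiter_prio_matches(waiter, task)) {
                            sketch_pi_unlock(task);    /* nothing left to fix  */
                            return;
                    }

                    if (!sketch_wait_trylock(lock)) {  /* can't take wait_lock */
                            sketch_pi_unlock(task);
                            continue;                  /* restart the loop     */
                    }

                    sketch_requeue_on_wait_list(waiter, lock);
                    sketch_pi_unlock(task);

                    owner = sketch_owner(lock);
                    sketch_pi_lock(owner);
                    sketch_update_owner_pi_list(lock, owner);  /* may (de)boost */
                    sketch_pi_unlock(owner);
                    sketch_wait_unlock(lock);

                    task = owner;                      /* walk up the chain    */
            }
    }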
569 The slow path function is where the task's waiter structure is created on
572 the task on the wait_list of the mutex, and if need be, the pi_list of
582 try_to_take_rt_mutex is used every time the task tries to grab a mutex in the
586 the current task also won't have any waiters. But we don't have the lock
604 current task. This is because this function is also used for the pending
640 priority than the current task.
652 in the loop, this would likely succeed, since the task would likely be
658 The waiter structure has a "task" field that points to the task that is blocked
660 or if the task is a pending owner and had its mutex stolen. If the "task"
667 the process. The "task" field is set to the process, and the "lock" field
684 mutex (waiter "task" field is not NULL), then we go to sleep (call schedule).
703 highest priority task on the wait_list.
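The sleep/wake handshake on the waiter's "task" field can be sketched as
follows.  The names are hypothetical and all locking and memory-ordering
details are omitted, as are the pending-owner and lock-stealing cases; the
point is only that the loop sleeps while the "task" field is non-NULL
(still queued on the mutex), and that a waiter that leaves because of a
signal or timeout still has a non-NULL "task" field and must be removed
from the mutex's wait_list:

    /* Illustrative sketch of the handshake -- not kernel code. */
    struct s_task;
    struct s_mutex;

    struct s_waiter_min {
            struct s_task  *task;   /* cleared by the unlock path when the
                                     * mutex is handed to this waiter       */
            struct s_mutex *lock;   /* the mutex this waiter is queued on   */
    };

    void sketch_schedule(void);            /* give up the CPU               */
    int  sketch_interrupted(void);         /* non-zero on signal or timeout */

    /* Returns 0 once the unlock path has cleared waiter->task; returns
     * non-zero if interrupted, in which case the caller must still take
     * the waiter off the mutex's wait_list. */
    int sketch_wait_for_mutex(struct s_waiter_min *waiter)
    {
            while (waiter->task != NULL) {
                    if (sketch_interrupted())
                            return 1;
                    sketch_schedule();
            }
            return 0;
    }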
708 If a timeout or signal occurred, the waiter's "task" field would not be
709 NULL and the task needs to be taken off the wait_list of the mutex and perhaps
747 as well as the pi_list of the current owner. The task field of the new