Lines Matching refs:that

10 It doesn't describe the reasons why rtmutex.c exists. For that please see
12 that happen without this code, but that is only in the context needed to understand
16 inheritance (PI) algorithm that is used, as well as reasons for the
17 decisions that were made to implement PI in the manner that was done.
26 to use a resource that a lower priority process has (a mutex for example),
36 that C owns, so it must wait and let C run to release the lock. But in the
70 inherited priority, and A can then continue with the resource that C had.
75 Here I explain some terminology that is used in this document to help describe
76 the design that is used to implement PI.
78 PI chain - The PI chain is an ordered series of locks and processes that cause
79 processes to inherit priorities from a previous process that is
83 mutex - In this document, to differentiate from locks that implement
84 PI and spin locks that are used in the PI code, from now on
88 referring to spin locks that are used to protect parts of the PI
95 waiter - A waiter is a struct that is stored on the stack of a blocked
99 structure holds a pointer to the task, as well as the mutex that
104 waiter is sometimes used in reference to the task that is waiting
107 waiters - A list of processes that are blocked on a mutex.
112 that a specific process owns.
115 differentiate between two processes that are being described together.
121 The PI chain is a list of processes and mutexes that may cause priority
167 have multiple chains merge at mutexes. If we add another process G that is
188 the struct list_head functionality that is already in the kernel.
193 being that plist is a priority sorted linked list. This means that the
194 entries of the plist are kept sorted by priority, such that it takes O(1) to retrieve the
198 Another difference, which is important for implementation, is that, unlike
200 So the head of the list is declared as struct plist_head and nodes that will
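
As a rough, stand-alone illustration of the behaviour described above (a list kept
sorted by priority, with O(1) retrieval of the highest priority entry), here is a
small userspace sketch. The names pnode, phead, padd and pfirst are simplified
stand-ins, not the kernel's plist API, and the real plist is smarter about
insertion than this linear walk:

  #include <stdio.h>

  /* Lower number means higher priority, as in the kernel. */
  struct pnode {
          int prio;
          struct pnode *next;
  };

  struct phead {
          struct pnode *first;            /* highest priority node */
  };

  /* Insert keeping the list sorted by priority. */
  static void padd(struct phead *head, struct pnode *node)
  {
          struct pnode **pp = &head->first;

          while (*pp && (*pp)->prio <= node->prio)
                  pp = &(*pp)->next;
          node->next = *pp;
          *pp = node;
  }

  /* Retrieving the highest priority node is O(1). */
  static struct pnode *pfirst(struct phead *head)
  {
          return head->first;
  }

  int main(void)
  {
          struct phead head = { NULL };
          struct pnode a = { 30 }, b = { 10 }, c = { 20 };

          padd(&head, &a);
          padd(&head, &b);
          padd(&head, &c);
          printf("top priority: %d\n", pfirst(&head)->prio);      /* prints 10 */
          return 0;
  }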
207 Every mutex keeps track of all the waiters that are blocked on it. The mutex
209 a spin lock that is located in the mutex structure. This lock is called
218 a list of all top waiters of the mutexes that are owned by the process.
219 Note that this list only holds the top waiters and not all waiters that are
222 The top of the task's PI list is always the highest priority task that
223 is waiting on a mutex that is owned by the task. So if the task has
224 inherited a priority, it will always be the priority of the task that is
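
To make the relationships described above easier to picture, here is a
compile-checkable sketch of the data involved. The field names (wait_list,
pi_list) follow the prose of this document, while the fixed-size arrays and the
exact layout are simplifications for illustration, not the real rtmutex.c
structures:

  #include <stdio.h>

  struct task;
  struct rt_mutex;

  /* Lives on the stack of the blocked process. */
  struct waiter {
          struct task     *task;          /* the process that is blocked */
          struct rt_mutex *lock;          /* the mutex it is blocked on */
  };

  /* Each mutex tracks every task blocked on it, highest priority first,
   * protected by the mutex's wait_lock (not modeled here). */
  struct rt_mutex {
          struct task     *owner;
          struct waiter   *wait_list[8];
          int             nr_waiters;
  };

  /* Only the fields this document talks about.  pi_list holds the top
   * waiter of each mutex the task owns, highest priority first. */
  struct task {
          int             normal_prio;    /* lower number = higher priority */
          struct waiter   *pi_list[8];
          int             nr_pi_waiters;
  };

  int main(void)
  {
          struct task owner = { .normal_prio = 120 };
          struct task blocked = { .normal_prio = 10 };
          struct rt_mutex m = { .owner = &owner };
          struct waiter w = { .task = &blocked, .lock = &m };

          m.wait_list[m.nr_waiters++] = &w;               /* w waits on m */
          owner.pi_list[owner.nr_pi_waiters++] = &w;      /* w is m's top waiter */

          /* The owner's inherited priority comes from the top of its pi_list. */
          printf("owner runs at priority %d\n", owner.pi_list[0]->task->normal_prio);
          return 0;
  }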
241 be directly nested that way.
283 Now we add 4 processes that run each of these functions separately.
285 respectively, and such that D runs first and A last. With D being preempted
286 in func4 in the "do something again" area, we end up with the following locking:
300 it is still very difficult to actually produce a chain of that depth.
303 type of application that nests large numbers of mutexes to create a large
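
For reference, the kind of nesting this example relies on can be written out as
below. pthread mutexes stand in for rt-mutexes, the lock names L1..L4 are only
illustrative, and the point is just how unusual it is for code to chain locks
this way; in the scenario above each function is run by a separate process:

  #include <pthread.h>

  static pthread_mutex_t L1 = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t L2 = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t L3 = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t L4 = PTHREAD_MUTEX_INITIALIZER;

  static void func4(void)
  {
          pthread_mutex_lock(&L3);
          pthread_mutex_lock(&L4);
          /* "do something again" -- D is preempted here, holding L3 and L4 */
          pthread_mutex_unlock(&L4);
          pthread_mutex_unlock(&L3);
  }

  static void func3(void)
  {
          pthread_mutex_lock(&L2);
          pthread_mutex_lock(&L3);        /* C blocks here, on D */
          pthread_mutex_unlock(&L3);
          pthread_mutex_unlock(&L2);
  }

  static void func2(void)
  {
          pthread_mutex_lock(&L1);
          pthread_mutex_lock(&L2);        /* B blocks here, on C */
          pthread_mutex_unlock(&L2);
          pthread_mutex_unlock(&L1);
  }

  static void func1(void)
  {
          pthread_mutex_lock(&L1);        /* A blocks here, on B */
          pthread_mutex_unlock(&L1);
  }

  int main(void)
  {
          /* D, C, B and A each run one of these concurrently in the
           * scenario above; calling them serially here just shows the
           * lock nesting, so no blocking occurs. */
          func4();
          func3();
          func2();
          func1();
          return 0;
  }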
354 the system for architectures that support it. This will also be explained
361 The implementation of the PI code in rtmutex.c has several places where a
371 rt_mutex_getprio returns the priority that the task should have. Either the
373 a mutex owned by the task, then that higher priority should be returned.
375 waiters of all the mutexes that the task owns, rt_mutex_getprio simply needs
379 (Note: if you look at the code, you will notice that the lower number of
387 Note that rt_mutex_setprio is defined in kernel/sched/core.c to implement the
390 It is interesting to note that __rt_mutex_adjust_prio can either increase
391 or decrease the priority of the task. In the case that a higher priority
396 always contains the highest priority task that is waiting on a mutex owned
397 by the task, so we only need to compare the priority of that top pi waiter
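
The comparison described in the last few lines can be sketched as follows. This
is only a model of what rt_mutex_getprio is described as doing; the struct and
helper names are stand-ins, and a lower prio number means higher priority, as
the note above points out:

  #include <stdio.h>

  struct task {
          int normal_prio;        /* priority without any boosting */
          int top_pi_prio;        /* prio of the highest waiter on any owned
                                     mutex, or -1 if there are none */
  };

  /* The task's own priority, unless a higher priority task is blocked
   * on one of its mutexes. */
  static int effective_prio(struct task *t)
  {
          if (t->top_pi_prio >= 0 && t->top_pi_prio < t->normal_prio)
                  return t->top_pi_prio;          /* boosted */
          return t->normal_prio;
  }

  int main(void)
  {
          struct task t = { .normal_prio = 120, .top_pi_prio = 10 };

          printf("boosted:   %d\n", effective_prio(&t));  /* 10 */
          t.top_pi_prio = -1;                             /* waiter is gone */
          printf("deboosted: %d\n", effective_prio(&t));  /* 120 */
          return 0;
  }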
414 (de)boosting (the owner of a mutex that a process is blocking on), a flag to
415 check for deadlocking, the mutex that the task owns, and a pointer to a waiter
416 that is the process's waiter struct that is blocked on the mutex (although this
423 that the state of the owner and lock can change once this function is entered.
426 performed on it. This means that the task is set to the priority that it
428 with the new priorities, and that this task may not be in the proper locations
429 in the pi_lists and wait_lists that the task is blocked on. This function
430 solves all that.
432 A loop is entered, where task is the owner to be checked for PI changes that
435 prevents new tasks from completing the blocking on a mutex that is owned by this
441 A check is now done to see if the original waiter (the process that is blocked
443 waiter on the top of the task's pi_list. If it is not, it either means that
444 there is another process higher in priority that is blocked on one of the
445 mutexes that the task owns, or that the waiter has just woken up via a signal
448 task that owns a mutex that this current task is waiting on. A priority chain
453 the loop. Remember that the function started with the priority of the
454 task adjusted, but the plist nodes that hold the task in other processes
457 Next, we look at the mutex that the task is blocked on. The mutex's wait_lock
462 Now that we have both the pi_lock of the task as well as the wait_lock of
464 that is located on the mutex's wait_list.
473 Note: It is possible that the task was the current top waiter on the mutex,
479 did the priority updates, that means we are deboosting/lowering the
485 loop, the previous owner of the mutex will be the task that will be
488 Note: One might think that the owner of this mutex might have changed
490 The important thing to remember is that the owner could not have
491 become the task that is being processed in the PI chain, since
492 we have taken that task's pi_lock at the beginning of the loop.
493 So as long as there is an owner of this mutex that is not the same
505 wait_lock. If we fail that lock, we release the pi_lock of the
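
The lock juggling in this walk boils down to a trylock-and-back-off pattern:
hold the task's pi_lock, try for the mutex's wait_lock, and if that fails drop
everything and redo the step. A rough stand-alone model of just that pattern,
with pthread mutexes standing in for the kernel's spin locks and the requeueing
left out, might look like this:

  #include <pthread.h>
  #include <sched.h>

  struct task;

  struct mutex {
          pthread_mutex_t wait_lock;      /* a spin lock in the kernel */
          struct task     *owner;
  };

  struct task {
          pthread_mutex_t pi_lock;        /* a spin lock in the kernel */
          struct mutex    *blocked_on;    /* mutex this task waits for, or NULL */
  };

  /* Walk up the chain: take the task's pi_lock, try for the wait_lock of
   * the mutex it is blocked on, and if the trylock fails release the
   * pi_lock and start this step over, so neither lock is ever waited on
   * while the other is held. */
  static void walk_chain(struct task *task)
  {
          while (task) {
                  struct mutex *lock;

          retry:
                  pthread_mutex_lock(&task->pi_lock);
                  lock = task->blocked_on;
                  if (!lock) {
                          /* end of the chain */
                          pthread_mutex_unlock(&task->pi_lock);
                          break;
                  }
                  if (pthread_mutex_trylock(&lock->wait_lock)) {
                          pthread_mutex_unlock(&task->pi_lock);
                          sched_yield();
                          goto retry;
                  }

                  /* The real code requeues the waiter in the mutex's
                   * wait_list and in the owner's pi_list here. */

                  pthread_mutex_unlock(&task->pi_lock);
                  task = lock->owner;             /* move on to the owner */
                  pthread_mutex_unlock(&lock->wait_lock);
          }
  }

  int main(void)
  {
          struct mutex m = { .wait_lock = PTHREAD_MUTEX_INITIALIZER };
          struct task owner = { .pi_lock = PTHREAD_MUTEX_INITIALIZER };
          struct task blocked = { .pi_lock = PTHREAD_MUTEX_INITIALIZER,
                                  .blocked_on = &m };

          m.owner = &owner;
          walk_chain(&blocked);           /* visits blocked, then owner */
          return 0;
  }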
517 If you don't quite understand the above, that's OK. You don't have to,
525 What this means is that an owner was chosen by the process releasing the
526 mutex, but that owner has yet to wake up and actually take the mutex.
533 If a high priority process owns a mutex that a lower priority process is
535 process. What if the higher priority process wants to take that mutex again?
536 The high priority process would fail to take that mutex that it just gave up
538 latency of that critical section (since the low priority process just entered
541 There's no reason a high priority process that gives up a mutex should be
542 penalized if it tries to take that mutex again. If the new owner of the
543 mutex has not woken up yet, there's no reason that the higher priority process
544 could not take that mutex away.
547 new process is given a mutex that it was blocked on, it is only given
548 pending ownership. This means that it's the new owner, unless a higher
549 priority process comes in and tries to grab that mutex. If a higher priority
550 process does come along and wants that mutex, we let the higher priority
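
A sketch of the priority check behind this "lock stealing", with made-up helper
and field names (the real check lives in try_to_take_rt_mutex, described below);
lower prio numbers again mean higher priority:

  #include <stdio.h>

  struct task {
          int prio;                       /* lower number = higher priority */
  };

  struct mutex {
          struct task *pending_owner;     /* chosen by the releaser, not yet running */
  };

  /* A task may take the mutex away from a pending owner only if it is of
   * higher priority than that pending owner. */
  static int can_take(struct mutex *lock, struct task *me)
  {
          if (!lock->pending_owner)
                  return 1;               /* nothing to steal, just take it */
          return me->prio < lock->pending_owner->prio;
  }

  int main(void)
  {
          struct task pending = { .prio = 50 };
          struct task high = { .prio = 10 };
          struct task low  = { .prio = 120 };
          struct mutex m = { .pending_owner = &pending };

          printf("high priority task takes it: %d\n", can_take(&m, &high));  /* 1 */
          printf("low priority task takes it:  %d\n", can_take(&m, &low));   /* 0 */
          return 0;
  }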
561 The first thing that is tried is the fast taking of the mutex. This is
578 We then call try_to_take_rt_mutex. This is where the architecture that
583 slow path. The first thing that is done here is an atomic setting of
588 play nice for those architectures that do have CMPXCHG. By setting this flag
594 Now that we know that we can't have any races with the owner releasing the
599 1) Has an owner that is pending
615 means that if the mutex doesn't have any waiters, there's no accounting needed
621 add it to the current pi_list. Note that at this moment, the pending owner
623 would add itself back when it realizes that it had the ownership stolen
633 OK that we set that flag early, since now it is cleared.
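
The effect of setting the waiters flag early can be modeled with a CMPXCHG-style
fast path, as below. The encoding of the flag in the owner word is an assumption
made for illustration, not the rtmutex.c layout; the point is only that once the
flag is set, the owner's fast path compare-and-exchange fails and it is forced
into the slow path:

  #include <stdio.h>
  #include <stdatomic.h>
  #include <stdint.h>

  /* Model: the mutex's owner word holds a task pointer, with the low bit
   * used as the "has waiters" flag. */
  #define HAS_WAITERS     1UL

  struct mutex {
          atomic_uintptr_t owner;
  };

  /* Fast path unlock: only succeeds if the flag is clear; otherwise the
   * caller must take the slow path and deal with the waiters. */
  static int fast_unlock(struct mutex *lock, uintptr_t me)
  {
          uintptr_t expected = me;        /* owner with no waiter flag */

          return atomic_compare_exchange_strong(&lock->owner, &expected, 0);
  }

  int main(void)
  {
          struct mutex m;
          uintptr_t me = 0x1000;          /* stand-in for a task pointer */

          atomic_init(&m.owner, me);
          printf("fast unlock, no waiters:    %d\n", fast_unlock(&m, me));  /* 1 */

          atomic_init(&m.owner, me | HAS_WAITERS);        /* a waiter set the flag */
          printf("fast unlock, waiter queued: %d\n", fast_unlock(&m, me));  /* 0 */
          return 0;
  }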
638 The most interesting case is when we fail to take ownership. This means that
647 Now we enter a loop that will continue to try to take ownership of the mutex, or
658 The waiter structure has a "task" field that points to the task that is blocked
675 and add the current process to that list. Since the pi_list of the owner
731 A check is made to see if the mutex has waiters or not. On architectures that
732 do not have CMPXCHG, this is where the owner of the mutex will
733 determine whether a waiter needs to be woken or not. On architectures that
734 do have CMPXCHG, that check is done in the fast path, but it is still needed
742 If there are waiters, then we need to wake one up and give that waiter
754 pi_lock is taken. Remember that this is the trick to prevent the race
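
Putting the last few lines together, the unlock slow path can be modeled roughly
as below: under the mutex's wait_lock and the releasing task's pi_lock, the top
waiter is removed and made the pending owner, and the real code then wakes it
up. Everything here (the names, the array-based wait list) is a simplified
stand-in, not the rtmutex.c code:

  #include <stdio.h>
  #include <pthread.h>

  struct task {
          int             prio;           /* lower number = higher priority */
          pthread_mutex_t pi_lock;        /* a spin lock in the kernel */
  };

  struct waiter {
          struct task     *task;
  };

  struct mutex {
          pthread_mutex_t wait_lock;      /* a spin lock in the kernel */
          struct task     *owner;
          int             owner_pending;  /* "pending ownership" flag */
          struct waiter   *wait_list[8];  /* highest priority first */
          int             nr_waiters;
  };

  /* Hand the mutex to its highest priority waiter, but only as a pending
   * owner; the waiter still has to wake up and actually take it. */
  static void wakeup_next_waiter(struct mutex *lock, struct task *releaser)
  {
          struct waiter *top;
          int i;

          pthread_mutex_lock(&lock->wait_lock);
          pthread_mutex_lock(&releaser->pi_lock);

          top = lock->wait_list[0];                       /* top waiter */
          for (i = 1; i < lock->nr_waiters; i++)          /* remove it */
                  lock->wait_list[i - 1] = lock->wait_list[i];
          lock->nr_waiters--;

          lock->owner = top->task;
          lock->owner_pending = 1;
          /* the real code wakes top->task up here */

          pthread_mutex_unlock(&releaser->pi_lock);
          pthread_mutex_unlock(&lock->wait_lock);
  }

  int main(void)
  {
          struct task releaser = { .prio = 120, .pi_lock = PTHREAD_MUTEX_INITIALIZER };
          struct task blocked  = { .prio = 10,  .pi_lock = PTHREAD_MUTEX_INITIALIZER };
          struct waiter w = { .task = &blocked };
          struct mutex m = {
                  .wait_lock = PTHREAD_MUTEX_INITIALIZER,
                  .owner = &releaser,
                  .wait_list = { &w },
                  .nr_waiters = 1,
          };

          wakeup_next_waiter(&m, &releaser);
          printf("pending owner priority: %d\n", m.owner->prio);        /* 10 */
          return 0;
  }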