synchronize_sched_expedited — Brute-force RCU-sched grace period
void synchronize_sched_expedited(void);
Wait for an RCU-sched grace period to elapse, but use a “big hammer”
approach to force the grace period to end quickly. This consumes
significant time on all CPUs and is unfriendly to real-time workloads,
and is thus not recommended for any sort of common-case code. In fact,
if you are using synchronize_sched_expedited in a loop, please
restructure your code to batch your updates, and then use a single
synchronize_sched instead.
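For example, a minimal sketch of that advice is shown below, assuming a
hypothetical RCU-sched-protected list with unlink_entry() and free_entry()
helpers that are not part of any real API. The first function pays for one
expedited grace period per update; the second batches the updates and waits
only once.

#include <linux/rcupdate.h>

/* Hypothetical data structure and helpers, for illustration only. */
struct entry;
void unlink_entry(struct entry *e);	/* remove from an RCU-sched-protected list */
void free_entry(struct entry *e);	/* reclaim after a grace period */

/* Discouraged: one expedited grace period per removed entry. */
void remove_all_slow(struct entry **victims, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		unlink_entry(victims[i]);
		synchronize_sched_expedited();	/* big hammer, n times over */
		free_entry(victims[i]);
	}
}

/* Preferred: batch the updates, then wait for a single grace period. */
void remove_all_batched(struct entry **victims, int n)
{
	int i;

	for (i = 0; i < n; i++)
		unlink_entry(victims[i]);
	synchronize_sched();			/* one grace period covers them all */
	for (i = 0; i < n; i++)
		free_entry(victims[i]);
}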
This implementation can be thought of as an application of ticket
locking to RCU, with sync_sched_expedited_started and
sync_sched_expedited_done taking on the roles of the halves
of the ticket-lock word. Each task atomically increments
sync_sched_expedited_started upon entry, snapshotting the old value,
then attempts to stop all the CPUs. If this succeeds, then each
CPU will have executed a context switch, resulting in an RCU-sched
grace period. We are then done, so we use atomic_cmpxchg to
update sync_sched_expedited_done to match our snapshot -- but
only if someone else has not already advanced past our snapshot.
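As a rough illustration of that counter handling, the sketch below declares
the two counters and a pair of hypothetical helpers; the helper names, the
choice of snapshot value, and the lack of wrap-around handling are
simplifying assumptions, not the kernel's exact code.

#include <linux/atomic.h>

/* The two halves of the "ticket-lock word" described above. */
static atomic_t sync_sched_expedited_started = ATOMIC_INIT(0);
static atomic_t sync_sched_expedited_done = ATOMIC_INIT(0);

/* Hypothetical helper: bump the started counter, snapshotting the old value. */
static int expedited_take_snapshot(void)
{
	return atomic_inc_return(&sync_sched_expedited_started) - 1;
}

/*
 * Hypothetical helper: after every CPU has been stopped (and hence has
 * context-switched), advance sync_sched_expedited_done to our snapshot --
 * unless another caller has already advanced it past that point.
 * Counter wrap-around is ignored in this sketch.
 */
static void expedited_mark_done(int snap)
{
	int s;

	do {
		s = atomic_read(&sync_sched_expedited_done);
		if (s >= snap)
			return;		/* someone else already covered our snapshot */
	} while (atomic_cmpxchg(&sync_sched_expedited_done, s, snap) != s);
}

Because the cmpxchg loop only ever moves the done counter forward, later
callers can compare it against their own snapshots to detect that a grace
period has already been forced on their behalf.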
On the other hand, if try_stop_cpus fails, we check the value
of sync_sched_expedited_done. If it has advanced past our
initial snapshot, then someone else must have forced a grace period
some time after we took our snapshot. In this case, our work is
done for us, and we can simply return. Otherwise, we try again,
but keep our initial snapshot so that we can check whether someone
else has done our work for us.
If we fail too many times in a row, we fall back to synchronize_sched.
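Putting the pieces together, the overall control flow might be sketched as
follows, reusing the counters and hypothetical helpers from the previous
sketch. The stopper callback, the ten-attempt limit, and the udelay()
back-off are illustrative assumptions rather than the in-tree policy, and
CPU-hotplug exclusion and snapshot refreshing on retry are omitted for
brevity.

#include <linux/cpumask.h>
#include <linux/delay.h>
#include <linux/rcupdate.h>
#include <linux/stop_machine.h>

/* Hypothetical stopper callback: merely running it forces a context switch. */
static int expedited_cpu_stop(void *unused)
{
	return 0;
}

/* Sketch of the overall flow; off-by-one and wrap-around details are glossed over. */
void synchronize_sched_expedited_sketch(void)
{
	int snap, trycount = 0;

	snap = expedited_take_snapshot();	/* our initial snapshot */

	/* try_stop_cpus() returns 0 on success, -EAGAIN if it could not stop them. */
	while (try_stop_cpus(cpu_online_mask, expedited_cpu_stop, NULL)) {
		/* Failed too many times in a row: fall back to synchronize_sched(). */
		if (++trycount > 10) {
			synchronize_sched();
			return;
		}

		/* Has someone else forced a grace period since our snapshot? */
		if (atomic_read(&sync_sched_expedited_done) > snap)
			return;		/* our work was done for us */

		udelay(10 * trycount);	/* illustrative back-off before retrying */
	}

	/* Every CPU has context-switched: an RCU-sched grace period has elapsed. */
	expedited_mark_done(snap);
}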