Review Checklist for RCU Patches


This document contains a checklist for producing and reviewing patches
that make use of RCU.  Violating any of the rules listed below will
result in the same sorts of problems that leaving out a locking primitive
would cause.  This list is based on experiences reviewing such patches
over a rather long period of time, but improvements are always welcome!

0.	Is RCU being applied to a read-mostly situation?  If the data
	structure is updated more than about 10% of the time, then you
	should strongly consider some other approach, unless detailed
	performance measurements show that RCU is nonetheless the right
	tool for the job.  Yes, RCU does reduce read-side overhead by
	increasing write-side overhead, which is exactly why normal uses
	of RCU will do much more reading than updating.

	Another exception is where performance is not an issue, and RCU
	provides a simpler implementation.  An example of this situation
	is the dynamic NMI code in the Linux 2.6 kernel, at least on
	architectures where NMIs are rare.

	Yet another exception is where the low real-time latency of RCU's
	read-side primitives is critically important.

1.	Does the update code have proper mutual exclusion?

	RCU does allow -readers- to run (almost) naked, but -writers- must
	still use some sort of mutual exclusion, such as:

	a.	locking,
	b.	atomic operations, or
	c.	restricting updates to a single task.

	If you choose #b, be prepared to describe how you have handled
	memory barriers on weakly ordered machines (pretty much all of
	them -- even x86 allows later loads to be reordered to precede
	earlier stores), and be prepared to explain why this added
	complexity is worthwhile.  If you choose #c, be prepared to
	explain how this single task does not become a major bottleneck on
	big multiprocessor machines (for example, if the task is updating
	information relating to itself that other tasks can read, there
	by definition can be no bottleneck).
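
	For example, here is a minimal sketch of option (a).  The
	"struct foo", "foo_list", and "foo_lock" names are hypothetical,
	not taken from any existing subsystem:

		struct foo {
			struct list_head list;
			int key;
			int data;
		};

		static LIST_HEAD(foo_list);
		static DEFINE_SPINLOCK(foo_lock);	/* Guards updates only. */

		/* Updater: the lock excludes other updaters, not readers. */
		void foo_add(struct foo *p)
		{
			spin_lock(&foo_lock);
			list_add_rcu(&p->list, &foo_list);
			spin_unlock(&foo_lock);
		}

	Readers traverse foo_list without acquiring foo_lock at all, as
	described in the following items.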

2.	Do the RCU read-side critical sections make proper use of
	rcu_read_lock() and friends?  These primitives are needed
	to prevent grace periods from ending prematurely, which
	could result in data being unceremoniously freed out from
	under your read-side code, which can greatly increase the
	actuarial risk of your kernel.

	As a rough rule of thumb, any dereference of an RCU-protected
	pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
	rcu_read_lock_sched(), or by the appropriate update-side lock.
	Disabling of preemption can serve as rcu_read_lock_sched(), but
	is less readable.
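
	For example, a reader for the hypothetical foo_list from the
	sketch under item 1 might look as follows (again a sketch, not
	code from any existing subsystem):

		int foo_find_data(int key)
		{
			struct foo *p;
			int ret = -1;

			rcu_read_lock();
			list_for_each_entry_rcu(p, &foo_list, list) {
				if (p->key == key) {
					ret = p->data;	/* Copy out before unlock. */
					break;
				}
			}
			rcu_read_unlock();
			return ret;
		}

	The rcu_read_lock()/rcu_read_unlock() pair prevents the grace
	period from ending while the list is being traversed.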

3.	Does the update code tolerate concurrent accesses?

	The whole point of RCU is to permit readers to run without
	any locks or atomic operations.  This means that readers will
	be running while updates are in progress.  There are a number
	of ways to handle this concurrency, depending on the situation:

	a.	Use the RCU variants of the list and hlist update
		primitives to add, remove, and replace elements on
		an RCU-protected list.  Alternatively, use the other
		RCU-protected data structures that have been added to
		the Linux kernel.

		This is almost always the best approach.

	b.	Proceed as in (a) above, but also maintain per-element
		locks (that are acquired by both readers and writers)
		that guard per-element state.  Of course, fields that
		the readers refrain from accessing can be guarded by
		some other lock acquired only by updaters, if desired.

		This works quite well, also.

	c.	Make updates appear atomic to readers.  For example,
		pointer updates to properly aligned fields will
		appear atomic, as will individual atomic primitives.
		Sequences of operations performed under a lock will -not-
		appear to be atomic to RCU readers, nor will sequences
		of multiple atomic primitives.

		This can work, but is starting to get a bit tricky.

	d.	Carefully order the updates and the reads so that
		readers see valid data at all phases of the update.
		This is often more difficult than it sounds, especially
		given modern CPUs' tendency to reorder memory references.
		One must usually liberally sprinkle memory barriers
		(smp_wmb(), smp_rmb(), smp_mb()) through the code,
		making it difficult to understand and to test.

		It is usually better to group the changing data into
		a separate structure, so that the change may be made
		to appear atomic by updating a pointer to reference
		a new structure containing updated values, as sketched
		below.
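
	Here is a minimal sketch of that copy-then-publish approach,
	using a hypothetical "struct foo_cfg", a hypothetical global
	pointer "gp", and the foo_lock from the sketch under item 1:

		struct foo_cfg {
			int a;
			int b;
		};

		static struct foo_cfg __rcu *gp;

		int foo_cfg_update(int new_a)
		{
			struct foo_cfg *newp, *oldp;

			newp = kmalloc(sizeof(*newp), GFP_KERNEL);
			if (!newp)
				return -ENOMEM;

			spin_lock(&foo_lock);
			oldp = rcu_dereference_protected(gp,
					lockdep_is_held(&foo_lock));
			*newp = *oldp;			/* Copy... */
			newp->a = new_a;		/* ...update... */
			rcu_assign_pointer(gp, newp);	/* ...publish. */
			spin_unlock(&foo_lock);

			synchronize_rcu();	/* Wait for pre-existing readers. */
			kfree(oldp);
			return 0;
		}

	Readers see either the old structure or the new one, never a
	half-updated mixture of the two.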

4.	Weakly ordered CPUs pose special challenges.  Almost all CPUs
	are weakly ordered -- even x86 CPUs allow later loads to be
	reordered to precede earlier stores.  RCU code must take all of
	the following measures to prevent memory-corruption problems:

	a.	Readers must maintain proper ordering of their memory
		accesses.  The rcu_dereference() primitive ensures that
		the CPU picks up the pointer before it picks up the data
		that the pointer points to.  This really is necessary
		on Alpha CPUs.  If you don't believe me, see:

			http://www.openvms.compaq.com/wizard/wiz_2637.html

		The rcu_dereference() primitive is also an excellent
		documentation aid, letting the person reading the
		code know exactly which pointers are protected by RCU.
		Please note that compilers can also reorder code, and
		they are becoming increasingly aggressive about doing
		just that.  The rcu_dereference() primitive therefore also
		prevents destructive compiler optimizations.  However,
		with a bit of devious creativity, it is possible to
		mishandle the return value from rcu_dereference().
		Please see rcu_dereference.txt in this directory for
		more information.

		The rcu_dereference() primitive is used by the
		various "_rcu()" list-traversal primitives, such
		as list_for_each_entry_rcu().  Note that it is
		perfectly legal (if redundant) for update-side code to
		use rcu_dereference() and the "_rcu()" list-traversal
		primitives.  This is particularly useful in code that
		is common to readers and updaters.  However, lockdep
		will complain if you invoke rcu_dereference() outside
		of an RCU read-side critical section.  See lockdep.txt
		to learn what to do about this.

		Of course, neither rcu_dereference() nor the "_rcu()"
		list-traversal primitives can substitute for a good
		concurrency design coordinating among multiple updaters.

	b.	If the list macros are being used, the list_add_tail_rcu()
		and list_add_rcu() primitives must be used in order
		to prevent weakly ordered machines from misordering
		structure initialization and pointer planting.
		Similarly, if the hlist macros are being used, the
		hlist_add_head_rcu() primitive is required.

	c.	If the list macros are being used, the list_del_rcu()
		primitive must be used to keep list_del()'s pointer
		poisoning from inflicting toxic effects on concurrent
		readers.  Similarly, if the hlist macros are being used,
		the hlist_del_rcu() primitive is required.

		The list_replace_rcu() and hlist_replace_rcu() primitives
		may be used to replace an old structure with a new one
		in their respective types of RCU-protected lists.

	d.	Rules similar to (4b) and (4c) apply to the "hlist_nulls"
		type of RCU-protected linked lists.

	e.	Updates must ensure that initialization of a given
		structure happens before pointers to that structure are
		publicized.  Use the rcu_assign_pointer() primitive
		when publicizing a pointer to a structure that can
		be traversed by an RCU read-side critical section.
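
	For example, a minimal sketch of (4e), reusing the hypothetical
	foo_cfg and gp from the sketch under item 3:

		struct foo_cfg *p;

		p = kmalloc(sizeof(*p), GFP_KERNEL);
		if (!p)
			return -ENOMEM;
		p->a = 1;			/* Initialize first... */
		p->b = 2;
		rcu_assign_pointer(gp, p);	/* ...then publish. */

	A plain "gp = p" assignment would allow both the compiler and a
	weakly ordered CPU to make the pointer visible to readers before
	the initialization completed; rcu_assign_pointer() supplies the
	needed ordering, and list_add_rcu() and friends do the same for
	list insertion.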

5.	If call_rcu(), or a related primitive such as call_rcu_bh(),
	call_rcu_sched(), or call_srcu() is used, the callback function
	must be written to be called from softirq context.  In particular,
	it cannot block.
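
	The typical pattern is to embed an rcu_head in the protected
	structure and use container_of() in the callback.  For example,
	here is a sketch that extends the hypothetical struct foo from
	item 1 with an rcu_head named "rcu":

		struct foo {
			struct list_head list;
			struct rcu_head rcu;
			int key;
			int data;
		};

		static void foo_free_rcu(struct rcu_head *head)
		{
			struct foo *p = container_of(head, struct foo, rcu);

			kfree(p);	/* Must not block: softirq context. */
		}

		/* Updater, with foo_lock held: */
		list_del_rcu(&p->list);
		call_rcu(&p->rcu, foo_free_rcu);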

6.	Since synchronize_rcu() can block, it cannot be called from
	any sort of irq context.  The same rule applies for
	synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(),
	synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(),
	synchronize_sched_expedited(), and synchronize_srcu_expedited().

	The expedited forms of these primitives have the same semantics
	as the non-expedited forms, but expediting is both expensive
	and unfriendly to real-time workloads.  Use of the expedited
	primitives should be restricted to rare configuration-change
	operations that would not normally be undertaken while a real-time
	workload is running.

	In particular, if you find yourself invoking one of the expedited
	primitives repeatedly in a loop, please do everyone a favor:
	Restructure your code so that it batches the updates, allowing
	a single non-expedited primitive to cover the entire batch.
	This will very likely be faster than the loop containing the
	expedited primitive, and will be much easier on the rest of
	the system, especially on any real-time workloads running there.

	In addition, it is illegal to call the expedited forms from
	a CPU-hotplug notifier, or while holding a lock that is acquired
	by a CPU-hotplug notifier.  Failing to observe this restriction
	will result in deadlock.

7.	If the updater uses call_rcu() or synchronize_rcu(), then the
	corresponding readers must use rcu_read_lock() and
	rcu_read_unlock().  If the updater uses call_rcu_bh() or
	synchronize_rcu_bh(), then the corresponding readers must
	use rcu_read_lock_bh() and rcu_read_unlock_bh().  If the
	updater uses call_rcu_sched() or synchronize_sched(), then
	the corresponding readers must disable preemption, possibly
	by calling rcu_read_lock_sched() and rcu_read_unlock_sched().
	If the updater uses synchronize_srcu() or call_srcu(), then
	the corresponding readers must use srcu_read_lock() and
	srcu_read_unlock(), and with the same srcu_struct.  The rules for
	the expedited primitives are the same as for their non-expedited
	counterparts.  Mixing things up will result in confusion and
	broken kernels.

	One exception to this rule: rcu_read_lock() and rcu_read_unlock()
	may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
	in cases where local bottom halves are already known to be
	disabled, for example, in irq or softirq context.  Commenting
	such cases is a must, of course!  And the jury is still out on
	whether the increased speed is worth it.
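
	For example, a _bh-flavored pairing might look as follows.  This
	is only a sketch: "bar_ptr", "bar_lock", use_bar(), and
	bar_free_rcu() are hypothetical names, not existing kernel APIs:

		/* Reader, perhaps in packet-processing code: */
		rcu_read_lock_bh();
		b = rcu_dereference_bh(bar_ptr);
		if (b)
			use_bar(b);
		rcu_read_unlock_bh();

		/* Updater: */
		spin_lock(&bar_lock);
		old = rcu_dereference_protected(bar_ptr,
				lockdep_is_held(&bar_lock));
		rcu_assign_pointer(bar_ptr, new);
		spin_unlock(&bar_lock);
		call_rcu_bh(&old->rcu, bar_free_rcu);

	Pairing rcu_read_lock() with call_rcu_bh() (or vice versa) would
	allow the grace period to end while the reader was still running.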

8.	Although synchronize_rcu() is slower than is call_rcu(), it
	usually results in simpler code.  So, unless update performance is
	critically important, the updaters cannot block, or the latency of
	synchronize_rcu() is visible from userspace, synchronize_rcu()
	should be used in preference to call_rcu().  Furthermore,
	kfree_rcu() usually results in even simpler code than does
	synchronize_rcu() without synchronize_rcu()'s multi-millisecond
	latency.  So please take advantage of kfree_rcu()'s "fire and
	forget" memory-freeing capabilities where it applies (see the
	sketch at the end of this item).

	An especially important property of the synchronize_rcu()
	primitive is that it automatically self-limits: if grace periods
	are delayed for whatever reason, then the synchronize_rcu()
	primitive will correspondingly delay updates.  In contrast,
	code using call_rcu() should explicitly limit update rate in
	cases where grace periods are delayed, as failing to do so can
	result in excessive realtime latencies or even OOM conditions.

	Ways of gaining this self-limiting property when using call_rcu()
	include:

	a.	Keeping a count of the number of data-structure elements
		used by the RCU-protected data structure, including
		those waiting for a grace period to elapse.  Enforce a
		limit on this number, stalling updates as needed to allow
		previously deferred frees to complete.  Alternatively,
		limit only the number awaiting deferred free rather than
		the total number of elements.

		One way to stall the updates is to acquire the update-side
		mutex.  (Don't try this with a spinlock -- other CPUs
		spinning on the lock could prevent the grace period
		from ever ending.)  Another way to stall the updates
		is for the updates to use a wrapper function around
		the memory allocator, so that this wrapper function
		simulates OOM when there is too much memory awaiting an
		RCU grace period.  There are of course many other
		variations on this theme.

	b.	Limiting update rate.  For example, if updates occur only
		once per hour, then no explicit rate limiting is
		required, unless your system is already badly broken.
		Older versions of the dcache subsystem take this approach,
		guarding updates with a global lock, limiting their rate.

	c.	Trusted update -- if updates can only be done manually by
		superuser or some other trusted user, then it might not
		be necessary to automatically limit them.  The theory
		here is that superuser already has lots of ways to crash
		the machine.

	d.	Use call_rcu_bh() rather than call_rcu(), in order to take
		advantage of call_rcu_bh()'s faster grace periods.  (This
		is only a partial solution, though.)

	e.	Periodically invoke synchronize_rcu(), permitting a limited
		number of updates per grace period.

	The same cautions apply to call_rcu_bh(), call_rcu_sched(),
	call_srcu(), and kfree_rcu().

	Note that although these primitives do take action to avoid memory
	exhaustion when any given CPU has too many callbacks, a determined
	user could still exhaust memory.  This is especially the case
	if a system with a large number of CPUs has been configured to
	offload all of its RCU callbacks onto a single CPU, or if the
	system has relatively little free memory.
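
	As promised above, here is a kfree_rcu() sketch.  If the RCU
	callback would do nothing but invoke kfree(), the callback can
	be dispensed with entirely (reusing the hypothetical struct foo,
	which the sketch under item 5 extends with an rcu_head named
	"rcu"):

		spin_lock(&foo_lock);
		list_del_rcu(&p->list);
		spin_unlock(&foo_lock);
		kfree_rcu(p, rcu);	/* "rcu" names the rcu_head field. */

	This is equivalent to open-coding a call_rcu() callback that
	simply calls kfree(), but with less code and no callback function
	to get wrong.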

9.	All RCU list-traversal primitives, which include
	rcu_dereference(), list_for_each_entry_rcu(), and
	list_for_each_safe_rcu(), must be either within an RCU read-side
	critical section or must be protected by appropriate update-side
	locks.  RCU read-side critical sections are delimited by
	rcu_read_lock() and rcu_read_unlock(), or by similar primitives
	such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
	case the matching flavor of rcu_dereference() must be used to
	keep lockdep happy, in this case rcu_dereference_bh().

	The reason that it is permissible to use RCU list-traversal
	primitives when the update-side lock is held is that doing so
	can be quite helpful in reducing code bloat when common code is
	shared between readers and updaters.  Additional primitives
	are provided for this case, as discussed in lockdep.txt.
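
	One such primitive is rcu_dereference_protected().  A sketch of
	its use, with the hypothetical gp and foo_lock from earlier
	sketches:

		/* Update-side code; foo_lock held, no rcu_read_lock() needed. */
		p = rcu_dereference_protected(gp, lockdep_is_held(&foo_lock));

	The second argument tells lockdep under what conditions the
	access is safe, so lockdep will not complain about the missing
	RCU read-side critical section.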

10.	Conversely, if you are in an RCU read-side critical section,
	and you don't hold the appropriate update-side lock, you -must-
	use the "_rcu()" variants of the list macros.  Failing to do so
	will break Alpha, cause aggressive compilers to generate bad code,
	and confuse people trying to read your code.

11.	Note that synchronize_rcu() -only- guarantees to wait until
	all currently executing rcu_read_lock()-protected RCU read-side
	critical sections complete.  It does -not- necessarily guarantee
	that all currently running interrupts, NMIs, preempt_disable()
	code, or idle loops will complete.  Therefore, if your
	read-side critical sections are protected by something other
	than rcu_read_lock(), do -not- use synchronize_rcu().

	Similarly, disabling preemption is not an acceptable substitute
	for rcu_read_lock().  Code that attempts to use preemption
	disabling where it should be using rcu_read_lock() will break
	in real-time kernel builds.

	If you want to wait for interrupt handlers, NMI handlers, and
	code under the influence of preempt_disable(), you instead
	need to use synchronize_irq() or synchronize_sched().

	This same limitation also applies to synchronize_rcu_bh()
	and synchronize_srcu(), as well as to the asynchronous and
	expedited forms of the three primitives, namely call_rcu(),
	call_rcu_bh(), call_srcu(), synchronize_rcu_expedited(),
	synchronize_rcu_bh_expedited(), and synchronize_srcu_expedited().

12.	Any lock acquired by an RCU callback must be acquired elsewhere
	with softirq disabled, e.g., via spin_lock_irqsave(),
	spin_lock_bh(), etc.  Failing to disable softirq processing on a
	given acquisition of that lock will result in deadlock as soon as
	the RCU softirq handler happens to run your RCU callback while
	interrupting that acquisition's critical section.
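
	For example, if a callback acquires a lock that is also used in
	process context, the process-context acquisition must disable
	bottom halves.  Here is a sketch with hypothetical names
	(foo_stats, foo_stats_lock, and foo_reclaim are illustrative
	only):

		/* Process context: */
		spin_lock_bh(&foo_stats_lock);
		foo_stats.allocs++;
		spin_unlock_bh(&foo_stats_lock);

		/* RCU callback, which runs in softirq context: */
		static void foo_reclaim(struct rcu_head *head)
		{
			struct foo *p = container_of(head, struct foo, rcu);

			spin_lock(&foo_stats_lock);
			foo_stats.frees++;
			spin_unlock(&foo_stats_lock);
			kfree(p);
		}

	A plain spin_lock() in process context would allow the softirq
	handler to interrupt the critical section and self-deadlock on
	foo_stats_lock.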

13.	RCU callbacks can be and are executed in parallel.  In many cases,
	the callback code is simply a wrapper around kfree(), so that this
	is not an issue (or, more accurately, to the extent that it is
	an issue, the memory-allocator locking handles it).  However,
	if the callbacks do manipulate a shared data structure, they
	must use whatever locking or other synchronization is required
	to safely access and/or modify that data structure.

	RCU callbacks are -usually- executed on the same CPU that executed
	the corresponding call_rcu(), call_rcu_bh(), or call_rcu_sched(),
	but are by -no- means guaranteed to be.  For example, if a given
	CPU goes offline while having an RCU callback pending, then that
	RCU callback will execute on some surviving CPU.  (If this were
	not the case, a self-spawning RCU callback would prevent the
	victim CPU from ever going offline.)

14.	SRCU (srcu_read_lock(), srcu_read_unlock(), srcu_dereference(),
	synchronize_srcu(), synchronize_srcu_expedited(), and call_srcu())
	may only be invoked from process context.  Unlike other forms of
	RCU, it -is- permissible to block in an SRCU read-side critical
	section (demarcated by srcu_read_lock() and srcu_read_unlock()),
	hence the "SRCU": "sleepable RCU".  Please note that if you
	don't need to sleep in read-side critical sections, you should be
	using RCU rather than SRCU, because RCU is almost always faster
	and easier to use than is SRCU.

	Also unlike other forms of RCU, explicit initialization
	and cleanup is required via init_srcu_struct() and
	cleanup_srcu_struct().  These are passed a "struct srcu_struct"
	that defines the scope of a given SRCU domain.  Once initialized,
	the srcu_struct is passed to srcu_read_lock(), srcu_read_unlock(),
	synchronize_srcu(), synchronize_srcu_expedited(), and call_srcu().
	A given synchronize_srcu() waits only for SRCU read-side critical
	sections governed by srcu_read_lock() and srcu_read_unlock()
	calls that have been passed the same srcu_struct.  This property
	is what makes sleeping read-side critical sections tolerable --
	a given subsystem delays only its own updates, not those of other
	subsystems using SRCU.  Therefore, SRCU is less prone to OOM the
	system than RCU would be if RCU's read-side critical sections
	were permitted to sleep.

	The ability to sleep in read-side critical sections does not
	come for free.  First, corresponding srcu_read_lock() and
	srcu_read_unlock() calls must be passed the same srcu_struct.
	Second, grace-period-detection overhead is amortized only
	over those updates sharing a given srcu_struct, rather than
	being globally amortized as they are for other forms of RCU.
	Therefore, SRCU should be used in preference to rw_semaphore
	only in extremely read-intensive situations, or in situations
	requiring SRCU's read-side deadlock immunity or low read-side
	realtime latency.

	Note that rcu_assign_pointer() relates to SRCU just as it does
	to other forms of RCU.
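
	A minimal SRCU usage sketch follows.  The "foo_srcu" domain,
	"baz_ptr" pointer, and might_sleep_using() helper are all
	hypothetical names:

		static struct srcu_struct foo_srcu;	/* init_srcu_struct() at setup time. */

		/* Reader, permitted to block: */
		int idx;

		idx = srcu_read_lock(&foo_srcu);
		p = srcu_dereference(baz_ptr, &foo_srcu);
		if (p)
			might_sleep_using(p);
		srcu_read_unlock(&foo_srcu, idx);

		/* Updater: */
		rcu_assign_pointer(baz_ptr, newp);
		synchronize_srcu(&foo_srcu);	/* Waits only for foo_srcu readers. */
		kfree(oldp);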

15.	The whole point of call_rcu(), synchronize_rcu(), and friends
	is to wait until all pre-existing readers have finished before
	carrying out some otherwise-destructive operation.  It is
	therefore critically important to -first- remove any path
	that readers can follow that could be affected by the
	destructive operation, and -only- -then- invoke call_rcu(),
	synchronize_rcu(), or friends.

	Because these primitives only wait for pre-existing readers, it
	is the caller's responsibility to guarantee that any subsequent
	readers will execute safely.
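
	The canonical ordering is therefore unpublish, wait, then destroy.
	A sketch, again using the hypothetical foo_list and foo_lock:

		spin_lock(&foo_lock);
		list_del_rcu(&p->list);	/* 1. Remove readers' path to p. */
		spin_unlock(&foo_lock);
		synchronize_rcu();	/* 2. Wait for pre-existing readers. */
		kfree(p);		/* 3. Destructive operation is now safe. */

	Reversing steps 1 and 2 would be useless: readers could still
	find p after the grace period had ended.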

16.	The various RCU read-side primitives do -not- necessarily contain
	memory barriers.  You should therefore plan for the CPU
	and the compiler to freely reorder code into and out of RCU
	read-side critical sections.  It is the responsibility of the
	RCU update-side primitives to deal with this.

17.	Use CONFIG_PROVE_RCU, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
	__rcu sparse checks (enabled by CONFIG_SPARSE_RCU_POINTER) to
	validate your RCU code.  These can help find problems as follows:

	CONFIG_PROVE_RCU: check that accesses to RCU-protected data
		structures are carried out under the proper RCU
		read-side critical section, while holding the right
		combination of locks, or whatever other conditions
		are appropriate.

	CONFIG_DEBUG_OBJECTS_RCU_HEAD: check that you don't pass the
		same object to call_rcu() (or friends) before an RCU
		grace period has elapsed since the last time that you
		passed that same object to call_rcu() (or friends).

	__rcu sparse checks: tag the pointer to the RCU-protected data
		structure with __rcu, and sparse will warn you if you
		access that pointer without the services of one of the
		variants of rcu_dereference().

	These debugging aids can help you find problems that are
	otherwise extremely difficult to spot.
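
	For example, a hypothetical RCU-protected global pointer would
	be tagged and accessed as follows:

		struct foo __rcu *foo_ptr;	/* sparse-checked RCU pointer. */

		p = rcu_dereference(foo_ptr);	/* OK. */
		p = foo_ptr;			/* sparse warns here. */

	This makes it much harder for a plain load of an RCU-protected
	pointer to slip in unnoticed.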