Lines matching "stores" in Documentation/memory-barriers.txt:
142 Furthermore, the stores committed by a CPU to the memory system may not be
143 perceived by the loads made by another CPU in the same order as the stores were
209 (*) Overlapping loads and stores within a particular CPU will appear to be
226 (Loads and stores overlap if they are targeted at overlapping pieces of
237 (*) It _must_not_ be assumed that independent loads and stores will be issued
349 A write barrier is a partial ordering on stores only; it is not required
353 memory system as time progresses. All stores before a write barrier will
354 occur in the sequence _before_ all the stores after the write barrier.
370 only; it is not required to have any effect on stores, independent loads
374 committing sequences of stores to the memory system that the CPU being
377 load touches one of a sequence of stores from another CPU, then by the
378 time the barrier completes, the effects of all the stores prior to that
404 have any effect on stores.
420 A general memory barrier is a partial ordering over both loads and stores.
617 However, stores are not speculated. This means that ordering -is- provided
628 loads from 'a', and the store to 'b' with other stores to 'b', with
641 It is tempting to try to enforce ordering on identical stores on both
686 ordering is guaranteed only when the stores differ, for example:
738 Please note once again that the stores to 'b' differ. If they were
786 between the loads and stores in the CPU 0 and CPU 1 code fragments,
796 (*) Control dependencies can order prior loads against later stores.
798 Not prior loads against later loads, nor prior stores against
800 use smp_rmb(), smp_wmb(), or, in the case of prior stores and
803 (*) If both legs of the "if" statement begin with identical stores
874 [!] Note that the stores before the write barrier would normally be expected to
916 | | +------+ } requires all stores prior to the
918 | | : +------+ } further stores may take place
923 | Sequence in which stores are committed to the
1354 (*) The compiler is within its rights to reorder loads and stores
1542 (*) The compiler is within its rights to invent stores to a variable,
1610 loads followed by a pair of 32-bit stores. This would result in
1805 combined with a following ACQUIRE, orders prior stores against
1806 subsequent loads and stores. Note that this is weaker than smp_mb()!
2048 order multiple stores before the wake-up with respect to loads of those stored
2175 this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
2176 before either of the stores issued on CPU 2.
2428 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
2538 force stores to be ordered.
2644 their own loads and stores as if they had happened in program order.
2846 execution progress, whereas stores can often be deferred without a
2858 (*) loads and stores may be combined to improve performance when talking to