Lines matching refs: loads

143 perceived by the loads made by another CPU in the same order as the stores were
209 (*) Overlapping loads and stores within a particular CPU will appear to be
237 (*) It _must_not_ be assumed that independent loads and stores will be issued
331 deferral and combination of memory operations; speculative loads; speculative
350 to have any effect on loads.
363 where two loads are performed such that the second depends on the result
369 A data dependency barrier is a partial ordering on interdependent loads
370 only; it is not required to have any effect on stores, independent loads
371 or overlapping loads.
379 touched by the load will be perceptible to any loads issued after the data
403 A read barrier is a partial ordering on loads only; it is not required to
420 A general memory barrier is a partial ordering over both loads and stores.
628 loads from 'a', and the store to 'b' with other stores to 'b', with
786 between the loads and stores in the CPU 0 and CPU 1 code fragments,
796 (*) Control dependencies can order prior loads against later stores.
798 Not prior loads against later loads, nor prior stores against
801 later loads, smp_mb().
875 match the loads after the read barrier or the data dependency barrier, and vice
929 loads. Consider the following sequence of events:
1011 subsequent loads +-------+ | |
1015 And thirdly, a read barrier acts as a partial order on loads. Consider the
1101 Even though the two loads of A both occur after the load of B, they may both
1161 Many CPUs speculate with loads: that is they see that they will need to load an
1163 other loads, and so do the load in advance - even though they haven't actually
1302 of loads, it does not guarantee to order CPU 1's store. Therefore, if
1354 (*) The compiler is within its rights to reorder loads and stores
1356 rights to reorder loads to the same variable. This means that
1371 (*) The compiler is within its rights to merge successive loads from
1567 The compiler can also invent loads. These are usually less
1570 invented loads.
1610 loads followed by a pair of 32-bit stores. This would result in
1646 to issue the loads in the correct order (eg. `a[b]` would have to load
1806 subsequent loads and stores. Note that this is weaker than smp_mb()!
2048 order multiple stores before the wake-up with respect to loads of those stored
2644 their own loads and stores as if they had happened in program order.
2699 (*) the coherency queue is not flushed by normal loads to lines already
2701 potentially affect those loads.
2754 barrier between the loads. This will force the cache to commit its coherency
2845 (*) loads are more likely to need to be completed immediately to permit
2849 (*) loads may be done speculatively, and the result discarded should it prove
2852 (*) loads may be done speculatively, leading to the result having been fetched
2858 (*) loads and stores may be combined to improve performance when talking to
2904 where a given CPU might reorder successive loads to the same location.