Lines Matching refs:load
23 - Read memory barriers vs load speculation.
163 Note that CPU 2 will never try to load C into D because the CPU will load P
164 into Q before issuing the load of *Q.
364 of the first (eg: the first load retrieves the address to which the second
365 load will be directed), a data dependency barrier would be required to
366 make sure that the target of the second load is updated before the address
367 obtained by the first load is accessed.
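
The fragments above describe the pointer-publication pattern. A minimal
sketch of it, in the split-column style memory-barriers.txt itself uses
(the names A, B, P, Q and D are assumed here; <data dependency barrier>
would be smp_read_barrier_depends() in the ACCESS_ONCE()-era kernels this
text belongs to):

	CPU 1				CPU 2
	===============================	===============================
	{ A == 1, B == 2, P == &A }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B;
					Q = ACCESS_ONCE(P);
					<data dependency barrier>
					D = *Q;

Without the data dependency barrier on CPU 2, D may still observe the old
value of B (2) even though Q already holds &B.
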
376 under consideration guarantees that for any load preceding it, if that
377 load touches one of a sequence of stores from another CPU, then by the
379 touched by the load will be perceptible to any loads issued after the data
385 [!] Note that the first load really has to have a _data_ dependency and
386 not a control dependency. If the address for the second load is dependent
387 on the first load, but the dependency is through a conditional rather than
396 (3) Read (or load) memory barriers.
543 between the address load and the data load:
595 A load-load control dependency requires a full read memory barrier, not
608 the load from b as having happened before the load from a. In such a
618 for load-store control dependencies, as in the following example:
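
A hedged sketch of a load-store control dependency of the kind this line
introduces (a, b, p and q are assumed names):

	q = ACCESS_ONCE(a);
	if (q) {
		ACCESS_ONCE(b) = p;	/* store ordered after the load from a */
	}

The conditional branch on q keeps the CPU from committing the store to b
before the load from a completes, and the ACCESS_ONCE() wrappers keep the
compiler from optimising the dependency away.
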
627 ACCESS_ONCE(), might combine the load from 'a' with other loads from
660 ACCESS_ONCE(b) = p; /* BUG: No ordering vs. load from a!!! */
669 Now there is no conditional between the load from 'a' and the store to
722 between the load from variable 'a' and the store to variable 'b'. It is
729 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
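
Putting the surrounding fragments together, a sketch of the hazard and its
fix (MAX, do_something() and do_something_else() are assumed names): if the
compiler can prove that MAX is 1, it knows (q % MAX) is always zero, may
drop the conditional entirely, and would then emit the store to 'b' with no
ordering against the load from 'a'; the BUILD_BUG_ON() guards against that:

	q = ACCESS_ONCE(a);
	BUILD_BUG_ON(MAX <= 1);	/* Order load from a with store to b. */
	if (q % MAX) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = p;
		do_something_else();
	}
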
757 the compiler to actually emit code for a given load, it does not force
806 between the prior load and the subsequent store, and this
807 conditional must involve the prior load. If the compiler
962 The load of X holds ---> \ | X->9 |------>| |
969 In the above example, CPU 2 perceives that B is 7, despite the load of *C
972 If, however, a data dependency barrier were to be placed between the load of C
973 and the load of *C (ie: B) on CPU 2:
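
A sketch of the sequence these fragments refer to, in the LOAD/STORE
timeline style of the diagram above (the initial values are assumed):

	CPU 1				CPU 2
	=======================		=======================
		{ B = 7; C = &Y }
	STORE B = 2
	<write barrier>
	STORE C = &B			LOAD C (gets &B)
					<data dependency barrier>
					LOAD *C (reads B)

With the barrier in place, once CPU 2 has seen C == &B its load of *C is
guaranteed to return the new value 2 rather than the stale 7.
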
1049 If, however, a read barrier were to be placed between the load of B and the
1050 load of A on CPU 2:
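
A sketch of the paired write-barrier/read-barrier sequence being described
(initial values assumed):

	CPU 1				CPU 2
	=======================		=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
					LOAD B
					<read barrier>
					LOAD A

If CPU 2's load of B returns 2, the read barrier guarantees that its
subsequent load of A returns 1.
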
1086 contained a load of A either side of the read barrier:
1095 LOAD A [first load of A]
1097 LOAD A [second load of A]
1099 Even though the two loads of A both occur after the load of B, they may both
1151 The guarantee is that the second load will always come up with A == 1 if the
1152 load of B came up with B == 2. No such guarantee exists for the first load of
1159 Many CPUs speculate with loads: that is, they see that they will need to load an
1161 other loads, and so do the load in advance - even though they haven't actually
1163 actual load instruction to potentially complete immediately because the CPU
1167 branch circumvented the load - in which case it can discard the value or just
1198 load:
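
A sketch of the speculation scenario being described, assuming a pair of
long-latency divide instructions sitting between the two loads:

	CPU 1				CPU 2
	=======================		=======================
					LOAD B
					DIVIDE	} the divides keep the CPU busy,
					DIVIDE	} so it may fetch A speculatively
					LOAD A

Placing a read barrier (or a data dependency barrier) just before the LOAD A
forces the speculated value to be reconsidered: if A was changed in the
meantime, the speculated value is discarded and A is reloaded.
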
1267 Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1268 This indicates that CPU 2's load from X in some sense follows CPU 1's
1269 store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1270 store to Y. The question is then "Can CPU 3's load from X return 0?"
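
The scenario in question, sketched as a three-CPU litmus test (initial
values assumed; <general barrier> stands for smp_mb()):

	CPU 1			CPU 2			CPU 3
	======================	======================	======================
		{ X = 0, Y = 0 }
	STORE X=1		LOAD X			STORE Y=1
				<general barrier>	<general barrier>
				LOAD Y			LOAD X

With general barriers the outcome asked about (CPU 2 seeing X == 1 and
Y == 0 while CPU 3 still sees X == 0) is forbidden; as the later fragments
note, weakening the barriers to read barriers makes it legal.
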
1272 Because CPU 2's load from X in some sense came after CPU 1's store, it
1273 is natural to expect that CPU 3's load from X must therefore return 1.
1274 This expectation is an example of transitivity: if a load executing on
1275 CPU A follows a load from the same variable executing on CPU B, then
1276 CPU A's load must either return the same value that CPU B's load did,
1280 transitivity. Therefore, in the above example, if CPU 2's load from X
1281 returns 1 and its load from Y returns 0, then CPU 3's load from X must
1296 legal for CPU 2's load from X to return 1, its load from Y to return 0,
1297 and CPU 3's load from X to return 0.
1344 (*) Within a loop, forces the compiler to load the variables used
1418 (*) The compiler is within its rights to omit a load entirely if it knows
1430 rid of a load and a branch. The problem is that the compiler will
1448 the code into near-nonexistence. (It will still load from the
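
A sketch of the loop transformation being described (tmp, a and
do_something_with() are assumed names): without ACCESS_ONCE() the compiler
may decide that 'a' cannot change, load it once, and hoist the test out of
the loop; with ACCESS_ONCE() it must reload 'a' on every iteration:

	while (tmp = a)			/* may be reduced to one load and a branch */
		do_something_with(tmp);

	while (tmp = ACCESS_ONCE(a))	/* forces a fresh load each time around */
		do_something_with(tmp);
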
1567 with a single memory-reference instruction, prevents "load tearing"
1586 Use of packed structures can also result in load and store tearing,
1604 of 32-bit stores. This would result in load tearing on 'foo1.b'
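
A sketch of the packed-structure case the fragments refer to (the field
layout and the copy_foo() helper are assumed): on a 32-bit target the
compiler might implement the three copies with a pair of 32-bit loads
followed by a pair of 32-bit stores, and since 'b' straddles the two 32-bit
words this tears both the load of foo1.b and the store to foo2.b:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;

	void copy_foo(void)
	{
		foo2.a = foo1.a;
		foo2.b = foo1.b;	/* may be split across two 32-bit accesses */
		foo2.c = foo1.c;
	}
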
1639 issue the loads in the correct order (eg. `a[b]` would have to load the value
1641 that the compiler may not speculate the value of b (eg. is equal to 1) and load
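
A sketch of the value speculation being mentioned (a, b and tmp are assumed
names): nothing in the C specification stops the compiler from guessing the
value of b, loading a[1] early, and patching up afterwards:

	tmp = a[b];

might be transformed into something like:

	tmp = a[1];		/* speculated on the guess that b == 1 */
	if (b != 1)
		tmp = a[b];	/* fix-up if the guess was wrong */
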
2033 wake_up(); load from Y sees 1, no memory barrier
2034 load from X might see 0
2036 In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
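
A hedged waiter/waker sketch of the point being made (wq, event_indicated,
my_data and val are assumed names): the write barrier implied by wake_up()
is only present if a sleeper is actually awakened, so a waiter that merely
observes the flag without having been woken gets no ordering guarantee for
the earlier store:

	/* waker */
	my_data = 42;			/* store the waiter wants to see */
	event_indicated = 1;
	wake_up(&wq);			/* write barrier only if somebody is woken */

	/* waiter */
	wait_event(wq, event_indicated);
	val = my_data;			/* guaranteed to be 42 only if a wakeup occurred */
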
2224 Furthermore, following a store by a load from the same device obviates the need
2225 for the mmiowb(), because the load forces the store to complete before the load
2523 sections will include synchronous load operations on strictly ordered I/O
2574 deferral if it so wishes; to flush a store, a load from the same location
2575 is preferred[*], but a load from the same device or from configuration
2578 [*] NOTE! attempting to load from the same location as was written to may
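
A sketch of flushing a posted MMIO store by reading from the same device
(dev, CONTROL_REG, STATUS_REG and START_DMA are assumed names; writel() and
readl() are the usual kernel MMIO accessors):

	writel(START_DMA, dev->regs + CONTROL_REG);	/* may sit in a write buffer */
	(void) readl(dev->regs + STATUS_REG);		/* a read from the device pushes it out */

Reading a different register (the status register rather than the one just
written) sidesteps the per-location caveat flagged in the [*] note above.
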
2626 ultimate effect. For example, if two adjacent instructions both load an
2670 Although any particular load or store may not actually appear outside of the
2678 generate load and store operations which then go into the queue of memory
2739 displace a dirty cacheline or to do a speculative load;
2918 (Where "LOAD {*C,*D}" is a combined load)
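
A sketch of the kind of merging referred to: given an apparently ordered
sequence of accesses, the memory system may see them issued in a different
order, with adjacent loads from neighbouring locations fused into one wider
access (the names are assumed):

	a = *A; *B = b; c = *C; d = *D; *E = e;

might reach the memory system as, for example:

	LOAD {*C,*D}, STORE *E, STORE *B, LOAD *A
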