Lines Matching refs:store

342  (1) Write (or store) memory barriers.
352 A CPU can be viewed as committing a sequence of store operations to the
618 for load-store control dependencies, as in the following example:
628 'a', and the store to 'b' with other stores to 'b', with possible highly
669 Now there is no conditional between the load from 'a' and the store to
722 between the load from variable 'a' and the store to variable 'b'. It is
729 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
739 identical, as noted earlier, the compiler could pull this store outside
806 between the prior load and the subsequent store, and this
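
Lines 618-806 above come from the control-dependencies discussion. As a
sketch of the pattern they quote (roughly the example those fragments are
drawn from; 'a', 'b', 'p', 'q' and MAX are assumed to be declared in the
surrounding context):

        q = ACCESS_ONCE(a);
        BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
        if (q % MAX) {
                ACCESS_ONCE(b) = p;
                do_something();
        } else {
                ACCESS_ONCE(b) = p;
                do_something_else();
        }

If MAX were 1, the compiler could prove that q % MAX is always zero, drop
the conditional, and hoist the (identical) stores to 'b' above the load
from 'a', destroying the load-store ordering; the BUILD_BUG_ON() guards
against exactly that.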
888 Firstly, write barriers act as partial orderings on store operations.
1007 prior to the store of C    (label inside an ASCII sequence diagram)
1269 store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1270 store to Y. The question is then "Can CPU 3's load from X return 0?"
1272 Because CPU 2's load from X in some sense came after CPU 1's store, it
1300 of loads, it does not guarantee to order CPU 1's store. Therefore, if
1301 this example runs on a system where CPUs 1 and 2 share a store buffer
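
Lines 1269-1301 quote the transitivity example. A sketch of the scenario,
in the layout memory-barriers.txt uses elsewhere (X and Y initially zero):

        CPU 1                   CPU 2                   CPU 3
        ======================= ======================= =======================
                { X = 0, Y = 0 }
        STORE X=1               LOAD X                  STORE Y=1
                                <general barrier>       <general barrier>
                                LOAD Y                  LOAD X

If CPU 2's load from X returns 1 and its load from Y returns 0, general
barriers guarantee that CPU 3's load from X returns 1. Weakening CPU 2's
barrier to a read barrier removes the guarantee: a read barrier orders
CPU 2's own loads but not CPU 1's store, so if CPUs 1 and 2 share a store
buffer, CPU 2 can see X=1 before that store is visible to CPU 3, and
CPU 3's load from X may still return 0.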
1451 (*) Similarly, the compiler is within its rights to omit a store entirely
1459 /* Code that does not store to variable a. */
1463 it might well omit the second store. This would come as a fatal
1471 /* Code that does not store to variable a. */
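
Lines 1451-1471 describe the compiler omitting a "dead" store. The example
those comment lines are taken from looks like this ('a' is assumed to be
shared with an interrupt handler or another CPU):

        a = 0;
        /* Code that does not store to variable a. */
        a = 0;

The compiler can see that the value of 'a' is already zero and may omit
the second store, which is fatal if the other context set 'a' to something
else in the meantime. Marking both stores defeats the optimization:

        ACCESS_ONCE(a) = 0;
        /* Code that does not store to variable a. */
        ACCESS_ONCE(a) = 0;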
1568 and "store tearing," in which a single large access is replaced by
1570 16-bit store instructions with 7-bit immediate fields, the compiler
1571 might be tempted to use two 16-bit store-immediate instructions to
1572 implement the following 32-bit store:
1578 than two instructions to build the constant and then store it.
1581 this optimization in a volatile store. In the absence of such bugs,
1582 use of ACCESS_ONCE() prevents store tearing in the following example:
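
Lines 1568-1582 refer to store tearing of a constant. Given 16-bit
store-immediate instructions with 7-bit immediate fields, a 32-bit store
of a constant whose halves both fit in 7 bits, for example:

        p = 0x00010002;

might be emitted as two 16-bit store-immediate instructions rather than as
the (longer) sequence that builds the constant in a register and stores it
with a single 32-bit instruction. ACCESS_ONCE() forces a single access:

        ACCESS_ONCE(p) = 0x00010002;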
1586 Use of packed structures can also result in load and store tearing,
1605 and store tearing on 'foo2.b'. ACCESS_ONCE() again prevents tearing
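
Lines 1586-1605 concern tearing induced by packed structures; the example
they refer to is along these lines:

        struct __attribute__((__packed__)) foo {
                short a;
                int b;
                short c;
        };
        struct foo foo1, foo2;
        ...
        foo2.a = foo1.a;
        foo2.b = foo1.b;
        foo2.c = foo1.c;

Because nothing forbids it, the compiler may implement the three
assignments as a pair of 32-bit loads followed by a pair of 32-bit stores,
tearing the misaligned load from 'foo1.b' and the store to 'foo2.b'.
ACCESS_ONCE() on the 'b' accesses prevents the tearing:

        foo2.a = foo1.a;
        ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
        foo2.c = foo1.c;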
1888 For example, with the following code, the store to *A will always be
1889 seen by other CPUs before the store to *B:
1907 ensures that the store to *A will always be seen as happening before
1908 the store to *B.
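
Lines 1888-1908 quote a write-barrier example; a minimal sketch of the
pattern (assuming *A and *B are in ordinary memory and the observing CPU
pairs the write barrier with a read barrier):

        CPU 1                   CPU 2
        ===============         ===============
        *A = a;                 b2 = *B;
        smp_wmb();              smp_rmb();
        *B = b;                 a2 = *A;

The smp_wmb() on CPU 1 ensures the store to *A is seen before the store to
*B, so if CPU 2 observes the new value of *B, the paired smp_rmb()
guarantees it also observes the new value of *A.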
2224 Furthermore, following a store by a load from the same device obviates the need
2225 for the mmiowb(), because the load forces the store to complete before the load
2511 The store to the data register might happen after the second store to the
2574 deferral if it so wishes; to flush a store, a load from the same location
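
Lines 2224-2225, 2511 and 2574 all concern posted MMIO writes: a store to
a device register may be deferred, and a load from the same device forces
any buffered stores to complete before the load returns, which is why it
can stand in for mmiowb(). A sketch using the kernel's readl()/writel()
accessors (the offsets REG_DATA and REG_STATUS are made up for
illustration):

        writel(val, dev_base + REG_DATA);       /* posted: may be deferred */
        (void)readl(dev_base + REG_STATUS);     /* forces the write to complete */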
2670 Although any particular load or store may not actually appear outside of the
2678 generate load and store operations which then go into the queue of memory
2909 mechanisms may alleviate this - once the store has actually hit the cache