Lines Matching refs:accesses

41 - Locks vs memory accesses.
42 - Locks vs I/O accesses.
121 The set of accesses as seen by the memory system in the middle can be arranged
194 (*) On any given CPU, dependent memory accesses will be issued in order, with
251 (*) It _must_ be assumed that overlapping memory accesses may be merged or
458 an ACQUIRE on a given variable, all memory accesses preceding any prior
460 words, within a given variable's critical section, all accesses of all
484 (*) There is no guarantee that any of the memory accesses specified before a
487 access queue that accesses of the appropriate type may not cross.
492 of the first CPU's accesses occur, but see the next point:
495 from a second CPU's accesses, even _if_ the second CPU uses a memory
500 hardware[*] will not reorder the memory accesses. CPU cache coherency
1306 on the combined order of CPU 1's and CPU 2's accesses.
1330 compiler from moving the memory accesses either side of it to the other side:
1337 accesses flagged by the READ_ONCE() or WRITE_ONCE().
1341 (*) Prevents the compiler from reordering accesses following the
1342 barrier() to precede any accesses preceding the barrier().
1369 accesses from multiple CPUs to a single variable.
1477 (*) The compiler is within its rights to reorder memory accesses unless
1521 be interrupted by something that also accesses 'flag' and 'msg',
1575 multiple smaller accesses. For example, given an architecture having
1657 and will order overlapping accesses correctly with respect to itself.
1665 used to control MMIO effects on accesses through relaxed memory I/O windows.
1769 See the subsection "Locks vs I/O accesses" for more information.
1841 the two accesses can themselves then cross:
1901 anything at all - especially with respect to I/O accesses - unless combined
2067 separate data accesses. Thus the above sleeper ought to do:
2115 Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2133 Under certain circumstances (especially involving NUMA), I/O accesses within
2305 In this case, the barrier makes a guarantee that all memory accesses before the
2306 barrier will appear to happen before all the memory accesses after the barrier
2308 the memory accesses before the barrier will be complete by the time the barrier
2414 make the right memory accesses in exactly the right order.
2417 in that the carefully sequenced accesses in the driver code won't reach the
2419 efficient to reorder, combine or merge accesses - something that would cause
2423 routines - such as inb() or writel() - which know how to make such accesses
2472 If ordering rules are relaxed, it must be assumed that accesses done inside an
2474 accesses performed in an interrupt - and vice versa - unless implicit or
2477 Normally this won't be a problem because the I/O accesses done inside such
2547 respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
2549 required, an mmiowb() barrier can be used. Note that relaxed accesses to
2634 accesses to be performed. The core may place these in the queue in any order
2639 accesses cross from the CPU side of things to the memory side of things, and
2646 [!] MMIO or other device accesses may bypass the cache system. This depends on
2781 cachelets for normal memory accesses. The semantics of the Alpha removes the
2813 Amongst these properties is usually the fact that such accesses bypass the
2814 caching entirely and go directly to the device buses. This means MMIO accesses
2815 may, in effect, overtake accesses to cached memory that were emitted earlier.
2855 (*) the order of the memory accesses may be rearranged to promote better use
2859 memory or I/O hardware that can do batched accesses of adjacent locations,
2877 _own_ accesses appear to be correctly ordered, without the need for a memory
2896 accesses: