Lines matching refs: atomic_t

10 The atomic_t type should be defined as a signed integer and
15 typedef struct { int counter; } atomic_t;
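
The struct wrapper at line 15 is what makes the type opaque: a plain cast or direct assignment will not compile, so every access has to go through the atomic_* accessors. A minimal sketch of the effect, assuming the usual <linux/atomic.h> API (older trees spell the header <asm/atomic.h>):

	#include <linux/atomic.h>

	static atomic_t refs = ATOMIC_INIT(0);

	void opaque_demo(void)
	{
		int v;

		/* refs = 1;      does not compile: atomic_t is not an integer */
		/* int i = refs;  does not compile for the same reason         */
		atomic_set(&refs, 1);		/* store via the accessor */
		v = atomic_read(&refs);		/* load via the accessor  */
		(void)v;
	}
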
21 local_t is very similar to atomic_t. If the counter is per CPU and only
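
When a counter is only ever updated by its own CPU, line 21's advice is to prefer local_t, whose ops avoid the cross-CPU atomicity cost. A rough per-CPU sketch, assuming <asm/local.h> and the per-CPU helpers from <linux/percpu.h> (the counter name is made up; Documentation/local_ops.txt has the authoritative rules):

	#include <linux/percpu.h>
	#include <asm/local.h>

	static DEFINE_PER_CPU(local_t, hits) = LOCAL_INIT(0);

	void count_hit(void)
	{
		/* get_cpu_var() disables preemption so we stay on one CPU. */
		local_inc(&get_cpu_var(hits));
		put_cpu_var(hits);
	}

	long total_hits(void)
	{
		long sum = 0;
		int cpu;

		/* A summing reader tolerates slightly stale per-CPU values. */
		for_each_possible_cpu(cpu)
			sum += local_read(&per_cpu(hits, cpu));
		return sum;
	}
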
25 The first operations to implement for atomic_t's are the initializers and
33 static atomic_t my_counter = ATOMIC_INIT(1);
46 struct foo { atomic_t counter; };
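
Lines 33 and 46 show the two initialization styles: a compile-time ATOMIC_INIT() for statically allocated counters, and atomic_set() for counters embedded in objects allocated at runtime. A small sketch combining both (struct and function names are illustrative):

	#include <linux/atomic.h>
	#include <linux/slab.h>

	static atomic_t my_counter = ATOMIC_INIT(1);

	struct foo {
		atomic_t counter;
	};

	struct foo *alloc_foo(void)
	{
		struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);

		if (f)
			atomic_set(&f->counter, 1);	/* runtime initialization */
		return f;
	}
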
81 so all users of atomic_t should treat atomic_read() and atomic_set() as simple
87 compiler optimizes the section accessing atomic_t variables.
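
The point of lines 81-87 is that atomic_read() and atomic_set() are plain, unordered loads and stores: they keep the compiler from caching or tearing the access, but they imply no memory barriers, so any ordering against other memory must be added explicitly. A hedged illustration of that distinction (the flag/payload pairing is an assumption for illustration, kernel context assumed):

	static atomic_t ready = ATOMIC_INIT(0);
	static int payload;

	void publisher(int val)
	{
		payload = val;
		smp_mb();		/* order the payload store before the flag... */
		atomic_set(&ready, 1);	/* ...because atomic_set() is not a barrier   */
	}

	int consumer(void)
	{
		if (!atomic_read(&ready))	/* plain load, no implied ordering */
			return -1;		/* not published yet */
		smp_mb();			/* pairs with the publisher's barrier */
		return payload;
	}
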
181 void atomic_add(int i, atomic_t *v);
182 void atomic_sub(int i, atomic_t *v);
183 void atomic_inc(atomic_t *v);
184 void atomic_dec(atomic_t *v);
187 atomic_t value. The first two routines pass explicit integers by
193 atomic_t counter update in an SMP safe manner.
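
Lines 181-193 cover the void-returning arithmetic ops: they update the counter atomically but return nothing and imply no memory barriers. Typical use is plain statistics counters; a minimal sketch (names are illustrative, <linux/atomic.h> assumed):

	static atomic_t nr_packets   = ATOMIC_INIT(0);
	static atomic_t bytes_queued = ATOMIC_INIT(0);

	void account_packet(int len)
	{
		atomic_inc(&nr_packets);	/* += 1, SMP safe               */
		atomic_add(len, &bytes_queued);	/* += len, explicit integer arg */
	}

	void unaccount_packet(int len)
	{
		atomic_dec(&nr_packets);	/* -= 1   */
		atomic_sub(len, &bytes_queued);	/* -= len */
	}
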
197 int atomic_inc_return(atomic_t *v);
198 int atomic_dec_return(atomic_t *v);
201 atomic_t and return the new counter value after the operation is
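
Lines 197-201: the *_return variants do the update and hand back the new value, and (as the barrier discussion from line 290 onward spells out) they also carry full memory-barrier semantics. A sketch of two common return-value uses, assuming <linux/completion.h> for the second (names and the worker count are made up):

	#include <linux/completion.h>

	static atomic_t next_id = ATOMIC_INIT(0);
	static atomic_t pending_workers = ATOMIC_INIT(4);	/* four workers, illustrative */
	static DECLARE_COMPLETION(all_done);

	int get_unique_id(void)
	{
		/* Every caller observes a distinct post-increment value. */
		return atomic_inc_return(&next_id);
	}

	void worker_finished(void)
	{
		/* Exactly one caller sees the count reach zero. */
		if (atomic_dec_return(&pending_workers) == 0)
			complete(&all_done);
	}
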
219 int atomic_add_return(int i, atomic_t *v);
220 int atomic_sub_return(int i, atomic_t *v);
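
atomic_add_return() and atomic_sub_return() at lines 219-220 are the general forms of the above (the inc/dec versions are the i == 1 case). One illustrative use of the returned sum is enforcing a soft limit; the limit logic below is an assumption, not something the document prescribes:

	#include <linux/errno.h>

	static atomic_t in_flight = ATOMIC_INIT(0);

	/* Try to reserve n slots out of max; 0 on success, -EBUSY otherwise. */
	int reserve_slots(int n, int max)
	{
		if (atomic_add_return(n, &in_flight) > max) {
			atomic_sub(n, &in_flight);	/* undo our own addition */
			return -EBUSY;
		}
		return 0;
	}

	void release_slots(int n)
	{
		atomic_sub(n, &in_flight);
	}
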
229 int atomic_inc_and_test(atomic_t *v);
230 int atomic_dec_and_test(atomic_t *v);
239 int atomic_sub_and_test(int i, atomic_t *v);
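
The *_and_test ops at lines 229-239 return a boolean that is true when the new value is zero; atomic_dec_and_test() is the core of open-coded reference counting. A hedged sketch (in real code the kref API is the preferred wrapper around exactly this pattern; the struct is illustrative):

	#include <linux/slab.h>

	struct obj {
		atomic_t refcnt;
		/* ... payload ... */
	};

	void obj_get(struct obj *o)
	{
		atomic_inc(&o->refcnt);
	}

	void obj_put(struct obj *o)
	{
		/* True only for the caller that drops the count to zero. */
		if (atomic_dec_and_test(&o->refcnt))
			kfree(o);
	}
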
245 int atomic_add_negative(int i, atomic_t *v);
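
atomic_add_negative() at line 245 adds i and reports whether the result went negative. A small sketch that treats a negative total as "over budget" (this budget scheme is invented purely for illustration):

	#include <linux/types.h>

	static atomic_t budget = ATOMIC_INIT(100);

	/* Consume cost units; roll back and fail if that overdraws the budget. */
	bool consume_budget(int cost)
	{
		if (atomic_add_negative(-cost, &budget)) {
			atomic_add(cost, &budget);	/* went below zero: undo */
			return false;
		}
		return true;
	}
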
254 int atomic_xchg(atomic_t *v, int new);
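
atomic_xchg() at line 254 stores the new value and returns the old one, with full barrier semantics. The classic use is claiming a one-shot flag; a minimal sketch (the message is illustrative):

	#include <linux/printk.h>

	static atomic_t warned = ATOMIC_INIT(0);

	void warn_once(void)
	{
		/* Only the first caller observes the old value 0. */
		if (atomic_xchg(&warned, 1) == 0)
			pr_warn("condition hit for the first time\n");
	}
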
262 int atomic_cmpxchg(atomic_t *v, int old, int new);
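
atomic_cmpxchg() at line 262 installs new only if the counter still holds old, and always returns the value it actually found; looping on that return is how arbitrary read-modify-write operations are composed. A sketch of a capped increment built on top of it (the cap semantics are an illustrative assumption):

	/* Add 1 to *v unless it has reached limit; returns the resulting value. */
	static int atomic_inc_below(atomic_t *v, int limit)
	{
		int old, cur = atomic_read(v);

		for (;;) {
			if (cur >= limit)
				return cur;		/* already at the cap */
			old = atomic_cmpxchg(v, cur, cur + 1);
			if (old == cur)
				return cur + 1;		/* our update won the race */
			cur = old;			/* lost; retry with the fresh value */
		}
	}
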
278 int atomic_add_unless(atomic_t *v, int a, int u);
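
atomic_add_unless(v, a, u) at line 278 adds a only if the current value is not u, and returns non-zero when the add happened. The canonical use is "take a reference unless the object is already dead", which is what the atomic_inc_not_zero() helper wraps. Sketch (same illustrative refcounted struct as above):

	#include <linux/types.h>

	struct obj {
		atomic_t refcnt;
	};

	/* True if we safely took a new reference; false if refcnt was already 0. */
	bool obj_get_unless_zero(struct obj *o)
	{
		return atomic_add_unless(&o->refcnt, 1, 0);
	}
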
290 If a caller requires memory barrier semantics around an atomic_t
314 atomic_t implementation above can have disastrous results. Here is
378 the atomic_t memory barrier requirements quite clearly.)
404 With the memory barrier semantics required of the atomic_t operations
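
Lines 290-404 are the heart of the document: the non-value-returning ops (atomic_add(), atomic_inc(), ...) imply no barriers, so callers that need ordering must add it themselves, while the value-returning ops behave as if a full barrier surrounds them. The document of that era names the helpers smp_mb__before_atomic_dec() and friends; current kernels spell them smp_mb__before_atomic()/smp_mb__after_atomic(). A sketch using the current names (the flag/status pairing is illustrative):

	#include <linux/printk.h>

	static atomic_t pending = ATOMIC_INIT(1);
	static int status;

	void finish_quietly(void)
	{
		status = 0;			/* must be visible before the count drops */
		smp_mb__before_atomic();	/* upgrade the plain atomic_dec() below   */
		atomic_dec(&pending);
	}

	void finish_with_result(int err)
	{
		status = err;
		/* No explicit barrier needed: atomic_dec_return() implies a full one. */
		if (atomic_dec_return(&pending) == 0)
			pr_info("done, status %d\n", status);
	}
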
411 24-bits of its atomic_t type. This was because it used 8 bits
416 indexed into based upon the address of the atomic_t being operated
420 Another note is that the atomic_t operations returning values are
425 to the atomic_t ops above.
464 These routines, like the atomic_t counter operations returning values,
496 They are used as follows, and are akin to their atomic_t operation
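
Lines 464-496 move on to the atomic bitmask operations; the test_and_* forms return the old bit value and, as line 464 notes, follow the same barrier rules as the value-returning atomic_t ops. A small sketch of the usual "claim a flag bit" pattern, assuming <linux/bitops.h> (the flag name and struct are illustrative):

	#include <linux/types.h>
	#include <linux/bitops.h>

	#define OBJ_BUSY	0		/* bit number within obj->flags */

	struct obj {
		unsigned long flags;		/* bitops operate on unsigned longs */
	};

	bool obj_try_claim(struct obj *o)
	{
		/* Old bit was 0: we set it and now own the object. */
		return !test_and_set_bit(OBJ_BUSY, &o->flags);
	}

	void obj_release(struct obj *o)
	{
		clear_bit(OBJ_BUSY, &o->flags);	/* void-returning, no implied barrier */
	}
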
563 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
605 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
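
_atomic_dec_and_lock() at lines 563-605 (normally called through the atomic_dec_and_lock() wrapper) decrements the counter and, only when it reaches zero, returns 1 with the spinlock held; that closes the race between the final put and a concurrent lookup that would otherwise revive the object through a shared list. A hedged usage sketch (the list, lock and struct names are made up):

	#include <linux/list.h>
	#include <linux/spinlock.h>
	#include <linux/slab.h>

	static LIST_HEAD(obj_list);
	static DEFINE_SPINLOCK(obj_list_lock);

	struct obj {
		struct list_head node;
		atomic_t refcnt;
	};

	void obj_put(struct obj *o)
	{
		/* Returns 1 with obj_list_lock held iff refcnt dropped to zero. */
		if (atomic_dec_and_lock(&o->refcnt, &obj_list_lock)) {
			list_del(&o->node);
			spin_unlock(&obj_list_lock);
			kfree(o);
		}
	}
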