[cfe-dev] atomic intrinsics
Howard Hinnant
hhinnant at apple.com
Wed May 26 17:32:35 PDT 2010
I'm beginning to survey Chapter 29, <atomic> (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3092.pdf) for what is actually required/desired in the way of compiler intrinsics regarding atomic synchronization. It appears to be a superset of the gcc __sync_* intrinsics (http://gcc.gnu.org/onlinedocs/gcc-4.5.0/gcc/Atomic-Builtins.html#Atomic-Builtins).

I would like to start a conversation on exactly what intrinsics we would like to support in clang. Maybe we want the full set specified by <atomic>. Or maybe we want a subset and have the rest build on that subset. At this point I don't know, and I'm looking for people with more expertise in this area than I have.
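For reference, here is a minimal sketch exercising the existing gcc __sync_* subset (the helper name is mine, purely illustrative):

```c
/* Sketch: the gcc builtins that <atomic> appears to be a superset of.
   Returns the final value of the counter. */
static int sync_builtins_demo(void)
{
    int x = 0;
    (void)__sync_fetch_and_add(&x, 5);            /* atomic x += 5, returns old value (0) */
    (void)__sync_bool_compare_and_swap(&x, 5, 7); /* CAS: if x == 5, set x = 7 */
    __sync_synchronize();                         /* full memory barrier */
    return x;                                     /* 7 */
}
```

Note that all of these are full-barrier operations; they have no way to express the weaker orderings (relaxed, consume, acquire, release, acq_rel) that <atomic> requires.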
There are approximately 14 operations, crossed with the various memory ordering constraints specified by <atomic>; I've summarized them below:
void store(volatile type* x, type source)
    : memory_order_relaxed, memory_order_release, memory_order_seq_cst

type load(volatile type* x)
    : memory_order_relaxed, memory_order_consume, memory_order_acquire,
      memory_order_seq_cst

type exchange(volatile type* x, type source)
    : memory_order_relaxed, memory_order_consume, memory_order_acquire,
      memory_order_release, memory_order_acq_rel, memory_order_seq_cst

bool compare_exchange_strong(volatile type* x, type* expected, type desired,
                             success_order, failure_order)
    : success_order and failure_order each drawn from memory_order_relaxed,
      memory_order_consume, memory_order_acquire, memory_order_release,
      memory_order_acq_rel, memory_order_seq_cst

bool compare_exchange_weak(volatile type* x, type* expected, type desired,
                           success_order, failure_order)
    : success_order and failure_order each drawn from memory_order_relaxed,
      memory_order_consume, memory_order_acquire, memory_order_release,
      memory_order_acq_rel, memory_order_seq_cst

type fetch_add(volatile type* x, type y)
    : memory_order_relaxed, memory_order_consume, memory_order_acquire,
      memory_order_release, memory_order_acq_rel, memory_order_seq_cst

type fetch_sub(volatile type* x, type y)
    : memory_order_relaxed, memory_order_consume, memory_order_acquire,
      memory_order_release, memory_order_acq_rel, memory_order_seq_cst

type fetch_or(volatile type* x, type y)
    : memory_order_relaxed, memory_order_consume, memory_order_acquire,
      memory_order_release, memory_order_acq_rel, memory_order_seq_cst

type fetch_xor(volatile type* x, type y)
    : memory_order_relaxed, memory_order_consume, memory_order_acquire,
      memory_order_release, memory_order_acq_rel, memory_order_seq_cst

type fetch_and(volatile type* x, type y)
    : memory_order_relaxed, memory_order_consume, memory_order_acquire,
      memory_order_release, memory_order_acq_rel, memory_order_seq_cst

bool test_and_set(volatile flag* x)
    : memory_order_relaxed, memory_order_consume, memory_order_acquire,
      memory_order_release, memory_order_acq_rel, memory_order_seq_cst

void clear(volatile flag* x)
    : memory_order_relaxed, memory_order_consume, memory_order_acquire,
      memory_order_release, memory_order_acq_rel, memory_order_seq_cst

void fence()
    : memory_order_relaxed, memory_order_consume, memory_order_acquire,
      memory_order_release, memory_order_acq_rel, memory_order_seq_cst

void signal_fence()
    : memory_order_relaxed, memory_order_consume, memory_order_acquire,
      memory_order_release, memory_order_acq_rel, memory_order_seq_cst
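As one data point for the "small subset plus library" option: at least the seq_cst flavors of some of these operations can be approximated on top of the existing __sync builtins today. A rough sketch, with illustrative names of my own (and assuming __sync_lock_test_and_set is only an acquire barrier, per the gcc docs):

```c
/* Hypothetical helpers, not proposed intrinsics. */

static int atomic_load_seq_cst(volatile int* x)
{
    /* A CAS with identical expected/desired values acts as a
       full-barrier load: it writes nothing new and returns *x. */
    return __sync_val_compare_and_swap(x, 0, 0);
}

static void atomic_store_seq_cst(volatile int* x, int v)
{
    __sync_lock_test_and_set(x, v); /* atomic store, acquire barrier only... */
    __sync_synchronize();           /* ...so follow with a full barrier */
}
```

The weaker orderings (relaxed, consume, acquire, release, acq_rel) have no such mapping, which is part of why I suspect the __sync set alone won't be enough.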
-Howard