[PATCH] D137980: [ARM] Pretend atomics are always lock-free for small widths.
John Brawn via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Thu Nov 17 07:07:23 PST 2022
john.brawn added a comment.
Looking at GCC, it appears that (for cortex-m0 at least) the behaviour there is that loads and stores are generated inline, but more complex operations become calls to the __atomic_* library functions (not the __sync_* ones). E.g. for
int x, y;

int fn() {
  return __atomic_load_n(&x, __ATOMIC_SEQ_CST);
}

int fn2() {
  return __atomic_compare_exchange_n(&x, &y, 0, 0, 0, __ATOMIC_SEQ_CST);
}
with `arm-none-eabi-gcc tmp.c -O1 -mcpu=cortex-m0` I get
fn:
        ldr     r3, .L2
        dmb     ish
        ldr     r0, [r3]
        dmb     ish
        bx      lr
fn2:
        push    {lr}
        sub     sp, sp, #12
        ldr     r0, .L5
        adds    r1, r0, #4
        movs    r3, #5
        str     r3, [sp]
        movs    r3, #0
        movs    r2, #0
        bl      __atomic_compare_exchange_4
        add     sp, sp, #12
        pop     {pc}
so the load in fn is expanded inline (wrapped in dmb barriers), while fn2 becomes a call to __atomic_compare_exchange_4. If we're doing this for compatibility with GCC, we should do the same.
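For reference (this is my understanding of how such runtimes typically work, not something the patch itself relies on): on a single-core part like cortex-m0, the __atomic_* library calls are usually provided by a runtime that does the read-modify-write with interrupts briefly masked. A minimal sketch of that technique, using a hypothetical name my_atomic_compare_exchange_4 and omitting the weak/memory-order parameters that the real library call takes:

  #include <stdbool.h>
  #include <stdint.h>

  /* Sketch only: one common way to implement a 4-byte compare-exchange
     on a single-core ARMv6-M part, by masking interrupts around the
     read-modify-write. Hypothetical name; not libatomic's source. */
  bool my_atomic_compare_exchange_4(volatile uint32_t *ptr,
                                    uint32_t *expected,
                                    uint32_t desired) {
    uint32_t primask;
    bool ok;
    /* Save PRIMASK and disable interrupts. */
    __asm volatile("mrs %0, primask\n\tcpsid i"
                   : "=r"(primask) : : "memory");
    uint32_t cur = *ptr;
    if (cur == *expected) {
      *ptr = desired;
      ok = true;
    } else {
      *expected = cur;  /* report the value actually observed */
      ok = false;
    }
    /* Restore the previous interrupt state. */
    __asm volatile("msr primask, %0" : : "r"(primask) : "memory");
    return ok;
  }

On an M-profile core where interrupts are the only source of concurrency this is sufficient; an SMP target would need real atomic instructions or a lock, which is why whether these operations count as "lock-free" is target-dependent.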
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D137980/new/
https://reviews.llvm.org/D137980