[PATCH] D120026: [ARM] Fix ARM backend to correctly use atomic expansion routines.
Eli Friedman via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Tue Mar 22 11:08:37 PDT 2022
efriedma added inline comments.
================
Comment at: llvm/test/CodeGen/ARM/atomic-op.ll:414
; CHECK-T1-M0: dmb
-; CHECK-T1-M0: str [[R0]], [r1]
+; CHECK-T1-M0: __atomic_store_4
----------------
aykevl wrote:
> This looks like a regression. I'm pretty sure that these loads and stores are always atomic. Is there a reason why this has to go through a library function?
In general, `__atomic_*` routines are not guaranteed to be lock-free. Suppose `__atomic_compare_exchange_4` is running at the same time on a different thread; if we store directly to the address, we're ignoring whatever lock `__atomic_compare_exchange_4` uses internally.
Because of that, unless we have prior knowledge that libatomic supports a lock-free cmpxchg at the given width, all atomic operations have to go through libatomic, even if the load or store itself is atomic at the hardware level.
This isn't a problem with `__sync_*` because those routines are guaranteed to be lock-free.
See also https://llvm.org/docs/Atomics.html#libcalls-atomic.
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D120026/new/
https://reviews.llvm.org/D120026