[llvm] BPF: Generate locked insn for __sync_fetch_and_add() with cpu v1/v2 (PR #106494)
via llvm-commits
llvm-commits at lists.llvm.org
Thu Aug 29 16:18:20 PDT 2024
================
@@ -171,81 +168,14 @@ bool BPFMIPreEmitChecking::processAtomicInsts() {
}
}
}
-
----------------
yonghong-song wrote:
I am aware of this issue and I debate with myself. atomic-* insns are introduced and supposed to be in v3. But somehow we allowed __sync_fetch_and_and() style functions exist for v1 as well for quite some time. In previous implementation (e.g. llvm19), if no return value, the atomic_fetch_and() will become locked and insn. Now to keep __sync_fetch_and_and() barrier semantic, we cannot change to locked insn any more.
Should we just require that all atomic_fetch_*() insns (except atomic_fetch_add()) are only supported with cpu=v3? This was my original thinking as well, but I am afraid some people may already be using them with v1 (on >= 5.12 kernels). If we disable this usage for v1, some existing use cases may break.
Or should we assume people only use the locked add insn on v1/v2, and make all other atomic insns available only for >= v3?
@4ast Any comments?
https://github.com/llvm/llvm-project/pull/106494