[llvm] BPF: Generate locked insn for __sync_fetch_and_add() with cpu v1/v2 (PR #106494)

via llvm-commits llvm-commits at lists.llvm.org
Thu Aug 29 19:11:02 PDT 2024


================
@@ -171,81 +168,14 @@ bool BPFMIPreEmitChecking::processAtomicInsts() {
       }
     }
   }
-
----------------
yonghong-song wrote:

Okay, sounds good to me. Indeed, the non-XADD* insns were all introduced in llvm12 and the 5.12 kernel, regardless of whether they are encoded as atomic_fetch_*() or atomic_<op>(). So from the source-code perspective they should be fine whichever encoding we emit underneath, and regardless of whether the cpu is v1 or v3. Maybe somebody has already started to use them with v1...

So I will keep the current non-XADD* behavior in the patch.
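
For reference, a minimal C sketch of the source-level constructs under discussion; the function and variable names are invented for illustration and are not part of the patch:

```c
/* A minimal sketch, not taken from the patch.  Compile with e.g.
 * `clang -target bpf -O2 -mcpu=v1` vs `-mcpu=v3` to compare the
 * lowering being discussed. */
long counter;

void bump(long delta)
{
	/* Return value unused: with cpu v1/v2 this can use the legacy
	 * locked/XADD encoding, which old kernels already support. */
	(void)__sync_fetch_and_add(&counter, delta);
}

void set_bits(long mask)
{
	/* A non-XADD* op: this is lowered to the newer atomic encodings
	 * (llvm12, kernel 5.12+) regardless of the -mcpu setting, which
	 * is the behavior being kept here. */
	(void)__sync_fetch_and_or(&counter, mask);
}
```
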

https://github.com/llvm/llvm-project/pull/106494
