[llvm] [BPF] Handle certain mem intrinsic functions with addr-space arguments (PR #160025)

via llvm-commits llvm-commits at lists.llvm.org
Tue Oct 7 09:03:36 PDT 2025


================
@@ -493,21 +559,69 @@ bool BPFCheckAndAdjustIR::insertASpaceCasts(Module &M) {
   for (Function &F : M) {
     DenseMap<Value *, Value *> CastsCache;
     for (BasicBlock &BB : F) {
-      for (Instruction &I : BB) {
+      for (Instruction &I : llvm::make_early_inc_range(BB)) {
         unsigned PtrOpNum;
 
-        if (auto *LD = dyn_cast<LoadInst>(&I))
+        if (auto *LD = dyn_cast<LoadInst>(&I)) {
           PtrOpNum = LD->getPointerOperandIndex();
-        else if (auto *ST = dyn_cast<StoreInst>(&I))
+          aspaceWrapOperand(CastsCache, &I, PtrOpNum);
+          continue;
+        }
+        if (auto *ST = dyn_cast<StoreInst>(&I)) {
           PtrOpNum = ST->getPointerOperandIndex();
-        else if (auto *CmpXchg = dyn_cast<AtomicCmpXchgInst>(&I))
+          aspaceWrapOperand(CastsCache, &I, PtrOpNum);
+          continue;
+        }
+        if (auto *CmpXchg = dyn_cast<AtomicCmpXchgInst>(&I)) {
           PtrOpNum = CmpXchg->getPointerOperandIndex();
-        else if (auto *RMW = dyn_cast<AtomicRMWInst>(&I))
+          aspaceWrapOperand(CastsCache, &I, PtrOpNum);
+          continue;
+        }
+        if (auto *RMW = dyn_cast<AtomicRMWInst>(&I)) {
           PtrOpNum = RMW->getPointerOperandIndex();
-        else
+          aspaceWrapOperand(CastsCache, &I, PtrOpNum);
           continue;
+        }
+
+        auto *CI = dyn_cast<CallInst>(&I);
+        if (!CI)
+          continue;
+
+        Function *Callee = CI->getCalledFunction();
+        if (!Callee || !Callee->isIntrinsic())
+          continue;
+
+        // Check memset/memcpy/memmove
+        Intrinsic::ID ID = Callee->getIntrinsicID();
+        bool IsSet = ID == Intrinsic::memset;
+        bool IsCpy = ID == Intrinsic::memcpy;
+        bool IsMove = ID == Intrinsic::memmove;
----------------
yonghong-song wrote:

Thanks for listing the intrinsics above. I missed __builtin_memcpy_inline and __builtin_memset_inline, which are very similar to __builtin_mem{cpy,set} except that the _inline versions require the 'size' argument to be a constant. Current bpf progs all use __builtin_mem{set,cpy}() with a constant size, so they are essentially equivalent to __builtin_mem{set,cpy}_inline(). It will be trivial to add both to the pull request.
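
To illustrate the constraint (a hypothetical snippet, the function and variable names are made up):
```c
/* The _inline builtins demand a compile-time constant size; the plain
   builtins merely tend to be used with one in bpf progs. */
void copy16(char *dst, const char *src, unsigned long n) {
  __builtin_memcpy(dst, src, 16);        /* constant size, but not required */
  __builtin_memcpy_inline(dst, src, 16); /* constant size is required       */
  __builtin_memcpy(dst, src, n);         /* fine: size may be a variable    */
  /* __builtin_memcpy_inline(dst, src, n);  error: 'n' is not a constant   */
}
```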

I think we can ignore mem{cpy,move,set}_element_unordered_atomic. I am aware of this set of intrinsics; the operands of these memory operations need to be atomic, so for our addr-space arguments we can ignore them.

For Intrinsic::experimental_memset_pattern, the idea is to convert a loop like
```
    for (unsigned i = 0; i < 2 * n; i += 2) {
      f[i] = 2;
      f[i+1] = 2;
    }
```
to the following intrinsic
```
// Memset variant that writes a given pattern.
def int_experimental_memset_pattern
    : Intrinsic<[],
      [llvm_anyptr_ty, // Destination.
       llvm_any_ty,    // Pattern value.
       llvm_anyint_ty, // Count (number of times to fill value).
       llvm_i1_ty],    // IsVolatile.
      [IntrWriteMem, IntrArgMemOnly, IntrWillReturn, IntrNoFree, IntrNoCallback,
       NoCapture<ArgIndex<0>>, WriteOnly<ArgIndex<0>>,
       ImmArg<ArgIndex<3>>]>;
```
This should be rare, but for completeness I think I can add this as well.
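
For reference, a minimal C sketch of the semantics implied by the definition above (the function name is illustrative; the real transform operates on IR):
```c
/* Illustrative sketch only: fill 'count' copies of 'pattern' starting
   at 'dst', matching the Destination/Pattern/Count operands above. */
static void memset_pattern_sketch(int *dst, int pattern,
                                  unsigned long count) {
  for (unsigned long i = 0; i < count; i++)
    dst[i] = pattern; /* the loop above becomes count = 2 * n, pattern = 2 */
}
```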


https://github.com/llvm/llvm-project/pull/160025

