[llvm] [MemoryLocation] Size Scalable Masked MemOps (PR #154785)

David Sherwood via llvm-commits llvm-commits at lists.llvm.org
Thu Aug 21 08:39:19 PDT 2025


================
@@ -150,6 +150,29 @@ MemoryLocation::getForDest(const CallBase *CB, const TargetLibraryInfo &TLI) {
   return MemoryLocation::getBeforeOrAfter(UsedV, CB->getAAMetadata());
 }
 
+static std::optional<FixedVectorType *>
+getFixedTypeFromScalableMemOp(Value *Mask, Type *Ty) {
+  auto ActiveLaneMask = dyn_cast<IntrinsicInst>(Mask);
+  if (!ActiveLaneMask ||
+      ActiveLaneMask->getIntrinsicID() != Intrinsic::get_active_lane_mask)
+    return std::nullopt;
+
+  auto ScalableTy = dyn_cast<ScalableVectorType>(Ty);
+  if (!ScalableTy)
+    return std::nullopt;
+
+  auto LaneMaskLo = dyn_cast<ConstantInt>(ActiveLaneMask->getOperand(0));
+  auto LaneMaskHi = dyn_cast<ConstantInt>(ActiveLaneMask->getOperand(1));
+  if (!LaneMaskLo || !LaneMaskHi)
+    return std::nullopt;
+
+  uint64_t NumElts = LaneMaskHi->getZExtValue() - LaneMaskLo->getZExtValue();
----------------
david-arm wrote:

I think you need to be careful with logic like this, because it's possible for hi to be lower than (or equal to) lo. I think you need an extra check like:

```
   if (LaneMaskHi->getZExtValue() <= LaneMaskLo->getZExtValue())
     return std::nullopt;
```

If the mask would return all-false then essentially this operation doesn't touch memory at all, although I don't know if we have to worry about that.

From the LangRef:

```
The '``llvm.get.active.lane.mask.*``' intrinsics are semantically equivalent
to:

::

      %m[i] = icmp ult (%base + i), %n
```

https://github.com/llvm/llvm-project/pull/154785
