[llvm] [MemoryLocation] Size Scalable Masked MemOps (PR #154785)

David Sherwood via llvm-commits llvm-commits at lists.llvm.org
Thu Aug 28 10:01:23 PDT 2025


================
@@ -150,6 +151,28 @@ MemoryLocation::getForDest(const CallBase *CB, const TargetLibraryInfo &TLI) {
   return MemoryLocation::getBeforeOrAfter(UsedV, CB->getAAMetadata());
 }
 
+// If the mask for a memory op is a get.active.lane.mask intrinsic,
+// we can possibly infer the size of memory written or read.
+static std::optional<FixedVectorType *>
+getKnownTypeFromMaskedOp(Value *Mask, VectorType *Ty) {
+  using namespace llvm::PatternMatch;
+  ConstantInt *Op0, *Op1;
+  if (!match(Mask, m_Intrinsic<Intrinsic::get_active_lane_mask>(
+                       m_ConstantInt(Op0), m_ConstantInt(Op1))))
+    return std::nullopt;
+
+  uint64_t LaneMaskLo = Op0->getZExtValue();
+  uint64_t LaneMaskHi = Op1->getZExtValue();
+  if ((LaneMaskHi == 0) || (LaneMaskHi <= LaneMaskLo))
+    return std::nullopt;
+
+  uint64_t NumElts = LaneMaskHi - LaneMaskLo;
+  if (NumElts > Ty->getElementCount().getKnownMinValue())
----------------
david-arm wrote:

I think we only need to bail out if it's a scalable vector. For example, we can support get.active.lane.mask with arguments (0, 8) when the vector type is <4 x i32>, since the maximum permitted by the type is 4.

If you change this code, it would be good to have a fixed-width test showing it works. Thanks!

https://github.com/llvm/llvm-project/pull/154785
