[Mlir-commits] [mlir] [MLIR] Fixing the memref linearization size computation (PR #138922)

Krzysztof Drewniak llvmlistbot at llvm.org
Wed May 7 12:18:11 PDT 2025


================
@@ -75,18 +74,48 @@ std::pair<LinearizedMemRefInfo, OpFoldResult> getLinearizedMemRefOffsetAndSize(
     addMulMap = addMulMap + symbols[offsetIdx] * symbols[offsetIdx + 1];
     offsetValues[offsetIdx] = indicesVec[i];
     offsetValues[offsetIdx + 1] = strides[i];
-
-    mulMap = mulMap * symbols[i];
   }
 
   // Adjust linearizedIndices and size by the scale factor (dstBits / srcBits).
   int64_t scaler = dstBits / srcBits;
-  mulMap = mulMap.floorDiv(scaler);
+  size_t symbolIndex = 0;
----------------
krzysz00 wrote:

This is mostly a note to myself that this could carry a `std::optional<ArrayRef<int64_t>>` permutation in the future, so that memrefs with contiguous layouts could get simpler handling. That doesn't block this PR, though.
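
A possible shape for that, as a sketch only (the `permutation` member and `isIdentityPermutation` helper are hypothetical names, not part of this PR or of the existing `LinearizedMemRefInfo`):

```cpp
#include <cstdint>
#include <optional>
#include "llvm/ADT/ArrayRef.h"

// Hypothetical extension: carry an optional dimension permutation so callers
// can detect a contiguous (identity-permutation) layout and take a simpler
// path instead of the general affine-map linearization.
struct LinearizedMemRefInfoSketch {
  // ...existing offset/size members elided...

  // If present, permutation[i] gives the source dimension that occupies
  // position i when the strides are ordered from largest to smallest.
  std::optional<llvm::ArrayRef<int64_t>> permutation;

  // A contiguous row-major layout corresponds to the identity permutation.
  bool isIdentityPermutation() const {
    if (!permutation)
      return false;
    for (size_t i = 0, e = permutation->size(); i != e; ++i)
      if ((*permutation)[i] != static_cast<int64_t>(i))
        return false;
    return true;
  }
};
```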

https://github.com/llvm/llvm-project/pull/138922
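
For context, a standalone sketch of the product-then-floorDiv shape of the linearized size computation the quoted hunk adjusts, written against the real `mlir::AffineExpr` API; the function name and structure are illustrative, not the PR's actual code:

```cpp
#include "mlir/IR/AffineExpr.h"
#include "mlir/IR/MLIRContext.h"

// Builds s0 * s1 * ... * s(rank-1) and scales it down by dstBits / srcBits,
// mirroring the "adjust size by the scale factor" step shown above.
// Assumes dstBits is a multiple of srcBits (e.g. packing i4 into i8 storage).
static mlir::AffineExpr buildScaledSizeExpr(unsigned rank, int64_t srcBits,
                                            int64_t dstBits,
                                            mlir::MLIRContext *ctx) {
  mlir::AffineExpr sizeExpr = mlir::getAffineConstantExpr(1, ctx);
  for (unsigned i = 0; i < rank; ++i)
    sizeExpr = sizeExpr * mlir::getAffineSymbolExpr(i, ctx);
  int64_t scaler = dstBits / srcBits;
  return sizeExpr.floorDiv(scaler);
}
```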

