[Mlir-commits] [mlir] [MLIR] Fix use-after-move for DEBUG builds, and broken assert logic. (PR #164763)

Slava Gurevich llvmlistbot at llvm.org
Sat Oct 25 16:59:18 PDT 2025


https://github.com/noclowns updated https://github.com/llvm/llvm-project/pull/164763

>From b874a02fb7b7cb7f3b5ca6d5526913461463b13f Mon Sep 17 00:00:00 2001
From: Slava Gurevich <sgurevich at gmail.com>
Date: Wed, 22 Oct 2025 21:39:40 -0700
Subject: [PATCH] [MLIR] Fix use-after-move for Debug builds, and broken assert
 logic.

These issues affect only Debug builds, and Release builds with asserts enabled.

1. In `SparseTensor.h`, a variable is moved from inside an assert. This side effect alters the variable's subsequent use and causes Debug builds to diverge from Release builds (where asserts are disabled).

2. In `IterationGraphSorter.cpp`, the constructor arguments are moved from in the member initializer list. Because the arguments and the members they initialize share the same names, the arguments shadow the members inside the constructor body, so the unqualified names in the asserts referred to the already moved-from arguments. This is not merely undefined behavior; it renders the asserts ineffective: a moved-from `SmallVector` is reset to size 0, so the comparisons operate on two hollowed-out objects and always succeed. The ambiguity is fixed by qualifying the names with `this->`, so the asserts refer to the initialized member variables that carry the relevant state.

3. While fix 2 above made the asserts act as intended, it also unexpectedly broke one MLIR test: `llvm-lit -v mlir/test/Dialect/SparseTensor/sparse_scalars.mlir`. This required fixing the assert logic itself, which likely has never worked and went unnoticed all this time due to bug 2. Specifically, in the failing test the `%argq` operand of `ins` has the scalar type `f32`, but the assert as originally written had no support for scalar types and rejected it.

Testing:
```
ninja check-mlir
llvm-lit -v mlir/test/Dialect/SparseTensor/sparse_scalars.mlir
```
---
 .../mlir/Dialect/SparseTensor/IR/SparseTensor.h  |  6 ++----
 .../Transforms/Utils/IterationGraphSorter.cpp    | 16 ++++++++++++----
 2 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensor.h b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensor.h
index d0a3f01afe871..43e48a6d34026 100644
--- a/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensor.h
+++ b/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensor.h
@@ -158,16 +158,14 @@ namespace sparse_tensor {
 /// Convenience method to abbreviate casting `getType()`.
 template <typename T>
 inline RankedTensorType getRankedTensorType(T &&t) {
-  assert(static_cast<bool>(std::forward<T>(t)) &&
-         "getRankedTensorType got null argument");
+  assert(static_cast<bool>(t) && "getRankedTensorType got null argument");
   return dyn_cast<RankedTensorType>(std::forward<T>(t).getType());
 }
 
 /// Convenience method to abbreviate casting `getType()`.
 template <typename T>
 inline MemRefType getMemRefType(T &&t) {
-  assert(static_cast<bool>(std::forward<T>(t)) &&
-         "getMemRefType got null argument");
+  assert(static_cast<bool>(t) && "getMemRefType got null argument");
   return cast<MemRefType>(std::forward<T>(t).getType());
 }
 
diff --git a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/IterationGraphSorter.cpp b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/IterationGraphSorter.cpp
index 73e0f3d2891d7..f53d2727c9b00 100644
--- a/mlir/lib/Dialect/SparseTensor/Transforms/Utils/IterationGraphSorter.cpp
+++ b/mlir/lib/Dialect/SparseTensor/Transforms/Utils/IterationGraphSorter.cpp
@@ -159,14 +159,22 @@ IterationGraphSorter::IterationGraphSorter(
       loop2OutLvl(loop2OutLvl), iterTypes(std::move(iterTypes)),
       strategy(strategy) {
   // One map per tensor.
-  assert(loop2InsLvl.size() == ins.size());
+  assert(this->loop2InsLvl.size() == this->ins.size());
   // All the affine maps have the same number of dimensions (loops).
   assert(llvm::all_equal(llvm::map_range(
-      loop2InsLvl, [](AffineMap m) { return m.getNumDims(); })));
+      this->loop2InsLvl, [](AffineMap m) { return m.getNumDims(); })));
   // The number of results of the map should match the rank of the tensor.
-  assert(llvm::all_of(llvm::zip(loop2InsLvl, ins), [](auto mvPair) {
+  assert(llvm::all_of(llvm::zip(this->loop2InsLvl, this->ins), [](auto mvPair) {
     auto [m, v] = mvPair;
-    return m.getNumResults() == cast<ShapedType>(v.getType()).getRank();
+
+    // For ranked types the rank must match.
+    // Simply return true for UnrankedTensorType
+    if (auto shapedType = llvm::dyn_cast<ShapedType>(v.getType())) {
+      return !shapedType.hasRank() ||
+             (m.getNumResults() == shapedType.getRank());
+    }
+    // Non-shaped (scalar) types behave like rank-0.
+    return m.getNumResults() == 0;
   }));
 
   itGraph.resize(getNumLoops(), std::vector<bool>(getNumLoops(), false));



More information about the Mlir-commits mailing list