[Mlir-commits] [mlir] [MLIR] [SparseTensor] Implement multiple loop ordering heuristics for sparse tensor dialect (PR #151885)

Peiming Liu llvmlistbot at llvm.org
Sat Aug 9 13:15:48 PDT 2025


================
@@ -271,3 +349,907 @@ void IterationGraphSorter::addConstraints(Value t, AffineMap loop2LvlMap) {
     }
   }
 }
+
+// Get the encoding info (storage format, level types, etc.) for a tensor.
+SparseTensorEncodingAttr getEncodingInfo(Value tensor) {
+  auto tensorType = dyn_cast<RankedTensorType>(tensor.getType());
+  if (!tensorType)
+    return nullptr; // Not a ranked tensor type
+  return getSparseTensorEncoding(tensorType);
+}
+
+void IterationGraphSorter::analyzeMemoryPatterns() {
----------------
PeimingLiu wrote:

TBH, I am not convinced that the current implementation is what we want: the heuristics (and especially their implementation) are not intuitive.

Personally, I would prefer a much simpler set of strategies here: the purpose of this pass is mostly to serve as a PoC showcasing how picking different loop schedules can affect performance. Downstream users can then easily implement a scheduler according to their own needs.
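
To make that suggestion concrete, a minimal strategy set could look roughly like the sketch below. This is purely illustrative, not code from this PR: the LoopOrderingStrategy enum, the pickNextLoop helper, and the dense/sparse tie-breaking rules are hypothetical names chosen for the example.

#include <cassert>
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Support/ErrorHandling.h"

using llvm::ArrayRef;

// Hypothetical: a small, closed set of scheduling strategies.
enum class LoopOrderingStrategy {
  kDefault,     // Keep the order produced by the iteration graph alone.
  kDenseOuter,  // Prefer loops over dense levels in outer positions.
  kSparseOuter, // Prefer loops over sparse levels in outer positions.
};

// Hypothetical hook: among the loops whose dependences are already
// satisfied, pick the next one to schedule according to the strategy.
static unsigned pickNextLoop(ArrayRef<unsigned> candidates,
                             ArrayRef<bool> loopIsSparse,
                             LoopOrderingStrategy strategy) {
  assert(!candidates.empty() && "expected at least one schedulable loop");
  switch (strategy) {
  case LoopOrderingStrategy::kDefault:
    return candidates.front();
  case LoopOrderingStrategy::kDenseOuter:
    for (unsigned l : candidates)
      if (!loopIsSparse[l])
        return l;
    return candidates.front();
  case LoopOrderingStrategy::kSparseOuter:
    for (unsigned l : candidates)
      if (loopIsSparse[l])
        return l;
    return candidates.front();
  }
  llvm_unreachable("unhandled LoopOrderingStrategy");
}

Each strategy is just a tie-breaking rule on top of the existing topological sort, so a downstream user can plug in their own policy by adding an enum value and a case, without touching the dependence analysis.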

https://github.com/llvm/llvm-project/pull/151885


More information about the Mlir-commits mailing list