[Mlir-commits] [mlir] [MLIR] [SparseTensor] Implement multiple loop ordering heuristics for sparse tensor dialect (PR #151885)

Aart Bik llvmlistbot at llvm.org
Fri Aug 8 10:50:53 PDT 2025


================
@@ -271,3 +349,907 @@ void IterationGraphSorter::addConstraints(Value t, AffineMap loop2LvlMap) {
     }
   }
 }
+
+// Get the sparse tensor encoding (storage format, level types, etc.)
+// for a tensor value, or nullptr if it has none.
+SparseTensorEncodingAttr getEncodingInfo(Value tensor) {
+  auto tensorType = dyn_cast<RankedTensorType>(tensor.getType());
+  if (!tensorType)
+    return nullptr; // Not a ranked tensor type
+  return getSparseTensorEncoding(tensorType);
+}
+
+void IterationGraphSorter::analyzeMemoryPatterns() {
----------------
aartbik wrote:

This is a massive block of code that is impossible for me to review without background. Do you have a design doc for the strategies you implemented? Where do all the hardcoded values come from? What particular case have you been optimizing for (e.g., x86)? Will these strategies generalize to other situations: CPU, GPU, etc.?

I really need some more guidance on your design before I can give you feedback on the actual code.

https://github.com/llvm/llvm-project/pull/151885
