[Mlir-commits] [mlir] [MLIR] [SparseTensor] Implement multiple loop ordering heuristics for sparse tensor dialect (PR #151885)

Govind Malasani llvmlistbot at llvm.org
Fri Aug 8 12:30:22 PDT 2025


================
@@ -271,3 +349,907 @@ void IterationGraphSorter::addConstraints(Value t, AffineMap loop2LvlMap) {
     }
   }
 }
+
+// Get encoding info (storage format, level types, etc.) for a tensor value.
+SparseTensorEncodingAttr getEncodingInfo(Value tensor) {
+  auto tensorType = dyn_cast<RankedTensorType>(tensor.getType());
+  if (!tensorType)
+    return nullptr; // Not a ranked tensor type
+  return getSparseTensorEncoding(tensorType);
+}
+
+void IterationGraphSorter::analyzeMemoryPatterns() {
----------------
gmalasan wrote:

Sorry for a lot of the simple issues throughout the code, and thank you both for taking the time to give me feedback. I'll definitely go ahead and try to break the PR up.

As for the design, I admittedly didn't put much thought into it, as the many magic constants show. My plan was to start by collecting a bunch of heuristics that may or may not be beneficial, then benchmark them and tune the numbers, at which point the results would hopefully make more sense. But I got stuck trying to figure out how to benchmark this properly, what kinds of test cases to use, and so on.

Does this idea make sense? If so, I was thinking I could start with a PR containing just the flag and the basic infrastructure, then implement some of the simple heuristics, like sparse-outer or dense-outer (a rough sketch of what I mean is below), and submit a separate PR for each heuristic from that point onward. Eventually I'd also like an adaptive mode that inspects the tensor's characteristics and picks whichever heuristic looks best for it.
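To make the sparse-outer/dense-outer idea concrete, here is a minimal sketch of the kind of comparator I have in mind. All of the names here (`LoopInfo`, `numSparseLevels`, `sortLoopsSparseOuter`) are made up for illustration and are not from the current patch:

```cpp
// Hypothetical sketch only; none of these names exist in the PR.
#include <algorithm>
#include <vector>

// Per-loop summary gathered from the affine maps of each tensor operand.
struct LoopInfo {
  unsigned loopId;          // Original loop index.
  unsigned numSparseLevels; // How many sparse (compressed/singleton) levels
                            // this loop indexes into.
  unsigned numDenseLevels;  // How many dense levels this loop indexes into.
};

// "Sparse outer": place loops that touch more sparse levels earlier, so the
// compressed dimensions are iterated in the outer loops. "Dense outer" would
// simply flip the comparison.
std::vector<unsigned> sortLoopsSparseOuter(std::vector<LoopInfo> loops) {
  std::stable_sort(loops.begin(), loops.end(),
                   [](const LoopInfo &a, const LoopInfo &b) {
                     return a.numSparseLevels > b.numSparseLevels;
                   });
  std::vector<unsigned> order;
  order.reserve(loops.size());
  for (const LoopInfo &l : loops)
    order.push_back(l.loopId);
  return order;
}
```

Of course, the real version could only use something like this as a tie-breaker among orderings that the iteration graph's topological constraints already allow.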

Would you guys recommend a large refactor?

Honestly, I'm still pretty confused, and this is my first time ever submitting a PR, so once again thanks for all the feedback.

https://github.com/llvm/llvm-project/pull/151885

