[Mlir-commits] [mlir] [MLIR][Tensor] Remove FoldDimOf[Expand|Collapse]Shape Pattern (PR #134219)
Mehdi Amini
llvmlistbot at llvm.org
Wed Apr 9 10:32:13 PDT 2025
joker-eph wrote:
Thanks for elaborating @Groverkss!
> So now, to Question 2: is the correct pattern, in whatever shape or form, a good canonicalization? For static cases, expand_shape already has a folder that moves the op to a more canonical form, where the output_shape field goes from dynamic to static. For dynamic cases, the answer to this question is no. The pattern is not a canonicalization for expand_shape at all! In its correct form, it doesn't change the representation of expand_shape in any way; instead, it uses information from expand_shape to bubble the tensor.dim operations upwards. What I understand by "non-local" effects here is that it's doing transformations around the operation that is meant to be canonicalized, but not doing anything to the operation itself. So it's "non-local" to the operation being canonicalized. This is a clear indication that this pattern is not really a canonicalization.
I see two things here:
1) "a pattern not touching the operation itself its not a canonicalization", I'm not sure where is this coming from? Idon't quite see why this should be criteria? This is actually something that is often done: you know you can simplify an op from another dialect when it consumes results from your own dialect, so you just load such a pattern in your op that actually affect another one (cf below as well wrt layering).
2) The pattern is actually a DimOp canonicalization; this is visible in the signature: `LogicalResult matchAndRewrite(DimOp dimOp`. It is meant to simplify the dim operation, not the `expand_shape` operation (see the sketch below).
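To make that concrete, here is a minimal sketch of what such a DimOp-rooted pattern could look like; the pattern name and the exact arithmetic are illustrative assumptions, not the actual upstream implementation. Note that the `expand_shape` is only read, never modified, and that this is also where an `affine.apply` gets created out of thin air:

```cpp
#include "mlir/Dialect/Affine/IR/AffineOps.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

// Hypothetical name; rewrites dim(expand_shape(%src), d) in terms of %src.
struct BubbleDimOfExpandShape : public OpRewritePattern<tensor::DimOp> {
  using OpRewritePattern<tensor::DimOp>::OpRewritePattern;

  LogicalResult matchAndRewrite(tensor::DimOp dimOp,
                                PatternRewriter &rewriter) const override {
    auto expandOp = dimOp.getSource().getDefiningOp<tensor::ExpandShapeOp>();
    std::optional<int64_t> dimIndex = dimOp.getConstantIndex();
    if (!expandOp || !dimIndex)
      return failure();

    // Find the reassociation group that yields the queried result dim.
    for (auto [srcDim, group] :
         llvm::enumerate(expandOp.getReassociationIndices())) {
      if (!llvm::is_contained(group, *dimIndex))
        continue;
      // dim(result, d) = dim(src, srcDim) / product of the static sizes of
      // the other dims in the group; bail out if any of them is dynamic.
      int64_t product = 1;
      ArrayRef<int64_t> resultShape = expandOp.getResultType().getShape();
      for (int64_t d : group) {
        if (d == *dimIndex)
          continue;
        if (ShapedType::isDynamic(resultShape[d]))
          return failure();
        product *= resultShape[d];
      }
      // Only the dim op is rewritten; the expand_shape is left untouched.
      Value srcSize = rewriter.create<tensor::DimOp>(
          dimOp.getLoc(), expandOp.getSrc(), srcDim);
      // This is where an affine construct appears "out of thin air".
      AffineExpr s0 = rewriter.getAffineSymbolExpr(0);
      rewriter.replaceOpWithNewOp<affine::AffineApplyOp>(
          dimOp, AffineMap::get(/*dimCount=*/0, /*symbolCount=*/1,
                                s0.floorDiv(product)),
          ValueRange{srcSize});
      return success();
    }
    return failure();
  }
};
```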
Where the pattern is loaded isn't very important: I'm not sure why this pattern is loaded here instead of in DimOp, for example; in general, when we do this, it is just a dialect-layering question (one dialect loads a canonicalization for an op in another dialect).
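For illustration, that kind of cross-dialect loading looks something like this (hypothetical op and pattern names, not actual upstream code):

```cpp
// MyOp (from "mydialect") registers a canonicalization pattern that is
// rooted on tensor.dim, i.e. an op from another dialect that consumes
// MyOp's results.
void MyOp::getCanonicalizationPatterns(RewritePatternSet &results,
                                       MLIRContext *context) {
  // SimplifyDimOfMyOp matches tensor::DimOp, not MyOp.
  results.add<SimplifyDimOfMyOp>(context);
}
```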
> Finally, the pattern is implemented in transformations that refine shape information through ReifyRankedShapedTypeOpInterface.
But that does not run in canonicalization, right? If you wanted to just eliminate a duplicate implementation, you could call the ReifyRankedShapedTypeOpInterface implementation from here and delegate to it.
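To sketch what that delegation could look like (a hedged approximation; the pattern name is made up and this is not actual upstream code):

```cpp
#include "mlir/Dialect/Arith/Utils/Utils.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/PatternMatch.h"
#include "mlir/Interfaces/InferTypeOpInterface.h"

using namespace mlir;

// Resolves tensor.dim of any producer implementing
// ReifyRankedShapedTypeOpInterface by asking the producer for its shapes,
// so the shape logic lives in one place (the interface implementation).
struct ReifyDimOfProducer : public OpRewritePattern<tensor::DimOp> {
  using OpRewritePattern<tensor::DimOp>::OpRewritePattern;

  LogicalResult matchAndRewrite(tensor::DimOp dimOp,
                                PatternRewriter &rewriter) const override {
    auto result = dyn_cast<OpResult>(dimOp.getSource());
    std::optional<int64_t> dimIndex = dimOp.getConstantIndex();
    if (!result || !dimIndex)
      return failure();
    auto reifiable =
        dyn_cast<ReifyRankedShapedTypeOpInterface>(result.getOwner());
    if (!reifiable)
      return failure();

    // Delegate the shape computation to the producer.
    ReifiedRankedShapedTypeDims shapes;
    if (failed(reifiable.reifyResultShapes(rewriter, shapes)))
      return failure();
    OpFoldResult size = shapes[result.getResultNumber()][*dimIndex];
    rewriter.replaceOp(dimOp, getValueOrCreateConstantIndexOp(
                                  rewriter, dimOp.getLoc(), size));
    return success();
  }
};
```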
I'm surprised you didn't mention the use of `affine.apply`: are there other patterns in the Tensor dialect that introduce `affine.apply` (or `affine` dialect constructs) "out of thin air"? These two seem to be the only ones I can spot.
Would we be able to completely remove the dependency of the Tensor dialect on the affine dialect if these patterns are removed, or are there other things left right now?
https://github.com/llvm/llvm-project/pull/134219