[Mlir-commits] [mlir] [mlir][Vector] Add fold transpose(shape_cast) -> shape_cast (PR #73951)
Quinn Dawkins
llvmlistbot at llvm.org
Thu Nov 30 08:13:57 PST 2023
================
@@ -5548,12 +5548,55 @@ class FoldTransposeCreateMask final : public OpRewritePattern<TransposeOp> {
   }
 };
+/// Folds transpose(shape_cast) into a new shape_cast, when the transpose just
+/// permutes a unit dim from the result of the shape_cast.
+class FoldTransposeShapeCast : public OpRewritePattern<TransposeOp> {
+  using OpRewritePattern::OpRewritePattern;
+
+  LogicalResult matchAndRewrite(TransposeOp transpOp,
+                                PatternRewriter &rewriter) const override {
+    Value transposeSrc = transpOp.getVector();
+    auto shapeCastOp = transposeSrc.getDefiningOp<vector::ShapeCastOp>();
+    if (!shapeCastOp)
+      return failure();
+
+    auto sourceType = transpOp.getSourceVectorType();
----------------
qedawkins wrote:
The point I wanted to make was that the pattern in the test is definitely a reasonable canonicalization (i.e. the transpose is just shuffling unit dims introduced by the shape_cast), but unit dims that are present before the shape_cast touch on the same discussion. My main concern with making the earlier pattern a canonicalization wasn't SPIR-V specific; it was more a matter of whether it was a pattern we wanted _globally_. Semantic quirks of `shape_cast` aside (being discussed on Discourse), higher-level vector operations like `vector.transpose` or `vector.contract` play nicer with `transpose` than with `shape_cast`. That's why I see the vector lowering pattern as reasonable; SPIR-V should use the "LLVM" lowering for `shape_cast` in that case. Making it a canonicalization, though, means that this needs to be the canonical representation everywhere. Because there's already a `shape_cast` in the IR here, I'm not blocking, but there's equally no reason we couldn't have a `shape_cast` alongside "higher level" vector IR, hence my question.
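For concreteness, a minimal sketch of the case this fold targets (shapes and SSA names are made up for illustration, not taken from the PR's tests): the shape_cast introduces a unit dim and the transpose only moves that unit dim, so the pair collapses into a single shape_cast.

```mlir
// Before: the transpose only moves the unit dim that the shape_cast introduced.
%0 = vector.shape_cast %arg0 : vector<4x4xf32> to vector<1x4x4xf32>
%1 = vector.transpose %0, [1, 0, 2] : vector<1x4x4xf32> to vector<4x1x4xf32>

// After the fold: one shape_cast producing the unit-dim-permuted type directly.
%1 = vector.shape_cast %arg0 : vector<4x4xf32> to vector<4x1x4xf32>
```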
I feel like we've been blocking your work with how much this conversation has blown up though, and I am sorry about that :(
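To make the remaining question concrete (again with made-up shapes): when the unit dim already exists before the shape_cast, the same data movement can be written either as a transpose or as a shape_cast, and choosing one as the canonical form is a global decision rather than a SPIR-V-specific one.

```mlir
// Both forms move a pre-existing unit dim and yield the same value; the open
// question is which one "higher level" vector IR should be expected to carry.
%a = vector.transpose %v, [1, 0, 2] : vector<1x4x8xf32> to vector<4x1x8xf32>
%b = vector.shape_cast %v : vector<1x4x8xf32> to vector<4x1x8xf32>
```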
https://github.com/llvm/llvm-project/pull/73951