[PATCH] D78463: [mlir][Linalg] NFC: Refactor fusion on tensors to enable extending it to fusing different kinds of linalg operations on tensors.
Mahesh Ravishankar via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Thu Apr 23 10:48:24 PDT 2020
mravishankar marked an inline comment as done.
mravishankar added inline comments.
================
Comment at: mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp:417
+ /// result of producer is used as operand at `consumerIdx` in the `consumer`.
+ static Operation *fuse(PatternRewriter &rewriter, ProducerOpTy producer,
+ ConsumerOpTy consumer, unsigned consumerIdx,
----------------
mravishankar wrote:
> nicolasvasilache wrote:
> > This level of indirection does not seem to pay for itself.
> > Can we move the body of this function just above:
> > ```
> > if (auto genericOp = dyn_cast<GenericOp>(producer)) {
> > ```
> > on l. 590
> I want to keep the implementation of the different fusions separate. This patch only has fusion of GenericOp with GenericOp. The follow-up CL adds the methods to fuse GenericOp with TensorReshapeOp, and TensorReshapeOp with GenericOp (see D78464).
> I wasn't able to see a way in which you don't have to write these pairwise implementations. They have different conditions to check, and the exact steps to generate the fused op diverge. Part of the problem is that TensorReshapeOp is not a structured op (not saying it should be; that's what made me go in this direction).
> I could go further: you could fuse a GenericOp with a ConstantOp that is a splat constant. So given all that, this seemed like the most extensible design.
> The base case is just a place where common functionality can be kept and a common invocation method is defined. In this case, though, only the operand marshalling (and checking that the ops are fusible) is common.
> That being said, happy to course correct. I'll try to simplify the structure anyway.
Removed the level of indirection here. PTAL
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D78463/new/
https://reviews.llvm.org/D78463