[all-commits] [llvm/llvm-project] eb4efa: [mlir][Linalg] Enhance Linalg fusion on generic op...

Han-Chung Wang via All-commits all-commits at lists.llvm.org
Fri Aug 28 01:56:18 PDT 2020


  Branch: refs/heads/master
  Home:   https://github.com/llvm/llvm-project
  Commit: eb4efa883212352b2b32ba8aca8525ad17898ed4
      https://github.com/llvm/llvm-project/commit/eb4efa883212352b2b32ba8aca8525ad17898ed4
  Author: Hanhan Wang <hanchung at google.com>
  Date:   2020-08-28 (Fri, 28 Aug 2020)

  Changed paths:
    M mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
    M mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp
    M mlir/test/Dialect/Linalg/fusion-tensor.mlir

  Log Message:
  -----------
  [mlir][Linalg] Enhance Linalg fusion on generic op and tensor_reshape op.

Previously, the tensor_reshape op was fusible only in the collapsing case. Now we
propagate the op to all the operands so there is a further chance to fuse it
with a generic op. The pre-conditions are:

1) The producer is not an indexed_generic op.
2) All the shapes of the operands are the same.
3) All the indexing maps are identity.
4) All the loops are parallel loops.
5) The producer has a single user.
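
The five pre-conditions above can be sketched as a single predicate. The dictionary-based op representation below is a hypothetical stand-in for the actual MLIR data structures, not the patch's implementation:

```python
# Minimal sketch of the fusion pre-conditions, using a hypothetical
# dictionary-based stand-in for a linalg.generic producer op.

def is_identity_map(indexing_map, rank):
    # An identity map sends loop dimension i to result dimension i.
    return indexing_map == list(range(rank))

def can_fuse_with_reshape(producer):
    rank = len(producer["shape"])
    return (
        # 1) The producer is not an indexed_generic op.
        producer["kind"] != "indexed_generic"
        # 2) All the shapes of the operands are the same.
        and all(s == producer["shape"] for s in producer["operand_shapes"])
        # 3) All the indexing maps are identity.
        and all(is_identity_map(m, rank) for m in producer["indexing_maps"])
        # 4) All the loops are parallel loops.
        and all(it == "parallel" for it in producer["iterator_types"])
        # 5) The producer has a single user.
        and producer["num_users"] == 1
    )
```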

It is possible to fuse the ops even when the producer is an indexed_generic op,
because the original indices can still be recovered. E.g., if the reshape op
collapses d0 and d1, we can use DimOp to get the width of d1, compute the index
`d0 * width + d1`, and then replace all the uses with it. However, this pattern
is not implemented in this patch.
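
The index arithmetic described above can be illustrated numerically. This is a hedged sketch of the computation only, not MLIR code; in the IR, `width` would come from a DimOp on the d1 dimension:

```python
def linearize_collapsed_index(d0, d1, width):
    # When a reshape collapses d0 and d1 into one dimension, the index
    # in the collapsed dimension is d0 * width + d1, where width is the
    # extent of d1 (obtainable via DimOp in the IR).
    return d0 * width + d1

def split_index(linear, width):
    # Round-trip: recover the original (d0, d1) from the linearized index.
    return linear // width, linear % width
```

For example, with `width = 5`, the pair `(d0, d1) = (2, 3)` linearizes to `2 * 5 + 3 = 13`, and splitting 13 recovers `(2, 3)`.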

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D86314



