[all-commits] [llvm/llvm-project] ad2f9f: [mlir] Fix subtensor_insert bufferization.

Sean Silva via All-commits all-commits at lists.llvm.org
Thu Nov 12 14:57:13 PST 2020


  Branch: refs/heads/master
  Home:   https://github.com/llvm/llvm-project
  Commit: ad2f9f67451cbb5e3af9760222f802da82f8024e
      https://github.com/llvm/llvm-project/commit/ad2f9f67451cbb5e3af9760222f802da82f8024e
  Author: Sean Silva <silvasean at google.com>
  Date:   2020-11-12 (Thu, 12 Nov 2020)

  Changed paths:
    A mlir/integration_test/Dialect/Linalg/CPU/test-subtensor-insert-multiple-uses.mlir
    M mlir/lib/Dialect/Linalg/Transforms/Bufferize.cpp
    M mlir/test/Dialect/Linalg/bufferize.mlir

  Log Message:
  -----------
  [mlir] Fix subtensor_insert bufferization.

It was incorrect in the presence of a tensor argument with multiple
uses.

The bufferization of subtensor_insert was writing into a converted
memref operand, but there is no guarantee that the converted memref for
that operand is safe to write into. When the tensor operand has
multiple uses, they all share the same converted memref, so the
in-place write performed by the subtensor_insert bufferization clobbers
the value that the other uses observe, violating the tensor-level
semantics.
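
For illustration, a sketch of the failing pattern, loosely modeled on
the new integration test (op syntax as of this commit; the values and
names are illustrative):

  %t = constant dense<10.0> : tensor<2xf32>
  %v = constant dense<20.0> : tensor<1xf32>
  // Both inserts use the same tensor %t. Bufferizing each one as an
  // in-place write into the single memref that %t converts to makes
  // the two results interfere with each other.
  %a = subtensor_insert %v into %t[0] [1] [1] : tensor<1xf32> into tensor<2xf32>
  %b = subtensor_insert %v into %t[1] [1] [1] : tensor<1xf32> into tensor<2xf32>
  // Tensor semantics require %a == [20., 10.] and %b == [10., 20.];
  // the in-place bufferization could yield [20., 20.] for both.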

I left some comments in a TODO about ways forward on this. I will be
working actively on this problem in the coming days.

Differential Revision: https://reviews.llvm.org/D91371


  Commit: faa66b1b2c7a328e747c283dfd0dcf43c365330d
      https://github.com/llvm/llvm-project/commit/faa66b1b2c7a328e747c283dfd0dcf43c365330d
  Author: Sean Silva <silvasean at google.com>
  Date:   2020-11-12 (Thu, 12 Nov 2020)

  Changed paths:
    M mlir/include/mlir/Dialect/Linalg/Passes.td
    M mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.h
    M mlir/include/mlir/Dialect/StandardOps/Transforms/Passes.td
    M mlir/integration_test/Dialect/Linalg/CPU/test-elementwise.mlir
    M mlir/integration_test/Dialect/Linalg/CPU/test-subtensor-insert-multiple-uses.mlir
    A mlir/integration_test/Dialect/Linalg/CPU/test-subtensor-insert.mlir
    M mlir/integration_test/Dialect/Linalg/CPU/test-tensor-e2e.mlir
    M mlir/integration_test/Dialect/Linalg/CPU/test-tensor-matmul.mlir
    M mlir/lib/Dialect/Linalg/Transforms/Bufferize.cpp
    M mlir/lib/Dialect/StandardOps/Transforms/CMakeLists.txt
    A mlir/lib/Dialect/StandardOps/Transforms/TensorConstantBufferize.cpp
    M mlir/test/Dialect/Linalg/bufferize.mlir
    A mlir/test/Dialect/Standard/tensor-constant-bufferize.mlir

  Log Message:
  -----------
  [mlir] Bufferize tensor constant ops

We lower them to a std.global_memref (uniqued by constant value) plus a
std.get_global_memref that produces the corresponding memref value.
This allows removing Linalg's somewhat hacky lowering of tensor
constants, now that std properly supports this.
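
As a sketch (op syntax as of this commit; the global's name here is
illustrative), a tensor constant such as

  %t = constant dense<[1.0, 2.0]> : tensor<2xf32>

bufferizes to roughly

  global_memref "private" constant @__constant_2xf32 : memref<2xf32> = dense<[1.0, 2.0]>
  ...
  %m = get_global_memref @__constant_2xf32 : memref<2xf32>

with the global reused for any other constant with the same value.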

Differential Revision: https://reviews.llvm.org/D91306


  Commit: 796880288a756d1866dad0210a818896eda844cc
      https://github.com/llvm/llvm-project/commit/796880288a756d1866dad0210a818896eda844cc
  Author: Sean Silva <silvasean at google.com>
  Date:   2020-11-12 (Thu, 12 Nov 2020)

  Changed paths:
    M mlir/include/mlir/Dialect/StandardOps/IR/Ops.td

  Log Message:
  -----------
  [mlir] Make tensor_to_memref op docs match reality

The previous wording defined it as allocating a new memref for its
result. However, that is not how the dialect conversion framework
treats it: the framework does the equivalent of inserting it and
folding it away internally, even independent of any canonicalization
patterns that we have defined.

The semantics as previously written were also very constraining:
nontrivial analysis would be needed to prove that the fresh allocation
isn't required for correctness (e.g. to avoid aliasing). By removing
the allocation from the op's semantics, we avoid forcing that analysis
on every consumer of the op.
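
A sketch of what the conversion framework effectively does with the op
(op syntax as of this commit):

  %m0 = ... : memref<4xf32>
  %t  = tensor_load %m0 : memref<4xf32>      // materialized by the framework
  %m1 = tensor_to_memref %t : memref<4xf32>  // materialized by the framework

The pair is folded so that uses of %m1 see %m0 directly: no new memref
is ever allocated, and the result of tensor_to_memref may simply alias
an existing buffer.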

Differential Revision: https://reviews.llvm.org/D91382


Compare: https://github.com/llvm/llvm-project/compare/d0ba6c4002e4...796880288a75

