[all-commits] [llvm/llvm-project] 569e4f: `shape` dialect: add some ops

Sean Silva via All-commits all-commits at lists.llvm.org
Fri Mar 27 16:44:32 PDT 2020


  Branch: refs/heads/master
  Home:   https://github.com/llvm/llvm-project
  Commit: 569e4f9bc99a755cc30f0102b29b1eefd4fa33b4
      https://github.com/llvm/llvm-project/commit/569e4f9bc99a755cc30f0102b29b1eefd4fa33b4
  Author: Sean Silva <silvasean at google.com>
  Date:   2020-03-27 (Fri, 27 Mar 2020)

  Changed paths:
    M mlir/include/mlir/Dialect/Shape/IR/Shape.h
    M mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
    M mlir/lib/Dialect/Shape/CMakeLists.txt
    M mlir/lib/Dialect/Shape/IR/Shape.cpp

  Log Message:
  -----------
  `shape` dialect: add some ops

- add `to_extent_tensor`
- rename `create_shape` to `from_extent_tensor` for symmetry
- add `split_at` and `concat` ops for basic shape manipulations
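As an illustrative sketch, the new ops might look roughly like this in IR (op names are from this patch; the assembly syntax and types shown are assumptions, not verbatim from the commit):

```mlir
// Convert between the opaque !shape.shape type and a rank-1
// tensor of extents (one element per dimension).
%s  = "shape.from_extent_tensor"(%t) : (tensor<?xindex>) -> !shape.shape
%t2 = "shape.to_extent_tensor"(%s)  : (!shape.shape) -> tensor<?xindex>

// Split a shape into a head and tail at an index, then
// concatenate two shapes back together.
%head, %tail = "shape.split_at"(%s, %idx)
    : (!shape.shape, i32) -> (!shape.shape, !shape.shape)
%joined = "shape.concat"(%head, %tail)
    : (!shape.shape, !shape.shape) -> !shape.shape
```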

This set of ops is inspired by the requirements of lowering a dynamic-shape-aware batch matmul op. For such an op, the "matrix" dimensions aren't subject to broadcasting but the batch dimensions are, so we need to slice out the batch dimensions, broadcast them, and reconstruct the final output shape. Furthermore, the broadcasting op used downstream takes a tensor of extents as its preferred shape interface.
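Under those requirements, the shape computation for a batch matmul could be sketched as follows (the splitting index semantics, the `shape.broadcast` op, and the exact assembly syntax are assumptions here, shown only to illustrate how the new ops compose):

```mlir
// %lhs_shape, %rhs_shape : !shape.shape for the two operands.
// Peel off the two trailing "matrix" dims; only batch dims broadcast.
%lhs_batch, %lhs_mat = "shape.split_at"(%lhs_shape, %c_minus2)
    : (!shape.shape, i32) -> (!shape.shape, !shape.shape)
%rhs_batch, %rhs_mat = "shape.split_at"(%rhs_shape, %c_minus2)
    : (!shape.shape, i32) -> (!shape.shape, !shape.shape)

// Broadcast the batch dims, then reassemble: batch dims ++ [M, N].
%batch = "shape.broadcast"(%lhs_batch, %rhs_batch)
    : (!shape.shape, !shape.shape) -> !shape.shape
%result_shape = "shape.concat"(%batch, %mat_dims)
    : (!shape.shape, !shape.shape) -> !shape.shape

// The downstream broadcasting op consumes a tensor of extents.
%extents = "shape.to_extent_tensor"(%result_shape)
    : (!shape.shape) -> tensor<?xindex>
```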

However, this functionality is quite general: `to_extent_tensor` is clearly needed long-term to support many common patterns that involve computations on shapes, and the shape manipulation ops introduced here can evolve. The specific choices made here took into consideration the potentially unranked nature of the `!shape.shape` type, which means that a simple listing of dimensions to extract isn't possible in general.

Differential Revision: https://reviews.llvm.org/D76817



