[all-commits] [llvm/llvm-project] d8dc1c: [MLIR][Linalg] Add max named op to linalg

Renato Golin via All-commits all-commits at lists.llvm.org
Fri Jul 7 05:40:33 PDT 2023


  Branch: refs/heads/main
  Home:   https://github.com/llvm/llvm-project
  Commit: d8dc1c22bf926cb8c87d7ff72bae6aafe076bbc2
      https://github.com/llvm/llvm-project/commit/d8dc1c22bf926cb8c87d7ff72bae6aafe076bbc2
  Author: Renato Golin <rengolin at systemcall.eu>
  Date:   2023-07-07 (Fri, 07 Jul 2023)

  Changed paths:
    M mlir/include/mlir/Dialect/Linalg/IR/LinalgNamedStructuredOps.yaml
    M mlir/python/mlir/dialects/linalg/opdsl/ops/core_named_ops.py
    M mlir/test/Dialect/Linalg/generalize-named-ops.mlir
    M mlir/test/Dialect/Linalg/named-ops-fail.mlir
    M mlir/test/Dialect/Linalg/named-ops.mlir

  Log Message:
  -----------
  [MLIR][Linalg] Add max named op to linalg

I've been trying to come up with a simple and clean implementation for
ReLU. TOSA uses `clamp`, which is probably the end goal, but that requires
a table-gen definition to make it efficient (attributes, lowering only
`min` or `max` when needed).

For now, `max` is a reasonable named op in its own right, not just for
ReLU, so we can start using it for tiling and fusion, and, upon success,
create a more complete `clamp` op that doesn't need a whole tensor filled
with zeroes or ones to implement the different activation functions.
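
For illustration only (not part of the patch), a rough sketch of how ReLU
could be expressed with the new named op by pairing the input with a
zero-filled tensor of the same shape; the function name and shapes are
made up for the example:

```mlir
// Hypothetical example: ReLU(x) = max(x, 0), using a zero-filled tensor.
func.func @relu(%x: tensor<4x8xf32>) -> tensor<4x8xf32> {
  // Materialize the zero tensor that the named op needs as a second input.
  %cst = arith.constant 0.0 : f32
  %empty = tensor.empty() : tensor<4x8xf32>
  %zeros = linalg.fill ins(%cst : f32) outs(%empty : tensor<4x8xf32>) -> tensor<4x8xf32>
  // Elementwise maximum via the new linalg named op.
  %init = tensor.empty() : tensor<4x8xf32>
  %relu = linalg.max ins(%x, %zeros : tensor<4x8xf32>, tensor<4x8xf32>)
                     outs(%init : tensor<4x8xf32>) -> tensor<4x8xf32>
  return %relu : tensor<4x8xf32>
}
```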

As with other named ops, we start by "requiring" type casts, broadcasts,
and zero-filled constant tensors to be handled by a more complex
pattern matcher, and can slowly simplify that with attributes or
structured matchers (e.g. PDL) in the future.

Differential Revision: https://reviews.llvm.org/D154703



