[PATCH] D153422: [mlir][Linalg] Add a softmax op
lorenzo chelini via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Wed Jun 21 09:38:50 PDT 2023
chelini added a comment.
Thanks! Some minor comments.
================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td:125
+ let extraClassDeclaration = [{
+ Value input() {
+ return getInput();
----------------
Any specific reason for having this wrapper? Can we use the tablegen-generated method `getInput` directly? Same for `getOutput`.
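For reference, ODS already emits a prefixed accessor for each declared operand, so the extra wrapper just duplicates it. A sketch (argument names and types assumed, not taken from the patch):
```
// Sketch: for an ODS declaration like this, tablegen generates
// getInput() and getOutput() accessors automatically, making
// hand-written input()/output() wrappers redundant.
let arguments = (ins AnyShapedType:$input,
                     AnyShapedType:$output);
```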
================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgOps.td:144
+ // Method to implement for specifying output range for
+ // DestinationStyleOpInterface
+ std::pair<int64_t, int64_t> getDpsInitsPositionRange() {
----------------
I cannot fully understand this sentence; maybe something like: `Implement the functions necessary for DestinationStyleOpInterface.`
================
Comment at: mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:2148
+LogicalResult SoftmaxOp::verify() {
+ Operation *op = getOperation();
+ auto inputType = input().getType().cast<ShapedType>();
----------------
I don't think getting the operation pointer is needed here. Below we can simply write:
```
if (failed(...))
  return emitOpError("...");
```
================
Comment at: mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:2166
+LogicalResult SoftmaxOp::fold(FoldAdaptor, SmallVectorImpl<OpFoldResult> &) {
+ return memref::foldMemRefCast(*this);
+}
----------------
Can you please comment on why we need this folder?
================
Comment at: mlir/test/Dialect/Linalg/invalid.mlir:743
+ %1 = linalg.softmax dimension(2) ins(%arg0 : tensor<2x16x32xf32>)
+ outs(%0: tensor<2x16xf32>)
+ -> tensor<2x16xf32>
----------------
nit (optional): I would align `ins` and `outs`.
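For example, the aligned form could look like this (a sketch based on the quoted test case):
```
%1 = linalg.softmax dimension(2) ins(%arg0 : tensor<2x16x32xf32>)
                                 outs(%0 : tensor<2x16xf32>) -> tensor<2x16xf32>
```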
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D153422/new/
https://reviews.llvm.org/D153422