[PATCH] D76414: [mlir][Linalg] Introduce linalg.pooling op.

Mahesh Ravishankar via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Mar 23 12:00:58 PDT 2020


mravishankar added a comment.

Nice. A few comments here.



================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:410
+  let description = [{
+    Applies a pooling function to all elements in each window of the input
+    multi-dimensional array, producing an output multi-dimensional array with
----------------
mehdi_amini wrote:
> The doc should define what is a "pooling function" and what does it mean to "pool" 
You should probably use this description (https://www.tensorflow.org/api_docs/python/tf/nn/pool). This is similar to how the conv op is defined.
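For reference, the pooling computation described in the linked tf.nn.pool docs can be sketched in plain Python: slide a window over the input and reduce each window with a pooling function. This is an illustrative sketch only; the function and parameter names are not from the patch.

```python
# Sketch of pooling as described in the tf.nn.pool docs: slide a window
# over the input and reduce each window with a pooling function
# (here: max). A 1-D, VALID-padding case for illustration.
def pool_1d(inp, window, stride, reduce_fn=max):
    out = []
    for start in range(0, len(inp) - window + 1, stride):
        out.append(reduce_fn(inp[start:start + window]))
    return out

print(pool_1d([1, 3, 2, 5, 4, 6], window=2, stride=2))  # -> [3, 5, 6]
```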


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:444
+    ```mlir
+      linalg.pooling { strides = [2, 1, 2] } %arg0, %arg1, %arg2 {
+        ^bb(%a: f32, %b: f32) :
----------------
nicolasvasilache wrote:
> If we go through the trouble of creating a new "named op" for pooling, I would not specify a region.
> 
> Instead I would have a `linalg.pooling_max`, `linalg.pooling_min`, `linalg.pooling.sum`, ... with no regions that just apply the proper operation when lowering to loops.
> 
> 
Wouldn't that require a new named op for every pooling operation? This seems better to me.
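The named-op proposal amounts to baking the reduction into the op name instead of carrying it in a region: when lowering to loops, the op name alone selects the combiner. A hypothetical sketch of that idea (the mapping and function names are illustrative, not from the patch):

```python
# Hypothetical mapping from named pooling ops to their combiners,
# illustrating the suggestion that linalg.pooling_max / _min / _sum
# need no region: the lowering picks the combiner by op name.
COMBINERS = {
    "linalg.pooling_max": max,
    "linalg.pooling_min": min,
    "linalg.pooling_sum": lambda a, b: a + b,
}

def reduce_window(op_name, window_values):
    # Apply the op's combiner across one window, as a loop lowering would.
    combine = COMBINERS[op_name]
    acc = window_values[0]
    for v in window_values[1:]:
        acc = combine(acc, v)
    return acc

print(reduce_window("linalg.pooling_sum", [1, 2, 3]))  # -> 6
print(reduce_window("linalg.pooling_max", [1, 5, 2]))  # -> 5
```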


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:474
+      // The window loops have the same number of loops as the output dimensions.
+      unsigned nWin = nPar;
+      SmallVector<StringRef, 8> iters(nPar, getParallelIteratorTypeName());
----------------
That seems strange. Looking at the documentation (https://www.tensorflow.org/api_docs/python/tf/nn/pool), this doesn't seem to be the case.
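To make the concern concrete: per the tf.nn.pool shape rules, the window and strides apply only to the spatial dimensions, while batch and channel dimensions pass through. So the number of window loops is generally smaller than the number of output (parallel) dimensions. A sketch of the VALID-padding shape rule for an assumed NHWC layout:

```python
# VALID-padding output-shape rule from the tf.nn.pool docs, assuming an
# NHWC layout: the window/strides only span the spatial dims (H, W);
# batch (N) and channel (C) pass through unchanged. There are 4 output
# (parallel) dims but only 2 window dims, i.e. nWin != nPar in general.
def pooled_shape_nhwc(in_shape, window, strides):
    n, h, w, c = in_shape
    out_h = (h - window[0]) // strides[0] + 1
    out_w = (w - window[1]) // strides[1] + 1
    return (n, out_h, out_w, c)

print(pooled_shape_nhwc((1, 8, 8, 3), window=(2, 2), strides=(2, 2)))
# -> (1, 4, 4, 3)
```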


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76414/new/

https://reviews.llvm.org/D76414
