[PATCH] D72555: [mlir][Linalg] Update the semantics, verifier and test for Linalg with tensors.

Alex Zinenko via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Jan 14 07:24:53 PST 2020


ftynse requested changes to this revision.
ftynse added inline comments.
This revision now requires changes to proceed.


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:60
+    //========================================================================//
+    InterfaceMethod<
+      "Query the number of inputs from the current operation.",
----------------
Apologies for being obnoxious, but the review would be significantly faster if NFC code-motion changes were separate from functional changes (in the former case I can quickly eyeball the diff to confirm it's indeed NFC, whereas in the latter I need to understand deeply what the code does; without separation, the understanding time also extends to the NFC parts).


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:132
-
-    /// Clone an operation with the given location and operands. This is used to
-    /// abstract away the optional underlying region creation.
----------------
Why remove the documentation?


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:573-574
+    number of output buffer operands and (2) the number of tensor return values.
+    The semantics is that the `linalg.indexed_generic` op produces (i.e.
+    allocates and fills) its return values.
+
----------------
Unclear whether "allocates and fills" extends to both memrefs and tensors, or just tensors since only tensors are "return values".


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:578
+    transformations can be applied. Such legalization moves tensor return values
+    into output buffer operands and updates the region argument accordingly.
+
----------------
Nit: argument -> arguments


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:580-582
+    Transformations that create control-flow around linalg.indexed_generic
+    operations are not expected to mix with tensors because SSA values do not
+    escape naturally. Still, transformations and rewrites that take advantage of
----------------
I cannot understand what "are not expected to mix with tensors" means in this context. Consider rephrasing or providing an example.


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:698
 
-    In this case, the number of return values must match the number of output
-    tensor arguments. The semantics is that the `linalg.indexed_generic` op
-    produces (i.e. allocates and fills) its return values.
+    In this case, the number of outputs (args_out) must match the sum of (1) the
+    number of output buffer operands and (2) the number of tensor return values.
----------------
I wonder if you could write this text only once in a TableGen variable for both linalg.generic and linalg.indexed_generic, and then just concatenate that variable with the rest of the op-specific documentation. This would avoid the duplication and everything that comes with it, but may make it harder for a casual reader of the op definition to follow. Since you are the only person actively supporting this and would otherwise be doing the work twice, it's your call.
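For reference, TableGen's paste operator (`#`) supports this kind of sharing; a minimal hypothetical sketch (record and field names are illustrative, not from the patch):

```tablegen
// Hypothetical sketch: define the shared prose once in a base class, then
// paste (`#`) it after each op's op-specific documentation.
class SharedLinalgDoc {
  code tensorReturnDoc = [{
    In this case, the number of outputs (args_out) must match the sum of (1)
    the number of output buffer operands and (2) the number of tensor return
    values.
  }];
}

def GenericOpDoc : SharedLinalgDoc {
  // Field references from the superclass resolve inside the record body.
  code description = [{
    ... documentation specific to linalg.generic ...
  }] # tensorReturnDoc;
}
```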


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgTraits.h:137
+  // The `i^th` output argument is either and operand or a return value
+  // depending regardless on the type of buffer or tensor output/return
+  // arguments.
----------------
Typo "depending regardless"


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgTraits.h:139
+  // arguments.
+  //
+  /// Return the `i`-th output, asserts that this is a buffer operand and not
----------------
Nit: remove this `//`, I was confused by it to read the block comment above as describing the function below, which is contradictory.


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgTraits.h:147
   }
   /// Return the index of `value` in the list of output values if found,
   /// llvm::None otherwise.
----------------
Nit: values->buffers


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgTraits.h:189
+    return nInputs() + nOutputs() - this->getOperation()->getNumResults();
+    ;
+  }
----------------
Spurious semicolon


================
Comment at: mlir/lib/Dialect/Linalg/Analysis/DependenceAnalysis.cpp:142
 void LinalgDependenceGraph::addDependencesBetween(LinalgOp src, LinalgOp dst) {
-  for (auto srcView : src.getOutputs()) { // W
+  for (auto srcView : src.getOutputBuffers()) { // W
     // RAW graph
----------------
This dependence analysis now only works if the output is a buffer, and cannot be queried if it is a tensor. Consider documenting (and maybe extending later to also work for tensors).


================
Comment at: mlir/lib/Dialect/Linalg/Analysis/DependenceAnalysis.cpp:144
     // RAW graph
     for (auto dstView : dst.getInputs()) {   // R
       if (aliases.alias(srcView, dstView)) { // if alias, fill RAW
----------------
Do you need any modifications to support tensor inputs here? From a quick look, it should just work, but better to check...


================
Comment at: mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:170
+  if (funType.getNumInputs() != nOperands)
+    return op.emitOpError("expected fun arguments to match number of operands");
   if (funType.getNumResults() != op.getNumOutputs())
----------------
Ultra-nit: functions are not necessarily fun, please use "function" in user-visible messages


================
Comment at: mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:182
       return op.emitOpError("expected fun argument ")
-             << idx << " of the same type as elemental type "
-             << view.getElementType() << " of view " << idx;
-
-    if (idx >= nInputViews) {
-      auto resultIdx = idx - nInputViews;
-      if (funType.getResult(resultIdx) != view.getElementType())
-        return op.emitOpError("expected fun result ")
-               << resultIdx << " of the same type as elemental type "
-               << view.getElementType() << " of view " << idx;
-    }
+             << (idx + 1) << " of the same type as elemental type "
+             << shapedType.getElementType() << " of operand " << (idx + 1);
----------------
Why +1 here? It's inconsistent with block arguments above.


================
Comment at: mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:211
     if (!funType.getInput(i).isIndex())
+      return op.emitOpError("expected fun argument ") << i << " to be an index";
+
----------------
The use of 0-based index here is inconsistent with the 1-based index below. Please adopt a convention, document it in the dialect documentation and use everywhere.


================
Comment at: mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:216-230
+  for (unsigned idx = 0; idx < nOperands; ++idx) {
+    ShapedType shapedType = op.getShapedType(idx);
+    if (funType.getInput(idx + nLoops) != shapedType.getElementType())
       return op.emitOpError("expected fun argument ")
-             << i << " to be of IndexType";
+             << (idx + nLoops + 1) << " of the same type as elemental type "
+             << shapedType.getElementType() << " of input " << (idx + 1);
   }
----------------
This looks common with the non-indexed version. Can this be factored out into a helper function?
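For illustration, the shared check could be hoisted into a helper along these lines. This is a self-contained sketch using plain standard-library stand-ins for the MLIR pieces (`ShapedType` element types, `FunctionType` inputs, `emitOpError` diagnostics), so every name and signature here is hypothetical, not the actual patch:

```cpp
#include <optional>
#include <sstream>
#include <string>
#include <vector>

// Stand-in for the element types carried by ShapedType; in the real code
// these would be mlir::Type values.
using ElementType = std::string;

// Hypothetical shared helper: checks that the fun/region arguments starting
// at `argOffset` match the elemental types of the operands. `argOffset` would
// be 0 for linalg.generic and nLoops for linalg.indexed_generic (which takes
// the loop indices as leading arguments). Returns the diagnostic text on
// mismatch, std::nullopt on success.
static std::optional<std::string>
verifyFunArgElementTypes(const std::vector<ElementType> &funInputs,
                         const std::vector<ElementType> &operandElementTypes,
                         unsigned argOffset) {
  for (unsigned idx = 0, e = operandElementTypes.size(); idx < e; ++idx) {
    if (funInputs[idx + argOffset] != operandElementTypes[idx]) {
      std::ostringstream os;
      os << "expected fun argument " << (idx + argOffset)
         << " of the same type as elemental type " << operandElementTypes[idx]
         << " of operand " << idx;
      return os.str();
    }
  }
  return std::nullopt;
}
```

Both verifiers would then call this with the appropriate offset, keeping the diagnostic wording (and whichever indexing convention is chosen) in one place.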


================
Comment at: mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:242
+    return op.emitOpError("expected exactly ")
+           << nInputsAndOutputBuffers << " inputs and buffer operands";
 
----------------
Nit: "inputs and buffer operands" sounds like a false dichotomy. 


================
Comment at: mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:726
+           << nOutputs << ") to match the number of operands of the enclosing "
+           << "linalg.generic op(" << op.getNumOperands() << ")";
 
----------------
Ultra-nit: whitespace before `(`


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D72555/new/

https://reviews.llvm.org/D72555




