[PATCH] D72555: [mlir][Linalg] Update the semantics, verifier and test for Linalg with tensors.

Nicolas Vasilache via Phabricator via llvm-commits <llvm-commits at lists.llvm.org>
Tue Jan 14 09:45:49 PST 2020


nicolasvasilache added inline comments.


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:60
+    //========================================================================//
+    InterfaceMethod<
+      "Query the number of inputs from the current operation.",
----------------
ftynse wrote:
> Apologies for being obnoxious, but the review would be significantly faster if NFC code motion changes were separate from functional changes (in the former case I can quickly eyeball it to confirm it's indeed NFC, whereas in the latter I need to understand deeply what the code does; without separation, the understanding time also extends to the NFC parts).
Apologies for this reordering. 
I was contemplating extracting and landing an NFC change, but please note that the code that moved is clearly marked by Phabricator with a yellow vertical bar. 
When you hover over it, it says `Moved from line xxx`.

If you think that is still not enough, I can rebase the NFC part out separately, but I have personally found Phabricator to be good at tracking this. 


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:132
-
-    /// Clone an operation with the given location and operands. This is used to
-    /// abstract away the optional underlying region creation.
----------------
ftynse wrote:
> Why remove the documentation?
Because the proper place for this documentation is the op interface, and it is already there, almost word for word.


================
Comment at: mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td:698
 
-    In this case, the number of return values must match the number of output
-    tensor arguments. The semantics is that the `linalg.indexed_generic` op
-    produces (i.e. allocates and fills) its return values.
+    In this case, the number of outputs (args_out) must match the sum of (1) the
+    number of output buffer operands and (2) the number of tensor return values.
----------------
ftynse wrote:
> I wonder if you could write this text only once in a tablegen variable for both linalg.generic and linalg.indexed_generic, and then just concatenate that variable with the rest of the op-specific documentation. This would avoid the duplication and everything that comes with it, but may make it harder for a casual reader of the op definition to follow. Since you are the only person actively supporting this and doing the work twice, your call.
Will consider for a followup, thanks!
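For context, a minimal standalone sketch of the output-count invariant stated in the new documentation above (hypothetical helper in plain C++, not code from the patch):

  // Hypothetical check, not part of the patch: args_out must cover both the
  // output buffer operands and the tensor return values.
  constexpr bool outputCountIsConsistent(unsigned argsOut,
                                         unsigned numOutputBuffers,
                                         unsigned numTensorResults) {
    return argsOut == numOutputBuffers + numTensorResults;
  }
  // e.g. one output buffer plus one returned tensor requires args_out == 2.
  static_assert(outputCountIsConsistent(2, 1, 1),
                "one output buffer + one tensor result => args_out == 2");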


================
Comment at: mlir/lib/Dialect/Linalg/Analysis/DependenceAnalysis.cpp:142
 void LinalgDependenceGraph::addDependencesBetween(LinalgOp src, LinalgOp dst) {
-  for (auto srcView : src.getOutputs()) { // W
+  for (auto srcView : src.getOutputBuffers()) { // W
     // RAW graph
----------------
ftynse wrote:
> This dependence analysis now only works if the output is a buffer, and cannot be queried if it is a tensor. Consider documenting (and maybe extending later to also work for tensors).
Added an assertion; there is no short-term plan to support such transformations and analyses on tensors.
The expectation is that buffer allocation happens first; once everything is in the buffer world, these analyses apply.
Mixed tensor/buffer mode may appear in the future but would require rethinking things.
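To make the guard concrete, a rough sketch (assuming the `getOutputBuffers()` API introduced by this patch and MLIR's `MemRefType`; not the exact assertion that was added):

  // Sketch: the dependence analysis only walks buffer outputs; an assertion
  // rejects tensor-typed values until mixed mode is rethought.
  for (auto srcView : src.getOutputBuffers()) { // W
    assert(srcView.getType().isa<MemRefType>() &&
           "dependence analysis expects buffer (memref) outputs");
    // RAW edges are then built against dst.getInputs() as before.
  }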


================
Comment at: mlir/lib/Dialect/Linalg/Analysis/DependenceAnalysis.cpp:144
     // RAW graph
     for (auto dstView : dst.getInputs()) {   // R
       if (aliases.alias(srcView, dstView)) { // if alias, fill RAW
----------------
ftynse wrote:
> Do you need any modifications to support tensor inputs here?  From a quick look, it should just work, but better check...
Added assertions above.


================
Comment at: mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:182
       return op.emitOpError("expected fun argument ")
-             << idx << " of the same type as elemental type "
-             << view.getElementType() << " of view " << idx;
-
-    if (idx >= nInputViews) {
-      auto resultIdx = idx - nInputViews;
-      if (funType.getResult(resultIdx) != view.getElementType())
-        return op.emitOpError("expected fun result ")
-               << resultIdx << " of the same type as elemental type "
-               << view.getElementType() << " of view " << idx;
-    }
+             << (idx + 1) << " of the same type as elemental type "
+             << shapedType.getElementType() << " of operand " << (idx + 1);
----------------
ftynse wrote:
> Why +1 here? It's inconsistent with block arguments above.
Made it consistent: error messages for function arguments start at 1 (e.g. "1st argument", "2nd argument", etc.).


================
Comment at: mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp:211
     if (!funType.getInput(i).isIndex())
+      return op.emitOpError("expected fun argument ") << i << " to be an index";
+
----------------
ftynse wrote:
> The use of 0-based index here is inconsistent with the 1-based index below. Please adopt a convention, document it in the dialect documentation and use everywhere.
Made it consistent, thanks for spotting!
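Presumably that means this check now also prints the 1-based position, roughly (a sketch, not the exact post-review code):

  // 1-based position in the diagnostic, 0-based loop index in the code.
  if (!funType.getInput(i).isIndex())
    return op.emitOpError("expected fun argument ")
           << (i + 1) << " to be an index";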


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D72555/new/

https://reviews.llvm.org/D72555




