[PATCH] D72022: [mlir][Linalg] Extend generic ops to allow tensors

Nicolas Vasilache via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Jan 2 14:40:50 PST 2020


nicolasvasilache marked 3 inline comments as done.
nicolasvasilache added a comment.

@mravishankar

  I was under the impression that there would be a separate dialect for "Linalg operations on tensors", and that the buffer allocation pass would "convert" from that dialect into the existing Linalg dialect. But I guess intermixing operations from different dialects is equivalent to having different "incarnations" of the operation (one operating on tensors and the other on buffers).

Yes, this was the original intention, but I realized that it didn't warrant yet another dialect, and that lowering can be made much more progressive by mixing types.
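
To make this concrete, here is a rough sketch of a generic op on tensors (hypothetical IR; the syntax below is the later `ins`/`outs` form rather than the exact form this patch introduces, and all SSA names are made up):

```mlir
#id = affine_map<(d0, d1) -> (d0, d1)>

// Pure value semantics: operands and results are tensor SSA values, so the
// op can be used before any buffers have been allocated.
%res = linalg.generic
    {indexing_maps = [#id, #id], iterator_types = ["parallel", "parallel"]}
    ins(%t : tensor<?x?xf32>) outs(%init : tensor<?x?xf32>) {
  ^bb0(%in: f32, %out: f32):
    %s = arith.addf %in, %in : f32
    linalg.yield %s : f32
} -> tensor<?x?xf32>
```

The same op form is equally valid with `memref` operands and no results, so a module can legally contain both flavors while being lowered progressively.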

  I see that this makes sense for generic ops, but how would this work for other operations like linalg.matmul, linalg.matvec, etc., where I am not sure you can have a tensor argument? So it seems like this is making generic ops different from the other linalg ops.

Actually, all these named ops should be auto-generated as special configurations of generic ops. See my comments on this thread: https://groups.google.com/a/tensorflow.org/g/mlir/c/CUL3_8fnJb4
The fact that there are manually declared `matmul`, `conv`, etc. ops today is mostly historical; they should be retired in the future.
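
As an illustration (again a hypothetical sketch in later MLIR syntax, not the auto-generated form), `matmul` is just a generic op with a fixed choice of indexing maps and iterator types:

```mlir
#mapA = affine_map<(m, n, k) -> (m, k)>
#mapB = affine_map<(m, n, k) -> (k, n)>
#mapC = affine_map<(m, n, k) -> (m, n)>

// C(m, n) += A(m, k) * B(k, n): parallel over (m, n), reduction over k.
linalg.generic
    {indexing_maps = [#mapA, #mapB, #mapC],
     iterator_types = ["parallel", "parallel", "reduction"]}
    ins(%A, %B : memref<?x?xf32>, memref<?x?xf32>)
    outs(%C : memref<?x?xf32>) {
  ^bb0(%a: f32, %b: f32, %c: f32):
    %p = arith.mulf %a, %b : f32
    %s = arith.addf %c, %p : f32
    linalg.yield %s : f32
}
```

Auto-generating the named forms from such configurations avoids maintaining each one by hand.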

  Another question is about buffer allocation. Is there some buffer allocation already implemented in Linalg?

Not yet, but it is easy to do something simple that can evolve over time.
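
To sketch what "something simple" could look like (purely hypothetical, nothing below is implemented): allocate a fresh buffer for each tensor result and replay the op on memrefs:

```mlir
#id = affine_map<(d0, d1) -> (d0, d1)>

// Before: the generic op yields its result as a tensor SSA value.
%r = linalg.generic
    {indexing_maps = [#id, #id], iterator_types = ["parallel", "parallel"]}
    ins(%t : tensor<4x8xf32>) outs(%init : tensor<4x8xf32>) {
  ^bb0(%in: f32, %out: f32):
    linalg.yield %in : f32
} -> tensor<4x8xf32>

// After a naive allocation pass: materialize the result in a fresh buffer
// and rewrite the op to run on memrefs. (%tb stands for %t's already
// bufferized counterpart; memref.alloc is later syntax, the allocation op
// lived in the standard dialect at the time of this patch.)
%buf = memref.alloc() : memref<4x8xf32>
linalg.generic
    {indexing_maps = [#id, #id], iterator_types = ["parallel", "parallel"]}
    ins(%tb : memref<4x8xf32>) outs(%buf : memref<4x8xf32>) {
  ^bb0(%in: f32, %out: f32):
    linalg.yield %in : f32
}
```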

Thanks for your comments and questions!


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D72022/new/

https://reviews.llvm.org/D72022
