[PATCH] D72022: [mlir][Linalg] Extend generic ops to allow tensors

Mahesh Ravishankar via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Jan 2 10:58:37 PST 2020


mravishankar added a comment.

@nicolasvasilache : Totally agree with all the points you make. I understand the advantage of performing fusion at the tensor level. I was under the impression that there would be a separate dialect for "Linalg operations on tensors", and that the buffer allocation pass would "convert" from that dialect into the existing Linalg dialect. But I guess intermixing operations from different dialects is equivalent to having two "incarnations" of the same operation (one operating on tensors, the other on buffers). I see that this makes sense for generic ops, but how would this work for other operations like linalg.matmul, linalg.matvec, etc., where I am not sure you can have a tensor argument? So it seems like this makes generic ops different from the other Linalg ops.
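To make the two "incarnations" concrete, here is a rough hand-written sketch of what a linalg.generic might look like on tensors versus on buffers. The exact syntax, attribute names, and SSA values are illustrative only and are not taken from the patch under review:

```mlir
// Tensor incarnation: pure SSA form, the op yields a new tensor value.
%res = linalg.generic {
    indexing_maps = [affine_map<(i) -> (i)>, affine_map<(i) -> (i)>],
    iterator_types = ["parallel"]}
    ins(%in : tensor<4xf32>) outs(%init : tensor<4xf32>) {
  ^bb0(%a: f32, %b: f32):
    %sum = addf %a, %a : f32
    linalg.yield %sum : f32
} -> tensor<4xf32>

// Buffer incarnation: the same body, but the op writes into %out
// through side effects and produces no result.
linalg.generic {
    indexing_maps = [affine_map<(i) -> (i)>, affine_map<(i) -> (i)>],
    iterator_types = ["parallel"]}
    ins(%inbuf : memref<4xf32>) outs(%out : memref<4xf32>) {
  ^bb0(%a: f32, %b: f32):
    %sum = addf %a, %a : f32
    linalg.yield %sum : f32
}
```

The body region is identical in both forms; only the operand types and the presence of a result differ, which is what makes a later buffer-allocation pass a plausible bridge between the two.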
Another question is about buffer allocation. Is there some buffer allocation already implemented in Linalg?
In any case, since the contract between Linalg and the other "backend" dialects is that lowering is only supported when Linalg ops operate on memrefs, I don't really have any major concerns and am happy to see how things work out.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D72022/new/

https://reviews.llvm.org/D72022
