[PATCH] D77067: [mlir][Linalg] Create a tool to generate named Linalg ops from a Tensor Comprehensions-like specification.

Alex Zinenko via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Apr 7 02:40:28 PDT 2020


ftynse added a comment.

> meta-point: @ftynse let's not review the core parser code at the top of the file, as Nicolas says that they are just copypasta from the .mlir parser and won't be in the final patch.

@silvas I wouldn't review it if it were an actual copy-paste. It is an incomplete and modified copy, and is therefore likely to have some weird behavior, or to get into an irrecoverable state that the original code would never reach.



================
Comment at: mlir/tools/mlir-linalg-ods-gen/mlir-linalg-ods-gen.cpp:859
+
+  static bool classof(const Expression *e) {
+    return e->kind == Kind::TensorUse;
----------------
nicolasvasilache wrote:
> ftynse wrote:
> > If you use LLVM-style type system, you would normally want to avoid virtual functions...
> why ? https://llvm.org/docs/HowToSetUpLLVMStyleRTTI.html#basic-setup shows it's perfectly fine to use abstract base classes and LLVM RTTI.
Because you pay the runtime overhead of two abstractions serving essentially the same goal: virtual dispatch requires a vtable, and LLVM-style RTTI already carries a kind tag. Why pay for both?
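For the record, the scheme being suggested needs no vtable at all: the base class stores a kind tag and `classof` inspects it. A minimal self-contained sketch with simplified stand-in names (the real casting machinery lives in `llvm/Support/Casting.h`; `isa` below is a toy stand-in for `llvm::isa<>`):

```cpp
#include <cassert>

// Simplified stand-ins for the classes in the patch.
class Expression {
public:
  enum class Kind { TensorUse, TensorExpr };
  const Kind kind;

protected:
  explicit Expression(Kind k) : kind(k) {}
};

class TensorUse : public Expression {
public:
  TensorUse() : Expression(Kind::TensorUse) {}
  // Non-virtual: dispatch goes through the stored kind tag, so no class
  // in the hierarchy needs a vtable.
  static bool classof(const Expression *e) {
    return e->kind == Kind::TensorUse;
  }
};

class TensorExpr : public Expression {
public:
  TensorExpr() : Expression(Kind::TensorExpr) {}
  static bool classof(const Expression *e) {
    return e->kind == Kind::TensorExpr;
  }
};

// Minimal stand-in for llvm::isa<>.
template <typename To, typename From> bool isa(const From *v) {
  return To::classof(v);
}
```

With this setup `isa<TensorUse>(e)` resolves through the tag, and the classes stay vtable-free.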


================
Comment at: mlir/tools/mlir-linalg-ods-gen/mlir-linalg-ods-gen.cpp:910
+        return false;
+    for (unsigned i = 0, e = reductionDimensions.size(); i < e; ++i)
+      if (reductionDimensions[i] != other.reductionDimensions[i])
----------------
nicolasvasilache wrote:
> ftynse wrote:
> > Do you care about the order of reduction dimensions?
> you should, otherwise your computation is non-deterministic
I suppose you mean the IR you produce does not have a deterministic order of dimensions, which makes it hard to check. TC semantics says that all dimensions are interchangeable, so their textual order should not matter. If it does, we should discuss the semantics and avoid branding this input as TC-like.
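If the interchangeable-dimensions semantics is the intended one, the comparison above can ignore ordering by comparing the dimensions as sets. A hedged sketch with stdlib types (`sameReductionDims` is a hypothetical helper, not code from the patch):

```cpp
#include <set>
#include <vector>

// Order-insensitive comparison of reduction dimensions: {0, 2} and {2, 0}
// compare equal because both sides are normalized into sorted sets.
bool sameReductionDims(const std::vector<unsigned> &a,
                       const std::vector<unsigned> &b) {
  return std::set<unsigned>(a.begin(), a.end()) ==
         std::set<unsigned>(b.begin(), b.end());
}
```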


================
Comment at: mlir/tools/mlir-linalg-ods-gen/mlir-linalg-ods-gen.cpp:927
+  StringRef opId;
+  SmallVector<std::unique_ptr<Expression>, 4> expressions;
+  SetVector<unsigned> reductionDimensions;
----------------
nicolasvasilache wrote:
> ftynse wrote:
> > Do you actually need pointers? Can't we just store `Expression`s as is, eventually with appropriate move semantics to avoid extra copies?
> It's this or uniquing, underlying storage, placement new, etc.
> I went for the simple solution.
> When we have strong data that we need to scale much more we can revisit. 
I don't think vectors of unique_ptr are simpler than vectors of values. This is an extra abstraction with associated cognitive overhead. It is also one extra dynamic allocation per element, as opposed to occasional reallocations of the vector, and there is no strong reason to keep the pointer unique or to auto-deallocate (other than that you forced the allocation in the first place). There is no actual uniquing of expressions, nor any underlying storage or placement new; you seem to be mistaking this with how types/attributes are handled in MLIR.
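One value-semantics alternative, sketched with stdlib types under the assumption that the expression kinds form a closed set: a `std::variant` can replace the pointer-based hierarchy, so the vector owns its elements directly with no per-element `new`. The names here are illustrative, not from the patch:

```cpp
#include <string>
#include <variant>
#include <vector>

// Illustrative value types standing in for the expression hierarchy.
struct TensorUse { std::string tensor; };
struct TensorExpr { std::string op; };
using Expression = std::variant<TensorUse, TensorExpr>;

// The vector owns the elements directly: no per-element heap allocation,
// and push_back of a temporary moves rather than copies.
std::vector<Expression> buildExpressions() {
  std::vector<Expression> exprs;
  exprs.push_back(TensorUse{"A"});
  exprs.push_back(TensorExpr{"mulf"});
  return exprs;
}
```

The trade-off is that a `variant` fixes the set of kinds at compile time, whereas the `classof`-based hierarchy stays open for extension.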


================
Comment at: mlir/tools/mlir-linalg-ods-gen/mlir-linalg-ods-gen.cpp:1135
+  for (unsigned i = 0, e = tensors.size(); i < e; ++i)
+    if (tensors[i].name == id)
+      return tensors[i];
----------------
nicolasvasilache wrote:
> ftynse wrote:
> > Have you considered storing tensors in an llvm::StringMap indexed by name instead of doing linear lookups every time?
> I need 2 extra maps and really don't anticipate a single named op to ever get to a point where this would matter.
> Of course if proven otherwise I'm happy to reconsider.
Well, you currently have two extra vectors. I just don't see why using a vector and implementing a linear search for _every one of them_ is better than using a dedicated container with the accessor immediately available.
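A sketch of the name-indexed alternative, using `std::unordered_map` as a stdlib stand-in for the suggested `llvm::StringMap` (`TensorTable` and `Tensor` here are hypothetical placeholders, not the structs from the patch):

```cpp
#include <string>
#include <unordered_map>
#include <utility>

// Placeholder type, not the struct from the patch.
struct Tensor {
  std::string name;
  unsigned rank;
};

// Tensors are indexed by name once at insertion, so each lookup is an
// average O(1) hash probe instead of a linear scan over a vector.
class TensorTable {
  std::unordered_map<std::string, Tensor> byName;

public:
  void add(Tensor t) {
    std::string key = t.name; // copy the key before moving the value
    byName.emplace(std::move(key), std::move(t));
  }
  const Tensor *lookup(const std::string &id) const {
    auto it = byName.find(id);
    return it == byName.end() ? nullptr : &it->second;
  }
};
```

`llvm::StringMap` additionally owns its string keys in a single allocation per entry, which is why it is usually preferred over `std::unordered_map<std::string, T>` inside LLVM.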


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D77067/new/

https://reviews.llvm.org/D77067
