[PATCH] D80004: [MLIR] Add `from_index`, `to_index`, and `num_elements` to the shape dialect

Jacques Pienaar via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue May 19 13:10:43 PDT 2020


jpienaar accepted this revision.
jpienaar marked an inline comment as done.
jpienaar added inline comments.


================
Comment at: mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:109
 
+def Shape_FromIndexOp : Shape_Op<"from_index", []> {
+  let summary = "Creates a shape size from a standard index";
----------------
herhut wrote:
> silvas wrote:
> > frgossen wrote:
> > > herhut wrote:
> > > > pifon2a wrote:
> > > > > silvas wrote:
> > > > > > These should be marked NoSideEffect and have folders and InferTypeOpInterface. See https://reviews.llvm.org/D79833 for an example of another op.
> > > > > I am not sure `from_index` is enough. In the IR I first read it as if it creates a 1D shape with the provided number of elements. Maybe call this `from_index_rank`?
> > > > > 
> > > > > Is there some special reason why we have `shape.size` instead of `shape.rank`?
> > > > Maybe `index_to_size` and `size_to_index` would better convey what this does. We could also have a single operation `size_cast`, in the spirit of the `index_cast` operation in standard, that allows both conversions. That would be more uniform and, if we get to a state where we can have a type trait that would allow using `index_cast`, it would be easier to rewrite.
> > > @silvas `NoSideEffect` makes perfect sense and I will also add the folders where possible. 
> > > Can you explain what the `InferTypeOpInterface` is good for? 
> > > I can copy and adapt the code, of course, but I would like to understand its benefit. 
> > > Is the return type of, e.g., `Shape_FromIndexOp` not already known from the declaration `outs Index:$result`? 
> > > 
> > > @pifon2a As far as I understand it, `shape.size` is used to describe the rank of a tensor, but not only that.
> > > The `shape.reduce` example, e.g., already uses it to represent the number of elements. 
> > > Until now I thought of `shape.size` as `shape`'s equivalent to `size_t`. 
> > > The name `from/to_index` was meant to be analogous to `from/to_extent_tensor` and `index` was supposed to refer to `std`'s `index`.
> > > Would `from/to_std_index` be a better name?  
> > > 
> > > 
> > I would prefer to have two ops rather than a SizeCast that does everything. That way you can just do `isa<SizeToIndexOp>` and know what you have. Otherwise, you always have to do `isa<SizeCastOp> && op.getResult().getType().isa<SizeType>()` or other such nonsense. Personally, I think shape.size_from_index/shape.size_to_index reads best.
> > 
> > 
> > InferTypeOpInterface avoids needing to redundantly write the result type in the builder calls, since for these ops the result type is already known. (MLIR infra should be smarter about this and do it automatically, but currently that's not the case)
> `size_from_index` `size_to_index` works for me, too.
> Can you explain what the InferTypeOpInterface is good for?

It is used to generate custom build and verify methods for ops (and pattern rewrites).

> I can copy and adapt the code, of course, but I would like to understand its benefit.
> Is the return type of, e.g., Shape_FromIndexOp not already known from the declaration `outs Index:$result`?

In this case yes, but it is not yet plumbed through (there is a pending item in the shape inference doc about this, but it hasn't been implemented yet - the goal is to have all the shape constraint solving be in one place and be reused). In general the declarations are type constraints (e.g., `Tensor<IntegerType>` could be a case), so one doesn't always have an exact shape/type. We should make it easier/better, as Sean mentioned.
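For context, a minimal sketch of how such an op could pick up the traits discussed above (following the conventions of D79833; the exact trait spellings and type names here are illustrative, not the patch's actual definition):

  // Hedged sketch: the result type is fixed to Index, so declaring
  // InferTypeOpInterface lets builders omit the result type, and
  // hasFolder requests a fold hook for the constant-operand case.
  def Shape_SizeToIndexOp : Shape_Op<"size_to_index", [
      NoSideEffect,
      DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
    let summary = "Casts a shape.size to a standard index";
    let arguments = (ins Shape_SizeType:$arg);
    let results = (outs Index:$result);
    let hasFolder = 1;
  }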


================
Comment at: mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:156
+
+def Shape_SizeToIndexOp : Shape_Op<"size_to_index", [
+    NoSideEffect,
----------------
Could we keep these sorted?


================
Comment at: mlir/lib/Dialect/Shape/IR/Shape.cpp:312
+  // Fold only when argument constant.
+  Attribute arg = operands[0];
+  if (!arg)
----------------
It would seem that, for these,

  if (Attribute arg = operands[0])
    return arg;
  return {};

is simpler.
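Spelled out, the suggested fold hook would look roughly like this (a sketch against the MLIR fold signature of the time, where a null Attribute means the operand is not a known constant; the op name matches the definitions above):

  // Sketch: fold to the operand's constant attribute when known;
  // an empty OpFoldResult signals "no fold applies".
  OpFoldResult SizeToIndexOp::fold(ArrayRef<Attribute> operands) {
    if (Attribute arg = operands[0])
      return arg;
    return {};
  }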


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D80004/new/

https://reviews.llvm.org/D80004
