[PATCH] D80004: [MLIR] Add `from_index`, `to_index`, and `num_elements` to the shape dialect

Stephan Herhut via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue May 19 02:08:27 PDT 2020


herhut added inline comments.


================
Comment at: mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td:109
 
+def Shape_FromIndexOp : Shape_Op<"from_index", []> {
+  let summary = "Creates a shape size from a standard index";
----------------
silvas wrote:
> frgossen wrote:
> > herhut wrote:
> > > pifon2a wrote:
> > > > silvas wrote:
> > > > > These should be marked NoSideEffect and have folders and InferTypeOpInterface. See https://reviews.llvm.org/D79833 for an example of another op.
> > > > I am not sure `from_index` is enough. In the IR I first read it as if it creates a 1D shape with the provided number of elements. Maybe call this `from_index_rank`?
> > > > 
> > > > Is there some special reason why we have `shape.size` instead of `shape.rank`?
> > > Maybe `index_to_size` and `size_to_index` would better convey what this does. We could also have a single operation `size_cast`, in the spirit of the `index_cast` operation in standard, that allows both conversions. That would be more uniform and, if we get to a state where we have a type trait that would allow us to use `index_cast`, it would be easier to rewrite.
> > @silvas `NoSideEffect` makes perfect sense and I will also add the folders where possible. 
> > Can you explain what the `InferTypeOpInterface` is good for? 
> > I can copy and adapt the code, of course, but I would like to understand its benefit. 
> > Is the return type of, e.g., `Shape_FromIndexOp` not already known from the declaration `outs Index:$result`? 
> > 
> > @pifon2a As far as I understand it, `shape.size` is used to describe the rank of a tensor, among other things.
> > The `shape.reduce` example, e.g., already uses it to represent the number of elements. 
> > Until now I thought of `shape.size` as `shape`'s equivalent to `size_t`. 
> > The name `from/to_index` was meant to be analogous to `from/to_extent_tensor` and `index` was supposed to refer to `std`'s `index`.
> > Would `from/to_std_index` be a better name?  
> > 
> > 
> I would prefer to have two ops rather than a SizeCast that does everything. That way you can just do `isa<SizeToIndexOp>` and know what you have. Otherwise, you always have to do `isa<SizeCastOp> && op.getResult().getType().isa<SizeType>()` or other such nonsense. Personally, I think `shape.size_from_index`/`shape.size_to_index` reads best.
> 
> 
> InferTypeOpInterface avoids needing to redundantly write the result type in the builder calls, since for these ops the result type is already known. (MLIR infra should be smarter about this and do it automatically, but currently that's not the case)
`size_from_index`/`size_to_index` works for me, too.
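
For concreteness, here is a rough ODS sketch of what the suggestions above could look like. The op names follow the `size_from_index`/`size_to_index` spelling preferred in this thread; the `Shape_SizeType` constraint and the use of `DeclareOpInterfaceMethods<InferTypeOpInterface>` are assumptions for illustration, not necessarily what lands:

  // Requires: include "mlir/Interfaces/InferTypeOpInterface.td"

  def Shape_SizeFromIndexOp : Shape_Op<"size_from_index",
      [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
    let summary = "Creates a shape.size from a standard index";
    let arguments = (ins Index:$input);
    let results = (outs Shape_SizeType:$result);
    // Declares a fold() hook (implemented in C++), e.g. to fold
    // size_from_index(size_to_index(x)) back to x.
    let hasFolder = 1;
  }

  def Shape_SizeToIndexOp : Shape_Op<"size_to_index",
      [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
    let summary = "Converts a shape.size to a standard index";
    let arguments = (ins Shape_SizeType:$input);
    let results = (outs Index:$result);
    let hasFolder = 1;
  }

With the interface attached, builders can infer the single result type, so call sites do not have to spell it out redundantly.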


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D80004/new/

https://reviews.llvm.org/D80004
