[PATCH] D80189: [mlir] Add canonicalization for Cstr and Assuming Shape Ops.

Theodore Popp via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed May 20 09:16:18 PDT 2020


tpopp marked 14 inline comments as done.
tpopp added a comment.

I split this commit up into smaller pieces.



================
Comment at: mlir/lib/Dialect/Shape/IR/Shape.cpp:141
 //===----------------------------------------------------------------------===//
-// AssumingOp
+// Assuming
 //===----------------------------------------------------------------------===//
----------------
herhut wrote:
> Why this change?
Accidental.


================
Comment at: mlir/lib/Dialect/Shape/IR/Shape.cpp:186
+// Removes AssumingOp with a passing condition and inlines the region.
+struct AssumingWithTrue : public OpRewritePattern<AssumingOp> {
+  using OpRewritePattern<AssumingOp>::OpRewritePattern;
----------------
silvas wrote:
> We might not want this pattern always. For example, if we have a pass that does code motion on islands, this canonicalization can prematurely remove the island which will make that pass unable to operate properly.  Let's hold off on adding it until we have a compelling use case.
I don't find this a compelling argument. Not all code will be in islands, and these islands exist to make code conditional on some assertion. If it is statically known that the assertion holds and the island still cannot be removed, then the island is being misused for another purpose and is blocking possible compiler optimizations.
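For concreteness, the canonicalization under discussion could fire on IR like the following (a sketch only; the generic-form syntax mirrors the tests in this patch, and the exact function type on the op is assumed):

```mlir
// Before: the witness %w is statically known to hold.
%w = shape.true_witness
%r = "shape.assuming"(%w) ({
  shape.assuming_yield %s : !shape.shape
}) : (!shape.witness) -> !shape.shape

// After: the region body is inlined into the parent block, the
// assuming op is erased, and uses of %r are replaced by %s.
```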


================
Comment at: mlir/lib/Dialect/Shape/IR/Shape.cpp:194
+      auto *secondBlock = op.getBody();
+      auto initPosition = rewriter.getInsertionPoint();
+      auto *thirdBlock = rewriter.splitBlock(firstBlock, initPosition);
----------------
herhut wrote:
> Why `initPosition`?
Changed to `opPosition`, as the rewriter's insertion point starts where the op is.
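To illustrate the block bookkeeping under discussion (a sketch with invented block labels and value names; the generic-form op syntax mirrors the tests in this patch):

```mlir
// Before the rewrite, the assuming op sits mid-block:
^firstBlock:
  %w = shape.true_witness
  %r = "shape.assuming"(%w) ({   // region body = secondBlock
    shape.assuming_yield %s : !shape.shape
  }) : (!shape.witness) -> !shape.shape
  "some.use"(%r) : (!shape.shape) -> ()

// splitBlock(firstBlock, opPosition) moves everything from the op
// onward into a new thirdBlock; the region body is then inlined
// between the two blocks, and uses of %r are rewired to %s.
```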


================
Comment at: mlir/lib/Dialect/Shape/IR/ShapeCanonicalization.td:19
+def ConstantAssumingAll : Pat<(Shape_AssumingAllOp:$op $input),
+          (Shape_TrueWitnessOp),
+          [(AllInputsTrueWitnesses $op)] >;
----------------
mehdi_amini wrote:
> silvas wrote:
> > I don't think that TableGen is buying us much here. I prefer loose C++ patterns since they are easier to debug and understand.
> "Easier to understand" is arguable: it can be a matter of getting used to it, and "easier to debug" is an infra issue that should be fixed if it is a real problem.
> 
> C++ patterns aren't ideal because we can't manipulate them statically: even if DRR is far from perfect, at least it is declarative.
> In particular, with the interpreted rewrite engine coming, looking into authoring such transformations in a non-C++ way is valuable (whether we use DRR or come up with another authoring language to express these).
I'm going to leave this as TableGen. I'm not a big fan of most TableGen patterns in LLVM, but I'm going along with what the infrastructure implies it wants.


================
Comment at: mlir/test/Dialect/Shape/canonicalize.mlir:119
+  %0 = shape.cstr_broadcastable(%cs0, %cs1)
+  %3 = "shape.assuming"(%0) ({
+    shape.assuming_yield %cs1 : !shape.shape
----------------
silvas wrote:
> Can you make this test have less op surface area? Maybe use an unknown op to capture the witness (or return the witness) instead of having the shape.assuming?
I split these into one test for each canonicalization.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D80189/new/

https://reviews.llvm.org/D80189
