[PATCH] D24805: [GVNSink] Initial GVNSink prototype

Daniel Berlin via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri May 19 07:12:19 PDT 2017


dberlin added a comment.



> - Used post_order. I looked into using inverse RPO, but that is complicated by the lack of a single entry node in the inverted graph. Given that I'm iterating to a fixpoint anyway, I think post order traversal gives me the only guarantee I need (look at successors before predecessors).

No, it won't, in the same way that pre-order is not equivalent to visiting predecessors before successors :)

What you want is either PDT order (and we should just fix the post-dom tree and move on; I have patches out to fix it) or inverse traversal RPO (you can crib how to figure out which nodes should be exit nodes from that patch if you want).

PDT order should be equivalent to inverse traversal RPO if you sort the siblings.

post_order is going to cause this to iterate *way* too much on certain types of test cases.

My only other comment is that it's not always clear what the tests are testing (i.e., what you expect to happen and why).
The comments that are there sometimes leave the reader confused.

For example, I read "; We can't common an intrinsic!" and my first reaction was "sure we can".



================
Comment at: lib/Transforms/Scalar/GVNSink.cpp:350
+      CmpInst::Predicate Predicate = C->getPredicate();
+      E->setOpcode((C->getOpcode() << 8) | Predicate);
+    }
----------------
We should probably move this magic into the constructor, by letting it take an opcode and a predicate.



================
Comment at: lib/Transforms/Scalar/GVNSink.cpp:483
+  /// inductive property allows us to simply return the value number of the next
+  /// instruction that defines memory.
+  uint32_t getMemoryUseOrder(Instruction *Inst) {
----------------
Yeah, you really want inverse MemorySSA here.
We could build it without too much trouble if you determine it matters enough, performance-wise.



Repository:
  rL LLVM

https://reviews.llvm.org/D24805
