[PATCH] Ephemeral values for LLVM invariants

hfinkel at anl.gov
Sun Jul 27 20:49:24 PDT 2014


Thanks again for the review!

================
Comment at: lib/Analysis/CodeMetrics.cpp:47
@@ +46,3 @@
+        case Intrinsic::var_annotation:
+          // These intrinsics don't actually represent code after lowering.
+          WorkSet.push_back(CI);
----------------
Philip Reames wrote:
> Most of this change is driven by the assume intrinsic changes, but this actually changes behaviour for existing code as well.  It might be good to separate this infrastructure into two distinct changes and get the addition of the other intrinsics more widely reviewed.  (i.e. despite the title and description, this *isn't* specific to assume/invariant)
> 
> I'm mostly suggesting this for risk management and blame purposes.  
You're right, but the blame will be clear regardless. This needs to be committed before we actually start *generating* @llvm.assume in Clang for anything (and before anyone else starts generating it, either).

As it turns out, most of the people interested in this feature are also the people who best understand the impact on cost modeling for the other intrinsics. Thus, I'm not concerned.

================
Comment at: lib/Analysis/CodeMetrics.cpp:56
@@ +55,3 @@
+  SmallPtrSet<const Value *, 32> Visited;
+
+  while (!WorkSet.empty()) {
----------------
Philip Reames wrote:
> A useful optimization might be to seed the EphValues set with every single-use value in WorkSet.  This would help cases like:
> a = icmp
> assume(a)
> c = and a, b
> assume(c)
> 
> With your current implementation, if you consider 'a' first, you'll discard it because 'c' is not known ephemeral.  You'll eventually revisit it (through 'c''s operands), but this is wasted work.  
> 
> You could even go a step further and make the worklist include only items just found to be ephemeral.  You'd essentially be growing the frontier of ephemeral values back through the value graph.  This might be a more obvious algorithm.
> 
> Worth noting is that neither algorithm handles phi cycles.  Both are conservative, not optimistic.  (In the classic data flow sense.)  I think that's fine for now, but worth noting explicitly. 
Ack; yep. I actually had it that way originally, but while debugging something I changed it so that only one place in the code inserted values into EphValues, and then forgot to change it back.

You're correct, that should be noted.
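
For reference, a rough sketch of that frontier-growing approach (the helper name, the includes, and the mayHaveSideEffects() guard here are illustrative, not the patch's actual code):

#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IntrinsicInst.h"

using namespace llvm;

// Collect values that only feed @llvm.assume and therefore represent no real
// code after lowering. Seed with the assume calls and grow the ephemeral
// frontier backwards through the value graph.
static void collectEphemeralValuesSketch(const Function &F,
                                         SmallPtrSet<const Value *, 32> &EphValues) {
  SmallVector<const Instruction *, 16> WorkSet;

  // Seed: the assume calls themselves never lower to real code.
  for (const BasicBlock &BB : F)
    for (const Instruction &I : BB)
      if (const auto *II = dyn_cast<IntrinsicInst>(&I))
        if (II->getIntrinsicID() == Intrinsic::assume) {
          EphValues.insert(II);
          WorkSet.push_back(II);
        }

  while (!WorkSet.empty()) {
    const Instruction *I = WorkSet.pop_back_val();

    for (const Value *Op : I->operands()) {
      const auto *OpI = dyn_cast<Instruction>(Op);
      if (!OpI || EphValues.count(OpI) || OpI->mayHaveSideEffects())
        continue;

      // An operand becomes ephemeral only once every one of its users is
      // already known to be ephemeral (so phi cycles are, conservatively,
      // never classified as ephemeral).
      bool AllUsersEphemeral = true;
      for (const User *U : OpI->users())
        if (!EphValues.count(U)) {
          AllUsersEphemeral = false;
          break;
        }

      if (AllUsersEphemeral) {
        EphValues.insert(OpI);
        WorkSet.push_back(OpI);
      }
    }
  }
}

Since a value is inserted into EphValues before being pushed, each value goes onto the work list at most once, and in the icmp/and example above 'a' ends up classified as ephemeral regardless of which assume is processed first.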

================
Comment at: test/Transforms/Inline/ephemeral.ll:23
@@ +22,3 @@
+; CHECK-NOT: call i1 @inner
+define i1 @outer() optsize {
+   %r = call i1 @inner()
----------------
Philip Reames wrote:
> This test appears really fragile w.r.t. our current inlining heuristics.  Is there a way you could restructure this to avoid that?  If not, add some comments explaining how to adjust the test if the inlining heuristics change.
> 
> You also need to add some form of negative example to show that the ephemeral logic actually contributes to the inlining.  
I could hard-code the current threshold into the RUN line (that said, I adapted this from a similar test that will also need to be changed should the heuristic be altered).

I don't understand your second comment. If you don't account for the ephemeral values, then the extra instructions in this example prevent inlining. If you do account for them, then the inlining happens.
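
To make that concrete, here is an illustrative shape (hypothetical IR, not the actual test in the patch): the add/mul/icmp chain feeding @llvm.assume inflates @inner's apparent size, so at the optsize-reduced threshold the call is inlined only if the cost model treats that chain as ephemeral (free):

declare void @llvm.assume(i1)

define i1 @inner(i32 %x) {
  ; The next three instructions exist only to feed the assume, so they are
  ; ephemeral; the real test needs enough of them to cross the threshold.
  %a0 = add i32 %x, 1
  %a1 = mul i32 %a0, 3
  %c = icmp sgt i32 %a1, 0
  call void @llvm.assume(i1 %c)
  %r = icmp eq i32 %x, 42
  ret i1 %r
}

define i1 @outer(i32 %x) optsize {
  ; Expected to be inlined away ("CHECK-NOT: call i1 @inner") once the
  ; ephemeral values are subtracted from @inner's cost.
  %r = call i1 @inner(i32 %x)
  ret i1 %r
}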

================
Comment at: test/Transforms/LoopUnroll/ephemeral.ll:7
@@ +6,3 @@
+
+define i32 @test1(i32* nocapture %a) nounwind uwtable readonly {
+entry:
----------------
Philip Reames wrote:
> Same comments as above w.r.t. test structure.
> 
> Also, a comment explaining the test approach would be helpful.  It looks like you're relying on the return to be constant folded after unrolling?  
Hey, the threshold is even hard-coded in this one. ;)

No, I'm adding enough extra instructions that unrolling is prevented unless you account for the fact that those extra instructions are ephemeral (and, thus, free).

But you're right about one thing... some comments are needed. :-)

http://reviews.llvm.org/D180