[llvm-dev] Optionally using value numbering in Simplify*

Daniel Berlin via llvm-dev llvm-dev at lists.llvm.org
Fri Mar 3 15:18:09 PST 2017


Answers inline

On Fri, Mar 3, 2017, 1:40 PM Friedman, Eli <efriedma at codeaurora.org> wrote:

> On 3/3/2017 1:14 PM, Daniel Berlin wrote:
>
> On Fri, Mar 3, 2017 at 12:39 PM, Friedman, Eli <efriedma at codeaurora.org>
> wrote:
>
> On 3/3/2017 11:51 AM, Daniel Berlin via llvm-dev wrote:
>
> So i have a testcase (see PR31792, and cond_br2.ll in GVN) that current GVN
> can simplify because it replaces instructions as it goes.  It's an example
> of a larger issue that pops up quite a lot, and I would appreciate thoughts
> on what to do about it.
> It amounts to something like this (but again, it happens a lot):
>
> live = gep thing, 0
> live2 = gep thing, 1
> br i1 provablytrue, mergeblock, otherbb
> otherbb:
> dead = something else
> br mergeblock
> mergeblock:
> a = phi(live, dead)
> b = live2
> result = icmp sge a, b
>
> both GVN and NewGVN prove provablytrue to be true, and phi to be
> equivalent to live.
>
> GVN transforms this a piece at a time, and so by the time SimplifyCmpInst
> sees the icmp, it is
>
> result = icmp sge live2, live
>
> It proves result true.
>
> NewGVN is an analysis, and so it doesn't transform the IR, and
> SimplifyCmpInst (rightfully) does not try to walk everything, everywhere,
> to prove something. It also couldn't know that dead is dead. So it doesn't
> see that result is true.
>
>
> Why aren't we calling SimplifyCmpInst(pred, live, live2, ...)?
>
>
> We do.
> The example is a bit contrived; the real example has a phi in the way of
> computing the gep offset, and SimplifyCmpInst does walking and matching, so
> this won't work anyway.
>
> See computePointerICmp:
>
>   Constant *LHSOffset = stripAndComputeConstantOffsets(DL, LHS);
>   Constant *RHSOffset = stripAndComputeConstantOffsets(DL, RHS);
>
> This in turn walks and collects the offsets.  One of those is a phi we
> know to be equivalent to a constant ...
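
To make that concrete, the hook I have in mind would look roughly like the
sketch below. The helper name and the VN parameter are made up for
illustration; it would sit next to stripAndComputeConstantOffsets inside
InstructionSimplify.cpp and use the VNImpl interface proposed below.

// Purely illustrative: translate the value through the value numbering
// before the constant-offset walk, so a phi that is known equivalent to
// one of its incoming values no longer stops computePointerICmp cold.
static Constant *stripOffsetsThroughVN(const DataLayout &DL, Value *&V,
                                       VNImpl *VN) {
  if (VN)
    if (Value *Leader = VN->findDominatingEquivalent(V))
      V = Leader; // e.g. the phi in the real testcase becomes "live"
  // Fall back to the existing walk on the (possibly rewritten) value.
  return stripAndComputeConstantOffsets(DL, V);
}
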
>
>
> Or are you expecting SimplifyCmpInst(pred, live, dead, ...) to call back
> into GVN to find values equivalent to "dead"?
>
>
> The top level call we already get right.
> But all of these simplifiers do not just do top-level things. Some go
> looking, so we need them to call back in for some cases.
>
>
> Okay, makes sense.
>
>
>
>
>
>
> The upshot is that it takes two passes of newgvn to get the same result as
> gvn.
>
> I'm trying to decide what to do about this case. As i said, it happens a lot.
>
> It would be pretty trivial to make a "VNImpl" interface that has a few
> functions (that can be fulfilled by any value numbering that is an
> analysis), have newgvn implement it, and use it in Simplify*.
>
> (It would take work to make GVN work here because of how many places it
> modifies the IR during value numbering. It also modifies the IR as it goes,
> so the only advantage would be from unreachable blocks it discovers.)
>
> But before i add another argument to functions taking a ton already[1], i
> wanted to ask whether anyone had any opinion on whether it's worth doing.
>
> VNImpl would probably look something like:
> class VNImpl {
>   // Return true if A and B are equivalent.
>   bool areEquivalent(Value *A, Value *B);
>   // Find a value that dominates A that is equivalent to it.
>   Value *findDominatingEquivalent(Value *A);
>   // Obvious.
>   bool isBlockReachable(BasicBlock *BB);
> };
>
>
> I'm not sure how you expect InstructionSimplify to use
> findDominatingEquivalent.
>
>
> Most places it uses strict equality and doesn't care. In all of those
> cases newgvn has a single unique (but not always dominating) leader it
> could give, and we could call that findEquivalent.
>
> But simplify* does expect the end result to dominate the original
> instruction, and this is guaranteed by the docs :P.
> We could give up on these, or we could actually just use it at the end:
> for any instruction it returns, we could find the equivalent that dominates
> the original operand, or return null (rough sketch below).
> Besides that, there is one place (valueDominatesPhi) that actually checks
> dominance that we would have to change.  Not doing so is just a missed opt.
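
Sketching the "use it at the end" idea, with made-up names and glue:
SimplifyInstruction is the existing API (roughly, as of today's tree), and
everything else is illustrative and assumes the VNImpl interface above.

#include "llvm/Analysis/InstructionSimplify.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Module.h"
using namespace llvm;

// Run the existing simplification, then only accept the result if it (or a
// value-numbering equivalent of it) actually dominates the instruction we
// are simplifying.
static Value *simplifyThenCheckDominance(Instruction *I, VNImpl *VN,
                                         const DominatorTree &DT) {
  Value *V = SimplifyInstruction(I, I->getModule()->getDataLayout(),
                                 /*TLI=*/nullptr, &DT);
  if (!V)
    return nullptr;
  // Constants and arguments trivially dominate every instruction.
  Instruction *VI = dyn_cast<Instruction>(V);
  if (!VI || DT.dominates(VI, I))
    return V;
  // Otherwise ask the value numbering for an equivalent that does dominate.
  if (VN)
    if (Value *Equiv = VN->findDominatingEquivalent(VI)) {
      Instruction *EI = dyn_cast<Instruction>(Equiv);
      if (!EI || DT.dominates(EI, I))
        return Equiv;
    }
  return nullptr; // nothing usable dominates I; give up
}
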
>
>
> Hmm... for the purpose of GVN, I guess we don't care if it dominates?
>

No, we don't.  We choose a single leader to value number to and use during
canonicalization/etc (and because we process in RPO, it is almost always
something earlier; something later being proven *equivalent* after a leader
has already been selected is super-rare, but can happen with multiple
backedges).

We then try to figure out what is equivalent, and then, depending on the
type of elimination we want to perform (i.e., full redundancies only, partial
redundancies, etc.), we later do different things. For example, for full
redundancies, at the end, we sort things by dominance and eliminate
dominated uses.
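
As a rough sketch of that last step (simplified and quadratic; the real code
sorts by dominance rather than rescanning the class, but the idea is the
same):

#include "llvm/IR/Dominators.h"
#include "llvm/IR/Instruction.h"
#include <vector>
using namespace llvm;

// For one congruence class: any member dominated by another member of the
// same class is fully redundant, so rewrite its uses and let DCE remove it.
static void eliminateFullRedundancies(std::vector<Instruction *> &Class,
                                      const DominatorTree &DT) {
  for (Instruction *Redundant : Class) {
    for (Instruction *Other : Class) {
      if (Other == Redundant)
        continue;
      if (DT.dominates(Other, Redundant)) {
        Redundant->replaceAllUsesWith(Other);
        break; // one dominating equivalent is enough
      }
    }
  }
}
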
A loop-gvn may really want to use it as part of dependence analysis, or
whatever. We don't actually care, it's just an analysis that can tell you
what in the program is equivalent, and a pass that happens to use that info
to eliminate redundancies.

For SimplifyInstruction, it also doesn't care internally. Except for that
one case, it wants to know what the value is, not where the value is.  It
cares a little bit about the *form* of the value.
I could solve that by reimplementing all of these simplifications on top of
gvn's expression types (which are canonical, or at least intended to be),
but that seemed a much bigger boondoggle than this one :)

> We could make that a flag or something.
>

Right.

>
> Internally, InstructionSimplify already sticks all its state into a Query,
> so it might make sense to expose that, and let GVN customize the behavior a
> bit.
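
Something along those lines could look like this (very rough; the existing
members are elided, and the hook below is made up, not anything that exists
today):

// Sketch only: the Query that InstructionSimplify already threads around,
// plus an optional value-numbering hook that NewGVN could fill in.
struct Query {
  // ... existing analysis pointers (DataLayout, DominatorTree, etc.) ...
  VNImpl *VN = nullptr; // set by NewGVN, left null by every other caller

  // Map a value to its leader when a value numbering is attached.
  Value *leader(Value *V) const {
    if (VN)
      if (Value *Equiv = VN->findDominatingEquivalent(V))
        return Equiv;
    return V;
  }
};
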
>
>
>
>
> Does it have to call findDominatingEquivalent every time it tries to
> match() on a Value (or otherwise examine it)?
>
> It depends on how many places go walking.
> As I said, we get the top level calls right, and that's enough for a *lot*
> of cases.
> Just not a few that want to do some sort of collection or matching of
> operands of operands.  We could make a vmatch that understands value
> equivalency if we need to.
>
> That seems extremely invasive in the sense that there would be a lot of
> places to change, and no good way to make sure we actually caught all the
> places which need to be changed.
>
> First, these are all just missed optimizations if you miss them, not
> correctness issues.
>
>
> Second, for newgvn there actually is: we can assert that, for anything it
> uses, lookupOperandLeader(V) == V.
> We add these at the beginning of each function for the RHS and LHS
> arguments (and others as appropriate), and that, along with vmatch, should
> catch just about every case if not every one.
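
Something like this at the top of each Simplify* helper (sketch only; the
names follow the VNImpl interface above, and inside newgvn itself the check
is literally lookupOperandLeader(V) == V):

#include <cassert>
#include "llvm/IR/Value.h"
using namespace llvm;

// In +Asserts builds, catch any call path that forgot to translate an
// operand to its congruence-class leader before calling into Simplify*.
static void assertOperandsAreLeaders(VNImpl *VN, Value *LHS, Value *RHS) {
  if (!VN)
    return; // callers without a value numbering attached are unaffected
  assert(VN->findDominatingEquivalent(LHS) == LHS &&
         "LHS is not its own leader");
  assert(VN->findDominatingEquivalent(RHS) == RHS &&
         "RHS is not its own leader");
}
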
>
> Okay.
>
>
> Do you know how much of an impact this sort of missed optimization has in
> practice?  Introducing a bunch of table lookups to NewGVN's uses of
> InstructionSimplify will probably affect compile-time, so there's a
> tradeoff here.
>
>
> -Eli
>
Honestly, it can be tricky to measure impact.  Right now it only affects
gvn, but when we base PRE on it, missed equivalences become missed PRE.
Besides that, it's hard to tell what people actually care about.

I can come up with numbers for "how many more things it eliminates", but even
that's not necessarily useful if most pass pipelines run it twice *anyway*.

I've been trying to go by the test cases GVN has, under the assumption that
somebody spent the time caring.
If we don't actually care about making them identical in a single pass, but
instead about "making performance on benchmarks identical overall", that's a
much easier bar (and so i'd be happy to be held to it).

All that said:
In a normal pass pipeline, running newgvn twice, or iterating it, is still
much faster than running current gvn once (and most of the time is setup time
in most cases; if we don't rebuild MemorySSA it's even faster).

However, I would like to get to the point where we don't feel the need to
do that.  Where exactly that point is, I don't know.  If we don't care about
getting every single optimization out of a single pass, that would be easier,
but given the set of testcases, it's definitely difficult to draw that line.
They expect a lot of random stuff to occur, and it's not clear what the thing
they actually cared about was (even with spelunking).



>
> --
> Employee of Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
>
>


More information about the llvm-dev mailing list