[llvm-dev] SCCP is not always correct in presence of undef (+ proposed fix)

Daniel Berlin via llvm-dev llvm-dev at lists.llvm.org
Sat Dec 31 00:15:15 PST 2016


On Fri, Dec 30, 2016 at 11:55 PM, Sanjoy Das <sanjoy at playingwithpointers.com
> wrote:

> Hi Daniel,
>
> On Fri, Dec 30, 2016 at 10:55 PM, Daniel Berlin <dberlin at dberlin.org>
> wrote:
> >> Right, but we are talking about "when, in the intermediate state, can i
> >> transform an undef to a different value".
> >>
> >> Remember you can only go down the lattice. So you can't make undef a
> >> constant, and then discover it's wrong, and go back up :)
> >> i.e., we can't change phi(undef, undef) to the constant value 50, discover
> >> 50 doesn't work, and say, well actually, let's make it undef again for an
> >> iteration :)
>
> If the kind of simplification (I think) you've mentioned is allowed,
> then I think even Davide's lattice is problematic.  Say we have:
>
> loop:
>   X = phi(entry: undef, loop: T)
>   ...
>   T = undef
>   if C then br loop else br outside
>
> When we first visit X then we'll compute its state as (Undef MEET
> Unknown) = Undef.

Yes
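
The meet Sanjoy describes above can be sketched roughly like this (a toy model for illustration, not LLVM's actual SCCP code; the names are mine): Unknown is the top element and acts as the identity for meet, so on the first visit of X only the entry operand participates and (Undef MEET Unknown) stays Undef.

```python
# Toy lattice meet for the discussion above (hypothetical sketch, not LLVM
# code). Unknown is top/identity; Undef sits below it; two different
# constants drop to overdefined (bottom).
UNKNOWN, UNDEF, OVERDEFINED = "unknown", "undef", "overdefined"

def meet(a, b):
    """Meet of two lattice values; any other Python value is a constant."""
    if a == UNKNOWN:
        return b                 # Unknown is the identity for meet
    if b == UNKNOWN:
        return a
    if a == UNDEF:
        return b                 # Undef meets anything non-Unknown
    if b == UNDEF:
        return a
    if a == b:
        return a                 # equal constants stay put
    return OVERDEFINED           # conflicting constants: bottom

# First visit of X = phi(entry: undef, loop: T), backedge not yet executable:
print(meet(UNDEF, UNKNOWN))  # -> undef
```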


> My guess is that you're implying that the Undef
> lattice state (or some other computation hanging off of it) can be
> folded immediately to, say, 50, but once we mark the backedge as
> executable we'll set it to undef (or mark it as 49)?

Yes


> And both of
> these are bad because you're making a non-monotonic transition?

Yes


> Given
> this, I'm not sure what prevents the bad transform from happening even
> if we have separate Unknown and Undef states.


You can always get *out* of the bad transform by dropping to overdefined if
you discover it doesn't work.
The question is more one of "what is the optimal time to try to figure out
a constant value that works".
If you drop too early, you have to go overdefined when you may have been
able to figure out a constant to use to replace the undef.
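
The recovery path described here can be sketched as follows (my own minimal model, assuming only the rule that state moves monotonically down the lattice): once we speculate a constant for an undef, a later conflicting value on an executable edge forces us to overdefined; we never climb back to undef.

```python
# Hypothetical sketch of "getting out of the bad transform": updates may
# only move down the lattice, so a failed speculation drops to overdefined.
OVERDEFINED = "overdefined"

def refine(current, incoming):
    """Monotonic update: keep the value if consistent, else drop to bottom."""
    if current == incoming:
        return current
    return OVERDEFINED   # speculation failed; the only legal escape

state = 50                    # constant speculated for an undef phi input
state = refine(state, 50)     # consistent incoming value: stays 50
state = refine(state, 49)     # conflict: overdefined, never back to undef
print(state)
```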

>


> If the solution is to not fold X into undef (which, by assumption, can
> "spontaneously decay" into any value) until you're sure every incoming
> value is also undef then I suppose we'll need some special handling
> for undef values to prevent breaking cases like (as shown earlier):
>
> loop:
>   X = phi(entry: 10, loop: T)
>   if X == 10 then br outside else br loop
>
> =>
>
> loop:
>   X = phi(entry: 10, loop: T)
>   br outside
>
> (i.e. to keep the algorithm properly optimistic)?
>

Right.  This is precisely the unknown part, or at least, that was my
thinking.
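
The optimistic behavior being preserved here can be sketched like so (an assumed model, not LLVM's implementation): operands whose incoming edge has not been proven executable are still Unknown and are simply skipped by the phi meet, so phi(entry: 10, loop: T) evaluates to the constant 10 on the first visit and the `X == 10` branch can be folded.

```python
# Sketch of optimistic phi evaluation: Unknown operands (edges not yet
# proven executable) are ignored rather than forcing overdefined.
UNKNOWN, OVERDEFINED = "unknown", "overdefined"

def visit_phi(operands):
    """operands: list of (value, edge_is_executable) pairs."""
    result = UNKNOWN
    for val, edge_executable in operands:
        if not edge_executable:
            continue                  # Unknown operand: optimistically skipped
        if result == UNKNOWN:
            result = val
        elif result != val:
            return OVERDEFINED        # conflicting executable operands
    return result

# Only the entry edge is executable on the first visit of the loop header.
X = visit_phi([(10, True), ("T", False)])
print(X)        # the phi is optimistically the constant 10
print(X == 10)  # so `if X == 10 then br outside else br loop` folds
```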


>
> And if we do have a way of keeping undefs as undefs (i.e. prevent X
> from decaying into 50 when we first visit the first loop) then why
> can't we re-use the same mechanism to avoid spuriously decaying "phi
> undef undef-but-actually-unknown" to "50"?
>
That mechanism is the "all executable paths leading to our operands
visited" bit, which is .. the unknown vs undef state.

IE we only mark the operand undef when we visit it from a reachable edge.

>
> Another way of stating my suggestion is that, if you agree that this
> is a correct lattice (different from Davide's proposal) and pretend
> the "spontaneous undef decay" problem does not exist, then:
>
> digraph G {
>   Unknown -> Undef
>   Undef -> Constant1
>   Undef -> Constant2
>   Undef -> Constant3
>   Constant1 -> Bottom
>   Constant2 -> Bottom
>   Constant3-> Bottom
> }
>
> then it should be legal / correct to first drop every lattice element
> from "Unknown" to "Undef" before running the algorithm.  The only
> cases where this would give us a conservative result is a place where
> we'd have otherwise gotten "Unknown" for a used value; and the
> algorithm should never have terminated in such a state.
>

It is definitely *legal* to do this, or the algorithm is broken otherwise
(you should just end up in overdefined more often).

I'm not sure I believe the last part though.  I have to think about it.
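
Encoding Sanjoy's digraph as ranks makes the legality argument concrete (my encoding, purely for illustration): the pre-drop maps every Unknown to Undef, which is a strictly downward move in that order, so it cannot violate monotonicity; at worst more values end up overdefined.

```python
# Ranks for the lattice in the digraph above: Unknown > Undef > any
# Constant > Bottom (overdefined). This is a sketch of the argument, not
# LLVM code.
RANKS = {"unknown": 3, "undef": 2, "overdefined": 0}

def rank(v):
    return RANKS.get(v, 1)   # any concrete value is a constant, rank 1

def pre_drop(v):
    """The proposed pre-pass: drop every Unknown to Undef before solving."""
    return "undef" if v == "unknown" else v

for v in ["unknown", "undef", 50, "overdefined"]:
    assert rank(pre_drop(v)) <= rank(v)   # the pre-drop only moves down
print(pre_drop("unknown"))  # -> undef
```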


FWIW: My initial proposal for all of this was "stop caring about trying to
optimize undef here at all": remove it from the lattice, and treat undef as
overdefined if we see it. If we want to optimize undef, then before SCCP
solving, build SCCs of the parts of the SSA graph that contain undef,
choose correct values for those subgraphs, and mark them constant *before*
the solver sees them.

This is pre-solve instead of post-solve or iterative solve. We already do
this for arguments by solving them to overdefined. IPCP also works much the
same way, by pre-solving arguments to constants.
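
A very rough sketch of that pre-solve (all names here are hypothetical, not LLVM APIs): compute the strongly connected components of the SSA def-use graph, and for every component fed by an undef operand, pick one consistent concrete value and mark the whole component constant before the solver runs.

```python
# Hypothetical pre-solver sketch: Tarjan's SCC algorithm over a toy SSA
# def-use graph, then one constant chosen per undef-fed component.
def sccs(graph):
    """Tarjan's SCC algorithm over a dict {node: [successors]}."""
    index, low, on_stack, stack, out, counter = {}, {}, set(), [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:            # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            out.append(comp)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return out

# Toy def-use cycle from the earlier example: X = phi(entry: undef, loop: T),
# and T is defined in terms of X.
graph = {"X": ["T"], "T": ["X"]}
undef_fed = {"X"}      # X has an undef operand on the entry edge
resolved = {}
for comp in sccs(graph):
    if set(comp) & undef_fed:
        for n in comp:
            resolved[n] = 0   # any single consistent constant works
print(resolved)
```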

This is precisely because this stuff is non-trivial to reason about, and
it's unclear to me that it's really worth it.  If it *is* worth it, we can
do better than we do now (and contain the mess *entirely* to the presolver),
and if it's not, we shouldn't have complicated reasoning to try to make it
work well.

Right now we have the worst of all worlds.  We have a mess in both places
(the resolver and the solver both step on each other and differ in
behavior), it's hard to reason about (and provably not correct), and it
doesn't even give you the best answers it can *anyway*.

(Generally, at the point where we have a bunch of very smart people racking
their brains about whether something is really going to work algorithmically
or not, I take it as a sign that maybe we're on the wrong path.  Obviously,
there are some really complex problems, but ... this should not be one of
them :P)



>
> -- Sanjoy
>