[llvm-dev] RFC: Killing undef and spreading poison

John McCall via llvm-dev llvm-dev at lists.llvm.org
Fri Nov 11 11:23:46 PST 2016

> On Nov 10, 2016, at 10:37 PM, Sanjoy Das <sanjoy at playingwithpointers.com> wrote:
> Hi John,
> John McCall wrote:
>>>> Well, we could say non-nsw add and mul are actually "bitwise" operations, so "mul i32 poison, 2" is guaranteed to have its bottom bit zero (but "mul nsw i32 poison, 2" is poison).  Of course, there's a limit to how far we can go in that direction, or we just end up with the old notion of undef.  Off the top of my head, I'm not exactly sure where that line is.
>>> Right, we could try to make it work.  I don't have a good model for bit-wise poison yet, but it might be possible.  One of the things it will break is rewrites into comparisons, since at least comparisons will need to return poison if any bit of the inputs is poison. So the direction of "arithmetic -> comparison" gets broken (InstCombine has a few of these).
>>> But before we try to make bit-wise poison work, I would like to understand what are the deficiencies of value-wise poison. Is there any major issue?  I'm asking because value-wise poison seems to be a much simpler concept, so if it works ok it seems to be preferable.
>>> During the meeting last week, Richard Smith proposed that instead we add a "load not_poison %ptr" (like load atomic) instruction that would be equivalent to load+freeze+bitcast, because he is concerned that some C++ stuff needs to be lowered to such a load. This load then becomes tricky to move around (e.g., cannot be sunk into loops), but there are options to attenuate that problem if necessary.
>>> Perhaps this construct would make John McCall happy as well (re. his question during the talk last week)?
>> I think the source of evil here is the existence of inconsistent values in the representation.  So no, if I understand your suggestion correctly, I don't think I'd be happy because it still admits the existence of an unfrozen poison value.
>> It seems to me that the problem is that people are trying to specify the behavior of operations using the template "the operation succeeds normally if the conditions are met, and otherwise <something>".  I don't think it works.  I think the right template is simply "the result may be analyzed assuming that the conditions will be met", and we aren't really interested in what happens if the conditions aren't met.  Undefined behavior is valuable to a compiler when it allows us to simplify some operation assuming that the bad conditions never happen.
> I don't think we can get away without assigning some sort of meaning
> to the operations in the "abnormal case" for operations that don't
> have immediate UB in abnormal cases.  Since the program can legally
> continue executing after doing the "abnormal" thing, we still need
> some sort of sane semantics for the values these abnormal operations
> produce.
> As a concrete example, say we have:
> i32 sum = x *nsw y
> i64 sum.sext = sext sum to i64
> ...
> some use of sum.sext
> Pretending "x *nsw y" does not sign-overflow, we can commute the sext
> into the arithmetic, but we still somehow need to capture the fact
> that, depending on the optimizer's whims and fancies (i.e. whether it
> does the commute or not), the high 32 bits of sum.sext can now be
> something other than all zeroes or all ones.
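Sanjoy's point about the high bits can be made concrete with a small Python model (illustrative only, not LLVM's actual semantics; 65536 * 65536 is chosen as an input that overflows i32):

```python
def sext32_to_64(v):
    """Sign-extend 32-bit two's-complement bits to a Python int."""
    v &= 0xFFFFFFFF
    return v - (1 << 32) if v & 0x80000000 else v

def mul32_wrap(x, y):
    """32-bit multiply with wraparound (what the hardware computes)."""
    return (x * y) & 0xFFFFFFFF

x, y = 0x10000, 0x10000  # 65536 * 65536 overflows i32, violating nsw

# Original order: multiply in 32 bits, then sext the (wrapped) result.
# The high 32 bits are a copy of the sign bit.
a = sext32_to_64(mul32_wrap(x, y))

# Commuted order: sext the operands, multiply in 64 bits.
# The high 32 bits now carry real product bits.
b = sext32_to_64(x) * sext32_to_64(y)

print(a, b)  # 0 4294967296
```

Once nsw is violated, the two orders disagree on the high 32 bits, which is exactly the fact the semantics must somehow capture.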

Well, you can assume it can't, because the counter-factual still applies.

>> Actually optimizing code that we've proven has undefined behavior, in contrast, is basically uninteresting and leads us into these silly philosophical problems.
> I don't think we care about optimizing code that we've _proven_ has
> undefined behavior.  We care about optimizing code in ways that are
> correct in the face of *other* transforms that we ourselves want to
> do, where these "other transforms" pretend certain abnormal cases do
> not exist.  Poison is a "stand-in" for these transforms, which are
> sometimes non-local.

I'm saying that you still get reasonable behavior from reasoning about
counterfactuals for these things, and I haven't been convinced that you
lose optimization power except in code that actually exhibits undefined behavior.

> For instance, continuing the previous example, say we're interested in
> the speculatability of
> t = K `udiv` (-1 + (sum.sext >> 32))
> We don't _really_ care about doing something intelligent when sum.sext
> is provably poison.  However, we do care about taking into account
> the _possibility_ of sum.sext being poison, which is really just a
> more precise way of saying that we care about taking into account the
> possibility of sum.sext being commuted into (sext(x) * sext(y)) (in which case the
> division is not speculatable).
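Continuing the same Python model (a sketch, reusing the overflowing 65536 * 65536 input from the earlier example), we can see why the commute makes the udiv unsafe to speculate:

```python
def sext32_to_64(v):
    """Sign-extend 32-bit two's-complement bits to a Python int."""
    v &= 0xFFFFFFFF
    return v - (1 << 32) if v & 0x80000000 else v

x, y = 0x10000, 0x10000  # x * y overflows i32

# As written: sum.sext's high 32 bits mirror the sign bit, so
# (sum.sext >> 32) is 0 or -1 and the divisor -1 + ... is never 0.
sum_orig = sext32_to_64((x * y) & 0xFFFFFFFF)
d_orig = -1 + (sum_orig >> 32)           # -1: the udiv cannot trap

# After commuting the sext into the multiply: the high 32 bits carry
# real product bits, and the divisor can become 0.
sum_commuted = sext32_to_64(x) * sext32_to_64(y)
d_commuted = -1 + (sum_commuted >> 32)   # 0: the udiv would trap

print(d_orig, d_commuted)  # -1 0
```

So whether `K udiv (-1 + (sum.sext >> 32))` is speculatable depends on whether the optimizer has already performed the commute, which is the non-local interaction poison is standing in for.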

It seems to me that counterfactuals are fine for reasoning about this.

> And we want to do this with enough formalism in place so that we can
> write correct checking interpreters, fuzzers etc.

I will happily admit that a pure counterfactual-reasoning model does not
admit the ability to write checking interpreters in the face of speculation.

>> I would suggest:
>> 1. Make undef an instruction which produces an arbitrary but consistent result of its type.
>> 2. Revisit most of the transformations that actually try to introduce undef.  The consistency rule may make many of them questionable.  In particular, sinking undef into a loop may change semantics.
>> 3. Understand that folding something to undef may actually lose information.  It's possible to prove that %x < %x +nsw 1.  It's not possible to prove that %x < undef.  Accept that this is ok because optimizing undefined behavior is not actually interesting.
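Point 3 can be sketched in Python (an illustrative model, not anything in LLVM; the class name is made up): an undef instruction's result is arbitrary but chosen once, so self-comparisons fold, while `%x < undef` stays unprovable.

```python
import random

class UndefInst:
    """Model of 'undef as an instruction': the value is arbitrary,
    but picked exactly once and reused at every use site."""
    def __init__(self):
        self.value = random.getrandbits(32)  # arbitrary i32 bits

u = UndefInst()

# Consistency: every use of %u sees the same bits, so `%u == %u` is
# true -- unlike the classic undef value.
assert u.value == u.value

# But `%x < %u` is not provable for any particular %x: the arbitrary
# bits might compare <= %x.  By contrast, `%x < %x +nsw 1` is provable
# from the no-overflow assumption alone, without knowing any bits.
x = 7
print(x < u.value)  # sometimes True, sometimes False across runs
```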
> As I said above, we don't really care about folding cases that we've
> proved to have UB.
> Moreover, in your scheme, we still won't be able to do optimizations
> like:
>  %y = ...
>  %undef.0 = undefinst
>  %val = select i1 %cond, i32 %undef.0, i32 %y
>  print(%val)
>  to
>  %y = ...
>  print(%y)
> since it could have been that
>  %x = INT_MAX at runtime, but unknown at compile time
>  %y = %x +nsw 1
>  %undef.0 = undefinst
>  %val = select i1 %cond, i32 %undef.0, i32 %y
>  print(%x s< %val)
> Folding %val to %y would take us from always printing false to
> printing either true or false, depending on the will of the compiler.
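Sanjoy's counterexample can be played out concretely in Python (a sketch; the particular undef bits are made up, and the wrap helper models what the hardware does when the nsw assumption is violated):

```python
INT_MAX = 2**31 - 1

def wrap_add32(a, b):
    """i32 addition with two's-complement wraparound."""
    r = (a + b) & 0xFFFFFFFF
    return r - (1 << 32) if r >= (1 << 31) else r

x = INT_MAX
y = wrap_add32(x, 1)   # runtime value of %x +nsw 1 once it overflows
undef0 = 12345         # some arbitrary-but-consistent i32 (illustrative)

# Before the fold: %val is undef0 or y, and %x s< %val is always
# false, because no i32 is signed-greater than INT_MAX.
for cond in (True, False):
    val = undef0 if cond else y
    print(x < val)     # False both times

# After folding %val to %y, the optimizer may instead use the nsw fact
# to "prove" %x s< %x + 1 and print true -- an observable change.
```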

I think you *can* do these optimizations, you just have to do them very late
at a point where you're not going to do certain kinds of downstream optimization.
This kind of phase ordering, where certain high-level optimizations are only
permitted in earlier passes, is a pretty standard technique in optimizers, and
we use it for plenty of other things.

A more prominent example is that my model doesn't allow the middle-end to
freely eliminate "store undef, %x".  That store does have the effect of initializing
the memory to a consistent value, and that difference can be detected by
subsequent load/store optimizations.  But the code generator can still trivially
peephole this (when the undef has only one use) because there won't be any
subsequent load/store optimizations.
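The "store undef" point can be modeled the same way (a sketch with a hypothetical helper name): under the consistent-value model the store does initialize memory, so two subsequent loads must agree, and deleting the store is detectable.

```python
import random

memory = {}

def store_undef(addr):
    # The store writes arbitrary bits, but it DOES initialize the
    # cell: every later load from `addr` must see this same value.
    memory[addr] = random.getrandbits(32)

store_undef(0x1000)
a = memory[0x1000]  # first load
b = memory[0x1000]  # second load
assert a == b       # consistent, so the store cannot simply be deleted
```

Once no further load/store optimization will run, as in the code generator, this obligation disappears and the store can be peepholed away.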


> I have to think a bit about this, but in your scheme I think we will
> generally only be able to fold `undefinst` to constants, since a
> non-constant value could have an expression like `<INT_MAX at
> runtime> +nsw 1` as its source, which would justify non-local
> transforms that `undefinst` would not.
> -- Sanjoy
