[llvm-dev] Uncovering non-determinism in LLVM - The Next Steps

Daniel Berlin via llvm-dev llvm-dev at lists.llvm.org
Fri Jul 7 10:27:13 PDT 2017


I'm not sure email is the best way to continue this discussion, as it's
getting long and complex to follow, so I think at some point we should just
go have a chat by one of our cubes. :)

I will say I don't believe that achieving "every pass must avoid changing
ordering in containers in a way that causes an output change" is a useful
goal. As I've said, I believe that is for output writing to handle.

Passes simply do not and should not care about stuff like that.

In part because they do not know what is happening with the IR.
It could be executed by a JIT, it could be splatted into a piece of
hardware. It could be shipped over a network.
It could simply be thrown away.

The fact that something later happens to write it out is just one
possibility, and if that thing wants some deterministic ordering of things
in containers (or whatever else causes output ordering to differ), it
should be responsible for making that happen, not try to force that cost
into every pass :)





On Fri, Jul 7, 2017 at 10:16 AM, David Blaikie <dblaikie at gmail.com> wrote:

>
>
> On Fri, Jul 7, 2017 at 9:58 AM Daniel Berlin <dberlin at dberlin.org> wrote:
>
>>
>>>>>
>>>>> One of our definitions of non-determinism is simply "output from
>>>>> command line tools should always be bit identical given identical inputs",
>>>>> which is suitable for content-based caching build systems like Bazel.
>>>>>
>>>> Just to point out: These systems already often have to ignore
>>>> whitespace differences, etc.
>>>>
>>>> I'd also argue that any of these content-based caching build systems is
>>>> going to be rarely used if it requires every single tool that uses it to
>>>> produce bit-identical output (i.e., boiling the ocean), as opposed to
>>>> letting the tools define what "identical" means or something.
>>>>
>>>
>>> This seems inconsistent with my understanding of LLVM's (& Google's, re:
>>> Bazel) goals..
>>>
>>
>> My point is that they are certainly welcome to try this path, but it's
>> been tried before.
>> There are reasons ccache, et al have various "slop" modes.
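>> (For concreteness, if I remember the spelling right: ccache's "sloppiness"
>> knob - e.g. CCACHE_SLOPPINESS=time_macros - exists precisely so users can
>> tell the cache which kinds of differences to ignore.)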
>>
>
> I'm not familiar with those - but I don't expect things would break
> outright if a tool were non-deterministic (though semantics-preserving);
> it's just that determinism (in the same-stuff-in-same-stuff-out sense) is
> handy/helpful.
>
>
>> I suspect, but can't prove, that a system that rigid will not be adopted
>> in practice.
>> (So far, only one major player has done so, and did so by fiat!)
>>
>
> What product/player/thing are you referring to here? ^
>
>
>>
>>
>>
>>>
>>> So far as I know, LLVM has fixed pretty much any "the same input causes
>>> different output bits, even if they're semantically equivalent" bug that's
>>> reported (well, they're considered bugs at least - sometimes takes a while
>>> for someone to prioritize them)
>>>
>>
>> Which, again, I'm fine with as long as we don't muck up passes a ton to
>> do it. I'd rather design them properly and fix things like "random
>> limits", etc.
>>
>
> I'm not sure what you mean by 'random limits'.
>
> I'm not sure "designing them properly" (in the sense of things like "order
> of basic blocks should not affect optimization quality/power") is at odds
> with this - seems orthogonal & also desirable.
>
>
>>
>>
>>
>>>
>>> While it doesn't outright break Bazel when that property doesn't hold, I
>>> believe it can cause more cache invalidation than is ideal (eg: build
>>> system evicts an object file from the cache, but still has the resulting
>>> executable in the cache - to do the link step it goes to rebuild the
>>> objects*, finds a new/different object and then reruns the link because of
>>> it)
>>>
>>> Also, for LLVM testing purposes, I believe any case where the output
>>> text/bits are different is usually fixed so the tests are reliable.
>>>
>>
>> FWIW: In the cases I cited, the tests can already be made reliable in the
>> face of these differences :)
>>
>
> With LLVM's test infrastructure mostly built around FileCheck, I've found
> making tests order-independent to be somewhat difficult. Do you have
> particular techniques in mind? (eg: I care that one BB has feature X and
> one BB has feature Y - I'm not sure how to express that with LLVM's test
> infrastructure)
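>
> (If I'm remembering CHECK-DAG's semantics right, it covers the single-line
> case - a made-up sketch, with invented value names, just to illustrate:
>
> ; RUN: opt -S -instcombine < %s | FileCheck %s
> ; CHECK-DAG: %x.feature = add i32
> ; CHECK-DAG: %y.feature = mul i32
>
> since consecutive CHECK-DAG lines may match in either order. But that only
> says "both lines appear somewhere"; tying feature X to one particular BB
> and feature Y to another, when the blocks themselves may print in either
> order, is the part I don't see a clean way to express.)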
>
>
>> * maybe this scenario doesn't really happen - if Bazel assumes
>>> reproducibility is transitive, it could observe that the input hashes are
>>> the same so it doesn't need to run the task at all if all the other object
>>> files are available - could just assume the resulting binary would be the
>>> same as the one it already has
>>>
>>
>> Right.
>> This is definitely faster :)
>>
>>
>>> No optimizations should change, but I think it'd be pretty difficult to
>>> support testing that pass (granted, with LLVM's testing approach, at least
>>> that would be fairly isolated to only the tests for the pass - it wouldn't
>>> pollute other pass tests with nondeterminism of output, so it might not be
>>> too painful)
>>>
>>>
>>>> I don't think you can say such a pass is broken.
>>>>
>>>
>>> I kind of would say it's broken.
>>>
>> Why?
>> It makes literally no incorrect semantic change to the code.
>>
>> This is precisely why my view is that if you want *output* to look a
>> certain way, you need to cause the *output emission* to handle that.
>>
>
> I'm confused - maybe it's too early, or I'm not reading/understanding you
> correctly. I'm not sure how to communicate well here, but I think maybe
> we're talking past each other. I'm not sure :/
>
> Are you suggesting that LLVM IR/bitcode writing shouldn't depend on the
> order of basic blocks in a Function? (or perhaps I'm
> misunderstanding/that's coming back to your point that you don't find it
> important that IR/Bitcode writing be deterministic?)
>
>
>>> Arbitrary reordering I'd be OK with; nondeterministic reordering
>>> would seem not OK to me (again, based on the sentiments I've heard
>>> expressed from Chandler and others (though I may've misunderstood/may be
>>> incorrectly representing their perspective) on the project/my best
>>> understanding of the goals, etc - and I tend to agree with them, but could
>>> well be wrong)
>>>
>>
>>
>>>
>>>
>>>> Hence I think if you want such constraints, they need to be placed on
>>>> output orderings, not on what passes do.
>>>>
>>>
>>> Ah, hmm, maybe there's a point of confusion. Changing the order of BBs
>>> in a Function would change the output ordering, right?
>>>
>>
>> Yes
>>
>>
>>> Presumably that would be reflected in printed IR, bitcode, and possibly
>>> in block layout (at least at -O0, but I could imagine some aspects of that
>>> ordering leaking out even when LLVM does advanced block layout
>>> optimizations).
>>>
>>
>> Again, any ordering changes that cause optimization differences are clear
>> bugs in the pass.
>>
>
> Sure - I'm OK with/agree with that - but that wasn't really what I was
> thinking of/caring about in this thread/subject. I think it would be
> important/valuable/great (though difficult, I think*) to fuzz passes to
> ensure that holds true.
>
> * Reordering BBs and then checking that the optimization output is
> semantically equivalent would presumably require complex semantic checking,
> or some kind of recanonicalization - sorting output BBs by some property
> not related to the order they appear in?
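>
> (A very rough sketch of the kind of recanonicalization I mean, assuming a
> reverse post-order walk from the entry block counts as "a property not
> related to the order they appear in" - illustrative only, not tested, and
> it deliberately ignores blocks unreachable from the entry:
>
> #include "llvm/ADT/PostOrderIterator.h"
> #include "llvm/IR/Function.h"
>
> // Rewrite F's block list into reverse post-order from the entry block, so
> // that two functions differing only in block-list order print identically.
> // Blocks unreachable from the entry are never visited by the traversal and
> // simply keep their original relative order at the end of the function.
> static void canonicalizeBlockOrder(llvm::Function &F) {
>   llvm::ReversePostOrderTraversal<llvm::Function *> RPOT(&F);
>   llvm::BasicBlock *InsertPoint = &F.getEntryBlock();
>   for (llvm::BasicBlock *BB : RPOT) {
>     if (BB == InsertPoint)
>       continue;
>     BB->moveAfter(InsertPoint);
>     InsertPoint = BB;
>   }
> }
>
> Whether something like that is cheap enough to run before every print/hash
> is a separate question.)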
>
>
>> Yes, I know that this means we have a lot of bugs :) I consider any case
>> where, due to a limit, identical[1] code optimizes differently, to be a
>> bug. You should too!
>>
>
> I do, it just seems like a different point from whether or not output is
> deterministic/how to achieve that/whether it's a goal/etc... so I'm
> confused by discussing these things together & by talking about a position
> on one issue as implying a position on the other...
>
>
>>
>> block1:
>> foo
>> block2:
>> bar
>>
>> should optimize the same as
>>
>> block2:
>> bar
>> block1:
>> foo
>>
>>
>> Let me try to put this another way:
>> My argument is precisely that code that is literally identical (but
>> subject to expression in multiple textual forms) should be optimized the
>> same way, and that's the avoidance of non-determinism you should shoot for,
>> *not* the simple subset of "same crap in, same crap out" :P.
>>
>
> I'm confused by the description of this as a subset - are you saying
> "semantically identical" (for want of a better term) code should be
> optimized to exactly the same bits as output? Or semantically identical
> bits as output?
>
>
>> This form does *not* preclude the output form from being different for
>> things that meet [1],
>>
>
> OK, I guess no to that last question, if I'm reading this right ^. That
> means that same-crap-in-same-crap-out isn't a subset of this goal, then?
>
> *continues to be confused*
>
>
>> and if you want the output form to look the same, that is a function of
>> ordering the output, not of "stopping passes from doing things":
>> if you aren't ordering the output, and just try for the "same crap in
>> produces same crap out" notion of determinism, you are going to get
>> screwed by all the random limits, etc., anyway: as you improve the passes
>> you will get gratuitous, but deterministic, output changes, and probably
>> some non-determinism anyway.
>>
>
> Agreed that sorting various things on output could be one way to achieve
> determinism of output. I can see how that ends up wrapping these two
> different issues up in this discussion.
>
> I'm not sure I agree with that direction, though - it seems expensive in
> other ways (sorting things after the fact may cost more than preserving
> determinism throughout, etc.), though I don't know which comes out as the
> best tradeoff in the end. It also hasn't been the approach LLVM seems to
> have taken so far - though the approach so far may not be
> revisited/reconsidered.
>
>
>
>
>>
>> [1] For the purposes of this discussion, let's define this to mean the
>> following:
>> code that is subject to multiple textual forms that have precisely the
>> same meaning to LLVM, but that generate different orderings in our
>> containers/output:
>>
>> i.e., textual input
>> block1:
>> foo
>> block2:
>> bar
>>
>> should generate the same results as the textual input:
>>
>> block2:
>> bar
>> block1:
>> foo
>>
>>
>>
>>