[llvm-dev] Significant code difference with a split call to opt
Sébastien Michelland via llvm-dev
llvm-dev at lists.llvm.org
Wed Jul 3 06:55:35 PDT 2019
Alright, since it's deemed important I'll do my best to help. I've built
an up-to-date LLVM from Git, and I'll run more tests once I set up the
test-suite instrumentation I need to replace the optimization sequences.
So far this doesn't look like a determinism issue to me. The two
methods are each deterministic on their own as far as I can test; they
just don't produce the same output (which might have an explanation in
terms of how the pass manager works?).
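(A check along these lines is enough to see it; file names are
placeholders:
    opt -O3 input.bc -o run1.bc
    opt -O3 input.bc -o run2.bc
    cmp run1.bc run2.bc   # no output means the two runs are byte-identical
and the same again for the split-call variant.)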
Are you able to reproduce the test case from earlier (the one with the
attached shell script)?
Sébastien Michelland
On 6/27/19 1:47 PM, David Greene wrote:
> We want LLVM to be deterministic and there have been efforts to fix
> problems related to data structures causing different generated code
> sequences. It's certainly possible something like that is going on, but
> it shouldn't just be dismissed. It would be best if we could get to the
> bottom of it and see what needs fixing.
>
> -David
>
> Sébastien Michelland <sebastien.michelland at ens-lyon.fr> writes:
>
>> Hi,
>>
>> Sorry for the slow answer; I tried to look into the sequence details,
>> but 250 passes plus the complex bitcode of the test-suite examples make
>> this pretty hard.
>>
>> In the meantime I stumbled upon llvm-diff, which abstracts away the
>> most significant difference, namely instruction renaming. It also
>> ignores function attributes, so calling conventions are silently
>> unified; but at least it gives empty diffs when comparing the two
>> methods. This means that my performance differences are mostly
>> measurement errors...
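>> (For reference, the comparison itself is a single llvm-diff call;
>> paths are placeholders and split.bc stands for whatever the split-call
>> method produced:
>>     opt -O3 input.bc -o single.bc
>>     llvm-diff single.bc split.bc   # empty output = no reported difference
>> )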
>>
>> Some of the differences might be "normal", e.g. caused by randomized
>> data structures. I don't have that much experience with LLVM code, so
>> I'm not sure how likely this is.
>>
>> I'll stick to llvm-diff for now and maybe come back to this when I
>> have a clearer understanding of the pass management process. ^^
>>
>> Thanks for your time and help!
>> Sébastien Michelland
>>
>> On 6/19/19 11:42 AM, Hiroshi Yamauchi wrote:
>>> Passing -print-after-all to opt should print the IR after each
>>> pass. That may help figure out what's going on.
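>>> Something along these lines, with the dumps going to stderr (file
>>> names are placeholders):
>>>     opt -O3 -print-after-all input.bc -o output.bc 2> after-each-pass.txt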
>>>
>>> On Mon, Jun 17, 2019 at 1:30 PM Sébastien Michelland via llvm-dev
>>> <llvm-dev at lists.llvm.org> wrote:
>>>
>>> Hi,
>>>
>>> I reproduced the test on many individual files and got very variable
>>> results... it seems the computer's workload when running the test suite
>>> influenced the execution speed a lot more than the standard deviation
>>> suggests.
>>> I'll withdraw the performance claim until I can get consistent results
>>> (hence the changed subject line); apologies for the confusion.
>>>
>>> What I can still show easily is that the code generated by these two
>>> methods is different (which is already weird). For a simple example,
>>> grab a copy of bilateral_grid.bc:
>>>
>>>
>>> <https://github.com/llvm/llvm-test-suite/blob/master/Bitcode/Benchmarks/Halide/bilateral_grid/bilateral_grid.bc>
>>> Then you can generate my sequences with [opt -O3 -debug-pass=Arguments]
>>> and diff the outputs. Please see the attached script.
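>>> (Not the exact attachment, but the comparison boils down to roughly
>>> this; the pass-list extraction is approximate and file names are
>>> placeholders:
>>>     # method 1: a single -O3 invocation
>>>     opt -O3 bilateral_grid.bc -o single.bc
>>>     # method 2: replay the pass flags that -O3 reports
>>>     opt -O3 -debug-pass=Arguments bilateral_grid.bc -o /dev/null 2> args.txt
>>>     PASSES=$(grep '^Pass Arguments:' args.txt | sed 's/^Pass Arguments: *//' | tr '\n' ' ')
>>>     opt $PASSES bilateral_grid.bc -o replayed.bc
>>>     # compare the textual IR
>>>     llvm-dis single.bc -o single.ll
>>>     llvm-dis replayed.bc -o replayed.ll
>>>     diff single.ll replayed.ll
>>> )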
>>>
>>> The differences seem to be mainly in variable indices (are they
>>> randomized?); on one test (namely jacobi-2d-imper) I have seen calling
>>> convention differences.
>>>
>>> I'd like to optimize programs by greedily selecting optimizations,
>>> making a call to opt at each step. If I don't have equality between the
>>> two methods, I can't be sure that the sequence I'm building will make
>>> much sense.
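>>> (To be concrete, the loop I have in mind is roughly the following;
>>> the candidate pass list and the evaluate.sh scoring script are
>>> placeholders:
>>>     cp input.bc current.bc
>>>     for step in 1 2 3 4 5; do                # fixed number of greedy steps
>>>       best=""; best_score=-1
>>>       for pass in -sroa -instcombine -gvn -licm; do
>>>         opt $pass current.bc -o candidate.bc
>>>         score=$(./evaluate.sh candidate.bc)  # hypothetical integer score
>>>         if [ "$score" -gt "$best_score" ]; then best=$pass; best_score=$score; fi
>>>       done
>>>       opt $best current.bc -o next.bc && mv next.bc current.bc
>>>     done
>>> )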
>>>
>>> Sébastien Michelland