[llvm-dev] Effectiveness of llvm optimisation passes
Matthias Braun via llvm-dev
llvm-dev at lists.llvm.org
Fri Sep 22 14:01:22 PDT 2017
> On Sep 22, 2017, at 12:17 AM, Yi Lin via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> I noticed that there is a '-run-pass' argument for llc. I am wondering if I can do a similar approach with machine level optimisations/passes for llc. Are those passes optional (so I can turn them off)? And how can I get MIR format as llc expects with '-run-pass'?
It depends on the pass: some are optional, some aren't. If the pass has `if (skipFunction()) return false;` in its code, it is an optional pass that gets skipped at -O0.
In theory you should be able to use llc -stop-before, -run-pass, and -start-after and write the intermediate results to .mir files. In practice we are not there yet: targets have a large amount of state scattered around, and the .mir files capture a lot of it but not all, so things are likely to break if you serialize to .mir in between.
And for the record: despite the problems, the .mir files are an invaluable tool for writing tests that exercise a single machine pass in isolation.
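As a rough sketch of that split-codegen workflow (a minimal sketch, assuming an LLVM toolchain with llc on PATH; the pass name "greedy" for the register allocator is just an illustrative choice):

```python
import shutil
import subprocess

def split_codegen_cmds(input_ll, pass_name, mir_path, obj_path):
    """Build two llc invocations: stop before a machine pass and
    serialize to .mir, then resume from that pass and emit an object."""
    stop = ["llc", input_ll, f"-stop-before={pass_name}", "-o", mir_path]
    resume = ["llc", mir_path, f"-start-before={pass_name}",
              "-filetype=obj", "-o", obj_path]
    return stop, resume

stop, resume = split_codegen_cmds("a.ll", "greedy", "a.mir", "a.o")
print(" ".join(stop))
print(" ".join(resume))

# Only attempt to run the commands if llc is actually installed.
if shutil.which("llc"):
    subprocess.run(stop, check=True)
    subprocess.run(resume, check=True)
```

As Matthias notes above, whether the resumed half actually works depends on how much target state the .mir file managed to capture.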
> Thanks a lot.
> On 22/9/17 15:10, Craig Topper wrote:
>> Having -O0 on your clang command line causes all functions to get marked with an 'optnone' attribute that prevents opt from being able to optimize them later. You should also add "-Xclang -disable-O0-optnone" to your command line.
>> On Thu, Sep 21, 2017 at 10:04 PM, Yi Lin via llvm-dev <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>> Hi all,
>> I am trying to understand the effectiveness of various llvm
>> optimisations when a language targets llvm (or C) as its backend.
>> The following is my approach (please correct me if I did anything wrong):
>> I am trying to explicitly control the optimisations passes in
>> llvm. I disable optimisation in clang, but instead emit
>> unoptimized llvm IR, and use opt to optimise that. These are what
>> I do:
>> * clang -O0 -S -mllvm -disable-llvm-optzns -emit-llvm
>> -momit-leaf-frame-pointer a.c -o a.ll
>> * opt -(PASSES) a.ll -o a.bc
>> * llc a.bc -filetype=obj -o a.o
>> To evaluate the effectiveness of optimisation passes, I started
>> with an 'add-one-in' approach. The baseline is no optimisation
>> passes, and I iterate through all the O1 passes and explicitly
>> enable one pass for each run. I didn't try to understand those
>> passes, so it is a black-box test. This will show how effective
>> each single optimisation is (ignoring correlations between
>> passes). This can be done iteratively, e.g. identify the most
>> effective pass, always enable it, and then 'add-one-in' for the
>> rest of the passes. I also plan to take a 'leave-one-out'
>> approach as well, in which the baseline is all optimisations
>> enabled and one pass is disabled at a time.
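The add-one-in / leave-one-out enumeration described above can be sketched as follows (a minimal sketch; the pass list is an illustrative subset, and the legacy single-pass `opt -<pass>` spelling is assumed, matching the commands earlier in the thread):

```python
O1_PASSES = ["mem2reg", "sroa", "instcombine", "licm"]  # illustrative subset

def add_one_in(passes):
    """Baseline (no passes) plus one run per single enabled pass."""
    return [[]] + [[p] for p in passes]

def leave_one_out(passes):
    """Full pass set minus one pass at a time."""
    return [[q for q in passes if q != p] for p in passes]

def opt_cmd(enabled, src="a.ll", dst="a.bc"):
    """Assemble the opt invocation for one pass configuration."""
    return ["opt"] + [f"-{p}" for p in enabled] + [src, "-o", dst]

# Print every add-one-in configuration as a command line.
for config in add_one_in(O1_PASSES):
    print(" ".join(opt_cmd(config)))
```

Each printed command would then be run and benchmarked against the empty-pass baseline.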
>> Here is the result for the 'add-one-in' approach on some micro-benchmarks.
>> The result seems a bit surprising. A few passes, such as licm,
>> sroa, instcombine and mem2reg, each seem to deliver performance
>> very close to O1 (which includes all the passes). Figure 7 is an
>> example. If my methodology is correct, my guess is that those
>> optimisations may require some common internal passes, which
>> actually deliver most of the improvements. I am wondering if this
>> is true.
>> Any suggestions or critiques are welcome.
>> LLVM Developers mailing list
>> llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>