[llvm-dev] Relationship between clang, opt and llc

toddy wang via llvm-dev llvm-dev at lists.llvm.org
Mon Jan 8 00:00:47 PST 2018


Hi Sean,

Please see my replies inline below.
Looking forward to your comments.

Thanks for your time!

On Mon, Jan 8, 2018 at 1:12 AM, Sean Silva <chisophugis at gmail.com> wrote:

>
>
> On Jan 7, 2018 8:46 PM, "toddy wang" <wenwangtoddy at gmail.com> wrote:
>
> @Sean, here is my summary of several tools.
>
> Format: (ID,tool, input->output, timing, customization, questions)
>
> 1. llc, 1 bc -> 1 obj, back-end compile-time (code generation and
> machine-dependent optimizations), difficult to customize the pipeline, N/A
> 2. LLD: all bc files and obj files -> 1 binary (passing -flto to clang
> for *.bc file generation), back-end link-time optimizations, code
> generation, and machine-dependent optimizations, easy to customize the pipeline
> w/ -lto-newpm-passes; what is the connection between -lto-newpm-passes and
> -lto-newpm-aa-pipeline, and how do I use -lto-newpm-passes to customize the pipeline?
>
>
> You just specify the list of passes to run, as you would to opt -passes
>
> -lto-newpm-aa-pipeline has the same function as opt's -aa-pipeline option.
>

The only documentation I could find is this source code:
https://github.com/Microsoft/llvm/blob/master/tools/llvm-lto2/llvm-lto2.cpp
It seems to me that -aa-pipeline is only for alias analysis passes.
Could you explain what opt -passes is used for, and how to use it?

My impression is that if I want to run multiple passes with opt, e.g. dce
and dse, all I need to do is
opt -dce -dse a.bc
So what is -passes used for?
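For instance, is the difference just the spelling? My (unverified) guess is
that these two would be roughly equivalent:

opt -dce -dse a.bc -o out.bc              # legacy pass manager: one flag per pass
opt -passes='dce,dse' a.bc -o out.bc      # new pass manager: a single pipeline string

Is that right?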



> 3. gold: mixed obj files and bc files -> 1 binary (passing -flto to clang
> for *.bc file generation), back-end link-time optimization w/ LLVMgold.so
> and code generation and machine-dependent optimizations, unaware of whether
> it is customizable via command-line options; can we consider LLD a
> more customizable gold from the perspective of pipeline customization?
>
>
> Gold and LLD are very similar for this purpose, and LLD has some extra
> goodies like -lto-newpm-passes
>
>
Good to know. LLD makes LLVM LTO easier.


>
> 4. opt, 1 bc file -> 1 bc file at a time, middle-end machine-independent
> optimizations (maybe others?), easy to customize the pipeline via command-line
> options, N/A
> 5. llvm-link, many *.bc files -> 1 bc file, link time (unknown whether there
> is any optimization), unknown why it exists, unknown how to do
> customization, N/A
>
>
> llvm-link doesn't perform optimizations.
>

Any usage scenario for llvm-link?
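The only one I can think of is merging several modules by hand so that opt
can then run over the combined module, roughly (my guess, not verified):

llvm-link a.bc b.bc c.bc -o merged.bc   # merge the modules into one
opt -O3 merged.bc -o merged.opt.bc      # run middle-end passes across the merged module

Is that the intended use?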


>
>
> With above understandings, there are several ways to fine-grained tune
> clang/llvm optimization pipeline:
> 1. clang (c/c++  to bc translation, with minimal front-end optimizations,
> w/ -emit-llvm -O1 -Xclang -disable-llvm-passes), --> opt (w/ customizable
> middle-end optimizations for each bc file independently) --> gold
> (un-customizable back-end link-time optimization and code generation)
> 2. clang (c/c++  to bc translation, with minimal front-end optimizations,
> w/ -flto) -->opt ( same as 1) --> lld (w/ -lto-newpm-passes for link-time
> optimization pipeline customization, how?)
> 3. clang (c/c++ to *.bc translation and optimization, customizable by means
> of clang command-line options, maybe including both front-end optimization
> and middle-end optimizations). W/O explicitly specifying opt optimization
> pipeline, there may still be middle-end optimizations happening; also w/o
> explicitly specifying linker, it may use GNU ld / GNU gold / lld as the
> linker and with whichever's default link-time optimization pipeline.
>
> So, it seems to me that 2 is the most customizable pipeline, with
> customizable middle-end and back-end pipelines independently,
>
>
> The thing customized by -lto-newpm-passes is actually a middle-end
> pipeline run during link time optimization. The backend is not very
> customizable.
>
> Also, note that with a clang patch like the one I linked, you don't need
> opt because you can directly tell clang what fine-grained set of passes to
> run in the middle end.
>
It seems that opt's functionality and -lto-newpm-passes overlap with each
other.
With -lto-newpm-passes, one does not need to specify how programs are
linked, because the middle-end optimizations (what opt does), LTO, and
linking can all be handled by LLD.
So LLD = opt + LTO + ld, right?
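To make my mental model concrete, I imagine an invocation roughly like this
(only a sketch; I am assuming -Xlinker forwards the option to LLD unchanged,
and that the pass-string syntax is the same as for opt -passes):

clang -flto -O1 -Xclang -disable-llvm-passes -c a.c b.c   # with -flto the .o files contain bitcode
clang -flto -fuse-ld=lld -Xlinker '-lto-newpm-passes=function(dce,dse)' a.o b.o -o a.out

i.e. compile to unoptimized bitcode, then let LLD run the chosen middle-end
pipeline, LTO, code generation, and the final link.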


> One approach I have used in the past is to compile with -flto -O0
> -disable-O0-optnone and then do all optimizations at link time. This can
> simplify things because you only need to re-run the link command (it still
> takes a long time, but with sufficient ram (and compiling without debug
> info) you should be able to run multiple different pass pipelines in
> parallel).
>
Should "-flto -O0 -disable-O0-optnone"  be  "-flto -O1 (not O0) -Xclang
-disable-llvm-passes"? I believe the purpose is to generate unoptimized .bc
files.
Then, all machine-independent optimizations are done within LLD.
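If so, my understanding of the suggested workflow, as a sketch (assuming
-disable-O0-optnone is a cc1 option and therefore needs -Xclang, and that
'default<O3>' is an accepted pipeline name for the new pass manager):

clang -flto -O0 -Xclang -disable-O0-optnone -c a.c b.c   # unoptimized bitcode, without optnone
clang -flto -fuse-ld=lld -Wl,-lto-newpm-passes='default<O3>' a.o b.o -o a.out

so that only the second (link) command has to be re-run to try a different
pass pipeline.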


> If your only goal is to test middle-end pass pipelines (e.g. synergies of
> different passes) then that can be a good approach. However, keep in mind
> that this is really just a small part of the larger design problem of
> getting the best code with the best compile time. In practice, profile
> feedback (and making good use of it), together with accurate cost modeling
> (especially in the middle end for optimizations like inlining and loop
> unrolling), together with appropriate link-time cross-module optimization
> tend to matter just as much as (or more than) a particularly intelligently
> chosen sequence of passes.
>
This is very interesting to me.

What you described is as follows:
Performance(PGO + accurate cost modeling + appropriate link-time
cross-module optimization) >= Performance(intelligently chosen sequence
of passes)

Now, my questions are:
1. What do you mean by "accurate cost modeling"?
Do you mean tuning the cost model of each optimization with detailed
target-machine micro-architectural information?
If so, is this process automated or manual, and how?

2. When you say "appropriate link-time cross-module optimization", what
does "appropriate" mean?
How do you decide which modules/optimizations are appropriate?
Is this process manual or automated?

3. What does an "intelligently chosen sequence of passes" mean?
In particular, how do you decide that one sequence of passes is better than
another? By random sampling and measuring runtime?

4. Do you have any evidence that the performance of the former is
better than the latter?


> Also, keep in mind that even though we in principle have a lot of
> flexibility with the sequence of passes in the middle end, in practice a
> lot of tuning and bug fixing has been done with the default O2/3 pipelines.
> If you deviate from them you may end up with pretty silly code. An example
> from my recent memory was that an inopportune running of GVN can cause a
> loop to have an unexpected set of induction variables, throwing off other
> optimizations.
>
> -- Sean Silva
>
> then 1 with only a customizable middle-end optimization pipeline, and then 3
> with the least amount of control over the optimization pipeline via clang
> command-line options.
>
> Thanks for your time and welcome to any comments!
>
> On Sun, Jan 7, 2018 at 12:40 AM, Sean Silva <chisophugis at gmail.com> wrote:
>
>> No, I meant LLD, the LLVM linker. This option for LLD is relevant for
>> exploring different pass pipelines for link time optimization.
>>
>> It is essentially equivalent to the -passes flag for 'opt'.
>>
>> Such a flag doesn't make much sense for 'llc' because llc mostly runs
>> backend passes, which are much more difficult to construct custom pipelines
>> for (backend passes are often required for correctness or have complex
>> ordering requirements).
>>
>> -- Sean Silva
>>
>>
>> On Jan 6, 2018 7:35 PM, "toddy wang" <wenwangtoddy at gmail.com> wrote:
>>
>> @Sean, do you mean llc?
>> For llc 4.0 and llc 5.0, I cannot find the -lto-newpm-passes option; is it a
>> hidden one?
>>
>> On Sat, Jan 6, 2018 at 7:37 PM, Sean Silva <chisophugis at gmail.com> wrote:
>>
>>>
>>>
>>> On Jan 5, 2018 11:30 PM, "toddy wang via llvm-dev" <
>>> llvm-dev at lists.llvm.org> wrote:
>>>
>>> What I am trying to do is compile a program with different sets of
>>> optimization flags.
>>> Without fine-grained control over clang's optimization flags, it
>>> would be impossible to achieve what I intend.
>>>
>>>
>>> LLD has -lto-newpm-passes (and the corresponding -lto-newpm-aa-pipeline)
>>> which allows you to pass a custom pass pipeline with full control. At one
>>> point I was using a similar modification to clang (see
>>> https://reviews.llvm.org/D21954) that never landed.
>>>
>>> -- Sean Silva
>>>
>>>
>>> Although there is fine-grained control via opt, for large-scale
>>> projects the clang-opt-llc pipeline may not be a drop-in solution.
>>>
>>> On Fri, Jan 5, 2018 at 10:00 PM, Craig Topper <craig.topper at gmail.com>
>>> wrote:
>>>
>>>> I don't think "clang -help" prints options about optimizations. Clang
>>>> itself doesn't have direct support for fine-grained optimization control,
>>>> just the flags for the levels -O0/-O1/-O2/-O3. This is intended to be a simple
>>>> and sufficient interface for most users who just want to compile their code. So
>>>> I don't think there's a way to pass just -dse to clang.
>>>>
>>>> opt on the other hand is more of a utility for developers of llvm that
>>>> provides fine grained control of optimizations for testing purposes.
>>>>
>>>>
>>>>
>>>> ~Craig
>>>>
>>>> On Fri, Jan 5, 2018 at 5:41 PM, toddy wang <wenwangtoddy at gmail.com>
>>>> wrote:
>>>>
>>>>> Craig, thanks a lot!
>>>>>
>>>>> I'm actually confused by clang optimization flags.
>>>>>
>>>>> If I run clang -help, it shows many optimization options (denoted as set
>>>>> A) and non-optimization options (denoted as set B).
>>>>> If I run llvm-as < /dev/null | opt -O0/1/2/3 -disable-output
>>>>> -debug-pass=Arguments, it also shows many optimization flags (denoted as set
>>>>> C).
>>>>>
>>>>> There are many options in set C that are not in set A, and also options
>>>>> in set A that are not in set C.
>>>>>
>>>>> The general question is: what is the relationship between set A and
>>>>> set C at the same optimization level O0/O1/O2/O3?
>>>>> Another question is: how do I specify an option from set C as a clang
>>>>> command-line option if it is not in set A?
>>>>>
>>>>> For example, -dse is in set C but not in set A; how can I specify it
>>>>> as a clang option? Or is that simply not possible?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Jan 5, 2018 at 7:55 PM, Craig Topper <craig.topper at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> O0 didn't start applying optnone until r304127 in May 2017, which is
>>>>>> after the 4.0 family was branched. So only 5.0, 6.0, and trunk have that
>>>>>> behavior. Commit message copied below
>>>>>>
>>>>>> Author: Mehdi Amini <joker.eph at gmail.com>
>>>>>>
>>>>>> Date:   Mon May 29 05:38:20 2017 +0000
>>>>>>
>>>>>>
>>>>>>     IRGen: Add optnone attribute on function during O0
>>>>>>
>>>>>>
>>>>>>
>>>>>>     Amongst other, this will help LTO to correctly handle/honor files
>>>>>>
>>>>>>     compiled with O0, helping debugging failures.
>>>>>>
>>>>>>     It also seems in line with how we handle other options, like how
>>>>>>
>>>>>>     -fnoinline adds the appropriate attribute as well.
>>>>>>
>>>>>>
>>>>>>
>>>>>>     Differential Revision: https://reviews.llvm.org/D28404
>>>>>>
>>>>>>
>>>>>>
>>>>>> ~Craig
>>>>>>
>>>>>> On Fri, Jan 5, 2018 at 4:49 PM, toddy wang <wenwangtoddy at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> @Zhaopei, thanks for the clarification.
>>>>>>>
>>>>>>> @Craig and @Michael, for clang 4.0.1, -Xclang -disable-O0-optnone
>>>>>>> gives the following error message. Since which version has -disable-O0-optnone
>>>>>>> been supported?
>>>>>>>
>>>>>>> [twang15 at c89 temp]$ clang++ -O0 -Xclang -disable-O0-optnone -Xclang
>>>>>>> -disable-llvm-passes -c -emit-llvm -o a.bc LULESH.cc
>>>>>>> error: unknown argument: '-disable-O0-optnone'
>>>>>>>
>>>>>>> [twang15 at c89 temp]$ clang++ --version
>>>>>>> clang version 4.0.1 (tags/RELEASE_401/final)
>>>>>>> Target: x86_64-unknown-linux-gnu
>>>>>>>
>>>>>>> On Fri, Jan 5, 2018 at 4:45 PM, Craig Topper <craig.topper at gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> If you pass -O0 to clang, most functions will be tagged with an
>>>>>>>> optnone function attribute that will prevent opt and llc from optimizing
>>>>>>>> them even if you pass -O3 to opt and llc. This is the most likely cause of
>>>>>>>> the slowdown in 2.
>>>>>>>>
>>>>>>>> You can disable the optnone function attribute behavior by passing
>>>>>>>> "-Xclang -disable-O0-optnone" to clang
>>>>>>>>
>>>>>>>> ~Craig
>>>>>>>>
>>>>>>>> On Fri, Jan 5, 2018 at 1:19 PM, toddy wang via llvm-dev <
>>>>>>>> llvm-dev at lists.llvm.org> wrote:
>>>>>>>>
>>>>>>>>> I tried the following on LULESH1.0 serial version (
>>>>>>>>> https://codesign.llnl.gov/lulesh/LULESH.cc)
>>>>>>>>>
>>>>>>>>> 1. clang++ -O3 LULESH.cc; ./a.out 20
>>>>>>>>> Runtime: 9.487353 second
>>>>>>>>>
>>>>>>>>> 2. clang++ -O0 -Xclang -disable-llvm-passes -c -emit-llvm -o a.bc
>>>>>>>>> LULESH.cc; opt -O3 a.bc -o b.bc; llc -O3 -filetype=obj b.bc -o b.o ;
>>>>>>>>> clang++ b.o -o b.out; ./b.out 20
>>>>>>>>> Runtime: 24.15 seconds
>>>>>>>>>
>>>>>>>>> 3. clang++ -O3 -Xclang -disable-llvm-passes -c -emit-llvm -o a.bc
>>>>>>>>> LULESH.cc; opt -O3 a.bc -o b.bc; llc -O3 -filetype=obj b.bc -o b.o ;
>>>>>>>>> clang++ b.o -o b.out; ./b.out 20
>>>>>>>>> Runtime: 9.53 seconds
>>>>>>>>>
>>>>>>>>> 1 and 3 have almost the same performance, while 2 is significantly
>>>>>>>>> worse. I expected 1, 2, and 3 to differ only trivially.
>>>>>>>>>
>>>>>>>>> Is this expectation wrong?
>>>>>>>>>
>>>>>>>>> @Peizhao, what did you try in your last post?
>>>>>>>>>
>>>>>>>>> On Tue, Apr 11, 2017 at 12:15 PM, Peizhao Ou via llvm-dev <
>>>>>>>>> llvm-dev at lists.llvm.org> wrote:
>>>>>>>>>
>>>>>>>>>> It's really nice of you to point out the -Xclang option; it makes
>>>>>>>>>> things much easier. I really appreciate your help!
>>>>>>>>>>
>>>>>>>>>> Best,
>>>>>>>>>> Peizhao
>>>>>>>>>>
>>>>>>>>>> On Mon, Apr 10, 2017 at 10:12 PM, Mehdi Amini <
>>>>>>>>>> mehdi.amini at apple.com> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Apr 10, 2017, at 5:21 PM, Craig Topper via llvm-dev <
>>>>>>>>>>> llvm-dev at lists.llvm.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>> clang -O0 does not disable all optimization passes that modify the
>>>>>>>>>>> IR; in fact it causes most functions to get tagged with noinline to
>>>>>>>>>>> prevent inlining.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> It also disables lifetime intrinsics emission and TBAA, etc.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> What you really need to do is
>>>>>>>>>>>
>>>>>>>>>>> clang -O3 -c -emit-llvm source.c -o source.bc -v
>>>>>>>>>>>
>>>>>>>>>>> Find the -cc1 command line from that output. Execute that
>>>>>>>>>>> command with --disable-llvm-passes added; leave the -O3 and everything else.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> That’s a bit complicated: CC1 options can be passed through with
>>>>>>>>>>> -Xclang, for example here just adding to the regular clang invocation `
>>>>>>>>>>> -Xclang -disable-llvm-passes`
>>>>>>>>>>>
>>>>>>>>>>> Best,
>>>>>>>>>>>
>>>>>>>>>>> -- Mehdi
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> You should be able to feed the output from that command to
>>>>>>>>>>> opt/llc and get consistent results.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> ~Craig
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Apr 10, 2017 at 4:57 PM, Peizhao Ou via llvm-dev <
>>>>>>>>>>> llvm-dev at lists.llvm.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi folks,
>>>>>>>>>>>>
>>>>>>>>>>>> I am wondering about the relationship between clang, opt and llc. I
>>>>>>>>>>>> understand that this has been asked before, e.g.,
>>>>>>>>>>>> http://stackoverflow.com/questions/40350990/relationship-between-clang-opt-llc-and-llvm-linker.
>>>>>>>>>>>> Sorry for posting a similar question again, but I still have something
>>>>>>>>>>>> that hasn't been resolved yet.
>>>>>>>>>>>>
>>>>>>>>>>>> More specifically I am wondering about the following two
>>>>>>>>>>>> approaches compiling optimized executable:
>>>>>>>>>>>>
>>>>>>>>>>>> 1. clang -O3 -c source.c -o source.o
>>>>>>>>>>>>     ...
>>>>>>>>>>>>     clang a.o b.o c.o ... -o executable
>>>>>>>>>>>>
>>>>>>>>>>>> 2. clang -O0 -c -emit-llvm source.c -o source.bc
>>>>>>>>>>>>     opt -O3 source.bc -o source.bc
>>>>>>>>>>>>     llc -O3 -filetype=obj source.bc -o source.o
>>>>>>>>>>>>     ...
>>>>>>>>>>>>     clang a.o b.o c.o ... -o executable
>>>>>>>>>>>>
>>>>>>>>>>>> I took a look at the source code of the clang tool and the opt
>>>>>>>>>>>> tool; they both seem to use the PassManagerBuilder::populateModulePassManager()
>>>>>>>>>>>> and PassManagerBuilder::populateFunctionPassManager()
>>>>>>>>>>>> functions to add passes to their optimization pipeline; and for the
>>>>>>>>>>>> backend, clang and llc both use the addPassesToEmitFile() function to
>>>>>>>>>>>> generate object code.
>>>>>>>>>>>>
>>>>>>>>>>>> So presumably the above two approaches to generating an optimized
>>>>>>>>>>>> executable should do the same thing. However, I am seeing that the
>>>>>>>>>>>> second approach is around 2% slower than the first approach (which is the
>>>>>>>>>>>> way developers usually build) pretty consistently.
>>>>>>>>>>>>
>>>>>>>>>>>> Can anyone point me to the reasons why this happens? Or even
>>>>>>>>>>>> correct my wrong understanding of the relationship between these two
>>>>>>>>>>>> approaches?
>>>>>>>>>>>>
>>>>>>>>>>>> PS: I used the -debug-pass=Structure option to print out the
>>>>>>>>>>>> passes; they seem the same except that the first approach has an extra pass
>>>>>>>>>>>> called "-add-discriminator", but I don't think that's the reason.
>>>>>>>>>>>>
>>>>>>>>>>>> Peizhao
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>
>