[llvm-dev] Proposal for O1/Og Optimization and Code Generation Pipeline

Peter Smith via llvm-dev llvm-dev at lists.llvm.org
Fri Mar 29 03:06:15 PDT 2019


I'm definitely in favour of pursuing a meaningful -Og; it would be
especially welcome in the embedded area. One possibility that may be
worth exploring is to spell out the debug illusion that we wish to
achieve at -Og, then permit all optimizations that preserve that
illusion and disable the ones that do not. Arm's old proprietary
compiler armcc defined its optimization levels in this way, with the
manual giving a high-level description:
http://infocenter.arm.com/help/topic/com.arm.doc.dui0472m/chr1359124935804.html
with -O1 in armcc roughly equivalent to -Og. Of course actually
achieving this and testing for it is more difficult than writing it
down, but I think it did provide a nice framework for deciding what
to include. I don't know enough about debug info in LLVM to know how
practical this would be.

Peter

On Fri, 29 Mar 2019 at 09:38, Kristof Beyls via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
>
> I like the general direction.
> Which seems in line with what's already written at https://github.com/llvm/llvm-project/blob/master/llvm/include/llvm/Passes/PassBuilder.h#L116 (the closest thing I know of to a written-down statement of what our optimization levels should aim for).
>
> I think that, in a perfect world, -O1 and -Og are different optimization levels aiming for slightly different goals; but I also agree that improving the -O1 implementation to get closer to its described goal would go most of the way towards making it a reasonable baseline for an -Og optimization level, if we decided to introduce one.
>
> I agree that we should aim to use data to make tradeoffs.
> It's probably relatively easy to collect data on "compile time" and "execution time" metrics, but probably harder on "debuggability". I wonder if the bot Adrian pointed to in the Debug Info BoF at the last LLVM dev meeting (this one? http://lnt.llvm.org/db_default/v4/nts/124231) could help provide data for that. Or are those metrics too far removed from "debuggability as perceived by a human developer"?
>
> Thanks,
>
> Kristof
>
> Op vr 29 mrt. 2019 om 06:45 schreef Eric Christopher via llvm-dev <llvm-dev at lists.llvm.org>:
>>
>>
>>
>>>
>>> >
>>> >  - Dead code elimination (ADCE, BDCE)
>>>
>>>
>>> Regarding BDCE: the trivialized values might indeed be irrelevant to
>>> later calculations, but trivializing them might still harm the
>>> debugging experience. If BDCE were only applied at O2 and higher,
>>> that's likely not a huge loss. Regular DCE (meaning without the
>>> bit-tracking parts) is probably fine for O1.
>>>
>>>
>>
>> Probably not. I'll see what the impact is for sure.
>>
>>>
>>> > Register allocation
>>> >
>>> > The fast register allocator should be used for compilation speed.
>>>
>>>
>>> I'm not sure about this; we should understand the performance impact,
>>> which might be severe.
>>>
>>
>> Totally agree. I think evaluating the tradeoffs is going to be key. I really have no strong opinions here and am happy to go where the data takes us.
>>
>> Thanks :)
>>
>> -eric
>> _______________________________________________
>> LLVM Developers mailing list
>> llvm-dev at lists.llvm.org
>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>

