[llvm-dev] (RFC) Adjusting default loop fully unroll threshold

Chandler Carruth via llvm-dev llvm-dev at lists.llvm.org
Thu Feb 16 15:45:28 PST 2017

First off, I just want to say wow and thank you. This kind of data is
amazing. =D

On Thu, Feb 16, 2017 at 2:46 AM Kristof Beyls <Kristof.Beyls at arm.com> wrote:

> The biggest relative code size increases indeed didn't happen for the
> biggest programs, but instead for a few programs weighing in at about 100KB.
> I'm assuming the Google benchmark set covers much bigger programs than the
> ones displayed here.
> FWIW, the cluster of programs where code size increases between 60% to 80%
> with a size of about 100KB, all come from MultiSource/Benchmarks/TSVC.
> Interestingly, these programs seem to have float and double variants,  e.g.
> (MultiSource/Benchmarks/TSVC/Searching-flt/Searching-flt and
> MultiSource/Benchmarks/TSVC/Searching-dbl/Searching-dbl), and the code size
> bloat only happens for the double variants.

I think we should definitely look at this (as it seems likely to be a bug
somewhere), but I'm also not overly concerned with size regressions in the
TSVC benchmarks, which are unusually loop-heavy and small. We've had
several other changes that caused big fluctuations here.

> I think it may still be worthwhile to check if this also happens on other
> architectures, and why it happens only for the double-variants, not the
> float-variants.


> The second chart shows relative code size increase (vertical axis) vs
> relative performance improvement (horizontal axis):
> I manually checked the cause of the 3 biggest performance regressions
> (proprietary benchmark1: -13.70%;
> MultiSource/Applications/hexxagon/hexxagon: -10.10%;
> MultiSource/Benchmarks/FreeBench/fourinarow/fourinarow -5.23%).
> For the proprietary benchmark and hexxagon, the code generation didn't
> change for the hottest parts, so the regressions are probably caused by
> micro-architectural effects of code layout changes.

This is always good to know, even though it is frustrating. =]

> For fourinarow, there seemed to be a lot more spill/fill code, so the
> slowdown is probably due to suboptimal register allocation.

This is something we should probably look at. If you have the output lying
around, maybe file a PR about it?

> The third chart below just zooms in on the above chart to the -5% to 5%
> performance improvement range:
> [image: unroll_codesize_vs_performance_zoom.png]
> Whether to enable the increase in unroll threshold only at O3 or also at
> O2: I don't have a strong opinion based on the above data.

FWIW, this data seems to clearly indicate that we don't get performance
wins with any consistency when the code size goes up (and thus the change
has impact). As a consequence, I pretty strongly suspect that this should
be *just* used at O3 at least for now.

I see two further directions for Dehao that make sense here (at least to me):
1) I suspect we should investigate *why* the size increases are happening
without helping speed. I can imagine some legitimate reasons for this
(cold loops getting unrolled), but especially in light of the oddities you
point out above, I suspect more unrolling may be uncovering other
problems, and if we fix those, the shape of things will be different. We
should at least address the issues you uncovered above.

2) If this turns out to be architecture specific (it seems that way at
least initially, but hard to tell for sure with different benchmark sets)
we might make AArch64 and x86 use different thresholds here. I'm skeptical
about this though. I suspect we should do #1, and we'll either get a
different shape, or just decide that O3 is more appropriate.

> Maybe the compile time impact is what should be driving that discussion
> the most? I'm afraid I don't have compile time numbers.

FWIW, I strongly suspect that for *this* change, compile time and code size
will be pretty precisely correlated. Dehao's data certainly shows that to
be true in several cases.

> Ultimately, I guess this boils down to what exactly the difference is in
> intent between O2 and O3, which seems like a never-ending discussion...

The definitions I am working from are here:

I've highlighted the part that makes me think O3 is better here: the code
size increases (and thus compile time increases) don't seem to correspond
to runtime improvements.

> Hoping you find this useful,

Very. Once again, this kind of data and analysis is awesome. =D

> Kristof
> On Tue, Feb 14, 2017 at 1:06 PM Kristof Beyls via llvm-dev <
> llvm-dev at lists.llvm.org> wrote:
> I've run the patch on https://reviews.llvm.org/D28368 on the test-suite
> and other benchmarks, for AArch64 -O3 -fomit-frame-pointer, both for
> Cortex-A53 and Cortex-A57.
> The geomean over the few hundred programs in there is roughly the same for
> Cortex-A53 and Cortex-A57: a bit over 1% improvement in execution speed for
> a bit over 5% increase in code size.
> Obviously I wouldn't want this for optimization levels where code size is
> of any concern, like -Os or -Oz, but don't have a problem with this going
> in for other optimization levels where this isn't a concern.
> Thanks,
> Kristof
-------------- next part --------------
[Attachments scrubbed by the archive: an HTML copy of this message and
three charts: unroll_codesize_absolute_vs_relative.png,
unroll_codesize_vs_performance.png, unroll_codesize_vs_performance_zoom.png]
