[llvm-dev] (RFC) Adjusting default loop fully unroll threshold
Kristof Beyls via llvm-dev
llvm-dev at lists.llvm.org
Fri Feb 17 06:42:51 PST 2017
On 17 Feb 2017, at 00:45, Chandler Carruth <chandlerc at gmail.com> wrote:
First off, I just want to say wow and thank you. This kind of data is amazing. =D
Thanks :)
For fourinarow, there seemed to be a lot more spill/fill code, so the slowdown is probably due to suboptimal register allocation.
This is something we should probably look at. If you have the output lying around, maybe file a PR about it?
I'll take another, closer look to pin down the root cause, and file a PR if it looks like a good case for improving the register allocator.
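(As a purely hypothetical illustration of the pattern involved, not taken from the fourinarow sources: a small constant-trip-count loop that full unrolling turns into a long straight-line body.)

    // Hypothetical illustration (not from fourinarow): a loop with a
    // compile-time-constant trip count is a candidate for full unrolling.
    // In the unrolled straight-line body the scheduler can hoist loads
    // well ahead of their uses, lengthening live ranges; once more values
    // are live at once than there are physical registers, the allocator
    // has to emit spill/fill code.
    long dot(const long *a, const long *b) {
      long s = 0;
      for (int i = 0; i < 32; ++i)   // trip count known at compile time
        s += a[i] * b[i];
      return s;
    }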
The third chart below simply zooms in on the chart above, to the -5% to +5% performance improvement range:
<unroll_codesize_vs_performance_zoom.png>
As for whether to enable the increased unroll threshold only at O3 or also at O2: I don't have a strong opinion based on the above data.
FWIW, this data seems to clearly indicate that we don't get performance wins with any consistency when the code size goes up (and thus where the change has impact). As a consequence, I pretty strongly suspect that this should be used *just* at O3, at least for now.
I see two further directions for Dehao that make sense here (at least to me):
1) I suspect we should investigate *why* the size increases are happening without helping speed. I can imagine reasons why this would happen (cold loops getting unrolled, for example), but especially in light of the oddities you point out above, I suspect more unrolling may be uncovering other problems, and if we fix those other problems the shape of things will be different. We should at least address the issues you uncovered above.
2) If this turns out to be architecture specific (it seems that way at least initially, but it's hard to tell for sure with different benchmark sets), we might make AArch64 and x86 use different thresholds here. I'm skeptical about this though. I suspect we should do #1, and we'll either get a different shape, or just decide that O3 is more appropriate.
Agreed. FWIW, I haven't spotted any results that suggested to me that the unrolling threshold should be different between different architectures.
From a basic-principles perspective, I'd assume that micro-architectural features such as the size of the instruction cache should mostly define the unrolling thresholds, rather than the instruction set architecture. That implies different thresholds per subtarget, rather than per target. Anyway, let's not try to come up with different thresholds per target/subtarget without clear data.
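(If clear data ever did point that way, the natural hook for per-subtarget tuning already exists: the subtarget's TargetTransformInfo implementation can override the unrolling preferences. A minimal sketch follows; MySubtargetTTIImpl and hasLargeICache() are hypothetical names, and the exact getUnrollingPreferences signature varies between LLVM versions.)

    // Sketch only: per-subtarget unrolling thresholds via TTI.
    // MySubtargetTTIImpl and ST->hasLargeICache() are hypothetical.
    void MySubtargetTTIImpl::getUnrollingPreferences(
        Loop *L, TTI::UnrollingPreferences &UP) {
      BaseT::getUnrollingPreferences(L, UP); // start from generic defaults
      if (ST->hasLargeICache())              // hypothetical micro-arch query
        UP.Threshold = 300;                  // allow more full unrolling
    }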
Maybe the compile time impact is what should be driving that discussion the most? I'm afraid I don't have compile time numbers.
FWIW, I strongly suspect that for *this* change, compile time and code size will be pretty precisely correlated. Dehao's data certainly shows that to be true in several cases.
Ultimately, I guess this boils down to what exactly the difference is in intent between O2 and O3, which seems like a never-ending discussion...
The definitions I am working from are here:
https://github.com/llvm-project/llvm-project/blob/master/llvm/include/llvm/Passes/PassBuilder.h#L81-L90
I've highlighted the part that makes me think O3 is better here: the code size increases (and thus compile time increases) don't seem to correspond to runtime improvements.
Agreed, only enabling this for O3 seems to be most in line with the definition you pointed at.
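(To make that concrete, here is a sketch of gating the bump on the optimization level, assuming the unroll pass is parameterized on OptLevel; the constants are placeholders for illustration, not the proposed values.)

    // Sketch: raise the full-unroll threshold only at -O3. The numbers
    // are placeholders, not the values under discussion in this thread.
    static unsigned getFullUnrollThreshold(unsigned OptLevel) {
      return OptLevel > 2 ? 300 : 150; // more aggressive only at -O3
    }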
Thanks,
Kristof