[LLVMdev] Loop unrolling opportunity in SPEC's libquantum with profile info

Andrew Trick atrick at apple.com
Tue Jan 21 13:31:48 PST 2014


On Jan 21, 2014, at 6:18 AM, Diego Novillo <dnovillo at google.com> wrote:

> On 16/01/2014, 23:47 , Andrew Trick wrote:
>> 
>> On Jan 15, 2014, at 4:13 PM, Diego Novillo <dnovillo at google.com> wrote:
>> 
>>> Chandler also pointed me at the vectorizer, which has its own
>>> unroller. However, the vectorizer only unrolls enough to serve the
> >>> target; it's not as general as the runtime-triggered unroller. From
> >>> what I've seen, it will get a maximum unroll factor of 2 on x86 (4 on
> >>> AVX targets). Additionally, the vectorizer only unrolls to aid
>>> reduction variables. When I forced the vectorizer to unroll these
>>> loops, the performance effects were nil.
>> 
>> Vectorization and partial unrolling (aka runtime unrolling) for ILP should be the same pass. The profitability analysis required in each case is very closely related, and you never want to do one before or after the other. The analysis makes sense even for targets without vector units. The “vector unroller” has an extra restriction (unlike the LoopUnroll pass) in that it must be able to interleave operations across iterations. This is usually a good thing to check before unrolling, but the compiler’s dependence analysis may be too conservative in some cases.
> 
> In addition to tuning the cost model, I found that the vectorizer does not even get that far into its analysis for some of the loops I need unrolled. In this particular case, there are three loops that need to be unrolled to get the performance I'm looking for. Of the three, only one gets far enough in the analysis to decide whether to unroll it.
> 
> But I found a bigger issue. The loop optimizers run under the loop pass manager (I am still trying to wrap my head around that; I find it very odd and have not convinced myself why there is a separate manager for loops). Inside the loop pass manager, I am not allowed to call the block frequency analysis. Any attempt I make at scheduling the BF analysis sends the compiler into an infinite loop during initialization.
> 
> Chandler suggested a way around the problem. I'll work on that first.

It is very difficult to deal with the LoopPassManager. The concept doesn’t fit with typical loop passes, which may need to rerun function-level analyses and can affect code outside the current loop. One nice thing about it is that a pass can push a new loop onto the queue and rerun all managed loop passes just on that loop. In the past I’ve seen efforts to consolidate multiple loop passes into the same LoopPassManager, but I don’t really understand the benefit other than reducing the one-time overhead of instantiating multiple pass managers and some minor IR access locality improvements. Maybe Chandler’s work will help with the overhead of instantiating pass managers.

Anyway, I see no reason that the vectorizer shouldn’t run in its own loop pass manager.
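
To make the interleaving point concrete: for a reduction, unrolling only buys ILP if the accumulator is split, which means reassociating the sum. A rough before/after sketch (names made up; note the FP reassociation is exactly the kind of thing the legality checks have to allow):

  // Before: a single accumulator chains every add, so the loop is
  // latency-bound no matter how wide the machine is.
  float sum_scalar(const float *a, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
      sum += a[i];
    return sum;
  }

  // After unrolling x2 with interleaving: two independent accumulators
  // can issue in parallel and are combined once after the loop.
  // Splitting the accumulator reassociates the FP sum, so it is only
  // legal under fast-math (or for integer reductions).
  float sum_unrolled2(const float *a, int n) {
    float sum0 = 0.0f, sum1 = 0.0f;
    int i = 0;
    for (; i + 1 < n; i += 2) {
      sum0 += a[i];
      sum1 += a[i + 1];
    }
    if (i < n)                 // epilogue for odd trip counts
      sum0 += a[i];
    return sum0 + sum1;
  }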

>> Currently, the cost model is conservative w.r.t. unrolling because we don't want to increase code size. But minimally, we should unroll until we can saturate the resources/ports; e.g., a loop with a single load should be unrolled x2 so we can do two loads per cycle. If you can come up with improved heuristics without generally impacting code size, that’s great.
> Oh, code size will always go up. That's pretty much unavoidable when you decide to unroll. The trick here is to unroll only select loops. The profiler does not tell you the trip count. What it will do is make the loop header excessively heavy w.r.t. its parent in the block frequency analysis. In this particular case, you get something like:
> 
> ---- Block Freqs ----
>  entry = 1.0
>   entry -> if.else = 0.375
>   entry -> if.then = 0.625
>  if.then = 0.625
>   if.then -> if.end3 = 0.625
>  if.else = 0.375
>   if.else -> for.cond.preheader = 0.37487
>   if.else -> if.end3 = 0.00006
>  for.cond.preheader = 0.37487
>   for.cond.preheader -> for.body.lr.ph = 0.37463
>   for.cond.preheader -> for.end = 0.00018
>  for.body.lr.ph = 0.37463
>   for.body.lr.ph -> for.body = 0.37463
>  for.body = 682.0
>   for.body -> for.body = 681.65466
>   for.body -> for.end = 0.34527
>  for.end = 0.34545
>   for.end -> if.end3 = 0.34545
>  if.end3 = 0.9705
> 
> Notice how the head of the loop has weight 682, which is 682x the weight of its parent (the function entry, since this is an outermost loop).

Yep. That’s exactly what I meant by the profiled trip count. We don’t get a known count or even a distribution, just an average. And we generally only care if the loop has a *highish* or *lowish* trip count.
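
Roughly, I'd expect that average to fall straight out of the block frequencies. A sketch (untested, the name is illustrative) of the kind of check a profile-aware unroller could make; with the numbers above it comes out to roughly 682.0 / 0.37463, i.e. about 1800 iterations per entry into the loop:

  #include "llvm/Analysis/BlockFrequencyInfo.h"
  #include "llvm/Analysis/LoopInfo.h"
  using namespace llvm;

  // Estimate the average trip count of L as the ratio of the header's
  // block frequency to the preheader's. A large ratio is all the
  // profile can tell us; there is no distribution to recover.
  static uint64_t estimateAverageTripCount(const Loop *L,
                                           BlockFrequencyInfo &BFI) {
    const BasicBlock *Preheader = L->getLoopPreheader();
    if (!Preheader)
      return 0;  // no unique preheader; give up
    uint64_t HeaderFreq = BFI.getBlockFreq(L->getHeader()).getFrequency();
    uint64_t PreheaderFreq = BFI.getBlockFreq(Preheader).getFrequency();
    if (PreheaderFreq == 0)
      return 0;
    return HeaderFreq / PreheaderFreq;
  }

Whether you divide by the preheader or by the function entry is a detail; either way, a large ratio is the only signal the profile gives us to unroll on.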

> With static heuristics, this ratio is significantly lower (about 3x).

I believe you, but I thought we would use 10x as a default estimated trip count.

> When we see this, we can decide to unroll the loop.

I’m sure we could be more aggressive than we are currently at O3. If the vectorizer unroller’s conditions are too strict, we could consider calling the standard partial unroller only on the loops that didn’t get vectorized.
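
For what it’s worth, the single-load case from my earlier mail looks like this after an x2 unroll (just a sketch; the odd remainder iteration would be handled by the usual runtime-unroll epilogue):

  // The original body has a single load, so it can use only one of the
  // two load ports per cycle. After the x2 unroll the two loads
  // (and the two stores) are independent and can issue together.
  void copy(int *dst, const int *src, long n) {
    long i = 0;
    for (; i + 1 < n; i += 2) {
      dst[i] = src[i];
      dst[i + 1] = src[i + 1];
    }
    if (i < n)                 // remainder when n is odd
      dst[i] = src[i];
  }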

-Andy

> 
> Diego.
