[PATCH] D25963: [LoopUnroll] Implement profile-based loop peeling

Michael Kuperstein via llvm-commits llvm-commits at lists.llvm.org
Wed Oct 26 15:03:59 PDT 2016


On Wed, Oct 26, 2016 at 1:09 PM, David Li <davidxl at google.com> wrote:

> davidxl added inline comments.
>
>
> ================
> Comment at: lib/Transforms/Utils/LoopUnrollPeel.cpp:101
> +      // We no longer know anything about the branch probability.
> +      LatchBR->setMetadata(LLVMContext::MD_prof, nullptr);
> +    }
> ----------------
> mkuper wrote:
> > davidxl wrote:
> > > Why? I think we should update the branch probability here -- it
> > > depends on which iteration of the peeled clone it is. If peel count <
> > > average/estimated trip count, then each peeled iteration should be more
> > > biased towards fall through. If peel_count == est trip_count, then the
> > > last peel iteration should be biased toward exit.
> > You're right, it's not that we don't know anything - but we don't know
> > enough. I'm not sure how to attach a reasonable number to this, without
> > knowing the distribution.
> > Do you have any suggestions? The trivial option would be to assume an
> > extremely narrow distribution (the loop always exits after exactly K
> > iterations), but that would mean having an extreme bias for all of the
> > branches, and I'm not sure that's wise.
> A reasonable way to annotate the branch is like this.
> Say the original trip count of the loop is N, then for the m-th (from 0 to
> N-1) peeled iteration, the fall-through probability is a decreasing
> function:
>
> (N - m) / N
>
>
I'm not entirely sure the math works out. Because N is the average trip
count, the newly assigned weights ought to have the property that the total
probability of reaching the loop header is 0.5, and I don't think that
happens here.
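
To make that concrete, here is a quick standalone check (illustrative only,
not part of the patch): with a fall-through probability of (N - m) / N for
the m-th peeled iteration, the chance of surviving all N peeled iterations
and reaching the loop header is N! / N^N, which is 0.5 for N = 2 but drops
off quickly as N grows.

  // Probability of reaching the loop header when the m-th peeled iteration
  // falls through with probability (N - m) / N.
  #include <cstdio>

  int main() {
    for (unsigned N = 2; N <= 8; ++N) {
      double ReachHeader = 1.0;
      for (unsigned m = 0; m < N; ++m)
        ReachHeader *= double(N - m) / N; // fall-through at peeled iteration m
      std::printf("N = %u: P(reach loop header) = %.3f\n", N, ReachHeader);
    }
    return 0;
  }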

This also doesn't solve the problem of what probability to assign to the
loop backedge. If K is the random variable signifying the number of
iterations, I think the exit probability should be something like
1/(E[K | K > E[K]] - E[K]), with the backedge probability being its
complement. That is, it depends on the expected number of iterations given
that we have more iterations than average, which we don't know and can't
even bound.
E.g. imagine a loop that runs for 1 iteration a million times, and for a
million iterations once. The average number of iterations is about 2, but
the probability of taking the backedge, once you've reached the loop, is
extremely high.
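
Plugging in the numbers for that example (again, just a back-of-the-envelope
sketch, not part of the patch):

  // A loop that runs a million times with trip count 1, and once with a
  // trip count of a million.
  #include <cstdio>

  int main() {
    const double ShortRuns = 1e6, ShortTrip = 1, LongRuns = 1, LongTrip = 1e6;
    double AvgTrip =
        (ShortRuns * ShortTrip + LongRuns * LongTrip) / (ShortRuns + LongRuns);
    // Peel AvgTrip (~2) iterations: only the long run ever reaches the loop,
    // and once there it takes the backedge on all but its last latch
    // execution.
    double RemainingIters = LongTrip - AvgTrip;
    double BackedgeProb = (RemainingIters - 1) / RemainingIters;
    std::printf("average trip count = %.6f\n", AvgTrip);
    std::printf("backedge probability inside the loop ~ %.8f\n", BackedgeProb);
    return 0;
  }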

We could assume something like a uniform distribution between, say, 0 and 2
* N iterations (in which case the fall-through probability is, I think,
(2 * N - m - 1) / (2 * N), and the backedge probability is something like
1 - 1/(1.5 * N)), but I don't know if that's realistic either.
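
For what it's worth, here is one way the numbers come out under that
assumption, treating each latch branch probability as conditional on having
reached that iteration and taking the trip count K uniform over {1, ..., 2N}
(small sketch only, N = 4 picked arbitrarily):

  // Conditional probability of continuing past each peeled iteration, and
  // the expected number of iterations left once the loop is reached (K > N).
  #include <cstdio>

  int main() {
    const unsigned N = 4, MaxTrip = 2 * N;
    auto PGreater = [&](unsigned m) { // P(K > m)
      return m >= MaxTrip ? 0.0 : double(MaxTrip - m) / MaxTrip;
    };
    for (unsigned m = 0; m < N; ++m)
      std::printf("peeled iteration %u: P(continue) = %.3f\n", m,
                  PGreater(m + 1) / PGreater(m));
    double Sum = 0, Count = 0;
    for (unsigned K = N + 1; K <= MaxTrip; ++K) { Sum += K; ++Count; }
    double Remaining = Sum / Count - N; // E[K | K > N] - N
    std::printf("expected remaining iterations = %.2f -> backedge prob ~ %.3f\n",
                Remaining, 1.0 - 1.0 / Remaining);
    return 0;
  }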


> Add some fuzzing factor to avoid creating extremely biased branch prob:
>
> for instance (N-m)*3/(4*N)
>
>
> https://reviews.llvm.org/D25963
>
>
>
>