[llvm] r263231 - [PM] The order of evaluation of these analyses is actually significant,

Chandler Carruth via llvm-commits llvm-commits at lists.llvm.org
Fri Mar 11 06:07:34 PST 2016


On Fri, Mar 11, 2016 at 2:42 PM Hal Finkel via llvm-commits <
llvm-commits at lists.llvm.org> wrote:

> ----- Original Message -----
> > From: "Chandler Carruth via llvm-commits" <llvm-commits at lists.llvm.org>
> > To: llvm-commits at lists.llvm.org
> > Sent: Friday, March 11, 2016 7:26:48 AM
> > Subject: [llvm] r263231 - [PM] The order of evaluation of these analyses
> is actually significant,
> >
> > Author: chandlerc
> > Date: Fri Mar 11 07:26:47 2016
> > New Revision: 263231
> >
> > URL: http://llvm.org/viewvc/llvm-project?rev=263231&view=rev
> > Log:
> > [PM] The order of evaluation of these analyses is actually
> > significant,
> > much to my horror, so use variables to fix it in place.
> >
> > This terrifies me. Both basic-aa and memdep will provide more precise
> > information when the domtree and/or the loop info is available.
> > Because
> > of this, if your pass (like GVN) requires domtree, and then queries
> > memdep or basic-aa, it will get more precise results. If it does this
> > in
> > the other order, it gets less precise results.
> >
> > All of the ideas I have for fixing this are, essentially, terrible.
>
> I assume that we could delay the calls to getAnalysisIfAvailable until
> query time, instead of caching the results in runOnFunction. What are the
> other options?
>

Requiring these analyses unilaterally.

I've managed to do that for domtree, no problem.

The only thing left is LoopInfo for BasicAA, which is only used to
accelerate isPotentiallyReachable inside loop-heavy code. The options I see
are:

1) Require it. This doubles the number of times we build loop info (from 5
to 10) in the O2 pipeline.

2) Lazily query it. This has some problems, mostly that it is *sloooow*.
In the worst case it has to go through a type-erased callable and then into
the getAnalysisIfAvailable code path, which adds several more indirect
calls on each query; 2 to 4 layers of function-pointer calls would be my
guess. And despite the runtime hit, we continue to have *very*
unpredictable behavior: basic-aa becomes more powerful "magically" when
other analyses are still kicking around. This is at least only structural
with the current pass manager, but with the new pass manager, a very old
cached entry will suddenly empower basic-aa.... Yuck.

3) Build a LoopBasicAA pass that is BasicAA but with LoopInfo required,
and add it to the pipeline while we have loop info. But this means we'll do
2x the non-loop BasicAA queries.

4) Stop using LoopInfo in BasicAA entirely.

I'm actually a fan of #1, but probably can't justify it until the new PM
lands and we can at least stop re-computing the same loop info for the same
functions that aren't changing run after run.

-Chandler