[llvm-dev] Existing studies on the benefits of pointer analysis

Jia Chen via llvm-dev llvm-dev at lists.llvm.org
Mon Mar 21 10:00:28 PDT 2016


Hi Daniel,

On 03/21/2016 11:05 AM, Daniel Berlin wrote:
>
>
> On Tue, Mar 15, 2016 at 1:37 PM, Jia Chen via llvm-dev 
> <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
>
>     Dear llvm devs,
>
>     tl;dr: What prevents llvm from switching to a fancier pointer
>     analysis?
>
>
> Nothing.
>
>
>     Currently, there exists a variety of general-purpose alias
>     analyses in the LLVM codebase: basic-aa, globalsmodref-aa, tbaa,
>     scev-aa, and cfl-aa. However, only the first three are actually
>     turned on when invoking clang with -O2 or -O3 (please correct me
>     if I'm wrong about this).
>
>
> This is correct.
> Eventually, i hope george will have time to get back to CFL-AA and 
> turn it on by default.
>
>
>     If one looks at the existing research literature, there are even
>     more algorithms to consider for doing pointer analysis. Some are
>     field-sensitive, some are field-based, some are flow-sensitive,
>     some are context-sensitive. Even the flow-insensitive ones come in
>     both inclusion-style (-andersen-aa) and equality-style (-steens-aa
>     and -ds-aa) variants. These algorithms are often backed by rich
>     theoretical frameworks as well as preliminary evaluations that
>     demonstrate their superior precision and/or performance.
>
>
> CFL-AA is a middle ground between steens and anders, can be easily 
> made field and context sensitive, etc.
>
>
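To make the inclusion-style vs. equality-style distinction concrete, here is a
toy sketch of my own (illustrative only, not LLVM's or GCC's implementation) of
both analyses on the program `p = &a; q = &b; r = p; r = q;`:

```python
# Toy sketch of the precision gap between inclusion-style (Andersen)
# and equality-style (Steensgaard) pointer analyses. All names here
# are hypothetical; this is not LLVM code.

ADDR = [("p", "a"), ("q", "b")]   # p = &a; q = &b
COPY = [("r", "p"), ("r", "q")]   # r = p;  r = q

def andersen(addr, copy):
    """Subset constraints: a copy 'dst = src' means pts(dst) >= pts(src)."""
    pts = {}
    for p, a in addr:
        pts.setdefault(p, set()).add(a)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for dst, src in copy:
            new = pts.get(src, set()) - pts.setdefault(dst, set())
            if new:
                pts[dst] |= new
                changed = True
    return pts

class Steensgaard:
    """Equality constraints: a copy unifies the two targets (union-find)."""
    def __init__(self):
        self.parent = {}
        self.pointsto = {}              # class rep -> a node in its target class

    def find(self, x):
        self.parent.setdefault(x, x)
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        self.parent[rx] = ry
        tx = self.pointsto.pop(rx, None)
        ty = self.pointsto.get(ry)
        if tx is not None and ty is not None:
            self.union(tx, ty)          # merging classes merges their targets
        elif tx is not None:
            self.pointsto[ry] = tx

    def target(self, p):
        r = self.find(p)
        if r not in self.pointsto:
            self.pointsto[r] = "*" + p  # fresh target class
        return self.find(self.pointsto[r])

    def solve(self, addr, copy):
        for p, a in addr:
            self.union(self.target(p), a)
        for dst, src in copy:
            self.union(self.target(dst), self.target(src))

    def pts(self, p, variables):
        t = self.target(p)
        return {v for v in variables if self.find(v) == t}

pts_a = andersen(ADDR, COPY)
s = Steensgaard()
s.solve(ADDR, COPY)
print("andersen p:", sorted(pts_a["p"]))                 # ['a']
print("steensgaard p:", sorted(s.pts("p", ["a", "b"])))  # ['a', 'b']
```

Andersen keeps pts(p) = {a} and pts(q) = {b} (so p and q provably do not
alias), while Steensgaard's unification drags both up to {a, b} in exchange for
near-linear running time; CFL-AA sits between the two.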
>     Given such an abundance of pointer analyses that seem to be much
>     better in the research land, why do real-world compiler
>     infrastructures like llvm still rely on those three simple (and
>     ad-hoc) ones to perform IR optimization?
>
>
> Time and energy.
>
>     Based on my understanding (and again please correct me if I am wrong):
>
>     (1) The minor reason: those "better" algorithms are very hard to
>     implement in a robust way and nobody seems to be interested in
>     trying to write and maintain them.
>
>
> This is false.  Heck, at the time i implemented it in GCC, 
> field-sensitive andersen's analysis was unknown in production 
> compilers.  That's why i'm thanked in all the papers - i did the 
> engineering work to make it fast and reliable.
>
>     (2) The major reason: it's not clear whether those "better"
>     algorithms are actually better for llvm. More precise pointer
>     analyses tend to slow down compile time a lot while contributing
>     too little to the optimization passes that use them. The benefit
>     one gets from a more precise analysis may not justify the
>     compile-time or the maintenance cost.
>
>
>
> CFL-AA is probably the right trade-off here. You can stop at any time 
> and have correct answers, you can be as lazy as you like.
> etc.

Regarding CFL-AA: in my understanding, cfl-aa does not introduce a new 
precision tradeoff. It is merely a demand-driven way of implementing 
existing analyses, extending those algorithms to track additional 
"pointed-to-by" information. Laziness may help the running time of the 
cfl analysis when only partial points-to info is needed, but if the 
client wants a whole-program analysis and requires whole-program 
points-to info (which is usually true for optimizing compilers, since 
they will eventually examine and touch every piece of code given to 
them), shouldn't cfl-aa be no different from a traditional 
whatever-sensitive pointer analysis?
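To illustrate what I mean by laziness paying off only for partial queries, here
is a toy demand-driven sketch (my own hypothetical assignment graph and names,
not CFL-AA's actual algorithm or representation):

```python
from collections import deque

# Hypothetical assignment graph: ADDR records p = &x, COPY records
# dst = src (mapping each destination to its sources).
ADDR = {"p": {"a"}, "q": {"b"}, "u": {"c"}, "v": {"c"}}
COPY = {"r": {"p", "q"}, "s": {"u"}, "t": {"v"}}

def demand_pts(x, visited):
    """Demand-driven points-to: walk copy edges backwards from x only."""
    pts, work = set(), deque([x])
    while work:
        n = work.popleft()
        if n in visited:
            continue
        visited.add(n)
        pts |= ADDR.get(n, set())
        work.extend(COPY.get(n, ()))
    return pts

def may_alias(x, y):
    vx, vy = set(), set()
    return bool(demand_pts(x, vx) & demand_pts(y, vy)), vx | vy

# A single query touches only the slice it needs: p, q, r stay unvisited.
alias, touched = may_alias("s", "t")
print(alias, sorted(touched))   # True ['s', 't', 'u', 'v']
```

The single query never looks at p, q, or r; but a client that queries every
pair (as an optimizing compiler effectively does) ends up visiting the whole
graph anyway, which is the point above.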

>
> The reality is i think you overlook the realistic answer:
>
> 3. Nobody has had time or energy to fix up CFL-AA or SCEV-AA. They 
> spend their time on lower-hanging fruit until that lower hanging fruit 
> is gone.
>
> IE For the moment, CFL-AA and SCEV-AA and ... are not the thing 
> holding llvm back.
>
I'd love to hear some examples of "lower-hanging fruit" in LLVM, 
especially in the area of middle-end analyses and optimizations. I 
thought LLVM was mature enough that any obvious opportunities for 
improvement in analyses and optimizations had already been taken, no?
>
>
>     So my question here is: what kind(s) of precision really justify
>     the cost and what kinds do not?
>
>
> Depends entirely on your applications.
>
>     Has anybody done any study in the past to evaluate what kinds of
>     features in pointer analyses will benefit what kinds of
>     optimization passes?
>
> Yes.
> Chris did many years ago, and i've done one more recently.

Great! Are they published somewhere? Can the data be shared somehow?
>
>     Could there potentially be more improvement on pointer analysis
>     precision without adding too much compile-time/maintenance cost?
>
> Yes.
>
>     Has the precision/performance tradeoffs got fully explored before?
>
> Yes
>
>
>     Any pointers will be much appreciated. No pun intended :)
>
>     PS1: To be more concrete, what I am looking for is not some
>     black-box information like "we switched from basic-aa to cfl-aa
>     and observed 1% improvement at runtime". I believe white-box
>     studies such as "the licm pass failed to hoist x instructions
>     because -tbaa is not flow sensitive" are much more interesting for
>     understanding the problem here.
>
>
> White-box studies are very application specific, and often very pass 
> specific.

And I understand that. My goal is to look for commonalities among passes 
and applications. However, if the existing studies you mentioned above 
are extensive and conclusive enough to show that the AAs we have today 
have already exploited almost all such commonalities, then it's probably 
better for me to find something else to work on. But again, I'd like to 
see the data first.
>
>
>     PS2: If no such evaluation exists in the past, I'd happy to do
>     that myself and report back my findings if anyone here is interested.
>
> I don't think any of the world is set up to make that valuable.
>
> Nothing takes advantage of context sensitivity, flow sensitivity, etc.

I agree that nothing takes advantage of context sensitivity. But I would 
argue otherwise for flow sensitivity, field sensitivity, heap models, 
and external-function models. Flow sensitivity is helpful when the 
optimization pass itself is flow-sensitive (e.g. adce, gvn), and field 
sensitivity is helpful when a struct contains multiple pointers. Heap 
sensitivity is basically what motivates Chris Lattner's PLDI'07 paper, 
and external-function models are helpful because without them the 
analysis has to be extremely conservative and conclude that everything 
external functions touch may alias each other.
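For example, here is a toy sketch of the field-sensitivity point (hypothetical
names and a deliberately simplified memory model, not LLVM's), for a program
like `struct S { int *f, *g; } s; s.f = &a; s.g = &b;`:

```python
# Toy sketch: a field-insensitive analysis collapses every field of 's'
# into one abstract location, so the two stores land in the same
# points-to set and s.f / s.g spuriously may-alias.

STORES = [(("s", "f"), "a"), (("s", "g"), "b")]  # s.<field> = &var

def points_to(field_sensitive):
    loc = {}
    for (base, field), var in STORES:
        key = (base, field) if field_sensitive else base
        loc.setdefault(key, set()).add(var)
    return loc

sens = points_to(True)
insens = points_to(False)
print(sens[("s", "f")])   # {'a'}        -> s.f and s.g do not alias
print(insens["s"])        # {'a', 'b'}   -> may-alias, e.g. hoisting blocked
```

With field sensitivity the two fields get disjoint points-to sets, so a pass
like licm can tell a load through s.f apart from a store through s.g; without
it, every field of the struct conservatively may-aliases every other.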

-- 
Best Regards,

Jia Chen