[llvm-dev] Existing studies on the benefits of pointer analysis

Daniel Berlin via llvm-dev llvm-dev at lists.llvm.org
Mon Mar 21 10:17:11 PDT 2016


On Mon, Mar 21, 2016, 10:00 AM Jia Chen <jchen at cs.utexas.edu> wrote:

> Hi Daniel,
>
>
> On 03/21/2016 11:05 AM, Daniel Berlin wrote:
>
>
>
> On Tue, Mar 15, 2016 at 1:37 PM, Jia Chen via llvm-dev <
> <llvm-dev at lists.llvm.org>llvm-dev at lists.llvm.org> wrote:
>
>> Dear llvm devs,
>>
>> tl;dr: What prevents llvm from switching to a fancier pointer analysis?
>>
>
> Nothing.
>
>
>>
>> Currently, there exists a variety of general-purpose alias analyses in
>> the LLVM codebase: basic-aa, globalsmodref-aa, tbaa, scev-aa, and cfl-aa.
>> However, only the first three are actually turned on when invoking clang
>> with -O2 or -O3 (please correct me if I'm wrong about this).
>>
>
> This is correct.
> Eventually, i hope george will have time to get back to CFL-AA and turn it
> on by default.
>
>
>>
>> If one looks at the existing research literature, there are even more
>> algorithms to consider for pointer analysis. Some are field-sensitive,
>> some are field-based, some are flow-sensitive, some are context-sensitive.
>> Even the flow-insensitive ones can be inclusion-style (-andersen-aa) or
>> equality-style (-steens-aa and -ds-aa). Those algorithms are often backed
>> by rich theoretical frameworks as well as preliminary evaluations that
>> demonstrate their superior precision and/or performance.
>>
>
> CFL-AA is a middle ground between steens and anders, and can easily be made
> field- and context-sensitive, etc.
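To make the inclusion- vs. equality-style distinction above concrete, here is a hedged sketch (a toy constraint solver with invented names, not LLVM's implementation) of how Andersen-style subset constraints and Steensgaard-style unification diverge on the same three statements:

```python
# Constraints generated by:  p = &a;  q = &b;  p = q;
# ("addr", v, o) means v = &o; ("copy", v, w) means v = w.

def andersen(constraints):
    """Inclusion-style: 'p = q' makes pts(q) a *subset* of pts(p)."""
    pts = {}
    changed = True
    while changed:  # iterate to a fixed point
        changed = False
        for kind, lhs, rhs in constraints:
            s = pts.setdefault(lhs, set())
            new = {rhs} if kind == "addr" else pts.get(rhs, set())
            if not new <= s:
                s |= new
                changed = True
    return pts

def steensgaard(constraints):
    """Equality-style: 'p = q' *unifies* pts(p) and pts(q) into one set."""
    pts = {}
    for kind, lhs, rhs in constraints:
        if kind == "addr":
            pts.setdefault(lhs, set()).add(rhs)
        else:  # merge both sides into a single shared points-to set
            merged = pts.setdefault(lhs, set()) | pts.setdefault(rhs, set())
            pts[lhs] = pts[rhs] = merged
    return pts

cs = [("addr", "p", "a"), ("addr", "q", "b"), ("copy", "p", "q")]
print(andersen(cs))     # p -> {a, b}, q -> {b}
print(steensgaard(cs))  # p -> {a, b}, q -> {a, b}  (spurious q -> a)
```

Real implementations use union-find for the equality-style merge and worklists plus cycle elimination for the inclusion-style solver; the point of the sketch is only that unification loses the asymmetry of `p = q`, which is exactly where Steensgaard trades precision for near-linear time.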
>
>
>>
>> Given such an abundance of pointer-analysis choices that seem to be much
>> better in the research world, why do real-world compiler infrastructures
>> like llvm still rely on those three simple (and ad hoc) ones to perform IR
>> optimization?
>>
>
> Time and energy.
>
>
>> Based on my understanding (and again please correct me if I am wrong):
>>
>> (1) The minor reason: those "better" algorithms are very hard to
>> implement in a robust way and nobody seems to be interested in trying to
>> write and maintain them.
>>
>
> This is false.  Heck, at the time i implemented it in GCC, field-sensitive
> andersen's analysis was unknown in production compilers.  That's why i'm
> thanked in all the papers - i did the engineering work to make it fast and
> reliable.
>
>
>
>> (2) The major reason: it's not clear whether those "better" algorithms
>> are actually better for llvm. More precise pointer analyses tend to slow
>> down compile time a lot while contributing too little to the optimization
>> passes that use them. The benefit one gets from a more precise analysis may
>> not justify the compile-time or the maintenance cost.
>>
>
>
> CFL-AA is probably the right trade-off here. You can stop at any time and
> have correct answers, you can be as lazy as you like.
> etc.
>
>
> Regarding CFL-AA: in my understanding, cfl-aa does not introduce a new
> precision tradeoff.
>

In my experience, you can make it do what you want much more easily than with
existing frameworks.

> It is merely a demand-driven way of implementing existing analyses, by
> extending those algorithms to track additional "pointed-to-by" information.
> Laziness may help with the running time of the cfl analysis when only
> partial points-to info is needed, but if the client wants to do a
> whole-program analysis and requires whole-program points-to info (which is
> usually true for optimizing compilers, since they will eventually examine
> and touch every piece of code given to them), wouldn't cfl-aa then be no
> different from a traditional whatever-sensitive pointer analysis?
>

CFL, at least when I ran the numbers, was faster at all-pairs queries than
existing analyses.
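As a hedged illustration of the laziness point (a toy demand-driven solver over an invented copy/address-of graph encoding, not CFL-AA itself, and assuming an acyclic copy graph): a client asking about one pointer pays only for that pointer's slice of the graph, while memoization means an all-pairs client still pays for each node just once.

```python
def make_query(copy_edges, addr_edges):
    # copy_edges[v] = list of w for assignments "v = w"
    # addr_edges[v] = list of objects o for "v = &o"
    # Assumes an acyclic copy graph; real solvers also handle cycles.
    cache = {}
    def pts(v):
        if v not in cache:
            result = set(addr_edges.get(v, ()))
            for w in copy_edges.get(v, ()):   # "v = w" pulls in pts(w)
                result |= pts(w)
            cache[v] = result
        return cache[v]
    return pts

# p = &a; q = &b; p = q  -- a query on p explores only p's slice of the graph
pts = make_query({"p": ["q"]}, {"p": ["a"], "q": ["b"]})
print(sorted(pts("p")))   # ['a', 'b']
print(sorted(pts("q")))   # ['b']
```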

>
>
>
> The reality is i think you overlook the realistic answer:
>
> 3. Nobody has had time or energy to fix up CFL-AA or SCEV-AA. They spend
> their time on lower-hanging fruit until that lower-hanging fruit is gone.
>
> I.e., for the moment, CFL-AA and SCEV-AA and ... are not the thing holding
> llvm back.
>
> I'd love to hear some examples of "lower-hanging fruit" in LLVM,
> especially in the area of middle-end analyses and optimizations. I thought
> LLVM was mature enough that any obvious opportunities for improvement in
> analyses and optimizations had already been taken, no?
>

No.
For example, gvn and pre are fairly simple implementations that miss
obvious optimizations.

>
>
>
>>
>> So my question here is: what kind(s) of precision really justify the cost
>> and what kinds do not?
>>
>
> Depends entirely on your applications.
>
>
>> Has anybody done any study in the past to evaluate what kinds of features
>> in pointer analyses will benefit what kinds of optimization passes?
>>
> Yes.
> Chris did many years ago, and i've done one more recently.
>
>
> Great! Are they published somewhere? Can the data be shared somehow?
>

No, and sadly, no

>
>
>> Could there potentially be more improvement on pointer analysis precision
>> without adding too much compile-time/maintenance cost?
>>
> Yes.
>
> Have the precision/performance tradeoffs been fully explored before?
>>
> Yes
>
>>
>> Any pointers will be much appreciated. No pun intended :)
>>
>> PS1: To be more concrete, what I am looking for is not some black-box
>> information like "we switched from basic-aa to cfl-aa and observed 1%
>> improvement at runtime". I believe white-box studies such as "the licm pass
>> failed to hoist x instructions because -tbaa is not flow sensitive" are
>> much more interesting for understanding the problem here.
>>
>
> White-box studies are very application specific, and often very pass
> specific.
>
>
> And I understand that. My goal is to look for commonalities among passes
> and applications.
>

This generally just discovers things we already know, which is that certain
passes have deficiencies.

However, if the existing studies you mentioned above are extensive and
> conclusive enough to show that the aas we have today have fully exploited
> almost all such commonalities, then it's probably a better idea for me to
> find something else to work on. But again, I'd like to see the data first.
>


>
>
>>
>> PS2: If no such evaluation has been done in the past, I'd be happy to do
>> it myself and report back my findings if anyone here is interested.
>>
> I don't think any of the infrastructure is set up to make that valuable.
>
> Nothing takes advantage of context sensitivity, flow sensitivity, etc.
>
> I agree that nothing takes advantage of context sensitivity. But I would
> push back regarding flow sensitivity, field sensitivity, heap models, and
> external function models
>

I'm talking about infrastructure wise, nothing in llvm can take advantage
because the APIs don't exist.

> Flow sensitivity is helpful when the optimization pass itself is
> flow-sensitive (e.g. adce, gvn),
>

No API exists that they could use right now for this, and you'd have to
change things so they understand that answers are not valid over the entire
function.

> and field sensitivity is helpful when a struct contains multiple pointers.
> Heap sensitivity is basically what motivates Chris Lattner's PLDI'07 paper,
> and external function models are helpful since without them the analysis
> has to be extremely conservative and conclude that everything external
> functions touch may alias each other.
>

I don't disagree; this is the one to two years of work I said would be
needed.
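For the field-sensitivity point above, a hedged toy sketch (invented encoding, not an LLVM API): given `struct S { int *f0; int *f1; }` with stores `s.f0 = &a; s.f1 = &b;`, a field-insensitive analysis collapses both fields into one abstract location and so reports a spurious alias between `*s.f0` and `b`:

```python
def points_to(stores, field_sensitive):
    # stores: list of (field, target) pairs, e.g. ("s.f0", "a") for s.f0 = &a.
    # A field-insensitive analysis collapses every field of s into "s.*".
    pts = {}
    for field, target in stores:
        key = field if field_sensitive else "s.*"
        pts.setdefault(key, set()).add(target)
    return pts

stores = [("s.f0", "a"), ("s.f1", "b")]
print(points_to(stores, field_sensitive=True))
# {'s.f0': {'a'}, 's.f1': {'b'}}  -- the two fields are kept apart
print(points_to(stores, field_sensitive=False))
# {'s.*': {'a', 'b'}}  -- *s.f0 and *s.f1 spuriously may-alias
```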

>
>
>
> --
> Best Regards,
>
> --
> Jia Chen
>