[llvm-dev] Existing studies on the benefits of pointer analysis

Daniel Berlin via llvm-dev llvm-dev at lists.llvm.org
Mon Mar 21 12:03:26 PDT 2016


On Mon, Mar 21, 2016 at 11:34 AM, Jia Chen <jchen at cs.utexas.edu> wrote:

>
>
>> It is merely a demand-driven way of implementing existing analyses, by
>> extending those algorithms to track additional "pointed-to-by" information.
>> Laziness may help with the running time of the cfl analysis when only
>> partial points-to info is needed, but if the client wants to do a
>> whole-program analysis and requires whole-program points-to info (which is
>> usually true for optimizing compilers, since they will eventually examine
>> and touch every piece of the code given to them), shouldn't cfl-aa be no
>> different from a traditional whatever-sensitive pointer analysis?
>>
>
> CFL, at least when I ran the numbers, was faster at all-pairs queries than
> existing analyses.
>
>
> There could be many reasons for it, e.g. better implementations.
>

FWIW: the implementations I compared against are completely state of the
art and very well engineered (i.e., not research crap :P).


> Again, my point is that cfl-aa is more of an implementation strategy than
> a fundamentally superior approach.
>

The first part is true, but the second part depends on your definition of
"superior approach".

You can solve Andersen's and Steensgaard's and everything else using standard
dataflow solvers, and that's an implementation strategy, but it will be
really slow.
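
For concreteness, here is a toy sketch (C++, not LLVM code, with all names
made up for illustration) of the kind of naive worklist solver for
Andersen-style inclusion constraints that this strategy boils down to. A
production solver adds cycle elimination, offline variable substitution,
sparse bitmaps, and so on, which is exactly where the engineering effort and
the speed come from.

// Naive Andersen-style inclusion-constraint solver, worklist-driven.
// Illustrative sketch only: no cycle elimination, no field/context sensitivity.
#include <cstdio>
#include <map>
#include <set>
#include <vector>

using Var = int;

struct Constraints {
  std::vector<std::pair<Var, Var>> addrOf; // p = &a  =>  a in pts(p)
  std::vector<std::pair<Var, Var>> copy;   // p = q   =>  pts(q) <= pts(p)
  std::vector<std::pair<Var, Var>> load;   // p = *q  =>  pts(v) <= pts(p) for v in pts(q)
  std::vector<std::pair<Var, Var>> store;  // *p = q  =>  pts(q) <= pts(v) for v in pts(p)
};

std::map<Var, std::set<Var>> solve(const Constraints &C) {
  std::map<Var, std::set<Var>> pts;
  std::map<Var, std::set<Var>> copyEdges; // src -> dsts, grown by the load/store rules
  std::set<Var> worklist;

  for (auto &c : C.addrOf) { pts[c.first].insert(c.second); worklist.insert(c.first); }
  for (auto &c : C.copy)   { copyEdges[c.second].insert(c.first); worklist.insert(c.second); }

  // Copy pts(from) into pts(to); requeue 'to' if anything changed.
  auto propagate = [&](Var from, Var to) {
    size_t before = pts[to].size();
    pts[to].insert(pts[from].begin(), pts[from].end());
    if (pts[to].size() != before) worklist.insert(to);
  };

  while (!worklist.empty()) {
    Var n = *worklist.begin();
    worklist.erase(worklist.begin());

    // Load rule: p = *q with q == n adds an edge v -> p for every v in pts(n).
    for (auto &c : C.load)
      if (c.second == n)
        for (Var v : pts[n])
          if (copyEdges[v].insert(c.first).second) propagate(v, c.first);

    // Store rule: *p = q with p == n adds an edge q -> v for every v in pts(n).
    for (auto &c : C.store)
      if (c.first == n)
        for (Var v : pts[n])
          if (copyEdges[c.second].insert(v).second) propagate(c.second, v);

    // Propagate pts(n) along all existing copy edges out of n.
    for (Var dst : copyEdges[n]) propagate(n, dst);
  }
  return pts;
}

int main() {
  Constraints C;
  Var a = 0, b = 1, x = 2; // models: a = &x; b = a; *b = a;
  C.addrOf.push_back({a, x});
  C.copy.push_back({b, a});
  C.store.push_back({b, a});
  auto pts = solve(C);
  for (auto &p : pts) {
    std::printf("pts(%d) = {", p.first);
    for (Var t : p.second) std::printf(" %d", t);
    std::printf(" }\n");
  }
  return 0;
}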

Part of the tradeoff is how fast something runs, and approaches that are
orders of magnitude faster often change the calculus of what people do. For
example, before Hardekopf's work, Andersen's was considered too slow to be
practical in a real compiler.

Now, GCC does it by default.

So I would call that a superior approach :)

So saying that CFL-AA offers nothing superior in terms of approach, IMHO,
misunderstands the nature of the problem. If your goal is to get precision
at all costs, then yes, it's not superior. If your goal is to get something
into a production compiler that is understandable and maintainable, and can
turn field and context sensitivity on and off easily, etc., then it may be
a superior approach.


> I'm talking about it infrastructure-wise: nothing in LLVM can take advantage
> because the APIs don't exist.
>
>> Flow sensitivity is helpful when the optimization pass itself is
>> flow-sensitive (e.g. adce, gvn),
>>
>
> No API exists that they could use right now for this, and you'd have to
> change things to understand that answers are not valid over the entire
> function.
>
>
> I see what you are saying now. Sometimes flow/ctx-insensitive alias
> queries can benefit from a flow/ctx-sensitive analysis, yet my intuition is
> that such cases are likely to be rare.
>

Yes.
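
To make the earlier API point concrete: today a pass hands AAResults two
memory locations and gets one answer, which it is free to reuse anywhere in
the function. A flow-sensitive result would need something like the
point-qualified query sketched below; that variant is purely hypothetical and
does not exist in LLVM.

// Sketch only. mayAliasToday uses the existing AAResults interface; the
// point-qualified variant in the trailing comment is hypothetical.
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

static bool mayAliasToday(AAResults &AA, LoadInst *L, StoreInst *S) {
  // Existing, flow-insensitive API: one answer per pair of locations,
  // implicitly valid over the whole function as far as clients care.
  return !AA.isNoAlias(MemoryLocation::get(L), MemoryLocation::get(S));
}

// Hypothetical flow-sensitive interface (does NOT exist in LLVM): the extra
// Instruction argument names the program point at which the answer holds,
// and clients such as adce/gvn would have to learn not to reuse it elsewhere.
//
//   AliasResult aliasAt(const MemoryLocation &LocA, const MemoryLocation &LocB,
//                       const Instruction *AtPoint);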


> I could go ahead and modify those passes myself to carry out the study, but
> that option probably won't be too interesting to the community.
>

Right, because then you aren't testing LLVM, you are testing LLVM with
better infrastructure :)


>
> Thank you very much for pointing that out to me.
>

Happy to ;)


>
>
> --
> Best Regards,
>
> --
> Jia Chen
>