Daniel Berlin via llvm-dev
llvm-dev at lists.llvm.org
Thu Aug 25 09:06:06 PDT 2016
I'll take a look at the patch :)
Sounds like fun work.
As George says, improving AA significantly will almost always cause
significant performance regressions at first, in almost any compiler.
Compiler knobs and passes usually get tuned for X amount of freedom, and if
you give them 10X, they start moving things too far, vectorizing too much,
and so on.
This was definitely the case for GCC, where adding a precise
interprocedural field-sensitive analysis initially regressed performance by
a few percent on average.
I know it was also the case for XLC at IBM, etc.
Like anything else, just gotta figure out what passes are going nuts, and
rework them to have better heuristics/etc.
The end result is performance improvements, but the path takes a bit of work.
If you need a way to see whether your analysis has actually done an okay
job in the meantime, a good way to tell is to look at how many loads/stores
get eliminated or moved by various passes before and after.
If the number is significantly higher, great.
If the number is significantly lower, something has likely gone wrong :)
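One crude way to eyeball that metric (not how anyone in this thread measured it; `opt -stats` on a real build is the more direct route) is to diff load/store counts in the textual IR that `opt -S` emits before and after a pass pipeline. A minimal sketch, with a made-up IR snippet:

```python
import re

# Count load/store instructions in textual LLVM IR (.ll).  Diffing these
# counts before/after a pipeline is a rough proxy for how much memory
# traffic the passes managed to eliminate.
def count_mem_ops(ir_text):
    loads = len(re.findall(r"=\s*load\b", ir_text))
    stores = len(re.findall(r"^\s*store\b", ir_text, re.MULTILINE))
    return loads, stores

before = """
define i32 @f(i32* %p) {
entry:
  %v = load i32, i32* %p
  store i32 %v, i32* %p
  %w = load i32, i32* %p
  ret i32 %w
}
"""
after = """
define i32 @f(i32* %p) {
entry:
  %v = load i32, i32* %p
  ret i32 %v
}
"""
print(count_mem_ops(before))  # (2, 1)
print(count_mem_ops(after))   # (1, 0)
```

If the after-counts drop relative to a baseline compiler, the improved AA is paying off; if they rise, some pass is likely misbehaving with the extra freedom.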
On Thu, Aug 25, 2016 at 8:11 AM, David Callahan via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> (Adding “LLVM Dev”)
> My variant is up as https://reviews.llvm.org/D23876
> From: George Burgess IV <george.burgess.iv at gmail.com>
> Date: Wednesday, August 24, 2016 at 3:17 PM
> To: David Callahan <dcallahan at fb.com>
> Subject: Re: CFLAA
> > I see there is ongoing work with alias analysis and it appears the
> prior CFLAA has been abandoned.
> There was quite a bit of refactoring done, yeah. The original CFLAA is now
> called CFLSteens, and graph construction was moved to its own bit. We also
> have CFLAnders, which is based more heavily on the paper by Zheng and
> Rugina (e.g. no stratifiedsets magic).
> > I have a variant of it where I reworked how compression was done to be
> less conservative, reworked the interprocedural analysis to do simulated but bounded
> inlining, and added code to do on-demand testing of CFL paths on both
> compressed and full graphs.
> > Happy to share the patch with you if you are interested as well as some
> data collected
> Yes, please. Would you mind if I CC'ed llvm-dev on this thread (and a few
> people specifically, who also might find this interesting)?
> > However, I was not able to see any performance improvements in the code.
> In fact, on various benchmarks there were noticeable regressions in
> measured performance of the generated code. Have you noticed any similar
> behavior?
> I know that a number of people in the community expressed concerns
> about how other passes will perform with better AA results (e.g. If LICM
> becomes more aggressive, register pressure may increase, which may cause us
> to spill when we haven't before, etc). So, such a problem isn't
> unthinkable. :)
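The LICM concern can be modeled concretely. The sketch below is a toy, not anything from the patch: LICM may hoist a loop-body load only if no store in the loop may clobber it, and every hoisted load becomes one more value live across the whole loop body, i.e. register pressure. The load/store names and the "precise" oracle are invented for illustration.

```python
# Toy model of the LICM/register-pressure interaction: a more precise
# may_alias oracle lets more loads be hoisted out of the loop, and each
# hoisted load is live across the entire loop afterwards.
def hoistable_loads(loop_loads, loop_stores, may_alias):
    """Return the loads LICM may hoist: those no store in the loop may clobber."""
    return [ld for ld in loop_loads
            if not any(may_alias(ld, st) for st in loop_stores)]

# Conservative AA: everything may alias, so nothing gets hoisted.
conservative = lambda a, b: True
# "Precise" AA (assumed fact for this toy loop): scale is disjoint from out.
precise = lambda a, b: not ({a, b} == {"scale", "out"})

loads, stores = ["scale", "out"], ["out"]
print(hoistable_loads(loads, stores, conservative))  # []
print(hoistable_loads(loads, stores, precise))       # ['scale']
```

With the conservative oracle the loop reloads everything each iteration; with the precise one, the hoisted value occupies a register for the loop's entire duration, and enough of those can tip the allocator into spilling.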
> On Wed, Aug 24, 2016 at 2:56 PM, David Callahan <dcallahan at fb.com> wrote:
>> Hi Greg,
>> I see there is ongoing work with alias analysis and it appears the prior
>> CFLAA has been abandoned.
>> I have a variant of it where I reworked how compression was done to be
>> less conservative, reworked the interprocedural analysis to do simulated but bounded
>> inlining, and added code to do on-demand testing of CFL paths on both
>> compressed and full graphs.
>> I reached a point where the ahead-of-time compression was linear but
>> still very accurate compared to on-demand path search, and there were
>> noticeable improvements in the alias analysis results and in the impacted
>> transformations. Happy to share the patch with you if you are interested,
>> as well as some data collected.
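For readers unfamiliar with the on-demand querying David describes: a demand-driven alias query walks the constraint graph only when asked, rather than solving everything up front. The sketch below is a drastic simplification, not the patch's code; it collapses CFL-reachability down to plain copy-edge reachability over invented node names.

```python
from collections import defaultdict

class AliasGraph:
    """Toy flow-insensitive assignment graph with on-demand alias queries."""
    def __init__(self):
        self.addr_of = defaultdict(set)      # p = &obj facts
        self.copied_from = defaultdict(set)  # q = p  =>  copied_from[q] has p

    def add_addr_of(self, p, obj):
        self.addr_of[p].add(obj)

    def add_copy(self, q, p):                # models the assignment q = p
        self.copied_from[q].add(p)

    def points_to(self, v, seen=None):
        # Demand-driven search: chase copy edges back to address-of facts,
        # computing a points-to set only for the queried node.
        seen = set() if seen is None else seen
        if v in seen:                        # guard against copy cycles
            return set()
        seen.add(v)
        objs = set(self.addr_of[v])
        for src in self.copied_from[v]:
            objs |= self.points_to(src, seen)
        return objs

    def may_alias(self, p, q):
        return bool(self.points_to(p) & self.points_to(q))

g = AliasGraph()
g.add_addr_of("p", "x")       # p = &x
g.add_copy("q", "p")          # q = p
g.add_addr_of("r", "y")       # r = &y
print(g.may_alias("p", "q"))  # True
print(g.may_alias("p", "r"))  # False
```

The real CFLAnders/CFLSteens implementations match parenthesized dereference/assignment labels along paths; this sketch only conveys the query-on-demand shape.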
>> However, I was not able to see any performance improvements in the code.
>> In fact, on various benchmarks there were noticeable regressions in
>> measured performance of the generated code. Have you noticed any similar
>> behavior?