[llvm-dev] CFLAA

David Callahan via llvm-dev llvm-dev at lists.llvm.org
Thu Aug 25 18:45:01 PDT 2016


Sorry, I forgot to answer your last question.

The benchmarks were a rather arbitrarily selected set of files from
Facebook's codebase, so they are not really suitable to share.

On 8/25/16, 6:34 PM, "David Callahan" <dcallahan at fb.com> wrote:

>Hi Jia, nice to meet you,
>
>
>On 8/25/16, 6:22 PM, "Jia Chen" <jchen at cs.utexas.edu> wrote:
>
>>Hi David,
>>
>>I am the one who was responsible for CFLAA's refactoring this summer.
>>I've sent out another email on llvm-dev, and you can find more about my
>>work in my GSoC final report.
>
>Is this report available?
>
>>I think it is fantastic that you have done such interesting work.
>>I'll definitely try to help get the code reviewed and merged into the
>>current codebase. After a quick glance at your patch, it seems that what
>>you are trying to do there is an optimized version of CFL-Steens, with a
>>custom way of handling context-sensitivity. I'll be happy if we can end
>>up integrating it into the existing CFL-Steens pass.
>
>The work was more about improving the accuracy of the equivalencing step
>than about context sensitivity. In fact, it is only context-sensitive to
>the extent there is simulated inlining. There is now downward propagation
>of facts into called functions.
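
To illustrate what the "equivalencing" (compression) step refers to, here is
a minimal sketch; it is not taken from the patch or from LLVM's CFLAA
sources, and all names and the eager unify-on-assignment policy are
illustrative assumptions. Steensgaard-style compression collapses graph
nodes into classes with a union-find structure, and the accuracy work
described above amounts to being more selective about which nodes end up in
the same class.

// Minimal illustrative sketch (not the patch): equivalencing via union-find.
#include <numeric>
#include <utility>
#include <vector>

struct UnionFind {
  std::vector<int> Parent;
  explicit UnionFind(int NumNodes) : Parent(NumNodes) {
    std::iota(Parent.begin(), Parent.end(), 0);
  }
  int find(int X) {
    while (Parent[X] != X)
      X = Parent[X] = Parent[Parent[X]]; // path halving
    return X;
  }
  void unite(int A, int B) { Parent[find(A)] = find(B); }
};

// A deliberately coarse compression: unify the endpoints of every
// assignment edge, so one class stands in for all values that may flow
// together.  Queries then report NoAlias only when the classes differ.
// A less conservative scheme keeps the classes finer by unifying only
// nodes it can prove are CFL-equivalent.
void compressAssignments(UnionFind &UF,
                         const std::vector<std::pair<int, int>> &AssignEdges) {
  for (const auto &Edge : AssignEdges)
    UF.unite(Edge.first, Edge.second);
}
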
>
>I wanted to share it in case there were lessons of value. It is not in a
>very clean state at the moment, but I can clean it up. Let me know how I
>can help.
>
>
>>Regarding the benchmark numbers, I'm very interested in what kind of
>>test files you were running the experiments on. Is it possible to share
>>them?
>>
>>> On Wed, Aug 24, 2016 at 2:56 PM, David Callahan <dcallahan at fb.com>
>>>wrote:
>>> Hi Greg,
>>>
>>>
>>>
>>> I see there is ongoing work with alias analysis, and it appears the
>>>prior CFLAA has been abandoned.
>>>
>>>
>>>
>>> I have a variant of it where I reworked how compression was done to be
>>>less conservative, reworked the interprocedural analysis to do simulated
>>>but bounded inlining, and added code to do on-demand testing of CFL paths
>>>on both compressed and full graphs.
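
As a point of reference for the on-demand CFL path testing mentioned above,
below is a small sketch of the textbook worklist closure for
CFL-reachability (in the style of Melski and Reps). It is not code from the
patch; the grammar and label representation are assumptions made purely for
illustration. An on-demand variant would seed the worklist only with edges
reachable from the two query values instead of closing the whole graph.

// Illustrative sketch only: worklist closure for CFL-reachability over a
// labeled graph.  Productions are restricted to A ::= X and A ::= X Y,
// which is enough for the usual Dyck-style alias grammars.
#include <map>
#include <set>
#include <string>
#include <tuple>
#include <utility>
#include <vector>

using Node = int;
using Label = std::string;
using Edge = std::tuple<Node, Label, Node>;

struct Grammar {
  std::multimap<Label, Label> Unary;                      // A ::= X, keyed by X -> A
  std::multimap<Label, std::pair<Label, Label>> ByFirst;  // A ::= X Y, keyed by X -> {Y, A}
  std::multimap<Label, std::pair<Label, Label>> BySecond; // A ::= X Y, keyed by Y -> {X, A}
};

std::set<Edge> cflClosure(const std::vector<Edge> &InitialEdges,
                          const Grammar &G) {
  std::set<Edge> Known(InitialEdges.begin(), InitialEdges.end());
  std::vector<Edge> Worklist(Known.begin(), Known.end());
  // Adjacency is filled as edges are popped: node -> (label, other endpoint).
  std::map<Node, std::vector<std::pair<Label, Node>>> Out, In;
  auto Add = [&](Node U, const Label &A, Node V) {
    if (Known.insert({U, A, V}).second)
      Worklist.push_back({U, A, V});
  };
  while (!Worklist.empty()) {
    auto [U, B, V] = Worklist.back();
    Worklist.pop_back();
    Out[U].push_back({B, V});
    In[V].push_back({B, U});
    // A ::= B: derive (U, A, V) from the popped edge alone.
    for (auto [It, End] = G.Unary.equal_range(B); It != End; ++It)
      Add(U, It->second, V);
    // A ::= B Y: pair the popped (U, B, V) with a seen (V, Y, W) -> (U, A, W).
    for (auto [It, End] = G.ByFirst.equal_range(B); It != End; ++It)
      for (const auto &[L, W] : Out[V])
        if (L == It->second.first)
          Add(U, It->second.second, W);
    // A ::= X B: pair a seen (W, X, U) with the popped (U, B, V) -> (W, A, V).
    for (auto [It, End] = G.BySecond.equal_range(B); It != End; ++It)
      for (const auto &[L, W] : In[U])
        if (L == It->second.first)
          Add(W, It->second.second, V);
  }
  return Known;
}

For alias queries one would instantiate the grammar with the usual matched
dereference/assignment productions and treat two values as possibly
aliasing when the closure contains an edge labeled with the grammar's start
symbol between them; that wiring is likewise assumed here, not lifted from
the patch.
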
>>>
>>>
>>>
>>> I reached a point where the ahead-of-time compression was linear but
>>>still very accurate compared to on-demand path search, and there were
>>>noticeable improvements in the alias analysis results and in the
>>>transformations they impact.  I am happy to share the patch with you, as
>>>well as some of the data collected, if you are interested.
>>>
>>>
>>>
>>> However, I was not able to see any performance improvements. In fact,
>>>on various benchmarks there were noticeable regressions in the measured
>>>performance of the generated code. Have you noticed any similar
>>>problems?
>>>
>>>
>>>
>>> --david
>>
>>
>>
>>-- 
>>Best Regards,
>>
>>--
>>Jia Chen
>


