[PATCH] D23432: [AliasSetTracker] Degrade AliasSetTracker results when may-alias sets get too large.
Xinliang David Li via llvm-commits
llvm-commits at lists.llvm.org
Mon Aug 22 12:26:51 PDT 2016
Having real test cases demonstrating such issues will also be helpful
for testing MemorySSA in the future.
David
On Sun, Aug 21, 2016 at 12:51 PM, Philip Reames
<listmail at philipreames.com> wrote:
> Happy to. I'll try to remember to take a diff tomorrow and summarize.
>
> One case of interest here which I know we hit is that we fall back
> to O(N^2) aliasing queries for small loops in LICM, specifically to paper
> over imprecision in AliasSetTracker. I don't have the test case at hand,
> but this was motivated by real cases we saw.
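The O(N^2) fallback Philip describes can be sketched as follows. This is a minimal toy, not LICM's actual code: `Access` and `mayAlias` are invented stand-ins for LLVM's MemoryLocation and the AAResults::alias() query.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy stand-in for a memory access; 'id' partitions accesses into
// disjoint objects, so two accesses may alias iff their ids match.
struct Access { int id; };

// Hypothetical oracle standing in for an AA query such as
// AAResults::alias(MemoryLocation, MemoryLocation).
static bool mayAlias(const Access &A, const Access &B) {
  return A.id == B.id;
}

// O(N^2) pairwise disambiguation: query the oracle about every pair
// directly, instead of trusting a lossy merged-set partition. Precise,
// but quadratic in the number of accesses, so only viable for small N.
static bool anyPairMayAlias(const std::vector<Access> &Accs) {
  for (size_t I = 0; I < Accs.size(); ++I)
    for (size_t J = I + 1; J < Accs.size(); ++J)
      if (mayAlias(Accs[I], Accs[J]))
        return true;
  return false;
}
```

The trade-off is exactly the one under discussion: the pairwise loop recovers precision that a merged may-alias set throws away, at quadratic query cost.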
>
> On 08/21/2016 12:14 PM, Daniel Berlin wrote:
>
> Would you mind listing the ones related to aliasing/memdep/etc?
>
> I'm trying to figure out the best order in which to tackle things.
>
> For example, we could convert passes to MemorySSA, we could add optional
> caching to BasicAA (i.e., an alias cache), etc.
>
> All of these improve the scaling of various things, and we should do
> all of them over time, but it would be nice to know which we should improve
> first; the more data, the better the plan :)
>
>
> On Sun, Aug 21, 2016 at 11:06 AM, Philip Reames <listmail at philipreames.com>
> wrote:
>>
>> On 08/20/2016 10:14 PM, Xinliang David Li wrote:
>>>
>>> I understand your concern here, and a performance cliff is definitely
>>> something we should try to avoid. However, dropping alias info in a
>>> situation like this != a performance cliff. I am sure we can come up
>>> with hand-crafted examples showing performance damage from dropped
>>> alias info, but in real programs, by the time a function reaches such
>>> a state, the alias query results will already be so conservative that
>>> busily doing any further memory disambiguation is likely just a waste
>>> of compile time, so 'gracefully' lowering alias precision versus
>>> dropping the aliasing info on the floor makes no practical difference.
>>> I have not seen performance regressions due to the use of cutoff
>>> limits elsewhere in LLVM.
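For illustration, the kind of cutoff David describes might look like the toy sketch below. This is not the actual patch: the class, the limit, and the integer pointer ids are invented for the example.

```cpp
#include <cassert>
#include <set>

// Hypothetical sketch of a degrading may-alias set: track members
// precisely until the set grows past a limit, then stop tracking and
// answer every query conservatively ("may alias").
class DegradingAliasSet {
  std::set<int> Members;          // pointer ids in this set
  bool Saturated = false;         // true once we gave up tracking
  static const std::size_t Limit = 4; // illustrative cutoff, not LLVM's

public:
  void add(int PtrID) {
    if (Saturated)
      return;
    Members.insert(PtrID);
    if (Members.size() > Limit) {
      Members.clear();   // drop the precise info on the floor
      Saturated = true;  // future queries degrade to "may alias"
    }
  }
  bool mayAlias(int PtrID) const {
    return Saturated || Members.count(PtrID) != 0;
  }
};
```

Once saturated, `mayAlias` returns true for every pointer, which is David's point: if the function is already in that state, precise tracking was buying little anyway.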
>>
>> I have. In fact, we have a number of the flags tuned higher in our local
>> builds than upstream precisely for this reason.
>>
>>>
>>> David
>>>
>>>
>>> On Sat, Aug 20, 2016 at 12:24 PM, Philip Reames
>>> <listmail at philipreames.com> wrote:
>>>>
>>>> reames added a subscriber: reames.
>>>> reames added a comment.
>>>>
>>>> I am not actively objecting to this patch, but I really don't like the
>>>> overall direction here. Having a threshold where our ability to optimize
>>>> falls off a cliff just seems really undesirable. As Hal pointed out, there
>>>> are likely options for summarizing alias sets to allow quicker AA queries.
>>>> How much have we explored that design space?
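One point in the summarizing design space Philip alludes to could look like the following sketch; the range representation and bounds-based summary are assumptions made for the example, not anything in the patch or in LLVM.

```cpp
#include <algorithm>
#include <cassert>
#include <climits>

// Hypothetical summary of an alias set: model each access as an
// [lo, hi) address range and keep only the set's bounding range, so a
// query is an O(1) overlap test instead of a scan over all members.
// Conservative: the bounds may overlap a query even when no member does.
struct RangeSummary {
  long Lo = LONG_MAX, Hi = LONG_MIN; // empty summary

  // Fold a member range into the summary bounds.
  void add(long lo, long hi) {
    Lo = std::min(Lo, lo);
    Hi = std::max(Hi, hi);
  }
  // May the summarized set alias [lo, hi)? Interval-overlap test.
  bool mayAlias(long lo, long hi) const { return lo < Hi && Lo < hi; }
};
```

A query falling in a gap between members still reports "may alias" (a false positive), which is the precision-for-speed trade such summaries make.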
>>>>
>>>>
>>>> Repository:
>>>> rL LLVM
>>>>
>>>> https://reviews.llvm.org/D23432
>>>>
>>>>
>>>>
>>
>
>