[PATCH] D12080: [PM/AA] Rebuild LLVM's alias analysis infrastructure in a way compatible with the new pass manager, and no longer relying on analysis groups.

hfinkel@anl.gov via llvm-commits llvm-commits at lists.llvm.org
Tue Sep 1 14:02:57 PDT 2015


hfinkel added inline comments.

================
Comment at: lib/Analysis/BasicAliasAnalysis.cpp:675
@@ -701,1 +674,3 @@
+          getBestAAResults().alias(MemoryLocation(*CI), MemoryLocation(Object));
+      if (AR) {
         PassedAsArg = true;
----------------
chandlerc wrote:
> I've built a tiny proxy class to model this, and now we get the proxy class and call through it. This keeps the interface consistent but hides the delegation logic, as suggested. I can add comments there if there are places where you think they would help clarify.
Looks good.

================
Comment at: lib/Analysis/BasicAliasAnalysis.cpp:1346-1348
@@ -1392,5 +1345,5 @@
   AliasResult Result =
-      AliasAnalysis::alias(MemoryLocation(V1, V1Size, V1AAInfo),
-                           MemoryLocation(V2, V2Size, V2AAInfo));
+      AAResultBase::alias(MemoryLocation(V1, V1Size, V1AAInfo),
+                          MemoryLocation(V2, V2Size, V2AAInfo));
   return AliasCache[Locs] = Result;
 }
----------------
chandlerc wrote:
> So, I'm now pretty sure that this does match the old behavior.
> 
> Returning MayAlias here causes the AAResults aggregation layer to continue to the next result it has available, and query that layer. So in essence, returning MayAlias triggers the expected behavior.
> 
> There is one difference here, but it shouldn't be observable. In the old system, BasicAA's cache ended up populated by the results provided from querying other AAs. With my change, we will just cache "MayAlias" and then directly query the next layer. In essence, this change may cause us to query more layers of the AA stack, but I would not expect that to change behavior or to be a significant compile time shift as all the other layers are much faster than BasicAA already.
> 
> Does this make more sense?
> 
> I would still like to take the caching out of BasicAA and teach the aggregation layer to do all the caching, which would recover the compile time performance (and get more performance as well) but I think that should be in a follow-up patch.
> So, I'm now pretty sure that this does match the old behavior.

> Returning MayAlias here causes the AAResults aggregation layer to continue to the next result it has available, and query that layer. So in essence, returning MayAlias triggers the expected behavior.

> There is one difference here, but it shouldn't be observable. In the old system, BasicAA's cache ended up populated by the results provided from querying other AAs. With my change, we will just cache "MayAlias" and then directly query the next layer. In essence, this change may cause us to query more layers of the AA stack, but I would not expect that to change behavior or to be a significant compile time shift as all the other layers are much faster than BasicAA already.

> Does this make more sense?

I think I understand what you're saying, but I'm not convinced that this is correct. This is the mechanism that BasicAA uses to do recursive analysis, including analysis based on other passes. So let's say that we have both BasicAA and CFLAA. Given some query on two GEPs, BasicAA might be able to reduce that query to a query on the base pointers of those GEPs, and then recurse up the chain, allowing CFLAA to determine that the base pointers don't alias. Thus, BasicAA and CFLAA can work together on the query, producing an answer neither would have been able to produce alone.

It seems important to keep the system's ability to function this way.

> I would still like to take the caching out of BasicAA and teach the aggregation layer to do all the caching, which would recover the compile time performance (and get more performance as well) but I think that should be in a follow-up patch.

Yes, but you'd also need to be careful about the optimistic phi-based analyses that rely on the caching for correctness (to prevent infinite recursion).



http://reviews.llvm.org/D12080




