[PATCH] D12080: [PM/AA] Rebuild LLVM's alias analysis infrastructure in a way compatible with the new pass manager, and no longer relying on analysis groups.

Chandler Carruth via llvm-commits llvm-commits at lists.llvm.org
Tue Sep 1 13:43:35 PDT 2015


chandlerc marked 4 inline comments as done.
chandlerc added a comment.

Hopefully we're really close at this point? I think I've addressed everything here.


================
Comment at: lib/Analysis/BasicAliasAnalysis.cpp:675
@@ -701,1 +674,3 @@
+          getBestAAResults().alias(MemoryLocation(*CI), MemoryLocation(Object));
+      if (AR) {
         PassedAsArg = true;
----------------
I've built a tiny proxy class to model this, and now we get the proxy object and call through it. This keeps the interface consistent while hiding the delegation logic, as suggested. I can add comments there if there are places you think would benefit from clarification.
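
To make the shape of this concrete, here is a rough, self-contained sketch of the kind of proxy I mean. The names (`Result`, `Aggregation`, `Proxy`, `Location`) are simplified stand-ins for illustration only, not the actual types in the patch:

  enum AliasResult { NoAlias, MayAlias, PartialAlias, MustAlias };

  struct Location {
    const void *Ptr;
    unsigned long Size;
  };

  struct Aggregation; // Stand-in for the top-level AAResults aggregation.

  struct Result {
    Aggregation *AAR = nullptr; // Set when the aggregation adopts this result.

    // Proxy handed back by getBestAAResults(): callers keep writing
    // getBestAAResults().alias(...) exactly as before, and the proxy decides
    // whether to delegate to the aggregation or fall back to this result.
    class Proxy {
      Aggregation *AAR;
      Result &CurrentResult;

    public:
      Proxy(Aggregation *AAR, Result &R) : AAR(AAR), CurrentResult(R) {}
      AliasResult alias(const Location &A, const Location &B);
    };

    Proxy getBestAAResults() { return Proxy(AAR, *this); }

    AliasResult alias(const Location &, const Location &) { return MayAlias; }
  };

  struct Aggregation {
    AliasResult alias(const Location &, const Location &) { return MayAlias; }
  };

  AliasResult Result::Proxy::alias(const Location &A, const Location &B) {
    // Delegate to the full aggregation when one exists; otherwise query the
    // current result in isolation.
    return AAR ? AAR->alias(A, B) : CurrentResult.alias(A, B);
  }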

================
Comment at: lib/Analysis/BasicAliasAnalysis.cpp:1346-1348
@@ -1392,5 +1345,5 @@
   AliasResult Result =
-      AliasAnalysis::alias(MemoryLocation(V1, V1Size, V1AAInfo),
-                           MemoryLocation(V2, V2Size, V2AAInfo));
+      AAResultBase::alias(MemoryLocation(V1, V1Size, V1AAInfo),
+                          MemoryLocation(V2, V2Size, V2AAInfo));
   return AliasCache[Locs] = Result;
 }
----------------
So, I'm now pretty sure that this does match the old behavior.

Returning MayAlias here causes the AAResults aggregation layer to continue on to the next result it has available and query that one. So returning MayAlias produces exactly the fall-through behavior we expect.

There is one difference here, but it shouldn't be observable. In the old system, BasicAA's cache ended up populated with the results provided by querying the other AAs. With my change, we will just cache "MayAlias" and then directly query the next layer. As a consequence, this change may cause us to query more layers of the AA stack, but I would not expect that to change behavior or to be a significant compile-time shift, as all the other layers are much faster than BasicAA already.
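
For illustration, here is a simplified stand-in for the aggregation layer's query loop showing that fall-through on MayAlias. Again, these names are made up for the sketch and are not the actual AAResults code:

  #include <vector>

  enum AliasResult { NoAlias, MayAlias, PartialAlias, MustAlias };

  struct Location {
    const void *Ptr;
    unsigned long Size;
  };

  // Interface each individual AA result implements in this sketch.
  struct ResultConcept {
    virtual ~ResultConcept() = default;
    virtual AliasResult alias(const Location &A, const Location &B) = 0;
  };

  // Stand-in for the aggregation layer.
  struct Aggregation {
    std::vector<ResultConcept *> Results; // e.g. BasicAA, then the cheaper AAs.

    AliasResult alias(const Location &A, const Location &B) {
      for (ResultConcept *R : Results) {
        AliasResult AR = R->alias(A, B);
        // Any definitive answer short-circuits the chain.
        if (AR != MayAlias)
          return AR;
        // MayAlias (including a cached MayAlias coming out of BasicAA) just
        // means "don't know": fall through and ask the next result.
      }
      return MayAlias;
    }
  };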

Does this make more sense?

I would still like to take the caching out of BasicAA entirely and teach the aggregation layer to do all the caching, which would recover the compile-time performance (and likely gain more on top of that), but I think that should be a follow-up patch.
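
Purely as an illustration of the shape that follow-up might take (all names here are made up, and this is not code from the patch): the aggregation layer would key a single cache on the pair of locations, so one lookup serves the whole chain instead of each AA caching separately:

  #include <map>
  #include <utility>

  enum AliasResult { NoAlias, MayAlias, PartialAlias, MustAlias };

  struct Location {
    const void *Ptr;
    unsigned long Size;
    bool operator<(const Location &O) const {
      return Ptr < O.Ptr || (Ptr == O.Ptr && Size < O.Size);
    }
  };

  struct Aggregation {
    // One cache for the whole AA stack, keyed on the location pair.
    std::map<std::pair<Location, Location>, AliasResult> AliasCache;

    // Stand-in for the uncached loop over every AA result in the chain.
    AliasResult aliasUncached(const Location &, const Location &) {
      return MayAlias;
    }

    AliasResult alias(const Location &A, const Location &B) {
      auto Key = std::make_pair(A, B);
      auto It = AliasCache.find(Key);
      if (It != AliasCache.end())
        return It->second;
      // Compute once, then cache for every layer's benefit.
      return AliasCache[Key] = aliasUncached(A, B);
    }
  };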


http://reviews.llvm.org/D12080




