[PATCH] D57627: [BasicAA] Cache nonEscapingLocalObjects for alias() calls.

Chandler Carruth via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Feb 4 19:17:12 PST 2019


chandlerc added a comment.

I just think we can poke the map a bit better here.



================
Comment at: lib/Analysis/BasicAliasAnalysis.cpp:122-126
+  if (IsCapturedCache) {
+    auto CacheIt = IsCapturedCache->find(V);
+    if (CacheIt != IsCapturedCache->end())
+      return CacheIt->second;
+  }
----------------
asbirlea wrote:
> chandlerc wrote:
> > Can this function ever be re-entered? It seems unlikely given the structure.
> > 
> > If not, hoist the `CacheIt` iterator out, use insert, and then update it below rather than doing another lookup?
> IIUC, insert doesn't do a lookup. So using insert below (instead of `[]`) should be equivalent to your suggestion? We are guaranteed to need the insert if we reach that point.
Er, insert does do a lookup?

```
  // `auto` needs an initializer, so name the iterator type of the cache map
  // here (e.g. SmallDenseMap<const Value *, bool, 8>::iterator).
  SmallDenseMap<const Value *, bool, 8>::iterator CacheIt;
  if (IsCapturedCache) {
    bool Inserted;
    std::tie(CacheIt, Inserted) = IsCapturedCache->insert({V, false});
    if (!Inserted)
      // Found cached result, return it!
      return CacheIt->second;
  }
  ...
  bool Ret = !PointerMayBeCaptured(...);
  if (IsCapturedCache)
    CacheIt->second = Ret;
  return Ret;
```

This kind of code pattern should do a single lookup into the map instead of doing two.
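
For reference, a minimal standalone sketch of this single-lookup pattern. It uses std::unordered_map rather than the patch's DenseMap (insert() returns an (iterator, bool) pair in both), and the cachedCheck/expensiveCheck names are purely illustrative stand-ins for the real caching helper and PointerMayBeCaptured:

```
// Single-lookup caching: insert a provisional 'false' entry up front; if the
// key was already present, return the cached value, otherwise compute the
// result and write it back through the iterator from that same insert.
#include <iostream>
#include <unordered_map>

static bool expensiveCheck(int Key) {
  // Placeholder for the costly analysis being cached.
  return Key % 2 == 0;
}

static bool cachedCheck(int Key, std::unordered_map<int, bool> *Cache) {
  std::unordered_map<int, bool>::iterator CacheIt;
  if (Cache) {
    auto [It, Inserted] = Cache->insert({Key, false});
    CacheIt = It;
    if (!Inserted)
      // Found a cached result, return it.
      return CacheIt->second;
  }
  bool Ret = expensiveCheck(Key);
  if (Cache)
    CacheIt->second = Ret; // Update the entry inserted above; no second lookup.
  return Ret;
}

int main() {
  std::unordered_map<int, bool> Cache;
  std::cout << cachedCheck(4, &Cache) << '\n'; // computes and caches
  std::cout << cachedCheck(4, &Cache) << '\n'; // served from the cache
}
```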


================
Comment at: lib/Analysis/BasicAliasAnalysis.cpp:154-155
+    }
 
   return false;
 }
----------------
asbirlea wrote:
> chandlerc wrote:
> > Why not cache this too?
> I debated between sinking the cache search into each of the above `if`s, or caching this.
> 
> Performance-wise, I see no difference between the cache search on all queries vs the checks inside the above `if`s. So, sure, let's cache this too :)
> 
> If other folks find that querying the cache is more expensive when all conditions (`isa` + `isNoAliasCall` + `dyn_cast` + attributes check) are `false`, then I can refactor this.
Sure, but you'll have to pull this all the way before the initial query to the cache. =D

Anyways, this makes sense to me.
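
To make the "cache this too" point concrete, here is a rough standalone sketch, again with std::unordered_map and made-up checkA/checkB in place of the real isa<AllocaInst>/isNoAliasCall/argument-attribute tests. Once the probe-and-seed insert runs before all of the checks, the fall-through `return false` is cached without any extra store, because the entry was already seeded with false:

```
// Illustrative only: the cache probe seeds the entry with 'false' before any
// of the checks run, so the negative result is cached for free; only the
// 'true' paths need to write back through the iterator.
#include <unordered_map>

static bool checkA(int Key) { return Key > 100; }    // hypothetical check
static bool checkB(int Key) { return Key % 7 == 0; } // hypothetical check

static bool cachedQuery(int Key, std::unordered_map<int, bool> *Cache) {
  std::unordered_map<int, bool>::iterator CacheIt;
  if (Cache) {
    auto [It, Inserted] = Cache->insert({Key, false}); // probe before all checks
    CacheIt = It;
    if (!Inserted)
      return CacheIt->second;
  }

  if (checkA(Key) || checkB(Key)) {
    if (Cache)
      CacheIt->second = true;
    return true;
  }

  // Nothing to update: the entry was already seeded with 'false'.
  return false;
}
```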


Repository:
  rL LLVM

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D57627/new/

https://reviews.llvm.org/D57627




