[PATCH] D11651: [LVI] Improve LazyValueInfo compile time performance
Bruno Cardoso Lopes via llvm-commits
llvm-commits at lists.llvm.org
Tue Aug 18 09:21:06 PDT 2015
Hi Duncan,
> The `OverdefinedCache` changes LGTM, assuming you commit it
> separately (these are separable, right? If one of them causes an
> invalidation bug, committing separately will make bisection more
> powerful).
OK.
> Regarding the `ValueCache` change:
>
>> ===================================================================
>> --- lib/Analysis/LazyValueInfo.cpp
>> +++ lib/Analysis/LazyValueInfo.cpp
>> @@ -315,17 +315,18 @@
>> /// This is all of the cached block information for exactly one Value*.
>> /// The entries are sorted by the BasicBlock* of the
>> /// entries, allowing us to do a lookup with a binary search.
>> - typedef std::map<AssertingVH<BasicBlock>, LVILatticeVal> ValueCacheEntryTy;
>> + typedef DenseMap<AssertingVH<BasicBlock>, LVILatticeVal> ValueCacheEntryTy;
>
> DenseMap allocates space for 64 entries on construction...
>
>> /// This is all of the cached information for all values,
>> /// mapped from Value* to key information.
>> std::map<LVIValueHandle, ValueCacheEntryTy> ValueCache;
>>
>
> Now each entry in the `ValueCache` will have a huge memory cost, even if
> the `ValueCacheEntryTy::second` only has a couple of items in it. Do
> you know what the typical size is? Would a `SmallDenseMap<>` be
> appropriate? (If they're usually close to empty, then maybe
> `SmallDenseMap<..., 4>`?)
Using a `SmallDenseMap<..., 4>` actually reduces compile time further
than using a plain `DenseMap`. I've tested other sizes, but 4 seems the
most appropriate.
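
For reference, the resulting typedef would read roughly as follows. This is a sketch of the change under discussion, not the committed patch:

```cpp
// Sketch: per-Value cache of block information, keyed by BasicBlock.
// SmallDenseMap stores up to 4 entries in an inline buffer, so the
// common near-empty case avoids the up-front heap allocation that a
// plain DenseMap would incur for each ValueCache entry.
typedef SmallDenseMap<AssertingVH<BasicBlock>, LVILatticeVal, 4>
    ValueCacheEntryTy;
```

Once a value accumulates more than 4 cached blocks, the map falls back to normal heap-allocated buckets, so the inline size only needs to cover the typical case.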
--
Bruno Cardoso Lopes
http://www.brunocardoso.cc