[PATCH] D16571: Compute the DISubprogram hash key partially (NFC)
Daniel Berlin via llvm-commits
llvm-commits at lists.llvm.org
Wed Feb 3 23:33:02 PST 2016
On Wed, Feb 3, 2016 at 1:18 PM, Mehdi Amini <mehdi.amini at apple.com> wrote:
> On Jan 26, 2016, at 3:59 PM, Daniel Berlin <dberlin at dberlin.org> wrote:
> On Tue, Jan 26, 2016 at 3:35 PM, Mehdi Amini <mehdi.amini at apple.com>
>> The hash should (almost?) never change.
>> I thought about this solution, and I plan on evaluating it as well (it
>> applies to constants as well).
>> However, there is a tradeoff of memory vs. speed in doing this.
>> For instance, when doing LTO, we load a lot of metadata when loading and
>> linking individual Modules, which puts a lot of stress on the hash tables
>> (both uniquing and growing). But then during optimization and CodeGen it
>> shouldn’t, and I’m not sure we want to pay the price of the memory overhead.
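The tradeoff Mehdi describes can be sketched as follows. This is a hypothetical illustration, not the LLVM API: caching the hash in each node avoids recomputing it on every uniquing lookup, at the cost of one extra word per node.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>

// Illustrative node type (not LLVM's): the hash is computed once at
// construction and stored, so later uniquing lookups reuse it for free.
struct UniquedNode {
  std::string Payload;
  size_t CachedHash; // the per-node memory overhead being discussed

  explicit UniquedNode(std::string P)
      : Payload(std::move(P)),
        CachedHash(std::hash<std::string>()(Payload)) {}

  // Cheap: returns the stored hash instead of rehashing Payload.
  size_t getHash() const { return CachedHash; }
};
```

The speed win shows up during LTO linking, when the same node is looked up many times; the memory cost is paid by every node, including ones that are never looked up again after optimization starts.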
> This naturally raises the question of whether you really need hash
> tables to get what you want, or would some other more memory-efficient data
> structure be better for uniquing :)
> I think this is a great point.
> While DenseMap is pretty good, you do end up with a lot of empty buckets
> (or pay a high probing price).
> Whereas, depending on the situation and structure, you can often pay a
> much lower cost (and have better behavior on hash misses). Obviously, in
> most cases, DenseMap should be the choice because it's consistent and who
> cares, but in specific situations if you are placing huge stress on it
> during say, uniquing, there are often better ways both in time and space
> (various compressed tries, ternary search trees, etc).
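One of the alternatives Daniel names, a ternary search tree, can be sketched for string uniquing. This is a minimal hypothetical example, not LLVM code: repeated lookups of the same key return the one canonical stored copy, with no hash buckets at all.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Minimal ternary search tree node: one character per node, with
// lower/equal/higher children, and a canonical value at key ends.
struct TSTNode {
  char C = 0;
  std::unique_ptr<TSTNode> Lo, Eq, Hi;
  const std::string *Value = nullptr; // set when a full key ends here
};

class TernaryUniquer {
  std::unique_ptr<TSTNode> Root;

  const std::string *getOrInsertImpl(std::unique_ptr<TSTNode> &N,
                                     const std::string &Key, size_t I) {
    if (!N) {
      N = std::make_unique<TSTNode>();
      N->C = Key[I];
    }
    if (Key[I] < N->C)
      return getOrInsertImpl(N->Lo, Key, I);
    if (Key[I] > N->C)
      return getOrInsertImpl(N->Hi, Key, I);
    if (I + 1 < Key.size())
      return getOrInsertImpl(N->Eq, Key, I + 1);
    if (!N->Value)
      N->Value = new std::string(Key); // leaked for brevity in this sketch
    return N->Value;
  }

public:
  // Returns the canonical (uniqued) copy of Key; inserts it if new.
  const std::string *getOrInsert(const std::string &Key) {
    assert(!Key.empty() && "empty keys not handled in this sketch");
    return getOrInsertImpl(Root, Key, 0);
  }
};
```

Compared to an open-addressed table like DenseMap, there are no empty buckets to pay for, and a miss terminates as soon as the key diverges from anything stored; the cost is pointer-chasing per character rather than one probe sequence per key.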
> Debug Info metadata are putting a lot of stress both on CPU time as it
> shows up on the profile quite frequently, and on the memory (even if it is far
> better now than in the past following all the work driven by Duncan last
> An LTO build of llvm-tblgen consumes at peak 130MB without debug info, and
> 740MB with debug info. Since you’re familiar with GCC are you aware of the
A sentence got lost here :)
BTW, do you know the hash miss rate on the hash tables involved in uniquing?
(IE, how many total lookups, and how many times is the thing already in the
hash table?)
If the hash hit rate is very high, more memory-efficient hash tables are
viable (sparse_hash_map, etc.).
If the hash hit rate is not that high, it's likely other structures will be
both faster and more efficient.
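The hit rate Daniel asks about is easy to measure. A minimal sketch, using a plain std::unordered_map rather than LLVM's actual uniquing tables (names here are illustrative, not LLVM API):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Counts how often a uniquing-style lookup finds an existing entry
// (hit) versus has to insert a new one (miss).
template <typename K, typename V> class CountingUniquer {
  std::unordered_map<K, V> Map;

public:
  unsigned Hits = 0, Misses = 0;

  V &getOrCreate(const K &Key, const V &Default) {
    auto It = Map.find(Key);
    if (It != Map.end()) {
      ++Hits;
      return It->second;
    }
    ++Misses;
    return Map.emplace(Key, Default).first->second;
  }

  double hitRate() const {
    unsigned Total = Hits + Misses;
    return Total ? double(Hits) / Total : 0.0;
  }
};
```

Instrumenting the real uniquing path this way during an LTO link would answer the question directly: a high hit rate favors a compact hash table, a low one favors a structure whose misses are cheap.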