[early patch] Speed up decl chaining

Rafael EspĂ­ndola rafael.espindola at gmail.com
Sat Oct 19 14:51:51 PDT 2013


> How is it hard to measure? Just look at the distribution of redeclaration
> chain lengths. What I'm trying to get across is that focusing on asymptotic
> complexity if the overwhelming majority of cases are "constant-sized" seems
> a bit misguided. It's always possible to add a fallback mechanism to
> guarantee good asymptotic complexity. It's the same principle as
> SmallVector: you ensure that a specific common case is very fast, and fall
> back to a slower version when the assumptions that enable the optimization
> fail.
>
> The method I suggested for packing bits of the address that is O(n) links
> away into the low bits of each link is kind of a hack, but it *does*
> guarantee constant time access to that node.

It fixes a case that was reported in pr10651. The reduced testcase
seemed an interesting bug to work on in my own time. If you find
computing statistics more interesting, nothing stops you from doing it.

>> For example, when this benchmark first came to being, the linkage
>> computation was non linear and dominated. Fixing it helped existing
>> code and moved the hot spot to decl linking. It looks like the hot
>> path is back to linkage computation, and we are still a lot slower
>> than gcc on this one, so fixing decl chaining will make this a good
>> linkage benchmark again.
>>
>> Unbounded super-linear algorithms in general provide a minefield
>> that is not very user friendly.
>
>
> Your patch doesn't seem to affect the asymptotic complexity of anything
> though.

It does. Building a decl chain with N elements goes from O(N^2) to O(N).

Cheers,
Rafael



More information about the cfe-commits mailing list