[llvm-dev] LLD: Possible optimization for TargetInfo

Sean Silva via llvm-dev llvm-dev at lists.llvm.org
Wed Mar 30 17:34:30 PDT 2016


On Wed, Mar 30, 2016 at 4:25 PM, Rui Ueyama <ruiu at google.com> wrote:

> On Wed, Mar 30, 2016 at 4:20 PM, Sean Silva <chisophugis at gmail.com> wrote:
>
>> I believe the relocation stuff that Rafael is currently working on will
>> make this a non-issue (it will make relocation application much friendlier
>> for the CPU).
>>
>
> I don't think Rafael's patch would make this a non-issue. He's changing
> scanRelocs to create data, which would reduce the number of calls to the
> virtual functions, but it wouldn't reduce them to zero.
>
>> However, even in the current scheme, since the target is fixed, all the
>> indirect call sites should be monomorphic, so there shouldn't be much
>> branch-prediction cost (certainly nothing that would cause a 1.8%
>> performance delta for the entire link).
>>
>
> Agreed. We could turn the functions that call TargetInfo's member
> functions into templates, instantiated once per target, to eliminate the
> virtual function calls.
>
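
For concreteness, that idea would look roughly like the sketch below. It is
only an illustration under assumed names (applyRelocations, X86_64Target,
Reloc and relocateOne are made up here, not taken from the LLD sources):

    #include <cstdint>
    #include <vector>

    // Hypothetical per-target policy type with non-virtual members.
    struct X86_64Target {
      static void relocateOne(uint8_t *Loc, uint32_t Type, uint64_t Val) {
        // ... apply one x86-64 relocation to the bytes at Loc ...
      }
    };

    struct Reloc {
      uint64_t Offset;
      uint32_t Type;
      uint64_t Value;
    };

    // The relocation loop is a template over the target, so the calls to
    // relocateOne are resolved at compile time and can be inlined.
    template <class Target>
    void applyRelocations(uint8_t *Buf, const std::vector<Reloc> &Relocs) {
      for (const Reloc &R : Relocs)
        Target::relocateOne(Buf + R.Offset, R.Type, R.Value);
    }

The target would be chosen once, outside the hot loop, e.g. by calling
applyRelocations<X86_64Target>(Buf, Relocs) from a switch over the target
kind, so only that one dispatch stays indirect.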

From what has been presented, I would not conclude that virtual calls are
actually the problem (or a problem at all). A root-cause analysis is
necessary. As r263227 shows, the relocation application loop is very
sensitive to small changes.

One quick thing you may also want to try as a sanity check is inserting
nops in different places in the function. I suspect you'll find that the
performance swings (both speedups and slowdowns) from doing that are
similar in magnitude to what you are seeing. You may also want to try
editing the indirect call instruction into a direct call without otherwise
modifying the binary; if that reproduces the 1.8% speedup, that will be
convincing.

If you haven't read it, I think you would enjoy this paper:
http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37077.pdf

-- Sean Silva


>
>
>> Notice that 1.8% is smaller than the performance variation from r263227,
>> which is a very innocuous-looking change but caused a ~2-3% slowdown for
>> ScyllaDB (see the thread "LLD performance w.r.t. local symbols (and
>> --build-id)").
>>
>> -- Sean Silva
>>
>> On Wed, Mar 30, 2016 at 3:39 PM, Rui Ueyama via llvm-dev <
>> llvm-dev at lists.llvm.org> wrote:
>>
>>> I was wondering how much overhead the virtual function calls of
>>> TargetInfo member functions add. TargetInfo handles platform-specific
>>> details, and we have target-specific subclasses of that class. The
>>> subclasses override functions defined in TargetInfo.
>>>
>>> The TargetInfo member functions are called multiple times for each
>>> relocation, so the cost of the virtual function calls may be
>>> non-negligible. That motivated the following test.
>>>
>>> As a test, I removed all TargetInfo subclasses except for x86-64, moved
>>> all the code from X86_64TargetInfo into TargetInfo, and removed `virtual`
>>> from TargetInfo (a sketch of the shape of this change follows after the
>>> quoted message).
>>>
>>> The original LLD links itself (with debug info) in 7.499 seconds. The
>>> de-virtualized version does the same in 7.364 seconds, an improvement of
>>> about 1.8%.
>>>
>>> I'm just pointing out that there's room to improve performance here; I'm
>>> not suggesting we do anything about it right now. We probably shouldn't,
>>> because the current code is pretty straightforward. But I'd expect that
>>> we will eventually want to do something about it in the future.
>>>
>>>
>>
>
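
To make the quoted experiment above concrete, the structure being discussed
has roughly the following shape. This is a simplified illustration, not the
actual LLD code; the member name and signature here are invented:

    #include <cstdint>

    // Simplified shape of the current design: a base class with virtual
    // hooks, overridden once per target.
    class TargetInfo {
    public:
      virtual ~TargetInfo() = default;
      virtual void relocateOne(uint8_t *Loc, uint32_t Type,
                               uint64_t Val) const = 0;
      // ... other per-target hooks ...
    };

    class X86_64TargetInfo : public TargetInfo {
    public:
      void relocateOne(uint8_t *Loc, uint32_t Type,
                       uint64_t Val) const override {
        // ... apply one x86-64 relocation to the bytes at Loc ...
      }
    };

    // These hooks are reached through a TargetInfo pointer several times
    // per relocation, so each call is an indirect (virtual) call. The
    // quoted experiment flattens the hierarchy: the x86-64 code moves into
    // TargetInfo itself and `virtual` is dropped, so every call becomes
    // direct and inlinable.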