[cfe-dev] Virtual function call optimization (memoization) questions

Ninu-Ciprian Marginean via cfe-dev cfe-dev at lists.llvm.org
Tue Jun 16 16:06:38 PDT 2020


Hello Richard,

It seems the caching works only when the compiler is able to unroll the
loop; otherwise, it does not (in the following example, change the actual
parameter of the work method to a constant and the optimization is
performed again):
https://godbolt.org/z/ExvKqR
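
Concretely, this is the shape of what I tested (a reduced sketch rather
than the exact code behind the link; the names mirror the example):

    struct Base {
        virtual int id();
    };

    int Base::id() { return 1; } // out of line, so the call stays indirect

    int work(Base* b, int n) {
        int sum = 0;
        for (int i = 0; i < n; ++i)
            sum += b->id(); // vptr load + slot load on every iteration
        return sum;
    }

    // In my test with -fstrict-vtable-pointers, the repeated vtable loads
    // were removed only in the second variant, where the constant trip
    // count lets the compiler fully unroll the loop:
    int runtime_bound(Base* b, int n) { return work(b, n); }
    int constant_bound(Base* b)      { return work(b, 4); }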

Also, thanks for the reading material. I already went through it, but I
need to read it again; some parts are not yet clear to me. If you don't
mind, I will get back to you with a reply once I reach my conclusions
(hopefully soon); just two questions for now. Is this optimization
supposed to always yield correct results, or are some cases simply
expected to work differently (perhaps running into undefined behaviour)?
I don't have an actual example, but I suspect multithreading, even
without exploiting race conditions, could be used to carefully craft a
program that breaks this optimization.
When I was thinking about it, I had concerns about the possibility of
somehow changing the result of the virtual call resolution, but I hadn't
considered that placement new could be used inside the actual call, so
the multithreading idea didn't seem possible. Also, are you aware of any
other ways to invalidate the cached resolution?
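
For concreteness, here is the kind of invalidation I now understand is
possible (a minimal sketch; the names are made up):

    #include <new>

    struct Base {
        virtual int id();
    };

    struct Derived : Base {
        int id() override { return 2; }
    };

    // The virtual call itself ends the lifetime of *this and constructs
    // an object of a different dynamic type in the same storage (assuming
    // the storage is large enough for Derived).
    int Base::id() {
        this->~Base();
        ::new (static_cast<void*>(this)) Derived;
        return 1;
    }

    int work(Base* b) {
        int first = b->id(); // resolves to Base::id, which replaces the object
        // A resolution cached from the first call would now be stale. The
        // standard only blesses the reuse through std::launder (or the
        // pointer returned by placement new), and that is exactly the
        // barrier the optimization can key on.
        int second = std::launder(b)->id(); // resolves to Derived::id
        return first + second;
    }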

Thanks,
Ninu.

On Tue, Jun 16, 2020 at 9:51 PM Richard Smith <richard at metafoo.co.uk> wrote:

> On Tue, 16 Jun 2020, 11:44 Richard Smith, <richard at metafoo.co.uk> wrote:
>
>> On Mon, 15 Jun 2020, 23:31 Ninu-Ciprian Marginean via cfe-dev, <
>> cfe-dev at lists.llvm.org> wrote:
>>
>>> Hello,
>>>
>>> I want to investigate whether there are any possibilities for optimizing
>>> virtual function calls. From my reading I understand that the overhead of
>>> such a call is two pointer dereferences. I know about alternatives like
>>> CRTP and std::variant, but for this investigation I'm interested only in
>>> traditional, dynamic polymorphism. One of my ideas is to cache the
>>> computed address of the actual method that gets called. Obviously we
>>> cannot always do this, but there are some cases where we could.
>>>
>>> One example:
>>>
>>> I have a virtual function call in a loop. Every iteration calls the same
>>> method:
>>> https://godbolt.org/z/WFp2rm
>>>
>>> The loop is in method work; the virtual call is to method id.
>>>
>>> We can see that method work gets inlined, but inside the loop, there are
>>> always two pointer dereferences:
>>> mov     rax, qword ptr [r14]
>>> call    qword ptr [rax]
>>>
>>> Since the object referred to by b never changes to a different object,
>>> this could (at least in this case) be cached.
>>>
>>> My assembly might be rusty, but before the loop, we could have:
>>> mov     r13, qword ptr [r14]
>>> mov     r13, qword ptr [r13]
>>>
>>> and inside the loop we would only have:
>>>
>>> call    r13
>>>
>>> *My questions are:*
>>> Do we have a mechanism in C++ to explicitly store the result of the
>>> vtable lookup without additional overhead? Some sort of cache for this
>>> result, so that we do not do the same computation over and over again?
>>> I'm specifically looking for this kind of solution, not for alternatives
>>> to dynamic polymorphism like CRTP or std::variant; I couldn't find one.
>>> For functional programming style "pure functions", we would have:
>>>     int res = pure_function();
>>>     while(true) use(res);
>>> instead of
>>>     while(true) use(pure_function());
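>>>
>>> The closest thing I could find is a pointer to member function, but as
>>> far as I understand it does not cache the resolved target: for a virtual
>>> function it stores the vtable slot, and each call through it still
>>> dispatches dynamically. A minimal sketch:
>>>
>>>     struct Base { virtual int id() { return 1; } };
>>>
>>>     int use_pmf(Base* b, int n) {
>>>         int (Base::*pmf)() = &Base::id; // "caches" the member, not the target
>>>         int sum = 0;
>>>         for (int i = 0; i < n; ++i)
>>>             sum += (b->*pmf)();         // still a dynamic dispatch each time
>>>         return sum;
>>>     }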
>>>
>>> Is there anything in the C++ standard that prevents such an optimization?
>>>
>>
>> The optimization is valid, and clang performs it under
>> -fstrict-vtable-pointers (https://godbolt.org/z/SrlUC8). Unfortunately,
>> there are still some cases where this optimization can regress performance
>> (the annotations that the frontend inserts to enable the optimization can
>> get in the way of other transformations), so it's not enabled by default
>> yet.
>>
>
> Actually, I think the above performance concern is way out of date. I
> think we decided that we are happy with the performance delta (it improves
> a lot more than it regresses) and the remaining blocker for enabling it by
> default was ensuring LLVM IR ABI compatibility (especially when performing
> LTO between compilations with the feature turned off and compilations with
> it turned on).
>
> How would we identify the cases in which such an optimization is possible
>>> and the ones in which it is not?
>>>
>>> Are there any other reasons for which such an optimization would not be
>>> desired?
>>>
>>>
>>> N.B.: I realize the last two questions might be difficult to answer, but
>>> could you at least point me in the right direction for investigating this
>>> myself?
>>>
>>> Thanks,
>>> Ninu.
>>