[LLVMdev] Is there a "callback optimization"?

Eugene Toder eltoder at gmail.com
Fri Jun 4 13:13:37 PDT 2010


Generic reasons -- more data means more cache misses, TLB misses, and
page faults. And some code-specific reasons -- more code pollutes the
branch predictors and, on some CPUs, the trace cache.
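
To make the trade-off concrete, here is a minimal C sketch of the kind of
callback specialization discussed in this thread (the names apply_all,
twice and square are invented for the example, not anything in LLVM).
Each specialization turns the indirect call into a direct, inlinable one,
but it also duplicates the loop body, so every extra specialization adds
to exactly the instruction footprint described above:

    /* Generic version: one copy of the loop, but an indirect call
     * the optimizer cannot see through. */
    static void apply_all(int *v, int n, int (*f)(int)) {
        for (int i = 0; i < n; i++)
            v[i] = f(v[i]);
    }

    static int twice(int x)  { return 2 * x; }
    static int square(int x) { return x * x; }

    /* Hand-written specializations: each call is now direct and the
     * callback can be inlined, but the loop body is duplicated once
     * per constant callback that appears at some call site. */
    static void apply_all_twice(int *v, int n) {
        for (int i = 0; i < n; i++)
            v[i] = twice(v[i]);      /* inlines to v[i] *= 2 */
    }

    static void apply_all_square(int *v, int n) {
        for (int i = 0; i < n; i++)
            v[i] = square(v[i]);     /* inlines to v[i] *= v[i] */
    }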

Eugene

On Fri, Jun 4, 2010 at 9:01 PM, Cornelius <c.r1 at gmx.de> wrote:
> Hi,
>> 1) Knowing when to perform specialization. If the call was not inlined,
>> the function is probably big. Getting this wrong will generate *a lot*
>> of code for a very small (if not negative) speed gain.
> Could you elaborate on why just having (lots of) more code in the final
> executable incurs a performance _penalty_?
> I was thinking of something similar, but for type specializations of
> functions in a dynamically-typed language, so that the frontend creates
> more than one function for each function in the source code.
>
>> 2) Sharing specializations between different call sites that pass the
>> same constants.
>> Getting 1) right is crucial but hard. The easy cases are already handled
>> by the inliner and dead argument elimination. If good profiling
>> information is available, it can be used to estimate the speed/space
>> trade-off (specialize calls from hot code).
>>
>> Eugene
>>
>
> Cornelius
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
>
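
For the type-specialization idea in Cornelius's question above, the
trade-off has the same shape. A rough sketch, again in C and with an
invented boxed-value representation (none of this is an existing LLVM or
frontend API): the frontend could emit a monomorphic int/int variant next
to the generic, tag-dispatching one, gaining speed on the hot path at the
cost of one more function in the image per specialization:

    /* Invented boxed-value representation for a dynamic language. */
    typedef enum { TY_INT, TY_DOUBLE } Tag;
    typedef struct { Tag tag; union { long i; double d; } v; } Value;

    /* Generic entry point: dispatches on the runtime tags. */
    static Value dyn_add(Value a, Value b) {
        Value r;
        if (a.tag == TY_INT && b.tag == TY_INT) {
            r.tag = TY_INT;    r.v.i = a.v.i + b.v.i;
        } else {
            double x = (a.tag == TY_INT) ? (double)a.v.i : a.v.d;
            double y = (b.tag == TY_INT) ? (double)b.v.i : b.v.d;
            r.tag = TY_DOUBLE; r.v.d = x + y;
        }
        return r;
    }

    /* Type-specialized variant for call sites known to be int/int:
     * no tag checks and no boxing, but one more function in the
     * executable for every such specialization that gets emitted. */
    static long dyn_add_int_int(long a, long b) {
        return a + b;
    }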



