[LLVMdev] LLVM Inliner

Xinliang David Li xinliangli at gmail.com
Tue Nov 23 20:29:02 PST 2010


On Tue, Nov 23, 2010 at 7:40 PM, Reid Kleckner <reid.kleckner at gmail.com> wrote:

> On Tue, Nov 23, 2010 at 8:07 PM, Xinliang David Li <xinliangli at gmail.com>
> wrote:
> > Hi, I browsed the LLVM inliner implementation, and it seems there is room
> > for improvement.  (I have not read it too carefully, so correct me if what
> > I observed is wrong.)
> > First, the good side of the inliner -- the function-level summary and
> > inline cost estimation are more elaborate and complete than gcc's. For
> > instance, it considers callsite arguments and the effects of optimizations
> > enabled by inlining.
> > Now more on the weaknesses of the inliner:
> > 1) It is bottom up.  Inlining is not done in an order based on the
> > priority of the callsites.  It may leave important callsites (at the top
> > of the cg) uninlined due to higher cost after the inline-cost update. It
> > also eliminates the possibility of inline specialization. To change this,
> > the inliner pass may not be able to use the pass manager infrastructure.
> > (I noticed a hack in the inliner to work around the problem -- for a
> > static function, avoid inlining its callees if that causes it to become
> > too big ...)
> > 2) There seems to be only one inliner pass.  For calls to small functions,
> > it is better to perform early inlining as one of the local (per-function)
> > optimizations, followed by scalar-opt cleanup. This will sharpen the
> > summary information.  (Note that the inline summary update does not
> > consider the possible cleanup.)
> > 3) Recursive inlining is not supported.
> > 4) A function with an indirect branch is not inlined. What source
> > construct does the indirect branch instruction correspond to? A variable
> > jump?
>
> This corresponds to functions that use GCC's labels-as-values
> extension, not things like switch statements or virtual calls, so this
> really only applies to things like interpreter loops, which you don't
> usually want to inline.
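
For concreteness, the construct in question is GCC's "labels as values"
(computed goto) extension: the address of a label is taken with && and jumped
to through a variable. A minimal illustration follows (not LLVM code; the
indirect goto is what lowers to an 'indirectbr' instruction in the IR and so
disables inlining of the function):

  // Tiny computed-goto dispatch loop, the typical interpreter-style use of
  // the extension (GNU C/C++).  Assumes n > 0 and ops[i] in {0, 1, 2}.
  int dispatch(const int *ops, int n) {
    static void *table[] = { &&op_add, &&op_sub, &&op_halt };
    int acc = 0, i = 0;
    goto *table[ops[i]];              // lowers to 'indirectbr' in LLVM IR
  op_add:
    acc += 1;
    if (++i < n) goto *table[ops[i]];
    return acc;
  op_sub:
    acc -= 1;
    if (++i < n) goto *table[ops[i]];
    return acc;
  op_halt:
    return acc;
  }
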
>
> > 5) fudge factor prefers functions with vector instructions -- why is
> > that?
>
> I'm guessing that vectorized instructions are an indicator that the
> function is a hotspot, and should therefore be inlined.
>

This is a reasonable heuristic.
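
The idea is presumably something along these lines (a paraphrased sketch, not
the actual InlineCost.cpp code; the threshold and weight are illustrative):

  #include "llvm/Analysis/CodeMetrics.h"

  // Sketch: give callees whose bodies are mostly vector instructions a bonus
  // multiplier on the inlining threshold, on the assumption that vectorized
  // code marks a hot, hand-tuned function worth inlining more aggressively.
  float getVectorFudgeFactor(const llvm::CodeMetrics &Metrics) {
    float Factor = 1.0f;
    if (Metrics.NumVectorInsts > Metrics.NumInsts / 2)  // mostly vector code
      Factor += 0.5f;                                    // illustrative weight
    return Factor;
  }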


>
> > 6) There is one heuristic used in the inline-cost computation that seems
> > wrong:
> >
> >   // Calls usually take a long time, so they make the inlining gain
> >   // smaller.
> >   InlineCost += CalleeFI->Metrics.NumCalls * InlineConstants::CallPenalty;
> >
> > Does it try to block inlining of callees with lots of calls? Note that
> > inlining such a function only increases the static call count.
>
> I'm guessing the heuristic is that functions with lots of calls are
> heavy and spend a lot of time in their callees, so eliminating the
> overhead of one call isn't worth it.
>

Those calls may sit on cold paths -- penalizing the inlining of such
functions can miss opportunities.
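
A sketch of the refinement being suggested, assuming a hypothetical
isColdBlock() predicate (which could be backed by static heuristics or
profile data); this is illustrative only, not existing LLVM code:

  #include "llvm/IR/Function.h"
  #include "llvm/IR/Instructions.h"
  #include "llvm/Analysis/InlineCost.h"   // for InlineConstants::CallPenalty

  // Hypothetical query: whether BB is statically or profile-wise cold.
  bool isColdBlock(const llvm::BasicBlock &BB);

  // Only count calls on non-cold paths toward the penalty, so a callee whose
  // calls all sit in error-handling code is not treated as if those calls
  // were on the hot path.
  unsigned computeCallPenalty(const llvm::Function &Callee) {
    unsigned Penalty = 0;
    for (const llvm::BasicBlock &BB : Callee) {
      if (isColdBlock(BB))                // skip cold blocks entirely
        continue;
      for (const llvm::Instruction &I : BB)
        if (llvm::isa<llvm::CallInst>(I) || llvm::isa<llvm::InvokeInst>(I))
          Penalty += llvm::InlineConstants::CallPenalty;
    }
    return Penalty;
  }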

David


>
> Reid
>