[PATCHES] A module inliner pass with a greedy call site queue
James Molloy
james at jamesmolloy.co.uk
Wed Aug 6 01:11:22 PDT 2014
Hi Nick,
I'm not an expert on inlining algorithms, so please excuse my naivety. But
usually these "top-down versus bottom-up" arguments (in other domains, at
least) come to the obvious conclusion that both approaches have merits, so
the answer is to create a hybrid. Why is that not the case here too?
As far as I can tell, Yin's pass simply provides a way to add global
context to local decisions. An inliner analyzes a call site and assigns it
a value; whether that analysis is done bottom-up or top-down doesn't really
matter. What matters is that eventually all (rational) call sites end up in
the same sorted data structure and are addressed in order.
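To make that concrete, here is the rough shape I have in mind -- just a
sketch, not Yin's actual patch; the RankedCallSite fields and the scores are
placeholders I've made up:

    #include <queue>
    #include <string>
    #include <vector>

    // Placeholder for a call site plus the value the heuristic assigned to
    // it; the real pass would carry actual IR call-site information.
    struct RankedCallSite {
      std::string Caller;
      std::string Callee;
      double Benefit;  // higher means "inline this one sooner"
    };

    struct CompareBenefit {
      bool operator()(const RankedCallSite &A, const RankedCallSite &B) const {
        return A.Benefit < B.Benefit;  // max-heap on Benefit
      }
    };

    int main() {
      // However the call sites were scored (bottom-up or top-down), they all
      // end up in one global priority queue and are processed greedily.
      std::priority_queue<RankedCallSite, std::vector<RankedCallSite>,
                          CompareBenefit> Queue;
      Queue.push({"caller_a", "vector::push_back", 42.0});
      Queue.push({"vector::push_back", "vector::__grow", 3.0});

      while (!Queue.empty()) {
        RankedCallSite CS = Queue.top();
        Queue.pop();
        // Inline CS here, then re-score and re-queue any call sites whose
        // benefit changed or that the inlining newly exposed.
      }
    }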
Am I missing something?
Cheers,
James
On 6 August 2014 08:54, Nick Lewycky <nicholas at mxc.ca> wrote:
> Hal Finkel wrote:
>
>> I'd like you to elaborate on your assertion here, however, that a
>> "topdown inliner is going to work best when you have the whole program." It
>> seems to me that, whole program or not, a top-down inlining approach is
>> exactly what you want in order to avoid the vector-push_back cold-path
>> inlining problem (because, from the caller, the inliner sees many calls to
>> push_back, which is small -- because the hot path is small and the cold
>> path has not (yet) been inlined -- and inlines them all, at which point it
>> can make a sensible decision about the cold-path calls).
>>
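(For concreteness, the push_back being discussed has roughly this shape; a
simplified stand-in for illustration, not the actual libc++ code:)

    template <typename T> struct MiniVec {
      T *Begin = nullptr, *End = nullptr, *Cap = nullptr;

      // Cold path: reallocation and copying, large and rarely executed; the
      // body is elided here, but in real code it is non-trivial logic.
      void grow_and_push(const T &V) { (void)V; /* reallocate, then store */ }

      // Hot path: tiny, so push_back itself looks cheap to inline at every
      // caller; the expensive work stays hidden behind grow_and_push().
      void push_back(const T &V) {
        if (End != Cap) {      // common case: capacity available
          *End++ = V;
          return;
        }
        grow_and_push(V);      // rare case: still an opaque call after inlining
      }
    };

    int main() { MiniVec<int> V; V.push_back(1); }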
>
> I don't see that. You get the same information when looking at a pair of
> functions and deciding whether to inline. With the bottom-up walk, we
> analyze the caller and callee in their entirety before deciding whether to
> inline. I assume a top-down inliner would do the same.
>
> If you have a top-down traversal and you don't have the whole program, the
> first problem you have is a whole ton of starting points. At first blush
> bottom up seems to have the same problem, except that they are generally
> very straightforward leaf functions -- setters and getters, or little loops
> to test for a property. Top down, you don't yet know what you've got, and
> the starting function has lots of calls that may access arbitrary memory.
> In either case, you
> apply your metric to inline or not. Then you run the function-level passes
> to perform simplification. Bottom up, you're much more likely to get
> meaningful simplifications -- your getter/setter melts away. Top down, you
> may remove some redundant loads or dead stores, but you still don't know
> what's going on because you have these opaque not-yet-analyzed callees in
> the way. If you couldn't analyze the memory before, inlining one level away
> hasn't helped you, and the function size has grown. You don't get the
> simplifications until you go all the way down the call stack to the setters
> and getters etc.
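(As a concrete illustration of the kind of leaf that melts away bottom-up --
a toy example, not taken from any particular codebase:)

    #include <cstdio>

    struct Counter {
      int Value = 0;
      int get() const { return Value; }  // trivial leaf the bottom-up walk sees first
      void set(int V) { Value = V; }
    };

    int useCounter() {
      Counter C;
      C.set(5);
      // Bottom up, get() and set() are inlined before useCounter() is ever
      // considered, and the function-level passes then forward the store to
      // the load and fold this whole body down to "return 6".
      return C.get() + 1;
    }

    int main() { std::printf("%d\n", useCounter()); }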
>
> There's a fix for this, and that's to perform a sort of symbolic execution
> and just keep track of what the program has done so far (i.e., what values
> registers have taken on so far, which pointers have escaped, etc.), and make
> each inlining decision in program execution order. But that fix doesn't get
> you very far if you haven't got a significant chunk of program to work with.
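(A very rough sketch of that idea -- my own simplification, with the
"symbolic state" reduced to nothing more than known constant arguments, and a
placeholder heuristic standing in for a real cost model:)

    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical stand-ins for IR constructs, just enough for the sketch.
    struct CallSite {
      std::string Callee;
      std::map<std::string, int> KnownArgs;  // arguments with known constants
    };
    struct Function {
      std::string Name;
      std::vector<CallSite> CallsInExecutionOrder;
    };

    // Walk from the entry point in (approximate) program execution order,
    // carrying what has been learned so far, and judge each call site with
    // that accumulated context rather than in isolation.
    void inlineInExecutionOrder(const Function &F,
                                std::map<std::string, Function> &Module,
                                std::map<std::string, int> State) {
      for (const CallSite &CS : F.CallsInExecutionOrder) {
        std::map<std::string, int> Context = State;
        Context.insert(CS.KnownArgs.begin(), CS.KnownArgs.end());

        bool ShouldInline = !Context.empty();  // placeholder heuristic
        auto It = Module.find(CS.Callee);
        if (ShouldInline && It != Module.end())
          // A real pass would splice the callee in and re-simplify here
          // before moving on to the next call site.
          inlineInExecutionOrder(It->second, Module, Context);
      }
    }

    int main() {
      std::map<std::string, Function> Module;
      Module["main"] = {"main", {{"helper", {{"n", 4}}}}};
      Module["helper"] = {"helper", {}};
      inlineInExecutionOrder(Module["main"], Module, {});
    }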
>
>
> Nick