[PATCHES] A module inliner pass with a greedy call site queue
James Molloy
james at jamesmolloy.co.uk
Wed Aug 6 02:34:17 PDT 2014
Hi Nick,
Your points make a lot of sense. However:
> I have a strong problem with global metrics. Things like "only allow X%
> code size growth" mean that whether I inline this callsite can depend on
> seemingly unrelated factors like how many other functions are in the same
> module, even outside the call stack at hand. Similarly for other things
> like cutoffs about how many inlinings are to be performed (now it depends
> on traversal order, and if you provide the inliner with a more complete
> program then it may choose not to inline calls it otherwise would have). I
> don't like spooky action at a distance, it's hard to predict and hard to
> debug.
Doesn't this all depend on how confident your algorithm is about inlining
a particular callsite? I'd agree if the global cutoffs were stopping you
from performing an inlining where your heuristics tell you it is
definitely worthwhile. But not all decisions are so clear, right? We
surely end up with a bunch of "speculative" inlining decisions where the
win/no-win is not clear statically, ahead of time. If the global cutoff
only limits the number of speculative inlining decisions made, I don't
see that as an issue, because you're limiting code growth and yet still
allowing for a non-conservative heuristic.
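To make that concrete, here is a minimal sketch of a greedy call-site queue in which a module-wide growth budget gates only the speculative decisions. All of the cost/confidence numbers, the 0.8 threshold, and the function names are invented for illustration; this is not LLVM's actual cost model.

```python
import heapq

def run_inliner(call_sites, growth_budget, confidence_threshold=0.8):
    """Greedy inlining driven by a priority queue.

    Each call site is (benefit, confidence, size_delta). Clearly
    profitable sites (confidence >= threshold) are always inlined;
    speculative ones are charged against a module-wide size budget.
    Hypothetical cost model, for illustration only.
    """
    # Highest estimated benefit first (heapq is a min-heap, so negate).
    queue = [(-benefit, conf, delta) for benefit, conf, delta in call_sites]
    heapq.heapify(queue)
    inlined, remaining = [], growth_budget
    while queue:
        neg_benefit, conf, delta = heapq.heappop(queue)
        if conf >= confidence_threshold:
            inlined.append((-neg_benefit, conf, delta))  # clear win: no budget check
        elif delta <= remaining:
            remaining -= delta                           # speculative: spend budget
            inlined.append((-neg_benefit, conf, delta))
    return inlined

sites = [(10.0, 0.9, 50), (8.0, 0.5, 40), (6.0, 0.5, 30), (1.0, 0.4, 40)]
print(run_inliner(sites, growth_budget=70))
# -> [(10.0, 0.9, 50), (8.0, 0.5, 40), (6.0, 0.5, 30)]
```

The high-confidence site is inlined unconditionally; the budget only caps how much speculative growth accumulates, which is the distinction I'm drawing above.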
> In the past I've seen them used for their ability to game benchmarks
> (that's my side of the story, not theirs). You provide an inliner with
> tweakable knobs that have really messy complicated interactions all across
> the inliner depending on all sorts of things, then you select the numbers
> that happen to give you a 20% speed up on SPEC for no good reason, and call
> it success. Attribute the success to the flexibility provided by the design.
Yes, I can see that is a serious problem. It is difficult to avoid,
however: unlike Google and Apple, we don't have a clearly defined set of
programs we want to make faster (you have your internal Google code,
which is what you care about; Apple has a similar ecosystem of programs),
so we rely wholly on benchmarks to provide a decent scope for evaluation.
Agreed, the current benchmark sets are limited, but we have to play with
what we have to an extent!
Cheers,
James
On 6 August 2014 10:12, Nick Lewycky <nicholas at mxc.ca> wrote:
> James Molloy wrote:
>
>> Hi Nick,
>>
>> I'm not an expert on inlining algorithms so please excuse my naiveté.
>> But usually these "top-down versus bottom-up" arguments (in other
>> domains, at least), come to the obvious conclusion that both have merits
>> so let's create a hybrid. Why is this not the case here too?
>>
>
> It is. Our current bottom-up inliner is already a hybrid, we sometimes
> decide to inline the caller into the callee instead of inlining the callee
> into the caller.
>
>
> As far as I can tell, Yin's pass simply provides a method to add global
>> context to local decisions.
>>
>
> I have a strong problem with global metrics. Things like "only allow X%
> code size growth" mean that whether I inline this callsite can depend on
> seemingly unrelated factors like how many other functions are in the same
> module, even outside the call stack at hand. Similarly for other things
> like cutoffs about how many inlinings are to be performed (now it depends
> on traversal order, and if you provide the inliner with a more complete
> program then it may choose not to inline calls it otherwise would have). I
> don't like spooky action at a distance, it's hard to predict and hard to
> debug.
>
> We *do* want more context in the inliner, that's the largest known
> deficiency of our current one. Again, the pass manager rewrite is taking
> place to allow the inliner to call into function analysis passes so that we
> can have more context available when making our inlining decision. It's
> just a long, slow path to getting what we want.
>
>
>> Algorithms such as a bottom-up inliner
>> analyze a callsite and assign it a value. This could be bottom-up or
>> top-down, it doesn't really matter. What matters is that eventually, all
>> (rational) callsites end up in the same sorted data structure and are
>> addressed in order.
>>
>> Am I missing something?
>>
>
> The current inliner doesn't assign values across the whole call graph then
> decide where to inline.
>
> Firstly, the local decision (looking at a single caller-callee pair
> through a particular call site) works by attempting to determine how much
> of the callee will be live given the values known at the caller. For
> instance, we will resolve a switch statement to its destination block, and
> potentially eliminate other callees. These simplifications would still be
> possible even if we calculated everything up front.
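A toy version of that local decision: estimate how much of the callee stays live given the values known at this particular call site, so a switch on a known constant contributes only its taken arm. The data structures and costs below are hypothetical, nothing like LLVM's real inline cost accounting.

```python
def live_cost(body, env):
    """Estimate a callee's live size given values known at the call site.

    `body` is a toy list of statements; a ("switch", var, arms) entry maps
    each case value to that arm's cost. If `var` is a known constant in
    `env`, only the taken arm counts; otherwise every arm does.
    """
    cost = 0
    for stmt in body:
        if stmt[0] == "switch":
            _, var, arms = stmt
            cost += arms[env[var]] if var in env else sum(arms.values())
        else:
            cost += 1  # every other statement costs one unit
    return cost

# Hypothetical callee: a switch over a "mode" argument with three arms.
body = [("switch", "mode", {0: 2, 1: 8, 2: 40})]
print(live_cost(body, {}))           # caller knows nothing: 50
print(live_cost(body, {"mode": 0}))  # call site passes mode == 0: 2
```

The same callee looks cheap from one call site and expensive from another, which is why the decision is made per call site rather than per function.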
>
> Secondly, we iterate with the function passes optimizing the new function
> after each inlining is performed. This may eliminate dead code (potentially
> removing call graph edges) and can resolve loads (potentially creating new
> call graph edges as indirect calls are resolved to direct calls). Handling
> the call graph updates is one of the more interesting and difficult parts of the
> inliner, and it's very important for getting C++ virtual calls right. This
> sort of thing can't be calculated up front.
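A toy model of that interleaving, where running simplification on the caller after one inlining resolves an indirect call and thereby adds a new call graph edge. All names and structures are hypothetical; the real machinery (SCC traversal, per-function passes) is far more involved.

```python
# Toy call graph: function name -> list of callee names, with "?" marking
# an unresolved indirect call. Hypothetical example, not LLVM structures.
call_graph = {
    "main":    ["wrapper"],
    "wrapper": ["?"],   # indirect call through a loaded function pointer
    "target":  [],
}

def simplify(fn, graph):
    # Stand-in for running function-level passes after inlining: pretend
    # the optimizer can now see through the pointer load and devirtualize,
    # which only becomes possible once wrapper's body sits inside main
    # where the pointer value is known.
    graph[fn] = ["target" if c == "?" else c for c in graph[fn]]

def inline(caller, callee, graph):
    # Replace one call edge with the callee's edges, then immediately
    # re-simplify the caller -- the interleaving described above.
    graph[caller].remove(callee)
    graph[caller].extend(graph[callee])
    simplify(caller, graph)

inline("main", "wrapper", call_graph)
print(call_graph["main"])  # the indirect call became a direct call: ['target']
```

The new main -> target edge did not exist before the inlining-plus-simplification step ran, which is why it cannot be computed from the original call graph up front.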
>
> Nick
>
> PS. You may have guessed that I'm just plain prejudiced against top-down
> inliners. I am, and I should call that out before going too far down into
> the discussion.
>
> In the past I've seen them used for their ability to game benchmarks
> (that's my side of the story, not theirs). You provide an inliner with
> tweakable knobs that have really messy complicated interactions all across
> the inliner depending on all sorts of things, then you select the numbers
> that happen to give you a 20% speed up on SPEC for no good reason, and call
> it success. Attribute the success to the flexibility provided by the design.
>
>> On 6 August 2014 08:54, Nick Lewycky <nicholas at mxc.ca> wrote:
>>
>> Hal Finkel wrote:
>>
>> I'd like you to elaborate on your assertion here, however, that
>> a "top-down inliner is going to work best when you have the whole
>> program." It seems to me that, whole program or not, a top-down
>> inlining approach is exactly what you want in order to avoid the
>> vector-push_back-cold-path-inlining problem (because, from the
>> caller, you see many calls to push_back, which is small --
>> because the hot path is small and the cold path has not (yet)
>> been inlined -- and inlines them all, at which point it can make
>> a sensible decision about the cold-path calls).
>>
>>
>> I don't see that. You get the same information when looking at a
>> pair of functions and deciding whether to inline. With the bottom-up
>> walk, we analyze the caller and callee in their entirety before
>> deciding whether to inline. I assume a top-down inliner would do the
>> same.
>>
>> If you have a top-down traversal and you don't have the whole
>> program, the first problem you have is a whole ton of starting
>> points. At first blush bottom up seems to have the same problem,
>> except that they are generally very straightforward leaf functions
>> -- setters and getters or little loops to test for a property. Top
>> down you don't yet know what you've got, and it has lots of calls
>> that may access arbitrary memory. In either case, you apply your
>> metric to inline or not. Then you run the function-level passes to
>> perform simplification. Bottom up, you're much more likely to get
>> meaningful simplifications -- your getter/setter melts away. Top
>> down you may remove some redundant loads or dead stores, but you
>> still don't know what's going on because you have these opaque
>> not-yet-analyzed callees in the way. If you couldn't analyze the
>> memory before, inlining one level away hasn't helped you, and the
>> function size has grown. You don't get the simplifications until you
>> go all the way down the call stack to the setters and getters etc.
>>
>> There's a fix for this, and that's to perform a sort of symbolic
>> execution and just keep track of what the program has done so far
>> (ie. what values registers have taken on so far, which pointers have
>> escaped etc.), and make each inlining decision in program execution
>> order. But that fix doesn't get you very far if you haven't got a
>> significant chunk of program to work with.
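A sketch of that fix: visit call sites in program execution order, threading through the values learned so far, so each decision point sees the execution state at that call. The data structures below are hypothetical; a real implementation would also track memory, escaped pointers, and much more.

```python
# Each function: list of (callee, argument bindings). A binding maps the
# callee's parameter either to a constant or to one of our own parameters.
# Hypothetical program: main passes a known constant down a wrapper chain.
functions = {
    "main":   [("handle", {"mode": 0})],      # mode is a known constant here
    "handle": [("impl", {"mode": "mode"})],   # forwards its own parameter
    "impl":   [],
}

def walk(fn, state, trace):
    """Visit call sites in program execution order, carrying the values
    known so far ('state'); each inlining decision would be made with
    this state in hand. Toy sketch of the idea only."""
    for callee, bindings in functions[fn]:
        callee_state = {p: (state.get(v) if isinstance(v, str) else v)
                        for p, v in bindings.items()}
        trace.append((fn, callee, callee_state))  # decision point + its state
        walk(callee, callee_state, trace)

trace = []
walk("main", {}, trace)
print(trace)
# -> [('main', 'handle', {'mode': 0}), ('handle', 'impl', {'mode': 0})]
```

Note that even the handle -> impl decision, one level removed from main, knows mode == 0 -- exactly the information a purely pairwise top-down look at handle and impl would lack.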
>>
>>
>> Nick
>> _______________________________________________
>> llvm-commits mailing list
>> llvm-commits at cs.uiuc.edu
>> http://lists.cs.uiuc.edu/mailman/listinfo/llvm-commits
>>
>>
>>
>
More information about the llvm-commits mailing list