[LLVMdev] RFC: Recursive inlining

Chandler Carruth chandlerc at google.com
Thu Feb 5 02:57:37 PST 2015


First off, thanks for the examples. They are very nice.

Breaking it down into an explicit state machine is, I think, exactly the
right way to model this.
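
To keep the discussion concrete, here is a minimal sketch of that shape: a
hypothetical binary-tree walk, with pre/mid/post standing in for the real
work. This is just the general pattern, not James's original example.

#include <vector>

struct Node { Node *left, *right; };

void pre(Node *);   // work before the first recursive call
void mid(Node *);   // work between the two recursive calls
void post(Node *);  // work after the second recursive call

// Recursive original:
//   void visit(Node *n) {
//     if (!n) return;
//     pre(n); visit(n->left); mid(n); visit(n->right); post(n);
//   }

// State-machine form: each frame records where to resume, playing the role
// of the saved return address in the recursive version.
enum class State { Enter, AfterLeft, AfterRight };

void visit_iterative(Node *root) {
  struct Frame { Node *n; State s; };
  std::vector<Frame> stack;
  stack.push_back({root, State::Enter});
  while (!stack.empty()) {
    Frame &f = stack.back();
    switch (f.s) {
    case State::Enter:
      if (!f.n) { stack.pop_back(); break; }     // base case: "return"
      pre(f.n);
      f.s = State::AfterLeft;                    // resume after visit(left)
      stack.push_back({f.n->left, State::Enter});
      break;                                     // push_back may invalidate f,
                                                 // so reload back() next time
    case State::AfterLeft:
      mid(f.n);
      f.s = State::AfterRight;                   // resume after visit(right)
      stack.push_back({f.n->right, State::Enter});
      break;
    case State::AfterRight:
      post(f.n);
      stack.pop_back();                          // "return" to the caller frame
      break;
    }
  }
}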

On Thu, Feb 5, 2015 at 2:36 AM, James Molloy <james at jamesmolloy.co.uk>
wrote:

> This solution can be generalised to handle code in between the recursive
> calls (just add more states). The control flow does get more complex,
> though - the question is, is it worth it?
>
> Unrolling this loop is probably not going to usefully create larger basic
> blocks due to how complex the control flow is. I suppose we have the
> opportunity, if we unroll, of CSE and jump threading values through
> equivalent blocks that may gain us time. However, we can still do LICM
> which is probably a major improvement, and there are no calls/returns to
> speculate.
>
> This approach would have to be extended if the recursive calls didn't
> return void (for example, if it were a recursive reduction), but it's not
> impossible.
>
> Sorry for the long post. What do people think?
>
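
On the non-void point in the quote, the same state-machine shape extends
with an explicit return-value slot. A minimal sketch, assuming a
hypothetical tree-sum reduction (again not James's example): when a frame
pops, it leaves its result in `ret`, and the parent folds that in when it
resumes.

#include <vector>

struct Node { int value; Node *left, *right; };

// Recursive original:
//   int sum(Node *n) {
//     if (!n) return 0;
//     return n->value + sum(n->left) + sum(n->right);
//   }

int sum_iterative(Node *root) {
  enum class St { Enter, AfterLeft, AfterRight };
  struct Frame { Node *n; St s; int partial; };
  std::vector<Frame> stack;
  int ret = 0;                                  // result of the last popped frame
  stack.push_back({root, St::Enter, 0});
  while (!stack.empty()) {
    Frame &f = stack.back();
    switch (f.s) {
    case St::Enter:
      if (!f.n) { ret = 0; stack.pop_back(); break; }   // "return 0"
      f.s = St::AfterLeft;
      stack.push_back({f.n->left, St::Enter, 0});
      break;
    case St::AfterLeft:
      f.partial = f.n->value + ret;             // fold in the left subtree's sum
      f.s = St::AfterRight;
      stack.push_back({f.n->right, St::Enter, 0});
      break;
    case St::AfterRight:
      ret = f.partial + ret;                    // fold in the right subtree's sum
      stack.pop_back();                         // "return ret" to the parent
      break;
    }
  }
  return ret;
}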

Personally, I think this would be an amazing transformation for LLVM to do.
But I agree that it gets very complex in the limit. I suspect there will
need to be some complexity bound on doing this transformation, but as of
yet I'm not sure what it would be. Here is what is in my head.

Let's imagine we do this transform. Let's imagine that *no other
optimizations fire*. How close can we make the generated code resemble what
would be executed by actually calling the function recursively? Can we use
the actual stack, and lower the adjustments to match the dynamic trace of
instructions the recursive variant would execute?

My suspicion is that for a decent chunk of real world recursion, we can
transform it into a loop that is not significantly slower to execute. We can
use the same stack adjustment code that would be executed in a recursive
call, etc. My further suspicion is that the complexity of control flow is
not the critical concern here; rather, it is the amount of stack usage, and
how much work we have to do to use a minimal amount of the stack. I could be
completely wrong about this; however, looking at your state machine code, my
first impression is that the code is actually not bad... except for the
stack manipulation. I think we'll actually generate very reasonable code
for the CFGs here.
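
One way to see the stack-usage point: only the values that are live across a
recursive call need a slot in the manual frame, plus a small resume tag. For
the hypothetical visit() sketch above, that is a single pointer and an enum,
which is in the same ballpark as the return address and spills a native call
frame would hold. This is a back-of-the-envelope sketch, not measured data:

struct Node;                           // tree node from the earlier sketch
enum class State { Enter, AfterLeft, AfterRight };

// The entire per-activation state: one live pointer plus the resume tag,
// which stands in for the saved return address of the recursive version.
struct Frame {
  Node *n;
  State s;
};
static_assert(sizeof(Frame) <= 2 * sizeof(void *),
              "per-activation cost stays close to a native call frame");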

So, if we can do this transform in a way that produces generated code that
essentially executes the same dynamic trace of instructions that the
recursive code would execute, I think we're in *very* safe territory. The
primary code size cost will be switching from the call-stack approach to
this manual-stack approach. It'll be like the cost of peeling the outer
iteration. Unless code size is of extreme concern, I don't think there is
much downside.

As for the upside, I think it is huge. LICM alone is a game-changer. GVN
starts to work. PRE can be applied. Will SCEV be able to reason about the
trip count? Probably not. But the entire rest of the optimizer essentially
gets turned *on* by this change.
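
To make the LICM point concrete, a hypothetical example (not taken from
James's post): a computation that depends only on an argument that is
constant across the whole recursion.

struct Node { int value; Node *left, *right; };

int expensive(int k);                  // assumed pure helper for illustration

// Today nothing hoists expensive(k) out of the recursion, so every
// activation recomputes it. Once the recursion is a single loop over a
// manual stack, the same expression is loop-invariant: LICM computes it
// once, and GVN/PRE then get to clean up redundancies across iterations.
void walk(Node *n, int k) {
  if (!n) return;
  n->value *= expensive(k);
  walk(n->left, k);
  walk(n->right, k);
}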