[cfe-dev] [LLVMdev] Does LLVM optimize recursive call?

Kent Williams nkwmailinglists at gmail.com
Wed Oct 3 08:18:18 PDT 2012


Also known as 'tail recursion' -- it's an important concept to understand.

It depends on this pattern:

ReturnVal
AFunction( <args> )
{
    if (<args> == terminal)
        return ReturnVal.null;
    ReturnVal LocalResult = <computation>(<args>);
    return LocalResult {composed with} AFunction(<args>.next);
}
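
To make that concrete, here is a hypothetical C instance of the
pattern -- the Node type and the SumList name are purely illustrative:

struct Node { int value; struct Node *next; };

/* Recursive form: the only work left after the recursive call is
   combining its result with the current element. */
int SumList(struct Node *list)
{
    if (list == NULL)          /* terminal case */
        return 0;
    return list->value + SumList(list->next);
}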

The possibility of optimizing out the recursion comes from there being
no state that has to be stacked: whatever temporaries <computation>
needs can be reused rather than piling up a fresh set of locals for
each call. It's equivalent to

ReturnVal AFunction( <args> )
{
    ReturnVal accum = init;
    Args current;
    while ( ( current = <args>.front ) != terminal )
    {
        accum = accum <compose> current;
        <args>.pop_front;
    }
    return accum;
}
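
In concrete C, the hypothetical SumList above collapses to the loop
below; and because the composition here (integer addition) is
associative, an optimizing compiler such as clang/LLVM can often
perform this rewrite itself at -O2, though that is not guaranteed in
every case:

/* Iterative equivalent of SumList (struct Node as defined above):
   one accumulator, no per-call state. */
int SumList(struct Node *list)
{
    int accum = 0;
    while (list != NULL)
    {
        accum += list->value;
        list = list->next;
    }
    return accum;
}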

The reason tail recursion isn't more widely used and taught is that
the mapping onto iterative solutions is so straightforward that
knowing how to solve problems iteratively seems more expedient and
more broadly useful.

On the other hand, a deep understanding of induction is one thing that
separates literate programmers from hacks.


On Wed, Oct 3, 2012 at 4:33 AM, Journeyer J. Joh
<oosaprogrammer at gmail.com> wrote:
> Hello David Chisnall, John McCall, David Blaikie, and others whom this
> may concern
>
>> If your teachers are really making this sort of superficial generalization,
>> they're doing you a disservice.
>
> This hit my head.
> Actually I didn't test the performance of those two, iteration and
> recursion, at all. And in this field of study, testing is very
> important, I believe.
>
> And "tail call optimization" -- I will remember this as well.
>
> Thank you all.
>
> I learned something new today.
>
> Sincerely
> Journeyer J. Joh
>
>
> 2012/10/3 David Chisnall <David.Chisnall at cl.cam.ac.uk>:
>> On 3 Oct 2012, at 09:48, John McCall wrote:
>>
>>> If your teachers are really making this sort of superficial generalization,
>>> they're doing you a disservice.
>>
>> I completely agree with John here - we had a case a few years ago where someone spent ages refactoring their algorithm to be iterative and was surprised that it became slower (with the MS compiler).  Looking at the assembly, it was using push and pop instructions for the temporaries on the recursive version, but had to do more complex addressing to find the right place for them in the iterative version, which made everything slower (and prevented the CPU from doing some register-stack aliasing tricks).
>>
>> One thing John didn't mention is that LLVM will try to do tail call optimisation.  This means that if your function ends with something like:
>>
>> return a(b);
>>
>> It will attempt to turn this into a jump, rather than a call.  This doesn't happen in all cases, because it is dependent on the calling convention.  For static functions in C, the compiler is free to choose whatever calling convention it wants (because it's not part of the public API), but for others it may need to use a non-standard calling convention.
>>
>> This has the advantage that it doesn't consume extra stack space (especially if it's self-recursive, it only consumes one stack frame no matter how deep the recursion goes) and the return will return to the outer caller.  It is slightly more complex, because most C calling conventions require the caller to clean up the call frame (the callee can't clean it up in the general case, because C allows variadic functions), and so a tail call optimisation must ensure that the inner callee has the same layout of call frame as its caller.
>>
>> David
>
>
>
> --
> ----------------------------------------
> Journeyer J. Joh
> o o s a p r o g r a m m e r
> a t
> g m a i l  d o t  c o m
> ----------------------------------------
>



