[llvm-dev] exponential code explosion during inlining

Xinliang David Li via llvm-dev <llvm-dev at lists.llvm.org>
Tue Jun 8 17:02:37 PDT 2021


This may be related to https://reviews.llvm.org/D98481

David

On Tue, Jun 8, 2021 at 8:43 AM Joseph Tremoulet via llvm-dev <llvm-dev at lists.llvm.org> wrote:

> Hi,
>
>
>
> I filed a bug (PR50485 [1]) a couple weeks ago for some pathological
> behavior we’ve hit in the inliner, but there hasn’t been any reply on the
> bug so I figured I’d broaden the audience and ask about the issue here.
>
>
>
> The problem is partially a phase-ordering issue: after inlining an SCC we
> run cleanup optimizations that reduce the size of functions in that SCC.
> So a call foo -> bar that looks unprofitable due to bar’s size while the
> SCC containing foo and bar is being processed may suddenly look profitable,
> thanks to bar’s reduced size, when we later consider a copy of that same
> callsite created by inlining foo into some function in a later SCC.
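>
> For concreteness, here is a minimal C++ sketch of the shape involved (the
> names and bodies are illustrative only, not taken from the actual repro):
>
>   // foo and bar call each other, so they sit in the same SCC.
>   int bar(int n);
>   int foo(int n) { return n <= 0 ? 1 : bar(n - 1) + bar(n - 2); }
>   int bar(int n) { return n <= 0 ? 1 : foo(n - 1) + foo(n - 2); }
>
>   // While the {foo, bar} SCC is processed, the foo -> bar callsites stop
>   // looking profitable once bar has grown; cleanup then shrinks bar again.
>   // When caller (in a later SCC) is processed and foo is inlined into it,
>   // the copied foo -> bar callsites look profitable all over again.
>   int caller(int n) { return foo(n) + bar(n); }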
>
>
>
> This interacts badly with the inliner’s approach of relying on local
> heuristics to eventually converge (rather than limiting itself with some
> global budget).  I’ll copy the comment explaining that approach here:
>
>
>
>   // We use a single common worklist for calls across the entire SCC. We
>   // process these in-order and append new calls introduced during inlining
>   // to the end.
>   //
>   // Note that this particular order of processing is actually critical to
>   // avoid very bad behaviors. Consider *highly connected* call graphs where
>   // each function contains a small amount of code and a couple of calls to
>   // other functions. Because the LLVM inliner is fundamentally a bottom-up
>   // inliner, it can handle gracefully the fact that these all appear to be
>   // reasonable inlining candidates as it will flatten things until they
>   // become too big to inline, and then move on and flatten another batch.
>   //
>   // However, when processing call edges *within* an SCC we cannot rely on
>   // this bottom-up behavior. As a consequence, with heavily connected *SCCs*
>   // of functions we can end up incrementally inlining N calls into each of
>   // N functions because each incremental inlining decision looks good and we
>   // don't have a topological ordering to prevent explosions.
>   //
>   // To compensate for this, we don't process transitive edges made immediate
>   // by inlining until we've done one pass of inlining across the entire SCC.
>   // Large, highly connected SCCs still lead to some amount of code bloat in
>   // this model, but it is uniformly spread across all the functions in the
>   // SCC and eventually they all become too large to inline, rather than
>   // incrementally making a single function grow in a super linear fashion.
>
>
>
> The problem in a nutshell is that “eventually they all become too large to
> inline” is true while inlining is happening in their SCC, but then the
> cleanup makes them small again and so the inliner goes nuts chasing all the
> now-profitable paths through the highly connected SCC when considering them
> as transitive inlines into a subsequent SCC.
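>
> Rough arithmetic for why this goes exponential (the numbers are purely
> illustrative): if each function in the SCC contains c calls to other SCC
> members and cleanup keeps all of them looking profitable, then chasing
> transitive inlines to depth d from a single callsite in the later SCC
> materializes on the order of c^d copies of SCC function bodies; with c = 3
> and d = 10 that is already about 59,000 (3^10) from one original call.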
>
>
>
> I’d love some thoughts on how we might best go about addressing this.  One
> option would be to treat it as a phase-ordering issue and run cleanup at
> the start of inlining each SCC: in the cases where we’ve hit this, the
> cleanup hasn’t actually depended on inlining to expose the opportunities,
> it just happened to run for the first time immediately after inlining.
> Another option would be to limit transitive inlines at callsites created by
> inlining functions from already-converged SCCs.  We could either do that
> wholesale (if we’re expecting them to be too big to inline at this point,
> that shouldn’t be a big loss, right?) or just cap their depth, say, to cut
> off the exponential explosion.
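>
> To make the depth-capping variant concrete, here is a rough C++ sketch of
> the bookkeeping I have in mind (the names are hypothetical, not the
> existing Inliner.cpp data structures):
>
>   // Hypothetical sketch: record, for each pending callsite, how long the
>   // chain of transitive inlines that exposed it is, and stop following
>   // callsites created by inlining functions from an already-converged SCC
>   // once a fixed cap is reached.
>   struct PendingCall {
>     void *CallSite;           // stand-in for the real callsite handle
>     unsigned TransitiveDepth; // 0 for callsites present before inlining
>   };
>
>   constexpr unsigned MaxTransitiveDepth = 4; // illustrative cap
>
>   bool worthConsidering(const PendingCall &PC, bool CalleeFromConvergedSCC) {
>     if (!CalleeFromConvergedSCC)
>       return true;                                  // normal cost model
>     return PC.TransitiveDepth < MaxTransitiveDepth; // cut off the explosion
>   }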
>
>
>
>
>
> Thanks,
>
> -Joseph
>
>
>
> [1] PR50485 - Exponential code explosion during inlining:
> https://bugs.llvm.org/show_bug.cgi?id=50485