<div dir="ltr">IMO, a good inliner with a precise cost/benefit model will eventually need what Art is proposing here. <div><br></div><div>Giving the function call overhead as an example. It depends on a couple of factors: 1) call/return instruction latency; 2) function epilogue/prologue; 3) calling convention (argument parsing, using registers or not, what register classes etc). All these factors depend on target information. If we want go deeper, we know certain micro architectures uses a stack of call/return pairs to help branch prediction of ret instructions -- such stack has a target specific limit which can be triggered when a callsite is deep in the callchain. Register file size and register pressure increase due to inline comes as another example.</div><div><br></div><div>Another relevant example is the icache/itlb sizes. To do a more precise analysis of the cost to 'speed' due to icache/itlb pressure increase requires target information, profile information as well as some global analysis. Easwaran has done some research in this area in the past and can share the analysis design when other things are ready.</div><div class="gmail_extra"><br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
> Hi Art,
>
> I've long thought that we should have a more principled way of doing inline profitability. There is obviously some cost to executing a function body, some call site overhead, and some cost reduction associated with any post-inlining simplifications. If inlining reduces the overall call site cost by more than some factor, say 1% (this should probably depend on the optimization level), then we should inline. With profiling information, we might even use global speedup instead of local speedup.

Yes -- with target-specific cost information, global speedup analysis can be more precise. :)
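To put the rule Hal describes in concrete terms, a sketch, assuming the cost terms are supplied by the target while the decision rule stays generic (all names here are illustrative, not a proposed interface):

// Illustrative only.  The three cost terms would be target supplied; the
// decision rule is the target-independent part.
struct InlineCostTerms {
  double CalleeBodyCost;        // cost of executing the callee body
  double CallSiteOverhead;      // per-call overhead (e.g. estimateCallOverhead above)
  double SimplificationSavings; // expected savings from post-inlining simplification
};

// Inline when the call site gets cheaper by more than some fraction
// ("say 1%", possibly scaled by optimization level).
bool shouldInline(const InlineCostTerms &C, double Threshold = 0.01) {
  double CostWithCall = C.CallSiteOverhead + C.CalleeBodyCost;
  double CostIfInlined = C.CalleeBodyCost - C.SimplificationSavings;
  return (CostWithCall - CostIfInlined) > Threshold * CostWithCall;
}

With profile data, the same check could be applied to a global speedup estimate rather than the local one.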
> Whether we need a target customization of this threshold, or just a way for a target to supplement the final inlining decision, is unclear to me. It is also true that the result of a bunch of locally-optimal decisions might be far from the global optimum. Maybe the target has something to say about that?

The concept of a threshold can be a topic for another discussion. In the current design, I think the threshold should remain target-independent; it is the cost that is target-specific.

thanks,

David
> In short, I'm fine with what you're proposing, but to the extent possible, I want the numbers provided by the target to mean something. Replacing a global set of somewhat-arbitrary magic numbers with target-specific sets of somewhat-arbitrary magic numbers should be our last choice.
>
> Thanks again,
> Hal
>
> > Thanks,
> > --Artem Belevich
>
> --
> Hal Finkel
> Assistant Computational Scientist
> Leadership Computing Facility
> Argonne National Laboratory