<div dir="ltr"><div class="gmail_default" style="font-family:monospace;font-size:small;color:#000000">Changing the default threshold needs lots of benchmarking.</div><div class="gmail_default" style="font-family:monospace;font-size:small;color:#000000"><br></div><div class="gmail_default" style="font-family:monospace;font-size:small;color:#000000">For this particular case, IMO the better way is to enhance inline cost analysis to give callsite more bonus if it enables SROA in call context. The analysis needs to be careful such that if there is another callsite that blocks SROA, and that callsites can never be inlined, then the bonus can not be applied.</div><div class="gmail_default" style="font-family:monospace;font-size:small;color:#000000"><br></div><div class="gmail_default" style="font-family:monospace;font-size:small;color:#000000">David </div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Apr 24, 2021 at 4:10 AM Roman Lebedev via Phabricator <<a href="mailto:reviews@reviews.llvm.org">reviews@reviews.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">lebedev.ri created this revision.<br>
lebedev.ri added reviewers: aeubanks, eraman, Prazek, davidxl.
lebedev.ri added a project: LLVM.
Herald added subscribers: haicheng, hiraditya.
lebedev.ri requested review of this revision.

I'm observing a rather large runtime performance regression
as a result of the NewPM switch on one of RawSpeed's benchmarks:

  raw.pixls.us-unique/Panasonic/DC-GH5S$ /repositories/googlebenchmark/tools/compare.py -a benchmarks ~/rawspeed/build-{old,new}/src/utilities/rsbench/rsbench --benchmark_counters_tabular=true P1022085.RW2 --benchmark_repetitions=9 --benchmark_min_time=1
  RUNNING: /home/lebedevri/rawspeed/build-old/src/utilities/rsbench/rsbench --benchmark_counters_tabular=true P1022085.RW2 --benchmark_repetitions=9 --benchmark_min_time=1 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmpk9pkqe2s
  2021-04-24T14:00:17+03:00
  Running /home/lebedevri/rawspeed/build-old/src/utilities/rsbench/rsbench
  Run on (32 X 3599.99 MHz CPU s)
  CPU Caches:
    L1 Data 32 KiB (x16)
    L1 Instruction 32 KiB (x16)
    L2 Unified 512 KiB (x16)
    L3 Unified 32768 KiB (x2)
  Load Average: 0.65, 0.51, 1.27
  ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Benchmark                                                      Time             CPU   Iterations  CPUTime,s CPUTime/WallTime     Pixels Pixels/CPUTime Pixels/WallTime Raws/CPUTime Raws/WallTime WallTime,s
  ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  P1022085.RW2/threads:32/process_time/real_time_mean        0.748 ms         23.9 ms            9  0.0239231          31.9721   10.3933M       434.452M        13.8904G       41.801      1.33647k    748.25u
  P1022085.RW2/threads:32/process_time/real_time_median      0.748 ms         23.9 ms            9  0.0239156          31.9716   10.3933M       434.585M        13.8934G      41.8138      1.33676k   748.079u
  P1022085.RW2/threads:32/process_time/real_time_stddev      0.003 ms        0.080 ms            9   80.0846u         6.00073m          0       1.45335M        48.9162M     0.139834        4.7065   2.63684u
  RUNNING: /home/lebedevri/rawspeed/build-new/src/utilities/rsbench/rsbench --benchmark_counters_tabular=true P1022085.RW2 --benchmark_repetitions=9 --benchmark_min_time=1 --benchmark_display_aggregates_only=true --benchmark_out=/tmp/tmpt6ijfryg
  2021-04-24T14:00:31+03:00
  Running /home/lebedevri/rawspeed/build-new/src/utilities/rsbench/rsbench
  Run on (32 X 3600.05 MHz CPU s)
  CPU Caches:
    L1 Data 32 KiB (x16)
    L1 Instruction 32 KiB (x16)
    L2 Unified 512 KiB (x16)
    L3 Unified 32768 KiB (x2)
  Load Average: 5.54, 1.56, 1.61
  ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  Benchmark                                                      Time             CPU   Iterations  CPUTime,s CPUTime/WallTime     Pixels Pixels/CPUTime Pixels/WallTime Raws/CPUTime Raws/WallTime WallTime,s
  ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  P1022085.RW2/threads:32/process_time/real_time_mean        0.851 ms         27.2 ms            9  0.0272077          31.9615   10.3933M       382.027M        12.2102G      36.7569      1.17481k   851.271u
  P1022085.RW2/threads:32/process_time/real_time_median      0.848 ms         27.1 ms            9  0.0271017          31.9699   10.3933M       383.494M        12.2598G      36.8981      1.17959k   847.755u
  P1022085.RW2/threads:32/process_time/real_time_stddev      0.008 ms        0.243 ms            9   243.106u        0.0215795          0       3.38806M         116.08M     0.325984       11.1687   8.16022u
  Comparing /home/lebedevri/rawspeed/build-old/src/utilities/rsbench/rsbench to /home/lebedevri/rawspeed/build-new/src/utilities/rsbench/rsbench
  Benchmark                                                               Time             CPU      Time Old      Time New       CPU Old       CPU New
  ----------------------------------------------------------------------------------------------------------------------------------------------------
  P1022085.RW2/threads:32/process_time/real_time_pvalue                 0.0004          0.0004      U Test, Repetitions: 9 vs 9
  P1022085.RW2/threads:32/process_time/real_time_mean                  +0.1377         +0.1373             1             1            24            27
  P1022085.RW2/threads:32/process_time/real_time_median                +0.1332         +0.1333             1             1            24            27
  P1022085.RW2/threads:32/process_time/real_time_stddev                +2.0947         +2.0384             0             0             0             0

I've posted the repro IR at https://bugs.llvm.org/show_bug.cgi?id=50099.
It happens because a certain destructor, which runs at the end of a certain function,
is no longer inlined, which in turn prevents SROA of the class.
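
Roughly, the pattern looks like this (a hypothetical, heavily reduced C++
illustration with invented names, not the actual RawSpeed code):

  #include <cstddef>

  struct Histogram {
    unsigned Bins[4] = {};
    void add(unsigned V) { Bins[V % 4] += 1; }
    ~Histogram(); // out-of-line destructor, defined elsewhere
  };

  unsigned decode(const unsigned *Data, std::size_t N) {
    Histogram H; // SROA candidate: could live entirely in registers
    for (std::size_t I = 0; I != N; ++I)
      H.add(Data[I]);
    return H.Bins[0];
    // ~Histogram() runs here. If the inliner declines to inline it, &H is
    // passed to an opaque call, SROA of H is blocked, and H stays in memory.
  }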

I guess this bisects to D28331 <https://reviews.llvm.org/D28331>, which added the lower threshold for cold callsites,
with the comment:

In D28331 <https://reviews.llvm.org/D28331>, @eraman wrote:

> I've tuned the thresholds for the hot and cold callsites using a hacked up version of the old inliner that explicitly computes BFI on a set of internal benchmarks and spec. Once the new PM based pipeline stabilizes (IIRC Chandler mentioned there are known issues) I'll benchmark this again and adjust the thresholds if required.

Since the values haven't been changed since then, I guess that hasn't happened yet.

Analyzing the problem on top of D101228 <https://reviews.llvm.org/D101228>, inlining that destructor would cost `50`, while the threshold is `45`.
Bumping the threshold to `50` isn't sufficient, because the threshold check is non-inclusive (is that intentional?),
so I rounded it up to `55`.
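
For reference, the reason a cost of `50` against a threshold of `50` still fails
is that the final profitability check is a strict comparison. A paraphrased
sketch (the helper name is invented for illustration; this is not the verbatim
InlineCost.cpp code):

  // Cost == Threshold is still "too expensive"; only Cost < Threshold inlines.
  static bool isInliningProfitable(int Cost, int Threshold) {
    return Cost < Threshold;
  }

For experiments, the same value can also be tried without rebuilding, by
passing `-inline-cold-callsite-threshold=55` to opt
(or `-mllvm -inline-cold-callsite-threshold=55` through clang).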

This addresses the issue: inlining happens, SROA happens, and performance recovers.

While this seems like the least problematic approach,
I think we may want to somehow boost inlining of callees
whose arguments are marked as likely to be SROA'able.
Perhaps, at the very least, we shouldn't apply this budget-lowering logic in such cases?
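
Something along these lines is what I have in mind. Purely an illustrative
sketch, not a working patch: callSiteIsCold, allocaIsSROACandidate and the
threshold variable below are hypothetical stand-ins for the existing
cost-analysis machinery.

  #include "llvm/IR/InstrTypes.h"
  #include "llvm/IR/Instructions.h"
  #include <algorithm>
  using namespace llvm;

  extern bool callSiteIsCold(const CallBase &CB);          // hypothetical
  extern bool allocaIsSROACandidate(const AllocaInst &AI); // hypothetical
  extern int ColdCallSiteInlineThreshold;                  // stand-in for the cl::opt

  // Idea: don't lower the inlining budget for a cold callsite if inlining it
  // could unblock SROA of one of the caller's allocas.
  static int adjustThresholdForCallSite(CallBase &CB, int Threshold) {
    bool FeedsSROACandidate = false;
    for (Value *Arg : CB.args()) {
      auto *AI = dyn_cast<AllocaInst>(Arg->stripPointerCasts());
      if (AI && allocaIsSROACandidate(*AI)) {
        FeedsSROACandidate = true;
        break;
      }
    }

    if (callSiteIsCold(CB) && !FeedsSROACandidate)
      Threshold = std::min(Threshold, ColdCallSiteInlineThreshold);
    return Threshold;
  }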


Repository:
  rG LLVM Github Monorepo

https://reviews.llvm.org/D101229

Files:
  llvm/lib/Analysis/InlineCost.cpp
  llvm/test/Transforms/Inline/X86/inline-cold-callsite.ll

Index: llvm/test/Transforms/Inline/X86/inline-cold-callsite.ll
===================================================================
--- /dev/null
+++ llvm/test/Transforms/Inline/X86/inline-cold-callsite.ll
@@ -0,0 +1,37 @@
+; RUN: opt < %s -inline -debug-only=inline-cost -print-instruction-comments -S -mtriple=x86_64-unknown-linux-gnu 2>&1 | FileCheck %s
+; RUN: opt < %s -passes='cgscc(inline)' -debug-only=inline-cost -print-instruction-comments -S -mtriple=x86_64-unknown-linux-gnu 2>&1 | FileCheck %s
+
+; REQUIRES: asserts
+
+; Check the threshold for inlining cold callsites.
+
+; CHECK: Analyzing call of cold_callee... (caller:caller)
+; CHECK-NEXT: Cold callsite
+; CHECK-NEXT: define void @cold_callee() {
+; CHECK-NEXT: ; cost before = -30, cost after = -30, threshold before = 51, threshold after = 51, cost delta = 0
+; CHECK-NEXT:   ret void
+; CHECK-NEXT: }
+
+declare void @hot_callee()
+
+define void @cold_callee() {
+  ret void
+}
+
+define void @caller(i1 %c) {
+entry:
+  br i1 %c, label %hot, label %cold, !prof !0
+
+hot:
+  call void @hot_callee()
+  br label %end
+
+cold:
+  call void @cold_callee()
+  br label %end
+
+end:
+  ret void
+}
+
+!0 = !{!"branch_weights", i32 1000000, i32 1}
Index: llvm/lib/Analysis/InlineCost.cpp
===================================================================
--- llvm/lib/Analysis/InlineCost.cpp
+++ llvm/lib/Analysis/InlineCost.cpp
@@ -68,7 +68,7 @@

 static cl::opt<int>
     ColdCallSiteThreshold("inline-cold-callsite-threshold", cl::Hidden,
-                          cl::init(45), cl::ZeroOrMore,
+                          cl::init(55), cl::ZeroOrMore,
                           cl::desc("Threshold for inlining cold callsites"));

 static cl::opt<bool> InlineEnableCostBenefitAnalysis(
@@ -89,7 +89,7 @@
 // PGO before we actually hook up inliner with analysis passes such as BPI and
 // BFI.
 static cl::opt<int> ColdThreshold(
-    "inlinecold-threshold", cl::Hidden, cl::init(45), cl::ZeroOrMore,
+    "inlinecold-threshold", cl::Hidden, cl::init(55), cl::ZeroOrMore,
     cl::desc("Threshold for inlining functions with cold attribute"));

 static cl::opt<int>

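To make the suggestion at the top a bit more concrete, here is a very rough
sketch of the guarded bonus I have in mind. The two extern queries are
hypothetical stand-ins, not the actual InlineCost.cpp API, and the real logic
would live inside the call analyzer:

  #include "llvm/IR/InstrTypes.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  extern bool useBlocksSROA(const Use &U);                 // hypothetical
  extern bool canEventuallyBeInlined(const CallBase &CB);  // hypothetical

  // Grant the candidate callsite a bonus for enabling SROA of a caller-local
  // alloca, but only if no *other* user of that alloca would keep blocking
  // SROA after this callsite is inlined.
  static int sroaEnablingBonus(const CallBase &Candidate, int ProposedBonus) {
    for (const Value *Arg : Candidate.args()) {
      const auto *AI = dyn_cast<AllocaInst>(Arg->stripPointerCasts());
      if (!AI)
        continue;
      for (const Use &U : AI->uses()) {
        const auto *OtherCall = dyn_cast<CallBase>(U.getUser());
        if (OtherCall == &Candidate)
          continue; // the callsite currently being evaluated
        if (OtherCall ? !canEventuallyBeInlined(*OtherCall) : useBlocksSROA(U))
          return 0; // another user keeps the alloca in memory: no bonus
      }
    }
    return ProposedBonus;
  }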