[PATCH] D105996: [AArch64] Enable Upper bound unrolling universally

JinGu Kang via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Mon Aug 16 04:32:50 PDT 2021


jaykang10 added a comment.

In D105996#2922232 <https://reviews.llvm.org/D105996#2922232>, @fhahn wrote:

> In D105996#2917239 <https://reviews.llvm.org/D105996#2917239>, @jaykang10 wrote:
>
>> I have not looked into them in detail yet. Let me check it next week. If possible, can you also share your side data please?
>
> So far I ran SPEC2017 and all changes look to be within the noise. Still waiting on Geekbench.

Thanks for sharing it.

> SPEC2017 ThunderX2 -O3 -flto : 500.perlbench_r -0.772206292
> SPEC2006 neoverse-n1 core. -O3 -flto: 401.bzip2 -3.003732177

I have run them multiple times again, and it looks like those regressions are not real. The results are as below.

  SPEC2006 neoverse-n1 core. -O3 -flto: 401.bzip2: mean 0.32% score improvement
  SPEC2017 ThunderX2 -O3 -flto : 500.perlbench_r: mean 0.65% score improvement

> I think by default compare.py only shows a small fixed number of results. You can show all by passing --all. You can filter unchanged binaries between runs by using --filter-hash. IIUC from the data here it looks like there are a few notable regressions in this test set. Do you know if that's noise or actual regressions?

I have run it multiple times again, and it looks like these numbers are not real either. The result with `--filter-hash` is as below.

  Metric: exec_time
  
  /usr/local/lib/python2.7/dist-packages/scipy/stats/stats.py:308: RuntimeWarning: divide by zero encountered in log
    log_a = np.log(np.array(a, dtype=dtype))
  Program                                        results.org results.mod diff  
   test-suite...ootout/Shootout-ackermann.test     0.00        0.01      100.0%
   test-suite.../Prolangs-C++/simul/simul.test     0.00        0.01      100.0%
   test-suite...out-C++/Shootout-C++-ary2.test     0.01        0.02      94.9% 
   test-suite...plications/d/make_dparser.test     0.01        0.02      66.1% 
   test-suite...ijndael/security-rijndael.test     0.02        0.03      60.3% 
   test-suite...marks/Stanford/Bubblesort.test     0.02        0.02      25.7% 
   test-suite...hmarks/VersaBench/bmm/bmm.test     1.75        2.01      14.9% 
   test-suite...patricia/network-patricia.test     0.06        0.07      11.9% 
   test-suite...rks/Olden/treeadd/treeadd.test     0.13        0.14      11.7% 
   test-suite...ing/covariance/covariance.test     1.40        1.54      10.6% 
   test-suite...-2d-imper/jacobi-2d-imper.test     0.12        0.13       9.6% 
   test-suite.../Applications/spiff/spiff.test     1.31        1.43       8.9% 
   test-suite...g/correlation/correlation.test     1.26        1.37       8.6% 
   test-suite...rks/FreeBench/mason/mason.test     0.09        0.10       8.2% 
   test-suite...enchmarks/Misc-C++/bigfib.test     0.15        0.16       8.0% 
   Geomean difference                                                     nan% 
           results.org    results.mod          diff
  count  760.000000     760.000000     7.580000e+02
  mean   2025.018981    2101.590727   -2.405934e-03
  std    24798.482931   26215.053402   1.127594e-01
  min    0.000000       0.000000      -1.000000e+00
  25%    0.919290       0.921498      -6.456376e-04
  50%    9.972822       9.972357       5.963245e-07
  75%    449.831035     449.823480     1.218490e-03
  max    479628.130000  507694.300000  1.000000e+00
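
For reference, the table above is from the test-suite comparison script; an invocation along the lines of `utils/compare.py --filter-hash -m exec_time results.org.json results.mod.json` (result file names illustrative) should reproduce it.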

The large relative differences at the top of the table are all on benchmarks with execution times of a few hundredths of a second, and the nan geomean comes from the zero baseline times (hence the scipy divide-by-zero warning above). I think the results look fine overall. Any objection to pushing this change?
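
For context, a minimal sketch of the kind of loop upper-bound unrolling targets (hypothetical example, not from the patch): the exact trip count is unknown at compile time, but a small upper bound on it can be proven, so the unroller can replace the loop with that many guarded copies of the body.

  // Hypothetical C example: the trip count depends on n, but is
  // provably at most 4, so upper-bound unrolling can emit four
  // copies of the body, each guarded by the loop's exit test,
  // instead of keeping the loop.
  void clear_prefix(int *a, int n) {
    for (int i = 0; i < n && i < 4; ++i)
      a[i] = 0;
  }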


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D105996/new/

https://reviews.llvm.org/D105996


