[PATCH] D105996: [AArch64] Enable Upper bound unrolling universally

JinGu Kang via Phabricator via llvm-commits <llvm-commits at lists.llvm.org>
Fri Jul 30 11:34:41 PDT 2021


jaykang10 added a comment.

In D105996#2916990 <https://reviews.llvm.org/D105996#2916990>, @fhahn wrote:

> Thank you very much for sharing the additional numbers!
>
> A few regressions jump out. Do you know if those are real?
>
> - SPEC2017, ThunderX2, -O3 -flto: 500.perlbench_r    -0.772206292
> - SPEC2006, Neoverse N1, -O3 -flto: 401.bzip2    -3.003732177
>
>
>
> In D105996#2916050 <https://reviews.llvm.org/D105996#2916050>, @jaykang10 wrote:
>
>> The result of llvm-test-suite on neoverse-n1 is as below.
>>
>>   Metric: exec_time
>>   
>>   Program                                        results_org results_mod diff 
>>    test-suite..._MemCmp<5, LessThanZero, None>   1464.48     1586.24      8.3%
>>    test-suite...mp<16, GreaterThanZero, First>   469.91      494.79       5.3%
>>    test-suite...ing-dbl/Equivalencing-dbl.test     1.54        1.62       5.0%
>>    test-suite...dbl/LoopRestructuring-dbl.test     2.52        2.62       4.0%
>>    test-suite...mCmp<5, GreaterThanZero, Last>   1529.02     1585.66      3.7%
>>    test-suite...ow-dbl/GlobalDataFlow-dbl.test     2.59        2.66       2.6%
>>    test-suite...BENCHMARK_asinf_autovec_float_   343.94      352.35       2.4%
>>    test-suite...netbench-crc/netbench-crc.test     0.62        0.63       2.2%
>>    test-suite...l/StatementReordering-dbl.test     2.67        2.73       2.1%
>>    test-suite...Raw.test:BM_HYDRO_1D_RAW/44217    39.04       39.86       2.1%
>>    test-suite...w.test:BM_INT_PREDICT_RAW/5001    35.27       35.93       1.9%
>>    test-suite...mbolics-dbl/Symbolics-dbl.test     1.74        1.77       1.8%
>>    test-suite...est:BM_ENERGY_CALC_LAMBDA/5001    62.91       64.01       1.7%
>>    test-suite...bl/CrossingThresholds-dbl.test     2.58        2.62       1.7%
>>    test-suite...-dbl/LinearDependence-dbl.test     2.75        2.79       1.6%
>>    Geomean difference                                                     nan%
>>            results_org    results_mod        diff
>>   count  582.000000     582.000000     581.000000
>>   mean   2669.685396    2665.482627   -0.000234  
>>   std    28714.786308   28644.777094   0.009457  
>>   min    0.606500       0.600200      -0.092831  
>>   25%    3.190475       3.185000      -0.000765  
>>   50%    131.649140     131.477139     0.000016  
>>   75%    602.095967     602.268365     0.000851  
>>   max    486980.720000  484782.360000  0.083140
>
> I think by default `compare.py` only shows a small fixed number of results. You can show all of them by passing `--all`, and you can filter out binaries that are unchanged between runs with `--filter-hash`. IIUC from the data here, there are a few notable regressions in this test set. Do you know if that's noise or actual regressions?

I have not looked into them in detail yet. Let me check them next week. If possible, could you also share the data from your side, please?
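As an aside on the "Geomean difference nan%" row in the table above: one plausible explanation (a sketch, not a claim about what `compare.py` actually computes internally) is that a geometric mean is undefined once any input is zero or negative, since it is typically computed via logarithms. The helper and numbers below are purely illustrative:

```python
# Illustrative sketch of why a geometric mean can come out as nan.
# The function and the sample ratios are hypothetical, not taken from
# llvm-test-suite's compare.py.
import math

def geomean(values):
    """Geometric mean via logs; nan if any value is non-positive."""
    if any(v <= 0 for v in values):
        return float("nan")
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Per-test runtime ratios (results_mod / results_org); > 1.0 is a slowdown.
ratios = [1.083, 1.053, 1.050, 0.907]  # made-up values for illustration
print(geomean(ratios))                 # well-defined: all ratios positive

# A single zero entry (e.g. a test that produced no timing) poisons it.
print(geomean([1.083, 0.0, 0.907]))    # nan
```

So a single missing or zero measurement in the diff column is enough to turn the aggregate into `nan%`, even when all the individual per-test diffs shown are sensible.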


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D105996/new/

https://reviews.llvm.org/D105996
