[PATCH] D93530: [DSE] Add support for not aligned begin/end
Evgeniy via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Sun Dec 20 23:55:15 PST 2020
ebrevnov added a comment.
I had already tried to measure performance with the test-suite previously with out success. This time again I observe big variation. I'm using dedicated performance machine which runs only my process. I've build test-suite as follows:
> cmake -DTEST_SUITE_BENCHMARKING_ONLY=ON -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -C../cmake/caches/O3.cmake ..; make -j16
And I ran the tests 3 times. Here are the results comparing the first vs. second vs. third runs.
{noformat}
> utils/compare.py --merge-average --filter-short ~/results.coffeelake.orig1.json vs ~/results.coffeelake.orig2.json
Tests: 736
Short Running: 197 (filtered out)
Remaining: 539
Metric: exec_time
Program lhs rhs diff
test-suite....test:BM_MULADDSUB_LAMBDA/5001 10.18 12.39 21.8%
test-suite...da.test:BM_PIC_1D_LAMBDA/44217 514.08 436.69 -15.1%
test-suite...sCRaw.test:BM_HYDRO_2D_RAW/171 9.97 11.12 11.5%
test-suite...bda.test:BM_PIC_1D_LAMBDA/5001 53.98 48.92 -9.4%
test-suite...lcalsCRaw.test:BM_ADI_RAW/5001 90.87 84.16 -7.4%
test-suite...XRayFDRMultiThreaded/threads:2 142.53 151.64 6.4%
test-suite...sARaw.test:BM_VOL3D_CALC_RAW/2 2.57 2.43 -5.6%
test-suite....test:BENCHMARK_HARRIS/256/256 341.95 359.82 5.2%
test-suite...a/kernels/doitgen/doitgen.test 0.67 0.70 3.8%
test-suite.../Applications/spiff/spiff.test 1.12 1.08 -3.5%
test-suite...g/correlation/correlation.test 1.27 1.23 -3.2%
test-suite....test:BENCHMARK_HARRIS/512/512 1854.99 1913.18 3.1%
test-suite...test:BM_MULADDSUB_LAMBDA/44217 95.43 98.36 3.1%
test-suite...aw.test:BM_MULADDSUB_RAW/44217 95.66 92.73 -3.1%
test-suite...CRaw.test:BM_MAT_X_MAT_RAW/171 105.88 102.78 -2.9%
Geomean difference nan%
lhs rhs diff
count 538.000000 539.000000 538.000000
mean 1554.916420 1548.375947 -0.000238
std 13204.345562 13143.680859 0.015499
min 0.610700 0.609000 -0.150543
25% 2.729454 2.729260 -0.000955
50% 92.554771 90.861409 0.000000
75% 555.904231 555.793696 0.000634
max 208982.569333 207906.284333 0.217786
> utils/compare.py --merge-average --filter-short ~/results.coffeelake.orig2.json vs ~/results.coffeelake.orig3.json
Tests: 736
Short Running: 197 (filtered out)
Remaining: 539
Metric: exec_time
Program lhs rhs diff
test-suite...sCRaw.test:BM_PIC_1D_RAW/44217 433.62 519.59 19.8%
test-suite....test:BM_MULADDSUB_LAMBDA/5001 12.39 10.36 -16.4%
test-suite...CHMARK_ANISTROPIC_DIFFUSION/64 2056.09 2369.92 15.3%
test-suite...HMARK_ANISTROPIC_DIFFUSION/128 8850.38 10172.76 14.9%
test-suite...HMARK_ANISTROPIC_DIFFUSION/256 36797.14 42257.00 14.8%
test-suite...CHMARK_ANISTROPIC_DIFFUSION/32 457.93 523.86 14.4%
test-suite....test:BENCHMARK_HARRIS/256/256 359.82 310.62 -13.7%
test-suite...lsCRaw.test:BM_PIC_1D_RAW/5001 48.22 54.78 13.6%
test-suite...sCRaw.test:BM_HYDRO_2D_RAW/171 11.12 11.98 7.7%
test-suite...flt/LoopRestructuring-flt.test 2.64 2.79 5.5%
test-suite...XRayFDRMultiThreaded/threads:2 151.64 143.43 -5.4%
test-suite...CRaw.test:BM_HYDRO_2D_RAW/5001 304.73 320.81 5.3%
test-suite...mbda.test:BM_INIT3_LAMBDA/5001 9.20 9.68 5.2%
test-suite.../Applications/spiff/spiff.test 1.08 1.11 2.8%
test-suite...++/Shootout-C++-ackermann.test 0.63 0.65 2.8%
Geomean difference nan%
lhs rhs diff
count 539.000000 538.000000 538.000000
mean 1548.375947 1552.758096 0.001124
std 13143.680859 12998.591022 0.020444
min 0.609000 0.609400 -0.164172
25% 2.729260 2.770800 -0.000865
50% 90.861409 91.507016 -0.000025
75% 555.793696 555.931031 0.000464
max 207906.284333 204187.808667 0.198265
{noformat}
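As an aside, the "Geomean difference nan%" lines above may simply come from the mismatched test counts (538 vs. 539): compare.py aggregates per-test ratios, and a test present in only one run yields a NaN ratio that poisons the whole geometric mean. A minimal sketch (the `geomean` helper is my own illustration, not compare.py's actual code):

```python
import math

def geomean(ratios):
    # Geometric mean via the log-sum trick; any NaN input propagates
    # through math.log and sum, so the result is NaN.
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Ratios taken from a few rows of the first table above (rhs / lhs).
ratios = [12.39 / 10.18, 436.69 / 514.08, 11.12 / 9.97]
print(geomean(ratios))  # finite when every test has both samples

# One test missing from one run -> NaN ratio -> NaN geomean.
print(geomean(ratios + [float("nan")]))
```

So the NaN itself is likely a reporting artifact of the unequal sample sets rather than a measurement problem.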
Is this amount of variation expected? What am I doing wrong?
Thanks,
Evgeniy
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D93530/new/
https://reviews.llvm.org/D93530