[PATCH] D48879: [XRay][test-suite] Benchmarks for profiling mode implementation

Dean Michael Berris via llvm-commits llvm-commits at lists.llvm.org
Thu Aug 2 16:25:26 PDT 2018


Hi Hans,

I *think* this could be caused by the process running the tests
exhausting available RAM (if it's running in a container with
limited memory). I'll need to look into this more deeply.

I'm OK with running them sequentially, or even just capping the
number of threads in the benchmarks to reduce the demand on available
system memory (see the sketch below).
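
As a rough sketch only (BM_DeepCall and its body are placeholders, not
the actual deep-call-bench code), capping the thread count on a Google
Benchmark registration could look something like this:

```
#include "benchmark/benchmark.h"

// Placeholder workload standing in for the real deep-call-bench body.
static void BM_DeepCall(benchmark::State &state) {
  for (auto _ : state) {
    int x = 0;
    benchmark::DoNotOptimize(x);
  }
}

// Register with a small fixed thread count instead of one thread per
// CPU, so the per-thread XRay profiling buffers stay bounded.
BENCHMARK(BM_DeepCall)->Threads(2);

BENCHMARK_MAIN();
```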

Cheers
On Thu, Aug 2, 2018 at 10:38 PM Hans Wennborg via Phabricator
<reviews at reviews.llvm.org> wrote:
>
> hans added a comment.
> Herald added a subscriber: jfb.
>
> These tests are flaky when run as part of the full test-suite, as happens when testing the release branch.
>
> Any idea why these would break when run in parallel?
>
>   $ /work/llvm-release-test/branches_release_70/sandbox/bin/lit -sv MicroBenchmarks/XRay/ProfilingMode/
>   FAIL: test-suite :: MicroBenchmarks/XRay/ProfilingMode/deep-call-bench.test (1 of 3)
>   ******************** TEST 'test-suite :: MicroBenchmarks/XRay/ProfilingMode/deep-call-bench.test' FAILED ********************
>
>   /work/llvm-release-test/branches_release_70/test-suite-build/MicroBenchmarks/XRay/ProfilingMode/deep-call-bench --benchmark_format=csv > /work/llvm-release-test/branches_release_70/test-suite-build/MicroBenchmarks/XRay/ProfilingMode/Output/deep-call-bench.test.bench.csv
>
>   Run on (56 X 3500 MHz CPU s)
>   2018-08-02 14:24:49
>   ***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
>   /work/llvm-release-test/branches_release_70/test-suite-build/MicroBenchmarks/XRay/ProfilingMode/Output/deep-call-bench.test_run.script: line 1: 223587 Segmentation fault      /work/llvm-release-test/branches_release_70/test-suite-build/MicroBenchmarks/XRay/ProfilingMode/deep-call-bench --benchmark_format=csv > /work/llvm-release-test/branches_release_70/test-suite-build/MicroBenchmarks/XRay/ProfilingMode/Output/deep-call-bench.test.bench.csv
>
>   ********************
>   Testing Time: 187.27s
>   ********************
>   Failing Tests (1):
>       test-suite :: MicroBenchmarks/XRay/ProfilingMode/deep-call-bench.test
>
>     Expected Passes    : 2
>     Unexpected Failures: 1
>
> (How many tests fail, and which ones, varies between runs.)
>
> I've reverted this from trunk in r338710 and merged to 7.0 in r338711 until this is fixed.
>
>
>
> ================
> Comment at: test-suite/trunk/MicroBenchmarks/XRay/ProfilingMode/CMakeLists.txt:12
> +               -fxray-instrument -fxray-modes=xray-profiling)
> +  llvm_test_run()
> +  llvm_test_executable(deep-call-bench deep-call-bench.cc)
> ----------------
> I think there needs to be an llvm_test_run() call for each test executable. Otherwise lit fails like this:
>
>
> ```
> UNRESOLVED: test-suite :: MicroBenchmarks/XRay/ProfilingMode/shallow-call-bench.test (341 of 912)
> ******************** TEST 'test-suite :: MicroBenchmarks/XRay/ProfilingMode/shallow-call-bench.test' FAILED ********************
> Exception during script execution:
> Traceback (most recent call last):
>   File "/work/llvm-release-test/branches_release_70/sandbox/local/lib/python2.7/site-packages/lit-0.7.0.dev0-py2.7.egg/lit/run.py", line 202, in _execute_test_impl
>     result = test.config.test_format.execute(test, lit_config)
>   File "/work/llvm-release-test/branches_release_70/test-suite.src/litsupport/test.py", line 49, in execute
>     litsupport.testfile.parse(context, test.getSourcePath())
>   File "/work/llvm-release-test/branches_release_70/test-suite.src/litsupport/testfile.py", line 50, in parse
>     raise ValueError("Test has no RUN: line!")
> ValueError: Test has no RUN: line!
>
>
> ********************
> Testing: 0 .. 10.. 20.. 30.
> UNRESOLVED: test-suite :: MicroBenchmarks/XRay/ProfilingMode/wide-call-bench.test (343 of 912)
> ******************** TEST 'test-suite :: MicroBenchmarks/XRay/ProfilingMode/wide-call-bench.test' FAILED ********************
> Exception during script execution:
> Traceback (most recent call last):
>   File "/work/llvm-release-test/branches_release_70/sandbox/local/lib/python2.7/site-packages/lit-0.7.0.dev0-py2.7.egg/lit/run.py", line 202, in _execute_test_impl
>     result = test.config.test_format.execute(test, lit_config)
>   File "/work/llvm-release-test/branches_release_70/test-suite.src/litsupport/test.py", line 49, in execute
>     litsupport.testfile.parse(context, test.getSourcePath())
>   File "/work/llvm-release-test/branches_release_70/test-suite.src/litsupport/testfile.py", line 50, in parse
>     raise ValueError("Test has no RUN: line!")
> ValueError: Test has no RUN: line!
> ```
>
>
> This worked for me:
>
> ```
>   llvm_test_run()
>   llvm_test_executable(deep-call-bench deep-call-bench.cc)
>   target_link_libraries(deep-call-bench benchmark)
>   llvm_test_run()
>   llvm_test_executable(shallow-call-bench shallow-call-bench.cc)
>   target_link_libraries(shallow-call-bench benchmark)
>   llvm_test_run()
>   llvm_test_executable(wide-call-bench wide-call-bench.cc)
>   target_link_libraries(wide-call-bench benchmark)
> ```
>
> However, the tests are flaky when run together; see the output above.
>
>
> Repository:
>   rL LLVM
>
> https://reviews.llvm.org/D48879
>
>
>

