[PATCH] D32272: [XRay] Add Google Benchmark library + initial XRay benchmarks

Dean Michael Berris via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Wed Jun 28 19:06:17 PDT 2017


dberris accepted this revision.
dberris added a comment.

In https://reviews.llvm.org/D32272#789252, @hfinkel wrote:

> In https://reviews.llvm.org/D32272#789237, @MatzeB wrote:
>
> > - Today's lit works best if you have one file for each test; also, test discovery is done before any test is run.
> > - If there is a flag to only run a specific benchmark in a googletest executable, then we could work around this limitation on the cmake side:
> >
> >   Just generate multiple .test files with slightly different RUN: lines (assume that option is called --only-test):
> >
> >   ```
> >   retref-bench-unpatched.test contains RUN: retref-bench --only-test unpatched
> >   retref-bench-patched.test   contains RUN: retref-bench --only-test patched
> >   ...
> >   ```
> >
> >   It would require us to repeat the sub-benchmark names in the cmake file (or write cmake logic to scan the source code?)
>
>
> I suppose we could do this, but realistically, we need to be able to collect multiple named timings per test. We need some way of telling lit, "don't use the timing from this executable, it provides timing numbers some other way", and then some mechanism for parsing the output to collect those numbers. It seems fairly straightforward to do this as an LNT plugin, because the benchmark library has its own well-defined output format (which is why I mentioned that), but we could build something into lit as well.


It might sound weird, but I agree with both of you. :)

On the one hand, we shouldn't need to introduce too much change all at once -- right now I'm happy to just get the benchmarks in, and have them be runnable by people who want to test this on different platforms. While they might not yet look like the rest of the benchmarks/tests in the test-suite, I'd be very happy to iterate on this, following both @MatzeB's and @hfinkel's suggestions. More concretely:

- In the interim, create multiple `.test` files, each running one of the named benchmarks in the single .cc file (see the sketches after this list).
- In the future, teach LNT to understand the Google Benchmark output (second sketch below).
- Once LNT knows how to interpret the data and we can set up the test runs to reduce variance (Google Benchmark has a mode that reports only mean times and standard deviation, given time bounds and iteration bounds), we can clean up the multiple-files-per-benchmark arrangement.
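To make the interim step concrete, a per-benchmark `.test` file could look roughly like the following. The file name and filter pattern are invented here for illustration; `--benchmark_filter` and `--benchmark_format` are the existing Google Benchmark flags that could play the role of the hypothetical `--only-test` option from the quoted suggestion:

```
# retref-bench-unpatched.test (hypothetical name): run only the "unpatched"
# benchmarks from the single retref-bench executable, emitting JSON for later parsing.
RUN: retref-bench --benchmark_filter=unpatched --benchmark_format=json
```

For the LNT side, since the library already emits a well-defined JSON report, a parser mostly needs to walk the "benchmarks" array in that output. A minimal sketch of that -- the function and file names here are placeholders, not anything in this patch or in LNT today:

```python
import json

def read_google_benchmark_json(path):
    """Collect (name, real_time, time_unit) tuples from a Google Benchmark JSON report."""
    with open(path) as f:
        report = json.load(f)
    results = []
    for bench in report.get("benchmarks", []):
        # Each entry carries the benchmark name, iteration count, and per-iteration timings.
        results.append((bench["name"], bench["real_time"], bench.get("time_unit", "ns")))
    return results

if __name__ == "__main__":
    for name, real_time, unit in read_google_benchmark_json("retref-bench.json"):
        print("{}: {} {}".format(name, real_time, unit))
```

The variance-reduction mode from the last bullet maps to `--benchmark_repetitions=N` together with `--benchmark_report_aggregates_only=true`, which makes the library report only aggregate statistics (mean/stddev) across repetitions, if I remember the flag names correctly.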

Now, for this particular change: do the current state and the plan to iterate seem reasonable to both of you (@MatzeB and @hfinkel)?


https://reviews.llvm.org/D32272
