[PATCH] D32272: [XRay] Add Google Benchmark library + initial XRay benchmarks

Matthias Braun via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Jun 23 12:08:50 PDT 2017


MatzeB added a comment.

In https://reviews.llvm.org/D32272#789224, @hfinkel wrote:

> In https://reviews.llvm.org/D32272#789201, @MatzeB wrote:
>
> > In https://reviews.llvm.org/D32272#789021, @hfinkel wrote:
> >
> > > In https://reviews.llvm.org/D32272#789012, @kristof.beyls wrote:
> > >
> > > > In https://reviews.llvm.org/D32272#789006, @hfinkel wrote:
> > > >
> > > > > In https://reviews.llvm.org/D32272#788932, @dberris wrote:
> > > > >
> > > > > > LGTM for the most part, thanks for making the changes, it's much simpler now.
> > > > > >
> > > > > > Thoughts @MatzeB? I think we can work on running/reporting the microbenchmark data later, in changes to LNT.
> > > > >
> > > > >
> > > > > Can you please summarize the state of this and the eventual goal? It looks like:
> > > > >
> > > > > 1. Eventually, we need a new LNT plugin (i.e. something like what we have in LNTBased/*) that understands how to interpret the output of the Google benchmark library.
> > > >
> > >
> > > > I hope this could all be done within the test-suite, probably extending test-suite/litsupport. I hope/think that no changes to LNT will be needed, but I haven't investigated myself.
> > >
> > > Just to be clear, LNTBased/* is in the test suite; LNT supports plugins and those can be in the test suite. That having been said, if we can do this just within litsupport, that potentially seems better. The key feature seems to be the ability to report multiple benchmark timings per actual executable.
> >
> > - The lit data format currently does not allow multiple values for a metric.
> > - I'm not sure I see why that would be needed here.
>
> I don't mean multiple timings for each benchmark; I mean that there are multiple benchmarks for which we should individually collect timings. In the benchmark added here, for example, we'll get individual timings out for:
>
>   BENCHMARK(BM_ReturnInstrumentedu);
>   BENCHMARK(BM_ReturnInstrumentedPatchedThenUnpatched);
>   BENCHMARK(BM_ReturnInstrumentedPatched);
>   ...
>
> and we should separately record them all.


Ah right.
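
For reference, Google Benchmark reports one timing per BENCHMARK() registration, which is why a single executable produces several results. A minimal sketch of such a file (function names here are purely illustrative, not the ones from the patch):

  #include "benchmark/benchmark.h"

  // Each BENCHMARK() registration shows up as its own entry, with its own
  // timing, in the binary's output, so one executable yields several results.
  static void BM_IllustrativeA(benchmark::State &state) {
    while (state.KeepRunning()) {
      // body of the first sub-benchmark
    }
  }
  BENCHMARK(BM_IllustrativeA);

  static void BM_IllustrativeB(benchmark::State &state) {
    while (state.KeepRunning()) {
      // body of the second sub-benchmark
    }
  }
  BENCHMARK(BM_IllustrativeB);

  BENCHMARK_MAIN();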

- Today's lit works best if you have one file per test; also, test discovery is done before any test is run.
- If there is a flag to run only a specific benchmark in a Google Benchmark executable, then we could work around this limitation on the cmake side:

Just generate multiple .test files with slightly different RUN: lines (assuming that option is called --only-test):

  retref-bench-unpatched.test contains RUN: retref-bench --only-test unpatched
  retref-bench-patched.test   contains RUN: retref-bench --only-test patched
  ...

It would require us to repeat the sub-benchmark names in the cmake file (or to write cmake logic that scans the source code?).
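
FWIW, I believe the Google Benchmark library already provides a --benchmark_filter=<regex> flag that restricts which registered benchmarks get run, so it could probably serve as the --only-test option assumed above (I haven't double-checked which flags the retref-bench binary ends up accepting). The flag parsing lives in the library; a rough sketch of what an explicit main() would look like if we needed one:

  #include "benchmark/benchmark.h"

  // Rough sketch of an explicit main(): Initialize() consumes library flags
  // such as --benchmark_filter, and RunSpecifiedBenchmarks() then runs only
  // the registered benchmarks whose names match the filter. BENCHMARK_MAIN()
  // expands to roughly this.
  int main(int argc, char **argv) {
    benchmark::Initialize(&argc, argv);
    benchmark::RunSpecifiedBenchmarks();
    return 0;
  }

With that, the generated .test files above would just pass a suitable --benchmark_filter regex per sub-benchmark instead of the assumed --only-test flag.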


https://reviews.llvm.org/D32272




