[llvm-dev] Using Google Benchmark Library

Roman Lebedev via llvm-dev llvm-dev at lists.llvm.org
Wed May 30 12:27:15 PDT 2018


On Wed, May 30, 2018 at 10:21 PM, Pankaj Kukreja via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
>
>
> On Wed, May 30, 2018 at 4:07 AM, mbraun via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
>>
>> Not going into all the detail, but from my side the big question is
>> whether the benchmark's inner loop is small/fine-grained enough that
>> stabilization with Google Benchmark doesn't lead to benchmark runtimes of
>> dozens of seconds. Given that you typically see thousands or millions of
>> invocations for small functions...
>
>
> Google Benchmark executes the kernel at most 1e9 times, or until the CPU
> time is greater than the minimum time, or the wallclock time is 5x the
> minimum time. By default the minimum time is 0.5s, but we can change it
> using "MinTime(X)" or "--benchmark_min_time=X".
> So stabilizing a small/fine-grained kernel with Google Benchmark should
> not be a problem.
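> For illustration, a minimal benchmark along those lines might look like
> the following sketch (runKernel and the problem size are placeholders):
>
>   #include <benchmark/benchmark.h>
>
>   static void BM_Kernel(benchmark::State &State) {
>     // Google Benchmark chooses the iteration count itself, bounded by
>     // the 1e9-iterations / minimum-time rules described above.
>     for (auto _ : State)
>       benchmark::DoNotOptimize(runKernel(/*N=*/1024));
>   }
>   // Per-benchmark override of the 0.5s default minimum time.
>   BENCHMARK(BM_Kernel)->MinTime(2.0);
>
>   BENCHMARK_MAIN();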
>
>>
>>
>> > On May 29, 2018, at 2:06 PM, Michael Kruse via llvm-dev
>> > <llvm-dev at lists.llvm.org> wrote:
>> >
>> > Thanks for your remarks.
>> >
>> > 2018-05-27 5:19 GMT-05:00 Dean Michael Berris <dean.berris at gmail.com>:
>> >> I think you might run into artificial overhead here if you’re not
>> >> careful. In particular you might run into:
>> >>
>> >> - Missed in-lining opportunity in the benchmark. If you expect the
>> >> kernels to be potentially inlined, this might be a problem.
>> >
>> > For the kind of benchmarks we have in mind, one function call overhead
>> > is not significant, nor would we expect compilers to inline them in
>> > the applications that use them.
>> > Inlining may even be counterproductive: after inlining, the compiler
>> > might see from the array initialization code that elements are
>> > initialized to 0 (or some other deterministic value) and use that
>> > knowledge while optimizing the kernel.
>> > We might prevent such things by annotating the kernel with
>> > __attribute__((noinline)).
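>> > As a sketch, the kernel could be declared like this (the signature is
>> > made up for illustration):
>> >
>> >   __attribute__((noinline))
>> >   void kernel(double *Out, const double *In, int N);
>> >
>> > so that the known initialization values cannot leak into the kernel
>> > body through inlining.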
>> >
>> >
>> >> - The link order might cause interference depending on the linker being
>> >> used.
>> >>
>> >> - If you’re doing LTO then that would add an additional wrinkle.
>> >>
>> >> They’re not show-stoppers, but these are some of the things to look out
>> >> for and consider.
>> >
>> > I'd consider the benchmark to be specific to a compiler+linker
>> > combination. As long as we can measure the kernel in isolation (and
>> > consider cold/warm caches), it should be fine.
>> > I'd switch off LTO here since any code that could be inlined into the
>> > kernel should already be in its translation unit.
>> >
>> >
>> >>
>> >>> - Instruct the driver to run the kernel with a small problem size and
>> >>> check the correctness.
>> >>
>> >> In practice, what I’ve seen is mixing unit tests which perform
>> >> correctness checks (using Google Test/Mock) and then co-locating the
>> >> benchmarks in the same file. This way you can choose to run just the tests
>> >> or the benchmarks in the same compilation mode. I’m not sure whether there’s
>> >> already a copy of the Google Test/Mock libraries in the test-suite, but I’d
>> >> think those shouldn’t be too hard (nor controversial) to add.
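>> >> As a rough sketch (runKernel and the expected values are
>> >> placeholders), such a co-located file could look like:
>> >>
>> >>   #include <benchmark/benchmark.h>
>> >>   #include <gtest/gtest.h>
>> >>
>> >>   // Correctness check on a small problem size.
>> >>   TEST(KernelTest, SmallProblemSize) {
>> >>     EXPECT_EQ(runKernel(/*N=*/8), expectedResult(/*N=*/8));
>> >>   }
>> >>
>> >>   // Timing on a realistic problem size.
>> >>   static void BM_Kernel(benchmark::State &State) {
>> >>     for (auto _ : State)
>> >>       benchmark::DoNotOptimize(runKernel(/*N=*/1 << 20));
>> >>   }
>> >>   BENCHMARK(BM_Kernel);
>> >>
>> >> with a small main() that calls either RUN_ALL_TESTS() or
>> >> benchmark::RunSpecifiedBenchmarks() depending on how it is invoked.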
>> >
>> > Google Test is already part of LLVM. Since the test-suite already has
>> > a dependency on LLVM (e.g. for llvm-lit, itself already supporting
>> > Google Test), we could just use that one.
>> > I don't know yet how one would combine them to run the same code.
>> >
>> >
>> >>> - Instructs Google Benchmark to run the kernel to get a reliable
>> >>> average execution time of the kernel (without the input data
>> >>> initialization)
>> >>
>> >> There’s ways to write the benchmarks so that you only measure a small
>> >> part of the actual benchmark. The manuals will be really helpful in pointing
>> >> out how to do that.
>> >>
>> >> https://github.com/google/benchmark#passing-arguments
>> >>
>> >> In particular, you can pause the timing when you’re doing the data
>> >> initialisation and then resume just before you run the kernel.
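>> >> Roughly, as a fragment (initData and runKernel are placeholders):
>> >>
>> >>   static void BM_Kernel(benchmark::State &State) {
>> >>     std::vector<double> Data(1024);
>> >>     for (auto _ : State) {
>> >>       State.PauseTiming();   // exclude the data initialization
>> >>       initData(Data);
>> >>       State.ResumeTiming();  // measure only the kernel itself
>> >>       benchmark::DoNotOptimize(runKernel(Data));
>> >>     }
>> >>   }
>> >>   BENCHMARK(BM_Kernel);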
>> >
>> > Sounds great.
>> >
>> >
>> >>> - LNT's --exec-multisample does not need to run the benchmarks
>> >>> multiple times, as Google Benchmark already did so.
>> >>
>> >> I thought recent patches already do some of this. Hal would know.
>> >
>> > I haven't found any special handling for --exec-multisample in
>> > MicroBenchmarks.
>>
>> This LNT option means running the whole benchmarking process multiple
>> times. It is more or less a loop around the whole benchmark run: this even
>> means we compile the source code multiple times.
>>
>> FWIW I'd really like to see an alternative option to compile once and then
>> run each benchmark executable multiple times before moving to the next
>> executable; but given that we don't have that today, I'm just not using the
>> multisample option personally, as it's not better than submitting multiple
>> benchmarking jobs anyway...
>
>
> I think it's better to use the "--benchmark_repetitions=n" option of Google
> Benchmark rather than "--exec-multisample=n": with multisample, the data
> initialization is executed again along with the main kernel, whereas the
> benchmark_repetitions option runs only the main kernel n times. The
> repeated execution of the data initialization only adds unwanted execution
> time.
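> For example (the executable name and repetition count are just
> illustrative), repetitions can be requested on the command line or per
> benchmark:
>
>   ./kernel-benchmark --benchmark_repetitions=9
>
>   // or, in code:
>   BENCHMARK(BM_Kernel)->Repetitions(9);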

> Also, the Google Benchmark library gives the result for each run along with
> the mean, stddev and median of all runs. This can be useful if there is
> some tool which can parse the stdout and write a summary into a sheet.
I would like to point out that the normal console table is not the only
output format; it can also produce JSON, and even natively store it into a
file.
https://github.com/google/benchmark#output-formats
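For example (the executable and file names are just illustrative):

  ./kernel-benchmark --benchmark_out=results.json --benchmark_out_format=json

writes the full results to results.json while still printing the usual
console table.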

> One thing that I would like to have is some tool/API that can automatically
> verify the output against a reference output, like lit does.
>
>
> Regards,
> Pankaj Kukreja
> Computer Science Department
> IIT Hyderabad
Roman.

> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>

