[llvm-dev] Benchmarks for LLVM-generated Binaries

Dean Michael Berris via llvm-dev <llvm-dev at lists.llvm.org>
Fri Sep 16 01:44:25 PDT 2016


> On 15 Sep 2016, at 12:44, Matthias Braun <matze at braunis.de> wrote:
> 
>> 
>> On Sep 14, 2016, at 6:57 PM, Dean Michael Berris via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>> 
>> Thanks everyone, I'll go with "libs/" as a top-level directory in the test-suite.
>> 
>>> On 15 Sep 2016, at 03:08, Matthias Braun <matze at braunis.de> wrote:
>>> 
>>> Have you seen the prototype for googlebenchmark integration I did in the past:
>>> 
>>> https://reviews.llvm.org/D18428 (though probably out of date for today's test-suite)
>>> 
>> 
>> Not yet, but thanks for the pointer Matthias!
>> 
>>> +1 for copying googlebenchmark into the test-suite.
>>> 
>>> However I do not think this should simply go into MultiSource: we currently have a number of additional plugins in the lit test runner, such as measuring the runtime of the benchmark executable and determining code size; we still plan to add a mode to run benchmarks multiple times; and we run the benchmark under perf (or iOS-specific tools) to collect performance counters… Many of those measurements are questionable for a googlebenchmark executable, which has varying runtime because it runs the test more or less often.
>>> We really should introduce a new benchmarking mode for this.
>>> 
>> 
>> Sounds good to me, but probably something for later down the road.
> Well, if you just put googlebenchmark executables into the MultiSource directory, then the lit runner will simply measure the runtime of the whole executable, which is worse than "for (int i = 0; i < LARGE_NUMBER; ++i) myfunc();" because googlebenchmark uses a varying number of runs depending on noise levels/confidence.
> When running googlebenchmarks we should disable the external time measurements and have a lit plugin in place that parses the googlebenchmark output (that old patch has that). I believe this can only really work if we create a new top-level directory to which we apply different benchmarking rules.
> 

Aha! Right, I see -- this makes a lot of sense.
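
Just to make sure I've got the model right: a minimal googlebenchmark target looks roughly like the sketch below (myfunc is a hypothetical stand-in for whatever is being measured, borrowing the name from your example above):

  #include <benchmark/benchmark.h>

  // Hypothetical stand-in for the function under test.
  static int myfunc() { return 42; }

  static void BM_MyFunc(benchmark::State &state) {
    // The library picks the iteration count itself, rerunning this loop
    // until the measurement is statistically stable, so two invocations
    // of the same executable can take very different wall-clock times.
    while (state.KeepRunning())
      benchmark::DoNotOptimize(myfunc());
  }
  BENCHMARK(BM_MyFunc);

  BENCHMARK_MAIN();

So an external timer around the whole process would mostly be measuring how many iterations the library chose to run, not how fast myfunc is.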

I'll experiment a little more and take inspiration from what you've already done.
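
Concretely, I take it the lit plugin would scrape googlebenchmark's stdout, which looks roughly like this (illustrative values only):

  Benchmark           Time           CPU Iterations
  --------------------------------------------------
  BM_MyFunc           2 ns           2 ns  358290698

Pulling the per-benchmark Time/CPU numbers from there, with the external time measurement disabled, sounds much more robust than timing the process -- and if I understand correctly, that's what your old patch already does.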

Thanks very much for the insight and the pointer!

Cheers

-- Dean


