[PATCH] D38496: [XRay] [test-suite] Add litsupport for microbenchmarks
Matthias Braun via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Tue Oct 3 10:30:51 PDT 2017
MatzeB added inline comments.
================
Comment at: MicroBenchmarks/XRay/lit.local.cfg:2
config.environment['XRAY_OPTIONS'] = 'patch_premain=false xray_naive_log=false'
+config.test_modules.append('microbenchmark')
----------------
I think we need some more logic here, like this:
```
test_modules = config.test_modules
if 'run' in test_modules:
    # Insert the microbenchmark module behind 'run'
    test_modules.insert(test_modules.index('run') + 1, 'microbenchmark')
# Timeit results are not useful for microbenchmarks
if 'timeit' in test_modules:
    test_modules.remove('timeit')
```
- I think we should add the microbenchmark module directly behind 'run' so that 'remote' or 'run_under' can come later and modify the command line, including the redirections and flags for the microbenchmarks (see the module sketch below).
- It's probably best to remove the 'timeit' module, as its numbers aren't really useful for microbenchmarks.
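For reference, a minimal sketch of what such a litsupport module could look like. The `mutatePlan` entry point and `testplan.mutateScript` helper follow the existing litsupport modules; the command-line rewrite itself is simplified and only illustrative:
```
# Hypothetical litsupport test module (e.g. litsupport/modules/microbenchmark.py).
from litsupport import testplan


def _mutateCommandLine(context, commandline):
    # Redirect the benchmark's JSON report to a side file so a metric
    # collector can read it later. Because this module runs directly
    # behind 'run', modules such as 'remote' or 'run_under' still see
    # (and can rewrite) the complete command, including this redirection.
    benchfile = context.tmpBase + '.bench.json'
    return commandline + ' --benchmark_format=json > ' + benchfile


def mutatePlan(context, plan):
    plan.runscript = testplan.mutateScript(context, plan.runscript,
                                           _mutateCommandLine)
```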
================
Comment at: litsupport/modules/microbenchmark.py:34
+ continue
+ # Note that we cannot create new tests here, so for now we just
+ # add up all the numbers.
----------------
hfinkel wrote:
> The summary might make sense in some contexts, but why not just return multiple metrics? I'd really like to move to a model where we collect all of the microbenchmark timings individually.
I'd like to see individual timings too, but I think putting the benchmark name as a prefix on the metric name is not the right direction. LNT unfortunately has a fixed schema for the metrics it records, and the LNT UI and home-grown tools are probably designed around well-known metric names as well (some information, like whether bigger or smaller is better, is derived from the metric name today).
I think adding something to lit that allows reporting multiple benchmark/test results from a single test file can't be that hard.
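In the meantime, the summing approach could look roughly like the sketch below. The 'benchmarks'/'real_time' keys follow Google Benchmark's --benchmark_format=json output, and lit.Test.toMetricValue is lit's existing helper for wrapping metric values; the collector name and file handling are illustrative:
```
import json

import lit.Test


def _collectMicrobenchmarkTime(context, jsonfiles):
    # Sum all per-benchmark timings into one metric, since lit cannot
    # currently report a separate result for each microbenchmark.
    time = 0.0
    for path in jsonfiles:
        with open(path) as f:
            data = json.load(f)
        for benchmark in data['benchmarks']:
            time += benchmark['real_time']
    return {'microbenchmark_time_ns': lit.Test.toMetricValue(time)}
```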
https://reviews.llvm.org/D38496