[PATCH] D38496: [XRay] [test-suite] Add litsupport for microbenchmarks

Matthias Braun via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Oct 3 10:54:45 PDT 2017


MatzeB added inline comments.


================
Comment at: litsupport/modules/microbenchmark.py:34
+                    continue
+                # Note that we cannot create new tests here, so for now we just
+                # add up all the numbers here.
----------------
hfinkel wrote:
> MatzeB wrote:
> > hfinkel wrote:
> > > The summary might make sense in some contexts, but why not just return multiple metrics? I'd really like to move to a model where we collect all of the microbenchmark timings individually.
> > I'd like to see individual timings too, but I think putting the benchmark name as a prefix on the metric name is not the right direction. LNT unfortunately has a fixed schema of metrics that are recorded, and the LNT UI and home-grown tools are probably designed around well-known metrics too (some information, like 'bigger/smaller is better', is derived from the metric name today).
> > 
> > I think adding something to lit that allows reporting multiple benchmarks/tests from a single test file can't be that hard.
> > I think adding something to lit that allows reporting multiple benchmarks/tests from a single test file can't be that hard.
> 
> I don't object to doing this, but it seems like the multiple-metrics-per-test is a direction in which we need to go anyway, no? I expect that we will need to update the LNT UI regardless of what we do (otherwise we could easily end up with a single huge list that makes the interface clunky). What are we going to do with all of the binary-size information that we collect?
> 
> I don't object to doing this, but it seems like the multiple-metrics-per-test is a direction in which we need to go anyway, no? I expect that we will need to update the LNT UI regardless of what we do (otherwise we could easily end up with a single huge list that makes the interface clunky).

Sure, and of course we already have multiple metrics per test; I think it's compiletime, exec_time, hash, and size today. However, this is essentially two-dimensional data, with one dimension along the test names and one along the metrics, and I really think the "sub-benchmarks" should end up on the test-name dimension, not on the metrics one.
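
For illustration, a minimal sketch of the metrics side of this, assuming lit's `Test.Result.addMetric()` API; the function name and values are made up and this is not the actual microbenchmark.py code:

  import lit.Test

  def _attach_metrics(result):
      # 'result' is a lit.Test.Result.  Each addMetric() call adds one more
      # entry along the per-test metrics dimension; what is missing today is
      # an equally easy way to add entries along the test-name dimension
      # from a single test file.
      result.addMetric('exec_time', lit.Test.RealMetricValue(0.123))
      result.addMetric('size', lit.Test.IntMetricValue(4096))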

> What are we going to do with all of the binary-size information that we collect?
LNT throws all of the detailed code-size metrics away and only keeps `size.__text`, I think, which is probably macOS-only :-(

Though if you run the benchmarks yourself, you can simply skip LNT and view the results with something like test-suite/utils/compare.py.
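
For example, assuming you have two JSON result files produced by lit runs, an invocation roughly like `test-suite/utils/compare.py baseline.json patched.json` prints a per-test comparison (the file names here are just placeholders).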


https://reviews.llvm.org/D38496




