[llvm-dev] RFC: LNT/Test-suite support for custom metrics and test parameterization

Matthias Braun via llvm-dev llvm-dev at lists.llvm.org
Thu May 26 16:56:06 PDT 2016

> On May 26, 2016, at 7:08 AM, Elena Lepilkina via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> I understood your modules and I see how they can be used in LNT. But I have some questions about the old features.
> 1.       With Makefiles we can write a pass and collect some statistics. There was an example of a branch pass. Metrics can be collected by
> @-$(LOPT) -load dcc888$(SHLIBEXT) -branch-counter -stats \
>         -time-passes -disable-output $< 2>>$@
> in the makefile. In the report file we describe how the data should be parsed. Can we do the same things now with cmake+lit?

This question hits several points at once:
- Collecting metrics at compile time: The Makefiles have definitely been more flexible in this regard. The flip side of the coin, however, is that the makefiles became so complicated that you typically needed to pass several barely documented flags to them for a simple task like compiling and running the benchmarks without any further actions.
cmake, on the other hand, restricts us a bit in how much we can intercept and modify the build process of the benchmark (and just to point this out: the running process in lit is no problem and really flexible).
The least bad solution I have come up with so far is a python wrapper that understands the same arguments as clang and slightly modifies them (adding stats flags and providing a new output filename for the stats) before passing them along to clang. I have a personal script that does this, but it is not in a clean, publishable state yet.
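Roughly, such a wrapper could look like the sketch below. This is an assumption on my part, not the actual script from this mail: the helper name is made up, and it relies on clang's -save-stats flag, which only newer clang releases support.

```python
# Hypothetical sketch of a clang wrapper that injects a stats flag.
# The helper name and the -save-stats flag are assumptions; check that
# your clang version actually supports -save-stats before relying on it.
def amend_clang_args(argv, clang="clang"):
    """Build the real compiler command line, asking clang to write its
    internal statistics next to each output file."""
    cmd = [clang] + list(argv)
    if "-save-stats=obj" not in cmd:
        cmd.append("-save-stats=obj")
    return cmd

# A real wrapper installed as the "compiler" would then replace itself
# with the amended command, e.g.:
#   import os, sys
#   cmd = amend_clang_args(sys.argv[1:])
#   os.execvp(cmd[0], cmd)
```

cmake would then be pointed at this wrapper via CMAKE_C_COMPILER, so the build itself needs no changes.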

As for the reporting in the .report files: instead of doing Perl magic, you can just as well read the JSON produced by lit and process it in your favorite scripting language. Here is a minimalistic example in python that just prints all metrics per benchmark:

import json
import sys

# Load the lit result file given as the first command-line argument.
with open(sys.argv[1]) as f:
    data = json.load(f)

for test in data['tests']:
    print("%-10s:" % test['name'])
    for name, value in test['metrics'].items():
        print("\t%s: %s" % (name, value))

> 2.       As I saw in the LNT code, there is no way to compile tests but not execute them. Some metrics can be collected without execution, so the run could be faster.

The lnt test module only exposes a limited set of the features possible with the cmake/lit test-suite. IMO we should break the odd relationship in which lnt drives the running of the benchmarks; instead, lnt should just read the result file after the user has run the test-suite in whatever configuration they wanted. I started a patch for this here: http://reviews.llvm.org/D19949, but I need to find some time to rework it.

If you run the test-suite manually: just remove the "run" module from the config.test_modules list in the lit.site.cfg of your build dir. The benchmarks will no longer be executed, but the codesize, hash and compiletime modules will still collect their metrics. We can add a cmake configuration parameter for this if it turns out to be a generally useful configuration.
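Since lit.site.cfg is plain Python, the edit amounts to filtering the module list. A minimal sketch (the module names are taken from this mail; the exact list in your lit.site.cfg may differ, and in the real file the list lives on the lit config object as config.test_modules):

```python
# Sketch of the relevant lines in lit.site.cfg. A plain list stands in
# for config.test_modules here, purely for illustration.
modules = ['codesize', 'hash', 'compiletime', 'run']

# Dropping 'run' builds the benchmarks and collects the static metrics
# without ever executing them:
modules = [m for m in modules if m != 'run']
```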

> 3.       Before, each group of metrics had a name. Now, if we would like to integrate cmake with custom metrics into LNT, we would have to compare all metrics of each run to decide whether it is the same test-suite or not, and give some random names to each group.

I don't understand what you mean here; can you give an example or a typical use case?

- Matthias
