[llvm-dev] RFC: LNT/Test-suite support for custom metrics and test parameterization
Elena Lepilkina via llvm-dev
llvm-dev at lists.llvm.org
Fri May 27 04:15:53 PDT 2016
Thank you for the answers.
About the last question: LNT currently has the compile and nt (simple) tests. We suggested adding the ability to run, and save in the database, any test-suite (nightly etc., i.e. test-suites described with a Makefile plus a .report file). Each test-suite has a name (simple, nightly, etc.), and when we run tests we choose one of them. With CMake and test modules we can assemble whatever test-suite we want, but it has no name; moreover, I can run tests with a different set of test modules every time. If we want to save the results in the LNT database, we would have to compare all metrics of a run just to decide whether we have seen this test-suite before.
And how should these test-suites be shown? They are currently listed on the main page. The new custom metrics should be shown there too, and they need a name.
One possible solution is to add a file that describes a group of test modules and associates a name with that group.
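Purely as an illustration of the idea (neither the file format, the filename, nor the field names exist in LNT today), such a description file could look like this, with a small helper reading the group's name:

```python
# Hypothetical description file, e.g. suite.json (NOT an existing LNT file):
#   {
#     "name": "nightly-with-branch-stats",
#     "test_modules": ["codesize", "hash", "compiletime", "run"]
#   }
# LNT could then match runs by this name instead of comparing every metric.
import json

def suite_name(path):
    """Return the suite name declared in a description file."""
    with open(path) as f:
        return json.load(f)["name"]
```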
Is there no such facility at the moment?
From: Matthias Braun [mailto:matze at braunis.de]
Sent: Friday, May 27, 2016 2:56 AM
To: Elena Lepilkina <Elena.Lepilkina at synopsys.com>
Cc: llvm-dev <llvm-dev at lists.llvm.org>; nd <nd at arm.com>
Subject: Re: [llvm-dev] RFC: LNT/Test-suite support for custom metrics and test parameterization
On May 26, 2016, at 7:08 AM, Elena Lepilkina via llvm-dev <llvm-dev at lists.llvm.org<mailto:llvm-dev at lists.llvm.org>> wrote:
I understood your modules and I see how they can be used in LNT. But I have some questions about the old features.
1. With Makefiles we could write a pass and collect some statistics; there was an example with a branch-counter pass. Metrics could be collected by

@-$(LOPT) -load dcc888$(SHLIBEXT) -branch-counter -stats \
	-time-passes -disable-output $< 2>>$@

in the makefile, and the .report file described how the data should be parsed. Can we do the same now with cmake+lit?
This question hits several points at once:
- Collecting metrics at compile time: The Makefiles have definitely been more flexible in this regard. The flip side, however, is that the makefiles became so complicated that you typically needed to pass several barely documented flags for a simple task like compiling and running the benchmarks without any further actions.
cmake on the other hand restricts us a bit in how much we can intercept and modify the build process of the benchmark (and just to point this out: the running process in lit is no problem and really flexible).
The least bad solution I have come up with so far is a python wrapper that understands the same arguments as clang and slightly modifies them (adding stats flags and providing a new output filename for the stats) before passing them along to clang. I have a personal script that does this, but it is not in a cleaned-up, publishable state yet.
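A minimal sketch of what such a wrapper could look like (this is not the actual script; the stats-file naming and the use of `-mllvm -stats` are assumptions): it forwards its arguments to clang and, when compiling, redirects the statistics that LLVM prints to stderr into a per-object file.

```python
#!/usr/bin/env python
# Sketch of a clang wrapper collecting -stats output per object file.
# Caveat: stats share stderr with warnings/errors, so a real wrapper
# would want to separate the two streams.
import subprocess
import sys

def build_command(argv, compiler="clang"):
    """Return (command, stats_file); stats_file is None when not compiling."""
    args = [compiler] + list(argv)
    stats_file = None
    if "-c" in args and "-o" in args:
        # Derive a stats filename from the object file: foo.o -> foo.o.stats
        stats_file = args[args.index("-o") + 1] + ".stats"
        args += ["-mllvm", "-stats"]  # ask LLVM to print pass statistics
    return args, stats_file

def main(argv):
    # Invoked as: wrapper.py <clang args...>; exit code mirrors clang's.
    cmd, stats_file = build_command(argv[1:])
    if stats_file is None:
        return subprocess.call(cmd)
    with open(stats_file, "w") as f:
        return subprocess.call(cmd, stderr=f)
```

The build would then be pointed at the wrapper (e.g. via CC/CXX or CMAKE_C_COMPILER) instead of invoking clang directly.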
As for the reporting in the .report files: Instead of doing perl magic you can just as well read the json produced by lit and process it in your favorite scripting language. Minimalistic example in python (that just prints all metrics per benchmark):
import json
import sys

data = json.load(open(sys.argv[1]))
for test in data['tests']:
    print("%-10s:" % test['name'])
    for name, value in test['metrics'].items():
        print("\t%s: %s" % (name, value))
2. As I saw in the LNT code, there is no way to compile the tests but not execute them. Some metrics can be collected without execution, so the run could be faster.
The lnt test module only exposes a limited set of the features possible with the cmake/lit testsuite. IMO we should break the odd arrangement in which lnt drives the running of the benchmarks; it should just read the result file after the user has run the testsuite in whatever configuration he wanted. I started a patch for this here: http://reviews.llvm.org/D19949 but I need to find some time to rework it.
If you run the test-suite manually: just remove the "run" module from the config.test_modules list in the lit.site.cfg of your builddir. The benchmarks will no longer be executed, but the codesize, hash and compiletime modules will still collect their metrics. We can add a cmake configuration parameter for this if it turns out to be a generally useful configuration.
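For illustration, the edit would look roughly like this (a sketch only; the exact module list in a given builddir may differ):

```python
# lit.site.cfg in the build directory -- sketch, not verbatim contents.
# Before: execution enabled alongside the metric collectors:
#   config.test_modules = ['run', 'codesize', 'hash', 'compiletime']
# After: drop 'run' so benchmarks are built and measured but never executed:
config.test_modules = ['codesize', 'hash', 'compiletime']
```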
3. Previously, each group of metrics had a name. Now, if we want to integrate cmake with custom metrics in LNT, we have to compare all the metrics of each run to decide whether it is the same test-suite or not, and we would have to give each group some random name.
I don't understand what you mean here, can you give an example/typical use case?