[llvm-dev] Benchmarks for LLVM-generated Binaries
Michael Zolotukhin via llvm-dev
llvm-dev at lists.llvm.org
Wed Sep 7 10:31:22 PDT 2016
Hi Eric,
Yeah, I know about Externals and SPEC specifically. But as far as I understand, you still have to have some kind of description of the tests in the test-suite even if you don’t provide the source code - that’s what I would like to avoid. I.e., you have to keep CMakeLists.txt and other files in place all the time, open to everyone.
Now, imagine I have a small test suite, which is probably not very interesting to anyone else, but extremely interesting to me. AFAIU, to make it a part of LNT now, I have to modify the ‘test-suite’ repo so that it’s aware of this new suite. I will probably not be able to upstream these changes (as they are only interesting to me), and that’s the source of some inconvenience. I’d be happy to be mistaken here, but that’s how I understand the current infrastructure, and that’s where my question came from.
Thanks,
Michael
> On Sep 7, 2016, at 10:13 AM, Eric Christopher <echristo at gmail.com> wrote:
>
> Hi Michael,
>
> You'll want to look into the externals part of the test-suite :) It's how things like SPEC etc. are run.
>
> -eric
>
> On Tue, Sep 6, 2016 at 4:08 PM Michael Zolotukhin via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> > On Sep 1, 2016, at 8:14 AM, Renato Golin via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> >
> > On 1 September 2016 at 07:45, Dean Michael Berris via llvm-dev
> > <llvm-dev at lists.llvm.org> wrote:
> >> I've lately been wondering where benchmarks for LLVM-generated binaries are hosted, and whether they're tracked over time.
> >
> > Hi Dean,
> >
> > Do you mean Perf?
> >
> > http://llvm.org/perf/
> >
> > Example, ARM and AArch64 tracking performance at:
> >
> > http://llvm.org/perf/db_default/v4/nts/machine/41
> > http://llvm.org/perf/db_default/v4/nts/machine/46
> >
> >
> >> - Is the test-suite repository the right place to put these generated-code benchmarks?
> >
> > I believe that would be the best place, yes.
> >
> >
> >> - Are there any objections to using a later version of the Google benchmarking library [0] in the test-suite?
> >
> > While this looks like a very nice tool set, I wonder how we're going
> > to integrate it.
> >
> > Checking it out in the test-suite wouldn't be the best option (version
> > rot), but neither would be requiring people to install it before
> > running the test-suite, especially if the installation process isn't
> > as easy as "apt-get install", like all the other dependencies.
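> >
> > For context, a test written against that library is just a small C++
> > file along these lines (a minimal sketch of the library's usage; the
> > measured workload here is a toy example, not a real benchmark):
> >
> >     #include <benchmark/benchmark.h>
> >     #include <string>
> >
> >     // Toy workload: construct a short std::string each iteration.
> >     static void BM_StringCreation(benchmark::State& state) {
> >       while (state.KeepRunning()) {
> >         std::string s("hello");
> >         benchmark::DoNotOptimize(s);  // keep the compiler from eliding it
> >       }
> >     }
> >     BENCHMARK(BM_StringCreation);
> >
> >     BENCHMARK_MAIN();
> >
> > The library picks the iteration count itself and prints per-benchmark
> > timings, which is the kind of output we'd need to map into the
> > test-suite's result format.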
> >
> >
> >> - Are the docs on the Testing Infrastructure Guide still relevant and up-to-date, and is that a good starting point for exploration here?
> >
> > Unfortunately, that's mostly for the "make check" tests, not for the
> > test-suite. The test-suite execution is covered by LNT's doc
> > (http://llvm.org/docs/lnt), but it's mostly about LNT internals and
> > not the test-suite itself.
> >
> > However, it's not that hard to understand the test-suite structure. To
> > add new tests, you just need to find a suitable place { (SingleSource
> > / MultiSource) / Benchmarks / YourBench }, copy (CMakeLists.txt,
> > Makefile, lit.local.cfg) from a sibling directory, adapt them to your
> > needs, and you should be done.
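> >
> > As a concrete sketch (YourBench and its run arguments are invented
> > for illustration; llvm_multisource is the helper the existing
> > Benchmarks directories use in their CMakeLists.txt), the new
> > directory's CMakeLists.txt can be as small as:
> >
> >     # MultiSource/Benchmarks/YourBench/CMakeLists.txt
> >     set(PROG YourBench)    # name of the resulting test executable
> >     set(RUN_OPTIONS 100)   # command-line arguments for every run
> >     llvm_multisource()     # build all sources in this directory
> >
> > plus an add_subdirectory(YourBench) line in the parent directory's
> > CMakeLists.txt, and a lit.local.cfg copied from a sibling benchmark.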
> Hi Renato and others,
>
> Is it possible (and how hard would it be) to make the test-suite extensible? I.e., could I copy in a folder with some tests that comply with a few rules (e.g. have a CMakeLists.txt, lit.local.cfg, etc.) and run them via the standard infrastructure without changing anything in the test-suite files?
>
> The motivation for this question is the following: we (and probably many other companies) have internal tests that we can’t share, but still want to track. Currently, the process of adding them to the existing test-suite is not clear to me (or at least not very well documented), and while I can figure it out, it would be good if we could streamline this process.
>
> Ideally, I’d like this process to look like the following (see the sketch after the list):
> 1) Add the following files to your benchmark suite:
> 1.a) CMakeLists.txt having this and that target doing this and that.
> 1.b) lit.local.cfg script having this and that.
> …
> 2) Make sure the tests report results in the following format, or provide a wrapper script to convert results to the specified form. /* TODO: Results format is specified here */
> 3) Run your tests using the standard LNT command, like “lnt runtest … --only-test=External/MyTestSuite/TestA”
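>
> A hypothetical layout for such a drop-in suite (all names invented
> for illustration) might look like:
>
>     MyTestSuite/
>       CMakeLists.txt    # defines one target per test, per the rules in 1.a
>       lit.local.cfg     # tells lit how to run and score the tests (1.b)
>       TestA/
>         TestA.cpp
>       TestB/
>         TestB.cpp
>
> with the harness picking the suite up when pointed at that directory,
> without any edits inside the test-suite repo itself.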
>
> If that’s already implemented, then I’ll be glad to help with documentation, and if not, I can try implementing it. What do you think?
>
> Thanks,
> Michael
>
> >
> > cheers,
> > --renato