Using lnt to track lld performance

James Henderson via llvm-commits llvm-commits at lists.llvm.org
Wed Nov 1 10:42:13 PDT 2017


Hi Rafael,



This definitely looks like a good proposal from our team’s point of view.
We would find it particularly useful if any script you write is something
that we could use internally, but for that to work, we would need a few
things from it, some or all of which I hope would be more widely useful:

1) We use Windows, so it would be important for us that this runs on
Windows as well as Linux. If it’s just a python script doing the running, I
imagine that this would come more or less for free, but it’s worth keeping
in mind.

2) We would find it useful if there were hooks we could easily
configure for various environment setup/tear-down steps. Perhaps this
could be achieved by adding the ability to specify a wrapper process that
does some work itself (such as configuring process priorities) before and
after running a specified link line.

3) Finally, it would be really useful to support implied or
externally-imposed variants for tests, where each variant runs the test
from a distinct location. This would allow us to run each case from an
HDD, an SSD, and from RAM. RAM's good for reducing noise, but we have also
found it useful to have some real-world numbers available.
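As a sketch of the wrapper idea in (2), assuming a python harness (the
function name and priority value here are purely illustrative, not an
existing tool):

```python
# Illustrative wrapper sketch: do some setup work (here, raising process
# priority), run the link command, then return its exit code. Tear-down
# steps would slot in after the run.
import os
import subprocess


def run_wrapped(link_cmd):
    # Setup step: lower niceness so the link is less affected by other
    # load. This usually needs elevated privileges, so fall back quietly.
    try:
        os.nice(-5)
    except OSError:
        pass
    result = subprocess.run(link_cmd)
    # Tear-down steps (restoring state, dropping caches, etc.) would go here.
    return result.returncode
```

A variant-aware harness could invoke something like this once per test
location (HDD, SSD, RAM disk) to cover point (3) as well.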

I also noticed that you didn’t mention measuring memory usage anywhere. I
assume that this is something that you’d be measuring as well?
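On the memory side, one simple approach (a sketch only, Unix-specific;
`measure_link` is a hypothetical name, not part of any existing script) is
to record the peak RSS of the child process alongside the wall time:

```python
# Sketch: time a link command and record the peak resident set size of
# waited-for children, one way link-time memory usage could be measured.
import resource
import subprocess
import time


def measure_link(cmd):
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    elapsed = time.monotonic() - start
    # ru_maxrss covers all waited-for children; note it is reported in
    # KiB on Linux but in bytes on macOS.
    peak = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return elapsed, peak
```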



James

On 1 November 2017 at 16:53, Rafael Espíndola <rafael.espindola at gmail.com>
wrote:

> > This sounds good from the LNT side. Indeed you should be good to go
> > with the default 'nts' schema. And just submit to the server.
>
> It seems possible, but also a bit hackish, as the "linking time" would
> be reported as "compile time". It would also be nice to be able to
> store the other metrics reported by perf, since we get them for free
> and the storage requirements are very small.
>
> I have created a "link" schema for running the experiments locally.
> Would it be possible to add it to lnt.llvm.org? My script is still
> adding versions, but what it produced overnight already shows the
> value of having something like it running in a bot (see attached
> graph).
>
> > For reference this is how a submission can look in python (you have to
> > wrap the data in one more level of records with the 'input_data' thing).
> > You should be able to just submit away to
>
> I have created a script that, given a directory of linker benchmarks
> and a linker binary, will run all the benchmarks and send the results
> to LNT. I will send that for review. After that, all that is needed is
> figuring out how to add the schema and setting up a bot to do it for
> every revision.
>
> Thanks,
> Rafael
>
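For reference, the submission step discussed above could look roughly like
this in python. This is a sketch only: the report layout, the `submitRun`
path, and the `commit` field are assumptions based on this thread and may
not match the server's actual API.

```python
# Sketch: build an LNT-style report and wrap it in the extra
# 'input_data' form-field level mentioned in the quoted reply.
import json
import urllib.parse
import urllib.request


def make_report(machine, run, tests):
    # The report itself: machine info, run info, and per-test samples.
    return {"Machine": machine, "Run": run, "Tests": tests}


def encode_submission(report):
    # The JSON report goes into an 'input_data' form field.
    return urllib.parse.urlencode(
        {"input_data": json.dumps(report), "commit": "1"}
    ).encode()


def submit(url, report):
    # e.g. url = "http://lnt.llvm.org/db_default/submitRun" (assumed path)
    req = urllib.request.Request(url, data=encode_submission(report))
    return urllib.request.urlopen(req)
```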