Using lnt to track lld performance

Rafael Avila de Espindola via llvm-commits llvm-commits at lists.llvm.org
Wed Nov 1 11:33:45 PDT 2017


James Henderson <jh7370.2008 at my.bristol.ac.uk> writes:

> Hi Rafael,
>
>
>
> This definitely looks like a good proposal from our team’s point of view.
> We would find it particularly useful if any script you write is something
> that we could use internally, but for that to work, we would need a few
> things from it, some or all of which I hope would be more widely useful:

I hope so too. Ideally it should eventually support running benchmarks
on any system.

I should get it up for review soon. Once it is committed, it should be
easy to send patches adding Windows support.


> 1)      We use Windows, so it would be important for us for this to run on
> Windows as well as Linux. If it’s just a python script doing the running, I
> imagine that this would come more or less for free, but it’s worth keeping
> in mind.

The logic for finding benchmarks and reporting the results should work
anywhere.

Currently the script uses perf and cset to run the benchmarks on
Linux. That part will have to be abstracted in the future.
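To give an idea of the shape that could take, here is a minimal sketch
(the class name and parsing details are hypothetical, not the actual
script) of the Linux backend hidden behind a small runner interface:

  import subprocess

  class LinuxPerfRunner:
      """Runs a link command under `perf stat` and parses the counters.

      In the real setup the command would also run inside a cset
      shield for CPU isolation; that is omitted here for brevity.
      """

      def run(self, cmd, cwd):
          # "-x ," asks perf for machine-readable CSV output on stderr.
          wrapped = ["perf", "stat", "-x", ","] + cmd
          proc = subprocess.run(wrapped, cwd=cwd, capture_output=True,
                                text=True, check=True)
          stats = {}
          for line in proc.stderr.splitlines():
              if not line or line.startswith("#"):
                  continue
              fields = line.split(",")
              if len(fields) >= 3:
                  # CSV fields are: value, unit, event name, ...
                  stats[fields[2]] = fields[0]
          return stats

A Windows backend would then only need to provide the same run()
method, and comparing two lld versions manually is just two calls with
different binaries.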

What statistics can one collect on Windows? I ask because the answer
may mean the lnt schema should be extended.


> 2)      We would find it useful if there were hooks we could easily
> configure to do various environment setup/tear-down steps. Perhaps this
> could be achieved by adding the ability to specify a wrapper process
> that runs a specified link line before/after doing some work itself,
> such as configuring process priorities.

That is one part I am undecided on. Currently the script just runs the
benchmark and sends the result to an lnt server. It is up to something
else to build the various lld versions.

I made this split for two reasons. First, it is useful to be able to
compare just two versions manually when testing a patch. Second, I
assume that most of the logic for repeatedly building lld is already
implemented in the existing bot infrastructure.

> 3)      Finally, it would be really useful to support implied or
> externally-imposed variants for tests, where each variant runs the test
> from a distinct location. This would allow us to run each case from an HDD,
> SSD, and from RAM. RAM's good for reducing noise, but we have also found it
> useful to have some real-world numbers available.

In the current setup the script is oblivious to that. It is just given
a directory, and that directory can live on tmpfs, an SSD, or an HDD.
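So the variants could be driven from outside the script. A hypothetical
illustration (the mount points, input path, and run_benchmark entry
point below are all placeholders, not the actual script):

  import shutil

  # Hypothetical locations: RAM via tmpfs, plus real disks.
  VARIANTS = {
      "ram": "/dev/shm/lld-bench",
      "ssd": "/mnt/ssd/lld-bench",
      "hdd": "/mnt/hdd/lld-bench",
  }

  for name, path in VARIANTS.items():
      # Stage the same inputs in each location, then run from there.
      shutil.copytree("/path/to/benchmark-inputs", path)
      results = run_benchmark(directory=path)  # placeholder entry point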

> I also noticed that you didn’t mention measuring memory usage anywhere. I
> assume that this is something that you’d be measuring as well?

Currently I am not recording that, but it would be a good addition.
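For what it's worth, a minimal sketch of one way to do it on Linux,
assuming the script launches the linker as a child process
(run_and_measure is a made-up name for illustration):

  import resource
  import subprocess

  def run_and_measure(cmd, cwd):
      subprocess.run(cmd, cwd=cwd, check=True)
      # Peak resident set size over all waited-for children. On Linux,
      # ru_maxrss is reported in kilobytes; the resource module is
      # Unix-only, so Windows would need a different mechanism.
      return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss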

Cheers,
Rafael



