Using lnt to track lld performance

James Henderson via llvm-commits llvm-commits at lists.llvm.org
Thu Nov 2 03:43:40 PDT 2017


> Currently the script uses perf and cset for running the benchmarks on
> linux. That part will have to be abstracted in the future.
>
> What statistics can one collect on windows? I ask to see if the lnt
> schema should be extended.
>

Reading between the lines of the MSDN documentation at
https://msdn.microsoft.com/en-us/library/bb385772.aspx, it is possible to
collect a whole range of stats if you have the Visual Studio Performance
tools installed, but the complete set may vary from machine to machine. The
list of so-called "portable" counters available on my machines, which are
supposedly processor-independent, is:

        InstructionsRetired
        NonHaltedCycles
        L2Misses
        L2References
        ITLBMisses
        BranchesRetired
        MispredictedBranches

It's also possible to gather memory information via the Windows API function
GetProcessMemoryInfo, which fills in a struct containing the following
members:

  PageFaultCount;
  PeakWorkingSetSize;
  WorkingSetSize;
  QuotaPeakPagedPoolUsage;
  QuotaPagedPoolUsage;
  QuotaPeakNonPagedPoolUsage;
  QuotaNonPagedPoolUsage;
  PagefileUsage;
  PeakPagefileUsage;
  PrivateUsage;

This is for the 64-bit variant - it may be necessary to write a small helper
program that wraps the function so that it can be invoked from a 32-bit
process, such as a typical Python installation.

I don't know if there are other values available, but I couldn't
immediately find any more information.
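
For what it's worth, when the Python process matches the bitness of the
target, the call should be reachable directly via ctypes, so a separate
wrapper program may not even be needed. A rough, untested sketch (field
types copied from the PROCESS_MEMORY_COUNTERS_EX definition on MSDN):

    import ctypes
    from ctypes import wintypes

    class PROCESS_MEMORY_COUNTERS_EX(ctypes.Structure):
        _fields_ = [("cb", wintypes.DWORD),
                    ("PageFaultCount", wintypes.DWORD),
                    ("PeakWorkingSetSize", ctypes.c_size_t),
                    ("WorkingSetSize", ctypes.c_size_t),
                    ("QuotaPeakPagedPoolUsage", ctypes.c_size_t),
                    ("QuotaPagedPoolUsage", ctypes.c_size_t),
                    ("QuotaPeakNonPagedPoolUsage", ctypes.c_size_t),
                    ("QuotaNonPagedPoolUsage", ctypes.c_size_t),
                    ("PagefileUsage", ctypes.c_size_t),
                    ("PeakPagefileUsage", ctypes.c_size_t),
                    ("PrivateUsage", ctypes.c_size_t)]

    def memory_counters(process_handle):
        psapi = ctypes.WinDLL("psapi")
        psapi.GetProcessMemoryInfo.argtypes = [wintypes.HANDLE,
                                               ctypes.c_void_p,
                                               wintypes.DWORD]
        counters = PROCESS_MEMORY_COUNTERS_EX()
        # The API requires cb to be set to the struct size on input.
        counters.cb = ctypes.sizeof(counters)
        if not psapi.GetProcessMemoryInfo(process_handle,
                                          ctypes.byref(counters),
                                          counters.cb):
            raise ctypes.WinError()
        return counters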


>
> > 2)      We would find it useful if there were hooks we could easily
> > configure to do various environment setup/tear-down steps. Perhaps this
> > could be achieved by adding the ability to specify a wrapper process that
> > in turn runs some specified link-line after/before doing some work itself
> > such as configuring process priorities etc.
>
> That is one part I am undecided about. Currently the script just runs the
> benchmark and sends the result to a lnt server. It is up to something
> else to build the various lld versions.
>
> I did the split like this for two reasons. First, it is useful to just
> compare two versions manually to test a patch. Second, I assume that
> most of the logic for repeatedly building lld is already coded in the
> existing bot infrastructure.
>

I think maybe I wasn't as clear as I could have been - in fact the script up
for review already does what I meant, but explicitly for cset and then perf.
Providing a way of extending this list of programs to call before (and after)
running the test would be helpful. One Windows example would be to run a
program that disables Superfetch for the duration of the run and re-enables
it afterwards, as sketched below.
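
Very roughly, I mean something like the following, where the setup/teardown
command lists would come from user configuration (the lld-link command line
here is just a placeholder):

    import subprocess

    def run_with_hooks(benchmark_cmd, setup_cmds=(), teardown_cmds=()):
        # Run the configured setup commands, then the benchmark, then the
        # teardown commands, even if the benchmark itself fails.
        for cmd in setup_cmds:
            subprocess.check_call(cmd)
        try:
            subprocess.check_call(benchmark_cmd)
        finally:
            for cmd in teardown_cmds:
                subprocess.check_call(cmd)

    # e.g. on Windows (needs admin rights; SysMain is the Superfetch
    # service):
    # run_with_hooks(["lld-link", "@response.txt"],
    #                setup_cmds=[["sc", "stop", "SysMain"]],
    #                teardown_cmds=[["sc", "start", "SysMain"]])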


>
> > 3)      Finally, it would be really useful to support implied or
> > externally-imposed variants for tests, where each variant runs the test
> > from a distinct location. This would allow us to run each case from an
> > HDD, SSD, and from RAM. RAM's good for reducing noise, but we have also
> > found it useful to have some real-world numbers available.
>
> In the current setup the script is oblivious to that. It is just given a
> directory; that directory can be on tmpfs, an SSD or an HDD.
>
> > I also noticed that you didn’t mention measuring memory usage anywhere. I
> > assume that this is something that you’d be measuring as well?
>
> Currently I am not recording that, but it would be a good addition.
>

From my point of view, memory usage is nearly as important as runtime
performance: some links can use many GB of memory, which already pushes the
limits of what some users have available, so it should definitely be added at
some point. I think it's also useful to measure the output size. In general
this should not change, but we might, for example, somehow improve the
effectiveness of icf or gc-sections, and it would be useful to be able to
point to the improvement quickly. I'm less concerned about this, however.
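
On the script side, recording both should be cheap. A rough sketch for the
current Linux setup (the metric names are invented for illustration, not an
existing schema):

    import os
    import resource  # Unix-only; Windows would use the API discussed above
    import subprocess

    def run_link(cmd, output_path):
        subprocess.check_call(cmd)
        # On Linux, ru_maxrss for waited-for children is the peak RSS in KB.
        peak_rss_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
        return {"max-rss-bytes": peak_rss_kb * 1024,
                "output-size-bytes": os.stat(output_path).st_size}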


>
> Cheers,
> Rafael
>
>
>