[llvm-dev] Using lnt to track lld performance

Shoaib Meenai via llvm-dev llvm-dev at lists.llvm.org
Tue Oct 31 11:49:19 PDT 2017


(changing mailing list from llvm-commits to llvm-dev)

From: llvm-commits <llvm-commits-bounces at lists.llvm.org> on behalf of Matthias Braun via llvm-commits <llvm-commits at lists.llvm.org>
Reply-To: Matthias Braun <matze at braunis.de>
Date: Tuesday, October 31, 2017 at 11:33 AM
To: Rafael Avila de Espindola <rafael.espindola at gmail.com>
Cc: llvm-commits <llvm-commits at lists.llvm.org>, James Henderson <jh7370 at my.bristol.ac.uk>
Subject: Re: Using lnt to track lld performance


On Oct 31, 2017, at 11:08 AM, Rafael Avila de Espindola via llvm-commits <llvm-commits at lists.llvm.org> wrote:

Right now performance tracking in lld is a manual and very laborious
process.

I decided to take some time off main development to automate it a
bit.

The current state is that we have a bunch of tests in
https://s3-us-west-2.amazonaws.com/linker-tests/lld-speed-test.tar.xz. Each
test is a program that lld can link (clang, chrome, firefox, etc.).

Right now there is just a hackish run.sh that links every program with
two versions of lld to compare the performance. Some programs are also
linked in different modes. For example, we compare chrome with and
without icf.

I want to improve this in two ways:

* Make the directory structure uniform. What I am currently prototyping
 is that each program is its own directory and it can have multiple
 response*.txt files. Each response file is an independent test. The
 reason for allowing multiple response files is to allow for variants:
 * response.txt
 * response-icf.txt
 * response-gc.txt

* Instead of just comparing lld's run time, parse and save various
 metrics to a database.
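The proposed directory layout could be enumerated with a small helper like the sketch below (the `find_benchmarks` name and the variant-naming scheme, e.g. response-icf.txt in the chrome directory becoming the chrome-icf benchmark, are my assumptions based on the proposal, not an existing tool):

```python
import glob
import os

def find_benchmarks(root):
    """Enumerate (benchmark-name, response-file) pairs from a layout where
    each program is its own directory containing response*.txt variants."""
    benchmarks = []
    for program in sorted(os.listdir(root)):
        progdir = os.path.join(root, program)
        if not os.path.isdir(progdir):
            continue
        for response in sorted(glob.glob(os.path.join(progdir, 'response*.txt'))):
            # 'response.txt' -> 'chrome'; 'response-icf.txt' -> 'chrome-icf'.
            suffix = os.path.basename(response)[len('response'):-len('.txt')]
            benchmarks.append((program + suffix, response))
    return benchmarks
```

Each (name, response-file) pair would then correspond to one independent lld invocation.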

The database hierarchy would be as follows.

For each llvm revision there will be multiple benchmarks (chrome,
chrome-icf, clang).

For each benchmark, there will be multiple metrics (lld runtime, branches,
output size, etc).

Some metrics will include multiple measurements. The output size
should always be the same, but multiple runs can have slightly
different times, for example.

Not too surprisingly, the above structure is remarkably similar to what lnt
uses:
http://llvm.org/docs/lnt/importing_data.html#importing-data-in-a-text-file
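For reference, a minimal report in that shape might look like the sketch below. The machine name, test names, and numbers are all made up for illustration; see the LNT documentation linked above for the exact fields the schema expects.

```python
import json

# Illustrative LNT-style report: one machine, one run keyed to an llvm
# revision via 'run_order', and per-benchmark metrics with one or more
# samples each. All names and values here are hypothetical.
report = {
    'Machine': {'Name': 'lld-speed-bot', 'Info': {}},
    'Run': {
        'Start Time': '2017-10-31 11:00:00',
        'End Time': '2017-10-31 11:30:00',
        'Info': {'run_order': '317000', 'tag': 'nts'},
    },
    'Tests': [
        # Link time varies from run to run, so several samples are recorded.
        {'Name': 'nts.chrome.exec', 'Info': {}, 'Data': [5.21, 5.34, 5.18]},
        # Output size is deterministic, so a single sample suffices.
        {'Name': 'nts.chrome-icf.size', 'Info': {}, 'Data': [123456789]},
    ],
}
print(json.dumps(report, sort_keys=True, indent=2))
```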

So my idea is to first write a Python script that runs all the
benchmarks in the above structure and submits the results to an lnt
server. This would replace the existing run.sh.

For comparing just two revisions locally, this will be a bit more work.

But it should allow us to set up a bot that continually submits lld
performance results.

For those working on lld, does the above sound like a good idea?

For those working on lnt, is it a good match for the above? The main use
case I would like is to build graphs of various metrics over llvm
revisions. For example:

* chrome's output size over the last 1000 llvm revisions.
* firefox link time (with error bars) over the last 1000 llvm revisions.
This sounds good from the LNT side. You should be good to go with the
default 'nts' schema and can just submit to the server.

For reference, this is how a submission can look in Python (you have to
wrap the data in one more level of records with the 'input_data' key):

    import json
    import sys
    import urllib
    import urllib2

    # Submit the resulting data to an LNT server.
    postdata = {'input_data': json.dumps(result, sort_keys=True)}
    data = urllib.urlencode(postdata)
    try:
        response = urllib2.urlopen(config.url, data=data)
        responsedata = response.read()

        try:
            # On success the server replies with JSON containing the URL
            # of the imported run.
            resultdata = json.loads(responsedata)
            result_url = resultdata.get('result_url')
            if result_url is not None:
                sys.stdout.write("%s\n" % result_url)
        except ValueError:
            sys.stderr.write("Unexpected server response:\n" + responsedata)
    except urllib2.HTTPError as e:
        sys.stderr.write("POST request failed with code %s\n" % e.code)
        sys.stderr.write("%s\n" % e.reason)
        sys.exit(1)

With that you should be able to just submit away to
http://lnt.llvm.org/db_default/v4/nts/submitRun
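If the lnt command-line tool is installed, a report file can also be submitted without any custom HTTP code, along these lines (the report.json filename is just an example):

```shell
# Submit a previously generated report to the public LNT instance.
# Assumes the 'lnt' tool is installed and report.json was produced as above.
lnt submit http://lnt.llvm.org/db_default/v4/nts/submitRun report.json
```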

Drawing those two graphs should also just work.

- Matthias
