[PATCH] D39506: Add a script to run various benchmarks and send the result to lnt

James Henderson via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Nov 2 03:10:54 PDT 2017


jhenderson added a comment.

Overall approach looks reasonable to me. There's clearly some work to do to make this compatible with Windows (we'd need suitable replacements for cset and perf), but I don't think it should be too difficult. That being said, I don't know whether there's a way to get the same set of information on Windows. It might be that we have to use a different schema in that case, as well as a different parser for the results. Overall, the code is small enough that it shouldn't be too hard to do this later.



================
Comment at: utils/benchmark.py:23
+parser.add_argument('--runs', type=int, default=10)
+parser.add_argument('--cset', default='cset')
+parser.add_argument('--machine', required=True)
----------------
Why is this a configurable option, but perf not? Is the intent to be able to point to a specific cset executable, or to be able to swap it out for a different executable?
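If the intent is the former, a symmetric sketch (names and defaults are just illustrative, not part of the patch):

```python
import argparse

# Hypothetical sketch: expose both tool paths the same way, so either
# executable can be overridden (e.g. pointed at a wrapper script).
parser = argparse.ArgumentParser()
parser.add_argument('--cset', default='cset', help='path to the cset executable')
parser.add_argument('--perf', default='perf', help='path to the perf executable')

args = parser.parse_args(['--perf', '/usr/local/bin/perf'])
print(args.perf)
```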


================
Comment at: utils/benchmark.py:26
+parser.add_argument('--revision', required=True)
+parser.add_argument('--url',
+                    default='http://localhost:8000/db_default/v4/link/submitRun')
----------------
Could we have some help text here, please, saying that this is the LNT server URL?
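Something along these lines (the wording is only a suggestion):

```python
import argparse

# Sketch of the suggested help text for the --url option.
parser = argparse.ArgumentParser()
parser.add_argument('--url',
                    default='http://localhost:8000/db_default/v4/link/submitRun',
                    help='URL of the LNT server endpoint to submit results to')

args = parser.parse_args([])
print(args.url)
```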


================
Comment at: utils/benchmark.py:70
+
+    measurement_lines = [ x for x in lines if b'#' in x]
+    for l in measurement_lines:
----------------
Nit: extra unnecessary leading space inside the brackets, i.e. this should be `[x for x in lines if b'#' in x]`.


================
Comment at: utils/benchmark.py:85
+def perf(cmd):
+    run(cmd)
+    ret = {}
----------------
Could you add a comment explaining what this run is for, please?

(I assume that it is to warm up the file cache or similar, but it'd be good to have this confirmed.)

Also, this run isn't in the try/except block that the later run is. Any particular reason?
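If there's no reason for the asymmetry, I'd expect something shaped roughly like this (a sketch only, assuming the warm-up theory is right; `run` here is a stand-in for the script's helper):

```python
import subprocess

def run(cmd):
    # Stand-in for the script's run() helper.
    subprocess.check_call(cmd)

def perf(cmd):
    ret = {}
    try:
        # Warm up the file cache before the measured run, and report a
        # failure here the same way as a failure in the measured run.
        run(cmd)
        # ... measured perf invocation would follow here ...
    except subprocess.CalledProcessError as e:
        print('command failed:', e)
    return ret

result = perf(['true'])
```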


================
Comment at: utils/benchmark.py:103
+    response = 'response' + suffix + '.txt'
+    ret = perf(['../ld.lld', '@' + response, '-o', 't', '--no-threads'])
+    ret['name'] = str(bench)
----------------
I know that this would lead to less stable metrics, but it might be nice to have the "--no-threads" on a switch, probably defaulting to no threads. That way, if people want to compare their performance with and without threading, they can.
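A rough sketch of what I have in mind (flag name and the response-file name are hypothetical):

```python
import argparse

# Hypothetical --threads switch; the default keeps the current
# single-threaded behaviour for stable timings.
parser = argparse.ArgumentParser()
parser.add_argument('--threads', action='store_true',
                    help='let lld use threads (timings will be less stable)')
args = parser.parse_args([])

lld_cmd = ['../ld.lld', '@response.txt', '-o', 't']
if not args.threads:
    lld_cmd.append('--no-threads')
print(lld_cmd)
```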


================
Comment at: utils/benchmark.py:116
+            'end_time' : now,
+            'start_time' : now,
+            'llvm_project_revision': args.revision
----------------
This doesn't look right - why are we setting the start and end time to the same value?
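I'd have expected the timestamps to bracket the run, something like this (a sketch; the revision value is a placeholder, and the field names follow the submission dictionary above):

```python
import datetime

# Record the actual start and end of the benchmark run rather than a
# single shared timestamp.
start_time = datetime.datetime.utcnow().isoformat()
# ... benchmarks would run here ...
end_time = datetime.datetime.utcnow().isoformat()

run_info = {
    'start_time': start_time,
    'end_time': end_time,
    'llvm_project_revision': 'r317000',  # hypothetical revision
}
```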


https://reviews.llvm.org/D39506
