[PATCH] D39506: Add a script to run various benchmarks and send the result to lnt

Rafael Avila de Espindola via llvm-commits llvm-commits at lists.llvm.org
Thu Nov 2 17:19:54 PDT 2017


James Henderson via Phabricator <reviews at reviews.llvm.org> writes:

> ================
> Comment at: utils/benchmark.py:23
> +parser.add_argument('--runs', type=int, default=10)
> +parser.add_argument('--cset', default='cset')
> +parser.add_argument('--machine', required=True)
> ----------------
> Why is this a configurable option, but perf not? Is the intent to be able to point to a specific cset executable, or to be able to swap it out for a different executable?

This is a small hack.

The perf utility is normally provided by Linux distros. The cset one is
not, and it will normally be installed in the user's home directory.

The result is that sudo can find perf, but not cset.

Another option would be to run "which cset" and pass the absolute path
to sudo. Do you think that would be a better solution?
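
For reference, a rough sketch of that alternative, assuming all the
script needs is an absolute path to hand to sudo (find_cset is a
hypothetical helper, not part of the current patch):

import shutil
import sys

def find_cset():
    # Resolve an absolute path so that sudo can find cset even when it
    # is installed in the user's home directory rather than on root's PATH.
    cset = shutil.which('cset')
    if cset is None:
        sys.exit('cset not found on PATH')
    return cset

The sudo invocation would then use find_cset() instead of the plain
'cset' default.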

> ================
> Comment at: utils/benchmark.py:26
> +parser.add_argument('--revision', required=True)
> +parser.add_argument('--url',
> +                    default='http://localhost:8000/db_default/v4/link/submitRun')
> ----------------
> Could we have some help text here, please, saying that this is the LNT server URL?

Added. Will upload a new version soon.
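
Roughly along these lines (the exact wording of the help string is just
a suggestion):

parser.add_argument('--url',
                    default='http://localhost:8000/db_default/v4/link/submitRun',
                    help='URL of the LNT server submitRun endpoint')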

>
> ================
> Comment at: utils/benchmark.py:70
> +
> +    measurement_lines = [ x for x in lines if b'#' in x]
> +    for l in measurement_lines:
> ----------------
> Nit: extra unnecessary leading space inside brackets.

Removed.
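
The line now reads:

measurement_lines = [x for x in lines if b'#' in x]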

> ================
> Comment at: utils/benchmark.py:85
> +def perf(cmd):
> +    run(cmd)
> +    ret = {}
> ----------------
> Could you comment what this run is for, please?
>
> (I assume that it is to warm up the file cache or similar, but it'd be good to have this confirmed).

Added a comment.

> Also, this isn't in the try/except block that the later run is in. Any particular reason?

That was a bug. The try/except should be in the run function itself, so
that the error output is printed on failure for every caller.
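
Roughly what I have in mind (a sketch; the measured perf stat
invocation and its parsing are omitted):

import subprocess

def run(cmd):
    # With the try/except here, every caller gets the error output on failure.
    try:
        return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        print(e.output.decode())
        raise

def perf(cmd):
    # The first run only warms up the file cache; it is not measured.
    run(cmd)
    ret = {}
    # The measured runs under perf stat would follow here and fill ret.
    return ret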

> ================
> Comment at: utils/benchmark.py:103
> +    response = 'response' + suffix + '.txt'
> +    ret = perf(['../ld.lld', '@' + response, '-o', 't', '--no-threads'])
> +    ret['name'] = str(bench)
> ----------------
> I know that this would lead to less stable metrics, but it might be nice to have the "--no-threads" on a switch, probably defaulting to no threads. That way, if people want to compare their performance with and without threading, they can.

I added a --threads option.
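
Sketched out, with threading off by default so the metrics stay stable
(the help text is a placeholder):

parser.add_argument('--threads', action='store_true',
                    help='run lld with threading enabled (default: pass --no-threads)')

cmd = ['../ld.lld', '@' + response, '-o', 't']
if not args.threads:
    cmd.append('--no-threads')
ret = perf(cmd)
ret['name'] = str(bench)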

>
> ================
> Comment at: utils/benchmark.py:116
> +            'end_time' : now,
> +            'start_time' : now,
> +            'llvm_project_revision': args.revision
> ----------------
> This doesn't look right - why are we setting the start and end time to the same value?

Fixed.
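
The two timestamps are now taken around the actual run, e.g. (assuming
LNT accepts ISO-formatted times here; run_info is a stand-in name, and
the exact formatting may differ in the uploaded patch):

import datetime

start_time = datetime.datetime.utcnow().isoformat()
ret = perf(cmd)
end_time = datetime.datetime.utcnow().isoformat()

run_info = {
    'end_time': end_time,
    'start_time': start_time,
    'llvm_project_revision': args.revision,
}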

Thanks,
Rafael

