[llvm-dev] Benchmarks for LLVM-generated Binaries
Renato Golin via llvm-dev
llvm-dev at lists.llvm.org
Thu Sep 1 18:19:43 PDT 2016
On 2 September 2016 at 02:13, Dean Michael Berris <dean.berris at gmail.com> wrote:
> I think it should be possible to have a snapshot of it included. I don't know what the licensing implications are (I'm not a lawyer, but I know someone who is -- paging Danny Berlin).
The test-suite has a very large number of licenses (compared to LLVM),
so licensing should be less of a problem there. Though Dan can help
more than I can. :)
> I'm not as concerned about falling behind on versions there though mostly because it should be trivial to update it if we need it. Though like you, I agree this isn't the best way of doing it. :)
If we start using it more (maybe we should, at least for the
benchmarks; I've long wanted to do something decent there), then
we'd need to add a proper update procedure.
I'm fine with some checkout if it's a stable release, not trunk, as
that would make things a lot easier to update later (patch releases,
new versions, and so on).
> Thanks -- this doesn't tell me how to run the test though... I could certainly do it by hand (i.e. build the executables and run it) and I suspect I'm not alone in wanting to be able to do this easily through the CMake+Ninja (or other generator) workflow.
Ah, no, that was just for adding your test. :)
> Do you know if someone is working on that aspect?
This is *exactly* what Perf (the monitoring website) does, so you're
sure to get the same results on both sides if you run it locally like
that. I do.
You can choose to run down to a specific test/benchmark, so it's quick
and easy to use while developing, too.
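For the record, a rough sketch of that local workflow, assuming a
test-suite checkout next to the build directory (the clang path and
the chosen cache file are illustrative; see the test-suite docs for
the full set of options):

```shell
# Configure the test-suite against an installed Clang, using one of
# the provided CMake cache files (O3.cmake here) to set the flags.
mkdir test-suite-build && cd test-suite-build
cmake -G Ninja \
      -DCMAKE_C_COMPILER=/path/to/clang \
      -C ../test-suite/cmake/caches/O3.cmake \
      ../test-suite

# Build all the tests and benchmarks.
ninja

# Run the whole suite, collecting results as JSON
# (-j 1 keeps benchmark timings stable)...
llvm-lit -v -j 1 -o results.json .

# ...or run down to a single benchmark directory while developing.
llvm-lit -v SingleSource/Benchmarks/Misc
```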