<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Jul 18, 2021 at 8:58 PM Michael Kruse via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org">llvm-dev@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Sun, Jul 18, 2021 at 11:14 AM Stefanos Baziotis via<br>
llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:<br>
> Now, to the questions. First, there doesn't seem to be a common time unit for<br>
> "exec_time" among the different tests. For instance, SingleSource/ seems to use<br>
> seconds while MicroBenchmarks seems to use μs, so we can't reliably judge<br>
> changes. That said, I understand that micro-benchmarks differ in nature from<br>
> Single/MultiSource benchmarks, so maybe one should focus on one or the other<br>
> depending on what they're interested in.<br>
<br>
Usually one does not compare executions of the entire test-suite, but<br>
looks at which individual programs have regressed. In that scenario only<br>
relative changes per program matter, so μs are only ever compared to μs<br>
and seconds only to seconds.<br>
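For instance, a quick sketch of that per-program comparison (assuming lit's usual result schema of {"tests": [{"name": ..., "metrics": {"exec_time": ...}}]} — worth double-checking against your own out.json):<br>

```python
import json

def exec_times(path):
    """Map test name -> exec_time from an llvm-lit -o results file.
    Assumes the schema {"tests": [{"name": ..., "metrics": {"exec_time": ...}}]}."""
    with open(path) as f:
        data = json.load(f)
    return {t["name"]: t["metrics"]["exec_time"]
            for t in data.get("tests", [])
            if "exec_time" in t.get("metrics", {})}

def regressions(baseline, candidate, threshold=1.05):
    """Programs whose exec_time grew by more than `threshold`x.
    Units cancel out, because each program is only compared with itself."""
    out = {}
    for name, base in baseline.items():
        new = candidate.get(name)
        if new is not None and base > 0 and new / base > threshold:
            out[name] = new / base
    return out
```

Because each ratio divides a program's time by its own earlier time, it does not matter whether that program reports μs or seconds, as long as it is consistent between the two runs.<br>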
<br>
<br>
> In any case, it would at least be great if the JSON data contained the time unit per test,<br>
> but that is not happening either.<br>
<br>
What do you mean? Don't you get the exec_time per program?<br>
<br>
<br>
> Do you think that the lack of time unit info is a problem ? If yes, do you like the<br>
> solution of adding the time unit in the JSON or do you want to propose an alternative?<br>
<br>
You could also normalize the time unit emitted to the JSON, e.g. to seconds or milliseconds.<br>
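A minimal sketch of such a normalization, under the assumption from the question that tests under MicroBenchmarks/ report μs and everything else already reports seconds (both the schema and the unit assumptions would need verifying first):<br>

```python
MICROSECONDS_PER_SECOND = 1e6

def normalize_to_seconds(results):
    """Rewrite exec_time in place so every test reports seconds.
    Assumption (verify!): names containing "MicroBenchmarks" report
    microseconds; all other tests already report seconds."""
    for test in results.get("tests", []):
        metrics = test.get("metrics", {})
        if "exec_time" in metrics and "MicroBenchmarks" in test.get("name", ""):
            metrics["exec_time"] /= MICROSECONDS_PER_SECOND
    return results
```
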
<br>
><br>
> The second question has to do with re-running the benchmarks: I do<br>
> cmake + make + llvm-lit -v -j 1 -o out.json .<br>
> but if I run the latter a second time, it just does/shows nothing. Is there any reason<br>
> that the benchmarks can't be run a second time? Could I somehow run them again?<br>
<br>
Running the programs a second time has worked for me in the past.<br>
Remember to write the output to a different file, or the previous .json<br>
will be overwritten.<br>
<br>
<br>
> Lastly, slightly off-topic but while we're on the subject of benchmarking,<br>
> do you think it's reliable to run with -j <number of cores>? I'm a little bit afraid of<br>
> the shared caches (because misses should be counted in the CPU time, which<br>
> is what is measured in "exec_time" AFAIU)<br>
> and any potential multi-threading that the tests may use.<br>
<br>
It depends. You can run in parallel, but then you should increase the<br>
number of samples (executions) accordingly to counter the increased<br>
noise. Depending on how many cores your system has, it might not be<br>
worth it; instead, try to make the system as deterministic as possible<br>
(a single thread, thread affinity, no background processes, perf<br>
instead of timeit, avoiding context switches, etc.). To avoid the<br>
systematic bias of the same cache-sensitive programs always running in<br>
parallel with each other, use the --shuffle option.<br>
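Part of that determinism checklist can be scripted. For example, on Linux a benchmark process can be pinned to a single core before it starts; this is just a sketch (os.sched_setaffinity is Linux-specific, and the `cmd` and core number are placeholders):<br>

```python
import os
import subprocess

def run_pinned(cmd, cpu=0):
    """Run `cmd` pinned to a single CPU to reduce scheduling noise.
    Linux-only: os.sched_setaffinity is not available on other platforms."""
    def pin():
        # PID 0 means "the calling process", i.e. the freshly forked child.
        os.sched_setaffinity(0, {cpu})
    return subprocess.run(cmd, preexec_fn=pin)
```

The same effect can be had from the shell with taskset(1); the point is only that the benchmark never migrates between cores mid-run.<br>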
<br></blockquote><div>Also, depending on what you are trying to achieve (and what your platform target is), you could enable <a href="https://github.com/google/benchmark/blob/main/docs/perf_counters.md">perfcounter </a>collection; if instruction counts are sufficient (for example), the value will probably not vary much with multi-threading.</div><div><br></div><div>...but it's probably best to avoid system noise altogether. On Intel, afaik that includes disabling turbo boost and hyperthreading, along with Michael's recommendations.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Michael<br>
_______________________________________________<br>
LLVM Developers mailing list<br>
<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br>
<a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
</blockquote></div></div>