<div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Btw, when using perf (i.e., using TEST_SUITE_USE_PERF in cmake), it seems that perf runs both during the<br>build (i.e., make) and the run (i.e., llvm-lit) of the tests. It's not important but do you happen to know<br>why does this happen?</blockquote><div><br></div><div>It seems the one gathers measurements for the compilation command and the other for the run. My bad, I hadn't noticed.</div><div><br></div><div>- Stefanos </div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Στις Δευ, 19 Ιουλ 2021 στις 10:46 μ.μ., ο/η Stefanos Baziotis <<a href="mailto:stefanos.baziotis@gmail.com">stefanos.baziotis@gmail.com</a>> έγραψε:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi,<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Usually one does not compare executions of the entire test-suite, but<br>look for which programs have regressed. In this scenario only relative<br>changes between programs matter, so μs are only compared to μs and<br>seconds only compared to seconds.</blockquote><div><br></div><div>That's true, but there are different insights one can get from, say, a 30%</div><div>increase in a program that initially took 100μs and one which initially</div><div>took 10s.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">What do you mean? Don't you get the exec_time per program?</blockquote><div><br></div><div>Yes, but JSON file does not include the time _unit_. Actually, I think the correct phrasing</div><div>is "unit of time", not "time unit", my bad. In any case, I mean that you get</div><div>e.g., "exec_time": 4, but you don't know if this 4 is 4 seconds or</div><div>4 μs or whatever other unit of time.</div><div><br></div><div>For example, the only reason that it seems that MultiSource/ use</div><div>seconds is just because I ran a bunch of them manually (and because</div><div>some outputs saved by llvm-lit, which measure in seconds, match</div><div>the numbers on JSON).</div><div><br></div><div>If we know the unit of time per test case (or per X grouping of tests</div><div>for that matter), we could then, e.g., normalize the times, as you</div><div>suggest, or anyway, know the unit of time and act accordingly.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Running the programs a second time did work for me in the past.</blockquote><div><br></div><div>Ok, it seems it works for me if I wait, but it seems it behaves differently<br></div><div>the second time. Anyway, not important.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">It depends. You can run in parallel, but then you should increase the<br>number of samples (executions) appropriately to counter the increased<br>noise. Depending on how many cores your system has, it might not be<br>worth it, but instead try to make the system as deterministic as<br>possible (single thread, thread affinity, avoid background processes,<br>use perf instead of timeit, avoid context switches etc. ). 
To avoid<br>systematic bias because the same cache-sensitive programs always run<br>in parallel, use the --shuffle option.</blockquote><div><br></div><div>I see, thanks. I didn't know about the --shuffle option, interesting.</div><div><br></div><div>Btw, when using perf (i.e., using TEST_SUITE_USE_PERF in cmake), it seems that perf runs both during the</div><div>build (i.e., make) and the run (i.e., llvm-lit) of the tests. It's not important, but do you happen to know</div><div>why this happens?</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Also, depending on what you are trying to achieve (and what your platform target is), you could enable <a href="https://github.com/google/benchmark/blob/main/docs/perf_counters.md" target="_blank">perfcounter </a>collection;</blockquote><div><br></div><div>Thanks, that can be useful in a bunch of cases. I should note that the perf stats are not included in the</div><div>JSON file. Is the "canonical" way to access them to follow the pattern CMakeFiles/<benchmark name>.dir/<benchmark name>.time.perfstats?</div><div><br></div><div>For example, let's say that I want the perf stats for test-suite/SingleSource/Benchmarks/Adobe-C++/loop_unroll.cpp.</div><div>To find them, I should go to the same path but in the build directory, i.e., test-suite-build/SingleSource/Benchmarks/Adobe-C++/,</div><div>and then follow the pattern above, so the .perfstats file will be in test-suite-build/SingleSource/Benchmarks/Adobe-C++/CMakeFiles/loop_unroll.dir/loop_unroll.cpp.time.perfstats</div><div>(a small path-mapping sketch follows at the end of this message, after the quoted text).</div><div><br></div><div>Sorry for the long path strings, but I couldn't make it clear otherwise.</div><div><br></div><div>Thanks to both,</div><div>Stefanos</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jul 19, 2021 at 5:36 PM Mircea Trofin <<a href="mailto:mtrofin@google.com" target="_blank">mtrofin@google.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Jul 18, 2021 at 8:58 PM Michael Kruse via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Sun, Jul 18, 2021 at 11:14 AM Stefanos Baziotis via<br>
llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>>:<br>
> Now, to the questions. First, there doesn't seem to be a common time unit for<br>
> "exec_time" among the different tests. For instance, SingleSource/ seem to use<br>
> seconds while MicroBenchmarks seem to use μs. So, we can't reliably judge<br>
> changes. Although I get the fact that micro-benchmarks are different in nature<br>
> than Single/MultiSource benchmarks, so maybe one should focus only on<br>
> the one or the other depending on what they're interested in.<br>
<br>
Usually one does not compare executions of the entire test-suite, but<br>
looks for which programs have regressed. In this scenario only relative<br>
changes between programs matter, so μs are only compared to μs and<br>
seconds only compared to seconds.<br>
<br>
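For illustration, here is a minimal sketch of such a per-program comparison between two result files. It assumes lit's JSON layout (a top-level "tests" list whose entries carry a "metrics" dict with "exec_time"); the file arguments and the 5% threshold are placeholders, and the test-suite also ships utils/compare.py for this kind of comparison:<br>
<pre>
#!/usr/bin/env python3
# Sketch: report per-program exec_time changes between two lit result files.
import json, sys

def load_times(path):
    with open(path) as f:
        data = json.load(f)
    return {t["name"]: t["metrics"]["exec_time"]
            for t in data["tests"] if "exec_time" in t.get("metrics", {})}

base = load_times(sys.argv[1])   # e.g. baseline.json
new  = load_times(sys.argv[2])   # e.g. patched.json

for name in sorted(base):
    if name in new and base[name] > 0:
        change = (new[name] - base[name]) / base[name] * 100.0
        if abs(change) > 5.0:    # placeholder noise threshold
            print(f"{change:+7.1f}%  {name}")
</pre>
<br>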
<br>
> In any case, it would at least be great if the JSON data contained the time unit per test,<br>
> but that is not happening either.<br>
<br>
What do you mean? Don't you get the exec_time per program?<br>
<br>
<br>
> Do you think that the lack of time unit info is a problem? If yes, do you like the<br>
> solution of adding the time unit in the JSON or do you want to propose an alternative?<br>
<br>
You could also normalize the time unit that is emitted to JSON to s or ms.<br>
<br>
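If post-processing is easier than changing what the test-suite emits, here is a rough sketch along those lines. The per-directory unit map is an assumption based on what was observed in this thread, i.e. MicroBenchmarks in μs and the rest in seconds, so verify it before trusting the numbers:<br>
<pre>
#!/usr/bin/env python3
# Sketch: rewrite a lit result file so every exec_time is in seconds.
import json, sys

UNIT_SCALE = {"MicroBenchmarks": 1e-6}    # assumed: these report microseconds

def scale_for(test_name):
    for prefix, scale in UNIT_SCALE.items():
        if prefix in test_name:
            return scale
    return 1.0                            # assumed: already in seconds

with open(sys.argv[1]) as f:
    data = json.load(f)
for t in data.get("tests", []):
    metrics = t.get("metrics", {})
    if "exec_time" in metrics:
        metrics["exec_time"] *= scale_for(t["name"])
with open(sys.argv[2], "w") as f:
    json.dump(data, f, indent=2)
</pre>
<br>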
><br>
> The second question has to do with re-running the benchmarks: I do<br>
> cmake + make + llvm-lit -v -j 1 -o out.json .<br>
> but if I try to do the latter another time, it just does/shows nothing. Is there any reason<br>
> that the benchmarks can't be run a second time? Could I somehow run it a second time?<br>
<br>
Running the programs a second time did work for me in the past.<br>
Remember to change the output to another file or the previous .json<br>
will be overwritten.<br>
<br>
<br>
> Lastly, slightly off-topic but while we're on the subject of benchmarking,<br>
> do you think it's reliable to run with -j <number of cores>? I'm a little bit afraid of<br>
> the shared caches (because misses should be counted in the CPU time, which<br>
> is what is measured in "exec_time" AFAIU)<br>
> and any potential multi-threading that the tests may use.<br>
<br>
It depends. You can run in parallel, but then you should increase the<br>
number of samples (executions) appropriately to counter the increased<br>
noise. Depending on how many cores your system has, it might not be<br>
worth it, but instead try to make the system as deterministic as<br>
possible (single thread, thread affinity, avoid background processes,<br>
use perf instead of timeit, avoid context switches, etc.). To avoid<br>
systematic bias because the same cache-sensitive programs always run<br>
in parallel, use the --shuffle option.<br>
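If you do run in parallel with several samples, here is a small sketch of how the samples could be combined afterwards, one result file per run (file names and JSON keys are the same assumptions as in the sketches above):<br>
<pre>
#!/usr/bin/env python3
# Sketch: per-program median over several lit result files
# (e.g. run1.json run2.json run3.json) to damp the extra noise.
import json, statistics, sys

samples = {}                     # test name -> list of exec_time samples
for path in sys.argv[1:]:
    with open(path) as f:
        for t in json.load(f).get("tests", []):
            if "exec_time" in t.get("metrics", {}):
                samples.setdefault(t["name"], []).append(t["metrics"]["exec_time"])

for name, times in sorted(samples.items()):
    print(f"{statistics.median(times):12.6f}  {name}")
</pre>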
<br></blockquote><div>Also, depending on what you are trying to achieve (and what your platform target is), you could enable <a href="https://github.com/google/benchmark/blob/main/docs/perf_counters.md" target="_blank">perfcounter </a>collection; if instruction counts are sufficient (for example), the value will probably not vary much with multi-threading.</div><div><br></div><div>...but it's probably best to avoid system noise altogether. On Intel, afaik that includes disabling turbo boost and hyperthreading, along with Michael's recommendations.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Michael<br>
</blockquote></div></div>
</blockquote></div>
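<div><br></div><div>P.S. The small path-mapping sketch mentioned above. The directory layout is just the pattern from my example, so treat it as an assumption and double-check it on your setup.</div><pre>
#!/usr/bin/env python3
# Sketch: map a test-suite source file to its .time.perfstats file
# in the build tree, following the pattern described above.
import os

def perfstats_path(build_root, rel_source):
    # rel_source e.g. "SingleSource/Benchmarks/Adobe-C++/loop_unroll.cpp"
    rel_dir, fname = os.path.split(rel_source)
    name = os.path.splitext(fname)[0]
    return os.path.join(build_root, rel_dir, "CMakeFiles",
                        name + ".dir", fname + ".time.perfstats")

print(perfstats_path("test-suite-build",
                     "SingleSource/Benchmarks/Adobe-C++/loop_unroll.cpp"))
# -> test-suite-build/SingleSource/Benchmarks/Adobe-C++/CMakeFiles/loop_unroll.dir/loop_unroll.cpp.time.perfstats
</pre>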
</blockquote></div>