<div dir="ltr">You should certainly try it both ways, but it's good to isolate different effects from unrelated library code, especially if you're specifically working on things such as whether aligning branch targets is worthwhile, or choosing different instruction encodings to maximize dispatch width from 16- or 32-byte blocks.</div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Feb 27, 2017 at 1:53 PM, Kristof Beyls <span dir="ltr"><<a href="mailto:Kristof.Beyls@arm.com" target="_blank">Kristof.Beyls@arm.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word">
<br>
<div><span class="">
<blockquote type="cite">
<div>On 27 Feb 2017, at 11:32, Bruce Hoult <<a href="mailto:bruce@hoult.org" target="_blank">bruce@hoult.org</a>> wrote:</div>
<br class="m_4723031194749629615Apple-interchange-newline">
<div>
<div dir="ltr">Two other things:
<div><br>
</div>
<div>1) I get massively more stable execution times on 16.04 than on 14.04, on both x86 and ARM, because 16.04 performs far fewer gratuitous migrations of processes from one core to another, even without explicit pinning.</div>
<div><br>
</div>
<div>2) Turn off ASLR: "echo 0 > /proc/sys/kernel/randomize_va_space". As well as giving stable addresses for debugging repeatability, this also reduces execution-time variability caused by "random" conflicts in caches, hash collisions in branch prediction
or the BTB, and maybe even the uop cache.</div>
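<div>A gentler variant of the above, as a sketch (it assumes the util-linux setarch tool is available; "true" stands in for the actual benchmark binary), is to disable ASLR for a single process only rather than system-wide:</div>

```shell
# Show the current ASLR setting (2 = full randomization, 0 = off):
cat /proc/sys/kernel/randomize_va_space

# Disabling it system-wide needs root and persists until changed back:
#   echo 0 | sudo tee /proc/sys/kernel/randomize_va_space

# setarch -R disables address-space randomization for one process only;
# "true" is a stand-in for the real benchmark binary:
setarch "$(uname -m)" -R true && echo "ran with ASLR disabled"
```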
</div>
</div>
</blockquote>
<div><br>
</div>
</span><div>FWIW, I personally think it's better to keep ASLR turned on. It's better for the performance fluctuations in your experiments to come from the slight changes in code layout that ASLR introduces, as that gives some indication of how sensitive the specific program,
core, and environment are to layout changes. If you disable ASLR and see a big speed difference when evaluating a compiler patch, you still won't know whether it's down to some code layout change in a hot piece of code that your patch otherwise didn't touch at all.
Keeping ASLR turned on is far from perfect, though: to evaluate this properly, you might need to introduce even more code layout randomization into your experiments. I talked about this in a bit more detail at EuroLLVM last year, see <a href="https://www.youtube.com/watch?v=COmfRpnujF8" target="_blank">https://www.youtube.com/watch?v=COmfRpnujF8</a>.</div>
<div>Being able to determine more quickly whether a performance change is due to the intent of the compiler patch you've written, or due to a micro-architectural non-linearity (such as a big speed difference caused by a small code layout change), was one of the
main motivations for adding profile-annotated disassembly views to LNT, as demonstrated at <a href="http://blog.llvm.org/2016/06/using-lnt-to-track-performance.html" target="_blank">http://blog.llvm.org/2016/06/using-lnt-to-track-performance.html</a> and <a href="https://fosdem.org/2017/schedule/event/lnt/" target="_blank">https://fosdem.org/2017/schedule/event/lnt/</a>.
Beware that to use this feature, you'll need to use the cmake+lit infrastructure in the test-suite rather than the older make infrastructure. From lnt runtest, this is done by running "lnt runtest test-suite" instead of "lnt runtest nt".</div>
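<div>As a rough sketch, such a cmake+lit based run might be invoked like this (the sandbox and compiler paths are placeholders, and the exact flags should be checked against your LNT version's "lnt runtest test-suite --help"):</div>

```shell
# Hypothetical invocation of the cmake+lit based runner; paths are
# placeholders, verify flags against your installed LNT.
lnt runtest test-suite \
  --sandbox SANDBOX \
  --cc <path-to-my-clang> \
  --test-suite /data/repo/test-suite \
  --build-threads 8
```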
<div><br>
</div>
<div>Thanks,</div>
<div><br>
</div>
<div>Kristof</div><div><div class="h5">
<br>
<blockquote type="cite">
<div>
<div dir="ltr">
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Mon, Feb 27, 2017 at 12:36 PM, Kristof Beyls via llvm-dev
<span dir="ltr"><<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Mikael,<br>
<br>
Some noisiness in benchmark results is expected, but the numbers you see seem higher than I'd expect.<br>
A number of tricks people use to get lower-noise results, with the lnt runtest nt command-line options that enable them in brackets:<br>
* Only build the benchmarks in parallel, but do the actual running of the benchmark code at most one at a time. (--threads 1 --build-threads 6).<br>
* Make lnt use linux perf to get more accurate timing for short-running benchmarks (--use-perf=1)<br>
* Pin the running benchmark to a specific core, so the OS doesn't move the benchmark process from core to core. (--make-param="RUNUNDER=taskset -c 1")<br>
* Only run the programs that are marked as a benchmark; some of the tests in the test-suite are not intended to be used as a benchmark (--benchmarking-only)<br>
* Make sure each program gets run multiple times, so that LNT has a higher chance of recognizing which programs are inherently noisy (--multisample=3)<br>
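Put together, a full invocation using all of the flags above might look roughly like this (a sketch only; the paths are placeholders, and the quoting of --make-param may need adjusting for your shell):<br>

```shell
lnt runtest nt \
  --sandbox SANDBOX \
  --cc <path-to-my-clang> \
  --test-suite /data/repo/test-suite \
  --threads 1 --build-threads 6 \
  --use-perf=1 \
  --make-param="RUNUNDER=taskset -c 1" \
  --benchmarking-only \
  --multisample=3
```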
<br>
I hope this is the kind of answer you were looking for?<br>
Do the above measures reduce the noisiness to acceptable levels for your setup?<br>
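As a rough, illustrative way to judge whether the remaining noise is acceptable (this helper is not part of LNT, and the timings are made-up multisample numbers), one could compute the coefficient of variation per benchmark:<br>

```python
# Illustrative only: judge run-to-run noise from multisample timings.
import statistics

def noise_ratio(samples):
    """Coefficient of variation (stddev / mean) of a list of timings."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Made-up timings in seconds, e.g. from --multisample=3:
runs = [1.02, 0.98, 1.01]
print(f"noise ratio: {noise_ratio(runs):.3f}")  # prints "noise ratio: 0.021"
```

A ratio of a few percent or more suggests differences of that magnitude between two compiler revisions cannot be trusted without more samples.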
<br>
Thanks,<br>
<br>
Kristof<br>
<div class="m_4723031194749629615HOEnZb">
<div class="m_4723031194749629615h5"><br>
<br>
> On 27 Feb 2017, at 09:46, Mikael Holmén via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a>> wrote:<br>
><br>
> Hi,<br>
><br>
> I'm trying to run the benchmark suite:<br>
> <a href="http://llvm.org/docs/TestingGuide.html#test-suite-quickstart" rel="noreferrer" target="_blank">http://llvm.org/docs/TestingGuide.html#test-suite-quickstart</a><br>
><br>
> I'm doing it the lnt way, as described at:<br>
> <a href="http://llvm.org/docs/lnt/quickstart.html" rel="noreferrer" target="_blank">http://llvm.org/docs/lnt/quickstart.html</a><br>
><br>
> I don't know what to expect, but the results seem to be quite noisy and unstable. E.g. I've done two runs on two different commits that only differ by a space in CODE_OWNERS.txt on my 12-core Ubuntu 14.04 machine with:<br>
><br>
> lnt runtest nt --sandbox SANDBOX --cc <path-to-my-clang> --test-suite /data/repo/test-suite -j 8<br>
><br>
> And then I get the following top execution time regressions:<br>
> <a href="http://i.imgur.com/sv1xzlK.png" rel="noreferrer" target="_blank">http://i.imgur.com/sv1xzlK.png</a><br>
><br>
> The numbers bounce around a lot if I do more runs.<br>
><br>
> Given the amount of noise I see here, I don't know how to sort out significant regressions if I actually make a real change in the compiler.<br>
><br>
> Are the above results expected?<br>
><br>
> How is this meant to be used?<br>
><br>
><br>
> As a bonus question, if I instead run the benchmarks with an added -m32:<br>
> lnt runtest nt --sandbox SANDBOX --cflag=-m32 --cc <path-to-my-clang> --test-suite /data/repo/test-suite -j 8<br>
><br>
> I get three failures:<br>
><br>
> --- Tested: 2465 tests --<br>
> FAIL: MultiSource/Applications/ClamAV/clamscan.compile_time (1 of 2465)<br>
> FAIL: MultiSource/Applications/ClamAV/clamscan.execution_time (494 of 2465)<br>
> FAIL: MultiSource/Benchmarks/DOE-ProxyApps-C/XSBench/XSBench.execution_time (495 of 2465)<br>
><br>
> Is this known/expected, or am I doing something stupid?<br>
><br>
> Thanks,<br>
> Mikael<br>
> _______________________________________________<br>
> LLVM Developers mailing list<br>
> <a href="mailto:llvm-dev@lists.llvm.org" target="_blank">llvm-dev@lists.llvm.org</a><br>
> <a href="http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" rel="noreferrer" target="_blank">http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br>
<br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
</div></div></div><span class="HOEnZb"><font color="#888888">
</font></span></div>
</blockquote></div><br></div>