<div dir="ltr">Nadav,<div><br></div><div>We see multiple regressions after r172868 in ISPC compiler (based on LLVM optimizer). The regressions are due to spill/reloads, which are due to increase register pressure. This matches Zach's analysis. We've filed bug 17285 for this problem.<div>

Dmitry.

On Wed, Jul 10, 2013 at 1:12 PM, Zach Devito <zdevito@stanford.edu> wrote:

I've narrowed this down to a single kernel (kernel.ll), which does a fixed-size matrix-matrix multiply:

# ~/llvm-32-final/bin/llc kernel.ll -o kernel32.s
# ~/llvm-33-final/bin/llc kernel.ll -o kernel33.s
# ~/llvm-32-final/bin/clang++ harness.cpp kernel32.s -o harness32
# ~/llvm-32-final/bin/clang++ harness.cpp kernel33.s -o harness33
# time ./harness32
real    0m0.584s
user    0m0.581s
sys     0m0.001s
# time ./harness33
real    0m0.730s
user    0m0.725s
sys     0m0.001s

If you look at kernel33.s, it has a register spill/reload in the inner loop. The spill doesn't appear in the LLVM 3.2 output, and it disappears from the 3.3 output if I remove the "align 8" attributes in kernel.ll that make the loads unaligned. Do the two-instruction unaligned loads increase register pressure, or is something else going on?
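For context, the loads in the inner loop have roughly this shape (a simplified sketch, not the actual kernel.ll, assuming an AVX target such as llc -mcpu=corei7-avx):

; Simplified sketch (not the real kernel.ll): an accumulating inner loop whose
; 256-bit loads only promise 8-byte alignment, so LLVM 3.3 splits each of them.
; Dropping the "align 8" restores the type's natural 32-byte alignment and the
; single-instruction load.
define <8 x float> @inner(<8 x float>* %A, <8 x float>* %B) {
entry:
  br label %loop

loop:
  %i    = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %acc  = phi <8 x float> [ zeroinitializer, %entry ], [ %acc.next, %loop ]
  %aptr = getelementptr <8 x float>* %A, i64 %i
  %bptr = getelementptr <8 x float>* %B, i64 %i
  %a = load <8 x float>* %aptr, align 8
  %b = load <8 x float>* %bptr, align 8
  %prod = fmul <8 x float> %a, %b
  %acc.next = fadd <8 x float> %acc, %prod
  %i.next = add i64 %i, 1
  %done = icmp eq i64 %i.next, 8
  br i1 %done, label %exit, label %loop

exit:
  ret <8 x float> %acc.next
}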
<span class="HOEnZb"><font color="#888888">
<div><br></div><div>Zach</div></font></span><div class="HOEnZb"><div class="h5"><div><br><div class="gmail_quote">On Tue, Jul 9, 2013 at 11:33 PM, Zach Devito <span dir="ltr"><<a href="mailto:zdevito@stanford.edu" target="_blank">zdevito@stanford.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Thanks for all the the info! I'm still in the process of narrowing down the performance difference in my code. I'm no longer convinced its related to only the unaligned loads/stores alone since extracting this part of the kernel makes the performance difference disappear. I will try to narrow down what is going on and if it seems related LLVM, I will post an example. Thanks again,<div>

Zach

On Tue, Jul 9, 2013 at 10:15 PM, Nadav Rotem <nrotem@apple.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div>Hi, </div><div><br></div><div>Yes. On Sandybridge 256-bit loads/stores are double pumped. This means that they go in one after the other in two cycles. On Haswell the memory ports are wide enough to allow a 256bit memory operation in one cycle. So, on Sandybridge we split unaligned memory operations into two 128bit parts to allow them to execute in two separate ports. This is also what GCC and ICC do. </div>

It is very possible that the decision to split the wide vectors causes a regression. If the memory ports are busy, it is better to double-pump them and save the cost of the insert/extract-subvector instructions. Unfortunately, during ISel we don't have a good way to estimate port pressure. In any case, it is a good idea to revisit the heuristic I put in and check whether it matches the Sandy Bridge optimization guide. If I remember correctly the optimization guide does not have much information on this, but Elena looked it over and said that it made sense.

BTW, you can validate that this is the problem using the IACA tool. It performs static analysis on your binary and tells you where the critical path is: http://software.intel.com/en-us/articles/intel-architecture-code-analyzer

Thanks,
Nadav

On Jul 9, 2013, at 10:01 PM, Eli Friedman <eli.friedman@gmail.com> wrote:
<br><blockquote type="cite"><div style="letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px">On Tue, Jul 9, 2013 at 9:01 PM, Zach Devito <<a href="mailto:zdevito@gmail.com" target="_blank">zdevito@gmail.com</a>> wrote:<br>
<blockquote type="cite">I'm seeing a difference in how LLVM 3.3 and 3.2 emit unaligned vector loads<br>on AVX.<br>3.3 is splitting up an unaligned vector load but in 3.2, it was emitted as a<br>single instruction (details below).<br>
In a matrix-matrix inner-kernel, I see a ~25% decrease in performance, which<br>seems to be due to this.<br><br>Any ideas why this changed? Thanks!<br></blockquote><br>This was intentional; apparently doing it with two instructions is<br>
supposed to be faster. See r172868/r172894.<br><br>Adding Nadav in case he has anything more to say.<br><br>-Eli</div></blockquote></div><br></div></div></div></blockquote></div><br>