<div>I will test your suggestion, but I designed the test case to load the memory directly into <4 x float> registers, so there are absolutely no permutation, swizzle, or move operations. Maybe the heuristic should not only count the depth but also consider the surrounding load/store operations.<br>
</div><div>Are the load/store operations vectorized, too? (I designed the test case to fit completely into the SSE registers.)</div><br><div class="gmail_quote">2012/2/10 Hal Finkel <span dir="ltr"><<a href="mailto:hfinkel@anl.gov">hfinkel@anl.gov</a>></span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Carl-Philip,<br>
<br>
The reason that this does not vectorize is that it cannot vectorize the<br>
stores; this leaves only the mul-add chains (and some chains with<br>
loads), and they only have a depth of 2 (the threshold is 6).<br>
<br>
If you give clang -mllvm -bb-vectorize-req-chain-depth=2, then it will<br>
vectorize. The reason the heuristic has such a large default value is to<br>
prevent cases where it costs more to permute all of the necessary values<br>
into and out of the vector registers than is saved by vectorizing. Does<br>
the code generated with -bb-vectorize-req-chain-depth=2 run faster than<br>
the unvectorized code?<br>
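A minimal sketch of the kind of chain being described (the names and shapes here are my own illustration, not the attached test case): each output element depends on only two multiplies and one add, so the longest dependence chain is roughly depth 2, below bb-vectorize's default required chain depth of 6.

```c
/* Hypothetical illustration: four independent mul-add chains, each of
 * depth ~2. The candidate pairs the vectorizer sees here form chains
 * far shorter than the default -bb-vectorize-req-chain-depth of 6,
 * so by default nothing is vectorized. */
static void mul_add_chains(const float *a, const float *b,
                           const float *c, const float *d, float *result) {
    for (int i = 0; i < 4; ++i)
        result[i] = a[i] * b[i] + c[i] * d[i];  /* depth-2 chain per lane */
}
```

Lowering the threshold (as suggested above) lets such short chains pair up, at the risk of vectorizing cases where the shuffle overhead outweighs the gain.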
<br>
The heuristic can certainly be improved, and these kinds of test cases<br>
are very important to that improvement process.<br>
<span class="HOEnZb"><font color="#888888"><br>
-Hal<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
On Thu, 2012-02-09 at 13:27 +0100, Carl-Philip Hänsch wrote:<br>
> I have a super-simple test case (a 4x4 matrix * 4-vector multiply) which gets<br>
> correctly unrolled but is not vectorized by -bb-vectorize. (I used<br>
> llvm 3.1svn)<br>
> I attached the test case so you can see what is going wrong there.<br>
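The attached file is not reproduced in the archive; a plausible shape for a 4x4 matrix * 4-vector kernel (my reconstruction, not the original test case) would be:

```c
/* Hypothetical reconstruction of the test case's shape: a 4x4 matrix
 * times a 4-vector. Once the loops are fully unrolled, each out[i] is
 * a short chain of four multiplies and three adds over contiguous,
 * SSE-register-sized data. */
static void mat4_mul_vec4(float m[4][4], const float v[4], float out[4]) {
    for (int i = 0; i < 4; ++i) {
        out[i] = 0.0f;
        for (int j = 0; j < 4; ++j)
            out[i] += m[i][j] * v[j];
    }
}
```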
><br>
> 2012/2/3 Hal Finkel <<a href="mailto:hfinkel@anl.gov">hfinkel@anl.gov</a>><br>
> As some of you may know, I committed my basic-block<br>
> autovectorization<br>
> pass a few days ago. I encourage anyone interested to try it<br>
> out (pass<br>
> -vectorize to opt or -mllvm -vectorize to clang) and provide<br>
> feedback.<br>
> Especially in combination with -unroll-allow-partial, I have<br>
> observed<br>
> some significant benchmark speedups, but, I have also observed<br>
> some<br>
> significant slowdowns. I would like to share my thoughts, and<br>
> hopefully<br>
> get feedback, on next steps.<br>
><br>
> 1. "Target Data" for vectorization - I think that in order to<br>
> improve<br>
> the vectorization quality, the vectorizer will need more<br>
> information<br>
> about the target. This information could be provided in the<br>
> form of a<br>
> kind of extended target data. This extended target data might<br>
> contain:<br>
> - What basic types can be vectorized, and how many of them<br>
> will fit<br>
> into (the largest) vector registers<br>
> - What classes of operations can be vectorized (division,<br>
> conversions /<br>
> sign extension, etc. are not always supported)<br>
> - What alignment is necessary for loads and stores<br>
> - Is scalar-to-vector free?<br>
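The fields listed above could be sketched as a simple description record. This structure and its field names are purely illustrative (my own invention, not part of any LLVM interface), and the example values are assumed rather than measured:

```c
#include <stdbool.h>

/* Hypothetical sketch of "extended target data" for a vectorizer;
 * field names are illustrative, not an LLVM API. */
struct VectorTargetInfo {
    unsigned max_vector_bits;       /* widest vector register, e.g. 128 for SSE */
    bool     can_vectorize_div;     /* is vector division supported? */
    bool     can_vectorize_conv;    /* conversions / sign extension supported? */
    unsigned required_alignment;    /* alignment needed for vector loads/stores */
    bool     scalar_to_vector_free; /* is inserting a scalar into a vector free? */
};

/* Example values loosely describing an SSE-class target (assumed). */
static const struct VectorTargetInfo sse_like = {
    .max_vector_bits = 128,
    .can_vectorize_div = true,
    .can_vectorize_conv = false,
    .required_alignment = 16,
    .scalar_to_vector_free = false,
};
```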
><br>
> 2. Feedback between passes - We may want to implement a closer<br>
> coupling<br>
> between optimization passes than currently exists.<br>
> Specifically, I have<br>
> in mind two things:<br>
> - The vectorizer should communicate more closely with the<br>
> loop<br>
> unroller. First, the loop unroller should try to unroll to<br>
> preserve<br>
> maximal load/store alignments. Second, I think it would make a<br>
> lot of<br>
> sense to be able to unroll and, only if this helps<br>
> vectorization, should<br>
> the unrolled version be kept in preference to the original.<br>
> With basic<br>
> block vectorization, it is often necessary to (partially)<br>
> unroll in<br>
> order to vectorize. Even when we also have real loop<br>
> vectorization,<br>
> however, I still think that it will be important for the loop<br>
> unroller<br>
> to communicate with the vectorizer.<br>
> - After vectorization, it would make sense for the<br>
> vectorization pass<br>
> to request further simplification, but only on those parts of<br>
> the code<br>
> that it modified.<br>
><br>
> 3. Loop vectorization - It would be nice to have, in addition<br>
> to<br>
> basic-block vectorization, a more-traditional loop<br>
> vectorization pass. I<br>
> think that we'll need a better loop analysis pass in order for<br>
> this to<br>
> happen. Some of this was started in LoopDependenceAnalysis,<br>
> but that<br>
> pass is not yet finished. We'll need something like this to<br>
> recognize<br>
> affine memory references, etc.<br>
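For reference, an affine memory reference is one whose address is a linear function of the loop induction variables. An illustrative example (mine, not from the message) of the access patterns such an analysis would need to recognize:

```c
/* Illustrative only: both a[i] and b[2*i + 1] are affine references
 * (linear in the induction variable i). A loop-dependence analysis
 * must recognize such patterns to prove that iterations are
 * independent and vectorization is safe. */
static void affine_example(float *a, const float *b, int n) {
    for (int i = 0; i < n; ++i)
        a[i] = b[2 * i + 1] + 1.0f;  /* strided, affine access */
}
```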
><br>
> I look forward to hearing everyone's thoughts.<br>
><br>
> -Hal<br>
><br>
> --<br>
> Hal Finkel<br>
> Postdoctoral Appointee<br>
> Leadership Computing Facility<br>
> Argonne National Laboratory<br>
><br>
> _______________________________________________<br>
> LLVM Developers mailing list<br>
> <a href="mailto:LLVMdev@cs.uiuc.edu">LLVMdev@cs.uiuc.edu</a> <a href="http://llvm.cs.uiuc.edu" target="_blank">http://llvm.cs.uiuc.edu</a><br>
> <a href="http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev" target="_blank">http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev</a><br>
><br>
<br>
--<br>
Hal Finkel<br>
Postdoctoral Appointee<br>
Leadership Computing Facility<br>
Argonne National Laboratory<br>
<br>
</div></div></blockquote></div><br>