<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Dec 19, 2013 at 2:43 PM, Tim Northover <span dir="ltr"><<a href="mailto:t.p.northover@gmail.com" target="_blank">t.p.northover@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">> As per Renato comment above, vmla instruction is NEON instruction while vmfa is VFP instruction. Correct me if i am wrong on this.<br>
<br>
My version of the ARM architecture reference manual (v7 A & R) lists<br>
versions requiring NEON and versions requiring VFP. (Section<br>
A8.8.337). Split in just the way you'd expect (SIMD variants need<br>
NEON).<br></blockquote><div><br></div><div>I will check on this part.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im"><br>
> It may seem that total number of cycles are more or less same for single vmla<br>
> and vmul+vadd. However, when vmul+vadd combination is used instead of vmla,<br>
> then intermediate results will be generated which needs to be stored in memory<br>
> for future access.<br>
<br>
</div>Well, it increases register pressure slightly I suppose, but there's<br>
no need to store anything to memory unless that gets critical.<br>
<div class="im"><br>
> Correct me if i am wrong on this, but my observation till date have shown this.<br>
<br>
</div>Perhaps. Actual data is needed, I think, if you seriously want to<br>
change this behaviour in LLVM. The test-suite might be a good place to<br>
start, though it'll give an incomplete picture without the externals<br>
(SPEC & other things).<br>
<br>
Of course, if we're just speculating we can carry on.<br></blockquote><div><br></div><div>I wasn't speculating. Let's take the example of a simple 3x3 matrix multiplication (no loops; all multiplications and additions are hard-coded - basically, all the operations are expanded, <br>
e.g. Result[0][0] = A[0][0]*B[0][0] + A[0][1]*B[1][0] + A[0][2]*B[2][0], and so on for all 9 elements of the result).<br><br></div><div>If I compile the above code with "clang -O3 -mcpu=cortex-a8 -mfpu=vfpv3-d16" (my ARM core has only 16 floating-point registers, hence vfpv3-d16), there are 27 vmul, 18 vadd, 23 store, and 30 load ops in total.<br>
</div><div>If the same code is compiled with GCC using the same options, there are 9 vmul, 18 vmla, 9 store, and 20 load ops. So it's clear that extra load/store ops get added with Clang because it does not emit the vmla instruction. Won't this lead to performance degradation? <br>
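<div>For reference, the fully expanded kernel I'm compiling looks like this (a minimal C sketch; the function name matmul3x3 is just illustrative - in the actual test the expressions are written out the same way). Each a*b + c term here is a candidate for vmla, or for a vmul+vadd pair, depending on what the backend chooses to emit:<br></div>

```c
/* Hand-expanded 3x3 single-precision matrix multiply: no loops,
 * every multiply and add written out explicitly.  Each accumulation
 * of the form x*y + z is the pattern that may become vmla (or
 * vmul+vadd) on an ARM VFP/NEON target at -O3. */
static void matmul3x3(const float A[3][3], const float B[3][3],
                      float R[3][3]) {
    R[0][0] = A[0][0]*B[0][0] + A[0][1]*B[1][0] + A[0][2]*B[2][0];
    R[0][1] = A[0][0]*B[0][1] + A[0][1]*B[1][1] + A[0][2]*B[2][1];
    R[0][2] = A[0][0]*B[0][2] + A[0][1]*B[1][2] + A[0][2]*B[2][2];
    R[1][0] = A[1][0]*B[0][0] + A[1][1]*B[1][0] + A[1][2]*B[2][0];
    R[1][1] = A[1][0]*B[0][1] + A[1][1]*B[1][1] + A[1][2]*B[2][1];
    R[1][2] = A[1][0]*B[0][2] + A[1][1]*B[1][2] + A[1][2]*B[2][2];
    R[2][0] = A[2][0]*B[0][0] + A[2][1]*B[1][0] + A[2][2]*B[2][0];
    R[2][1] = A[2][0]*B[0][1] + A[2][1]*B[1][1] + A[2][2]*B[2][1];
    R[2][2] = A[2][0]*B[0][2] + A[2][1]*B[1][2] + A[2][2]*B[2][2];
}
```

<div>That is 27 multiplies and 18 adds, which matches the 27 vmul + 18 vadd Clang emits; GCC instead folds each add into the preceding multiply, giving 9 vmul + 18 vmla.<br></div>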
<br>I would also like to know how the accuracy of a single vmla compares with that of a vmul/vadd pair. <br><br></div></div><br>-- <br>With regards,<br>Suyog Sarda<br>
</div></div>