[LLVMdev] Is there any llvm neon intrinsic that maps to vmla.f32 instruction ?

Sebastien DELDON-GNB sebastien.deldon at st.com
Fri Feb 8 04:28:30 PST 2013


Hi Renato,

Thanks for the answer, it confirms what I was suspecting. My problem is that this behavior is controlled by vmlx-forwarding on Cortex-A9, and despite asking on this list, I couldn't get a clear understanding of what this option is meant for.
So here are my new questions:
Why is vmlx-forwarding enabled by default for cortex-a9? Is it to guarantee correctness, or for performance? I've made some experiments, and DISABLING vmlx-forwarding for cortex-a9 leads to generation of more vmla/vmls.f32 instructions and significantly improves some benchmarks. I've not yet encountered a case where it significantly degrades performance or gives incorrect answers.
Thus my goal is to use my front-end to generate LLVM NEON intrinsics that map to vmla/vmls.f32 when I think it is appropriate, rather than relying on disabling/enabling vmlx-forwarding.

Best Regards
Seb

From: Renato Golin [mailto:renato.golin at linaro.org]
Sent: Friday, February 08, 2013 11:54 AM
To: Sebastien DELDON-GNB
Cc: llvmdev at cs.uiuc.edu
Subject: Re: [LLVMdev] Is there any llvm neon intrinsic that maps to vmla.f32 instruction ?

On 8 February 2013 10:40, Sebastien DELDON-GNB <sebastien.deldon at st.com> wrote:
Hi all,

Everything is in the title: I would like to enforce generation of the vmla.f32 instruction for scalar operations on cortex-a9, so is there an LLVM NEON intrinsic available for that?

Hi Sebastien,

LLVM doesn't use intrinsics when there is a clear way of representing the same thing in standard IR. In the case of VMLA, it is generated from a pattern:

%mul = mul <N x type> %a, %b
%sum = add <N x type> %mul, %c

So, if you generate FAdd(FMul(a, b), c), you'll probably get a VMLA.
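As a sketch (my example, not part of the original thread), this is the shape of source code a front-end would lower into that FMul-then-FAdd pattern. The function name and flags are illustrative assumptions; whether the backend actually emits vmla.f32 still depends on the Cortex-A9 vmlx-forwarding setting discussed above.

```c
/* Illustrative scalar multiply-accumulate: a*b + c.
 * Compiled with something like:
 *   clang -O2 -target armv7a-linux-gnueabihf -mcpu=cortex-a9
 * the two operations below lower to FMul followed by FAdd,
 * which is the pattern the VMLA selection can match. */
float mla_f32(float a, float b, float c) {
    float mul = a * b;   /* becomes FMul in the IR */
    return mul + c;      /* FAdd consuming the intermediate */
}
```

Note that `mul` is used only once, feeding the add, which matches Renato's caveat below: if the intermediate result had another use, the backend would likely keep separate VMUL/VADD instructions instead.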

It's not common, but also not impossible, that the two instructions will be reordered or even removed. So you need to make sure the intermediate result is not used elsewhere (or it'll probably use VMUL/VADD), make sure the final result is used (or it'll be removed), and keep the body of the function/basic block small (to avoid reordering).

cheers,
--renato
