[LLVMdev] Poor floating point optimizations?

Sdadsda Sdasdaas llvmuser at yahoo.com
Sat Nov 20 14:13:53 PST 2010


I wanted to use LLVM for my math parser, but it seems that its floating point 
optimizations are poor.

For example, consider this C code:
float foo(float x) { return x+x+x; }

and here is the code generated by the "optimized" live demo:
define float @foo(float %x) nounwind readnone {
entry:
  %0 = fmul float %x, 2.000000e+00   ; <float> [#uses=1]
  %1 = fadd float %0, %x             ; <float> [#uses=1]
  ret float %1
}


So what is going on? I mean, why was the whole thing not replaced by 
%0 = fmul float %x, 3.0?
I thought that maybe LLVM follows strict IEEE floating point rules concerning 
exceptions, subnormals, NaN, Inf, and so on. But (x+x) was in fact transformed 
into %x * 2.0, which is even stranger, because on many architectures MUL takes 
more cycles than ADD (the reason you would rather use LEA than IMUL in x86 code).
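
Out of curiosity, I also tried to convince myself that folding (x+x)+x into 
3*x cannot even change the result under IEEE semantics: doubling a float is 
exact, so as far as I can tell both forms come down to a single rounding of 
3x. Below is a small brute-force sketch of mine (not from any LLVM source; it 
assumes strict single-precision evaluation, i.e. compiled without -ffast-math 
and using SSE math rather than x87 extended precision) that compares the two 
over every finite positive float:

// My own sketch: does (x+x)+x ever differ from 3.0f*x for a
// finite positive float? Assumes strict single-precision
// evaluation (no -ffast-math, no x87 extended precision).
#include <cstdio>
#include <cstring>
#include <stdint.h>

int main() {
  for (uint32_t bits = 0; bits < 0x7f800000u; ++bits) { // stop at +Inf
    float x;
    std::memcpy(&x, &bits, sizeof(x));
    volatile float sum = (x + x) + x; // volatile blocks constant folding
    volatile float mul = 3.0f * x;
    if (sum != mul) {
      std::printf("mismatch at bit pattern 0x%08x\n", (unsigned)bits);
      return 1;
    }
  }
  std::printf("no mismatch found\n");
  return 0;
}

If that reasoning is right, the fold would be value-safe, which makes the 
missed optimization even more puzzling.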

Could someone explain what is going on? Maybe there are special optimization 
passes for such scenarios, but I have been playing with them (in a C++ app) 
and did not achieve anything. And to be honest, those optimization passes are 
not well documented.
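
For reference, here is roughly what I have been trying in my C++ app (a 
minimal sketch against the 2.8-era PassManager API; the choice of passes is 
my own guess at what should handle this, not something the docs told me):

#include "llvm/Module.h"
#include "llvm/PassManager.h"
#include "llvm/Transforms/Scalar.h"

// Run the passes I expected to fold x+x+x into x*3 (my guess:
// reassociation followed by instruction combining).
void optimizeModule(llvm::Module &M) {
  llvm::PassManager PM;
  PM.add(llvm::createReassociatePass());          // reassociate expressions
  PM.add(llvm::createInstructionCombiningPass()); // peephole combining
  PM.run(M);
}

Even with these two passes, the fmul/fadd pair shown above survives.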

With regards,
Bob D.