[LLVMdev] optimization in llvm-gcc vs opt

Ryan M. Lefever lefever at crhc.illinois.edu
Tue Apr 28 21:20:52 PDT 2009


I have a piece of C code, and when I compile it to bitcode using 
llvm-gcc with -O3, the result differs from what I get when I compile 
with llvm-gcc at no optimization and pipe the output to opt running 
the same passes that -O3 uses.  In particular, llvm-gcc with -O3 
produces int-to-ptr casts, while the optimization performed by opt 
does not.  Why do the two methods differ, and how do they compare? 
Does one produce more optimized code?  I am asking because I later 
run my own transformations on the bitcode, and they cannot handle 
certain code, in particular int-to-ptr casts.
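
For reference, the problematic instructions look roughly like the 
following in the -O3 bitcode (a made-up fragment in LLVM 2.5 assembly 
syntax; the actual values and types in my code vary):

; hypothetical fragment, not taken from my actual code
%addr = add i64 %base, 4
%p = inttoptr i64 %addr to i32*
%v = load i32* %p

The bitcode produced through opt reaches the same memory without the 
intermediate integer-to-pointer cast.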

I am using llvm 2.5.  When I optimize with llvm-gcc, I use the 
following command:

llvm-gcc -O3 -emit-llvm -c -o tmp.bc tmp.c

When I use opt for optimization, I run:

llvm-gcc -emit-llvm -c -o - tmp.c | opt -f -o tmp.bc <passes used by -O3>
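
A quick way to see the difference is to disassemble each result with 
llvm-dis and grep for the casts, e.g.:

llvm-dis -o - tmp.bc | grep inttoptr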

Note: I determined the passes used by -O3 by running the following:

echo | $llvm/bin/llvm-gcc -emit-llvm -c -fdebug-pass-arguments -x c -O3 -o /dev/null -

Regards,
Ryan


