[LLVMdev] optimization in llvm-gcc vs opt
baldrick at free.fr
Tue Apr 28 23:21:20 PDT 2009
> I have a piece of C code, and when I compile it to bitcode using
> llvm-gcc with -O3, the result is different than if I compile using
> llvm-gcc with no optimization, but pipe the output to opt running the
> same passes as -O3 uses. In particular, llvm-gcc with -O3 leads to
> int-to-ptr casts while the optimization performed by opt does not.
The problem with compiling using llvm-gcc with no optimization is
that gcc itself does not do any constant folding, unlike at -O3.
This is not an LLVM optimization, this is gcc itself. It can cause
quite significant differences. This is why I originally suggested
using -O1 -mllvm -disable-llvm-optzns. Also, last time I checked,
"opt -O3" and llvm-gcc -O3 didn't run exactly the same set of passes,
or ran them in a different order (they get out of sync easily). But I reckon
the constant folding bit is most likely to be the cause. In particular
gcc simplifies int-to-ptr type things a bit more than LLVM because it
knows additional stuff about language semantics. Finally, llvm-gcc is
not the same as passing opt the list of passes it runs, because
some passes are run by llvm-gcc on each function as it is output, and
the rest run when the entire module is output. This results in differences
due to opt scheduling the passes differently. I think "opt -O3"
tries to imitate what llvm-gcc does.
More information about the llvm-dev mailing list