[LLVMdev] llvm-ld optimization options

Duncan Sands baldrick at free.fr
Fri Apr 18 05:47:55 PDT 2008

Hi Holger,

> > There should be no difference between using llvm-gcc at some -O
> > level, and running it at -O0 and using opt to run the passes on
> > the unoptimized bitcode.
> However, you wrote earlier:
> > Finally, llvm-gcc runs the following passes on each function
> > immediately after it is created:
> >
> > CFGSimplification, PromoteMemoryToRegister,
> > ScalarReplAggregates, InstructionCombining.
> AFAIK this isn't something that opt can do.

Literally speaking, opt can't run those passes on functions as they are
output, because it gets passed the final module.  But since those
passes are per-function, running
  opt -simplifycfg -mem2reg -scalarrepl -instcombine
on the unoptimized module should run them on each function, giving
the same result.  So llvm-gcc -O3 -funroll-loops should be the same
as:
  opt -simplifycfg -mem2reg -scalarrepl -instcombine -std-compile-opts
Only it's not, because for some strange reason opt likes to put the
extra arguments at the back, so in fact it will first run
-std-compile-opts and then -simplifycfg -mem2reg -scalarrepl
-instcombine, d'oh!  Also, when run at -O1 or better, gcc itself does
some folding and other small optimizations, so the IR that gets fed to
the LLVM passes is actually different from what you get at -O0.
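One way to sidestep the argument-ordering quirk is to run opt twice,
so the per-function passes are guaranteed to come first.  A rough
sketch (assuming a 2008-era llvm-gcc and opt on your PATH, and a
hypothetical foo.c; the file names are made up):

```shell
# Emit unoptimized bitcode at -O0, so gcc's own folding doesn't kick in.
llvm-gcc -O0 -emit-llvm -c foo.c -o foo.bc

# First, the passes llvm-gcc would run on each function as it is created...
opt -simplifycfg -mem2reg -scalarrepl -instcombine foo.bc -o foo.fe.bc

# ...then the standard compile-time optimization pipeline, in that order.
opt -std-compile-opts foo.fe.bc -o foo.opt.bc
```

This should approximate what llvm-gcc -O3 does, modulo the gcc-level
folding mentioned above.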


