[LLVMdev] dragonegg svn benchmarks

Don Quixote de la Mancha quixote at dulcineatech.com
Wed Oct 12 02:31:20 PDT 2011


On Wed, Oct 12, 2011 at 12:40 AM, Duncan Sands wrote:
> The basic
> observation is that running the LLVM optimizers at -O3 after running the
> GCC optimizers (at -O3) results in slower code!  I mean slower than what
> you get by running the LLVM optimizers at -O1 or -O2.  I didn't find time
> to analyse this curiosity yet.  It might simply be that the LLVM inlining
> level is too high given that inlining has already been done by GCC.  Anyway,
> I didn't want to run LLVM at -O3 because of this.

If you inline too much, you get slower code, because heavy inlining
makes poorer use of the instruction cache on most modern processors.

C99 and C++ allow one to mark a function "inline" at the point where
it is defined.  For earlier C standards, I believe GCC provides the
inline keyword (and an always_inline function attribute) as a
language extension, again applied at the definition.

Lots of other languages do inlining; for example, I understand Java
JITs will inline JIT-compiled native code even though the Java
language itself has no way to request inlining.

For modern processors with instruction caches, it would be better to
request inlining at the call site rather than at the definition.
That way one could choose, per call, between better cache usage and
avoiding function call overhead.  For example:

int foo( float bar );

int baz( void )
{
   return foo( 3 ) inline;  /* hypothetical syntax: this call is fast */
}

int boo( void )
{
   return foo( 5 );  /* ordinary call: makes a hot spot at foo's definition */
}

Profile-guided optimization could take care of this without needing
any language extension at all.  I understand that is roughly what the
Java HotSpot JIT does.
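With GCC, for instance, a profile-guided build is a two-pass affair
driven by the -fprofile-generate and -fprofile-use flags.  A rough
sketch (bench.c and ./bench are placeholder names):

```shell
# Pass 1: build an instrumented binary that records execution counts.
gcc -O2 -fprofile-generate bench.c -o bench

# Run it on representative input; this writes .gcda profile data.
./bench

# Pass 2: rebuild, letting GCC use the profile to guide inlining
# and code layout decisions.
gcc -O2 -fprofile-use bench.c -o bench
```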

Don Quixote
-- 
Don Quixote de la Mancha
Dulcinea Technologies Corporation
Software of Elegance and Beauty
http://www.dulcineatech.com
quixote at dulcineatech.com
