[LLVMdev] -O0 compile time speed (was: Go)
Jon Harrop
jon at ffconsultancy.com
Sun Nov 22 11:14:45 PST 2009
On Saturday 21 November 2009 14:27:15 Chris Lattner wrote:
> On Nov 19, 2009, at 1:04 PM, Bob Wilson wrote:
> >> I've tested it and LLVM is indeed 2x slower to compile, although it
> >> generates
> >> code that is 2x faster to run...
> >>
> >>> Compared to a compiler in the same category as PCC, whose pinnacle of
> >>> optimization is doing register allocation? I'm not surprised at all.
> >>
> >> What else does LLVM do with optimizations turned off that makes it
> >> slower?
> >
> > I haven't looked at Go at all, but in general, there is a significant
> > overhead to creating a compiler intermediate representation. If you
> > produce assembly code straight out of the parser, you can compile
> > faster.
>
> Right. Another common comparison is between clang and TCC. TCC generates
> terrible code, but it is a great example of a one pass compiler that
> doesn't even build an AST. Generating code as you parse will be much much
> much faster than building an AST, then generating llvm ir, then generating
> assembly from it. On X86 at -O0, we use FastISel which avoids creating the
> SelectionDAG intermediate representation in most cases (it fast paths LLVM
> IR -> MachineInstrs, instead of going IR -> SelectionDAG -> MachineInstrs).
I found LLVM was 2x slower than Go at compiling a simple test of 10,000
Fibonacci functions. Do you have any data on Clang vs TCC compilation speed?
> I'm still really interested in making Clang (and thus LLVM) faster at -O0
> (while still preserving debuggability of course). One way to do this
> (which would be a disaster and not worth it) would be to implement a new
> X86 backend directly translating from Clang ASTs or something like that.
> However, this would obviously lose all of the portability benefits that
> LLVM IR provides.
That sounds like a lot of work for relatively little gain.
> That said, there is a lot that we can do to make the compiler faster at O0.
> FastISel could be improved in several dimensions, including going
> bottom-up instead of top-down (eliminating the need for the 'dead
> instruction elimination pass'), integrating simple register allocation into
> it for the common case of single-use instructions, etc. Another good way
> to speed up O0 codegen is to avoid generating as much horrible code in the
> frontend that the optimizer (which isn't run at O0) is expected to clean
> up.
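The bottom-up idea above can be sketched as follows. This is an illustration of the principle, not FastISel's actual implementation: by selecting instructions on demand, starting from the values that are actually used, dead instructions are simply never visited, so no separate dead-instruction-elimination pass is needed:

```python
# Illustrative sketch: demand-driven, bottom-up instruction selection.
# defs maps a virtual register to (op, operand_registers); root is the
# register whose value is actually demanded (e.g. the return value).

def select_bottom_up(defs, root):
    emitted = []
    done = set()

    def visit(reg):
        if reg in done or reg not in defs:   # constants/arguments need no code
            return
        done.add(reg)
        op, operands = defs[reg]
        for r in operands:                   # select operands first (bottom-up)
            visit(r)
        emitted.append(f"{reg} = {op} {', '.join(operands)}")

    visit(root)
    return emitted

defs = {
    "v1": ("add", ["a", "b"]),
    "v2": ("mul", ["v1", "c"]),
    "dead": ("sub", ["a", "c"]),   # never demanded, so never selected
}
print(select_bottom_up(defs, "v2"))
# -> ['v1 = add a, b', 'v2 = mul v1, c']
```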
HLVM generates quite sane and efficient IR directly, and I've been more than
happy with LLVM's JIT compilation times when using it interactively from a
REPL.
So I'm not sure that LLVM is so slow as to make it worth ploughing much effort
into optimizing compilation times. If you want to go down that route then I'd
certainly start with higher-level optimizations like memoizing previous
compilations and reusing them. What about parallelization?
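The memoization idea could be as simple as keying compiled output by a content hash of the input module, so an unchanged module is never recompiled. A minimal sketch, where `compile_fn` stands in for an arbitrary (expensive) backend and `CompileCache` is an invented name:

```python
# Sketch of memoizing compilations: cache object code by a content hash
# of the source, so recompiling identical input is a dictionary lookup.
import hashlib

class CompileCache:
    def __init__(self, compile_fn):
        self.compile_fn = compile_fn
        self.cache = {}
        self.misses = 0   # counts actual backend invocations

    def compile(self, source):
        key = hashlib.sha256(source.encode()).hexdigest()
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.compile_fn(source)
        return self.cache[key]

cc = CompileCache(lambda src: f"<object code for {len(src)} bytes>")
cc.compile("def fib(n): ...")
cc.compile("def fib(n): ...")   # cache hit: backend not invoked again
print(cc.misses)
# -> 1
```

A real implementation would persist the cache on disk and include compiler version and flags in the key, but the principle is the same.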
--
Dr Jon Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?e