[LLVMdev] -O0 compile time speed

Mark Shannon marks at dcs.gla.ac.uk
Sat Nov 21 07:57:03 PST 2009


Hi there,
Just my tuppence worth:

I for one would love it if the code-gen pass were quicker.
It would make LLVM even more appealing for JIT compilers.

One approach might be to generate code directly from the standard IR,
rather than creating yet another IR (the SelectionDAG) along the way.

Would it be possible to convert the standard IR DAG to a forest of trees
with a simple linear pass, either before or after register allocation,
and then use a BURG code generator on the trees?

BURG selectors are both fast and optimal (in theory, assuming all
instructions can be given a cost, and ignoring scheduling issues).
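
To make that concrete, here is a toy sketch (plain C++, not LLVM code) of
the bottom-up, cost-driven matching a BURG-style selector performs. The
operators, rule names and costs are invented purely for illustration, and a
real BURG tool precomputes its matching tables instead of evaluating costs
at runtime the way this does:

// Toy BURG-style labelling pass: pick the cheapest rule covering each
// subtree, bottom-up.  Operators, rules and costs are all made up.
#include <climits>
#include <cstdio>
#include <string>

enum Op { Reg, Imm, Add, Load };

struct Node {
  Op op;
  Node *kids[2] = {nullptr, nullptr};
  // Best cost found so far for producing this node's value in a register.
  int bestCost = INT_MAX;
  std::string bestRule;
};

// Try one rule: if it matches and beats the current best, record it.
static void tryRule(Node &n, bool matches, int cost, const char *rule) {
  if (matches && cost < n.bestCost) {
    n.bestCost = cost;
    n.bestRule = rule;
  }
}

// One bottom-up pass: label the children first, then pick the cheapest
// rule that covers this node.
static void label(Node *n) {
  if (!n) return;
  label(n->kids[0]);
  label(n->kids[1]);

  switch (n->op) {
  case Reg: tryRule(*n, true, 0, "reg"); break;
  case Imm: tryRule(*n, true, 1, "movri"); break;   // materialize constant
  case Add:
    tryRule(*n, true,
            n->kids[0]->bestCost + n->kids[1]->bestCost + 1, "addrr");
    // Fold an immediate operand into the add instead of loading it first.
    tryRule(*n, n->kids[1]->op == Imm, n->kids[0]->bestCost + 1, "addri");
    break;
  case Load:
    tryRule(*n, true, n->kids[0]->bestCost + 3, "loadr");
    break;
  }
}

int main() {
  // add(load(reg), imm) -- the matcher should pick loadr + addri.
  Node r{Reg}, i{Imm}, ld{Load}, add{Add};
  ld.kids[0] = &r;
  add.kids[0] = &ld;
  add.kids[1] = &i;
  label(&add);
  std::printf("cover: %s, cost %d\n", add.bestRule.c_str(), add.bestCost);
  return 0;
}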

Chris Lattner wrote:
> On Nov 19, 2009, at 1:04 PM, Bob Wilson wrote:
>>> I've tested it and LLVM is indeed 2x slower to compile, although it  
>>> generates
>>> code that is 2x faster to run...
>>>
>>>> Compared to a compiler in the same category as PCC, whose pinnacle of
>>>> optimization is doing register allocation?  I'm not surprised at all.
>>> What else does LLVM do with optimizations turned off that makes it  
>>> slower?
>> I haven't looked at Go at all, but in general, there is a significant  
>> overhead to creating a compiler intermediate representation.  If you  
>> produce assembly code straight out of the parser, you can compile  
>> faster.
> 
> Right.  Another common comparison is between clang and TCC.  TCC generates terrible code, but it is a great example of a one pass compiler that doesn't even build an AST.  Generating code as you parse will be much much much faster than building an AST, then generating llvm ir, then generating assembly from it.  On X86 at -O0, we use FastISel which avoids creating the SelectionDAG intermediate representation in most cases (it fast paths LLVM IR -> MachineInstrs, instead of going IR -> SelectionDAG -> MachineInstrs).
> 
> I'm still really interested in making Clang (and thus LLVM) faster at -O0 (while still preserving debuggability of course).  One way to do this (which would be a disaster and not worth it)  would be to implement a new X86 backend directly translating from Clang ASTs or something like that.  However, this would obviously lose all of the portability benefits that LLVM IR provides.
> 
> That said, there is a lot that we can do to make the compiler faster at O0.  FastISel could be improved in several dimensions, including going bottom-up instead of top-down (eliminating the need for the 'dead instruction elimination pass'), integrating simple register allocation into it for the common case of single-use instructions, etc.  Another good way to speed up O0 codegen is to avoid generating as much horrible code in the frontend that the optimizer (which isn't run at O0) is expected to clean up.
> 
> -Chris
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
> 
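
For what it's worth, here is a toy sketch (again plain C++, nothing to do
with TCC or Clang internals) of the one-pass style Chris describes above:
code is emitted directly while parsing, with no AST or IR built in between.
The "instructions" it prints are invented; the point is only that nothing
has to be allocated, walked or lowered after the fact:

// Toy one-pass expression compiler: emit code while parsing, no AST.
#include <cctype>
#include <cstdio>

static const char *p;  // cursor into the source text

// Parse a decimal literal and emit code loading it into the accumulator.
static void parsePrimary() {
  int value = 0;
  while (std::isdigit(*p))
    value = value * 10 + (*p++ - '0');
  std::printf("  mov  acc, %d\n", value);
}

// Parse a left-associative chain of '+', emitting an add per operator and
// keeping the running result in a scratch register.
static void parseSum() {
  parsePrimary();
  while (*p == '+') {
    ++p;
    std::printf("  mov  tmp, acc\n");
    parsePrimary();
    std::printf("  add  acc, tmp\n");
  }
}

int main() {
  p = "1+2+3";
  parseSum();   // code is emitted immediately; nothing is retained
  return 0;
}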



