[LLVMdev] GSoC 2011: Fast JIT Code Generation for x86-64

Tilmann Scheller tilmann.scheller at googlemail.com
Tue Apr 5 14:49:21 PDT 2011


Hi Viktor,

On Tue, Apr 5, 2011 at 9:41 PM, Óscar Fuentes <ofv at wanadoo.es> wrote:

> Jim Grosbach <grosbach at apple.com> writes:
>
> >> To me, increasing coverage of the FastISel seemed more involved than
> >> directly emitting opcodes to memory, with a lesser outlook on
> >> reducing overhead.
> >
> > That seems extremely unlikely. You'd be effectively re-implementing
> > both fast-isel and the MC binary emitter layers, and it sounds like a
> > new register allocator as well.
> >
> > What Eric is suggesting is instead locating which IR constructs are
> > not being handled by fast-isel and are causing problems (i.e., are
> > being frequently encountered in your code-base) and implementing
> > fast-isel handling for them. That will remove the selectiondag
> > overhead that you've identified as the primary compile-time problem.
>
> At some point in the past someone was kind enough to add fast-isel for
> some instructions frequently emitted by my compiler, hoping that it
> would speed up JITting. The results were disappointing (negligible,
> IIRC). Either fast-isel does not make much of a difference or the main
> inefficiency is elsewhere.
>
>
Fast-isel discussion aside, I think the real speed killer for a dynamic
binary translator (or any other JIT client which invokes the JIT many times
on small pieces of code) is the constant per-invocation overhead of the JIT,
which is incurred for every source ISA basic block (each BB gets mapped to
an LLVM Function).

[1] cites a constant overhead of 10 ms per BB. I just did some simple
measurements with callgrind, running lli on a simple .ll file which only
contains a main function that returns immediately. With -regalloc=fast and
-fast-isel and an -O2 compiled lli, we spend about 725,000 instructions in
getPointerToFunction(). Clearly, that's quite a bit of constant overhead,
and I doubt we can get it down by two orders of magnitude.
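
For reference, the .ll file looked roughly like this (just a main which
returns immediately):

  define i32 @main() {
  entry:
    ret i32 0
  }

and the numbers come from something along the lines of

  valgrind --tool=callgrind lli -regalloc=fast -fast-isel main.ll

looking at the inclusive instruction count attributed to
getPointerToFunction() (e.g. in kcachegrind). The file name is of course
arbitrary.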

So what about this: the old qemu JIT used an extremely simple and fast
approach which performed surprisingly well. It had chunks of precompiled
machine code (compiled from C sources) for the individual IR instructions,
which at runtime get glued together and patched as necessary.

The idea would be to use the same approach to generate machine code from
LLVM IR, e.g. having chunks of LLVM MC instructions for the individual LLVM
IR instructions (ideally describing the mapping with TableGen) and gluing
them together, doing no dynamic register allocation and no scheduling. A
rough sketch of the basic mechanism follows below.
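
To make the mechanism a bit more concrete, here is a tiny, purely
illustrative C++ sketch (not the proposed TableGen-driven mapping; all
names, the chunks and the RWX mmap are hypothetical simplifications):
precompiled machine code chunks with fixed register assignments get copied
into a buffer and their immediates patched, with no register allocation or
scheduling involved.

  #include <cstdint>
  #include <cstdio>
  #include <cstring>
  #include <sys/mman.h>

  // "Precompiled" x86-64 chunks with fixed register assignments.
  // B8 imm32 : mov eax, imm32   (imm32 patched at offset 1)
  // 05 imm32 : add eax, imm32   (imm32 patched at offset 1)
  // C3       : ret              (result is returned in eax)
  static const uint8_t MovImmChunk[] = {0xB8, 0, 0, 0, 0};
  static const uint8_t AddImmChunk[] = {0x05, 0, 0, 0, 0};
  static const uint8_t RetChunk[]    = {0xC3};

  struct ChunkBuffer {
    uint8_t *Mem;
    size_t Size;

    explicit ChunkBuffer(size_t Cap) : Size(0) {
      // RWX mapping purely for brevity; a real JIT would handle W^X and
      // check for mmap failure.
      Mem = static_cast<uint8_t *>(mmap(0, Cap,
                                        PROT_READ | PROT_WRITE | PROT_EXEC,
                                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    }

    // Glue a chunk onto the end of the buffer.
    void glue(const uint8_t *Chunk, size_t Len) {
      memcpy(Mem + Size, Chunk, Len);
      Size += Len;
    }

    // Glue a chunk and patch its 32-bit immediate at the given offset.
    void glue(const uint8_t *Chunk, size_t Len, int32_t Imm, size_t ImmOff) {
      glue(Chunk, Len);
      memcpy(Mem + Size - Len + ImmOff, &Imm, sizeof(Imm));
    }
  };

  int main() {
    ChunkBuffer B(4096);
    B.glue(MovImmChunk, sizeof(MovImmChunk), 40, 1); // eax = 40
    B.glue(AddImmChunk, sizeof(AddImmChunk), 2, 1);  // eax += 2
    B.glue(RetChunk, sizeof(RetChunk));              // return eax
    int (*Fn)() = reinterpret_cast<int (*)()>(B.Mem);
    printf("%d\n", Fn()); // prints 42
    return 0;
  }

The point is just that emission degenerates to memcpy plus a handful of
patches per instruction, which is exactly where the per-BB constant
overhead should shrink dramatically.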

I'd be willing to mentor such a project; let me know if you're interested.

Regards,

Tilmann


[1] http://www.iaeng.org/publication/IMECS2011/IMECS2011_pp212-216.pdf