[LLVMdev] Scheme + LLVM JIT

Alexander Friedman alex at inga.mit.edu
Thu May 5 00:46:58 PDT 2005


On May  5, Misha Brukman wrote:
> Maybe we can use you for a testimonial... :)

Certainly.

> > Tail Call Elimination:
> > 
> > I've read over the "Random llvm notes", and see that you guys have
> > thought about this already.
> > 
> > However, the note dates from last year, so I am wondering if there is
> > an implementation in the works. If no one is working on this or is
> > planning to work on this in the near future, I would be willing to
> > give it a shot if I was given some direction as to where to start.
> 
> To the best of my knowledge, this has not been done and no one has
> announced their intent to work on it, so if you are interested, you'd be
> more than welcome to do so.

My C++ knowledge is completely non-existent, but so far I've had a
surprisingly easy time reading the source. This change seems somewhat
involved - I will have to implement different calling conventions,
e.g., passing a return address to the callee. Who is the right
person to talk to about this?

> The use case scenario is usually like this:
> 
> llvm-gcc/llvm-g++ produces very simple, brain-dead code for a given
> C/C++ file.  It does not create SSA form, but creates stack allocations
> for all variables.  This makes it easier to write a front-end.  We
> turned off all optimizations in GCC and so the code produced by the
> C/C++ front-end is really not pretty.

[ ... ]

> After this, you can use llc or lli (JIT) on the resulting bytecode, and
> llc or lli don't have to do any optimizations, because they have already
> been performed.
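As I understand the scenario described above, the pipeline would look
something like this (tool names are from the quoted description; the
exact flags and intermediate file names are my guesses):

```shell
# Front-end emits simple, unoptimized bytecode; gccas runs the
# optimization passes over it as part of this step.
llvm-gcc foo.c -o foo

# JIT-execute the already-optimized bytecode...
lli foo.bc

# ...or compile it to native assembly instead.
llc foo.bc -o foo.s
```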

> > Does it re-do these optimizations when functions are added/ removed/
> > changed? Are there parameters to tune the compiler's aggressiveness?
> 
> There is a JIT::recompileAndRelinkFunction() method, but it doesn't
> optimize the code.

Ok, this makes sense. However, am I correct in assuming that the
interprocedural optimizations performed in gccas will make it
problematic to call JIT::recompileAndRelinkFunction()? For example,
suppose I run some module that looks like

module a

int foo () {
 ...
 bar()
 ...
}

int bar () {
 ...
}

through all of those optimizations. Will the result necessarily have a
bar() function? If inlining is enabled, replacing bar might have no
effect because its body has been inlined into foo. If inlining is not
enabled, are there other gotchas like this?

If there are complications like this, how much of a performance gain
do the interprocedural opts give?

Also, recompileAndRelinkFunction(F) seems to update references in call
sites of F. Does this mean that every function call incurs an extra
load, or is there some cleverer solution?

Finally, if I JIT-compile several modules, can they reference each
other's functions? If this is answered somewhere in the docs, I
apologize.

> > If it does indeed optimize the input, does it attempt to do global
> > optimizations on the functions (intraprocedural register allocation,
> > inlining, whatever)?
> 
> The default register allocator in use for most platforms is a
> linear-scan register allocator, and the SparcV9 backend uses a
> graph-coloring register allocator.  However, the JIT performs no
> inlining, as mentioned above.

Why use linear scan on X86? Does it have some benefit over
graph coloring? FWIW, Lal George has a paper on using graph coloring
on the register-poor X86 by implicitly taking advantage of Intel's
register mapping to emulate 32 registers. The result is between a 10%
and 100% improvement on the benchmarks he ran (but the allocator is
40% slower).

> llvm/examples/HowToUseJIT pretty much has the minimal support one needs
> for a JIT, but if you can make it even smaller, I'd be interested.

Sorry, what I actually meant was: what are the minimum libraries that
I have to compile in order to be able to build HowToUseJIT (and
all the passes in gccas/ld)?

We will eventually need to distribute the sources and binaries with
our Scheme distribution, so I need to find the smallest set of files
that must be compiled in order to have the JIT + optimizers working.

> > [...] configure script seems to ignore my directives. For example, it
> > always builds all architectures, ...
> 
> Are you using a release or CVS version?  Support for this just went into
> CVS recently, so you should check it out and see if it works for you.
> If you *are* using CVS, are you saying you used `configure
> -enable-target=[blah]' and it compiled and linked them all?  In that
> case, it's a bug, so please post your results over here:

Yes, I just tried with CVS and it still compiles all back-ends. I'll
try it again to make sure, and then report the bug.

> > ... and it always statically links each binary.
> 
> Yes, that is currently the default method of building libraries and
> tools.  If you were to make all the libraries shared, you would be doing
> the same linking/relocating at run-time every time you start the tool.

It's not the linking/relocating that's the problem. The problem is
that each binary winds up being rather large. However, since these
tools don't need to be distributed or compiled for my purposes, I
guess I'm not really worried about it.

-- 


-Alex



