[LLVMdev] Runtime optimisation in the JIT

Warren Armstrong warren.armstrong at anu.edu.au
Wed May 17 20:40:06 PDT 2006


Hi All,

I've just started investigating LLVM for use in a project of mine,
and I've got a couple of questions:

1. How does LLVM support run-time optimisation, i.e. which elements of
the toolchain will optimise a running bytecode file or binary?

2. Is there a way, with the existing infrastructure, to do adaptive
compilation using the JIT interpreter?

Watching said interpreter under GDB shows that execution starts with
runFunction() being called on the function representing "main", and
ends up at the line:

    rv.IntVal = PF(ArgValues[0].IntVal, (char **)GVTOP(ArgValues[1]));

where PF is a pointer to (I think) the machine code representation of
main().  So main() is invoked, and the result is returned to the caller.
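For reference, the path I think lli follows, reduced to a minimal sketch
against the ExecutionEngine API (exact names, signatures and the layout of
GenericValue seem to vary between releases, so treat this as approximate):

    #include <vector>
    #include "llvm/Module.h"
    #include "llvm/ModuleProvider.h"
    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/ExecutionEngine/GenericValue.h"

    // Rough sketch: hand a Module to the JIT and run its main(), which is
    // essentially what happens before the PF(...) call quoted above.
    int runModule(llvm::Module *M) {
      llvm::ExecutionEngine *EE =
          llvm::ExecutionEngine::create(new llvm::ExistingModuleProvider(M));

      llvm::Function *MainF = M->getFunction("main");
      std::vector<llvm::GenericValue> Args;   // argc/argv omitted for brevity

      // runFunction() JIT-compiles main() on demand and then jumps into the
      // generated machine code -- the PF(...) call above.
      llvm::GenericValue RV = EE->runFunction(MainF, Args);
      (void)RV;  // RV.IntVal holds main()'s return value; layout varies by release
      return 0;
    }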

Now, what I'm looking at implementing is something like the following
scenario.

a. The JIT interpreter starts executing the program.

b. It compiles the program, inserting instrumentation, and lets it run
for 2 seconds.

c. During the run, it records how many times each function is executed
(sketched below).

d. After 2 seconds, all hot functions are reoptimised using the
information gathered by the instrumentation.  The instrumentation is
then removed, and the new versions replace the old.

e. The program is resumed.
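
For step (c), the instrumentation I have in mind is roughly the pass
below: give every function an internal counter and bump it on entry.
The include paths, class and builder names here are from memory and may
not match the tree, and the ".count" naming is just mine, so treat it as
a sketch of the shape rather than working code:

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/GlobalVariable.h"
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Module.h"

    using namespace llvm;

    // Sketch of step (c): give F an internal global counter and increment
    // it at function entry.  Step (d) would read these counters to decide
    // which functions are hot.
    void instrumentEntryCount(Function &F) {
      if (F.isDeclaration())
        return;                           // nothing to instrument

      Module *M = F.getParent();
      Type *I64 = Type::getInt64Ty(M->getContext());

      // One counter per function, initialised to zero.
      GlobalVariable *Counter = new GlobalVariable(
          *M, I64, /*isConstant=*/false, GlobalValue::InternalLinkage,
          ConstantInt::get(I64, 0), F.getName() + ".count");

      // load / add 1 / store at the top of the entry block.
      IRBuilder<> B(&*F.getEntryBlock().getFirstInsertionPt());
      Value *Old = B.CreateLoad(I64, Counter, "cnt");
      B.CreateStore(B.CreateAdd(Old, ConstantInt::get(I64, 1)), Counter);
    }

I'm assuming the host side can then read the counters back through
something like ExecutionEngine::getPointerToGlobal(), since the JIT'd
code runs in the same address space.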

Is this possible to do using the JIT interpreter?  As far as I can see,
the interpreter itself won't run during the execution of main(), i.e.
for the interesting bits of the program.
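
In case it helps make the question concrete: for step (d), the call I'd
like to be able to make from whatever does get control back is roughly
the one below.  ExecutionEngine.h appears to have a
recompileAndRelinkFunction() hook that sounds like exactly this; the two
helpers are hypothetical placeholders for the inverse of step (c) and a
FunctionPassManager run:

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/Function.h"

    // Hypothetical helpers (not LLVM API): strip the counters added in
    // step (c) and re-run the usual optimisation passes over F.
    void removeInstrumentation(llvm::Function *F);
    void runOptimisationPasses(llvm::Function *F);

    // Sketch of step (d): rebuild a hot function's IR without the
    // counters, reoptimise it, and ask the JIT for fresh machine code.
    // My reading is that recompileAndRelinkFunction() patches the old
    // code so existing callers end up in the new version, but I may be
    // wrong about that.
    void reoptimise(llvm::ExecutionEngine *EE, llvm::Function *F) {
      removeInstrumentation(F);
      runOptimisationPasses(F);
      EE->recompileAndRelinkFunction(F);
    }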

Cheers,
Warren Armstrong




