[LLVMdev] Proposal for improving llvm.gcroot (summarized)

Jay Foad jay.foad at gmail.com
Fri Apr 1 11:17:57 PDT 2011

>> if (cond) {
>>  llvm.gc.declare(foo, bar);
>> }
>> ...
>> // foo may or may not be a root here
>> ...
>> if (cond) { // same condition as above
>>  llvm.gc.undeclare(foo);
>> }
> You would need to do the same as what is done today: Move the declare
> outside of the condition, and initialize the variable to a null state, such
> that the garbage collector will know to ignore the variable. In the
> if-block, you then overwrite the variable with actual data.
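To make sure I understand the suggested workaround, it would look
something like this (schematically; llvm.gc.declare is the proposed
intrinsic, not an existing one):

```llvm
  %foo = alloca i8*
  call void @llvm.gc.declare(i8** %foo, i8* %bar)
  store i8* null, i8** %foo        ; null state: collector ignores it
  br i1 %cond, label %then, label %cont
then:
  store i8* %actual, i8** %foo     ; overwrite with actual data
  br label %cont
cont:
  ; %foo is a root for the rest of the function
```

i.e. the declare is hoisted to a point that dominates both arms, at the
cost of keeping the root live for longer than strictly necessary.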

I think there's a real problem here. The code in LLVM that creates the
static stack map will want the llvm.gc.declare / llvm.gc.undeclare
calls to be "well-formed" in some sense: nicely paired and nested,
with each declare and undeclare being at the same depth inside any
loops or "if"s in the CFG. But consider:

1. Your front end generates well-formed llvm.gc.* calls.
2. The LLVM optimisers kick in and do jump threading, tail merging and whatnot.
3. The code that creates the static stack map finds a complete mess.
(And by this point I think it would be too late to do any
transformations like you suggest above: "Move the declare outside of
the condition ...")

I think it's a good idea to have information in the IR about the
lifetime of GC roots, but I think intrinsic calls are the wrong
representation for that information.

This is very similar to the problem of representing lexical scopes in
debug info. The llvm.dbg.region.* intrinsics were the wrong way of
doing it, because of the problems I mentioned above. Now we use
metadata attached to each instruction to say what scope it is in,
which is much better, because it is robust against optimisation.
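Schematically, that looks like this in current IR (metadata syntax
abbreviated): each instruction carries a !dbg attachment naming its
scope, so the optimisers can move, merge or duplicate instructions
without breaking any pairing invariant:

```llvm
  %x = load i32* %p, !dbg !7
  ; !7 records a source location whose scope operand points at the
  ; enclosing lexical block; there is nothing to keep "well-formed"
```

Something analogous for GC roots would survive jump threading and tail
merging in a way that paired declare/undeclare calls cannot.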

