[LLVMdev] Garbage Collection Roots
Gordon Henriksen
gordonhenriksen at mac.com
Tue Sep 4 07:36:49 PDT 2007
On Sep 4, 2007, at 04:15, Talin wrote:
> Gordon Henriksen wrote:
>
>> But to elaborate upon the described class of collectors: The
>> emitted GC tables do not describe the dynamic call stack; rather,
>> they are static data compiled into the executable. Only when cross-
>> referenced with the dynamic call stack do they identify the roots.
>> The garbage collectors for .NET and Java work in this manner.
>>
>> Here's an example of the sort of data these tables might contain.
>> (I generated this using the -print-gc feature which I have added
>> to llc locally; the llc in Subversion cannot print this data
>> because it does not collect it.) This test case has a simple
>> function with 2 roots and 1 call.
>>
>> GC roots for fun:
>> 0 8[sp]
>> 1 4[sp]
>> GC safe points for fun:
>> label 1: post-call, live = { 0, 1 }
>>
>> Provided this information, the collector can easily identify the
>> live roots within fun's stack frame at runtime—so long as fun is
>> paused at the safe point, which it must be if it is not the
>> topmost stack frame.
>
> How do you get the info for the non-topmost stack frames in this case?
The runtime needs to walk the machine stack. The return address of
the inner frame will be the address of label 1 while fun is paused
in the call that precedes it. An actual codegen example of the same
module is here:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20070903/053242.html
Llabel1 is the safe point.
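To make the mechanics concrete, here is a minimal sketch of a runtime
cross-referencing the dynamic stack with those static tables. All of
the type and function names are invented for illustration; the actual
tables llc emits are shaped differently:

    #include <stddef.h>
    #include <stdint.h>

    /* One record per safe point: its address plus the sp-relative
       offsets of the roots live there. For fun above, the single
       record would be { &Llabel1, 2, {8, 4} }. */
    struct gc_safe_point {
        const void    *address;
        size_t         num_live;
        const int32_t *offsets;
    };

    /* Find the record for a frame paused at ret_addr, if any. */
    static const struct gc_safe_point *
    find_safe_point(const struct gc_safe_point *map, size_t n,
                    const void *ret_addr) {
        for (size_t i = 0; i < n; i++)
            if (map[i].address == ret_addr)
                return &map[i];
        return NULL;
    }

    /* Visit the live roots of one paused frame, given its sp and
       the return address recovered from its callee's frame. */
    static void visit_frame(const struct gc_safe_point *map, size_t n,
                            char *sp, const void *ret_addr,
                            void (*visit)(void **root)) {
        const struct gc_safe_point *pt =
            find_safe_point(map, n, ret_addr);
        for (size_t i = 0; pt && i < pt->num_live; i++)
            visit((void **)(sp + pt->offsets[i]));
    }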
> Am I going to need to implement my own version of
> llvm_cg_walk_gcroots() ?
If you choose not to use the LowerGC transformation and the attendant
shadow stack.
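The shadow stack sidesteps stack walking entirely: each GC-aware frame
links a descriptor of its own roots onto a global chain at runtime. A
rough sketch of the idea (simplified, with invented names; not the
exact layout LowerGC generates):

    #include <stddef.h>

    /* Pushed on function entry, popped on exit; the roots live
       inline in the entry itself. */
    struct shadow_entry {
        struct shadow_entry *next;  /* the caller's entry */
        size_t num_roots;
        void  *roots[];             /* this frame's gcroot slots */
    };

    /* Head of the chain; one per thread in a threaded runtime. */
    struct shadow_entry *gc_root_chain;

    /* Root identification without any machine-stack walking. */
    void walk_gc_roots(void (*visit)(void **root)) {
        for (struct shadow_entry *e = gc_root_chain; e; e = e->next)
            for (size_t i = 0; i < e->num_roots; i++)
                visit(&e->roots[i]);
    }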
> My current mad scheme (which I'm not sure will work) for safe
> points in a multi-threaded environment uses a global POSIX reader/
> writer lock. Each running thread keeps a read lock on the global
> lock when it's running. The old-generation collector attempts to
> acquire a write-lock when it needs to do a collection. At some semi-
> regular interval, the application threads will need to momentarily
> release the lock, telling the global collector it's OK to run. (This
> is only for the old generation - young generation collections are
> handled per-thread, hopefully. Although cross-thread references
> make that complex...) The advantage of this scheme is that it
> doesn't require preemptive thread freezing, icky pagefault-based
> write barriers, or other platform-specific hacks; the disadvantage
> is that all threads have to wait if some thread doesn't want to
> release the read lock in a timely manner.
Could work.
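Here is that scheme reduced to a sketch with POSIX primitives
(untested, all names invented):

    #include <pthread.h>

    static pthread_rwlock_t gc_world_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Mutator threads hold the read side while running. */
    void mutator_start(void) { pthread_rwlock_rdlock(&gc_world_lock); }
    void mutator_exit(void)  { pthread_rwlock_unlock(&gc_world_lock); }

    /* Called at safe points on a semi-regular interval: briefly
       drop and reacquire the read lock so a waiting collector
       can get the write side. */
    void safe_point_pulse(void) {
        pthread_rwlock_unlock(&gc_world_lock);
        /* A collector blocked in wrlock may run here. */
        pthread_rwlock_rdlock(&gc_world_lock);
    }

    /* Old-generation collection: the write side stops the world,
       with every mutator parked at a safe point. */
    void collect_old_generation(void) {
        pthread_rwlock_wrlock(&gc_world_lock);
        /* ... trace from the roots and reclaim ... */
        pthread_rwlock_unlock(&gc_world_lock);
    }

One caveat: POSIX leaves reader/writer preference unspecified, so
whether a blocked collector wins against re-acquiring readers depends
on the implementation.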
> The hard part will be analyzing loops in the generated IR to decide
> whether a call to pulse the lock needs to be inserted in the loop
> body or not. Obviously, dropping and reacquiring a lock is
> expensive, so if the loop is provably short, I don't want to do that.
Sure. And if there already exists an unconditional safe point within
the loop body, then you also do not need to insert one in the loop.
This is a very common need, and is one of the target-independent
features I want to provide in the collector infrastructure. However,
although I have made provisions for it, I have no immediate need for
it. Patches are welcome.
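The decision itself is simple once the analysis is in hand; a toy
sketch over invented IR types, just to state the policy:

    /* Toy shapes, invented for illustration only. */
    struct block { int has_call; int on_every_iteration; };
    struct loop  {
        struct block **blocks;
        int            num_blocks;
        int            provably_short;  /* small known trip count */
    };

    /* Pulse the lock only if the loop can run long without ever
       reaching an unconditional safe point. */
    int loop_needs_safe_point(const struct loop *l) {
        if (l->provably_short)
            return 0;
        for (int i = 0; i < l->num_blocks; i++)
            if (l->blocks[i]->has_call &&
                l->blocks[i]->on_every_iteration)
                return 0;  /* a call already acts as a safe point */
        return 1;
    }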
> So it sounds like I should be passing the pointer to the stack root
> list at the same time I release the lock? I suppose I could store
> it in the TLD which I guess wouldn't be too expensive.
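For concreteness, one way to stash the root chain where the collector
can find it, sketched with POSIX threads (all names invented; the
global registry is just one possible arrangement):

    #include <pthread.h>

    struct shadow_entry;  /* per-frame root list, as sketched above */

    /* One record per thread, on a global list the collector can
       traverse while it holds the write lock. */
    struct thread_gc_state {
        struct thread_gc_state *next;
        struct shadow_entry    *roots;  /* published at each pulse */
    };

    static pthread_mutex_t registry_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct thread_gc_state *all_threads;
    static __thread struct thread_gc_state *my_state; /* GCC-style TLS */

    void gc_thread_register(struct thread_gc_state *s) {
        pthread_mutex_lock(&registry_lock);
        s->next = all_threads;
        all_threads = s;
        pthread_mutex_unlock(&registry_lock);
        my_state = s;
    }

    /* Publish before dropping the read lock at a safe point. */
    void publish_roots(struct shadow_entry *chain) {
        my_state->roots = chain;
    }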
— Gordon