[LLVMdev] Proposal: stack/context switching within a thread
Jeffrey Yasskin
jyasskin at google.com
Sat Apr 10 10:54:45 PDT 2010
I took the liberty of forwarding this to the Stackless Python list,
since they switch stacks, and I got a response at
http://thread.gmane.org/gmane.comp.python.stackless/4464/focus=4467.
The upshot is that they really need the ability to allocate only a
tiny amount of space for each thread and grow that as the thread
actually uses more stack. The way they accomplish that now is by
copying the entire stack to the heap on a context switch, and having
all threads share the main C stack. This isn't quite as bad as it
sounds because it only happens to threads that call into C extension
modules. Pure Python threads operate entirely within heap-allocated
Python frames. Still, it would be nice to support this use case.
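
For concreteness, here's roughly what that copy-the-stack-to-the-heap
trick looks like in C. This is only a sketch, not Stackless's actual
code: the tasklet type, the function names, and the setjmp/longjmp
machinery are all illustrative, and it assumes a downward-growing
stack with stack_base captured near the top of main()'s frame.

/* Conceptual sketch of "copy the C stack to the heap" switching, in
 * the spirit of Stackless's hard switches.  A real implementation
 * also has to handle signal masks, callee-saved state beyond what
 * setjmp captures, and the platform's stack-growth direction. */
#include <setjmp.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

typedef struct {          /* must be zero-initialized before first use */
    jmp_buf ctx;          /* saved registers and resume point */
    char   *saved;        /* heap copy of the live stack slice */
    size_t  size;         /* bytes of stack captured */
} tasklet;

static char *stack_base;  /* set early in main(), e.g. to &some_local */

/* Suspend: capture registers and copy the live stack slice to the
 * heap.  Returns 0 on the suspend path, 1 when later resumed. */
static int tasklet_suspend(tasklet *t) {
    char here;
    if (setjmp(t->ctx))
        return 1;
    t->size  = (size_t)(stack_base - &here);
    t->saved = realloc(t->saved, t->size);
    memcpy(t->saved, &here, t->size);
    return 0;
}

/* Resume: copy the saved slice back and jump.  The caller must be
 * running in a frame *below* the region being restored, or the copy
 * would clobber the frame doing the copying -- real implementations
 * recurse or use a scratch stack to guarantee this. */
static void tasklet_resume(tasklet *t) {
    memcpy(stack_base - t->size, t->saved, t->size);
    longjmp(t->ctx, 1);
}

The point is that a suspended thread pays only for the slice of stack
it was actually using, at the cost of a memcpy on every switch.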
Kenneth, I don't want to insist that the first version of this be both
a floor wax _and_ a dessert topping, but is there a natural extension
toward supporting what Stackless needs that we could add later? It looks
like swapcontext() simply repoints the stack pointer and restores the
registers, while Stackless wants it to be able to allocate memory and
copy the stack. Maybe that implies a "mode" argument?
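
For reference, here's a minimal example of the plain swapcontext()
behavior I mean, using the standard POSIX ucontext calls (the 64 KB
stack size and the function names are just placeholders): each context
owns its own preallocated stack, and switching only saves and restores
register state, with no allocation or copying.

#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;

static void task(void) {
    puts("in task");
    swapcontext(&task_ctx, &main_ctx);   /* yield back to main */
    puts("task resumed");
    /* falling off the end resumes uc_link, i.e. main_ctx */
}

int main(void) {
    char *stack = malloc(64 * 1024);     /* fixed-size stack per context */

    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = stack;
    task_ctx.uc_stack.ss_size = 64 * 1024;
    task_ctx.uc_link          = &main_ctx;
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);   /* run task until it yields */
    puts("back in main");
    swapcontext(&main_ctx, &task_ctx);   /* resume task to completion */
    puts("done");
    free(stack);
    return 0;
}

The fixed-size, up-front stack allocation in this model is exactly
what Stackless is trying to avoid.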
Alternatively, Stackless could probably work with a segmented-stack
mechanism like the one Ian Taylor implemented in gcc for Go. Do you see
anything that would prevent us from layering segmented stacks on top
of this context-switching mechanism later?
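
To make the segmented-stack idea concrete, the check such a scheme
inserts into every function prologue amounts to something like the C
below. This is only a hand-written illustration, not gcc's actual
-fsplit-stack output (there the check is emitted inline in assembly
and the runtime entry point is __morestack); the struct layout, the
grow_stack hook, and the frame size are assumptions, and a
downward-growing stack is assumed.

#include <stddef.h>

/* Illustrative per-thread record of the current stack segment. */
struct stack_segment {
    char *limit;                  /* lowest usable address in segment */
    struct stack_segment *prev;   /* older segment, relinked on return */
};

extern _Thread_local struct stack_segment *current_segment;

/* What a compiler-inserted split-stack prologue boils down to: if the
 * frame about to be created does not fit in the current segment, call
 * into the runtime, which heap-allocates a new segment, links it to
 * the old one, and continues the function on it. */
void example_function(void) {
    enum { FRAME_SIZE = 4096 };   /* frame size, known at compile time */
    char probe;
    if ((size_t)(&probe - current_segment->limit) < FRAME_SIZE) {
        /* grow_stack(FRAME_SIZE);   -- hypothetical runtime hook */
    }
    /* ... body runs knowing FRAME_SIZE bytes are available ... */
}

Since the growth check lives entirely in generated code and a runtime
library, it seems layerable on top of a raw context-switch primitive
rather than in conflict with it.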
Thanks,
Jeffrey
On Wed, Apr 7, 2010 at 12:14 PM, Kenneth Uildriks <kennethuil at gmail.com> wrote:
> Right now the functionality is available, sometimes, from the C
> standard library. But embedded environments (often running a limited
> standard library) and server environments would benefit heavily from a
> standard way to specify context switches within a single thread in the
> style of makecontext/swapcontext/setcontext, and built-in support for
> these operations would also open the way for optimizers to begin
> handling these execution paths.
>
> The use cases for these operations, and things like coroutines built
> on top of them, will only increase in the future as developers look
> for ways to get more concurrency while limiting the number of
> high-overhead, difficult-to-manage native threads, locks, and
> mutexes.
>