[llvm-dev] ORC JIT Weekly #32 -- Latest ORC runtime preview
Stefan Gränitz via llvm-dev
llvm-dev at lists.llvm.org
Mon Apr 12 02:48:27 PDT 2021
Thanks for sharing the preview!
I am referring to this code state:
https://github.com/apple/llvm-project/commit/c305c6389b997e73
It looks like WrapperFunctionUtils in OrcShared defines the common data
structures that can be passed back and forth between the runtime and the
JIT. It contains a non-trivial amount of code, but all in all it appears
to be self-contained and header-only. I guess that's because the runtime
isn't supposed to depend on any LLVM libs?
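For anyone following along, here is roughly the shape of that mechanism
as I understand it -- a minimal sketch with invented names, not the
actual OrcShared API: both sides exchange self-contained byte buffers,
so neither side has to link against LLVM libraries.

  // Illustrative sketch only; names and signatures are invented, not the
  // real WrapperFunctionUtils API.
  #include <cstddef>
  #include <string>
  #include <vector>

  // A self-contained result blob in the spirit of a wrapper function
  // result: just bytes that can travel in-process or over IPC/RPC.
  struct ExampleResultBlob {
    std::vector<char> Data;
    static ExampleResultBlob fromString(const std::string &S) {
      ExampleResultBlob R;
      R.Data.assign(S.begin(), S.end());
      return R;
    }
  };

  // A "wrapper function" as the JIT would invoke it in the executor: it
  // consumes a serialized argument buffer and produces a serialized result.
  ExampleResultBlob exampleEchoWrapper(const char *ArgData, size_t ArgSize) {
    std::string Arg(ArgData, ArgSize);
    return ExampleResultBlob::fromString("echo: " + Arg);
  }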
The duplication of the Error, Expected<T> and ExtensibleRTTI classes is
a little unfortunate. I assume we won't need arbitrary data structures
in the runtime or for communication with it, but whatever we do need has
to be duplicated? Isn't there any way to avoid that? A separate Error
lib above Support? It's probably fine as is for the moment, but maybe
the mid-term perspective could be discussed when this is integrated into
mainline.
There are no platform-specific flavors of the clang_rt.orc static
library. The memory footprint will likely be small, but won't it impact
JITLink performance, since JITLink will have to process symbols for all
platforms (__orc_rt_macho_tlv_get_addr, __orc_rt_elf_tlv_get_addr,
etc.)? If not, why not? Can they be dead-stripped early on?
Best,
Stefan
On 12/04/2021 04:53, Lang Hames wrote:
> Hi All,
>
> Apologies for the lack of updates -- I've been flat out prototyping
> the ORC runtime, but the details of that process weren't particularly
> newsworthy. Good news though: The result of that work is available in the
> new preview branch
> at https://github.com/lhames/llvm-project/pull/new/orc-runtime-preview,
> and it's starting to look pretty good.
>
> A quick recap of this project, since it's been a while since my last
> update: Some features of object files (e.g. thread local variables,
> exceptions, static initializers, and language metadata registration)
> require support code in the executor process. We also want
> executor-side support code for other JIT features (e.g. laziness).
> The ORC runtime is meant to provide a container for that support code.
> The runtime can be loaded into the executor process via the JIT, and
> the executor and JIT processes can communicate via a builtin
> serialization format (either serializing/deserializing directly
> in-process, or communicating serialized data via IPC/RPC) to
> coordinate on complex operations, for example discovering (on the JIT
> side) and running (on the executor side) all initializers in a JITDylib.
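>
> To make this concrete, here is a minimal sketch (invented names, not
> the actual runtime API) of the executor-side half of that initializer
> example: the JIT side serializes the addresses of the initializers it
> has discovered, and a runtime function in the executor deserializes
> and runs them.
>
>   // Sketch only: hypothetical executor-side support function. In the
>   // real runtime the address list arrives via the serialization format
>   // described above.
>   #include <cstddef>
>   #include <cstdint>
>
>   using InitFn = void (*)();
>
>   extern "C" void orc_rt_example_run_initializers(const uint64_t *Addrs,
>                                                   size_t Count) {
>     // Run each initializer in the order the JIT side sent them.
>     for (size_t I = 0; I != Count; ++I)
>       reinterpret_cast<InitFn>(static_cast<uintptr_t>(Addrs[I]))();
>   }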
>
> After a bit of hacking, the setup code for all this is looking very
> neat and tidy. For example, to turn on support for advanced MachO
> features:
>
>   if (auto P = MachOPlatform::Create(ES, ObjLinkingLayer, TPC, MainJD,
>                                      OrcRuntimePath))
>     ES.setPlatform(std::move(*P));
>   else
>     return P.takeError();
>
> That's it. The MachOPlatform loads the runtime archive into the
> ObjectLinkingLayer, then installs an ObjectLinkingLayer::Plugin to
> scan all loaded objects for features that it needs to react to. When
> it encounters such a feature, it looks up the corresponding runtime
> functionality (loading the runtime support into the executor as
> required) and calls over to the runtime to react. For example, if an
> object contains an __eh_frame section then the plugin will
> discover its address range during linking and call over to the runtime
> to register that range with libunwind.
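>
> As a rough illustration (hypothetical types and names, not the real
> JITLink or plugin API), the eh-frame case boils down to something like
> this:
>
>   #include <cstdint>
>   #include <cstdio>
>
>   // Address range of an __eh_frame section, as discovered by the plugin
>   // during linking.
>   struct ExampleSectionRange {
>     uint64_t Start = 0;
>     uint64_t Size = 0;
>   };
>
>   // Executor-side runtime entry point (illustrative). The real runtime
>   // would forward this range to libunwind's registration call.
>   extern "C" void orc_rt_example_register_eh_frames(uint64_t Addr,
>                                                     uint64_t Size) {
>     std::printf("registering eh-frames at 0x%llx (%llu bytes)\n",
>                 (unsigned long long)Addr, (unsigned long long)Size);
>   }
>
>   // JIT-side reaction once the __eh_frame range is known. In the real
>   // implementation this call crosses the JIT/executor boundary via the
>   // serialization layer; here it is a plain call for illustration.
>   void notifyEhFrameSectionEmitted(const ExampleSectionRange &R) {
>     orc_rt_example_register_eh_frames(R.Start, R.Size);
>   }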
>
> Having set up the platform, you can add objects compiled from C++,
> Objective-C, Swift, etc., using static initializers, thread locals,
> etc. and everything should Just Work.
>
> Of course it doesn't all Just Work yet: The plumbing work is mostly
> complete, but I haven't written handlers for all the special sections
> yet. A surprising number of things do work (e.g. C++ code with static
> initializers/destructors, TLVs and exceptions, and simple Objective-C
> and Swift programs). An equally surprising number of simple things
> don't work (zero-initialized thread locals fail because I haven't
> gotten around to handling .tbss sections yet).
>
> If you would like to play around with the runtime (and have access to
> an x86-64 Mac) you can build it by checking out the preview branch
> above and configuring LLVM like this:
>
>   xcrun cmake -GNinja \
>       -DCMAKE_BUILD_TYPE=Debug \
>       -DLLVM_ENABLE_PROJECTS="llvm;clang" \
>       -DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi" \
>       /path/to/llvm
>
> Then you can try running arbitrary MachO objects under llvm-jitlink,
> which has been updated to load the built runtime by default. You
> should be able to run the objects both in-process (the default) and
> out-of-process (using -oop-executor or -oop-executor-connect), and have
> them behave exactly the same way.
>
> What we have so far is a pretty good proof of concept, so I'll start a
> new thread on llvm-dev tomorrow to discuss how we can land this in the
> LLVM mainline.
>
> -- Lang.
--
https://flowcrypt.com/pub/stefan.graenitz@gmail.com