[llvm-dev] ORC JIT Weekly #32 -- Latest ORC runtime preview
Lang Hames via llvm-dev
llvm-dev at lists.llvm.org
Sun Apr 11 19:53:47 PDT 2021
Hi All,
Apologies for the lack of updates -- I've been flat out prototyping the ORC
runtime, but the details of that process weren't particularly newsworthy.
Good news though: The result of that work is available in the new preview
branch at
https://github.com/lhames/llvm-project/pull/new/orc-runtime-preview, and
it's starting to look pretty good.
A quick recap of this project, since it's been a while since my last
update: Some features of object files (e.g. thread local variables,
exceptions, static initializers, and language metadata registration)
require support code in the executor process. We also want support code in
the executor process for other JIT features (e.g. laziness). The ORC
runtime is meant to provide a container for that support code. The runtime
can be loaded into the executor process via the JIT, and the executor and
JIT processes can communicate via a built-in serialization format (either
serializing/deserializing directly in-process, or communicating serialized
data via IPC/RPC) to coordinate on complex operations, for example
discovering (on the JIT side) and running (on the executor side) all
initializers in a JITDylib.
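To make that concrete, here is a purely hypothetical sketch (my own
illustration, not the ORC runtime's actual wire format or API) of the kind of
coordination described above: the JIT side packs the initializer addresses it
discovered for a JITDylib into a flat buffer, and the executor side unpacks
the buffer and runs each initializer. In-process the buffer can be handed
over directly; out-of-process the same bytes would travel over IPC/RPC.

#include <cstdint>
#include <cstring>
#include <vector>

using InitFn = void (*)();

// JIT side (hypothetical): pack the discovered initializer addresses into a
// flat buffer -- a 64-bit count followed by the addresses themselves.
std::vector<uint8_t> serializeInitializers(const std::vector<uint64_t> &Addrs) {
  std::vector<uint8_t> Buf(sizeof(uint64_t) * (Addrs.size() + 1));
  uint64_t Count = Addrs.size();
  std::memcpy(Buf.data(), &Count, sizeof(Count));
  if (!Addrs.empty())
    std::memcpy(Buf.data() + sizeof(Count), Addrs.data(),
                sizeof(uint64_t) * Addrs.size());
  return Buf;
}

// Executor side (hypothetical): unpack the buffer and run each initializer.
void runSerializedInitializers(const uint8_t *Data) {
  uint64_t Count;
  std::memcpy(&Count, Data, sizeof(Count));
  const uint8_t *Cur = Data + sizeof(Count);
  for (uint64_t I = 0; I != Count; ++I) {
    uint64_t Addr;
    std::memcpy(&Addr, Cur, sizeof(Addr));
    Cur += sizeof(Addr);
    reinterpret_cast<InitFn>(static_cast<uintptr_t>(Addr))();
  }
}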
After a bit of hacking, the setup code for all this is looking very neat
and tidy. For example, to turn on support for advanced MachO features:
if (auto P = MachOPlatform::Create(ES, ObjLinkingLayer, TPC, MainJD,
                                   OrcRuntimePath))
  ES.setPlatform(std::move(*P));
else
  return P.takeError();
That's it. The MachOPlatform loads the runtime archive into the
ObjectLinkingLayer, then installs an ObjectLinkingLayer::Plugin to scan all
loaded objects for features that it needs to react to. When it encounters
such a feature it looks up the corresponding runtime functionality (loading
the runtime support into the executor as required) and calls over to the
runtime to react. For example, if an object contains an __eh_frame section
then the plugin will discover its address range during linking and call
over to the runtime to register that range with libunwind.
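For illustration, here is a minimal sketch of what that executor-side
registration might look like. The entry-point name
(orc_rt_sketch_register_eh_frame) and its signature are assumptions of mine,
not the runtime's real API; __register_frame is libunwind's standard
registration call, which on Darwin takes a single FDE, so the sketch walks
the section's CFI records and registers each FDE it finds.

#include <cstdint>
#include <cstring>

// Provided by libunwind.
extern "C" void __register_frame(const void *FDE);

// Hypothetical executor-side entry point, called (via the JIT) with the
// address range of a newly linked object's __eh_frame section.
extern "C" void orc_rt_sketch_register_eh_frame(const char *Start,
                                                uint64_t Size) {
  const char *CurRecord = Start;
  const char *End = Start + Size;
  while (CurRecord < End) {
    // Each CFI record starts with a 32-bit length (excluding the length
    // field itself); a zero length terminates the section.
    uint32_t Length;
    std::memcpy(&Length, CurRecord, sizeof(Length));
    if (Length == 0)
      break;
    // The next 32-bit field is zero for CIEs and non-zero for FDEs.
    uint32_t CIEPointer;
    std::memcpy(&CIEPointer, CurRecord + 4, sizeof(CIEPointer));
    if (CIEPointer != 0)
      __register_frame(CurRecord); // Register this FDE with libunwind.
    CurRecord += Length + 4;
  }
}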
Having set up the platform, you can add objects compiled from C++,
Objective-C, Swift, etc. that use static initializers, thread locals, and so
on, and everything should Just Work.
Of course it doesn't all Just Work yet: The plumbing work is mostly
complete, but I haven't written handlers for all the special sections yet.
A surprising number of things do work (e.g. C++ code with static
initializers/destructors, TLVs and exceptions, and simple Objective-C and
Swift programs). An equally surprising number of simple things don't work
(zero-initialized thread locals fail because I haven't gotten around to
handling .tbss sections yet).
If you would like to play around with the runtime (and have access to an
x86-64 Mac) you can build it by checking out the preview branch above and
configuring LLVM like this:
xcrun cmake -GNinja \
  -DCMAKE_BUILD_TYPE=Debug \
  -DLLVM_ENABLE_PROJECTS="llvm;clang" \
  -DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi" \
  /path/to/llvm
Then you can try running arbitrary MachO objects under llvm-jitlink, which
has been updated to load the built runtime by default. You should be able
to run the objects both in-process (the default), and out-of-process (using
-oop-executor or -oop-executor-connect) and have them behave exactly the
same way.
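As a quick smoke test, here is a small example input of my own (not from the
runtime's test suite) that exercises a static initializer/destructor, a
thread local, and a C++ exception:

#include <cstdio>

// Static initializer/destructor: needs the platform's initializer handling.
struct Greeter {
  Greeter() { std::printf("static initializer ran\n"); }
  ~Greeter() { std::printf("static destructor ran\n"); }
};
static Greeter G;

// Thread local: deliberately non-zero-initialized, since zero-initialized
// thread locals (the .tbss case mentioned above) aren't handled yet.
thread_local int Counter = 42;

int main() {
  try {
    // Throwing and catching exercises __eh_frame registration with libunwind.
    throw Counter;
  } catch (int V) {
    std::printf("caught %d\n", V);
  }
  return 0;
}

Compile it to a MachO object (e.g. xcrun clang++ -c test.cpp -o test.o) and
pass the resulting object to llvm-jitlink, with or without -oop-executor /
-oop-executor-connect.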
What we have so far is a pretty good proof of concept, so I'll start a new
thread on llvm-dev tomorrow to discuss how we can land this in the LLVM
mainline.
-- Lang.