[llvm-dev] LLVM Orc Weekly #28 -- ORC Runtime Prototype update

Lang Hames via llvm-dev llvm-dev at lists.llvm.org
Sun Jan 17 23:55:13 PST 2021

Hi All,

Happy 2021!

I've just posted a new ORC Runtime Preview patch.

Quick background:

To date, neither ORC nor MCJIT has had its own runtime library. This
has limited and complicated the implementation of many features (e.g. JIT
re-entry functions, exception handling, JIT'd initializers and
de-initializers), and more-or-less prevented the implementation of others
(e.g. native thread local storage).

Late last year I started work on a prototype ORC runtime library to address
this, and with the above commit I've finally got something worth sharing.

The prototype above is simultaneously limited and complex. Limited, in that
it only tackles a small subset of the desired functionality. Complex in
that it's one of the most involved pieces of functionality that I
anticipate supporting, as it requires two-way communication between the
executor and JIT processes. My aim in tackling the hard part first was to
get a sense of our ultimate requirements for the project, particularly in
regard to *where it should live within the LLVM Project*.
It's not a perfect fit for LLVM proper: there will be lots of target
specific code, including assembly, and it should be easily buildable for
multiple targets (that sounds more like compiler-rt). On the other hand
it's not a perfect fit for compiler-rt: it shares data structures with
LLVM, and it would be very useful to be able to re-use llvm::Error /
llvm::Expected (that sounds like LLVM). At the moment I think the best way
to square things would be to keep it in compiler-rt, allow inclusion of
header-only code from LLVM in compiler-rt, and then make Error / Expected
header-only (or copy / adapt them for this library). This will be a
discussion for llvm-dev at some point in the near future.

On to the actual functionality though: The prototype makes significant
changes to the MachOPlatform class and introduces an ORC runtime library in
compiler-rt/lib/orc. Together, these changes allow us to emulate dlopen /
dlsym / dlclose in the JIT executor process. We can use this to define
what it means to run a *JIT program*, rather than just running a JIT
function (the way TargetProcessControl::runAsMain does):

ORC_RT_INTERFACE int64_t __orc_rt_macho_run_program(int argc, char *argv[]) {
  using MainTy = int (*)(int, char *[]);

  void *H = __orc_rt_macho_jit_dlopen("Main", ORC_RT_RTLD_LAZY);
  if (!H)
    return -1;

  auto *Main = reinterpret_cast<MainTy>(__orc_rt_macho_jit_dlsym(H, "main"));
  if (!Main)
    return -1;

  int Result = Main(argc, argv);

  if (__orc_rt_macho_jit_dlclose(H) == -1)
    return -1;

  return Result;
}

The functions __orc_rt_macho_jit_dlopen, __orc_rt_macho_jit_dlsym,
and __orc_rt_macho_jit_dlclose behave the same as their dlfcn.h
counterparts (dlopen, dlsym, dlclose), but operate on JITDylibs rather than
regular dylibs. This includes running static initializers and registering
with language runtimes (e.g. ObjC).

While we could run static initializers before (e.g. via
LLJIT::runConstructors), we had to initiate this from the JIT process side,
which has two significant drawbacks: (1) Extra RPC round trips, and (2) in
the out-of-process case: initializers not running on the executor thread
that requested them, since that thread will be blocked waiting for its call
to return. Issue (1) only affects performance, but (2) can affect
correctness if the initializers modify thread local values, or interact
with locks or threads. Interacting with threads from initializers is
generally best avoided, but nonetheless is done by real-world code, so we
want to support it. By using the runtime we can improve both performance
and correctness (or at least consistency with current behavior).

The effect of this is that we can now load C++, Objective-C and Swift
programs in the JIT and expect them to run correctly, at least for simple
cases. This works regardless of whether the JIT'd code runs in-process or
out-of-process. To test all this I have integrated support for the
prototype runtime into llvm-jitlink. Demo output from this tool is shown
below for two simple input programs: one Swift, one C++. All of this is
MachO specific at the moment, but provides a template that could be easily
re-used to support this on ELF platforms, and likely on COFF platforms too.

While the discussion on where the runtime should live plays out I will
continue adding / moving functionality to the prototype runtime. Next up
will be eh-frame registration and resolver functions (both currently in
OrcTargetProcess). After that I'll try to tackle support for native MachO
thread local storage.

As always: Questions and comments are very welcome.

-- Lang.

lhames at Langs-MacBook-Pro scratch % cat foo.swift
class MyClass {
  func foo() {}
}

let m = MyClass()

lhames at Langs-MacBook-Pro scratch % xcrun swiftc -emit-object -o foo.o foo.swift
lhames at Langs-MacBook-Pro scratch % llvm-jitlink -dlopen /usr/lib/swift/libswiftCore.dylib foo.o
lhames at Langs-MacBook-Pro scratch % llvm-jitlink -oop-executor -dlopen /usr/lib/swift/libswiftCore.dylib foo.o
lhames at Langs-MacBook-Pro scratch % cat inits.cpp
#include <iostream>

class Foo {
public:
  Foo() { std::cout << "Foo::Foo()\n"; }
  ~Foo() { std::cout << "Foo::~Foo()\n"; }
  void foo() { std::cout << "Foo::foo()\n"; }
};

Foo F;

int main(int argc, char *argv[]) {
  return 0;
}
lhames at Langs-MacBook-Pro scratch % xcrun clang++ -c -o inits.o inits.cpp
lhames at Langs-MacBook-Pro scratch % llvm-jitlink inits.o

lhames at Langs-MacBook-Pro scratch % llvm-jitlink -oop-executor inits.o