[llvm-dev] LLVM Orc Weekly #28 -- ORC Runtime Prototype update

Stefan Gränitz via llvm-dev llvm-dev at lists.llvm.org
Thu Feb 4 07:56:17 PST 2021


Thanks for your comprehensive reply!

> This means that we can use the new '-use-orc-runtime=false' option to
> disable runtime loading and still run all llvm-jitlink tests that are
> in-tree today.
Fantastic. I had a look at the code and it seems entirely reasonable.
One detail noted in your prototype implementation is that the option's
default value would depend on whether compiler-rt is built. It's
possible to infer this from CMake, but it's a little tricky since
compiler-rt is generally configured after LLVM. I made a patch that
could be used for that purpose once your prototype lands:
https://reviews.llvm.org/D96039
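
For illustration, here is a minimal sketch of what a build-dependent
default could look like on the C++ side, assuming CMake sets a
hypothetical LLVM_JITLINK_HAVE_ORC_RUNTIME define when compiler-rt is
configured (the define name is made up here; only the -use-orc-runtime
option itself comes from your prototype):

  #include "llvm/Support/CommandLine.h"

  using namespace llvm;

  // Hypothetical define, assumed to be exported by CMake when
  // compiler-rt is part of the build.
  #ifdef LLVM_JITLINK_HAVE_ORC_RUNTIME
  static constexpr bool UseOrcRuntimeDefault = true;
  #else
  static constexpr bool UseOrcRuntimeDefault = false;
  #endif

  static cl::opt<bool> UseOrcRuntime(
      "use-orc-runtime",
      cl::desc("Load the ORC runtime into the executor process"),
      cl::init(UseOrcRuntimeDefault));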

>     At the moment, the TargetProcess library is only used by JITLink.
>     Will it stay like that? If so, RuntimeDyld-based JITs will not be
>     affected by the patch. Will it be possible to build and run e.g. a
>     LLJIT instance with a RuntimeDyldLinkingLayer if only LLVM gets
>     built (omitting LLVM_ENABLE_PROJECTS="clang;compiler-rt")?
>
>
> I think that the LLI tool should be migrated to TargetProcess too.
Yes, agreed. Though it seems to me that this makes the LLJITBuilder
configuration quite complex. After all, we'd need to keep RuntimeDyld
support, and that doesn't work with TargetProcess, right? Would that
make the case for a separate jit-kind 'orc-rtdyld'?
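
For context, roughly what I mean by keeping RuntimeDyld support in an
LLJIT setup (just a sketch, not taken from your patch; the exact
creator signature has changed between LLVM versions, and the helper
name createRtdyldLLJIT is made up):

  #include "llvm/ADT/Triple.h"
  #include "llvm/ExecutionEngine/Orc/LLJIT.h"
  #include "llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h"
  #include "llvm/ExecutionEngine/SectionMemoryManager.h"

  using namespace llvm;
  using namespace llvm::orc;

  Expected<std::unique_ptr<LLJIT>> createRtdyldLLJIT() {
    return LLJITBuilder()
        .setObjectLinkingLayerCreator(
            // Swap the default linking layer for a RuntimeDyld-based one.
            [](ExecutionSession &ES, const Triple &TT)
                -> Expected<std::unique_ptr<ObjectLayer>> {
              return std::make_unique<RTDyldObjectLinkingLayer>(
                  ES, [] { return std::make_unique<SectionMemoryManager>(); });
            })
        .create();
  }

That would keep MCJIT-like clients working without TargetProcess, but
it means maintaining two linking-layer paths in the builder.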

> LLI does run static initializers / deinitializers today, but only
> in-process using the existing initializer infrastructure (which I
> think is a hack). I think two reasonable options going forward would be:
> (1) Make running initializers via lli dependent on building
> compiler-rt (as in llvm-jitlink). This is my preferred solution.
> (2) Move the existing hacks out of ORC and into LLI (or a specific
> LLI_LLJIT class in ORC).
..., in which case orc-rtdyld could keep the hack à la (2) and the
JITLink-based kinds could switch to (1)? Moving lli-specific code out
of ORC makes sense in any case.
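
Just to make that idea concrete, a sketch of how lli's -jit-kind
switch could grow such a value (entirely hypothetical; 'orc-rtdyld'
does not exist, and the set of values I list here may not match what
is in-tree today):

  #include "llvm/Support/CommandLine.h"

  using namespace llvm;

  // Hypothetical sketch only; 'orc-rtdyld' is not an existing jit-kind.
  enum class JITKind { MCJIT, Orc, OrcLazy, OrcRtdyld };

  static cl::opt<JITKind> UseJITKind(
      "jit-kind", cl::desc("Choose underlying JIT kind."),
      cl::init(JITKind::Orc),
      cl::values(
          clEnumValN(JITKind::MCJIT, "mcjit", "MCJIT"),
          clEnumValN(JITKind::Orc, "orc", "Orc JIT (JITLink)"),
          clEnumValN(JITKind::OrcLazy, "orc-lazy", "Orc-based lazy JIT"),
          clEnumValN(JITKind::OrcRtdyld, "orc-rtdyld",
                     "Orc JIT with RuntimeDyld linking (hypothetical)")));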

> What I think may change over time is how advanced features (e.g.
> laziness, initializers) are implemented: It is so much easier and
> lower maintenance to implement these within the runtime. Eventually I
> could see us dropping support for them in LLVM-only builds
Sounds reasonable. Maybe that would also be an opportunity to
reevaluate the role of lli. It always seemed to me that it was
intended as a helper tool for testing and for developers to quickly
run bitcode snippets. With the advent of orc-lazy, lli also started to
serve as an entry point to these advanced ORC features. I recently
started wondering how much these two use cases still overlap and
whether it might be preferable to have two separate executables
instead. What's your impression? I guess there are also good reasons
why lli should remain as it is today.

Stefan

On 24/01/2021 01:55, Lang Hames wrote:
> Hi Stefan,
>
>     I am always in favor of getting JIT improvements into mainline as
>     soon as possible. Also, simplifying APIs is an important goal.
>
>     On the other hand this raises a few general questions for me:
>
>     IIUC the current patch introduces a dependency from LLVM to
>     compiler-rt/clang build artifacts, because the
>     llvm-jitlink-executor executable is fully functional only if it
>     can find the clang_rt.orc static library in the build tree. Do we
>     have dependencies like that in mainline so far?
>
>
> These are excellent questions.
>
> As far as I know we do not have any dependencies like this (LLVM tool
> depends on compiler-rt for full functionality) in the mainline yet.
> Even superficially similar situations (e.g. building clang without
> building compiler-rt or libcxx) are quite different as there must
> already be compatible system libraries available to bootstrap LLVM in
> the first place.
>
> On the other hand, while it's true that llvm-jitlink will no longer
> have full functionality without loading the runtime, it's not the case
> that there will be any regression from the current functionality:
> we're only enabling new functionality here (at least in llvm-jitlink,
> which never used to run initializers). This means that we can use the
> new '-use-orc-runtime=false' option to disable runtime loading and
> still run all llvm-jitlink tests that are in-tree today.
>
>     And how does it affect testing? So far, it seems we have no
>     testing for out-of-process execution, because the only tool that
>     exercises this functionality is llvm-jitlink, which itself is
>     mostly used as a testing helper. If the functionality now moves
>     into the TargetProcess library, it might be worth thinking through
>     the test strategy first.
>
>
> I think it is reasonable to maintain the existing tests (adding
> '-use-orc-runtime=false'), then add new end-to-end tests of the
> runtime via llvm-jitlink that will be dependent on having built
> compiler-rt.
>
>     At the moment, the TargetProcess library is only used by JITLink.
>     Will it stay like that? If so, RuntimeDyld-based JITs will not be
>     affected by the patch. Will it be possible to build and run e.g. a
>     LLJIT instance with a RuntimeDyldLinkingLayer if only LLVM gets
>     built (omitting LLVM_ENABLE_PROJECTS="clang;compiler-rt")?
>
>
> I think that the LLI tool should be migrated to TargetProcess too. We
> should distinguish between TargetProcess and the ORC runtime though:
> TargetProcess can be linked into the executor without requiring the
> runtime.
>
> LLI does run static initializers / deinitializers today, but only
> in-process using the existing initializer infrastructure (which I
> think is a hack). I think two reasonable options going forward would be:
> (1) Make running initializers via lli dependent on building
> compiler-rt (as in llvm-jitlink). This is my preferred solution.
> (2) Move the existing hacks out of ORC and into LLI (or a specific
> LLI_LLJIT class in ORC).
>
> For MCJIT-like use cases you will definitely still be able to build
> and run an LLJIT instance with either RTDyldObjectLinkingLayer
> (RuntimeDyld) or ObjectLinkingLayer (JITLink) with LLVM only. I think
> this will remain true indefinitely. What I think may change over time
> is how advanced features (e.g. laziness, initializers) are
> implemented: It is so much easier and lower maintenance to implement
> these within the runtime. Eventually I could see us dropping support
> for them in LLVM-only builds, at which point it will be a runtime
> error to attempt to use those features (e.g. attempting to add a
> lazyReexport will yield a "cannot resolve __orc_rt_jit_reentry" error).
>
> Do these solutions seem reasonable to you?
>
> Thank you very much again for thinking about this -- I think testing
> and dependencies are the trickiest aspects of introducing this
> runtime, and it's very helpful to get other developers' perspectives. 
>
> Regards,
> Lang.
>
> On Fri, Jan 22, 2021 at 9:53 PM Stefan Gränitz
> <stefan.graenitz at gmail.com> wrote:
>
>     I am always in favor of getting JIT improvements into mainline as
>     soon as possible. Also, simplifying APIs is an important goal.
>
>     On the other hand this raises a few general questions for me:
>
>     IIUC the current patch introduces a dependency from LLVM to
>     compiler-rt/clang build artifacts, because the
>     llvm-jitlink-executor executable is fully functional only if it
>     can find the clang_rt.orc static library in the build tree. Do we
>     have dependencies like that in mainline so far?
>
>     And how does it affect testing? So far, it seems we have no
>     testing for out-of-process execution, because the only tool that
>     exercises this functionality is llvm-jitlink, which itself is
>     mostly used as a testing helper. If the functionality now moves
>     into the TargetProcess library, it might be worth thinking through
>     the test strategy first.
>
>     At the moment, the TargetProcess library is only used by JITLink.
>     Will it stay like that? If so, RuntimeDyld-based JITs will not be
>     affected by the patch. Will it be possible to build and run e.g. a
>     LLJIT instance with a RuntimeDyldLinkingLayer if only LLVM gets
>     built (omitting LLVM_ENABLE_PROJECTS="clang;compiler-rt")?
>
>     Last but not least, there are examples that use JITLink. Could we
>     still build and run them if only LLVM gets built?
>
>     Thanks,
>     Stefan
>
>     On 19/01/2021 23:08, Lang Hames wrote:
>>     Big question for JIT clients: Does anyone have any objection to
>>     APIs in ORC *relying* on the runtime being loaded in the target?
>>     If so, now is the time to let me know. :)
>>
>>     I think possible objections are JIT'd program startup time
>>     (unlikely to be very high, and likely fixable via careful runtime
>>     design and pre-linking of parts of the runtime), and difficulties
>>     building compiler-rt (which sounds like something we should fix
>>     in compiler-rt).
>>
>>     If we can assume that the runtime is loadable then we can
>>     significantly simplify the TargetProcess library, and
>>     TargetProcessControl API, and further accelerate feature
>>     development in LLVM 13.
>>
>>     -- Lang.
>>
>>     On Tue, Jan 19, 2021 at 8:45 AM Lang Hames
>>     <lhames at gmail.com> wrote:
>>
>>         Hi Stefan,
>>
>>             % ./bin/llvm-jitlink -oop-executor inits.o
>>             JIT session error: Symbols not found: [
>>             __ZTIN4llvm6detail14format_adapterE ]
>>
>>
>>         I've been testing with a debug build:
>>
>>         % xcrun cmake -GNinja -DCMAKE_BUILD_TYPE=Debug
>>         -DLLVM_ENABLE_PROJECTS="llvm;clang;compiler-rt" ../llvm
>>
>>         Matching this build might fix the issue, though building with
>>         my config (if it works) is only a short-term fix. The error
>>         that you're seeing implies that the runtime is depending
>>         on a symbol from libSupport that is not being linked in to
>>         the target (llvm-jitlink-executor). I'll aim to break these
>>         dependencies on libSupport in the future. Mostly that means
>>         either removing the dependence on llvm::Error /
>>         llvm::Expected (e.g. by creating stripped down versions for
>>         the orc runtime), or making those types header-only.
>>
>>         -- Lang.
>>
>>
>>
>>
>>
>>         On Tue, Jan 19, 2021 at 12:50 AM Stefan Gränitz
>>         <stefan.graenitz at gmail.com> wrote:
>>
>>             Wow, thanks for the update. One more ORC milestone in a
>>             short period of time!
>>
>>             On macOS I built the C++ example like this:
>>
>>             % cmake -GNinja -DLLVM_TARGETS_TO_BUILD=host
>>             -DLLVM_ENABLE_PROJECTS="clang;compiler-rt" ../llvm
>>             % ninja llvm-jitlink llvm-jitlink-executor
>>             lib/clang/12.0.0/lib/darwin/libclang_rt.orc_osx.a
>>             % clang++ -c -o inits.o inits.cpp
>>
>>             The in-process version works perfectly, but with the
>>             out-of-process flag the example fails:
>>
>>             % ./bin/llvm-jitlink inits.o
>>             Foo::Foo()
>>             Foo::foo()
>>             Foo::~Foo()
>>             % ./bin/llvm-jitlink -oop-executor inits.o
>>             JIT session error: Symbols not found: [
>>             __ZTIN4llvm6detail14format_adapterE ]
>>
>>             Any idea what could go wrong here? Otherwise I can try to
>>             debug it later this week. (Full error below.)
>>
>>             Best
>>             Stefan
>>
>>             --
>>
>>             JIT session error: Symbols not found: [
>>             __ZTIN4llvm6detail14format_adapterE ]
>>             /Users/staefsn/Develop/LLVM/monorepo/llvm-orc-runtime/build/bin/llvm-jitlink:
>>             Failed to materialize symbols: { (Main, {
>>             ___orc_rt_macho_symbol_lookup_remote,
>>             __ZN4llvm3orc6shared21WrapperFunctionResult22destroyWithArrayDeleteE39LLVMOrcSharedCWrapperFunctionResultDatay,
>>             __ZNSt3__113__vector_baseIN4llvm3orc6shared25MachOJITDylibInitializersENS_9allocatorIS4_EEED2Ev,
>>             __ZNK4llvm8ExpectedINS_3orc6shared21WrapperFunctionResultEE22fatalUncheckedExpectedEv,
>>             __ZN4llvm3orc6shared21toWrapperFunctionBlobIJNSt3__112basic_stringIcNS3_11char_traitsIcEENS3_9allocatorIcEEEEEEENS1_21WrapperFunctionResultEDpRKT_,
>>             __ZN4llvm3orc6shared20VectorRawByteChannelD1Ev,
>>             __ZN4llvm3orc6shared21SequenceSerializationINS1_20VectorRawByteChannelEJNSt3__16vectorINS1_25MachOJITDylibInitializers13SectionExtentENS4_9allocatorIS7_EEEESA_SA_EE11deserializeISA_JSA_SA_EEENS_5ErrorERS3_RT_DpRT0_,
>>             __ZNSt3__16vectorIN4llvm3orc6shared25MachOJITDylibInitializers13SectionExtentENS_9allocatorIS5_EEE8__appendEm,
>>             __ZN4llvm3orc6shared21toWrapperFunctionBlobIJyNSt3__112basic_stringIcNS3_11char_traitsIcEENS3_9allocatorIcEEEEEEENS1_21WrapperFunctionResultEDpRKT_,
>>             __ZN4llvm29VerifyEnableABIBreakingChecksE,
>>             __ZN4llvm15format_providerImvE6formatERKmRNS_11raw_ostreamENS_9StringRefE,
>>             __ZNSt3__114__split_bufferIN4llvm3orc6shared25MachOJITDylibInitializersERNS_9allocatorIS4_EEED2Ev,
>>             __ZTVN4llvm6detail23provider_format_adapterIRmEE,
>>             __ZN4llvm3orc6shared21SequenceSerializationINS1_20VectorRawByteChannelEJyNSt3__112basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEEEE9serializeIRKyJRKSA_EEENS_5ErrorERS3_OT_DpOT0_,
>>             __ZTVN4llvm6detail23provider_format_adapterImEE,
>>             __ZN4llvm6detail23provider_format_adapterImE6formatERNS_11raw_ostreamENS_9StringRefE,
>>             __ZN4llvm6detail23provider_format_adapterIRmED1Ev,
>>             ___orc_rt_macho_get_deinitializers_tag,
>>             __ZN4llvm6detail23provider_format_adapterIRmED0Ev,
>>             __ZTVN4llvm3orc6shared14RawByteChannelE,
>>             __ZN6orc_rt12jit_dispatchEPKvN4llvm8ArrayRefIhEE,
>>             __ZN4llvm3orc6shared20VectorRawByteChannelD0Ev,
>>             __ZTVN4llvm3orc6shared20VectorRawByteChannelE,
>>             __ZN4llvm8cantFailENS_5ErrorEPKc,
>>             __ZN4llvm6detail23provider_format_adapterImED1Ev,
>>             __ZTVN4llvm6detail23provider_format_adapterIRjEE,
>>             __ZN4llvm6detail23provider_format_adapterImED0Ev,
>>             __ZN4llvm15format_providerIjvE6formatERKjRNS_11raw_ostreamENS_9StringRefE,
>>             __ZTSN4llvm3orc6shared14RawByteChannelE,
>>             __ZN4llvm6detail23provider_format_adapterIRjE6formatERNS_11raw_ostreamENS_9StringRefE,
>>             __ZN4llvm6detail23provider_format_adapterIRjED0Ev,
>>             __ZN4llvm3orc6shared19SerializationTraitsINS1_20VectorRawByteChannelENSt3__16vectorINS1_25MachOJITDylibInitializersENS4_9allocatorIS6_EEEES9_vE11deserializeERS3_RS9_,
>>             __ZTIN4llvm6detail23provider_format_adapterImEE,
>>             __ZN4llvm6detail15HelperFunctions15consumeHexStyleERNS_9StringRefERNS_13HexPrintStyleE,
>>             __ZTIN4llvm6detail23provider_format_adapterIRmEE,
>>             ___orc_rt_macho_get_initializers_tag,
>>             __ZTSN4llvm6detail23provider_format_adapterImEE,
>>             __ZN4llvm3orc6shared20VectorRawByteChannel4sendEv,
>>             __ZN4llvm6detail23provider_format_adapterIRmE6formatERNS_11raw_ostreamENS_9StringRefE,
>>             ___orc_rt_macho_symbol_lookup_tag,
>>             __ZTSN4llvm6detail23provider_format_adapterIRmEE,
>>             __ZN4llvm3orc6shared20VectorRawByteChannel11appendBytesEPKcj,
>>             __ZTSN4llvm6detail23provider_format_adapterIRjEE,
>>             __ZN4llvm3orc6shared20VectorRawByteChannel9readBytesEPcj,
>>             __ZTSN4llvm3orc6shared20VectorRawByteChannelE,
>>             __ZN4llvm6detail23provider_format_adapterIRjED1Ev,
>>             __ZN4llvm3orc6shared14RawByteChannelD1Ev,
>>             __ZN4llvm3orc6shared23fromWrapperFunctionBlobIJNSt3__16vectorINS1_25MachOJITDylibInitializersENS3_9allocatorIS5_EEEEEEENS_5ErrorENS_8ArrayRefIhEEDpRT_,
>>             __ZN4llvm3orc6shared14RawByteChannelD0Ev,
>>             __ZN4llvm3orc6shared21SequenceSerializationINS1_20VectorRawByteChannelEJyyNSt3__16vectorINS1_25MachOJITDylibInitializers13SectionExtentENS4_9allocatorIS7_EEEESA_SA_EE11deserializeIyJySA_SA_SA_EEENS_5ErrorERS3_RT_DpRT0_,
>>             __ZNSt3__16vectorIN4llvm3orc6shared25MachOJITDylibInitializersENS_9allocatorIS4_EEE8__appendEm,
>>             __ZN4llvm3orc6shared21SequenceSerializationINS1_20VectorRawByteChannelEJyyEE11deserializeIyJyEEENS_5ErrorERS3_RT_DpRT0_,
>>             __ZNSt3__16vectorIN4llvm3orc6shared25MachOJITDylibInitializersENS_9allocatorIS4_EEE6resizeEm,
>>             __ZN4llvm3orc6shared19SerializationTraitsINS1_20VectorRawByteChannelENSt3__112basic_stringIcNS4_11char_traitsIcEENS4_9allocatorIcEEEESA_vE11deserializeERNS1_14RawByteChannelERSA_,
>>             __ZN4llvm3orc6shared23fromWrapperFunctionBlobIJyEEENS_5ErrorENS_8ArrayRefIhEEDpRT_,
>>             __ZTIN4llvm6detail23provider_format_adapterIRjEE,
>>             __ZN4llvm3orc6shared21WrapperFunctionResult20getAnyOutOfBandErrorEv,
>>             __ZTIN4llvm3orc6shared20VectorRawByteChannelE,
>>             ___orc_rt_macho_get_initializers_remote,
>>             __ZTIN4llvm3orc6shared14RawByteChannelE,
>>             __ZNSt3__16vectorIhNS_9allocatorIhEEE26__swap_out_circular_bufferERNS_14__split_bufferIhRS2_EE,
>>             __ZN4llvm3orc6shared19SerializationTraitsINS1_20VectorRawByteChannelENSt3__16vectorINS1_25MachOJITDylibInitializers13SectionExtentENS4_9allocatorIS7_EEEESA_vE11deserializeERS3_RSA_
>>             }) }
>>             /Users/staefsn/Develop/LLVM/monorepo/llvm-orc-runtime/build/bin/llvm-jitlink-executor:Response
>>             has unknown sequence number 527162
>>
>>             On 18/01/2021 08:55, Lang Hames wrote:
>>>             Hi All,
>>>
>>>             Happy 2021!
>>>
>>>             I've just posted a new Orc Runtime Preview patch:
>>>             https://github.com/lhames/llvm-project/commit/8833a7f24693f1c7a3616438718e7927c6624894
>>>
>>>             Quick background:
>>>
>>>             To date, neither ORC nor MCJIT have had their own
>>>             runtime libraries. This has limited and complicated the
>>>             implementation of many features (e.g. jit re-entry
>>>             functions, exception handling, JIT'd initializers and
>>>             de-initializers), and more-or-less prevented the
>>>             implementation of others (e.g. native thread local storage).
>>>
>>>             Late last year I started work on a prototype ORC runtime
>>>             library to address this, and with the above commit I've
>>>             finally got something worth sharing.
>>>
>>>             The prototype above is simultaneously limited and
>>>             complex. Limited, in that it only tackles a small subset
>>>             of the desired functionality. Complex in that it's one
>>>             of the most involved pieces of functionality that I
>>>             anticipate supporting, as it requires two-way
>>>             communication between the executor and JIT processes. My
>>>             aim in choosing to tackle the hard part first was to get
>>>             a sense of our ultimate requirements for the project,
>>>             particularly in regards to /where it should live within
>>>             the LLVM Project/. It's not a perfect fit for LLVM
>>>             proper: there will be lots of target specific code,
>>>             including assembly, and it should be easily buildable
>>>             for multiple targets (that sounds more like
>>>             compiler-rt). On the other hand it's not a perfect fit
>>>             for compiler-rt: it shares data structures with LLVM,
>>>             and it would be very useful to be able to re-use
>>>             llvm::Error  / llvm::Expected (that sounds like LLVM).
>>>             At the moment I think the best way to square things
>>>             would be to keep it in compiler-rt, allow inclusion of
>>>             header-only code from LLVM in compiler-rt, and then make
>>>             Error / Expected header-only (or copy / adapt them for
>>>             this library). This will be a discussion for llvm-dev at
>>>             some point in the near future.
>>>
>>>             On to the actual functionality though: The prototype
>>>             makes significant changes to the MachOPlatform class and
>>>             introduces an ORC runtime library in
>>>             compiler-rt/lib/orc. Together, these changes allow us to
>>>             emulate dlopen / dlsym / dlclose in the JIT executor
>>>             process. We can use this to define what it means to run
>>>             a /JIT program/, rather than just running a JIT function
>>>             (the way TargetProcessControl::runAsMain does):
>>>
>>>             ORC_RT_INTERFACE int64_t __orc_rt_macho_run_program(int argc, char *argv[])
>>>             {
>>>               using MainTy = int (*)(int, char *[]);
>>>
>>>               void *H = __orc_rt_macho_jit_dlopen("Main", ORC_RT_RTLD_LAZY);
>>>               if (!H) {
>>>                 __orc_rt_log_error(__orc_rt_macho_jit_dlerror());
>>>                 return -1;
>>>               }
>>>
>>>               auto *Main = reinterpret_cast<MainTy>(__orc_rt_macho_jit_dlsym(H, "main"));
>>>               if (!Main) {
>>>                 __orc_rt_log_error(__orc_rt_macho_jit_dlerror());
>>>                 return -1;
>>>               }
>>>
>>>               int Result = Main(argc, argv);
>>>
>>>               if (__orc_rt_macho_jit_dlclose(H) == -1)
>>>                 __orc_rt_log_error(__orc_rt_macho_jit_dlerror());
>>>
>>>               return Result;
>>>             }
>>>
>>>             The
>>>             functions __orc_rt_macho_jit_dlopen, __orc_rt_macho_jit_dlsym,
>>>             and __orc_rt_macho_jit_dlclose behave the same as their
>>>             dlfcn.h counterparts (dlopen, dlsym, dlclose), but
>>>             operate on JITDylibs rather than regular dylibs. This
>>>             includes running static initializers and registering
>>>             with language runtimes (e.g. ObjC).
>>>
>>>             While we could run static initializers before (e.g. via
>>>             LLJIT::runConstructors), we had to initiate this from
>>>             the JIT process side, which has two significant
>>>             drawbacks: (1) Extra RPC round trips, and (2) in the
>>>             out-of-process case: initializers not running on the
>>>             executor thread that requested them, since that thread
>>>             will be blocked waiting for its call to return. Issue
>>>             (1) only affects performance, but (2) can affect
>>>             correctness if the initializers modify thread local
>>>             values, or interact with locks or threads. Interacting
>>>             with threads from initializers is generally best
>>>             avoided, but nonetheless is done by real-world code, so
>>>             we want to support it. By using the runtime we can
>>>             improve both performance and correctness (or at least
>>>             consistency with current behavior). 
>>>
>>>             The effect of this is that we can now load C++,
>>>             Objective-C and Swift programs in the JIT and expect
>>>             them to run correctly, at least for simple cases. This
>>>             works regardless of whether the JIT'd code runs
>>>             in-process or out-of-process. To test all this I have
>>>             integrated support for the prototype runtime into
>>>             llvm-jitlink. You can demo output from this tool below
>>>             for two simple input programs: One swift, one C++. All
>>>             of this is MachO specific at the moment, but provides a
>>>             template that could be easily re-used to support this on
>>>             ELF platforms, and likely on COFF platforms too.
>>>
>>>             While the discussion on where the runtime should live
>>>             plays out I will continue adding / moving functionality
>>>             to the prototype runtime. Next up will be eh-frame
>>>             registration and resolver functions (both currently in
>>>             OrcTargetProcess). After that I'll try to tackle support
>>>             for native MachO thread local storage.
>>>
>>>             As always: Questions and comments are very welcome.
>>>
>>>             -- Lang.
>>>
>>>             lhames at Langs-MacBook-Pro scratch % cat foo.swift
>>>             class MyClass {
>>>               func foo() {
>>>                 print("foo")
>>>               }
>>>             }
>>>
>>>             let m = MyClass()
>>>             m.foo();
>>>
>>>             lhames at Langs-MacBook-Pro scratch % xcrun swiftc
>>>             -emit-object -o foo.o foo.swift
>>>             lhames at Langs-MacBook-Pro scratch % llvm-jitlink -dlopen
>>>             /usr/lib/swift/libswiftCore.dylib foo.o              
>>>             foo
>>>             lhames at Langs-MacBook-Pro scratch % llvm-jitlink
>>>             -oop-executor -dlopen /usr/lib/swift/libswiftCore.dylib
>>>             foo.o
>>>             foo
>>>             lhames at Langs-MacBook-Pro scratch % cat inits.cpp 
>>>             #include <iostream>
>>>
>>>             class Foo {
>>>             public:
>>>               Foo() { std::cout << "Foo::Foo()\n"; }
>>>               ~Foo() { std::cout << "Foo::~Foo()\n"; }
>>>               void foo() { std::cout << "Foo::foo()\n"; }
>>>             };
>>>
>>>             Foo F;
>>>
>>>             int main(int argc, char *argv[]) {
>>>               F.foo();
>>>               return 0;
>>>             }
>>>             lhames at Langs-MacBook-Pro scratch % xcrun clang++ -c -o
>>>             inits.o inits.cpp
>>>             lhames at Langs-MacBook-Pro scratch % llvm-jitlink inits.o
>>>             Foo::Foo()
>>>             Foo::foo()
>>>             Foo::~Foo()
>>>             lhames at Langs-MacBook-Pro scratch % llvm-jitlink
>>>             -oop-executor inits.o
>>>             Foo::Foo()
>>>             Foo::foo()
>>>             Foo::~Foo()
>>
>>             -- 
>>             https://flowcrypt.com/pub/stefan.graenitz@gmail.com
>>
>     -- 
>     https://flowcrypt.com/pub/stefan.graenitz@gmail.com
>
-- 
https://flowcrypt.com/pub/stefan.graenitz@gmail.com
