[llvm-dev] ORC JIT Weekly #5
Lang Hames via llvm-dev
llvm-dev at lists.llvm.org
Sun Feb 23 21:48:13 PST 2020
Hi Christian,
> Recall that I initially ran into a problem with the "small" code model on
> macOS, where the JIT was linking calls to library functions that were too
> far away to reach with a 32-bit displacement, leading to segfaults. You
> recommended compiling that code with the "large" code model. I did that in
> a big way and it works, but it has impacted performance quite a lot:
> everything slows down by at least 1.5x. You said you were working on a
> solution with the new JITLink to have the linker add jump tables in these
> situations. Have I characterized this all correctly?
Yes, except that JITLink is actually already available, at least for Darwin
x86-64 and arm64. :)
If you switch your object linking layer from RTDyldObjectLinkingLayer to
ObjectLinkingLayer (which is built on JITLink), you can use small code model
objects, and JITLink will build stubs and GOT entries to access out-of-range
code and data.
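Roughly, the switch looks something like this (an untested sketch; the exact
creator signature and memory manager constructor vary a bit between LLVM
versions):

  #include "llvm/ExecutionEngine/JITLink/JITLinkMemoryManager.h"
  #include "llvm/ExecutionEngine/Orc/LLJIT.h"
  #include "llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h"

  using namespace llvm;
  using namespace llvm::orc;

  Expected<std::unique_ptr<LLJIT>> createJITLinkLLJIT() {
    return LLJITBuilder()
        .setObjectLinkingLayerCreator(
            [](ExecutionSession &ES, const Triple &TT)
                -> std::unique_ptr<ObjectLayer> {
              // ObjectLinkingLayer is the JITLink-based layer. It builds
              // stubs and GOT entries for out-of-range references, so
              // small code model objects just work.
              return std::make_unique<ObjectLinkingLayer>(
                  ES, std::make_unique<jitlink::InProcessMemoryManager>());
            })
        .create();
  }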
On top-of-tree, LLJITBuilder will actually use ObjectLinkingLayer and the
small code model automatically if JITLink is available for the target, so
you can usually just let the default behavior take care of this.
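Your existing workflow of adding object files one at a time then carries
over unchanged. Continuing the sketch above (the file path and symbol name
are placeholders, and the lookup return type differs between LLVM versions):

  #include "llvm/Support/Error.h"
  #include "llvm/Support/MemoryBuffer.h"

  // Hand one relocatable (small code model) object file to the JIT and
  // call an entry point from it. "my_entry_point" is a placeholder name.
  Error addAndRun(LLJIT &J, StringRef ObjPath) {
    auto Obj = errorOrToExpected(MemoryBuffer::getFile(ObjPath));
    if (!Obj)
      return Obj.takeError();
    if (auto Err = J.addObjectFile(std::move(*Obj)))
      return Err;
    auto Sym = J.lookup("my_entry_point");
    if (!Sym)
      return Sym.takeError();
    auto *Entry = (void (*)())Sym->getAddress();
    Entry();
    return Error::success();
  }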
-- Lang.
On Thu, Feb 20, 2020 at 9:28 AM Christian Schafmeister <meister at temple.edu>
wrote:
> Hi Lang,
>
> I read each of your ORC JIT Weekly updates with great interest!
>
> I have a question that I thought other ORC JIT users might be interested
> in...
>
> As I mentioned in the LLVM Discord server, I have this "fast-loading
> object-file" scheme where, rather than linking and then loading dynamic
> libraries into my system, I load object files and add them to the ORC JIT
> one at a time. This scheme works: it lets me avoid calling out to the
> linker, and it gives me some new capabilities that I'm really excited
> about. The time to link the object files at runtime is negligible.
>
> Recall that I initially ran into a problem with the "small" code model on
> macOS, where the JIT was linking calls to library functions that were too
> far away to reach with a 32-bit displacement, leading to segfaults. You
> recommended compiling that code with the "large" code model. I did that in
> a big way and it works, but it has impacted performance quite a lot:
> everything slows down by at least 1.5x. You said you were working on a
> solution with the new JITLink to have the linker add jump tables in these
> situations. Have I characterized this all correctly?
>
> Question: When do you think this might be available for use in llvm-tot?
> I'm asking because I can rearrange things to adopt a hybrid solution in
> the meantime if it's going to be months.
>
> Cheers,
>
> .Chris.
>
> On Sun, Feb 16, 2020 at 6:44 PM Lang Hames <lhames at gmail.com> wrote:
>
>> Hi All,
>>
>> The initializer patch review at https://reviews.llvm.org/D74300 has been
>> updated. The new version contains a MachOPlatform implementation that
>> demonstrates how Platforms and ObjectLinkingLayer::Plugins can work
>> together to implement platform-specific initialization. In this case, the
>> MachOPlatform installs a plugin that scans objects for __objc_classlist and
>> __objc_selrefs sections and uses them to register JIT'd code with the
>> Objective-C runtime. This allows LLJIT instances (and the lli command line
>> tool) to run IR compiled from Objective-C and Swift sources.
>>
>> Discussion on the review is ongoing (thanks especially to Stefan Granitz
>> for his review comments!) but I will aim to have the patch tidied up and
>> landed in the coming week.
>>
>> -- Lang.
>>
>
>
> --
> Christian Schafmeister
> Professor, Chemistry Department
> Temple University
>