[llvm-dev] JIT compiler and calls to existing functions

Russell Wallace via llvm-dev llvm-dev at lists.llvm.org
Mon Mar 28 20:57:33 PDT 2016


Ah! Okay then, so you are saying something substantive that I think I
disagree with, but that could be because there are relevant issues I don't
understand.

My reasoning is this: I've already got a pointer to the function I want the
generated code to call, so I just supply that pointer. It looks ugly on a
microscopic scale, because there are a couple of lines of casts to shepherd
it through the type system, but on even a slightly larger scale it's as
simple and elegant as it gets: I supply a 64-bit machine word that gets
compiled into the object code. There are no extra layers of machinery to go
wrong, no nonlocal effects to understand, and no wondering whether anything
depends on symbol information that might vary between debug and release
builds, between the vendor compiler and clang, or between Windows and
Linux; if it works once, it will work the same way every time. As a bonus,
it will compile slightly faster.

Keeping things symbolic - you're saying the advantage is that the
intermediate code is easier to debug, because a dump will say 'call print'
instead of 'call function at 123456'? I can see that being a real
advantage, but as of right now it doesn't jump out at me as necessarily
outweighing the advantages of the direct pointer method.
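
For what it's worth, the symbolic version of that call isn't much longer.
The following is only a sketch against the MCJIT-era C++ API, with a
hypothetical helper name; print is the same int64_t print(int64_t) host
function discussed further down the thread, and module is assumed to be the
Module being built:

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Module.h"

    // Sketch only: declare print symbolically instead of baking in its address.
    llvm::Value *emitCallToPrint(llvm::Module *module, llvm::IRBuilder<> &builder,
                                 llvm::ArrayRef<llvm::Value *> args) {
      llvm::FunctionType *printTy = llvm::FunctionType::get(
          builder.getInt64Ty(), {builder.getInt64Ty()}, false);
      llvm::Constant *printFn = module->getOrInsertFunction("print", printTy);
      // An IR dump now reads "call i64 @print(i64 ...)"; the actual address is
      // filled in when the JIT resolves symbols at (in-memory) link time.
      return builder.CreateCall(printFn, args);
    }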



On Tue, Mar 29, 2016 at 4:37 AM, Philip Reames <listmail at philipreames.com>
wrote:

> I think our use cases are actually quite similar.  Part of generating the
> in-memory executable code is resolving all the symbolic references and
> relocations.  The details of this are mostly hidden from you by the MCJIT
> interface, but it's this step I was referring to as "link time".
>
> The way to think of MCJIT: generate an object file, incrementally link it,
> and run the dynamic loader, but do it all in memory without round-tripping
> through disk or explicit files.
>
> Philip
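
Concretely, with the MCJIT-era API that in-memory pipeline is driven
through an EngineBuilder. The following is a sketch only: module is assumed
to be the std::unique_ptr<Module> holding the JIT-compiled script, and
"script_main" is a hypothetical entry point.

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/ExecutionEngine/MCJIT.h"   // links in the MCJIT engine
    #include "llvm/IR/Module.h"
    #include "llvm/Support/TargetSelect.h"
    #include <memory>

    int64_t runScript(std::unique_ptr<llvm::Module> module) {
      llvm::InitializeNativeTarget();
      llvm::InitializeNativeTargetAsmPrinter();
      std::string err;
      llvm::ExecutionEngine *engine =
          llvm::EngineBuilder(std::move(module)).setErrorStr(&err).create();
      // finalizeObject() is the in-memory "link": emit object code, apply
      // relocations, and resolve external symbols, without touching disk.
      engine->finalizeObject();
      auto entry = reinterpret_cast<int64_t (*)()>(
          engine->getFunctionAddress("script_main"));  // hypothetical entry point
      return entry();
    }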
>
>
>
> On Mar 28, 2016, at 7:25 PM, Russell Wallace <russell.wallace at gmail.com>
> wrote:
>
> Right, but when you say link time: the JIT compiler I'm writing works the
> way OpenJDK or V8 does. It reads a script, JIT-compiles it into memory, and
> runs the code in memory without ever writing anything to disk (an option
> for ahead-of-time compilation may come later, but that will be a while
> down the road), so we might be doing different things?
>
> On Tue, Mar 29, 2016 at 2:59 AM, Philip Reames <listmail at philipreames.com>
> wrote:
>
>> The option we use is to have a custom memory manager, override the
>> getPointerToNamedFunction function, and provide the pointer to the external
>> function at link time.  The inttoptr scheme works fairly well, but it does
>> make for some pretty ugly and sometimes hard-to-analyze IR.  I recommend
>> leaving everything symbolic until link time if you can.
>>
>> Philip
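
A sketch of that memory-manager route, again against the MCJIT-era API: the
hook shown here is getSymbolAddress, the RTDyldMemoryManager relative of
the getPointerToNamedFunction override mentioned above. print is assumed to
be the same int64_t print(int64_t) host function as elsewhere in the
thread, and the class name is illustrative.

    #include "llvm/ExecutionEngine/SectionMemoryManager.h"
    #include <cstdint>
    #include <string>

    int64_t print(int64_t);  // the existing host function to expose to JITed code

    // Sketch only: resolve the external symbol "print" to a function already
    // linked into the host process, and fall back to the default lookup for
    // everything else. (Some platforms prefix C symbols with an underscore.)
    class HostSymbolMemoryManager : public llvm::SectionMemoryManager {
      uint64_t getSymbolAddress(const std::string &Name) override {
        if (Name == "print" || Name == "_print")
          return reinterpret_cast<uint64_t>(&print);
        return llvm::SectionMemoryManager::getSymbolAddress(Name);
      }
    };

    // Installed when the execution engine is built, e.g.
    //   engineBuilder.setMCJITMemoryManager(
    //       llvm::make_unique<HostSymbolMemoryManager>());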
>>
>>
>> On 03/28/2016 06:33 PM, Russell Wallace via llvm-dev wrote:
>>
>> That seems to work, thanks! The specific code I ended up with to call
>> int64_t print(int64_t) looks like:
>>
>>         // Wrap the host address of print() in an i64 constant, cast it to
>>         // a pointer to "i64 (i64)", and call through that pointer.
>>         auto f = builder.CreateIntToPtr(
>>             ConstantInt::get(builder.getInt64Ty(), uintptr_t(print)),
>>             PointerType::getUnqual(FunctionType::get(
>>                 builder.getInt64Ty(), {builder.getInt64Ty()}, false)));
>>         return builder.CreateCall(f, args);
>>
>>
>> On Mon, Mar 28, 2016 at 1:40 PM, Caldarale, Charles R <
>> Chuck.Caldarale at unisys.com> wrote:
>>
>>> > From: llvm-dev [mailto:llvm-dev-bounces at lists.llvm.org]
>>> > On Behalf Of Russell Wallace via llvm-dev
>>> > Subject: [llvm-dev] JIT compiler and calls to existing functions
>>>
>>> > In the context of a JIT compiler, what's the recommended way to
>>> > generate a call to an existing function, that is, not one that you
>>> > are generating on-the-fly with LLVM, but one that's already linked
>>> > into your program? For example the cosine function (from the
>>> > standard math library); the Kaleidoscope tutorial recommends looking
>>> > it up by name with dlsym("cos"), but it seems to me that it should
>>> > be possible to use a more efficient and portable solution that takes
>>> > advantage of the fact that you already have an actual pointer to
>>> > cos, even if you haven't linked with debugging symbols.
>>>
>>> Perhaps not the most elegant, but we simply use the
>>> IRBuilder.CreateIntToPtr() method to construct the Callee argument for
>>> IRBuilder.CreateCall().  The first argument for CreateIntToPtr() comes from
>>> ConstantInt::get(I64, uintptr_t(ptr)), while the second is a function type
>>> pointer defined by using PointerType::get() on the result of
>>> FunctionType::get() with the appropriate function signature.
>>>
>>>  - Chuck
>>>
>>>
>>
>>
>>
>>
>>
>