[LLVMdev] Thoughts about ExecutionEngine/MCJIT interface

Lang Hames lhames at gmail.com
Thu Mar 19 16:17:05 PDT 2015


Hi Hayden,

I haven't had a chance to yet, but I'll be introducing a lazily compiling
stack into LLI soon. When that goes in, it might be a good candidate for
exposing via the C API.

Cheers,
Lang.

On Fri, Mar 20, 2015 at 8:54 AM, Hayden Livingston <halivingston at gmail.com>
wrote:

> Lang, I saw the Kaleidoscope tutorial, nice!
>
> Now maybe we can also have the LLVM C bindings for this surface area? :-)
>
> I'm happy to help if I am given some guidance.
>
> On Wed, Mar 18, 2015 at 5:42 PM, Lang Hames <lhames at gmail.com> wrote:
>
>> Hi Pawel,
>>
>> I believe that an early (working) version of Orc is in the 3.6 release,
>> and work on Orc is continuing on trunk.
>>
>> The MCJIT API (or at least the ExecutionEngine API) will stay with us for
>> some time, but hopefully only the API. I am hoping that the MCJIT class
>> itself can be deprecated in the future in favor of the OrcMCJITReplacement
>> class. Given that, I'd rather not expose MCJIT directly: Any work we do
>> there will have to be duplicated in OrcMCJITReplacement if everything goes
>> to plan.
>>
>> Orc *should* be a good fit for any MCJIT client: It was designed with
>> MCJIT's use-cases in mind, and built on the same conceptual framework (i.e.
>> replicating the static pipeline by linking object files in memory). Its
>> aim is almost exactly what you've described: to tidy MCJIT up and expose
>> the internals so that people can add new features.
>>
>> I'd be interested to hear how you go porting your project to Orc, and I
>> would be happy to help out where I can. What features of MCJIT are you
>> using now? And what do you want to add? I'd suggest checking out the
>> Orc/Kaleidoscope tutorials (see llvm/examples/Kaleidoscope/Orc/*) and the
>> OrcMCJITReplacement class (see
>> llvm/lib/ExecutionEngine/Orc/OrcMCJITReplacement.*) to get an idea of what
>> features are available now. It sounds like you'll want to add to this, but
>> then that's the purpose of these new APIs.
>>
>> As a sales pitch for Orc, I've included the definition of the initial JIT
>> from the Orc/Kaleidoscope tutorials below. As you can see, you can get a
>> functioning "custom" JIT up and running with a page of code. This basic JIT
>> just lets you throw LLVM modules at it and execute functions from them.
>> Starting from this point, with 149 lines worth of changes*, you can build a
>> JIT that lazily compiles functions from ASTs on first call (no need to even
>> IRGen up front).
>>
>> Cheers,
>> Lang.
>>
>> * As reported by:
>> diff examples/Kaleidoscope/Orc/{initial,fully_lazy}/toy.cpp | wc -l
>>
>> class KaleidoscopeJIT {
>> public:
>>   typedef ObjectLinkingLayer<> ObjLayerT;
>>   typedef IRCompileLayer<ObjLayerT> CompileLayerT;
>>   typedef CompileLayerT::ModuleSetHandleT ModuleHandleT;
>>
>>   KaleidoscopeJIT(TargetMachine &TM)
>>     : Mang(TM.getDataLayout()),
>>       CompileLayer(ObjectLayer, SimpleCompiler(TM)) {}
>>
>>   std::string mangle(const std::string &Name) {
>>     std::string MangledName;
>>     {
>>       raw_string_ostream MangledNameStream(MangledName);
>>       Mang.getNameWithPrefix(MangledNameStream, Name);
>>     }
>>     return MangledName;
>>   }
>>
>>   ModuleHandleT addModule(std::unique_ptr<Module> M) {
>>     auto MM = createLookasideRTDyldMM<SectionMemoryManager>(
>>                 [&](const std::string &Name) {
>>                   return findSymbol(Name).getAddress();
>>                 },
>>                 [](const std::string &S) { return 0; } );
>>
>>     return CompileLayer.addModuleSet(singletonSet(std::move(M)),
>>                                      std::move(MM));
>>   }
>>
>>   void removeModule(ModuleHandleT H) { CompileLayer.removeModuleSet(H); }
>>
>>   JITSymbol findSymbol(const std::string &Name) {
>>     return CompileLayer.findSymbol(Name, true);
>>   }
>>
>>   JITSymbol findUnmangledSymbol(const std::string &Name) {
>>     return findSymbol(mangle(Name));
>>   }
>>
>> private:
>>   Mangler Mang;
>>   ObjLayerT ObjectLayer;
>>   CompileLayerT CompileLayer;
>> };
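>>
>> (As a rough usage sketch, not part of the tutorial itself: the
>> EngineBuilder call, the module M, and the function name "foo" below are
>> assumptions for illustration only.)
>>
>>   // Assumes InitializeNativeTarget*() has already been called.
>>   std::unique_ptr<TargetMachine> TM(EngineBuilder().selectTarget());
>>   KaleidoscopeJIT J(*TM);
>>   auto H = J.addModule(std::move(M));        // M : std::unique_ptr<Module>
>>   auto FooSym = J.findUnmangledSymbol("foo"); // mangles, then looks up
>>   auto *Foo = (double (*)())(intptr_t)FooSym.getAddress();
>>   double Result = Foo();                     // run the JIT'd function
>>   J.removeModule(H);                         // release the module's memory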
>>
>> On Thu, Mar 19, 2015 at 1:28 AM, Paweł Bylica <chfast at gmail.com> wrote:
>>
>>> Hi Lang,
>>>
>>> I know that Orc JIT is somewhere around the corner, but I haven't had time
>>> to check it out. Is it available in the 3.6 release? If so, I will try to
>>> port my project to the new API and share my experiences.
>>>
>>> However, the MCJIT API will stay with us for some time. I would like to
>>> add some improvements to it if we can agree on what direction we would
>>> like to go in. Personally, I would like to expose the whole MCJIT class
>>> and reduce the ExecutionEngine interface.
>>>
>>> - Paweł
>>>
>>>
>>> On Sat, Mar 14, 2015 at 2:39 AM Lang Hames <lhames at gmail.com> wrote:
>>>
>>>> Hi Pawel,
>>>>
>>>> I agree. ExecutionEngine, in its current form, is unhelpful. I'd be in
>>>> favor of cutting the common interface back to something like:
>>>>
>>>> class ExecutionEngine {
>>>> public:
>>>>   virtual void addModule(std::unique_ptr<Module> M) = 0;
>>>>   virtual void* getGlobalValueAddress(const GlobalValue *GV) = 0;
>>>>   virtual GenericValue runFunction(const Function *F,
>>>>                                    const std::vector<GenericValue> &Args) = 0;
>>>> };
>>>>
>>>> That's the obvious common functionality that both the interpreter and
>>>> MCJIT provide. Beyond that I think things get pretty implementation
>>>> specific.
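>>>>
>>>> As a hypothetical illustration (the "fib" function and the surrounding
>>>> helper are made up), a client written against just that surface stays
>>>> agnostic about which engine sits underneath:
>>>>
>>>>   void runFib(ExecutionEngine &EE, std::unique_ptr<Module> M) {
>>>>     Function *Fib = M->getFunction("fib");  // grab it before we move M
>>>>     EE.addModule(std::move(M));
>>>>     std::vector<GenericValue> Args(1);
>>>>     Args[0].IntVal = APInt(32, 10);
>>>>     GenericValue Result = EE.runFunction(Fib, Args);
>>>>     outs() << Result.IntVal << "\n";
>>>>   }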
>>>>
>>>> For what it's worth, this is an issue that I'm trying to address with
>>>> the new Orc JIT APIs. Those expose the internals directly to give you more
>>>> control. If you don't want to be able to switch the underlying
>>>> execution-engine, they may be a good fit for your use-case.
>>>>
>>>> Cheers,
>>>> Lang.
>>>>
>>>>
>>>> On Sat, Mar 14, 2015 at 4:57 AM, Paweł Bylica <chfast at gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I think ExecutionEngine as a common interface for both the Interpreter
>>>>> and MCJIT is almost useless in its current form. There are separate
>>>>> methods in ExecutionEngine for similar or identical features provided by
>>>>> the Interpreter and MCJIT; e.g., to get a pointer to a function you call
>>>>> getPointerToFunction() for the Interpreter or getFunctionAddress() for
>>>>> MCJIT.
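>>>>>
>>>>> For example (illustrative only, with a made-up "square" function), the
>>>>> two engines currently force different call sites:
>>>>>
>>>>>   // Interpreter: look up by Function* and get a void*.
>>>>>   void *Ptr = EE->getPointerToFunction(M->getFunction("square"));
>>>>>
>>>>>   // MCJIT: look up by (mangled) name and get a uint64_t address.
>>>>>   uint64_t Addr = EE->getFunctionAddress("square");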
>>>>>
>>>>> Personally, I'm using MCJIT and wish I had access to some methods that
>>>>> are not available from ExecutionEngine. E.g. I would sometimes like to
>>>>> use getSymbolAddress() instead of getFunctionAddress(), as
>>>>> getFunctionAddress() does some additional work that I'm sure has already
>>>>> been done.
>>>>>
>>>>> Maybe it's time to face the truth that Interpreter-based and MCJIT-based
>>>>> solutions are not so similar, and different interfaces are needed. Or
>>>>> maybe some unification is possible?
>>>>>
>>>>> My propositions / discussion starting points:
>>>>>
>>>>>    1. Expose the MCJIT header in the public API. That would allow users
>>>>>    to cast an ExecutionEngine instance to an MCJIT instance.
>>>>>    2. Separate the Interpreter and MCJIT interfaces and add them to the
>>>>>    API. ExecutionEngine can still be a base class for the genuinely
>>>>>    common part (like the module list).
>>>>>    3. Try to alter the ExecutionEngine interface to unify common
>>>>>    Interpreter and MCJIT features. Is it possible to have one
>>>>>    getFunction() method? (A rough sketch follows this list.)
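>>>>>
>>>>> A rough sketch of what option 3 could look like (hypothetical, not an
>>>>> existing interface; the string-based lookup is my assumption):
>>>>>
>>>>>   class ExecutionEngine {
>>>>>   public:
>>>>>     // One lookup that both engines would implement: MCJIT via its
>>>>>     // symbol table, the Interpreter via the IR Function. What exactly
>>>>>     // the Interpreter should return here is the open question.
>>>>>     virtual void *getFunction(StringRef Name) = 0;
>>>>>   };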
>>>>>
>>>>> - Paweł
>>>>>
>>>>
>>
>