[llvm-dev] LLVM on bare-metal
Brian Clarkson via llvm-dev
llvm-dev at lists.llvm.org
Thu Jun 27 04:49:31 PDT 2019
Yes, I'm definitely referring to the linking functionality in the JIT
part of LLVM (ORC?), not the ld replacement, which I agree is way too
much. As far as I can tell from spec'ing this out, user code (i.e.
plugins) will be exporting symbols as well as importing them. In fact,
Cling is where I was planning on doing my first source code deep-dive
because it seems to cover all the bases in terms of functionality.
So I guess I can tentatively identify my list of functional requirements as
follows (see the rough sketch after the list):
- load relocatable (but highly optimized) machine code
- relocate the machine code
- export symbols from the loaded machine code (available exports are not
known at compile-time)
- import symbols into the loaded machine code (required imports are not
known at compile-time)
- finally, actually execute functions exported from the loaded machine code
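For my own notes, here is a rough, unverified sketch of how ORC's LLJIT
seems to map onto that list on a hosted system. The file name "plugin.o",
the imported "host_log" function and the exported "plugin_process" symbol
are made up for illustration, and the API details (JITEvaluatedSymbol,
lookup(), etc.) shift between LLVM releases:

    // Sketch only: load a relocatable object, give it one host symbol,
    // and call one of its exported functions via ORC's LLJIT.
    #include "llvm/ExecutionEngine/Orc/LLJIT.h"
    #include "llvm/Support/Error.h"
    #include "llvm/Support/MemoryBuffer.h"
    #include "llvm/Support/TargetSelect.h"
    #include <cstdint>
    #include <cstdio>

    using namespace llvm;
    using namespace llvm::orc;

    // Hypothetical host-side service that plugins import.
    extern "C" void host_log(const char *Msg) { std::puts(Msg); }

    int main() {
      InitializeNativeTarget();
      InitializeNativeTargetAsmPrinter();

      auto JIT = cantFail(LLJITBuilder().create());

      // Import: publish host_log so the plugin's undefined reference to it
      // resolves when the object is linked.
      MangleAndInterner Mangle(JIT->getExecutionSession(), JIT->getDataLayout());
      cantFail(JIT->getMainJITDylib().define(absoluteSymbols(
          {{Mangle("host_log"),
            JITEvaluatedSymbol(JITTargetAddress(uintptr_t(&host_log)),
                               JITSymbolFlags::Exported)}})));

      // Load + relocate: RuntimeDyld/JITLink underneath performs the
      // relocation step when the object is materialized.
      auto Obj = cantFail(errorOrToExpected(MemoryBuffer::getFile("plugin.o")));
      cantFail(JIT->addObjectFile(std::move(Obj)));

      // Export + execute: look up a symbol the plugin defines and call it.
      auto Sym = cantFail(JIT->lookup("plugin_process"));
      auto *Process = reinterpret_cast<void (*)(float *, int)>(
          uintptr_t(Sym.getAddress()));
      float Buf[64] = {};
      Process(Buf, 64);
      return 0;
    }

Whether any of that can be made to fit on a bare-metal RTOS is exactly the
open question, of course.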
I latched on to LLVM because I nearly lost my mind trying to read the Linux
source code for libdl.
Kind regards
Brian Clarkson
Orthogonal Devices
Tokyo, Japan
www.orthogonaldevices.com
On 27/06/2019 19:28, Peter Smith wrote:
> On Thu, 27 Jun 2019 at 11:16, Brian Clarkson
> <clarkson at orthogonaldevices.com> wrote:
>> Would you say that embedding the LLVM linker is a practical way to get
>> the required dynamic linking capabilities on the bare-metal side?
>>
> If I've understood you correctly, probably not. The LLVM linker (LLD)
> is a static linker; it doesn't have any image-loading functionality.
> It also isn't really suited for running on top of an RTOS, in the same
> way that Clang isn't. There is something called llvm-link, but that
> links multiple bitcode files into a single bitcode file, which I'm
> guessing isn't what you want either.
>
> I think there is a dynamic linker in one of the JITs, but I can't
> remember where it is off the top of my head.
>
> If I get some time this afternoon I'll try to find some links on
> how to write a simple dynamic loader, or some examples.
>
> Peter
>
>> Orthogonal Devices
>> Tokyo, Japan
>> www.orthogonaldevices.com
>>
>> On 27/06/2019 19:06, Peter Smith wrote:
>>> On Thu, 27 Jun 2019 at 10:36, Brian Clarkson via llvm-dev
>>> <llvm-dev at lists.llvm.org> wrote:
>>>> Hi Tim.
>>>>
>>>> Thank you for taking to time to comment on the background!
>>>>
>>>> I will definitely study lldb and remote JIT for ideas. I worry that I
>>>> will not be able to pre-link on the host side because the host cannot(?)
>>>> know the final memory layout of code on the client side, especially when
>>>> there are multiple plugins being loaded in different combinations on the
>>>> host and client. Is that an unfounded worry?
>>>>
>>>> I suppose it is also possible to share relocatable machine code (ELF?)
>>>> and only use client-side embedded LLVM for linking duties? Does that
>>>> simplify things appreciably? I was under the impression that if I can
>>>> compile and embed the LLVM linker then embedding LLVM's codegen
>>>> libraries would not be much extra work. Then I can allow users to use
>>>> Faust (or any other frontend) to generate bytecode in addition to my
>>>> "live coding" desktop application. So many variables to consider... :-)
>>>>
>>> It is possible to build position-independent code on the host and
>>> run it on the device without needing the full complexity of a SysV
>>> dynamic linker. As you say there are many different options depending
>>> on how much your plugins need to communicate with the main program, or
>>> each other, and how sophisticated a plugin loader you are comfortable
>>> writing. There is probably much more information available online
>>> about how to do that than embedding LLVM.
>>>
>>> One possible approach is to build your plugins on the host as some kind
>>> of position independent ELF executable. Your program on the device
>>> could extract the loadable parts of the ELF, copy them to memory,
>>> resolve potential fixups (relocations in ELF) and branch to the
>>> entry point. In general ELF isn't compact enough for embedded systems
>>> and it is common to post-process it into some more easily processed
>>> form first.
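(As a note to myself, a very rough sketch of that extract/copy/fix-up/branch
flow, assuming a 32-bit ARM plugin linked as a self-contained
position-independent image with no imports. The function name, the
destination "RWX region" and the use of <elf.h> are all illustrative; on
bare metal the ELF structures would have to be defined by hand, and the
actual relocation walk is left as a placeholder because the relocation
types depend entirely on how the plugin was built.)

    #include <cstdint>
    #include <cstring>
    #include <elf.h>   // Elf32_Ehdr/Elf32_Phdr; provide these yourself on bare metal

    using EntryFn = void (*)();

    // Copy the PT_LOAD segments of `image` into `dest` and return the entry point.
    EntryFn load_plugin(const uint8_t *image, uint8_t *dest) {
      auto *eh = reinterpret_cast<const Elf32_Ehdr *>(image);
      auto *ph = reinterpret_cast<const Elf32_Phdr *>(image + eh->e_phoff);

      for (unsigned i = 0; i < eh->e_phnum; ++i) {
        if (ph[i].p_type != PT_LOAD)
          continue;
        // Copy the file-backed part of the segment, zero the remainder (.bss).
        std::memcpy(dest + ph[i].p_vaddr, image + ph[i].p_offset, ph[i].p_filesz);
        std::memset(dest + ph[i].p_vaddr + ph[i].p_filesz, 0,
                    ph[i].p_memsz - ph[i].p_filesz);
      }

      // Fix-ups: for a fully self-contained PIC image this reduces to walking
      // the relocation table and adding the load base (dest) to each target
      // word; which table and which relocation types apply depends on the
      // toolchain, so it is only sketched here:
      // for (each relocation r)
      //   *reinterpret_cast<uint32_t *>(dest + r.offset) += uint32_t(uintptr_t(dest));

      // On a real ARM target, clean the data cache and invalidate the
      // instruction cache for this range before branching.
      return reinterpret_cast<EntryFn>(dest + eh->e_entry);
    }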
>>>
>>>> Kind regards
>>>> Brian Clarkson
>>>>
>>>>
>>>> Orthogonal Devices
>>>> Tokyo, Japan
>>>> www.orthogonaldevices.com
>>>>
>>>> On 27/06/2019 18:17, Tim Northover wrote:
>>>>> Hi Brian,
>>>>>
>>>>> I'm afraid I can't answer your actual questions, but do have a couple
>>>>> of comments on the background...
>>>>>
>>>>> On Thu, 27 Jun 2019 at 09:50, Brian Clarkson via llvm-dev
>>>>> <llvm-dev at lists.llvm.org> wrote:
>>>>>> LLVM would be responsible for generating
>>>>>> (optimized, and especially vectorized for NEON) machine code directly on
>>>>>> the embedded device and it would take care of the relocation and
>>>>>> run-time linking duties.
>>>>> That's a much smaller task than what you'd get from embedding all of
>>>>> LLVM. lldb is probably an example of a program with a similar problem
>>>>> to yours, and it gets by with just a pretty small stub of a
>>>>> "debugserver" on a device. It does all CodeGen and even prelinking on
>>>>> the host side, and then transfers binary data across.
>>>>>
>>>>> The concept is called "remote JIT" in the LLVM codebase if you want to
>>>>> research it more.
>>>>>
>>>>> I think the main advantage you'd get from embedding LLVM itself over a
>>>>> scheme like that would be a certain resilience to updating the RTOS on
>>>>> the device (it would cope with a function sliding around in memory
>>>>> even if the host is no longer available to recompile), but I bet there
>>>>> are simpler ways to do that. The API surface you need to control is
>>>>> probably pretty small.
>>>>>
>>>>>> Sharing code in the form of LLVM bytecode
>>>>>> also seems to sidestep the complex task of setting up a cross-compiling
>>>>>> toolchain which is something that I would prefer not to have to force my
>>>>>> users to do.
>>>>> If you can produce bitcode on the host, you can produce an ARM binary
>>>>> without forcing the users to install extra stuff. The work involved
>>>>> would be pretty comparable to what you'd have to do on the RTOS side
>>>>> anyway (you're unlikely to be running GNU ld against system libraries
>>>>> on the RTOS), and made slightly easier by the host being more of a
>>>>> "normal" LLVM environment.
>>>>>
>>>>> Cheers.
>>>>>
>>>>> Tim.
>>>>
>>>> _______________________________________________
>>>> LLVM Developers mailing list
>>>> llvm-dev at lists.llvm.org
>>>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev