[llvm-dev] LLVM on bare-metal

Brian Clarkson via llvm-dev llvm-dev at lists.llvm.org
Thu Jun 27 03:16:11 PDT 2019


Would you say that embedding the LLVM linker is a practical way to get 
the required dynamic linking capabilities on the bare-metal side?

Orthogonal Devices
Tokyo, Japan
www.orthogonaldevices.com

On 27/06/2019 19:06, Peter Smith wrote:
> On Thu, 27 Jun 2019 at 10:36, Brian Clarkson via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
>> Hi Tim.
>>
>> Thank you for taking the time to comment on the background!
>>
>> I will definitely study lldb and remote JIT for ideas.  I worry that I
>> will not be able to pre-link on the host side because the host cannot(?)
>> know the final memory layout of code on the client side, especially when
>> there are multiple plugins being loaded in different combinations on the
>> host and client.  Is that an unfounded worry?
>>
>> I suppose it is also possible to share relocatable machine code (ELF?)
>> and only use client-side embedded LLVM for linking duties? Does that
>> simplify things appreciably?  I was under the impression that if I can
>> compile and embed the LLVM linker then embedding LLVM's codegen
>> libraries would not be much extra work.  Then I can allow users to use
>> Faust (or any other frontend) to generate bitcode in addition to my
>> "live coding" desktop application.  So many variables to consider... :-)
>>
> It is possible to build position-independent code on the host and
> run it on the device without needing the full complexity of a SysV
> dynamic linker. As you say, there are many different options depending
> on how much your plugins need to communicate with the main program, or
> each other, and how sophisticated a plugin loader you are comfortable
> writing. There is probably much more information available online
> about how to do that than embedding LLVM.
>
> One possible approach is to build your plugins on the host as some
> kind of position-independent ELF executable. Your program on the
> device could extract the loadable parts of the ELF, copy them to
> memory, resolve any fixups (relocations in ELF) and branch to the
> entry point. In general ELF isn't compact enough for embedded
> systems, and it is common to post-process it into some more easily
> processed form first.
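>
> A minimal sketch of that device-side loader, assuming the plugin has
> been linked on the host as a self-contained PIE (e.g. -fPIC plus
> ld.lld -pie --no-dynamic-linker) so that the only fixups left to
> resolve are R_ARM_RELATIVE, and assuming the relocation table has
> already been located (via PT_DYNAMIC or host-side post-processing);
> cache maintenance and error handling are left out:
>
>   #include <stdint.h>
>   #include <string.h>
>
>   /* Just enough of the ELF32 structures for a loader sketch. */
>   typedef struct { uint8_t e_ident[16]; uint16_t e_type, e_machine;
>                    uint32_t e_version, e_entry, e_phoff, e_shoff, e_flags;
>                    uint16_t e_ehsize, e_phentsize, e_phnum, e_shentsize,
>                             e_shnum, e_shstrndx; } Elf32_Ehdr;
>   typedef struct { uint32_t p_type, p_offset, p_vaddr, p_paddr,
>                             p_filesz, p_memsz, p_flags, p_align; } Elf32_Phdr;
>   typedef struct { uint32_t r_offset, r_info; } Elf32_Rel;
>   #define PT_LOAD        1
>   #define R_ARM_RELATIVE 23
>
>   typedef void (*plugin_entry_t)(void);
>
>   /* image: the raw ELF file; dest: a RAM region the RTOS sets aside.
>      Assumes the PIE was linked at base 0, so p_vaddr and e_entry are
>      simply offsets from dest. */
>   plugin_entry_t load_plugin(const uint8_t *image, uint8_t *dest,
>                              const Elf32_Rel *rel, uint32_t nrel) {
>     const Elf32_Ehdr *eh = (const Elf32_Ehdr *)image;
>     const Elf32_Phdr *ph = (const Elf32_Phdr *)(image + eh->e_phoff);
>
>     /* Copy the PT_LOAD segments into place and zero the BSS tail. */
>     for (uint16_t i = 0; i < eh->e_phnum; ++i) {
>       if (ph[i].p_type != PT_LOAD)
>         continue;
>       memcpy(dest + ph[i].p_vaddr, image + ph[i].p_offset, ph[i].p_filesz);
>       memset(dest + ph[i].p_vaddr + ph[i].p_filesz, 0,
>              ph[i].p_memsz - ph[i].p_filesz);
>     }
>
>     /* R_ARM_RELATIVE: add the load address to each fixup slot. */
>     for (uint32_t i = 0; i < nrel; ++i) {
>       if ((rel[i].r_info & 0xff) != R_ARM_RELATIVE)
>         continue;
>       uint32_t *slot = (uint32_t *)(dest + rel[i].r_offset);
>       *slot += (uint32_t)(uintptr_t)dest;
>     }
>
>     return (plugin_entry_t)(dest + eh->e_entry);
>   }
>
> Anything beyond that (calls from the plugin into the main program, or
> between plugins) would need something on top, e.g. a table of function
> pointers handed to the entry point.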
>
>> Kind regards
>> Brian Clarkson
>>
>>
>> Orthogonal Devices
>> Tokyo, Japan
>> www.orthogonaldevices.com
>>
>> On 27/06/2019 18:17, Tim Northover wrote:
>>> Hi Brian,
>>>
>>> I'm afraid I can't answer your actual questions, but do have a couple
>>> of comments on the background...
>>>
>>> On Thu, 27 Jun 2019 at 09:50, Brian Clarkson via llvm-dev
>>> <llvm-dev at lists.llvm.org> wrote:
>>>> LLVM would be responsible for generating
>>>> (optimized, and especially vectorized for NEON) machine code directly on
>>>> the embedded device and it would take care of the relocation and
>>>> run-time linking duties.
>>> That's a much smaller task than what you'd get from embedding all of
>>> LLVM. lldb is probably an example of a program with a similar problem
>>> to yours, and it gets by with just a pretty small stub of a
>>> "debugserver" on a device. It does all CodeGen and even prelinking on
>>> the host side, and then transfers binary data across.
>>>
>>> The concept is called "remote JIT" in the LLVM codebase if you want to
>>> research it more.
>>>
>>> I think the main advantage you'd get from embedding LLVM itself over a
>>> scheme like that would be a certain resilience to updating the RTOS on
>>> the device (it would cope with a function sliding around in memory
>>> even if the host is no longer available to recompile), but I bet there
>>> are simpler ways to do that. The API surface you need to control is
>>> probably pretty small.
>>>
>>>> Sharing code in the form of LLVM bitcode
>>>> also seems to sidestep the complex task of setting up a cross-compiling
>>>> toolchain which is something that I would prefer not to have to force my
>>>> users to do.
>>> If you can produce bitcode on the host, you can produce an ARM binary
>>> without forcing the users to install extra stuff. The work involved
>>> would be pretty comparable to what you'd have to do on the RTOS side
>>> anyway (you're unlikely to be running GNU ld against system libraries
>>> on the RTOS), and made slightly easier by the host being more of a
>>> "normal" LLVM environment.
>>>
>>> Cheers.
>>>
>>> Tim.
>>


