[llvm-dev] Linking Linux kernel with LLD

Sean Silva via llvm-dev llvm-dev at lists.llvm.org
Sat Feb 18 01:03:49 PST 2017

On Fri, Feb 17, 2017 at 8:31 AM, George Rimar <grimar at accesssoftek.com>

> >>That boot_params.hdr.code32_start field is probably either invalid (bad
> reloc or something else causing the bootloader to >>calculate the wrong
> address) or valid but the thing it thinks it is pointing to wasn't loaded
> (missing PT_LOAD etc.).
> >boot_params.hdr.code32_start field is valid :) It is 0x100000, as
> expected.
> >
> >Then I suspect that that segment isn't being loaded. Is there a PT_LOAD
> that covers that address? Is the bootloader loading it?
> That issue is gone. Not sure what changed, but it looks like something was
> fixed in LLD during the last week.
> The latest status of booting the Linux kernel is as follows:
> At this location,
> https://github.com/torvalds/linux/blob/5924bbecd0267d87c24110cbe2041b
> 5075173a25/arch/x86/boot/compressed/head_64.S#L424
> the kernel calls extract_kernel(), and two lines below that it tries to jmp
> to the address of the decompressed kernel, which fails for me.
> The extract_kernel() function is here:
> https://github.com/torvalds/linux/blob/5924bbecd0267d87c24110cbe2041b
> 5075173a25/arch/x86/boot/compressed/misc.c#L334
> I added the following lines before the return:
> __putstr("hi from extract_kernel");
> if ((int) output == 0x1000000)
>   __putstr("== 0x1000000");
> output[0] = 0xEB;
> output[1] = 0xFE;
> return output;
> During boot it shows all the text from above and enters the infinite loop
> as expected. So that means it successfully jumps to 0x1000000, but it
> looks like something is wrong in the decompressed code. The next
> destination point should be startup_64.
> https://github.com/torvalds/linux/blob/5924bbecd0267d87c24110cbe2041b
> 5075173a25/arch/x86/kernel/head_64.S#L50
> Though as I mentioned it does not reach it for me.
> The next step I am probably going to take is to dump/printf-trace that
> memory area of the decompressed kernel and compare it with what is
> produced there when BFD is used. I have no better ideas right now.

Based on your other post and some googling, it seems like there are two
separate kernel binaries. One is a "loader" binary that does the
decompression and some simple dynamic loader/linker type behavior; the
other is the "real" kernel binary.

It seems based on your description that the "loader" binary is running
fine. The bug is not likely to be corrupted data in the decompressed output
(the decompression is just a call into a gzip routine or something). You
shouldn't have to dump/printf-trace from memory during boot to see that
data, since the "real" kernel binary that is being decompressed into that
memory region is probably already somewhere in your build tree
(arch/x86/boot/compressed/Makefile seems like it has the details; I think
it is the file called `vmlinux`; also see this article with more info about
all the pieces: https://en.wikipedia.org/wiki/Vmlinux). It should be
possible to do some static analysis (ELF header, phdrs, and
symbols/relocations) on the "real" kernel binary directly.
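As a starting point for that static analysis, here is a minimal sketch of the kind of check I mean (a hypothetical standalone helper, not anything from the kernel tree; it assumes a 64-bit ELF and the host's <elf.h>): given the ELF header and program headers read out of `vmlinux`, it verifies that the entry point (which should be startup_64) is actually covered by some PT_LOAD.

```c
/* Hypothetical standalone checker, not part of the kernel build: given
 * the ehdr and phdrs of the "real" kernel binary, print the PT_LOAD
 * segments and verify that e_entry falls inside one of them. */
#include <elf.h>
#include <stdio.h>

int entry_is_loaded(const Elf64_Ehdr *eh, const Elf64_Phdr *ph)
{
    for (int i = 0; i < eh->e_phnum; i++) {
        if (ph[i].p_type != PT_LOAD)
            continue;
        printf("PT_LOAD paddr=%#llx vaddr=%#llx filesz=%#llx memsz=%#llx\n",
               (unsigned long long)ph[i].p_paddr,
               (unsigned long long)ph[i].p_vaddr,
               (unsigned long long)ph[i].p_filesz,
               (unsigned long long)ph[i].p_memsz);
        if (eh->e_entry >= ph[i].p_vaddr &&
            eh->e_entry < ph[i].p_vaddr + ph[i].p_memsz)
            return 1; /* entry point is inside this segment */
    }
    return 0; /* e_entry not covered by any PT_LOAD */
}
```

Feed it headers fread/mmap'd from the LLD-linked and BFD-linked binaries; a differing result between the two would point straight at the linker.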

One place to start doing dynamic analysis is these two lines:

parse_elf(output);
handle_relocations(output, output_len, virt_addr);

parse_elf() seems to implement a bare-bones loader:

for (i = 0; i < ehdr.e_phnum; i++) {
	phdr = &phdrs[i];

	switch (phdr->p_type) {
	case PT_LOAD:
#ifdef CONFIG_RELOCATABLE
		dest = output;
		dest += (phdr->p_paddr - LOAD_PHYSICAL_ADDR);
#else
		dest = (void *)(phdr->p_paddr);
#endif
		memmove(dest, output + phdr->p_offset, phdr->p_filesz);
		break;
	default: /* Ignore other PT_* */
		break;
	}
}

Based on the fact that you are hitting the EB FE at the load address, this
seems like it is working fine (though if there are multiple PT_LOADs, there
may still be issues lurking with the other ones).
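To make that address arithmetic concrete, here is a self-contained sketch (hypothetical, not kernel code) of the CONFIG_RELOCATABLE branch of that copy loop, run over a synthetic image instead of a real vmlinux; `struct seg` stands in for the Elf64_Phdr fields the loop actually uses.

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for the Elf64_Phdr fields the copy loop uses. */
struct seg {
    uint64_t p_paddr;   /* physical load address from the phdr */
    uint64_t p_offset;  /* file offset of the segment data */
    uint64_t p_filesz;  /* bytes to copy */
};

/* Mimics the CONFIG_RELOCATABLE branch of parse_elf(): each PT_LOAD
 * lands at output + (p_paddr - LOAD_PHYSICAL_ADDR). Here `image` is a
 * separate buffer; in the kernel the source and destination overlap,
 * which is why the real code uses memmove as well. */
void copy_segments(uint8_t *output, const uint8_t *image,
                   const struct seg *segs, int nsegs, uint64_t load_phys)
{
    for (int i = 0; i < nsegs; i++) {
        uint8_t *dest = output + (segs[i].p_paddr - load_phys);
        memmove(dest, image + segs[i].p_offset, segs[i].p_filesz);
    }
}
```

With load_phys = 0x1000000 (the usual LOAD_PHYSICAL_ADDR), a segment with p_paddr = 0x1000000 lands exactly at `output`, which matches the EB FE experiment above.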


As for handle_relocations(), this one looks likely to be where the issues are manifesting.

First off, it seems like they have some sort of homegrown relocation scheme:

* Process relocations: 32 bit relocations first then 64 bit after.
* Three sets of binary relocations are added to the end of the kernel
* before compression. Each relocation table entry is the kernel
* address of the location which needs to be updated stored as a
* 32-bit value which is sign extended to 64 bits.
* Format is:
* kernel bits...
* 0 - zero terminator for 64 bit relocations
* 64 bit relocation repeated
* 0 - zero terminator for inverse 32 bit relocations
* 32 bit inverse relocation repeated
* 0 - zero terminator for 32 bit relocations
* 32 bit relocation repeated
* So we work backwards from the end of the decompressed image.
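A rough sketch of what walking one of those tables looks like (with simplifying assumptions: only the plain 32-bit set, entries stored as offsets into the image rather than kernel virtual addresses, little-endian host; the real handle_relocations() also processes the inverse 32-bit and 64-bit sets and derives the delta from virt_addr):

```c
#include <stdint.h>
#include <string.h>

/* Walks the 32-bit relocation entries backwards from the end of the
 * image, as the comment above describes, and adds `delta` to each
 * 32-bit value an entry points at. Returns the number of entries
 * applied. Simplified/hypothetical: entries here are plain offsets
 * into `image`, whereas the real code stores kernel addresses and
 * maps them into the output buffer first. */
int apply_32bit_relocs(uint8_t *image, size_t size, uint32_t delta)
{
    int count = 0;
    size_t pos = size;

    for (;;) {
        uint32_t off, val;

        pos -= sizeof(uint32_t);
        memcpy(&off, image + pos, sizeof(off)); /* next entry, back to front */
        if (off == 0)                           /* zero terminator: done */
            break;

        memcpy(&val, image + off, sizeof(val));
        val += delta;                           /* relocate by the load delta */
        memcpy(image + off, &val, sizeof(val));
        count++;
    }
    return count;
}
```

If the `relocs` tool recorded a wrong address for even one entry of an LLD-linked image, the patch would land at the wrong offset, which is exactly the kind of silent corruption that would make the jump into the decompressed kernel go off into the weeds.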

Grepping around, it seems like they build this list of relocations based on
some sort of homegrown tooling in arch/x86/tools/. E.g. look at
arch/x86/tools/relocs.c. The main exported function from there (modulo
being compiled twice in 32/64 mode) is:
#if ELF_BITS == 64
# define process process_64
#else
# define process process_32
#endif

void process(FILE *fp, int use_real_mode, int as_text,
	     int show_absolute_syms, int show_absolute_relocs,
	     int show_reloc_info)
{
	...
	if (ELF_BITS == 64)
		percpu_init();
	if (show_absolute_syms) { ... }
	if (show_absolute_relocs) { ... }
	if (show_reloc_info) { ... }
	emit_relocs(as_text, use_real_mode);
}

There may be something that this program doesn't like about the LLD-linked
"real" kernel executable. If regular static analysis of the "real" kernel
binary doesn't reveal anything wrong (I'm guessing it will, though, and it
is probably the best first thing to look at), you can try tracing the
execution of this program on the LLD-linked executable and on the BFD/gold
executable and diffing the results. arch/x86/boot/compressed/Makefile seems
to have some details about the invocation of this `relocs` program.

Also, down the road, once we have your current configuration working, it is
probably worth toggling each of these kconfig options, as they may tickle
different LLD issues (this is just from a quick glance at the ifdefs in the
compressed boot loader code):

-- Sean Silva

> George.