[Lldb-commits] [PATCH] D42145: [lldb] Use vFlash commands when writing to target's flash memory regions

Pavel Labath via Phabricator via lldb-commits lldb-commits at lists.llvm.org
Wed Feb 28 11:53:45 PST 2018


labath added inline comments.


================
Comment at: lldb/trunk/source/Plugins/ObjectFile/ELF/ObjectFileELF.cpp:814
 
           value = value - header->p_vaddr;
           found_offset = true;
----------------
owenpshaw wrote:
> labath wrote:
> > Ok so the issue is that here we use the virtual address to compute the load bias, but at line 830 we add the bias to the physical address. This breaks as soon as these two addresses start to differ.
> > 
> > Changing this to use the physical address as well fixes things, but as I said before, I'm not sure we should be using physical addresses here.
> I don't know if there's another use case besides flash load that should definitely use the physical address, so I can't be much help answering that question.  I was mainly relying on tests to catch any issue caused by using physical addresses.
> 
> Could the value_is_offset flag be a tell here?  Looks like that load bias is only calculated if value_is_offset == false.  Would it make sense to always use virtual address in such a case?  It wouldn't affect the "target modules load" case, which always sets value_is_offset to true.
> I don't know if there's another use case besides flash load that should definitely use the physical address, so I can't be much help answering that question. I was mainly relying on tests to catch any issue caused by using physical addresses.

Yeah, unfortunately we don't have tests which would catch borderline conditions like that, and as you've said, most elf files have these the same, so you couldn't have noticed it. The only reason I noticed this is that one of the android devices we test on has a kernel whose vdso has different physical and virtual addresses.

The good news is that with your test suite, we should be able to write a test for it, when we figure out what the behavior should be here.

> Could the value_is_offset flag be a tell here? Looks like that load bias is only calculated if value_is_offset == false. Would it make sense to always use virtual address in such a case? It wouldn't affect the "target modules load" case, which always sets value_is_offset to true.

The is_offset flag is orthogonal to this. It tells you whether the value represents a bias or an absolute value. When we read the module list from the linker rendezvous structure, we get a load bias; when we read it from /proc/pid/maps, we get an absolute value. It does *not* tell you what the value is relative to. So while doing that like you suggest would fix things, I don't think it's the right thing to do.

In fact, the more I think about this, the more I'm convinced using physical addresses is not the right solution here.  The main user of this function is the dynamic linker plugin, which uses it to notify the debugger about modules that have already been loaded into the process. That definitely sounds like a virtual address thing. Also, if you look at the documentation of the "target modules load --slide", it explicitly states that the slide should be relative to the virtual address.

And I'm not even completely clear about your case. I understand you write the module to the physical address, but what happens when you actually get around to debugging it? Is it still there, or does it get copied/mapped/whatever to the virtual address?

So it seems to me the physical addresses should really be specific to the "target modules load --load" case. I've never used that command so I don't know if it can just use physical addresses unconditionally, or whether we need an extra option for that "target modules load --load --physical". I think I'll leave that decision up to you (or anyone else who has an opinion on this). But for the other use cases, I believe we should keep using virtual addresses.

So I think we should revert this and then split out the loading part of this patch from the vFlash thingy so we can focus on the virtual/physical address situation and how to handle that.


Repository:
  rL LLVM

https://reviews.llvm.org/D42145




