[Lldb-commits] [PATCH] D147462: Use kernel's global variable indicating how many bits are used in addressing when loading Darwin xnu kernel
Jason Molenda via Phabricator via lldb-commits
lldb-commits at lists.llvm.org
Mon Apr 3 13:01:12 PDT 2023
jasonmolenda added inline comments.
================
Comment at: lldb/source/Plugins/DynamicLoader/Darwin-Kernel/DynamicLoaderDarwinKernel.cpp:1093
+ // don't mark those as addressable.
+ m_process->SetVirtualAddressableBits((wordsize * 8) - 9);
+ addr_t sym_addr = symbol->GetLoadAddress(&m_process->GetTarget());
----------------
JDevlieghere wrote:
> Instead of changing the addressable bits for the process, should we modify `GetLoadAddress` to explicitly skip the stripping instead? This is probably purely theoretical, but what if another (host) thread tried to read memory in the meantime? Changing global state like this can lead to subtle bugs.
Yes, I thought about touching the global state. At this point we've just attached to the remote device / corefile and are loading the kernel binary; there aren't other threads operating on this at this time. We're also in a state where the number of addressable bits may default to a correct value, but is just as likely to be incorrect.
As for a flag to do this, it's a bit tricky! It's actually the Target::ReadUnsignedIntegerFromMemory() call, which takes an Address object, that ends up mutating the address while constructing the load address. We could pipe a flag for ReadUnsignedIntegerFromMemory down a few layers to where that happens, or add a flag to the Address object indicating that it should not be mutated down the line. Target::ReadMemory needs to take an Address object so it can fall back to the backing file if possible (important if the corefile doesn't include the binary contents); another alternative is to switch to Process::ReadMemory, which takes an `addr_t` but won't fall back to the on-disk file.
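To make that last alternative concrete, a rough sketch only (not the actual patch; `symbol`, `wordsize`, and the error handling are assumed from the hunk above, and the kernel global is assumed to be a pointer-sized integer):

```
// Read the kernel's addressing-bits global without going through an Address
// object, so nothing strips/mutates the address along the way.  Trade-off:
// Process::ReadMemory only reads live/corefile memory and won't fall back to
// the on-disk binary if the corefile omits these bytes.
addr_t sym_load_addr = symbol->GetLoadAddress(&m_process->GetTarget());
uint64_t addressing_bits = 0;
if (sym_load_addr != LLDB_INVALID_ADDRESS) {
  Status error;
  if (m_process->ReadMemory(sym_load_addr, &addressing_bits, wordsize,
                            error) == wordsize &&
      error.Success() && addressing_bits != 0)
    m_process->SetVirtualAddressableBits(addressing_bits);
}
```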
================
Comment at: lldb/source/Plugins/DynamicLoader/Darwin-Kernel/DynamicLoaderDarwinKernel.cpp:1114
+ } else {
+ if (old_bits_value != 0)
+ m_process->SetVirtualAddressableBits(old_bits_value);
----------------
JDevlieghere wrote:
> Is this actually necessary? Does that mean that you can't do `process->SetVirtualAddressableBits(process->GetVirtualAddressableBits())` in general?
I didn't debug it through the layers, but restoring the value when the getter had returned 0 (i.e. passing 0 back to the setter) did break this.
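For context, the guard in the hunk above fits into a save/restore pattern roughly like this (a sketch of the assumed semantics, where 0 effectively means "unset"; the read in the middle is elided):

```
// Save whatever was in effect before temporarily widening the mask.
uint32_t old_bits_value = m_process->GetVirtualAddressableBits();
m_process->SetVirtualAddressableBits((wordsize * 8) - 9);
// ... read the kernel's addressing-bits global ...
// On failure, only restore a value that had actually been set; feeding the
// getter's 0 ("unset") back into the setter was observed to break stripping.
if (old_bits_value != 0)
  m_process->SetVirtualAddressableBits(old_bits_value);
```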
Repository:
rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D147462/new/
https://reviews.llvm.org/D147462