[Lldb-commits] [PATCH] D87868: [RFC] When calling the process mmap try to call all found instead of just the first one

Greg Clayton via Phabricator via lldb-commits lldb-commits at lists.llvm.org
Wed Sep 30 14:08:13 PDT 2020

clayborg added a comment.

In D87868#2303398 <https://reviews.llvm.org/D87868#2303398>, @labath wrote:

> A completely different approach would be to avoid the mmap function completely, and go for the mmap syscall instead.
> That is, instead of setting up registers for a fake call to mmap and running to a breakpoint at the entry point, we could set them up for a fake syscall, and then do an instruction-step over a syscall instruction (which we could place at the entry point, or find a suitable one in the binary).
> The advantage of that would be that this would work not only in this (sanitizer) case, but also in all other cases where an mmap symbol is not present/functional/unambiguous:
> - a bare-bone statically linked binary need not contain an mmap function
> - very early in the program startup (before relocations are applied) it may not be safe to call the global mmap
> - mmap may be buggy (libc debugging?)

That is possible, though how do we reliably determine the syscall number for mmap on a given system? And what if we are remote debugging? It seems like we would be signing up for architecture-specific syscall handling on every architecture. I am not sure I would go that route, but I am open to seeing a solution before passing judgment.
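To illustrate the concern about syscall enumeration, here is a minimal sketch (not lldb code) of why a generic syscall-based mmap would need per-architecture tables. The names and helper function are hypothetical; the numbers come from the Linux syscall tables:

```python
# Illustrative only: the mmap syscall number (and even which mmap variant
# exists) varies by architecture and ABI, which is what makes a
# syscall-based approach hard to implement generically.
MMAP_SYSCALL_NUMBERS = {
    "x86_64":  ("mmap", 9),
    "i386":    ("mmap2", 192),  # 32-bit x86 uses mmap2; old mmap (90) takes a struct
    "arm":     ("mmap2", 192),  # EABI; mmap2 takes the offset in 4 KiB pages
    "aarch64": ("mmap", 222),
}

def mmap_syscall_for(triple: str):
    """Hypothetical lookup: (syscall name, number) for a target triple, or None."""
    arch = triple.split("-")[0]
    return MMAP_SYSCALL_NUMBERS.get(arch)
```

Every new target would need another entry here (and a matching ABI description of which registers carry the syscall number and arguments), which is the maintenance cost being weighed against the current symbol-based approach.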

> Note that this would not need to be implemented in the lldb client. This sort of thing would be natural to implement in lldb server in response to the `_M` packet. There it would be easy to encode the abi details needed to issue a syscall. The client already prefers this packet, and the existing code could remain as a fallback for platforms not implementing it.

There is no code currently that does any run control inside lldb-server, and I would like to keep it that way if possible. debugserver can do this only because it has the task port for the program being debugged, so it can call a function in the debugserver process that performs the memory allocation in the debuggee. Doing this in lldb-server would require lldb-server to perform run control itself: changing register values, running the syscall, stopping at a breakpoint after the syscall has run, and then removing that breakpoint (only if it didn't already exist). That seems like a lot of dangerous flow control where we won't be able to see what happened if anything goes wrong. Right now, since everything goes through the GDB remote protocol, we can see exactly how things go wrong in the packet log; if this moved into lldb-server, a failure would be a mystery.
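The run-control sequence being objected to can be sketched roughly as follows. This is a hypothetical Python mock, not lldb-server code: `FakeThread` stands in for a debuggee thread (real code would go through ptrace and a real register context), and it follows the instruction-step variant from the proposal, using the x86_64 syscall convention as an assumed example:

```python
class FakeThread:
    """Stand-in for a debuggee thread; pretends a 'syscall' instruction sits at pc."""
    def __init__(self):
        self.regs = {"pc": 0x1000, "rax": 0, "rdi": 0, "rsi": 0}

    def read_registers(self):
        return dict(self.regs)

    def write_registers(self, regs):
        self.regs.update(regs)

    def single_step(self):
        # Simulate executing the syscall: a mapped address lands in rax and
        # pc advances past the 2-byte syscall instruction.
        self.regs["rax"] = 0x7F0000000000
        self.regs["pc"] += 2

def inject_mmap(thread, length):
    """Sketch of syscall injection: save state, run one mmap syscall, restore state."""
    saved = thread.read_registers()          # 1. save the full register state
    thread.write_registers({
        "rax": 9,           # __NR_mmap on x86_64 (architecture-specific)
        "rdi": 0,           # addr = NULL
        "rsi": length,      # length
        "pc": saved["pc"],  # pc must point at a 'syscall' instruction
    })
    thread.single_step()                     # 2. step over the syscall
    result = thread.read_registers()["rax"]  # 3. read the returned address
    thread.write_registers(saved)            # 4. restore the original state
    return result
```

Each of the four numbered steps is a place where a failure (a signal arriving mid-step, a register restore going wrong) would be invisible in the packet log, which is the crux of the objection above.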

