[lldb-dev] Listing memory regions in lldb

Howard Hellyer via lldb-dev lldb-dev at lists.llvm.org
Mon May 16 04:00:32 PDT 2016


Apologies for the confusion, I don't mean either of those.
 
This came up while working on a debug tool for the IBM JVM. When we added the 
MiniDumpWithFullMemoryInfo flag to our calls to MiniDumpWriteDump we found it 
inserted blank ranges to keep the regions in the MINIDUMP_MEMORY_INFO_LIST
structure contiguous. I checked a dump just now and it has an 8 TB region 
with no access flags inserted in the middle, probably because of a gap 
in the addresses the libraries were loaded at, but it hasn't added an 
enormous one at the end.
 
This is what I see (from a random dump of no special importance!):
0x00000000ffef0000 0x00000000fffeffff 0x0000000000100000 (1,048,576)         RW
0x00000000ffff0000 0x00000000ffffffff 0x0000000000010000 (65,536)
0x0000000100000000 0x000007ff7713ffff 0x000007fe77140000 (8,789,500,887,040)
0x000007ff77140000 0x000007ff7714bfff 0x000000000000c000 (49,152)            R

I guess I was really trying to ask whether the LLDB API would report a region 
like that or not. If we use the SBMemoryRegionInfo iteration model it won't 
matter much: there would be an equivalent blank region there anyway, so it 
doesn't really make a difference.
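 
For what it's worth, this is roughly how I'd expect to drive that iteration 
model from Python once something like the proposed SBProcess::GetMemoryRegionInfo 
is exposed. None of this exists yet, so the method names (GetRegionBase, 
GetRegionEnd, IsReadable and friends) are only illustrative:

import lldb

def list_regions(process):
    # Walk the whole address space using the proposed call; a gap like the
    # blank 8 TB range above should just come back as a region with
    # read/write/execute all false, so a scanner can skip over it.
    addr = 0
    region = lldb.SBMemoryRegionInfo()
    while process.GetMemoryRegionInfo(addr, region).Success():
        print("[0x%x - 0x%x) r=%d w=%d x=%d" % (
            region.GetRegionBase(), region.GetRegionEnd(),
            region.IsReadable(), region.IsWritable(), region.IsExecutable()))
        if region.GetRegionEnd() <= addr:
            break  # defensive: stop if the region fails to advance
        addr = region.GetRegionEnd()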
 
Howard Hellyer
IBM Runtime Technologies, IBM Systems


Adrian McCarthy <amccarth at google.com> wrote on 13/05/2016 16:57:07:

> From: Adrian McCarthy <amccarth at google.com>
> To: Howard Hellyer/UK/IBM at IBMGB
> Cc: Greg Clayton <gclayton at apple.com>, LLDB <lldb-dev at lists.llvm.org>
> Date: 13/05/2016 16:57
> Subject: Re: [lldb-dev] Listing memory regions in lldb
> 
> Interestingly, when I've worked on Windows core dumps before I've
> seen that MiniDump, with the right flags, will deliberately insert a
> region in the middle of the memory ranges to represent the unmapped
> space; on 64-bit it's quite a large section.
> 
> Are you saying that's a bug in the minidump itself or in LLDB's 
> handling of it?  If the latter, please send me the details (e.g., 
> which flags trigger this behavior), and I'll see what I can do about it.
> 
> Adrian.
> 
> On Fri, May 13, 2016 at 3:35 AM, Howard Hellyer via lldb-dev
> <lldb-dev at lists.llvm.org> wrote:
> I have experimented with the GetMemoryRegionInfo approach on the 
> internal APIs already, it has some positives and negatives. 
> 
> - GetMemoryRegionInfo is unimplemented for Linux and Mac core dumps.
> That's not necessarily a bad thing as it could be implemented the 
> "right" way. (As Jim said GetMemoryRegionInfo would have to return 
> the right thing for an unmapped region.) Interestingly, when I've 
> worked on Windows core dumps before I've seen that MiniDump, with the 
> right flags, will deliberately insert a region in the middle of the 
> memory ranges to represent the unmapped space; on 64-bit it's quite a 
> large section. 
> 
> - Using GetMemoryRegionInfo to iterate might be quite expensive if 
> there are many small memory regions. 
> 
> - One reason I hadn't looked at just exposing an 
> SBGetMemoryRegionInfo is that it wouldn't match a lot of the SB 
> APIs at the moment (for example for Threads and Sections), which 
> tend to work by having GetNumXs() and GetXAtIndex() calls. Iterating 
> with SBGetMemoryRegionInfo would be non-obvious to the user of the 
> API since it doesn't match the convention. It would need to be 
> documented in the SBGetMemoryRegionInfo API. 
> 
> - I've found the only way to reliably filter out inaccessible memory
> is to do a test read and check the error return code; a rough sketch 
> of that check follows this list. I'm pretty sure I've seen some 
> regions that are readable via a memory read but marked read=false, 
> write=false, execute=false. (I can't remember the exact case off the 
> top of my head, but I think it might have been allocated but 
> uncommitted memory or similar.) 
> 
> - Using GetMemoryRegionInfo over a remote connection on Linux and 
> Mac worked well but seemed to coalesce some of the memory regions 
> together. It also only allows read/write/exec attributes to be 
> passed. That's a shame, as a live process can often tell you more 
> about what a region is for. The remote memory map command looks 
> like it sends back XML, so it might be possible to insert custom 
> properties in there to give more information, but I'm not sure how 
> safe it is to do that; I don't know the project well enough to
> understand all the use cases for the protocol. 
> 
> - Extended information from /proc/pid/maps, such as a region being 
> marked [stack], would be lost. All you would get is read/write/exec 
> info. (That said, supporting everything every platform can provide 
> might be a bit much.) 
> 
> - LLDB's ReadMemory implementations seem to return zeros for missing 
> memory that should be accessible. It might be nice for the API to 
> indicate whether the memory is backed by real data or not. (For 
> example, Linux core files can be missing huge pages depending on the 
> setting of /proc/PID/coredump_filter, or files can simply be truncated.) 
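> 
> As a rough sketch of that test-read check (assuming the proposed Python 
> bindings, so treat the names as illustrative only):
> 
> import lldb
> 
> def region_actually_readable(process, region):
>     # Try a one-byte read at the start of the region; some regions report
>     # read/write/execute all false but still read back successfully.
>     error = lldb.SBError()
>     process.ReadMemory(region.GetRegionBase(), 1, error)
>     return error.Success()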
> 
> I could implement the GetMemoryRegionInfo iteration mechanism pretty
> quickly and it would actually fit my purposes as far as providing 
> all the addresses you can sensibly access. 
> 
> I'm quite keen to provide a patch, but I don't want to provide one
> that is at odds with how the rest of LLDB works, or that provides 
> data only useful to me, so I'd like a bit of feedback on what the 
> preferred approach would be. It could be that providing both 
> SBGetMemoryRegionInfo and the SBGetNumMemoryRegions/
> SBGetMemoryRegionAtIndex pair is the right solution. 
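> 
> Purely as an illustration of the second flavour (neither call exists 
> today, the names just mirror the existing GetNumXs()/GetXAtIndex() 
> convention):
> 
> for i in range(process.GetNumMemoryRegions()):          # hypothetical
>     region = process.GetMemoryRegionAtIndex(i)          # hypothetical
>     print("[0x%x - 0x%x)" % (region.GetRegionBase(), region.GetRegionEnd()))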
> 
> Would a patch also need to provide a command to dump this 
> information, as it can be awkward to have data that's only accessible
> via the API? 

> 
> Howard Hellyer
> IBM Runtime Technologies, IBM Systems 
> 
> 
> 
> 
> Greg Clayton <gclayton at apple.com> wrote on 12/05/2016 19:09:49:
> 
> > From: Greg Clayton <gclayton at apple.com> 
> > To: Howard Hellyer/UK/IBM at IBMGB 
> > Cc: lldb-dev at lists.llvm.org 
> > Date: 12/05/2016 19:10 
> > Subject: Re: [lldb-dev] Listing memory regions in lldb 
> > 
> > We have a way internally with:
> > 
> >     virtual Error
> >     lldb_private::Process::GetMemoryRegionInfo (lldb::addr_t load_addr,
> >                                                 MemoryRegionInfo &range_info);
> > 
> > This isn't exposed via the public API in lldb::SBProcess. If you 
> > want, you can expose this. We would need to expose an 
> > SBMemoryRegionInfo and the call could be:
> > 
> > namespace lldb
> > {
> >     class SBProcess
> >     {
> >         SBError GetMemoryRegionInfo (lldb::addr_t load_addr,
> >                                      SBMemoryRegionInfo &range_info);
> >     };
> > }
> > 
> > Then you would call this API with address zero and it would return an
> > SBMemoryRegionInfo with an address range and whether that memory is 
> > read/write/execute. On MacOSX we always have a page zero at address 0 
> > for 64-bit apps, so it would respond with:
> > 
> > [0x0 - 0x100000000) read=false, write=false, execute=false
> > 
> > Then you call the function again with the end address of the
> > previous range.
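> > 
> > In Python that calling pattern might look something like this (a sketch 
> > only, since the SB-level call doesn't exist yet):
> > 
> > region = lldb.SBMemoryRegionInfo()
> > error = process.GetMemoryRegionInfo(0, region)
> > # On 64-bit MacOSX this should report the page-zero range shown above,
> > # [0x0 - 0x100000000) with read/write/execute all false.
> > if error.Success():
> >     # Ask again, starting at the end of the region we just got back.
> >     error = process.GetMemoryRegionInfo(region.GetRegionEnd(), region)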
> > 
> > I would love to see this functionality exported through our public 
> > API. Let me know if you are up for making a patch. If you are, you 
> > might want to quickly read the following web page to see the rules 
> > that we apply to anything going into our public API:
> > 
> > http://lldb.llvm.org/SB-api-coding-rules.html
> > 
> > 
> > Greg
> > 
> > > On May 12, 2016, at 6:20 AM, Howard Hellyer via lldb-dev
> > > <lldb-dev at lists.llvm.org> wrote:
> > > 
> > > I'm working on a plugin for lldb and need to scan the memory of a 
> > > crashed process. Using the API to read chunks of memory and scan 
> > > (via SBProcess::Read*) for what I'm looking for is easy, but I 
> > > haven't been able to find a way to discover which address ranges 
> > > are accessible. The SBProcess::Read* calls will return an error on 
> > > an invalid address, but that's not an efficient way to scan a 
> > > 64-bit address space. 
> > > 
> > > This seems like it blocks simple tasks like scanning memory for 
> > > blocks allocated with a header and footer to track down memory 
> > > leaks, which is crude but traditional, and ought to be pretty quick 
> > > to script via the Python API. 
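> > > 
> > > To sketch what I mean (using only the existing SBProcess::ReadMemory; 
> > > readable_ranges and HEADER_MAGIC here are hypothetical placeholders):
> > > 
> > > error = lldb.SBError()
> > > for start, size in readable_ranges:   # (addr, size) pairs from somewhere
> > >     data = process.ReadMemory(start, size, error)  # whole range, for brevity
> > >     if error.Success():
> > >         offset = data.find(HEADER_MAGIC)
> > >         if offset != -1:
> > >             print("candidate block header at 0x%x" % (start + offset))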
> > > 
> > > At the moment I've resorted to running a python script prior to 
> > > launching my plugin that takes the output of "readelf --segments", 
> > > /proc/<pid>/maps or "otool -l", but this isn't ideal. On the 
> > > assumption that I'm not missing something huge, I've looked at 
> > > whether it is possible to extend LLDB to provide this functionality, 
> > > and it seems possible: there are already checks protecting calls to 
> > > read memory that use the data that would need to be exposed. I'm 
> > > working on a prototype implementation which I'd like to deliver back 
> > > at some stage, but before I go too far, does this sound like a good 
> > > idea? 
> > > Howard Hellyer
> > > IBM Runtime Technologies, IBM Systems   
> > > 
> > > 
> > 
> 
> _______________________________________________
> lldb-dev mailing list
> lldb-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

