[Lldb-commits] [PATCH] D134906: Have MemoryCache cache addresses that were unreadable, so duplicate read attempts can be suppressed

Jason Molenda via Phabricator via lldb-commits lldb-commits at lists.llvm.org
Tue Oct 4 15:17:24 PDT 2022


jasonmolenda added a comment.

Jim & David, thanks so much for the comments.  The most important correctness question, which David asks, is whether we might try to read 32KB from an address and mark that address as inaccessible because a page 24KB into the read failed (for instance).  I tried to be conservative and only mark an address as inaccessible when we were able to read 0 bytes from it; ReadMemory returns partial reads, so a partially successful read is never recorded as a failure.
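A minimal sketch of that rule, in the spirit of the patch (the container name here is hypothetical, not the patch's actual API):

  // Only record addr as unreadable when not even the first byte could
  // be read.  ReadMemoryFromInferior returns the number of bytes it
  // actually read, so partial reads are distinguishable from total
  // failures.
  const size_t bytes_read = ReadMemoryFromInferior(addr, buf, size, error);
  if (bytes_read == 0 && error.Fail()) {
    // Nothing at addr was readable; safe to cache the failure.
    m_unreadable_addresses.insert(addr);
  }
  // A partial read (0 < bytes_read < size) is deliberately not cached:
  // the failure lies somewhere past addr, not at addr itself.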

I think adding a little hint to the error message, "(cached)", would make it clearer to an lldb maintainer what happened without distracting the user too much.
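For illustration, a read that hits the cache could then fail with something like (exact wording hypothetical):

  error: memory read failed for 0x101800000 (cached)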

In D134906#3830290 <https://reviews.llvm.org/D134906#3830290>, @DavidSpickett wrote:

>> If we wanted to track these decisions it would be more appropriate to log them, but I'm not sure even that is necessary.
>
> Yeah this is a better idea, if we do it at all.
>
> Let me rephrase the question. If I had a memory read failing and I suspected that the cache was marking it as unreadable how would I confirm that? If there's a way to do that when it's really needed, then we're good.

When I was testing/debugging this, I did it by turning the packet log on :)
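(For anyone trying to reproduce this, the packet log can be enabled from the lldb command line:

  (lldb) log enable gdb-remote packets

and the memory log with `log enable lldb memory`.)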

> Perhaps log only when a new address is added to the unreadable list? Then with the memory log plus the packet log you could figure out the issue, even if you didn't know that the cache had this unreadable address feature before you started investigating.

Yes, I can add a log message to Process when I add an address, good idea.
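Something along these lines (the log channel and wording are my guess, not necessarily what the final patch will use):

  Log *log = GetLog(LLDBLog::Memory);
  LLDB_LOG(log, "Adding address {0:x} to the unreadable-address cache", addr);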

Let me go through the patch again carefully to make sure I'm correctly handling partially-successful reads (they must not put anything in the inaccessible memory cache), tweak that error message a tad, and add a bit of logging.

In D134906#3832642 <https://reviews.llvm.org/D134906#3832642>, @labath wrote:

> I don't know about debugserver, but both lldb-server and gdbserver currently return an error when the memory is partially accessible, even though the protocol explicitly allows the possibility of truncated reads. It is somewhat hard to reproduce, because the caching mechanism in lldb aligns memory reads, and the cache "line" size is usually smaller than the page size -- which is probably why this behavior hasn't bothered anyone so far. Nonetheless, I would say that this behavior (not returning partial contents) is a (QoI) bug, but the fact that two stubs have it makes me wonder how many other stubs do the same as well..

Hi Pavel, thanks for pointing this out.  I did a quick check with debugserver on darwin, using `memory region` to find the end of an accessible region in memory, and starting a read request a little earlier, in readable memory:

  (lldb) sett set target.process.disable-memory-cache true
  
  (lldb) mem region 0x0000000101800000
   <  31> send packet: $qMemoryRegionInfo:101800000#ce
   <  34> read packet: $start:101800000;size:6a600000;#00
  [0x0000000101800000-0x000000016be00000) ---
  
  (lldb) mem region 0x0000000101800000-4
   <  31> send packet: $qMemoryRegionInfo:1017ffffc#d8
   < 122> read packet: $start:101000000;size:800000;permissions:rw;dirty-pages:101000000,101008000,10100c000,1017fc000;type:heap,malloc-small;#00
  [0x0000000101000000-0x0000000101800000) rw-
  
  (lldb) x/8wx 0x0000000101800000-4
  
   <  17> send packet: $x1017ffffc,20#ca
   <   8> read packet: $00000000#00
  
   <  17> send packet: $x101800000,1c#f2
   <   7> read packet: $E80#00
  
  0x1017ffffc: 0x00000000 0x00000000 0x00000000 0x00000000
  0x10180000c: 0x00000000 0x00000000 0x00000000 0x00000000
  warning: Not all bytes (4/32) were able to be read from 0x1017ffffc.
  (lldb) 

We ask for 32 bytes starting at 0x1017ffffc and get back 4 bytes; lldb then tries to read the remaining 28 bytes starting at 0x101800000 and gets an error.  So debugserver behaves differently from stubs that simply return an error for the whole request when it asks for more bytes than are readable.  Those stubs do complicate my approach: when the entire request fails, I never learn which address was inaccessible beyond "something within this address range", and I don't think I can do anything useful in that case.
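To spell out why the stub behavior matters for the cache, a hypothetical sketch (assuming ReadMemoryFromInferior's partial-read semantics):

  const size_t bytes_read = ReadMemoryFromInferior(addr, buf, size, error);
  if (bytes_read == 0) {
    // debugserver-style total failure: addr itself was unreadable and
    // can be cached directly.
  } else if (bytes_read < size) {
    // Partial read: the first unreadable address is addr + bytes_read
    // (0x101800000 in the session above), so that is the address worth
    // caching.
  }
  // An lldb-server/gdbserver-style stub fails the whole request instead,
  // so a failed read only says "something in [addr, addr + size) was
  // unreadable" -- no single address can safely be cached.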


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134906/new/

https://reviews.llvm.org/D134906


