[Lldb-commits] [PATCH] D95601: [lldb][AArch64] Add memory tag reading to lldb-server

David Spickett via Phabricator via lldb-commits <lldb-commits at lists.llvm.org>
Tue Mar 9 08:28:26 PST 2021


DavidSpickett added inline comments.


================
Comment at: lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp:1345
+
+  tags.resize((range.GetByteSize() / details->manager->GetGranuleSize()) *
+              details->manager->GetBytesPerTag());
----------------
omjavaid wrote:
> Is there a difference between Granule and GranuleSize?
Granule is used to mean the granule you're currently in. So if you were at address 0x10, you'd be in the [0x10, 0x20) granule.
GranuleSize is the size of each granule.

If I saw `manager->GetGranule` I'd expect it to be something like `std::pair<addr_t, addr_t> GetGranule(addr_t addr);`.
As in, tell me which granule I'm in.

Though I admit this is more an issue of "ExpandToGranule" not being clear enough, rather than "GetGranuleSize" being redundant.
AlignToGranule(s), perhaps? But then you'd ask "align how?", hence "ExpandTo". Anyway.
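
To illustrate the distinction, here is a rough standalone sketch. The free functions and the constant are hypothetical stand-ins, not the manager API used in the patch; a 16-byte MTE granule is assumed.

```
#include <cstdint>
#include <cstdio>
#include <utility>

typedef uint64_t addr_t;
constexpr addr_t kGranuleSize = 16; // what a GetGranuleSize() would return for MTE

// "GetGranule": tell me which granule this address is in.
std::pair<addr_t, addr_t> GetGranule(addr_t addr) {
  addr_t begin = addr & ~(kGranuleSize - 1);
  return {begin, begin + kGranuleSize};
}

// "ExpandToGranule": grow [addr, addr+len) so both ends sit on granule boundaries.
std::pair<addr_t, addr_t> ExpandToGranule(addr_t addr, addr_t len) {
  addr_t begin = addr & ~(kGranuleSize - 1);
  addr_t end = (addr + len + kGranuleSize - 1) & ~(kGranuleSize - 1);
  return {begin, end};
}

int main() {
  std::pair<addr_t, addr_t> g = GetGranule(0x10);          // -> [0x10, 0x20)
  std::pair<addr_t, addr_t> r = ExpandToGranule(0x18, 0x10); // -> [0x10, 0x30), 2 granules
  printf("[0x%llx, 0x%llx) [0x%llx, 0x%llx)\n",
         (unsigned long long)g.first, (unsigned long long)g.second,
         (unsigned long long)r.first, (unsigned long long)r.second);
  return 0;
}
```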


================
Comment at: lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp:1357
+
+  if (error.Fail())
+    return error;
----------------
omjavaid wrote:
> DavidSpickett wrote:
> > DavidSpickett wrote:
> > > DavidSpickett wrote:
> > > > omjavaid wrote:
> > > > > Is the ptrace request a success if the number of tags requested is not equal to the number of tags read? If not, then this and the following condition may be redundant.
> > > > Well, PtraceWrapper doesn't check the iovec, but I'll check the kernel source to see if it's actually possible for it to fail that way.
> > > In `linux/arch/arm64/kernel/mte.c` `__access_remote_tags` there is a comment:
> > > ```
> > > +/*
> > > + * Access MTE tags in another process' address space as given in mm. Update
> > > + * the number of tags copied. Return 0 if any tags copied, error otherwise.
> > > + * Inspired by __access_remote_vm().
> > > + */
> > > ```
> > > 
> > > *any tags* being the key words.
> > > 
> > > So the scenario is:
> > > * ask to read from addr X in page 0, with a length of pagesize plus a bit, so the range spills into page 1
> > > * the kernel can access page 0, reads tags until the end of the page
> > > * it tries to access page 1 to read the rest, fails, and returns 0 (success) since *some* tags were read
> > > * we see the ptrace call succeeded, but with fewer tags than we expected
> > > 
> > > I don't think it's worth dealing with this corner case here, since lldb will look before it leaps. It would have errored much earlier because either page 1 isn't in the tracee's memory regions or it isn't MTE enabled.
> > > 
> > > 
> > Added a comment in the code too.
> This means emitting fewer than the requested number of tags is legit. However, we have set the tags vector size equal to whatever we requested. We set an error string which is not actually used anywhere and could be removed in favor of a log message, to help with debugging if needed.
> 
> Also, we need to resize the vector after the ptrace request so that we use this size in the gdb remote reply.
I'll log that error in GDBRemoteCommunicationServerLLGS.cpp.

I'll do what you suggested to support partial reads on the server side; see the sketch below. Then the lldb client can error if it doesn't get what it expected.
(Testing partial reads on the lldb side is going to be very difficult anyway, since we'd need a valid smaps entry to even issue one.)
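
Roughly what I have in mind, as a standalone sketch rather than the actual NativeProcessLinux change. It assumes the PTRACE_PEEKMTETAGS behaviour described in the kernel docs (request takes a struct iovec, the number of tags actually read is written back to iov_len, one byte per tag in the packed format); the helper name and signature are made up for the example.

```
#include <cerrno>
#include <cstdint>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <vector>

#ifndef PTRACE_PEEKMTETAGS
#define PTRACE_PEEKMTETAGS 33 // arm64-only request, may be missing from older headers
#endif

// Sketch only: read the tags covering num_tags granules starting at addr from
// process pid, shrinking the result to what the kernel actually read so that a
// partial read is visible to the caller and to the gdb remote reply.
static int ReadMemoryTags(pid_t pid, uint64_t addr, size_t num_tags,
                          std::vector<uint8_t> &tags) {
  tags.resize(num_tags); // 1 byte per tag in the packed ptrace format
  struct iovec iov = {tags.data(), tags.size()};

  errno = 0;
  // Cast is for glibc's enum-typed request parameter.
  if (ptrace(static_cast<__ptrace_request>(PTRACE_PEEKMTETAGS), pid,
             reinterpret_cast<void *>(addr), &iov) == -1)
    return errno;

  // The kernel reports success as long as *any* tags were copied and writes
  // the number copied back into iov_len, so trim the vector to match.
  tags.resize(iov.iov_len);
  return 0;
}
```

GDBRemoteCommunicationServerLLGS can then just reply with however many tags came back, and the client decides whether getting fewer than it asked for is an error.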


================
Comment at: lldb/test/API/tools/lldb-server/memory-tagging/TestGdbRemoteMemoryTagging.py:102
+        # Here we read from 1/2 way through a granule, into the next. Expands to 2 granules
+        self.check_qmemtags_response("{:x},10:1".format(buf_address+64-8), "m0304")
----------------
omjavaid wrote:
> Do we test the partial read case here?
Ack. No, but it should just be a case of reading off the end of the allocated buffer by some amount.
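
For reference, the arithmetic behind the expected reply above, as a hypothetical standalone snippet. It assumes 16-byte granules and that the test buffer tags granule N with the value N (which is what the expected "m0304" suggests); the buffer address is a stand-in for buf_address.

```
#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t granule = 16;
  const uint64_t buf = 0x4000; // hypothetical stand-in for buf_address
  // The case above: start 8 bytes before the granule at buf+64, read 0x10 bytes.
  uint64_t addr = buf + 64 - 8, len = 0x10;
  uint64_t first = (addr - buf) / granule;                      // granule 3
  uint64_t last = ((addr - buf) + len + granule - 1) / granule; // one past granule 4
  printf("m");
  for (uint64_t g = first; g < last; ++g)
    printf("%02llx", (unsigned long long)g); // -> "m0304"
  printf("\n");
  return 0;
}
```

A partial-read test would use the same packet format, just with a length that runs past the end of the mapping.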


================
Comment at: lldb/test/API/tools/lldb-server/memory-tagging/main.c:13
+  // Wait for lldb-server to stop us
+  while (1) {
+  }
----------------
omjavaid wrote:
> An infinite loop in the test program may not be a good idea.
I'll check what the timeouts are on the expect packet sequence. I think it would get killed eventually if we didn't see the output we're looking for.
(I could do something more CPU-friendly: sleep, halt, or wait on something.)
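
Something like this is what I mean by CPU-friendly, as a sketch only; the MTE buffer setup and the printing of its address that the test relies on are elided.

```
#include <unistd.h>

int main() {
  // ... map an MTE-enabled buffer, set its tags, and print its address for
  // the test to pick up (elided) ...

  // Sleep in a loop instead of spinning; lldb-server interrupts the process
  // anyway, and the test harness will time out and kill it if the expected
  // output never appears.
  while (1)
    sleep(1);
  return 0;
}
```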


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D95601/new/

https://reviews.llvm.org/D95601


