[Lldb-commits] [PATCH] D126614: [lldb] [gdb-remote] Client support for using the non-stop protocol

Michał Górny via Phabricator via lldb-commits lldb-commits at lists.llvm.org
Fri Jun 10 08:13:56 PDT 2022


mgorny added a comment.

Thanks for the review and welcome back!

In D126614#3572845 <https://reviews.llvm.org/D126614#3572845>, @labath wrote:

> (I think Jim would be interested in following this.)

Ok, I've added him to all the related diffs.

> I'm not sure I understand what this does. Is it that, when the setting is enabled, lldb uses non-stop communication at the protocol level, but then forces an all-stop whenever one thread stops (so that the upper layers are happy)?

Yes. This way it preserves compatibility with a genuine gdbserver that actually implements non-stop mode.
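
For illustration, the packet exchange in this mode looks roughly like this (thread ids and stop reasons made up; `%` marks an asynchronous notification):

```
-> QNonStop:1              enable the non-stop protocol
<- OK
-> vCont;c                 resume; in non-stop mode the server acks immediately
<- OK
<- %Stop:T05thread:p1.2;   asynchronous stop notification from the server
-> vStopped                ack it and ask for further pending stop replies
<- OK                      no more stops pending
```

Once the first notification arrives, the client stops the remaining threads, so the upper layers keep seeing the usual all-stop behavior.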

>> API tests required changing the Process::Halt() API to inspect the
>> process's private state rather than the public state, which remains
>> eStateUnloaded in gdb_remote_client tests.
>
> I don't think that checking the private state is correct there. Doesn't that just mean that there's a bug in the tests that it's not simulating a real gdbserver sufficiently well? Or maybe (if the simulated responses are reasonable) lldb should be made to work (properly set the state)  for the simulated responses?

Yes, I think it's a limitation of the test logic that the public state isn't being updated. I haven't really managed to figure that part out yet. Do you have any suggestions where to look for the problematic code?

That said, I'm currently focused on multiprocess support in lldb-server. While this is related, it isn't blocking the server work, and I don't want to distract myself by switching contexts right now, but I'll look into implementing your suggestions later on.



================
Comment at: lldb/packages/Python/lldbsuite/test/gdbclientutils.py:26
 
     Framing includes surrounding the message between $ and #, and appending
     a two character hex checksum.
----------------
labath wrote:
> I guess this comment needs updating. I've also been thinking about whether this is a sufficiently obvious way to send notifications, but I kinda like the brevity of it.
Yeah, I can't say I'm very proud of it, but I can't think of a way that would both be really clean and avoid major changes all over the place.

I suppose one option would be to wrap these packets in a minimal `Notification` class, and then do something like:

```
if isinstance(message, Notification):
  ...
```

and then pass `Notification("foo")` instead of `"%foo"`, but I'm not convinced it'd actually be more obvious.
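
A minimal sketch of that idea, for the record (`send_raw` and `checksum` are hypothetical helpers here, not the actual gdbclientutils API):

```python
def checksum(message):
    # Standard RSP checksum: sum of the message bytes, modulo 256.
    return sum(ord(c) for c in message) % 256


class Notification:
    """Marks a message to be sent as a %-framed async notification
    instead of a regular $-framed packet."""
    def __init__(self, message):
        self.message = message


def send_message(self, message):
    if isinstance(message, Notification):
        # Notifications use '%' framing and are never acknowledged.
        self.send_raw("%%%s#%02x" % (message.message,
                                     checksum(message.message)))
    else:
        self.send_raw("$%s#%02x" % (message, checksum(message)))
```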


================
Comment at: lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp:128
+            PacketResult::Success) {
+          LLDB_LOGF(log, "GDBRemoteClientBase::%s () vCtrlC failed",
+                    __FUNCTION__);
----------------
labath wrote:
> not `vCont;t` ? Also, I think these log lines are unnecessarily verbose, and are obscuring the real code. I think a single log line (indicating failure) should be more than enough as more information (the kind of failure, for instance) can be determined by looking at the packet log.
Good catch.

Just to be clear, do I understand correctly that you're suggesting a single `LLDB_LOG` that covers all non-successful responses?


================
Comment at: lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp:147
+        // vCtrlC may not do anything, so timeout if we don't get notification
+        if (ReadPacket(stop_response, milliseconds(500), false) ==
+            PacketResult::Success) {
----------------
labath wrote:
> This is unfortunate. Isn't there a better way to do this? Why aren't you using vCtrlC as all the comments say? Did you find any parallel to this in gdb?
Yes, it is unfortunate. I'm going to think about it more and try to figure out a better way. I was originally thinking of simply not waiting for the response, instead letting it arrive later and taking care of any pending notifications before sending the next continue packet (this would require merging D126655 first), but I'm somewhat afraid of race conditions. Though maybe unnecessarily so.

OTOH, I guess 500 ms may be insufficient when debugging over the Internet, and then this would lead to even worse results.

As for `vCtrlC`, I've changed the code to use `vCont;t`, as the former didn't work well with gdbserver (and the latter is also more correct with respect to the protocol). I forgot to update the comments.
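
For reference, with `vCont;t` the exchange I need to handle is roughly:

```
-> vCont;t                 ask the server to stop all threads
<- OK                      immediate ack in non-stop mode
<- %Stop:T00thread:p1.1;   notification for a thread that actually stopped
```

If every thread was already stopped, that final notification never arrives, which is where the timeout comes in.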


================
Comment at: lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp:450
+      if (m_comm.GetNonStopProtocol())
+        success = m_comm.SendPacketNoLock("vCtrlC") == PacketResult::Success;
+      else {
----------------
labath wrote:
> Isn't the vCtrlC packet supposed to have a response (OK or Exx according to <https://sourceware.org/gdb/onlinedocs/gdb/Packets.html#Packets>) ?
Indeed, you're correct. I guess I missed it because the stop-reason handler took care of consuming it.
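
So the exchange in non-stop mode is really something like:

```
-> vCtrlC
<- OK                      the ack I was missing (or Exx on failure)
<- %Stop:T02thread:p1.1;   the stop itself arrives separately
```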


================
Comment at: lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h:62
 
+  bool GetNonStopProtocol() { return m_non_stop_protocol; }
+
----------------
labath wrote:
> Maybe call this `GetNonStopEnabled` or `GetNonStopInUse` ?
I kinda wanted to emphasize 'protocol' here, to make it clear we aren't introducing full-scale non-stop support.


================
Comment at: lldb/test/API/functionalities/gdb_remote_client/TestGDBRemoteClient.py:686
+            def QNonStop(self, val):
+                assert val == 1
+                return "OK"
----------------
labath wrote:
> This is not a good assertion. Maybe you could replace it with an after-the-fact check using `assertPacketLogContains`?
Do you mean because the client could issue an irrelevant `QNonStop:0` at some point? I suppose that makes sense then.
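
Something like this, I suppose, with the handler accepting anything and the check done after the fact:

```python
class MyResponder(MockGDBServerResponder):
    def QNonStop(self, val):
        # Accept any value; the interesting one is verified below.
        return "OK"

# ... and once the test has run its course:
self.assertPacketLogContains(["QNonStop:1"])
```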


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D126614/new/

https://reviews.llvm.org/D126614


