[Lldb-commits] [PATCH] D22914: [WIP] Add concurrent packets support to gdb-remote client

Pavel Labath via lldb-commits lldb-commits at lists.llvm.org
Fri Jul 29 07:52:18 PDT 2016


labath added a comment.

I am going to put my responses inline:

In https://reviews.llvm.org/D22914#499519, @clayborg wrote:

> A few things that worry me:
>
> - There are no sequence IDs on any packets, so when receiving responses, I am not sure how you are going to match up packets with their responses, especially when you are sending from multiple threads. I didn't look through your implementation, but I am guessing you didn't add that?


I did not add packet sequence numbers, but I don't think they are actually necessary. Matching requests with responses is easy: responses have to arrive in the order in which the requests were sent, so the thread which sent the first request will be the first one to read a response, and once it is done, the next thread gets a chance to read its response. The whole implementation is very straightforward (see `SendPacketAndWaitForResponseNoLock`) and completely transparent to the server.
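
To make the ordering guarantee concrete, here is a minimal sketch of the idea (this is not the actual patch; the class and member names are invented for illustration). Sends are serialized, each request takes a "ticket", and a thread may read from the connection only once all earlier tickets have consumed their responses:

  #include <condition_variable>
  #include <cstdint>
  #include <mutex>
  #include <string>

  class PipelinedConnection {
  public:
    std::string SendPacketAndWaitForResponse(const std::string &packet) {
      uint64_t ticket;
      {
        // Sends are serialized; the ticket records this request's
        // position in the response order.
        std::lock_guard<std::mutex> send_guard(m_send_mutex);
        ticket = m_next_ticket++;
        WritePacket(packet);
      }
      // Other threads may keep sending while we wait for our turn to read.
      std::unique_lock<std::mutex> read_lock(m_read_mutex);
      m_cv.wait(read_lock, [&] { return m_current_ticket == ticket; });
      std::string response = ReadPacket(); // the next response is ours
      ++m_current_ticket;
      m_cv.notify_all(); // wake the thread holding the next ticket
      return response;
    }

  private:
    // Placeholders for the real socket I/O.
    void WritePacket(const std::string &packet) { /* write to the socket */ }
    std::string ReadPacket() { /* read one response */ return ""; }

    std::mutex m_send_mutex;
    std::mutex m_read_mutex;
    std::condition_variable m_cv;
    uint64_t m_next_ticket = 0;    // guarded by m_send_mutex
    uint64_t m_current_ticket = 0; // guarded by m_read_mutex
  };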

The interactions between this and the packet verifier and echo synchronization features you have added (we don't actually rely on those) are a bit more complicated, and we can discuss how exactly they should interact, but I am confident it can be done in a way where they do not interfere.

> - Why not just make a qModulesInfo packet that gets all module info from a process and avoid the many callbacks? When we were reducing packets for Apple Watch we did a lot of consolidation packets where we get all of the data we needed in fewer packets (we only get 22-50 packets per second on the watch!).
>
>   So I am kind of against this patch because there is no way to take advantage of it from an API standpoint. Take qModuleInfo for example. ProcessGDBRemote::GetModuleSpec() and PlatformRemoteGDBServer::GetModuleSpec() are the only two callers of this. How would you ever take advantage of sending multiple packets at the same time? Make the access multi-threaded? I'd sooner see new functions that get more than one module spec at a time:
>
>   ProcessGDBRemote::GetModuleSpecs(...) PlatformRemoteGDBServer::GetModuleSpecs(...)
>
>   Then this could call the qModulesInfo (note the extra "s") with all of the module infos encoded as JSON, and the reply can contain all module infos needed.


I had considered this approach, and actually started implementing it, but then I became less convinced that it is a good idea. The problem is that the place where I obtain the list of modules I want to query is buried inside DynamicLoaderPOSIXDYLD::LoadAllCurrentModules, which calls DynamicLoader::LoadModuleAtAddress, which goes into Target::GetSharedModule, then Platform::GetSharedModule (a virtual function implemented in about 10 classes), eventually ending up in PlatformRemoteGDBServer::GetModuleSpecs. To pipe that information through, I'd need all of these functions to process modules in bulk instead of one by one. This is certainly possible, but it won't make the interfaces very nice, as the sketch below tries to show.
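
For illustration, this is roughly the kind of bulk variant every layer of that chain would need to grow. The types and signatures here are simplified stand-ins, not the real LLDB API:

  #include <memory>
  #include <vector>

  // Simplified stand-ins for the real lldb_private types.
  struct ModuleSpec { /* file path, UUID, architecture, ... */ };
  class Module {};
  using ModuleSP = std::shared_ptr<Module>;

  class Platform {
  public:
    virtual ~Platform() = default;
    // Existing one-at-a-time interface (heavily simplified).
    virtual bool GetSharedModule(const ModuleSpec &spec, ModuleSP &module) = 0;
    // Hypothetical bulk variant, which the ~10 Platform subclasses (and the
    // Target/DynamicLoader layers above them) would all have to support.
    virtual bool GetSharedModules(const std::vector<ModuleSpec> &specs,
                                  std::vector<ModuleSP> &modules) = 0;
  };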

The other option I see would be to make a kind of shortcut and have the DynamicLoader issue some kind of a "prefetch" call to the target/platform/whatever, which would trigger the qModulesInfo packet, and then proceed as before, processing the modules individually. This does not seem like a good interface either, as it complicates the already quite complicated interactions between these classes. I realize these may not sound like very convincing arguments against those approaches, but I also have several interesting reasons to try to go the other way.

Yes, my idea was to have the DynamicLoader spin up some threads which would process individual modules in parallel, roughly as in the sketch below. AFAICS, there is nothing about getting a module that makes it inherently serial, so it should be possible to achieve that. Granted, this will need to be done with a very steady hand, reviewing the assumptions the functions I mentioned are making, but I think that is doable, and I am in no rush to get it done.
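
Something along these lines, assuming LoadModuleAtAddress and everything below it has been audited for thread safety (the types and function bodies here are placeholders, not the real LLDB interfaces):

  #include <cstdint>
  #include <future>
  #include <string>
  #include <vector>

  struct ModuleInfo {
    std::string path;
    uint64_t base_addr;
  };

  // Placeholder; assumed thread-safe after reviewing the call chain.
  void LoadModuleAtAddress(const ModuleInfo &info) { /* qModuleInfo etc. */ }

  void LoadAllCurrentModules(const std::vector<ModuleInfo> &modules) {
    std::vector<std::future<void>> tasks;
    tasks.reserve(modules.size());
    for (const ModuleInfo &info : modules) {
      // Each task issues its own qModuleInfo packet; the pipelined
      // connection overlaps the round-trips on the wire.
      tasks.push_back(std::async(std::launch::async,
                                 [&info] { LoadModuleAtAddress(info); }));
    }
    for (std::future<void> &task : tasks)
      task.get(); // wait for completion (and propagate any exceptions)
  }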

What I like about this idea is that it is not limited to qModuleInfo packets. To find out which modules are loaded, we traverse a linked list in inferior memory. The traversal itself is inherently serial, but with this change we could interleave walking the list with processing the elements we have already found (see the sketch below). Next, if you remember the parallel module _download_ discussion from the beginning of this year, I was able to obtain about a 5x speed improvement using this approach, so this would be a step towards that. It opens up a lot of possibilities (computing summaries of complicated frame variables in parallel, why not?), all without changing the protocol or even the server used.
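
As a sketch of that interleaving (again with placeholder functions, not actual LLDB code): the serial walk becomes a producer feeding a queue, and the per-module processing drains the queue while the walk is still in progress:

  #include <condition_variable>
  #include <cstdint>
  #include <mutex>
  #include <queue>
  #include <thread>

  using addr_t = uint64_t;

  // Placeholders for inferior memory reads and per-module processing.
  addr_t ReadNextEntry(addr_t entry) { /* follow ->next */ return 0; }
  void ProcessModuleEntry(addr_t entry) { /* e.g. issue qModuleInfo */ }

  void LoadModulesPipelined(addr_t list_head) {
    std::queue<addr_t> pending;
    std::mutex mutex;
    std::condition_variable cv;
    bool done = false;

    // Producer: the inherently serial link-map walk.
    std::thread walker([&] {
      for (addr_t entry = list_head; entry != 0; entry = ReadNextEntry(entry)) {
        std::lock_guard<std::mutex> guard(mutex);
        pending.push(entry);
        cv.notify_one();
      }
      std::lock_guard<std::mutex> guard(mutex);
      done = true;
      cv.notify_one();
    });

    // Consumer: process entries while the walk is still going on.
    for (;;) {
      std::unique_lock<std::mutex> lock(mutex);
      cv.wait(lock, [&] { return !pending.empty() || done; });
      if (pending.empty())
        break; // walk finished and the queue is drained
      addr_t entry = pending.front();
      pending.pop();
      lock.unlock();
      ProcessModuleEntry(entry); // overlaps with the walker's memory reads
    }
    walker.join();
  }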

Additionally, although I have no data to back this up, I believe the pipelining approach will perform better than the bulk mode. The bulk mode creates one huge packet, which takes a long time to produce (disk access), and during that time nothing else is happening. With pipelining, the work on the server overlaps with the network communication and with the client-side processing.
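
To make that concrete with some made-up numbers: suppose there are 100 modules, the server needs 2 ms of disk/processing time per module, and the round-trip latency is 10 ms. The bulk packet serializes everything, so the client waits roughly 10 ms + 100 * 2 ms = 210 ms before it can process anything. With pipelined individual packets, once the first response arrives (after ~12 ms), server processing, network transfer, and client-side work all proceed concurrently, so the total time approaches the largest of the three costs rather than their sum.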

> As soon as we start adding sequence numbers and extra stuff, this stops being the GDB remote protocol and becomes our own. We might as well switch over to a new JSON-based protocol that fixes all of the issues with GDB remote...


There are things I don't like about the gdb-remote protocol, but the lack of sequence numbers is not one of them. I saw that you are having a lot of problems with packet sequences getting out of sync, which quite surprises me, as we haven't seen such issues with lldb-server. So moving towards a different protocol is a non-goal for us (although we would be open to a discussion if someone wanted to push for that).

Let me know what you think.
PS: I will be away next week, so I may not respond until I am back.


https://reviews.llvm.org/D22914




