[PATCH] D92198: [clangd] Log remote index connectivity status

Sam McCall via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Wed Dec 9 05:16:30 PST 2020


sammccall added inline comments.


================
Comment at: clang-tools-extra/clangd/index/remote/Client.cpp:114
     assert(!ProjectRoot.empty());
+    ChannelStatusWatcher = std::thread([&Channel]() {
+      grpc_connectivity_state Status =
----------------
kbobyrev wrote:
> sammccall wrote:
> > The thread is a bit surprising here. I was assuming we'd use some callback-based API to watch rather than spawn a thread to do the "work" of blocking on a change.
> > Also this is going to "wake up" periodically to refresh the watch, which seems undesirable if we want to go truly idle.
> > 
> > Ah, so there's no callback-type connection-changed API! Sorry to mislead you :-( I hadn't read the API carefully. (I think this is coming at some point - they're talking about replacing CompletionQueue with EventManager, which is a callback-based abstraction)
> > 
> > Seems the "async" option is to subscribe on a CompletionQueue, but then we have to poll it. We can use a zero timeout so the poll doesn't block... But then we might as well poll GetStatus directly.
> > 
> > Then the question is where to do that from. What if we called GetStatus before/after each request, and reported whenever it has changed? That means:
> > 
> > - no idle background work
> > - we know the connection state for each request
> > - we know when a request triggered a connection, and whether that connection succeeded in time for the request
> > - if a connection goes idle, we don't know *when*, it gets reported on the next request
> ("resolved" in this version)
> 
> I understand your point and I support the arguments, but this would mean that we only get the status during requests and can't query it at an arbitrary time, which seems like a loss. Maybe that's OK, since there might not be many interesting cases where we want to query the server status without making any requests, and your proposal is easier to implement & maintain. But it could also be nice to have some kind of callback on status change.
> 
> As you mentioned, I was looking for something like this in gRPC itself but couldn't find it outside the CompletionQueue and EventManager things. Sadly, I don't think we'd be in a position to use the replacement even when it's there, since we have to support older gRPC versions :(
> 
> Looking at the logs, the server connectivity updates follow the requests and it looks OK, but I'm not entirely sure giving up on _watching_ the status is great. Still, it makes life easier, so maybe this change should go that way.
I agree, it's absolutely a loss vs watching the state continuously.
But I think simplicity needs to win out here. Imagine if we have trouble coordinating shutdown in certain configurations because the thread gets blocked (grpc bug?)... this would not be much fun to debug.

In principle we could #ifdef based on the gRPC version in the future; in practice you're probably right that we'll just suffer without this :-(
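
For readers following along, here is a minimal sketch of the watcher-thread approach being discussed (my reconstruction for illustration, not the patch's actual code). The names `watchChannel` and `ShuttingDown` and the 60-second re-arm deadline are assumptions; the point is that `WaitForStateChange` only blocks until a deadline, so the thread has to wake up periodically to re-arm the watch even when the channel is idle.

```
// Reconstruction of the watcher-thread idea; names are hypothetical.
#include <grpcpp/grpcpp.h>

#include <atomic>
#include <chrono>
#include <memory>

void watchChannel(const std::shared_ptr<grpc::Channel> &Channel,
                  const std::atomic<bool> &ShuttingDown) {
  grpc_connectivity_state Status = Channel->GetState(/*try_to_connect=*/true);
  while (!ShuttingDown) {
    // Blocks until the state changes or the deadline expires; the deadline is
    // what forces the periodic wake-up even when the channel stays idle.
    if (Channel->WaitForStateChange(Status, std::chrono::system_clock::now() +
                                                std::chrono::seconds(60))) {
      Status = Channel->GetState(/*try_to_connect=*/false);
      // Report the new Status here, e.g. via clangd's log() after converting
      // the enum to something printable.
    }
  }
}
```

The alternative settled on below avoids this thread entirely by checking the state around each request instead.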


================
Comment at: clang-tools-extra/clangd/index/remote/Client.cpp:124
+          Status = Channel->GetState(/*try_to_connect=*/true);
+          log("Remote index connection status changed: {0}", Status);
+        }
----------------
kbobyrev wrote:
> sammccall wrote:
> > Can we include the remote address and prior status?
> > 
> > e.g. `"Remote index ({0}): {1} => {2}"`
> I can see how it'd be nice, but I'm not sure how to do this properly. My only idea is to query the prior status at the beginning of the request? Otherwise it needs to be stored somewhere, and the gRPC interfaces require `const` functions, so `IndexClient` does not change its state within requests. I could either wrap it into something or have a global `grpc_connectivity_state` for the last observed status, but neither looks OK. Maybe querying the prior status right before the request is OK (that's what I've done here), but I'm not sure if that's exactly what you suggested.
TL;DR: `mutable atomic`

Using a `mutable` member is the usual solution to "I want to be stateful but have to adhere to a `const` interface".
We usually have some philosophical debate about whether the function is still "conceptually const", but it certainly is here, as the state in question only affects logging.

With `mutable` you usually have to add a mutex to provide the usual thread-safety guarantees (const = threadsafe). However in this case we just want to detect transitions, and atomics are great at this:

```
auto NewStatus = getStatus();
auto OldStatus = last_status_.exchange(NewStatus);
if (NewStatus != OldStatus)
  log(...);
```

This is guaranteed to log each time last_status_ changes value. (I think this will correspond closely enough to actual status changes, though using a mutex and calling getStatus() under it would be a little more accurate.)

Note that there's no difference between what you want to do before vs after an RPC - both are just chances to detect a transition - so pull out a function and call it in both places.

(I'm fairly sure atomic<grpc_connectivity_state> should be directly usable, on any decent impl this should be the same implementation as atomic<underlying_type<grpc_connectivity_state>>)
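
Pulling the pieces above together, here is a self-contained sketch of the mutable-atomic approach combined with the suggested "Remote index ({0}): {1} => {2}" log line. The class layout and the names maybeLogStatusChange, LastStatus, and ServerAddress are illustrative assumptions, not the actual clangd code.

```
// Illustrative sketch only; member and method names are hypothetical.
#include <grpcpp/grpcpp.h>

#include <atomic>
#include <memory>
#include <string>

#include "support/Logger.h" // clangd's formatv-style log(), assumed available

class IndexClient {
  std::shared_ptr<grpc::Channel> Channel;
  std::string ServerAddress;
  // mutable: const request methods may update the last observed state.
  // atomic: keeps const == thread-safe without adding a mutex.
  mutable std::atomic<grpc_connectivity_state> LastStatus{GRPC_CHANNEL_IDLE};

  void maybeLogStatusChange() const {
    auto NewStatus = Channel->GetState(/*try_to_connect=*/true);
    auto OldStatus = LastStatus.exchange(NewStatus);
    if (NewStatus != OldStatus)
      clang::clangd::log("Remote index ({0}): {1} => {2}", ServerAddress,
                         static_cast<int>(OldStatus),
                         static_cast<int>(NewStatus));
  }

public:
  void lookup(/*Request, Callback*/) const {
    maybeLogStatusChange(); // before the RPC: did this request trigger a connect?
    // ... perform the RPC ...
    maybeLogStatusChange(); // after the RPC: did the connection come up in time?
  }
};
```

Calling the same helper before and after each RPC is the "pull out a function and call it in both places" point: both call sites are just chances to observe a transition, and there is no idle background work.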


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D92198/new/

https://reviews.llvm.org/D92198


