[PATCH] D43518: [clangd] Allow embedders some control over when diagnostics are generated.

Ilya Biryukov via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Thu Feb 22 05:01:14 PST 2018


ilya-biryukov accepted this revision.
ilya-biryukov added a comment.
This revision is now accepted and ready to land.

LGTM (just one more possibly useful nit about const)



================
Comment at: clangd/TUScheduler.cpp:298
+      while (shouldSkipHeadLocked())
+        Requests.pop_front();
+      assert(!Requests.empty() && "skipped the whole queue");
----------------
ilya-biryukov wrote:
> sammccall wrote:
> > ilya-biryukov wrote:
> > > Instead of skipping requests here we could try removing them from back of the queue in `startTask` (or rewriting the last request instead of adding a new one).
> > > It feels the code could be simpler, as we will only ever have to remove a single request from the queue. And it could also keep the queue smaller in case of many subsequent `Auto` requests.
> > > WDYT?
> > Having startTask look ahead to find things to cancel was the thing I found most confusing/limiting in the previous code, so I'd rather not go back there :-)
> > That said, I did try this first, trying to limit the scope of this patch, but it got hard.
> > 
> > The main problems are:
> > - you're not just looking ahead one task, or even a fixed number of tasks. After [auto, no], a new `no` cancels the `no`, a new `auto` cancels both, and a read cancels neither. The states and the state machine are hard to reason about. (Unless you just loop over the whole queue, which seems just as complex.)
> > - the decision of "should task X run" is distributed over time via mutating state, rather than happening at one point via reads
> > - when looking at startTask time, you have to reason about the (possibly) concurrently running task. In run(), no task is running and nothing can be enqueued, so there are no concurrency issues.
> > 
> > >And it could also keep the queue smaller in case of many subsequent Auto requests.
> > This is true, but it doesn't seem like a practical concern.
> Thanks for clarifying. The first bullet point shouldn't be a big problem. Yes, the new task can remove multiple items from the back of the queue, but the implementation still looks more natural, as it only needs to inspect the **last** item on the queue on each outer-loop iteration. (The current implementation has to do an inner loop through multiple items on the queue in addition to the outer loop.)
> 
> The second point makes it hard, though. I would probably go with calling `pop_front()` when removing the request and signalling empty queue separately.
> 
> > This is true, but it doesn't seem like a practical concern.
> It isn't, but I still think it's a nice-to-have.
As discussed offline, I totally missed the case when the new request comes in and has to cancel requests from the middle of the queue. So the implementation I proposed would probably still have two loops and not be less complex.

So LGTM here, my suggestions won't make things simpler.
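For context, the skip-at-run-time pattern being discussed can be sketched roughly as below. This is a simplified, illustrative model, not clangd's actual code: the `Request` fields, the `Queue` class, and the supersession rule (a newer update makes a stale `No`/`Auto` update skippable; reads and `Yes` updates are never skipped) are assumptions based on the thread.

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <vector>

// Hypothetical simplified model of the pattern under review (names are
// illustrative, not clangd's): the worker loop skips superseded updates
// at the head of the queue instead of cancelling them at enqueue time.
enum class WantDiagnostics { Yes, No, Auto };

struct Request {
  std::string Name;
  bool IsUpdate;         // reads are never skipped
  WantDiagnostics Diags; // only meaningful for updates
};

class Queue {
public:
  void push(Request R) { Requests.push_back(std::move(R)); }

  // Run everything, returning the names of requests actually executed.
  std::vector<std::string> drain() {
    std::vector<std::string> Ran;
    while (!Requests.empty()) {
      while (shouldSkipHead())
        Requests.pop_front();
      assert(!Requests.empty() && "skipped the whole queue");
      Ran.push_back(Requests.front().Name);
      Requests.pop_front();
    }
    return Ran;
  }

private:
  // The head update is skippable if a newer update supersedes it:
  // diagnostics for a stale revision are never needed ("No"), and
  // "Auto" diagnostics are only built for the newest revision.
  bool shouldSkipHead() const {
    const Request &Head = Requests.front();
    if (!Head.IsUpdate || Head.Diags == WantDiagnostics::Yes)
      return false;
    for (size_t I = 1; I < Requests.size(); ++I)
      if (Requests[I].IsUpdate)
        return true; // a newer update exists; Head's result is stale
    return false;
  }

  std::deque<Request> Requests;
};
```

Note how this matches the point made above: the skip decision happens at one place (in the worker loop, with no task running), rather than being distributed across enqueue-time mutations.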


================
Comment at: clangd/TUScheduler.cpp:105
+  /// Should the first task in the queue be skipped instead of run?
+  bool shouldSkipHeadLocked();
 
----------------
NIT: this could be made `const`


Repository:
  rCTE Clang Tools Extra

https://reviews.llvm.org/D43518




