[PATCH] D54796: [clangd] C++ API for emitting file status
Ilya Biryukov via Phabricator via cfe-commits
cfe-commits at lists.llvm.org
Wed Dec 5 10:12:49 PST 2018
ilya-biryukov accepted this revision.
ilya-biryukov added a comment.
This revision is now accepted and ready to land.
LGTM, I still have a few NITs and suggestions (see the comment about emitting "queued" while waiting for diagnostics and **not** emitting queued if the lock was acquired on the first try), but they don't look like something that should block this change.
================
Comment at: clangd/TUScheduler.cpp:268
+ /// Status of the TU.
+ TUStatus Status; /* GUARDED_BY(DiagMu) */
};
----------------
hokein wrote:
> ilya-biryukov wrote:
> > hokein wrote:
> > > ilya-biryukov wrote:
> > > > Is `Status` actually ever read from multiple threads?
> > > > I assume it's only accessed on the worker thread, right? If so, I think we can leave out the locks around it and put it beside the other fields that are only accessed by the worker thread, i.e. `DiagsWereReported`, etc.
> > > >
> > > > PS Sorry for going back and forth, I didn't think about it in the previous review iteration.
> > > Unfortunately not: most statuses are emitted on the worker thread, but we emit the `Queued` status on the main thread...
> > Hm, that sounds surprising.
> > What if the rebuild is in progress on the worker thread and we emit the queued status on the main thread at the same time? We might get weird sequences of statuses, right?
> > E.g. `Queued → BuildingPreamble → [new update on the main thread] Queued → [processing request on a worker thread] RunningAction('XRefs')`
> >
> > The `Queued` request status between `BuildingPreamble` and `RunningAction` looks **really** weird.
> > Maybe we could avoid that by emitting the `Queued` status on the worker thread whenever we're about to sleep on the debounce timeout?
> This sounds fair enough, and can simplify the code.
>
> > Maybe we could avoid that by emitting the Queued status on the worker thread whenever we're about to sleep on the debounce timeout?
>
> The `Queued` state was originally designed for the case where the worker thread is blocked by the max-concurrent-threads semaphore.
> Emitting it when we're about to sleep on the debounce timeout doesn't seem like the right place to me. I think a reasonable place is right before acquiring the `Barrier`.
>
Ah, good point: waiting on a barrier can definitely take some time and is a form of queuing that we might want to notify the clients about.
It would be nice to notify only when the Barrier couldn't be acquired right away, though; this could probably be achieved by adding `try_lock()` to the semaphore and only emitting a `Queued` notification if it fails. That way we'll avoid spamming the clients with non-useful statuses.
WRT deciding whether we should notify when waiting on the debounce timer or when acquiring the semaphore... why not both?
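Something along these lines could work (just a sketch, assuming `Semaphore` grows a `try_lock()`, which it doesn't have today):

  // Take the barrier without blocking if a slot is free; only tell the
  // client we are queued when we actually have to wait.
  std::unique_lock<Semaphore> BarrierLock(Barrier, std::try_to_lock);
  if (!BarrierLock.owns_lock()) {
    emitTUStatus({TUAction::Queued, Req.Name});
    BarrierLock.lock(); // block until a worker slot frees up
  }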
================
Comment at: clangd/TUScheduler.cpp:422
+ Details.ReuseAST = true;
+ emitTUStatus({TUAction::BuildingFile, TaskName}, &Details);
return;
----------------
Should we also emit this status if we reuse the AST, but still report the diagnostics?
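For example, something with this shape (hypothetical only; `CanReuseAST` and `publishDiags()` are made-up stand-ins for the surrounding logic):

  if (CanReuseAST) {
    Details.ReuseAST = true;
    // Emit the status on both reuse paths, not just the early return.
    emitTUStatus({TUAction::BuildingFile, TaskName}, &Details);
    if (DiagsWereReported)
      return;        // nothing else to do
    publishDiags();  // made-up: still report diagnostics for the reused AST
  }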
================
Comment at: clangd/TUScheduler.cpp:626
{
+ emitTUStatus({TUAction::Queued, Req.Name});
std::lock_guard<Semaphore> BarrierLock(Barrier);
----------------
See the other suggestion about emitting this status only when locking the barrier failed.
This might be a good fit for a different patch, though; maybe add a FIXME here?
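E.g. right above the `emitTUStatus` call:

  // FIXME: emit Queued only when try_lock() on the Barrier fails, to avoid
  // spamming clients with a status when a slot is free right away.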
================
Comment at: clangd/TUScheduler.h:75
+ struct BuildDetails {
+ /// indicates whether clang failed to build the TU.
+ bool BuildFailed = false;
----------------
NIT: start with a capital letter to be consistent with the rest of the codebase.
================
Comment at: clangd/TUScheduler.h:77
+ bool BuildFailed = false;
+ /// indicates whether we reused the prebuilt AST.
+ bool ReuseAST = false;
----------------
NIT: start with a capital letter to be consistent with the rest of the codebase.
Repository:
rCTE Clang Tools Extra
CHANGES SINCE LAST ACTION
https://reviews.llvm.org/D54796/new/
https://reviews.llvm.org/D54796