[PATCH] D42573: [wip] The new threading implementation

Sam McCall via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Wed Jan 31 06:29:51 PST 2018


sammccall added a comment.

Not a complete review, but this looks pretty good!
You probably want to read the last comment first: the GC thread is the threadpooliest bit of code left, and you may (or may not) want to eliminate it.



================
Comment at: clangd/TUScheduler.cpp:23
+    return;
+  Worker = std::thread([&]() {
+    while (true) {
----------------
I tend to find thread bodies more readable as a member function; up to you.
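e.g. a rough sketch (run() is a made-up name; Worker is the member from this patch):

  // in the constructor:
  Worker = std::thread([this] { run(); });

  // the body, as a private member function:
  void run() {
    while (true) {
      // ...same loop as the current lambda...
    }
  }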


================
Comment at: clangd/TUScheduler.cpp:220
+
+  GCThread.cleanupFile(std::move(Data->Worker));
 }
----------------
In the spirit of "just spawn a thread, and write direct code"...

Could we just spawn a std::thread (held in a shared_ptr) to do the work here, and replace GCThread with a vector<weak_ptr<std::thread>>?
Then ~TUScheduler could just loop through:

  for (const auto &WeakThread : Cleanups)
    if (auto SharedThread = WeakThread.lock()) // thread may have died already
      SharedThread->join();

Might that be simpler? You'd need to lock the Cleanups vector, but there's no fiddling with CVs etc.


================
Comment at: clangd/TUScheduler.cpp:258
+    std::lock_guard<CountingSemaphore> BarrierLock(Barrier);
+    // XXX: what if preamble got built by this time?
+    // if (!Preamble)
----------------
This seems like where shared_ptr might help us out, if we're willing to deal with it.
If we captured a shared_ptr<File> here, then we'd want to call getPossiblyStalePreamble inside rather than outside the lambda.

So this would look something like:
 - FileASTThread does thread shutdown as part of destructor, and requires the TUScheduler to outlive it
 - TUScheduler keeps (conceptually): active files `StringMap<shared_ptr<FileASTThread>>`, and shutdown handles `vector<weak_ptr<thread>>`
 - removal takes the shared_ptr out of the map, transfers ownership of the shared_ptr to a new thread which does nothing (except destroy the shared_ptr), and stashes a weak_ptr to the thread as a shutdown handle, which gets looped over in the destructor.

Is this simpler? I'm not sure. It's maybe more thematically consistent :-) and it solves your "preamble got built" problem.



================
Comment at: clangd/TUScheduler.h:37
+/// Limits the number of threads that can acquire this semaphore.
+class CountingSemaphore {
+public:
----------------
Probably belongs in Threading.h instead.

Maybe just Semaphore? If we have *two* semaphores in clangd, we've jumped the shark!
(Isn't a non-counting semaphore just a mutex?)
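FWIW, a minimal sketch of what that could look like in Threading.h (just an illustration under my assumptions about naming, not a request for this exact shape). Giving it lock()/unlock() means std::lock_guard works on it, as in TUScheduler.cpp:258:

  // Allows at most MaxConcurrent threads to hold the semaphore at once.
  // Needs <mutex>, <condition_variable>, and <cstddef>.
  class Semaphore {
  public:
    Semaphore(std::size_t MaxConcurrent) : FreeSlots(MaxConcurrent) {}

    void lock() { // blocks until a slot is free, then takes it
      std::unique_lock<std::mutex> Lock(Mu);
      SlotsChanged.wait(Lock, [&] { return FreeSlots > 0; });
      --FreeSlots;
    }

    void unlock() { // returns a slot and wakes one waiter
      {
        std::lock_guard<std::mutex> Lock(Mu);
        ++FreeSlots;
      }
      SlotsChanged.notify_one();
    }

  private:
    std::mutex Mu;
    std::condition_variable SlotsChanged;
    std::size_t FreeSlots;
  };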



================
Comment at: clangd/TUScheduler.h:50
+
+class FileASTThread {
+public:
----------------
Maybe FileWorker? This has-a thread, rather than is-a.


================
Comment at: clangd/TUScheduler.h:59
+
+  void setDone();
+
----------------
AFAICS, we require callers to call setDone() before destroying, and the caller always does this immediately before destruction.

Why not just have the destructor do this?
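i.e. something like this (sketch only; Mutex/Done/RequestsCV are the members from this header, and I'm assuming the worker std::thread lives in this class as Worker):

  FileASTThread::~FileASTThread() {
    {
      std::lock_guard<std::mutex> Lock(Mutex);
      Done = true;
    }
    RequestsCV.notify_all(); // wake the worker so it notices Done and exits
    if (Worker.joinable())
      Worker.join();
  }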


================
Comment at: clangd/TUScheduler.h:63
+      Context Ctx, ParseInputs Inputs,
+      UniqueFunction<void(Context, llvm::Optional<std::vector<DiagWithFixIts>>)>
+          OnUpdated);
----------------
Do we want to move the callback to be a clangd-global thing, rather than a per-update thing, before this patch or later?

(And will that change be mirrored here, or will FileASTThread::update still take a callback?)


================
Comment at: clangd/TUScheduler.h:65
+          OnUpdated);
+  void enqueueRead(UniqueFunction<void(llvm::Expected<InputsAndAST>)> Action);
+
----------------
Consider mirroring the runWithAST name.


================
Comment at: clangd/TUScheduler.h:70
+  mutable std::mutex Mutex;
+  bool Done;
+  std::condition_variable RequestsCV;
----------------
(No action required, just random thoughts.)

One day we should really get the threading annotations (REQUIRES_EXCLUSIVE_LOCK etc. macros) set up in LLVM; they're *implemented* in LLVM, for crying out loud...


================
Comment at: clangd/TUScheduler.h:72
+  std::condition_variable RequestsCV;
+  RequestQueue Requests;
+  std::shared_ptr<CppFile> File;
----------------
Having trouble seeing which parts of this are really used.
My understanding is that the ThreadPool+Queue abstraction is going away.
That would leave this as a deque + a lock + ... a "done" state of some kind? Is more of it still live?

I'd expect that 'done' state to be redundant with the one here... If it's really just a deque + a lock, I'd lean towards making it really thin, maybe even a plain struct, and putting it in this file, along the lines of the sketch below.
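A rough illustration of "really thin" (Mutex/Done/RequestsCV are the members from this header; the rest is guesswork, not a concrete proposal):

  std::deque<UniqueFunction<void()>> Requests; // guarded by Mutex

  // Worker loop:
  while (true) {
    UniqueFunction<void()> Req;
    {
      std::unique_lock<std::mutex> Lock(Mutex);
      RequestsCV.wait(Lock, [&] { return Done || !Requests.empty(); });
      if (Requests.empty())
        return; // Done was set and the queue is drained
      Req = std::move(Requests.front());
      Requests.pop_front();
    }
    Req(); // run the request outside the lock
  }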

But I'm not sure of your thoughts here.


Repository:
  rCTE Clang Tools Extra

https://reviews.llvm.org/D42573




