[PATCH] D75927: [clangd] Use a separate RunningTask flag instead of leaving a broken request on top of the queue

Kadir Cetinkaya via Phabricator via cfe-commits cfe-commits at lists.llvm.org
Tue Mar 10 09:12:29 PDT 2020


kadircet created this revision.
kadircet added a reviewer: sammccall.
Herald added subscribers: cfe-commits, usaxena95, arphaman, jkorous, MaskRay, javed.absar, ilya-biryukov.
Herald added a project: clang.
sammccall added inline comments.


================
Comment at: clang-tools-extra/clangd/TUScheduler.cpp:279
   std::deque<Request> Requests; /* GUARDED_BY(Mutex) */
+  bool RunningTask = false;     /* GUARDED_BY(Mutex) */
   mutable std::condition_variable RequestsCV;
----------------
I'd consider making this an `llvm::Optional<Request>` and using it as storage too. Even though it won't be accessed directly outside the main loop, I think it makes it easier to reason about the code: it emphasizes the parallel with `Requests` and minimizes the spooky-flags-at-a-distance effect.
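
For illustration, a rough sketch of that alternative (a sketch only; the member name `CurrentRequest` and the exact restructuring of `run()` are assumptions, not part of this patch):

    // Doubles as storage for the in-flight request, replacing the bool flag.
    llvm::Optional<Request> CurrentRequest; /* GUARDED_BY(Mutex) */

    // In ASTWorker::run(), roughly:
    {
      std::unique_lock<std::mutex> Lock(Mutex);
      // ... wait for work as before ...
      CurrentRequest = std::move(Requests.front());
      Requests.pop_front();
    } // unlock Mutex
    // Only the worker thread writes CurrentRequest, so it can read it
    // without the lock while running the action.
    CurrentRequest->Action();
    {
      std::lock_guard<std::mutex> Lock(Mutex);
      CurrentRequest.reset();
    }

This keeps the queue and the in-flight slot visible under the same Mutex, so waiters like `blockUntilIdle()` can inspect both.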


================
Comment at: clang-tools-extra/clangd/TUScheduler.cpp:612
   // If there were no writes in the queue, the preamble is ready now.
   if (LastUpdate == Requests.rend()) {
     Lock.unlock();
----------------
what if the currently-running task is a write?
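
For example (a sketch only, assuming the in-flight request is stored as the `llvm::Optional<Request>` suggested above, and that `UpdateType` marks writes the same way it does for queued requests):

    // The preamble is only ready if neither the queue nor the
    // in-flight slot holds a pending write (update).
    bool WriteInFlight = CurrentRequest && CurrentRequest->UpdateType;
    if (LastUpdate == Requests.rend() && !WriteInFlight) {
      Lock.unlock();
      // ... report the preamble as ready ...
    }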


================
Comment at: clang-tools-extra/clangd/TUScheduler.cpp:721
     {
       std::unique_lock<std::mutex> Lock(Mutex);
       for (auto Wait = scheduleLocked(); !Wait.expired();
----------------
maybe assert there's no running task here?
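
Something like this at the top of the loop, perhaps (sketch; the assertion message is illustrative):

    std::unique_lock<std::mutex> Lock(Mutex);
    assert(!RunningTask &&
           "only the worker thread sets RunningTask, so it must be clear here");
    for (auto Wait = scheduleLocked(); !Wait.expired();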


================
Comment at: clang-tools-extra/clangd/unittests/TUSchedulerTests.cpp:376
   Notification Start;
-  updateWithDiags(S, Path, "a", WantDiagnostics::Yes, [&](std::vector<Diag>) {
-    ++Builds;
-    Start.wait();
-  });
+  updateWithDiags(S, Path, "b", WantDiagnostics::Auto,
+                  [&](std::vector<Diag>) { ++Builds; });
----------------
Does this deliberately match the "b" below? If not, change it back to something unique; if so, it probably needs a comment.


================
Comment at: clang-tools-extra/clangd/unittests/TUSchedulerTests.cpp:379
+  ASSERT_TRUE(S.blockUntilIdle(timeoutSeconds(10)));
+  S.runWithAST(
+      "invalidatable-but-running", Path,
----------------
This looks racy to me. What guarantees that the worker thread pulls this item off the queue and into the "running" slot before the second updateWithDiags?
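
A hypothetical way to close that race: have the action signal a second `Notification` once it is actually running, and wait on it before enqueueing the next update (sketch; `Running` is an assumed name, using clangd's `Notification` helper):

    Notification Running;
    S.runWithAST(
        "invalidatable-but-running", Path,
        [&](llvm::Expected<InputsAndAST> AST) {
          Running.notify(); // the request is off the queue and running now
          Start.wait();
          ++Actions;
          EXPECT_TRUE(bool(AST)) << "Shouldn't be invalidated, because running.";
        },
        TUScheduler::InvalidateOnUpdate);
    Running.wait(); // don't issue the next update until the action has started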


This helps us prevent races when the scheduler (or any other thread) tries
to read a request while it is still running.


Repository:
  rG LLVM Github Monorepo

https://reviews.llvm.org/D75927

Files:
  clang-tools-extra/clangd/TUScheduler.cpp
  clang-tools-extra/clangd/unittests/TUSchedulerTests.cpp


Index: clang-tools-extra/clangd/unittests/TUSchedulerTests.cpp
===================================================================
--- clang-tools-extra/clangd/unittests/TUSchedulerTests.cpp
+++ clang-tools-extra/clangd/unittests/TUSchedulerTests.cpp
@@ -373,10 +373,17 @@
   std::atomic<int> Builds(0), Actions(0);
 
   Notification Start;
-  updateWithDiags(S, Path, "a", WantDiagnostics::Yes, [&](std::vector<Diag>) {
-    ++Builds;
-    Start.wait();
-  });
+  updateWithDiags(S, Path, "b", WantDiagnostics::Auto,
+                  [&](std::vector<Diag>) { ++Builds; });
+  ASSERT_TRUE(S.blockUntilIdle(timeoutSeconds(10)));
+  S.runWithAST(
+      "invalidatable-but-running", Path,
+      [&](llvm::Expected<InputsAndAST> AST) {
+        Start.wait();
+        ++Actions;
+        EXPECT_TRUE(bool(AST)) << "Shouldn't be invalidated, because running.";
+      },
+      TUScheduler::InvalidateOnUpdate);
   S.runWithAST(
       "invalidatable", Path,
       [&](llvm::Expected<InputsAndAST> AST) {
@@ -421,7 +428,7 @@
   ASSERT_TRUE(S.blockUntilIdle(timeoutSeconds(10)));
 
   EXPECT_EQ(2, Builds.load()) << "Middle build should be skipped";
-  EXPECT_EQ(4, Actions.load()) << "All actions should run (some with error)";
+  EXPECT_EQ(5, Actions.load()) << "All actions should run (some with error)";
 }
 
 TEST_F(TUSchedulerTests, ManyUpdates) {
Index: clang-tools-extra/clangd/TUScheduler.cpp
===================================================================
--- clang-tools-extra/clangd/TUScheduler.cpp
+++ clang-tools-extra/clangd/TUScheduler.cpp
@@ -276,6 +276,7 @@
   /// Set to true to signal run() to finish processing.
   bool Done;                    /* GUARDED_BY(Mutex) */
   std::deque<Request> Requests; /* GUARDED_BY(Mutex) */
+  bool RunningTask = false;     /* GUARDED_BY(Mutex) */
   mutable std::condition_variable RequestsCV;
   /// Guards the callback that publishes results of AST-related computations
   /// (diagnostics, highlightings) and file statuses.
@@ -370,7 +371,8 @@
 #ifndef NDEBUG
   std::lock_guard<std::mutex> Lock(Mutex);
   assert(Done && "handle was not destroyed");
-  assert(Requests.empty() && "unprocessed requests when destroying ASTWorker");
+  assert(Requests.empty() && !RunningTask &&
+         "unprocessed requests when destroying ASTWorker");
 #endif
 }
 
@@ -745,7 +747,8 @@
         wait(Lock, RequestsCV, Wait);
       }
       Req = std::move(Requests.front());
-      // Leave it on the queue for now, so waiters don't see an empty queue.
+      Requests.pop_front();
+      RunningTask = true;
     } // unlock Mutex
 
     {
@@ -763,7 +766,7 @@
     bool IsEmpty = false;
     {
       std::lock_guard<std::mutex> Lock(Mutex);
-      Requests.pop_front();
+      RunningTask = false;
       IsEmpty = Requests.empty();
     }
     if (IsEmpty)
@@ -843,7 +846,8 @@
 
 bool ASTWorker::blockUntilIdle(Deadline Timeout) const {
   std::unique_lock<std::mutex> Lock(Mutex);
-  return wait(Lock, RequestsCV, Timeout, [&] { return Requests.empty(); });
+  return wait(Lock, RequestsCV, Timeout,
+              [&] { return Requests.empty() && !RunningTask; });
 }
 
 // Render a TUAction to a user-facing string representation.

