[llvm] [LLD] Avoid non-deterministic relocations processing. (PR #107186)
Alexey Lapshin via llvm-commits
llvm-commits at lists.llvm.org
Tue Sep 3 23:00:34 PDT 2024
https://github.com/avl-llvm created https://github.com/llvm/llvm-project/pull/107186
TaskGroup has a "sequential" mode for deterministic processing. LLD uses it for relocation handling. However, because relocations are placed into different vectors (sharded by getThreadIndex), their actual locations can still be non-deterministic:
if (shard)
part.relrDyn->relocsVec[parallel::getThreadIndex()].push_back(
{&isec, isec.relocs().size() - 1});
To end up with the same locations, the same thread index must be used on every run. This patch changes "sequential" mode to always execute tasks on the thread with index 0.
Fixes https://github.com/llvm/llvm-project/issues/105958.
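For illustration only (not part of the patch; the function name and shard layout below are made up), here is a minimal sketch of the sharding pattern that loses determinism: sequential tasks run in spawn order, but each one appends to the shard owned by whichever worker thread happens to execute it, so the shard contents can differ between runs. With this change, sequential tasks always observe thread index 0, so the shards are filled identically every time.

#include "llvm/Support/Parallel.h"
#include <vector>

// One shard per worker thread; sequential tasks run one at a time, in spawn
// order, so the pushes do not race.
std::vector<std::vector<int>> shardInSpawnOrder() {
  std::vector<std::vector<int>> shards(
      llvm::parallel::strategy.compute_thread_count());
  {
    llvm::parallel::TaskGroup tg;
    for (int i = 0; i < 100; ++i)
      tg.spawn(
          [&shards, i] {
            // Before the patch, the executing thread (and therefore the
            // target shard) could vary from run to run; with the patch,
            // sequential tasks always run on the thread with index 0.
            shards[llvm::parallel::getThreadIndex()].push_back(i);
          },
          /*Sequential=*/true);
  } // ~TaskGroup waits for all spawned tasks to finish.
  return shards;
}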
From d8480929f9ec0465b2c06f9e0578da8d79b15d54 Mon Sep 17 00:00:00 2001
From: Alexey Lapshin <avl.lapshin at gmail.com>
Date: Tue, 3 Sep 2024 07:39:12 +0300
Subject: [PATCH] [LLD] Avoid non-deterministic relocations processing.
TaskGroup has a "sequential" mode for deterministic processing.
LLD uses it for relocation handling. However, because relocations are
placed into different vectors (sharded by getThreadIndex), their actual
locations can still be non-deterministic:
if (shard)
part.relrDyn->relocsVec[parallel::getThreadIndex()].push_back(
{&isec, isec.relocs().size() - 1});
To end up with the same locations, the same thread index must be used
on every run. This patch changes "sequential" mode to always execute
tasks on the thread with index 0.
Fixes https://github.com/llvm/llvm-project/issues/105958.
---
llvm/include/llvm/Support/Parallel.h | 5 ++---
llvm/lib/Support/Parallel.cpp | 13 ++-----------
llvm/unittests/Support/ParallelTest.cpp | 7 ++++++-
3 files changed, 10 insertions(+), 15 deletions(-)
diff --git a/llvm/include/llvm/Support/Parallel.h b/llvm/include/llvm/Support/Parallel.h
index 8170da98f15a8c..0e1f4fadcf4fe6 100644
--- a/llvm/include/llvm/Support/Parallel.h
+++ b/llvm/include/llvm/Support/Parallel.h
@@ -96,9 +96,8 @@ class TaskGroup {
// Spawn a task, but does not wait for it to finish.
// Tasks marked with \p Sequential will be executed
- // exactly in the order which they were spawned.
- // Note: Sequential tasks may be executed on different
- // threads, but strictly in sequential order.
+ // exactly in the order in which they were spawned and
+ // on the thread with index 0.
void spawn(std::function<void()> f, bool Sequential = false);
void sync() const { L.sync(); }
diff --git a/llvm/lib/Support/Parallel.cpp b/llvm/lib/Support/Parallel.cpp
index a3ef3d9c621b98..3ef3b65153937a 100644
--- a/llvm/lib/Support/Parallel.cpp
+++ b/llvm/lib/Support/Parallel.cpp
@@ -113,7 +113,7 @@ class ThreadPoolExecutor : public Executor {
private:
bool hasSequentialTasks() const {
- return !WorkQueueSequential.empty() && !SequentialQueueIsLocked;
+ return !WorkQueueSequential.empty() && (getThreadIndex() == 0);
}
bool hasGeneralTasks() const { return !WorkQueue.empty(); }
@@ -128,24 +128,15 @@ class ThreadPoolExecutor : public Executor {
});
if (Stop)
break;
- bool Sequential = hasSequentialTasks();
- if (Sequential)
- SequentialQueueIsLocked = true;
- else
- assert(hasGeneralTasks());
-
- auto &Queue = Sequential ? WorkQueueSequential : WorkQueue;
+ auto &Queue = hasSequentialTasks() ? WorkQueueSequential : WorkQueue;
auto Task = std::move(Queue.back());
Queue.pop_back();
Lock.unlock();
Task();
- if (Sequential)
- SequentialQueueIsLocked = false;
}
}
std::atomic<bool> Stop{false};
- std::atomic<bool> SequentialQueueIsLocked{false};
std::deque<std::function<void()>> WorkQueue;
std::deque<std::function<void()>> WorkQueueSequential;
std::mutex Mutex;
diff --git a/llvm/unittests/Support/ParallelTest.cpp b/llvm/unittests/Support/ParallelTest.cpp
index 0eafb9b401bee7..c6acd302945651 100644
--- a/llvm/unittests/Support/ParallelTest.cpp
+++ b/llvm/unittests/Support/ParallelTest.cpp
@@ -99,7 +99,12 @@ TEST(Parallel, TaskGroupSequentialFor) {
{
parallel::TaskGroup tg;
for (size_t Idx = 0; Idx < 500; Idx++)
- tg.spawn([&Count, Idx]() { EXPECT_EQ(Count++, Idx); }, true);
+ tg.spawn(
+ [&Count, Idx]() {
+ EXPECT_EQ(Count++, Idx);
+ EXPECT_EQ(llvm::parallel::getThreadIndex(), 0u);
+ },
+ true);
}
EXPECT_EQ(Count, 500ul);
}