[compiler-rt] 01bf29b - [sanitizers] Optimize locking StackDepotBase for fork (#76280)
via llvm-commits
llvm-commits at lists.llvm.org
Tue Dec 26 11:21:10 PST 2023
Author: Vitaly Buka
Date: 2023-12-26T11:21:06-08:00
New Revision: 01bf29b9d04e047096b34acc7e4ad1aff97f1a43
URL: https://github.com/llvm/llvm-project/commit/01bf29b9d04e047096b34acc7e4ad1aff97f1a43
DIFF: https://github.com/llvm/llvm-project/commit/01bf29b9d04e047096b34acc7e4ad1aff97f1a43.diff
LOG: [sanitizers] Optimize locking StackDepotBase for fork (#76280)
Locking StackDepotBase fully is very expensive, as 2^20 buckets need to
be locked. Instead, we do not lock the buckets at all and, in the child,
only unlock the buckets that must be unlocked to avoid deadlocks. This
increases the chance of a data race in which a value with the same hash
is inserted into the table twice and one copy is lost. However, that
costs only a small amount of additional memory in the forked process.
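For context, each bucket in `tab` packs a lock bit and a chain head into a
single atomic word. Below is a minimal sketch of that scheme, assuming the
mask layout shown; the real kLockMask/kUnlockMask are defined in
sanitizer_stackdepotbase.h and may differ.

#include <atomic>
#include <cstdint>

// Assumed layout: the top bit of a bucket doubles as a spin lock, the
// remaining bits hold the head of the bucket's hash chain.
static constexpr uint32_t kLockMask = 1u << 31;
static constexpr uint32_t kUnlockMask = ~kLockMask;

// Spin until the lock bit is clear, then set it; returns the chain
// head that was stored in the unlocked word.
uint32_t lock_bucket(std::atomic<uint32_t> *p) {
  for (;;) {
    uint32_t s = p->load(std::memory_order_relaxed);
    if (!(s & kLockMask) &&
        p->compare_exchange_weak(s, s | kLockMask,
                                 std::memory_order_acquire))
      return s;
  }
}

// Publish a (possibly updated) chain head and clear the lock bit.
void unlock_bucket(std::atomic<uint32_t> *p, uint32_t head) {
  p->store(head & kUnlockMask, std::memory_order_release);
}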
Added:
Modified:
compiler-rt/lib/sanitizer_common/sanitizer_flat_map.h
compiler-rt/lib/sanitizer_common/sanitizer_stackdepotbase.h
Removed:
################################################################################
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_flat_map.h b/compiler-rt/lib/sanitizer_common/sanitizer_flat_map.h
index 8bb8304910c73f..d246781fe1df5e 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_flat_map.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_flat_map.h
@@ -109,6 +109,10 @@ class TwoLevelMap {
return *AddressSpaceView::LoadWritable(&map2[idx % kSize2]);
}
+ void Lock() SANITIZER_NO_THREAD_SAFETY_ANALYSIS { mu_.Lock(); }
+
+ void Unlock() SANITIZER_NO_THREAD_SAFETY_ANALYSIS { mu_.Unlock(); }
+
private:
constexpr uptr MmapSize() const {
return RoundUpTo(kSize2 * sizeof(T), GetPageSizeCached());
diff --git a/compiler-rt/lib/sanitizer_common/sanitizer_stackdepotbase.h b/compiler-rt/lib/sanitizer_common/sanitizer_stackdepotbase.h
index 21d57d9ab2a911..279bc5de3bb93b 100644
--- a/compiler-rt/lib/sanitizer_common/sanitizer_stackdepotbase.h
+++ b/compiler-rt/lib/sanitizer_common/sanitizer_stackdepotbase.h
@@ -161,18 +161,32 @@ StackDepotBase<Node, kReservedBits, kTabSizeLog>::Get(u32 id) {
template <class Node, int kReservedBits, int kTabSizeLog>
void StackDepotBase<Node, kReservedBits, kTabSizeLog>::LockBeforeFork() {
- for (int i = 0; i < kTabSize; ++i) {
- lock(&tab[i]);
- }
+ // Do not lock the hash table. Locking it is very expensive and not really
+ // needed: the parent process will neither lock nor unlock, while the child
+ // process risks deadlocking on buckets that were locked at fork time. To
+ // avoid that deadlock we unlock every locked bucket in `UnlockAfterFork`.
+ // This may affect consistency of the hash table, but the only issue is that
+ // a few items inserted by the parent will not be found by the child, and
+ // the child may insert them again, wasting some space in `stackStore`.
+
+ // We still need to lock nodes.
+ nodes.Lock();
}
template <class Node, int kReservedBits, int kTabSizeLog>
void StackDepotBase<Node, kReservedBits, kTabSizeLog>::UnlockAfterFork(
bool fork_child) {
+ nodes.Unlock();
+
+ // Only unlock in the child process to avoid deadlock. See `LockBeforeFork`.
+ if (!fork_child)
+ return;
+
for (int i = 0; i < kTabSize; ++i) {
atomic_uint32_t *p = &tab[i];
uptr s = atomic_load(p, memory_order_relaxed);
- unlock(p, s & kUnlockMask);
+ if (s & kLockMask)
+ unlock(p, s & kUnlockMask);
}
}
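Putting the pieces together, here is a minimal sketch of the resulting fork
protocol. The DepotSketch type, mask values, and hook bodies are illustrative
assumptions, not the sanitizer's real code.

#include <atomic>
#include <cstdint>

struct DepotSketch {
  static constexpr int kTabSize = 1 << 20;
  static constexpr uint32_t kLockMask = 1u << 31;  // assumed layout
  static constexpr uint32_t kUnlockMask = ~kLockMask;
  std::atomic<uint32_t> tab[kTabSize];

  void LockBeforeFork() {
    // Only the node map's mutex is taken here (nodes.Lock() in the
    // real code); the 2^20 buckets are deliberately left untouched.
  }

  void UnlockAfterFork(bool fork_child) {
    // nodes.Unlock() happens here in the real code.
    if (!fork_child)
      return;  // the parent locked no buckets, so it has nothing to undo
    for (int i = 0; i < kTabSize; ++i) {
      uint32_t s = tab[i].load(std::memory_order_relaxed);
      if (s & kLockMask)  // some thread held this bucket across fork
        tab[i].store(s & kUnlockMask, std::memory_order_release);
    }
  }
};

DepotSketch depot;  // global, so the 4 MiB bucket table is zero-initialized

In a pthread_atfork-style setup, LockBeforeFork would run in the prepare hook
and UnlockAfterFork in both the parent and child hooks, with fork_child set
accordingly.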