[compiler-rt] 7ec3087 - tsan: prevent pathological slowdown for spurious races

Dmitry Vyukov via llvm-commits llvm-commits at lists.llvm.org
Mon Jul 25 01:40:16 PDT 2022


Author: Dmitry Vyukov
Date: 2022-07-25T10:40:11+02:00
New Revision: 7ec308715c6ec0d07e62f7b2f8503cccacf9e96e

URL: https://github.com/llvm/llvm-project/commit/7ec308715c6ec0d07e62f7b2f8503cccacf9e96e
DIFF: https://github.com/llvm/llvm-project/commit/7ec308715c6ec0d07e62f7b2f8503cccacf9e96e.diff

LOG: tsan: prevent pathological slowdown for spurious races

Prevent the following pathological behavior:
Since memory access handling is not synchronized with DoReset,
a thread running concurrently with DoReset can leave a bogus shadow value
that is later falsely detected as a race. For such false races
RestoreStack returns false and we do not report them.
However, consider a thread that leaves a whole lot of such bogus values
which are later read by a whole lot of other threads.
This causes massive amounts of ReportRace calls and lots of
serialization. In very pathological cases the resulting slowdown
can be >100x. This is very unlikely, but it was presumably observed
in practice: https://github.com/google/sanitizers/issues/1552
If this happens, the previous access sid+epoch is the same for all of
these false races, because as soon as the thread tries to increment its
epoch it notices that DoReset has happened and stops producing bogus
shadow values. So last_spurious_race is used to remember the last
sid+epoch for which RestoreStack returned false, and is then used to
filter out races with the same sid+epoch early and cheaply.
It is of course possible that multiple threads leave multiple bogus
shadow values and all of them are read by lots of threads at the same
time. In that case last_spurious_race can only deduplicate a few races
from one thread, then a few from another, and so on. An alternative
would be to keep an array of such sid+epoch pairs, but we consider that
scenario even less likely.
Note: this can also lead to some rare false negatives:
1. When a legitimate access with the same sid+epoch participates in a
race as the "previous" memory access, it is wrongly filtered out.
2. When RestoreStack returns false for a legitimate memory access
because it was already evicted from the thread trace, we still remember
it in last_spurious_race. If another racing memory access from the same
thread happened in the same epoch but was stored in the next thread
trace part (which is still preserved in the thread trace), we wrongly
filter it out as well, even though RestoreStack would actually succeed
for that second memory access.
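
To make the mechanism concrete, here is a minimal standalone sketch of
the deduplication idea. It is not part of the patch: it models TSan's
internal atomics with <atomic>, invents a simplified sid+epoch layout,
and the names SidEpoch, RememberSpuriousRace and ClearSpuriousRace are
illustrative only.

  #include <atomic>
  #include <cstdint>

  using RawShadow = uint32_t;
  constexpr RawShadow kShadowEmpty = 0;

  // The single filter cell; accessed racily with relaxed atomics, like
  // ctx->last_spurious_race in the patch.
  std::atomic<RawShadow> last_spurious_race{kShadowEmpty};

  // Illustrative only: pretend sid+epoch occupy the low 22 bits of a
  // shadow value (the real TSan shadow layout differs).
  constexpr RawShadow SidEpoch(RawShadow s) { return s & 0x003fffffu; }

  // Fast check: was a race with this previous access already proven
  // spurious?
  bool SpuriousRace(RawShadow old) {
    RawShadow last = last_spurious_race.load(std::memory_order_relaxed);
    return last != kShadowEmpty && SidEpoch(last) == SidEpoch(old);
  }

  // Called when RestoreStack fails for `old`: remember its sid+epoch so
  // that subsequent identical false races exit early.
  void RememberSpuriousRace(RawShadow old) {
    last_spurious_race.store(old, std::memory_order_relaxed);
  }

  // Called on reset: pre-reset bogus values can no longer be produced,
  // and sid+epoch values are recycled afterwards, so a stale filter
  // could suppress real races of the next generation.
  void ClearSpuriousRace() {
    last_spurious_race.store(kShadowEmpty, std::memory_order_relaxed);
  }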

Reviewed By: melver

Differential Revision: https://reviews.llvm.org/D130269

Added: 
    

Modified: 
    compiler-rt/lib/tsan/rtl/tsan_rtl.cpp
    compiler-rt/lib/tsan/rtl/tsan_rtl.h
    compiler-rt/lib/tsan/rtl/tsan_rtl_access.cpp
    compiler-rt/lib/tsan/rtl/tsan_rtl_report.cpp
    compiler-rt/lib/tsan/rtl/tsan_shadow.h

Removed: 
    


################################################################################
diff --git a/compiler-rt/lib/tsan/rtl/tsan_rtl.cpp b/compiler-rt/lib/tsan/rtl/tsan_rtl.cpp
index e0a142c1de02..758f380b029e 100644
--- a/compiler-rt/lib/tsan/rtl/tsan_rtl.cpp
+++ b/compiler-rt/lib/tsan/rtl/tsan_rtl.cpp
@@ -204,6 +204,7 @@ static void DoResetImpl(uptr epoch) {
   }
   DPrintf("Resetting meta shadow...\n");
   ctx->metamap.ResetClocks();
+  StoreShadow(&ctx->last_spurious_race, Shadow::kEmpty);
   ctx->resetting = false;
 }
 

diff --git a/compiler-rt/lib/tsan/rtl/tsan_rtl.h b/compiler-rt/lib/tsan/rtl/tsan_rtl.h
index 1ca80f390949..7788c3508c38 100644
--- a/compiler-rt/lib/tsan/rtl/tsan_rtl.h
+++ b/compiler-rt/lib/tsan/rtl/tsan_rtl.h
@@ -314,6 +314,41 @@ struct Context {
 
   ThreadRegistry thread_registry;
 
+  // This is used to prevent a very unlikely but very pathological behavior.
+  // Since memory access handling is not synchronized with DoReset,
+  // a thread running concurrently with DoReset can leave a bogus shadow value
+  // that is later falsely detected as a race. For such false races
+  // RestoreStack returns false and we do not report them.
+  // However, consider a thread that leaves a whole lot of such bogus values
+  // which are later read by a whole lot of other threads.
+  // This causes massive amounts of ReportRace calls and lots of
+  // serialization. In very pathological cases the resulting slowdown
+  // can be >100x. This is very unlikely, but it was presumably observed
+  // in practice: https://github.com/google/sanitizers/issues/1552
+  // If this happens, the previous access sid+epoch is the same for all of
+  // these false races, because as soon as the thread tries to increment its
+  // epoch it notices that DoReset has happened and stops producing bogus
+  // shadow values. So last_spurious_race is used to remember the last
+  // sid+epoch for which RestoreStack returned false, and is then used to
+  // filter out races with the same sid+epoch early and cheaply.
+  // It is of course possible that multiple threads leave multiple bogus
+  // shadow values and all of them are read by lots of threads at the same
+  // time. In that case last_spurious_race can only deduplicate a few races
+  // from one thread, then a few from another, and so on. An alternative
+  // would be to keep an array of such sid+epoch pairs, but we consider that
+  // scenario even less likely.
+  // Note: this can also lead to some rare false negatives:
+  // 1. When a legitimate access with the same sid+epoch participates in a
+  // race as the "previous" memory access, it is wrongly filtered out.
+  // 2. When RestoreStack returns false for a legitimate memory access
+  // because it was already evicted from the thread trace, we still remember
+  // it in last_spurious_race. If another racing memory access from the same
+  // thread happened in the same epoch but was stored in the next thread
+  // trace part (which is still preserved in the thread trace), we wrongly
+  // filter it out as well, even though RestoreStack would actually succeed
+  // for that second memory access.
+  RawShadow last_spurious_race;
+
   Mutex racy_mtx;
   Vector<RacyStacks> racy_stacks;
   // Number of fired suppressions may be large enough.

diff --git a/compiler-rt/lib/tsan/rtl/tsan_rtl_access.cpp b/compiler-rt/lib/tsan/rtl/tsan_rtl_access.cpp
index 7d771bfaad7f..8b20984a0100 100644
--- a/compiler-rt/lib/tsan/rtl/tsan_rtl_access.cpp
+++ b/compiler-rt/lib/tsan/rtl/tsan_rtl_access.cpp
@@ -145,15 +145,6 @@ void TraceTime(ThreadState* thr) {
   TraceEvent(thr, ev);
 }
 
-ALWAYS_INLINE RawShadow LoadShadow(RawShadow* p) {
-  return static_cast<RawShadow>(
-      atomic_load((atomic_uint32_t*)p, memory_order_relaxed));
-}
-
-ALWAYS_INLINE void StoreShadow(RawShadow* sp, RawShadow s) {
-  atomic_store((atomic_uint32_t*)sp, static_cast<u32>(s), memory_order_relaxed);
-}
-
 NOINLINE void DoReportRace(ThreadState* thr, RawShadow* shadow_mem, Shadow cur,
                            Shadow old,
                            AccessType typ) SANITIZER_NO_THREAD_SAFETY_ANALYSIS {

diff --git a/compiler-rt/lib/tsan/rtl/tsan_rtl_report.cpp b/compiler-rt/lib/tsan/rtl/tsan_rtl_report.cpp
index 65fba470a795..444f210390cc 100644
--- a/compiler-rt/lib/tsan/rtl/tsan_rtl_report.cpp
+++ b/compiler-rt/lib/tsan/rtl/tsan_rtl_report.cpp
@@ -701,6 +701,11 @@ static bool IsFiredSuppression(Context *ctx, ReportType type, uptr addr) {
   return false;
 }
 
+static bool SpuriousRace(Shadow old) {
+  Shadow last(LoadShadow(&ctx->last_spurious_race));
+  return last.sid() == old.sid() && last.epoch() == old.epoch();
+}
+
 void ReportRace(ThreadState *thr, RawShadow *shadow_mem, Shadow cur, Shadow old,
                 AccessType typ0) {
   CheckedMutex::CheckNoLocks();
@@ -721,6 +726,8 @@ void ReportRace(ThreadState *thr, RawShadow *shadow_mem, Shadow cur, Shadow old,
       ((typ0 & kAccessAtomic) || (typ1 & kAccessAtomic)) &&
       !(typ0 & kAccessFree) && !(typ1 & kAccessFree))
     return;
+  if (SpuriousRace(old))
+    return;
 
   const uptr kMop = 2;
   Shadow s[kMop] = {cur, old};
@@ -760,9 +767,13 @@ void ReportRace(ThreadState *thr, RawShadow *shadow_mem, Shadow cur, Shadow old,
   Lock slot_lock(&ctx->slots[static_cast<uptr>(s[1].sid())].mtx);
   ThreadRegistryLock l0(&ctx->thread_registry);
   Lock slots_lock(&ctx->slot_mtx);
+  if (SpuriousRace(old))
+    return;
   if (!RestoreStack(EventType::kAccessExt, s[1].sid(), s[1].epoch(), addr1,
-                    size1, typ1, &tids[1], &traces[1], mset[1], &tags[1]))
+                    size1, typ1, &tids[1], &traces[1], mset[1], &tags[1])) {
+    StoreShadow(&ctx->last_spurious_race, old.raw());
     return;
+  }
 
   if (IsFiredSuppression(ctx, rep_typ, traces[1]))
     return;
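
Building on the sketch above, the resulting control flow in ReportRace
can be outlined as follows (simplified pseudo-C++; lock acquisition is
elided and RestoreStackFor is an assumed stand-in for the real
RestoreStack call, not TSan's API). Note that the filter is consulted
twice: once before the expensive lock acquisition as a cheap fast path,
and again under the locks, because another thread may have recorded the
same spurious sid+epoch while this thread was blocked:

  // Stand-in for the real RestoreStack call; always failing here
  // models the spurious-race path.
  bool RestoreStackFor(RawShadow) { return false; }

  void ReportRaceSketch(RawShadow old) {
    if (SpuriousRace(old))  // fast path: no locks held yet
      return;
    // ... acquire the slot lock, thread registry lock and slot_mtx ...
    if (SpuriousRace(old))  // re-check: the filter may have been set
      return;               // while we were blocked on the locks
    if (!RestoreStackFor(old)) {
      RememberSpuriousRace(old);  // future identical races bail out early
      return;
    }
    // ... proceed to build and print the report ...
  }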

diff --git a/compiler-rt/lib/tsan/rtl/tsan_shadow.h b/compiler-rt/lib/tsan/rtl/tsan_shadow.h
index b222acf9e6c5..6b8114ef5132 100644
--- a/compiler-rt/lib/tsan/rtl/tsan_shadow.h
+++ b/compiler-rt/lib/tsan/rtl/tsan_shadow.h
@@ -178,6 +178,16 @@ class Shadow {
 
 static_assert(sizeof(Shadow) == kShadowSize, "bad Shadow size");
 
+ALWAYS_INLINE RawShadow LoadShadow(RawShadow *p) {
+  return static_cast<RawShadow>(
+      atomic_load((atomic_uint32_t *)p, memory_order_relaxed));
+}
+
+ALWAYS_INLINE void StoreShadow(RawShadow *sp, RawShadow s) {
+  atomic_store((atomic_uint32_t *)sp, static_cast<u32>(s),
+               memory_order_relaxed);
+}
+
 }  // namespace __tsan
 
 #endif
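
The LoadShadow/StoreShadow helpers above are moved essentially verbatim
from tsan_rtl_access.cpp into tsan_shadow.h so that DoResetImpl and
SpuriousRace can reuse them. Relaxed ordering is sufficient because
shadow cells and last_spurious_race are read and written racily by
design and publish no other data. In portable C++ equivalent accessors
would look roughly like this (a sketch: TSan uses its own
sanitizer-common atomics and casts RawShadow* to its atomic type, which
this version sidesteps by taking the atomic cell directly):

  #include <atomic>
  #include <cstdint>

  enum class RawShadow : uint32_t {};

  inline RawShadow LoadShadow(const std::atomic<uint32_t>* p) {
    return static_cast<RawShadow>(p->load(std::memory_order_relaxed));
  }

  inline void StoreShadow(std::atomic<uint32_t>* sp, RawShadow s) {
    sp->store(static_cast<uint32_t>(s), std::memory_order_relaxed);
  }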

