<div dir="ltr"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 18, 2015 at 5:29 AM, Yury Gribov <span dir="ltr"><<a href="mailto:y.gribov@samsung.com" target="_blank">y.gribov@samsung.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 02/06/2015 09:10 AM, Yury Gribov wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 02/04/2015 07:23 PM, David Blaikie wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Wed, Feb 4, 2015 at 3:56 AM, Yury Gribov <<a href="mailto:tetra2005@gmail.com" target="_blank">tetra2005@gmail.com</a>> wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Yes, I think enabling -Wglobal-constructors for ASan (and for all the<br>
</blockquote>
rest sanitizers) will be great.<br>
<br>
<br>
FYI it was relatively easy to get this working on Linux (with ~500 lines<br>
of changes). Unfortunately Windows compiler lacks too many necessary<br>
features: explicit initialization of array members, constexpr,<br>
unrestricted<br>
unions (all still missing in VS2013 and we still use VS2012). Having #if<br>
WINDOWS all over the place isn't an option as well so I'm afraid we<br>
are out<br>
of luck.<br>
<br>
</blockquote>
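For context, the three constructs in question look roughly like this;
all are rejected by VS2012/VS2013. A quick sketch, not code from the
patch:

    // constexpr constructor + explicit initialization of an array member
    struct A {
      int elements[4];
      constexpr A() : elements{} {}
    };

    // unrestricted union: legal even though A's user-provided
    // constructor makes it a non-trivial union member
    union U {
      char bytes[sizeof(A)];
      A a;
      constexpr U() : bytes{} {}
    };

    static A a;  // both constant-initialized: no global constructors
    static U u;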
>>>
>>> Perhaps there's a relatively small number of structures (probably
>>> templates) we could create to keep all of this logic (and the Windows
>>> workarounds/suppressions) in one place, so it could scale to all the
>>> use cases we have?
>>
>> Thanks for the idea, I'll check this next week.
>
> Folks,
>
> Here is a draft version (not a full-blown patch but rather a PoC). The
> main part is in sanitizer_internal_defs.h (the LinkerInitializedArray
> and LinkerInitializedStructArray classes). I also slightly changed the
> atomic types in sanitizer_atomic.h (volatile members can't be
> linker-initialized). Does this look sane in general?

+Timur for the Windows parts.

Overall, yes, although the missing constexpr is such a pain (sigh). Why
is it necessary to initialize all fields to 0 in the LinkerInitialized
constructor?
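For reference, the pattern at the heart of the patch boils down to the
following (simplified; the real version also carries a Windows fallback
and operator[]):

    template <typename T, unsigned long N>
    struct LinkerInitializedArray {
      T elements[N];
      // constexpr + empty brace-init means the object is
      // constant-initialized (all zeroes), so a global of this type
      // emits no dynamic initializer.
      constexpr LinkerInitializedArray() : elements{} {}
      T *data() { return &elements[0]; }
      const T *data() const { return &elements[0]; }
    };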
> I've done a full rebuild and the regression tests on Linux
> (compiler-rt built by trunk Clang, GCC 4.8 and Clang 3.1). I haven't
> done full testing on Windows and Mac, but at least the relevant parts
> of sanitizer_internal_defs.h compiled fine.
>
> If all this makes sense, how should I proceed with a refactoring of
> this scale? Is it mandatory to verify _all_ platforms, or can I to
> some extent rely on the respective maintainers to fix errors? The
> former would be quite a burden.

Well, at least for LLVM we have buildbots (for Linux/Mac/Windows), so
it's fine to test your change on the available platforms and then watch
the buildbots closely.

It would be awesome to roll this out gradually: first introduce
LinkerInitializedArray, then convert global objects to the constexpr
constructor one by one, and enable -Wglobal-constructors as the final
change.
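To make that final step concrete: -Wglobal-constructors flags any
global whose constructor has to run at startup, while a
constant-initialized (constexpr) global stays silent. A quick sketch of
the before/after shape (type names are made up for illustration, and
the exact diagnostic text may vary between Clang versions):

    enum LinkerInitialized { LINKER_INITIALIZED };  // as in sanitizer_common

    struct DynamicMutex {
      DynamicMutex() { /* runtime initialization */ }
    };
    // warning: declaration requires a global constructor
    static DynamicMutex bad_mu;

    struct StaticMutex {
      explicit constexpr StaticMutex(LinkerInitialized) : state_(0) {}
      unsigned state_;
    };
    static StaticMutex good_mu(LINKER_INITIALIZED);  // constant-initialized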
> -Y

commit e30fa79972637958739bf4d16758adeb95d301e2
Author: Yury Gribov <y.gribov@samsung.com>
Date:   Tue Feb 3 13:13:08 2015 +0300

    Draft support for -Wglobal-constructors in ASan.
<br>
diff --git a/lib/asan/CMakeLists.txt b/lib/asan/CMakeLists.txt<br>
index d4c5c17..560ef86 100644<br>
--- a/lib/asan/CMakeLists.txt<br>
+++ b/lib/asan/CMakeLists.txt<br>
@@ -33,6 +33,8 @@ include_directories(..)<br>
<br>
 set(ASAN_CFLAGS ${SANITIZER_COMMON_CFLAGS})<br>
 append_no_rtti_flag(ASAN_CFLAGS)<br>
+append_list_if(COMPILER_RT_HAS_WGLOBAL_CONSTRUCTORS_FLAG -Wglobal-constructors<br>
+               ASAN_CFLAGS)<br>
<br>
 set(ASAN_COMMON_DEFINITIONS<br>
   ASAN_HAS_EXCEPTIONS=1)<br>
@@ -86,6 +88,9 @@ else()<br>
   endforeach()<br>
 endif()<br>
<br>
+# FIXME: move initializer to separate file<br>
+set_source_files_properties(asan_rtl.cc PROPERTIES COMPILE_FLAGS -Wno-global-constructors)<br>
+<br>
 # Build ASan runtimes shipped with Clang.<br>
 add_custom_target(asan)<br>
 if(APPLE)<br>
diff --git a/lib/asan/asan_allocator.cc b/lib/asan/asan_allocator.cc<br>
index fd63ac6..229f118 100644<br>
--- a/lib/asan/asan_allocator.cc<br>
+++ b/lib/asan/asan_allocator.cc<br>
@@ -202,7 +202,7 @@ AllocatorCache *GetAllocatorCache(AsanThreadLocalMallocStorage *ms) {<br>
 QuarantineCache *GetQuarantineCache(AsanThreadLocalMallocStorage *ms) {<br>
   CHECK(ms);<br>
   CHECK_LE(sizeof(QuarantineCache), sizeof(ms->quarantine_cache));<br>
-  return reinterpret_cast<QuarantineCache *>(ms->quarantine_cache);<br>
+  return reinterpret_cast<QuarantineCache *>(ms->quarantine_cache.data());<br>
 }<br>
<br>
 void AllocatorOptions::SetFrom(const Flags *f, const CommonFlags *cf) {<br>
@@ -239,9 +239,15 @@ struct Allocator {<br>
   atomic_uint8_t alloc_dealloc_mismatch;<br>
<br>
   // ------------------- Initialization ------------------------<br>
-  explicit Allocator(LinkerInitialized)<br>
-      : quarantine(LINKER_INITIALIZED),<br>
-        fallback_quarantine_cache(LINKER_INITIALIZED) {}<br>
+  explicit constexpr Allocator(LinkerInitialized)<br>
+      : allocator(LINKER_INITIALIZED),<br>
+        quarantine(LINKER_INITIALIZED),<br>
+        fallback_mutex(LINKER_INITIALIZED),<br>
+        fallback_allocator_cache(LINKER_INITIALIZED),<br>
+        fallback_quarantine_cache(LINKER_INITIALIZED),<br>
+        min_redzone(0),<br>
+        max_redzone(0),<br>
+        alloc_dealloc_mismatch(0) {}<br>
<br>
   void CheckOptions(const AllocatorOptions &options) const {<br>
     CHECK_GE(options.min_redzone, 16);<br>
diff --git a/lib/asan/asan_allocator.h b/lib/asan/asan_allocator.h<br>
index 3208d1f..adec9ac 100644<br>
--- a/lib/asan/asan_allocator.h<br>
+++ b/lib/asan/asan_allocator.h<br>
@@ -142,14 +142,14 @@ typedef LargeMmapAllocator<AsanMapUnmapCallback> SecondaryAllocator;<br>
 typedef CombinedAllocator<PrimaryAllocator, AllocatorCache,<br>
     SecondaryAllocator> AsanAllocator;<br>
<br>
-<br>
 struct AsanThreadLocalMallocStorage {<br>
-  uptr quarantine_cache[16];<br>
+  LinkerInitializedArray<uptr, 16> quarantine_cache;<br>
   AllocatorCache allocator_cache;<br>
   void CommitBack();<br>
  private:<br>
   // These objects are allocated via mmap() and are zero-initialized.<br>
-  AsanThreadLocalMallocStorage() {}<br>
+  explicit constexpr AsanThreadLocalMallocStorage(LinkerInitialized)<br>
+    : allocator_cache(LINKER_INITIALIZED) {}<br>
 };<br>
<br>
 void *asan_memalign(uptr alignment, uptr size, BufferedStackTrace *stack,<br>
diff --git a/lib/asan/asan_globals.cc b/lib/asan/asan_globals.cc<br>
index c457195..47978db 100644<br>
--- a/lib/asan/asan_globals.cc<br>
+++ b/lib/asan/asan_globals.cc<br>
@@ -34,7 +34,7 @@ struct ListOfGlobals {<br>
 };<br>
<br>
 static BlockingMutex mu_for_globals(LINKER_INITIALIZED);<br>
-static LowLevelAllocator allocator_for_globals;<br>
+static LowLevelAllocator allocator_for_globals(LINKER_INITIALIZED);<br>
 static ListOfGlobals *list_of_all_globals;<br>
<br>
 static const int kDynamicInitGlobalsInitialCapacity = 512;<br>
diff --git a/lib/asan/asan_poisoning.cc b/lib/asan/asan_poisoning.cc<br>
index e2b1f4d..296afe4 100644<br>
--- a/lib/asan/asan_poisoning.cc<br>
+++ b/lib/asan/asan_poisoning.cc<br>
@@ -21,7 +21,7 @@<br>
<br>
 namespace __asan {<br>
<br>
-static atomic_uint8_t can_poison_memory;<br>
+static atomic_uint8_t can_poison_memory(0);<br>
<br>
 void SetCanPoisonMemory(bool value) {<br>
   atomic_store(&can_poison_memory, value, memory_order_release);<br>
diff --git a/lib/asan/asan_report.cc b/lib/asan/asan_report.cc<br>
index 8706d5d..8c17a52 100644<br>
--- a/lib/asan/asan_report.cc<br>
+++ b/lib/asan/asan_report.cc<br>
@@ -598,8 +598,8 @@ void DescribeThread(AsanThreadContext *context) {<br>
 class ScopedInErrorReport {<br>
  public:<br>
   explicit ScopedInErrorReport(ReportData *report = nullptr) {<br>
-    static atomic_uint32_t num_calls;<br>
-    static u32 reporting_thread_tid;<br>
+    static atomic_uint32_t num_calls(0);<br>
+    static u32 reporting_thread_tid(0);<br>
     if (atomic_fetch_add(&num_calls, 1, memory_order_relaxed) != 0) {<br>
       // Do not print more than one report, otherwise they will mix up.<br>
       // Error reporting functions shouldn't return at this situation, as<br>
diff --git a/lib/asan/asan_rtl.cc b/lib/asan/asan_rtl.cc<br>
index 0b2a23d..1bcb957 100644<br>
--- a/lib/asan/asan_rtl.cc<br>
+++ b/lib/asan/asan_rtl.cc<br>
@@ -37,7 +37,7 @@ namespace __asan {<br>
 uptr AsanMappingProfile[kAsanMappingProfileSize];<br>
<br>
 static void AsanDie() {<br>
-  static atomic_uint32_t num_calls;<br>
+  static atomic_uint32_t num_calls(0);<br>
   if (atomic_fetch_add(&num_calls, 1, memory_order_relaxed) != 0) {<br>
     // Don't die twice - run a busy loop.<br>
     while (1) { }<br>
diff --git a/lib/asan/asan_stack.cc b/lib/asan/asan_stack.cc<br>
index cf7a587..ea12e34 100644<br>
--- a/lib/asan/asan_stack.cc<br>
+++ b/lib/asan/asan_stack.cc<br>
@@ -17,7 +17,7 @@<br>
<br>
 namespace __asan {<br>
<br>
-static atomic_uint32_t malloc_context_size;<br>
+static atomic_uint32_t malloc_context_size(0);<br>
<br>
 void SetMallocContextSize(u32 size) {<br>
   atomic_store(&malloc_context_size, size, memory_order_release);<br>
diff --git a/lib/asan/asan_stats.cc b/lib/asan/asan_stats.cc<br>
index a78b7b1..6896c47 100644<br>
--- a/lib/asan/asan_stats.cc<br>
+++ b/lib/asan/asan_stats.cc<br>
@@ -31,7 +31,7 @@ void AsanStats::Clear() {<br>
 }<br>
<br>
 static void PrintMallocStatsArray(const char *prefix,<br>
-                                  uptr (&array)[kNumberOfSizeClasses]) {<br>
+                                  const uptr *array) {<br>
   Printf("%s", prefix);<br>
   for (uptr i = 0; i < kNumberOfSizeClasses; i++) {<br>
     if (!array[i]) continue;<br>
@@ -51,10 +51,11 @@ void AsanStats::Print() {<br>
              (mmaped-munmaped)>>20, mmaped>>20, munmaped>>20,<br>
              mmaps, munmaps);<br>
<br>
-  PrintMallocStatsArray("  mmaps   by size class: ", mmaped_by_size);<br>
-  PrintMallocStatsArray("  mallocs by size class: ", malloced_by_size);<br>
-  PrintMallocStatsArray("  frees   by size class: ", freed_by_size);<br>
-  PrintMallocStatsArray("  rfrees  by size class: ", really_freed_by_size);<br>
+  PrintMallocStatsArray("  mmaps   by size class: ", mmaped_by_size.data());<br>
+  PrintMallocStatsArray("  mallocs by size class: ", malloced_by_size.data());<br>
+  PrintMallocStatsArray("  frees   by size class: ", freed_by_size.data());<br>
+  PrintMallocStatsArray("  rfrees  by size class: ",<br>
+             really_freed_by_size.data());<br>
   Printf("Stats: malloc large: %zu small slow: %zu\n",<br>
              malloc_large, malloc_small_slow);<br>
 }<br>
diff --git a/lib/asan/asan_stats.h b/lib/asan/asan_stats.h<br>
index c66848d..4055154 100644<br>
--- a/lib/asan/asan_stats.h<br>
+++ b/lib/asan/asan_stats.h<br>
@@ -39,16 +39,32 @@ struct AsanStats {<br>
   uptr mmaped;<br>
   uptr munmaps;<br>
   uptr munmaped;<br>
-  uptr mmaped_by_size[kNumberOfSizeClasses];<br>
-  uptr malloced_by_size[kNumberOfSizeClasses];<br>
-  uptr freed_by_size[kNumberOfSizeClasses];<br>
-  uptr really_freed_by_size[kNumberOfSizeClasses];<br>
+  LinkerInitializedArray<uptr, kNumberOfSizeClasses> mmaped_by_size;<br>
+  LinkerInitializedArray<uptr, kNumberOfSizeClasses> malloced_by_size;<br>
+  LinkerInitializedArray<uptr, kNumberOfSizeClasses> freed_by_size;<br>
+  LinkerInitializedArray<uptr, kNumberOfSizeClasses> really_freed_by_size;<br>
<br>
   uptr malloc_large;<br>
   uptr malloc_small_slow;<br>
<br>
   // Ctor for global AsanStats (accumulated stats for dead threads).<br>
-  explicit AsanStats(LinkerInitialized) { }<br>
+  explicit constexpr AsanStats(LinkerInitialized)<br>
+    : mallocs(0),<br>
+      malloced(0),<br>
+      malloced_redzones(0),<br>
+      frees(0),<br>
+      freed(0),<br>
+      real_frees(0),<br>
+      really_freed(0),<br>
+      really_freed_redzones(0),<br>
+      reallocs(0),<br>
+      realloced(0),<br>
+      mmaps(0),<br>
+      mmaped(0),<br>
+      munmaps(0),<br>
+      munmaped(0),<br>
+      malloc_large(0),<br>
+      malloc_small_slow(0) {}<br>
   // Creates empty stats.<br>
   AsanStats();<br>
<br>
diff --git a/lib/asan/asan_thread.cc b/lib/asan/asan_thread.cc<br>
index 9af5706..212b490 100644<br>
--- a/lib/asan/asan_thread.cc<br>
+++ b/lib/asan/asan_thread.cc<br>
@@ -50,7 +50,7 @@ static ALIGNED(16) char thread_registry_placeholder[sizeof(ThreadRegistry)];<br>
 static ThreadRegistry *asan_thread_registry;<br>
<br>
 static BlockingMutex mu_for_thread_context(LINKER_INITIALIZED);<br>
-static LowLevelAllocator allocator_for_thread_context;<br>
+static LowLevelAllocator allocator_for_thread_context(LINKER_INITIALIZED);<br>
<br>
 static ThreadContextBase *GetAsanThreadContext(u32 tid) {<br>
   BlockingMutexLock lock(&mu_for_thread_context);<br>
diff --git a/lib/lsan/lsan_allocator.cc b/lib/lsan/lsan_allocator.cc<br>
index 96a2b04..34e6224 100644<br>
--- a/lib/lsan/lsan_allocator.cc<br>
+++ b/lib/lsan/lsan_allocator.cc<br>
@@ -43,8 +43,8 @@ typedef LargeMmapAllocator<> SecondaryAllocator;<br>
 typedef CombinedAllocator<PrimaryAllocator, AllocatorCache,<br>
           SecondaryAllocator> Allocator;<br>
<br>
-static Allocator allocator;<br>
-static THREADLOCAL AllocatorCache cache;<br>
+static Allocator allocator(LINKER_INITIALIZED);<br>
+static THREADLOCAL AllocatorCache cache(LINKER_INITIALIZED);<br>
<br>
 void InitializeAllocator() {<br>
   allocator.InitLinkerInitialized(common_flags()->allocator_may_return_null);<br>
diff --git a/lib/msan/msan_allocator.cc b/lib/msan/msan_allocator.cc<br>
index 698b6cd..f8761bc 100644<br>
--- a/lib/msan/msan_allocator.cc<br>
+++ b/lib/msan/msan_allocator.cc<br>
@@ -64,9 +64,9 @@ typedef LargeMmapAllocator<MsanMapUnmapCallback> SecondaryAllocator;<br>
 typedef CombinedAllocator<PrimaryAllocator, AllocatorCache,<br>
                           SecondaryAllocator> Allocator;<br>
<br>
-static Allocator allocator;<br>
-static AllocatorCache fallback_allocator_cache;<br>
-static SpinMutex fallback_mutex;<br>
+static Allocator allocator(LINKER_INITIALIZED);<br>
+static AllocatorCache fallback_allocator_cache(LINKER_INITIALIZED);<br>
+static SpinMutex fallback_mutex(LINKER_INITIALIZED);<br>
<br>
 static int inited = 0;<br>
<br>
diff --git a/lib/msan/msan_chained_origin_depot.cc b/lib/msan/msan_chained_origin_depot.cc<br>
index c21e8e8..1258c50 100644<br>
--- a/lib/msan/msan_chained_origin_depot.cc<br>
+++ b/lib/msan/msan_chained_origin_depot.cc<br>
@@ -94,7 +94,8 @@ struct ChainedOriginDepotNode {<br>
   typedef Handle handle_type;<br>
 };<br>
<br>
-static StackDepotBase<ChainedOriginDepotNode, 4, 20> chainedOriginDepot;<br>
+static StackDepotBase<ChainedOriginDepotNode, 4, 20><br>
+    chainedOriginDepot(LINKER_INITIALIZED);<br>
<br>
 StackDepotStats *ChainedOriginDepotGetStats() {<br>
   return chainedOriginDepot.GetStats();<br>
diff --git a/lib/sanitizer_common/sanitizer_allocator.cc b/lib/sanitizer_common/sanitizer_allocator.cc<br>
index 03b3e83..f2699b0 100644<br>
--- a/lib/sanitizer_common/sanitizer_allocator.cc<br>
+++ b/lib/sanitizer_common/sanitizer_allocator.cc<br>
@@ -47,11 +47,11 @@ InternalAllocator *internal_allocator() {<br>
 #else  // SANITIZER_GO<br>
<br>
 static ALIGNED(64) char internal_alloc_placeholder[sizeof(InternalAllocator)];<br>
-static atomic_uint8_t internal_allocator_initialized;<br>
-static StaticSpinMutex internal_alloc_init_mu;<br>
+static atomic_uint8_t internal_allocator_initialized(0);<br>
+static StaticSpinMutex internal_alloc_init_mu(LINKER_INITIALIZED);<br>
<br>
-static InternalAllocatorCache internal_allocator_cache;<br>
-static StaticSpinMutex internal_allocator_cache_mu;<br>
+static InternalAllocatorCache internal_allocator_cache(LINKER_INITIALIZED);<br>
+static StaticSpinMutex internal_allocator_cache_mu(LINKER_INITIALIZED);<br>
<br>
 InternalAllocator *internal_allocator() {<br>
   InternalAllocator *internal_allocator_instance =<br>
diff --git a/lib/sanitizer_common/sanitizer_allocator.h b/lib/sanitizer_common/sanitizer_allocator.h<br>
index b5105f8..5cee177 100644<br>
--- a/lib/sanitizer_common/sanitizer_allocator.h<br>
+++ b/lib/sanitizer_common/sanitizer_allocator.h<br>
@@ -208,39 +208,51 @@ typedef uptr AllocatorStatCounters[AllocatorStatCount];<br>
 // Per-thread stats, live in per-thread cache.<br>
 class AllocatorStats {<br>
  public:<br>
+  AllocatorStats() {}<br>
+<br>
+  explicit constexpr AllocatorStats(LinkerInitialized)<br>
+    : next_(0), prev_(0) {}<br>
+<br>
   void Init() {<br>
     internal_memset(this, 0, sizeof(*this));<br>
   }<br>
   void InitLinkerInitialized() {}<br>
<br>
   void Add(AllocatorStat i, uptr v) {<br>
-    v += atomic_load(&stats_[i], memory_order_relaxed);<br>
-    atomic_store(&stats_[i], v, memory_order_relaxed);<br>
+    v += atomic_load((atomic_uintptr_t *)&stats_[i], memory_order_relaxed);<br>
+    atomic_store((atomic_uintptr_t *)&stats_[i], v, memory_order_relaxed);<br>
   }<br>
<br>
   void Sub(AllocatorStat i, uptr v) {<br>
-    v = atomic_load(&stats_[i], memory_order_relaxed) - v;<br>
-    atomic_store(&stats_[i], v, memory_order_relaxed);<br>
+    v = atomic_load((atomic_uintptr_t *)&stats_[i], memory_order_relaxed) - v;<br>
+    atomic_store((atomic_uintptr_t *)&stats_[i], v, memory_order_relaxed);<br>
   }<br>
<br>
   void Set(AllocatorStat i, uptr v) {<br>
-    atomic_store(&stats_[i], v, memory_order_relaxed);<br>
+    atomic_store((atomic_uintptr_t *)&stats_[i], v, memory_order_relaxed);<br>
   }<br>
<br>
   uptr Get(AllocatorStat i) const {<br>
-    return atomic_load(&stats_[i], memory_order_relaxed);<br>
+    return atomic_load((const atomic_uintptr_t *)&stats_[i],<br>
+                       memory_order_relaxed);<br>
   }<br>
<br>
  private:<br>
   friend class AllocatorGlobalStats;<br>
   AllocatorStats *next_;<br>
   AllocatorStats *prev_;<br>
-  atomic_uintptr_t stats_[AllocatorStatCount];<br>
+  LinkerInitializedStructArray<atomic_uintptr_t, AllocatorStatCount> stats_;<br>
 };<br>
<br>
 // Global stats, used for aggregation and querying.<br>
 class AllocatorGlobalStats : public AllocatorStats {<br>
  public:<br>
+  AllocatorGlobalStats() {}<br>
+<br>
+  explicit constexpr AllocatorGlobalStats(LinkerInitialized)<br>
+    : AllocatorStats(LINKER_INITIALIZED),<br>
+      mu_(LINKER_INITIALIZED) {}<br>
+<br>
   void InitLinkerInitialized() {<br>
     next_ = this;<br>
     prev_ = this;<br>
@@ -321,6 +333,10 @@ class SizeClassAllocator64 {<br>
       SizeClassMap, MapUnmapCallback> ThisT;<br>
   typedef SizeClassAllocatorLocalCache<ThisT> AllocatorCache;<br>
<br>
+  SizeClassAllocator64() {}<br>
+<br>
+  explicit constexpr SizeClassAllocator64(LinkerInitialized) {}<br>
+<br>
   void Init() {<br>
     CHECK_EQ(kSpaceBeg,<br>
              reinterpret_cast<uptr>(Mprotect(kSpaceBeg, kSpaceSize)));<br>
@@ -579,8 +595,12 @@ class SizeClassAllocator64 {<br>
 template<u64 kSize><br>
 class FlatByteMap {<br>
  public:<br>
+  FlatByteMap() {}<br>
+<br>
+  explicit constexpr FlatByteMap(LinkerInitialized) {}<br>
+<br>
   void TestOnlyInit() {<br>
-    internal_memset(map_, 0, sizeof(map_));<br>
+    internal_memset(map_.data(), 0, sizeof(map_));<br>
   }<br>
<br>
   void set(uptr idx, u8 val) {<br>
@@ -594,7 +614,7 @@ class FlatByteMap {<br>
     return map_[idx];<br>
   }<br>
  private:<br>
-  u8 map_[kSize];<br>
+  LinkerInitializedArray<u8, kSize> map_;<br>
 };<br>
<br>
 // TwoLevelByteMap maps integers in range [0, kSize1*kSize2) to u8 values.<br>
@@ -693,6 +713,11 @@ class SizeClassAllocator32 {<br>
       SizeClassMap, kRegionSizeLog, ByteMap, MapUnmapCallback> ThisT;<br>
   typedef SizeClassAllocatorLocalCache<ThisT> AllocatorCache;<br>
<br>
+  SizeClassAllocator32() {}<br>
+<br>
+  explicit constexpr SizeClassAllocator32(LinkerInitialized)<br>
+    : possible_regions(LINKER_INITIALIZED) {}<br>
+<br>
   void Init() {<br>
     possible_regions.TestOnlyInit();<br>
     internal_memset(size_class_info_array, 0, sizeof(size_class_info_array));<br>
@@ -832,7 +857,12 @@ class SizeClassAllocator32 {<br>
   struct SizeClassInfo {<br>
     SpinMutex mutex;<br>
     IntrusiveList<Batch> free_list;<br>
-    char padding[kCacheLineSize - sizeof(uptr) - sizeof(IntrusiveList<Batch>)];<br>
+    static const uptr padding_size =<br>
+      kCacheLineSize - sizeof(uptr) - sizeof(IntrusiveList<Batch>);<br>
+    LinkerInitializedArray<char, padding_size> padding;<br>
+    constexpr SizeClassInfo()<br>
+      : mutex(LINKER_INITIALIZED),<br>
+        free_list(LINKER_INITIALIZED) {}<br>
   };<br>
   COMPILER_CHECK(sizeof(SizeClassInfo) == kCacheLineSize);<br>
<br>
@@ -902,6 +932,11 @@ struct SizeClassAllocatorLocalCache {<br>
   typedef SizeClassAllocator Allocator;<br>
   static const uptr kNumClasses = SizeClassAllocator::kNumClasses;<br>
<br>
+  SizeClassAllocatorLocalCache() {}<br>
+<br>
+  explicit constexpr SizeClassAllocatorLocalCache(LinkerInitialized)<br>
+    : stats_(LINKER_INITIALIZED) {}<br>
+<br>
   void Init(AllocatorGlobalStats *s) {<br>
     stats_.Init();<br>
     if (s)<br>
@@ -956,7 +991,7 @@ struct SizeClassAllocatorLocalCache {<br>
     uptr max_count;<br>
     void *batch[2 * SizeClassMap::kMaxNumCached];<br>
   };<br>
-  PerClass per_class_[kNumClasses];<br>
+  LinkerInitializedArray<PerClass, kNumClasses> per_class_;<br>
   AllocatorStats stats_;<br>
<br>
   void InitCache() {<br>
@@ -1006,6 +1041,17 @@ struct SizeClassAllocatorLocalCache {<br>
 template <class MapUnmapCallback = NoOpMapUnmapCallback><br>
 class LargeMmapAllocator {<br>
  public:<br>
+  LargeMmapAllocator() {}<br>
+<br>
+  explicit constexpr LargeMmapAllocator(LinkerInitialized)<br>
+    : page_size_(0),<br>
+      n_chunks_(0),<br>
+      min_mmap_(0),<br>
+      max_mmap_(0),<br>
+      chunks_sorted_(false),<br>
+      may_return_null_(0),<br>
+      mutex_(LINKER_INITIALIZED) {}<br>
+<br>
   void InitLinkerInitialized(bool may_return_null) {<br>
     page_size_ = GetPageSizeCached();<br>
     atomic_store(&may_return_null_, may_return_null, memory_order_relaxed);<br>
@@ -1149,7 +1195,7 @@ class LargeMmapAllocator {<br>
     if (!n) return 0;<br>
     if (!chunks_sorted_) {<br>
       // Do one-time sort. chunks_sorted_ is reset in Allocate/Deallocate.<br>
-      SortArray(reinterpret_cast<uptr*>(chunks_), n);<br>
+      SortArray(reinterpret_cast<uptr*>(chunks_.data()), n);<br>
       for (uptr i = 0; i < n; i++)<br>
         chunks_[i]->chunk_idx = i;<br>
       chunks_sorted_ = true;<br>
@@ -1240,12 +1286,18 @@ class LargeMmapAllocator {<br>
   }<br>
<br>
   uptr page_size_;<br>
-  Header *chunks_[kMaxNumChunks];<br>
+  LinkerInitializedArray<Header *, kMaxNumChunks> chunks_;<br>
   uptr n_chunks_;<br>
   uptr min_mmap_, max_mmap_;<br>
   bool chunks_sorted_;<br>
   struct Stats {<br>
-    uptr n_allocs, n_frees, currently_allocated, max_allocated, by_size_log[64];<br>
+    uptr n_allocs, n_frees, currently_allocated, max_allocated;<br>
+    LinkerInitializedArray<uptr, 64> by_size_log;<br>
+    constexpr Stats()<br>
+      : n_allocs(0),<br>
+        n_frees(0),<br>
+        currently_allocated(0),<br>
+        max_allocated(0) {}<br>
   } stats;<br>
   atomic_uint8_t may_return_null_;<br>
   SpinMutex mutex_;<br>
@@ -1261,6 +1313,15 @@ template <class PrimaryAllocator, class AllocatorCache,<br>
           class SecondaryAllocator>  // NOLINT<br>
 class CombinedAllocator {<br>
  public:<br>
+  CombinedAllocator() {}<br>
+<br>
+  explicit constexpr CombinedAllocator(LinkerInitialized)<br>
+    : primary_(LINKER_INITIALIZED),<br>
+      secondary_(LINKER_INITIALIZED),<br>
+      stats_(LINKER_INITIALIZED),<br>
+      may_return_null_(0),<br>
+      rss_limit_is_exceeded_(0) {}<br>
+<br>
   void InitCommon(bool may_return_null) {<br>
     primary_.Init();<br>
     atomic_store(&may_return_null_, may_return_null, memory_order_relaxed);<br>
diff --git a/lib/sanitizer_common/sanitizer_atomic.h b/lib/sanitizer_common/sanitizer_atomic.h<br>
index 6643c54..9091d74 100644<br>
--- a/lib/sanitizer_common/sanitizer_atomic.h<br>
+++ b/lib/sanitizer_common/sanitizer_atomic.h<br>
@@ -27,31 +27,25 @@ enum memory_order {<br>
   memory_order_seq_cst = 1 << 5<br>
 };<br>
<br>
-struct atomic_uint8_t {<br>
-  typedef u8 Type;<br>
-  volatile Type val_dont_use;<br>
-};<br>
-<br>
-struct atomic_uint16_t {<br>
-  typedef u16 Type;<br>
-  volatile Type val_dont_use;<br>
-};<br>
-<br>
-struct atomic_uint32_t {<br>
-  typedef u32 Type;<br>
-  volatile Type val_dont_use;<br>
-};<br>
-<br>
-struct atomic_uint64_t {<br>
-  typedef u64 Type;<br>
-  // On 32-bit platforms u64 is not necessary aligned on 8 bytes.<br>
-  volatile ALIGNED(8) Type val_dont_use;<br>
-};<br>
-<br>
-struct atomic_uintptr_t {<br>
-  typedef uptr Type;<br>
-  volatile Type val_dont_use;<br>
-};<br>
+#define DEFINE_ATOMIC_TYPE(name, base_type, attr) \<br>
+  struct name { \<br>
+    typedef base_type Type; \<br>
+    name() {} \<br>
+    explicit constexpr name(base_type v) \<br>
+      : val_(v) {} \<br>
+    volatile Type &val() volatile { return val_; } \<br>
+    const volatile Type &val() const volatile { return val_; } \<br>
+  private: \<br>
+    attr Type val_; \<br>
+  }<br>
+<br>
+DEFINE_ATOMIC_TYPE(atomic_uint8_t, u8,);  // NOLINT<br>
+DEFINE_ATOMIC_TYPE(atomic_uint16_t, u16,);  // NOLINT<br>
+DEFINE_ATOMIC_TYPE(atomic_uint32_t, u32,);  // NOLINT<br>
+DEFINE_ATOMIC_TYPE(atomic_uintptr_t, uptr,);  // NOLINT<br>
+<br>
+// On 32-bit platforms u64 is not necessarily aligned on 8 bytes.
+DEFINE_ATOMIC_TYPE(atomic_uint64_t, u64, ALIGNED(8));<br>
<br>
 }  // namespace __sanitizer<br>
<br>
diff --git a/lib/sanitizer_common/sanitizer_atomic_clang.h b/lib/sanitizer_common/sanitizer_atomic_clang.h<br>
index 38363e8..ea786a9 100644<br>
--- a/lib/sanitizer_common/sanitizer_atomic_clang.h<br>
+++ b/lib/sanitizer_common/sanitizer_atomic_clang.h<br>
@@ -48,7 +48,7 @@ INLINE typename T::Type atomic_fetch_add(volatile T *a,<br>
     typename T::Type v, memory_order mo) {<br>
   (void)mo;<br>
   DCHECK(!((uptr)a % sizeof(*a)));<br>
-  return __sync_fetch_and_add(&a->val_dont_use, v);<br>
+  return __sync_fetch_and_add(&a->val(), v);<br>
 }<br>
<br>
 template<typename T><br>
@@ -56,7 +56,7 @@ INLINE typename T::Type atomic_fetch_sub(volatile T *a,<br>
     typename T::Type v, memory_order mo) {<br>
   (void)mo;<br>
   DCHECK(!((uptr)a % sizeof(*a)));<br>
-  return __sync_fetch_and_add(&a->val_dont_use, -v);<br>
+  return __sync_fetch_and_add(&a->val(), -v);<br>
 }<br>
<br>
 template<typename T><br>
@@ -65,7 +65,7 @@ INLINE typename T::Type atomic_exchange(volatile T *a,<br>
   DCHECK(!((uptr)a % sizeof(*a)));<br>
   if (mo & (memory_order_release | memory_order_acq_rel | memory_order_seq_cst))<br>
     __sync_synchronize();<br>
-  v = __sync_lock_test_and_set(&a->val_dont_use, v);<br>
+  v = __sync_lock_test_and_set(&a->val(), v);<br>
   if (mo == memory_order_seq_cst)<br>
     __sync_synchronize();<br>
   return v;<br>
@@ -78,7 +78,7 @@ INLINE bool atomic_compare_exchange_strong(volatile T *a,<br>
                                            memory_order mo) {<br>
   typedef typename T::Type Type;<br>
   Type cmpv = *cmp;<br>
-  Type prev = __sync_val_compare_and_swap(&a->val_dont_use, cmpv, xchg);<br>
+  Type prev = __sync_val_compare_and_swap(&a->val(), cmpv, xchg);<br>
   if (prev == cmpv)<br>
     return true;<br>
   *cmp = prev;<br>
diff --git a/lib/sanitizer_common/sanitizer_atomic_clang_other.h b/lib/sanitizer_common/sanitizer_atomic_clang_other.h<br>
index 099b9f7..f4c0028 100644<br>
--- a/lib/sanitizer_common/sanitizer_atomic_clang_other.h<br>
+++ b/lib/sanitizer_common/sanitizer_atomic_clang_other.h<br>
@@ -32,21 +32,21 @@ INLINE typename T::Type atomic_load(<br>
   if (sizeof(*a) < 8 || sizeof(void*) == 8) {<br>
     // Assume that aligned loads are atomic.<br>
     if (mo == memory_order_relaxed) {<br>
-      v = a->val_dont_use;<br>
+      v = a->val();<br>
     } else if (mo == memory_order_consume) {<br>
       // Assume that processor respects data dependencies<br>
       // (and that compiler won't break them).<br>
       __asm__ __volatile__("" ::: "memory");<br>
-      v = a->val_dont_use;<br>
+      v = a->val();<br>
       __asm__ __volatile__("" ::: "memory");<br>
     } else if (mo == memory_order_acquire) {<br>
       __asm__ __volatile__("" ::: "memory");<br>
-      v = a->val_dont_use;<br>
+      v = a->val();<br>
       __sync_synchronize();<br>
     } else {  // seq_cst<br>
       // E.g. on POWER we need a hw fence even before the store.<br>
       __sync_synchronize();<br>
-      v = a->val_dont_use;<br>
+      v = a->val();<br>
       __sync_synchronize();<br>
     }<br>
   } else {<br>
@@ -54,7 +54,7 @@ INLINE typename T::Type atomic_load(<br>
     // Gross, but simple and reliable.<br>
     // Assume that it is not in read-only memory.<br>
     v = __sync_fetch_and_add(<br>
-        const_cast<typename T::Type volatile *>(&a->val_dont_use), 0);<br>
+        const_cast<typename T::Type volatile *>(&a->val()), 0);<br>
   }<br>
   return v;<br>
 }<br>
@@ -68,23 +68,23 @@ INLINE void atomic_store(volatile T *a, typename T::Type v, memory_order mo) {<br>
   if (sizeof(*a) < 8 || sizeof(void*) == 8) {<br>
     // Assume that aligned loads are atomic.<br>
     if (mo == memory_order_relaxed) {<br>
-      a->val_dont_use = v;<br>
+      a->val() = v;<br>
     } else if (mo == memory_order_release) {<br>
       __sync_synchronize();<br>
-      a->val_dont_use = v;<br>
+      a->val() = v;<br>
       __asm__ __volatile__("" ::: "memory");<br>
     } else {  // seq_cst<br>
       __sync_synchronize();<br>
-      a->val_dont_use = v;<br>
+      a->val() = v;<br>
       __sync_synchronize();<br>
     }<br>
   } else {<br>
     // 64-bit store on 32-bit platform.<br>
     // Gross, but simple and reliable.<br>
-    typename T::Type cmp = a->val_dont_use;<br>
+    typename T::Type cmp = a->val();<br>
     typename T::Type cur;<br>
     for (;;) {<br>
-      cur = __sync_val_compare_and_swap(&a->val_dont_use, cmp, v);<br>
+      cur = __sync_val_compare_and_swap(&a->val(), cmp, v);<br>
       if (cmp == v)<br>
         break;<br>
       cmp = cur;<br>
diff --git a/lib/sanitizer_common/sanitizer_atomic_clang_x86.h b/lib/sanitizer_common/sanitizer_atomic_clang_x86.h<br>
index 38feb29..6e4226f 100644<br>
--- a/lib/sanitizer_common/sanitizer_atomic_clang_x86.h<br>
+++ b/lib/sanitizer_common/sanitizer_atomic_clang_x86.h<br>
@@ -35,22 +35,22 @@ INLINE typename T::Type atomic_load(<br>
   if (sizeof(*a) < 8 || sizeof(void*) == 8) {<br>
     // Assume that aligned loads are atomic.<br>
     if (mo == memory_order_relaxed) {<br>
-      v = a->val_dont_use;<br>
+      v = a->val();<br>
     } else if (mo == memory_order_consume) {<br>
       // Assume that processor respects data dependencies<br>
       // (and that compiler won't break them).<br>
       __asm__ __volatile__("" ::: "memory");<br>
-      v = a->val_dont_use;<br>
+      v = a->val();<br>
       __asm__ __volatile__("" ::: "memory");<br>
     } else if (mo == memory_order_acquire) {<br>
       __asm__ __volatile__("" ::: "memory");<br>
-      v = a->val_dont_use;<br>
+      v = a->val();<br>
       // On x86 loads are implicitly acquire.<br>
       __asm__ __volatile__("" ::: "memory");<br>
     } else {  // seq_cst<br>
       // On x86 plain MOV is enough for seq_cst store.<br>
       __asm__ __volatile__("" ::: "memory");<br>
-      v = a->val_dont_use;<br>
+      v = a->val();<br>
       __asm__ __volatile__("" ::: "memory");<br>
     }<br>
   } else {<br>
@@ -60,7 +60,7 @@ INLINE typename T::Type atomic_load(<br>
         "movq %%mm0, %0;"  // (ptr could be read-only)<br>
         "emms;"            // Empty mmx state/Reset FP regs<br>
         : "=m" (v)<br>
-        : "m" (a->val_dont_use)<br>
+        : "m" (a->val())<br>
         : // mark the FP stack and mmx registers as clobbered<br>
           "st", "st(1)", "st(2)", "st(3)", "st(4)", "st(5)", "st(6)", "st(7)",<br>
 #ifdef __MMX__<br>
@@ -80,16 +80,16 @@ INLINE void atomic_store(volatile T *a, typename T::Type v, memory_order mo) {<br>
   if (sizeof(*a) < 8 || sizeof(void*) == 8) {<br>
     // Assume that aligned loads are atomic.<br>
     if (mo == memory_order_relaxed) {<br>
-      a->val_dont_use = v;<br>
+      a->val() = v;<br>
     } else if (mo == memory_order_release) {<br>
       // On x86 stores are implicitly release.<br>
       __asm__ __volatile__("" ::: "memory");<br>
-      a->val_dont_use = v;<br>
+      a->val() = v;<br>
       __asm__ __volatile__("" ::: "memory");<br>
     } else {  // seq_cst<br>
       // On x86 stores are implicitly release.<br>
       __asm__ __volatile__("" ::: "memory");<br>
-      a->val_dont_use = v;<br>
+      a->val() = v;<br>
       __sync_synchronize();<br>
     }<br>
   } else {<br>
@@ -98,7 +98,7 @@ INLINE void atomic_store(volatile T *a, typename T::Type v, memory_order mo) {<br>
         "movq %1, %%mm0;"  // Use mmx reg for 64-bit atomic moves<br>
         "movq %%mm0, %0;"<br>
         "emms;"            // Empty mmx state/Reset FP regs<br>
-        : "=m" (a->val_dont_use)<br>
+        : "=m" (a->val())<br>
         : "m" (v)<br>
         : // mark the FP stack and mmx registers as clobbered<br>
           "st", "st(1)", "st(2)", "st(3)", "st(4)", "st(5)", "st(6)", "st(7)",<br>
diff --git a/lib/sanitizer_common/sanitizer_atomic_msvc.h b/lib/sanitizer_common/sanitizer_atomic_msvc.h<br>
index 12ffef3..c52cde0 100644<br>
--- a/lib/sanitizer_common/sanitizer_atomic_msvc.h<br>
+++ b/lib/sanitizer_common/sanitizer_atomic_msvc.h<br>
@@ -73,10 +73,10 @@ INLINE typename T::Type atomic_load(<br>
   typename T::Type v;<br>
   // FIXME(dvyukov): 64-bit load is not atomic on 32-bits.<br>
   if (mo == memory_order_relaxed) {<br>
-    v = a->val_dont_use;<br>
+    v = a->val();<br>
   } else {<br>
     atomic_signal_fence(memory_order_seq_cst);<br>
-    v = a->val_dont_use;<br>
+    v = a->val();<br>
     atomic_signal_fence(memory_order_seq_cst);<br>
   }<br>
   return v;<br>
@@ -89,10 +89,10 @@ INLINE void atomic_store(volatile T *a, typename T::Type v, memory_order mo) {<br>
   DCHECK(!((uptr)a % sizeof(*a)));<br>
   // FIXME(dvyukov): 64-bit store is not atomic on 32-bits.<br>
   if (mo == memory_order_relaxed) {<br>
-    a->val_dont_use = v;<br>
+    a->val() = v;<br>
   } else {<br>
     atomic_signal_fence(memory_order_seq_cst);<br>
-    a->val_dont_use = v;<br>
+    a->val() = v;<br>
     atomic_signal_fence(memory_order_seq_cst);<br>
   }<br>
   if (mo == memory_order_seq_cst)<br>
@@ -104,7 +104,7 @@ INLINE u32 atomic_fetch_add(volatile atomic_uint32_t *a,<br>
   (void)mo;<br>
   DCHECK(!((uptr)a % sizeof(*a)));<br>
   return (u32)_InterlockedExchangeAdd(<br>
-      (volatile long*)&a->val_dont_use, (long)v);  // NOLINT<br>
+      (volatile long*)&a->val(), (long)v);  // NOLINT<br>
 }<br>
<br>
 INLINE uptr atomic_fetch_add(volatile atomic_uintptr_t *a,<br>
@@ -113,10 +113,10 @@ INLINE uptr atomic_fetch_add(volatile atomic_uintptr_t *a,<br>
   DCHECK(!((uptr)a % sizeof(*a)));<br>
 #ifdef _WIN64<br>
   return (uptr)_InterlockedExchangeAdd64(<br>
-      (volatile long long*)&a->val_dont_use, (long long)v);  // NOLINT<br>
+      (volatile long long*)&a->val(), (long long)v);  // NOLINT<br>
 #else<br>
   return (uptr)_InterlockedExchangeAdd(<br>
-      (volatile long*)&a->val_dont_use, (long)v);  // NOLINT<br>
+      (volatile long*)&a->val(), (long)v);  // NOLINT<br>
 #endif<br>
 }<br>
<br>
@@ -125,7 +125,7 @@ INLINE u32 atomic_fetch_sub(volatile atomic_uint32_t *a,<br>
   (void)mo;<br>
   DCHECK(!((uptr)a % sizeof(*a)));<br>
   return (u32)_InterlockedExchangeAdd(<br>
-      (volatile long*)&a->val_dont_use, -(long)v);  // NOLINT<br>
+      (volatile long*)&a->val(), -(long)v);  // NOLINT<br>
 }<br>
<br>
 INLINE uptr atomic_fetch_sub(volatile atomic_uintptr_t *a,<br>
@@ -134,10 +134,10 @@ INLINE uptr atomic_fetch_sub(volatile atomic_uintptr_t *a,<br>
   DCHECK(!((uptr)a % sizeof(*a)));<br>
 #ifdef _WIN64<br>
   return (uptr)_InterlockedExchangeAdd64(<br>
-      (volatile long long*)&a->val_dont_use, -(long long)v);  // NOLINT<br>
+      (volatile long long*)&a->val(), -(long long)v);  // NOLINT<br>
 #else<br>
   return (uptr)_InterlockedExchangeAdd(<br>
-      (volatile long*)&a->val_dont_use, -(long)v);  // NOLINT<br>
+      (volatile long*)&a->val(), -(long)v);  // NOLINT<br>
 #endif<br>
 }<br>
<br>
@@ -194,7 +194,7 @@ INLINE bool atomic_compare_exchange_strong(volatile atomic_uintptr_t *a,<br>
                                            memory_order mo) {<br>
   uptr cmpv = *cmp;<br>
   uptr prev = (uptr)_InterlockedCompareExchangePointer(<br>
-      (void*volatile*)&a->val_dont_use, (void*)xchg, (void*)cmpv);<br>
+      (void*volatile*)&a->val(), (void*)xchg, (void*)cmpv);<br>
   if (prev == cmpv)<br>
     return true;<br>
   *cmp = prev;<br>
@@ -207,7 +207,7 @@ INLINE bool atomic_compare_exchange_strong(volatile atomic_uint16_t *a,<br>
                                            memory_order mo) {<br>
   u16 cmpv = *cmp;<br>
   u16 prev = (u16)_InterlockedCompareExchange16(<br>
-      (volatile short*)&a->val_dont_use, (short)xchg, (short)cmpv);<br>
+      (volatile short*)&a->val(), (short)xchg, (short)cmpv);<br>
   if (prev == cmpv)<br>
     return true;<br>
   *cmp = prev;<br>
@@ -220,7 +220,7 @@ INLINE bool atomic_compare_exchange_strong(volatile atomic_uint32_t *a,<br>
                                            memory_order mo) {<br>
   u32 cmpv = *cmp;<br>
   u32 prev = (u32)_InterlockedCompareExchange(<br>
-      (volatile long*)&a->val_dont_use, (long)xchg, (long)cmpv);<br>
+      (volatile long*)&a->val(), (long)xchg, (long)cmpv);<br>
   if (prev == cmpv)<br>
     return true;<br>
   *cmp = prev;<br>
@@ -233,7 +233,7 @@ INLINE bool atomic_compare_exchange_strong(volatile atomic_uint64_t *a,<br>
                                            memory_order mo) {<br>
   u64 cmpv = *cmp;<br>
   u64 prev = (u64)_InterlockedCompareExchange64(<br>
-      (volatile long long*)&a->val_dont_use, (long long)xchg, (long long)cmpv);<br>
+      (volatile long long*)&a->val(), (long long)xchg, (long long)cmpv);<br>
   if (prev == cmpv)<br>
     return true;<br>
   *cmp = prev;<br>
diff --git a/lib/sanitizer_common/sanitizer_common.cc b/lib/sanitizer_common/sanitizer_common.cc<br>
index 489081e..8b19ccd 100644<br>
--- a/lib/sanitizer_common/sanitizer_common.cc<br>
+++ b/lib/sanitizer_common/sanitizer_common.cc<br>
@@ -21,7 +21,7 @@ namespace __sanitizer {<br>
<br>
 const char *SanitizerToolName = "SanitizerTool";<br>
<br>
-atomic_uint32_t current_verbosity;<br>
+atomic_uint32_t current_verbosity(0);<br>
<br>
 uptr GetPageSizeCached() {<br>
   static uptr PageSize;<br>
@@ -30,7 +30,7 @@ uptr GetPageSizeCached() {<br>
   return PageSize;<br>
 }<br>
<br>
-StaticSpinMutex report_file_mu;<br>
+StaticSpinMutex report_file_mu(LINKER_INITIALIZED);<br>
 ReportFile report_file = {&report_file_mu, kStderrFd, "", "", 0};<br>
<br>
 void RawWrite(const char *buffer) {<br>
@@ -272,7 +272,7 @@ bool LoadedModule::containsAddress(uptr address) const {<br>
   return false;<br>
 }<br>
<br>
-static atomic_uintptr_t g_total_mmaped;<br>
+static atomic_uintptr_t g_total_mmaped(0);<br>
<br>
 void IncreaseTotalMmap(uptr size) {<br>
   if (!common_flags()->mmap_limit_mb) return;<br>
diff --git a/lib/sanitizer_common/sanitizer_common.h b/lib/sanitizer_common/sanitizer_common.h<br>
index 720cd73..e26a6f0 100644<br>
--- a/lib/sanitizer_common/sanitizer_common.h<br>
+++ b/lib/sanitizer_common/sanitizer_common.h<br>
@@ -127,6 +127,8 @@ class InternalScopedString : public InternalScopedBuffer<char> {<br>
 // linker initialized.<br>
 class LowLevelAllocator {<br>
  public:<br>
+  explicit constexpr LowLevelAllocator(LinkerInitialized)<br>
+    : allocated_end_(0), allocated_current_(0) {}<br>
   // Requires an external lock.<br>
   void *Allocate(uptr size);<br>
  private:<br>
@@ -399,6 +401,9 @@ INLINE int ToLower(int c) {<br>
 template<typename T><br>
 class InternalMmapVectorNoCtor {<br>
  public:<br>
+  InternalMmapVectorNoCtor() {}<br>
+  explicit constexpr InternalMmapVectorNoCtor(LinkerInitialized)<br>
+    : data_(0), capacity_(0), size_(0) {}<br>
   void Initialize(uptr initial_capacity) {<br>
     capacity_ = Max(initial_capacity, (uptr)1);<br>
     size_ = 0;<br>
diff --git a/lib/sanitizer_common/sanitizer_common_interceptors.inc b/lib/sanitizer_common/sanitizer_common_interceptors.inc<br>
index 87c33e1..885524e 100644<br>
--- a/lib/sanitizer_common/sanitizer_common_interceptors.inc<br>
+++ b/lib/sanitizer_common/sanitizer_common_interceptors.inc<br>
@@ -4755,7 +4755,7 @@ INTERCEPTOR(int, timerfd_gettime, int fd, void *curr_value) {<br>
 // Linux kernel has a bug that leads to kernel deadlock if a process<br>
 // maps TBs of memory and then calls mlock().<br>
 static void MlockIsUnsupported() {<br>
-  static atomic_uint8_t printed;<br>
+  static atomic_uint8_t printed(0);<br>
   if (atomic_exchange(&printed, 1, memory_order_relaxed))<br>
     return;<br>
   VPrintf(1, "INFO: %s ignores mlock/mlockall/munlock/munlockall\n",<br>
diff --git a/lib/sanitizer_common/sanitizer_coverage_libcdep.cc b/lib/sanitizer_common/sanitizer_coverage_libcdep.cc<br>
index e8f42f6..be15419 100644<br>
--- a/lib/sanitizer_common/sanitizer_coverage_libcdep.cc<br>
+++ b/lib/sanitizer_common/sanitizer_coverage_libcdep.cc<br>
@@ -43,9 +43,10 @@<br>
 #include "sanitizer_symbolizer.h"<br>
 #include "sanitizer_flags.h"<br>
<br>
-static atomic_uint32_t dump_once_guard;  // Ensure that CovDump runs only once.<br>
+// Ensure that CovDump runs only once.<br>
+static atomic_uint32_t dump_once_guard(0);<br>
<br>
-static atomic_uintptr_t coverage_counter;<br>
+static atomic_uintptr_t coverage_counter(0);<br>
<br>
 // pc_array is the array containing the covered PCs.<br>
 // To make the pc_array thread- and async-signal-safe it has to be large enough.<br>
@@ -65,6 +66,21 @@ namespace __sanitizer {<br>
<br>
 class CoverageData {<br>
  public:<br>
+  explicit constexpr CoverageData(LinkerInitialized)<br>
+    : pc_array(0),<br>
+      pc_array_index(0),<br>
+      pc_array_size(0),<br>
+      pc_array_mapped_size(0),<br>
+      pc_fd(0),<br>
+      guard_array_vec(LINKER_INITIALIZED),<br>
+      cc_array(0),<br>
+      cc_array_index(0),<br>
+      cc_array_size(0),<br>
+      tr_event_array(0),<br>
+      tr_event_array_size(0),<br>
+      tr_event_pointer(0),<br>
+      mu(LINKER_INITIALIZED) {}<br>
+<br>
   void Init();<br>
   void Enable();<br>
   void Disable();<br>
@@ -134,7 +150,7 @@ class CoverageData {<br>
   void DirectOpen();<br>
 };<br>
<br>
-static CoverageData coverage_data;<br>
+static CoverageData coverage_data(LINKER_INITIALIZED);<br>
<br>
 void CovUpdateMapping(const char *path, uptr caller_pc = 0);<br>
<br>
diff --git a/lib/sanitizer_common/sanitizer_coverage_mapping_libcdep.cc b/lib/sanitizer_common/sanitizer_coverage_mapping_libcdep.cc<br>
index 6b5e91f..8f0ab7f 100644<br>
--- a/lib/sanitizer_common/sanitizer_coverage_mapping_libcdep.cc<br>
+++ b/lib/sanitizer_common/sanitizer_coverage_mapping_libcdep.cc<br>
@@ -60,7 +60,7 @@ struct CachedMapping {<br>
 };<br>
<br>
 static CachedMapping cached_mapping;<br>
-static StaticSpinMutex mapping_mu;<br>
+static StaticSpinMutex mapping_mu(LINKER_INITIALIZED);<br>
<br>
 void CovUpdateMapping(const char *coverage_dir, uptr caller_pc) {<br>
   if (!common_flags()->coverage_direct) return;<br>
diff --git a/lib/sanitizer_common/sanitizer_flag_parser.cc b/lib/sanitizer_common/sanitizer_flag_parser.cc<br>
index d125002..9f005a2 100644<br>
--- a/lib/sanitizer_common/sanitizer_flag_parser.cc<br>
+++ b/lib/sanitizer_common/sanitizer_flag_parser.cc<br>
@@ -20,7 +20,7 @@<br>
<br>
 namespace __sanitizer {<br>
<br>
-LowLevelAllocator FlagParser::Alloc;<br>
+LowLevelAllocator FlagParser::Alloc(LINKER_INITIALIZED);<br>
<br>
 class UnknownFlags {<br>
   static const int kMaxUnknownFlags = 20;<br>
diff --git a/lib/sanitizer_common/sanitizer_flags.cc b/lib/sanitizer_common/sanitizer_flags.cc<br>
index a296535..52e7aae 100644<br>
--- a/lib/sanitizer_common/sanitizer_flags.cc<br>
+++ b/lib/sanitizer_common/sanitizer_flags.cc<br>
@@ -28,7 +28,7 @@ struct FlagDescription {<br>
   FlagDescription *next;<br>
 };<br>
<br>
-IntrusiveList<FlagDescription> flag_descriptions;<br>
+IntrusiveList<FlagDescription> flag_descriptions(LINKER_INITIALIZED);<br>
<br>
 // If set, the tool will install its own SEGV signal handler by default.<br>
 #ifndef SANITIZER_NEEDS_SEGV<br>
diff --git a/lib/sanitizer_common/sanitizer_internal_defs.h b/lib/sanitizer_common/sanitizer_internal_defs.h<br>
index 2a0b41f..aee9c2e 100644<br>
--- a/lib/sanitizer_common/sanitizer_internal_defs.h<br>
+++ b/lib/sanitizer_common/sanitizer_internal_defs.h<br>
@@ -38,6 +38,10 @@<br>
 # define SANITIZER_SUPPORTS_WEAK_HOOKS 0<br>
 #endif<br>
<br>
+#if SANITIZER_WINDOWS<br>
+# define constexpr<br>
+#endif<br>
+<br>
 // We can use .preinit_array section on Linux to call sanitizer initialization<br>
 // functions very early in the process startup (unless PIC macro is defined).<br>
 // FIXME: do we have anything like this on Mac?<br>
@@ -332,4 +336,79 @@ extern "C" void* _ReturnAddress(void);<br>
     enable_fp = GET_CURRENT_FRAME();                               \<br>
   } while (0)<br>
<br>
+template <typename T, unsigned long N> struct LinkerInitializedArray {<br>
+  typedef T Type;<br>
+  Type elements[N];<br>
+<br>
+  constexpr LinkerInitializedArray()<br>
+#if SANITIZER_WINDOWS<br>
+  {<br>
+    internal_memset(elements, 0, sizeof(elements));<br>
+  }<br>
+#else<br>
+    : elements {} {}<br>
+#endif<br>
+<br>
+  Type *data() {<br>
+    return &elements[0];<br>
+  }<br>
+<br>
+  const Type *data() const {<br>
+    return &elements[0];<br>
+  }<br>
+<br>
+  Type &operator[](unsigned long i) {<br>
+    return elements[i];<br>
+  }<br>
+<br>
+  const Type &operator[](unsigned long i) const {<br>
+    return elements[i];<br>
+  }<br>
+};<br>
+<br>
+template <typename T, unsigned long N> struct LinkerInitializedStructArray {<br>
+  typedef T Type;<br>
+<br>
+#if SANITIZER_WINDOWS<br>
+  Type val[N];<br>
+<br>
+  constexpr LinkerInitializedStructArray() {<br>
+    internal_memset(val, 0, sizeof(val));<br>
+  }<br>
+<br>
+  Type *data() {<br>
+    return &val[0];<br>
+  }<br>
+<br>
+  const Type *data() const {<br>
+    return &val[0];<br>
+  }<br>
+#else<br>
+  union {<br>
+    char bytes[sizeof(T) * N];<br>
+    Type elements[N];<br>
+  } val;<br>
+<br>
+  constexpr LinkerInitializedStructArray()<br>
+    : val {} {}<br>
+<br>
+  Type *data() {<br>
+    return &val.elements[0];<br>
+  }<br>
+<br>
+  const Type *data() const {<br>
+    return &val.elements[0];<br>
+  }<br>
+#endif<br>
+<br>
+  Type &operator[](unsigned long i) {<br>
+    return data()[i];<br>
+  }<br>
+<br>
+  const Type &operator[](unsigned long i) const {<br>
+    return data()[i];<br>
+  }<br>
+};<br>
+<br>
 #endif  // SANITIZER_DEFS_H<br>
+<br>
diff --git a/lib/sanitizer_common/sanitizer_linux_libcdep.cc b/lib/sanitizer_common/sanitizer_linux_libcdep.cc<br>
index df42c36..5bfc667 100644<br>
--- a/lib/sanitizer_common/sanitizer_linux_libcdep.cc<br>
+++ b/lib/sanitizer_common/sanitizer_linux_libcdep.cc<br>
@@ -186,7 +186,7 @@ void InitTlsSize() {<br>
<br>
 #if (defined(__x86_64__) || defined(__i386__)) && SANITIZER_LINUX<br>
 // sizeof(struct thread) from glibc.<br>
-static atomic_uintptr_t kThreadDescriptorSize;<br>
+static atomic_uintptr_t kThreadDescriptorSize(0);<br>
<br>
 uptr ThreadDescriptorSize() {<br>
   uptr val = atomic_load(&kThreadDescriptorSize, memory_order_relaxed);<br>
diff --git a/lib/sanitizer_common/sanitizer_list.h b/lib/sanitizer_common/sanitizer_list.h<br>
index 6dd9c8f..3fb1522 100644<br>
--- a/lib/sanitizer_common/sanitizer_list.h<br>
+++ b/lib/sanitizer_common/sanitizer_list.h<br>
@@ -28,6 +28,13 @@ template<class Item><br>
 struct IntrusiveList {<br>
   friend class Iterator;<br>
<br>
+  IntrusiveList() {}<br>
+<br>
+  explicit constexpr IntrusiveList(LinkerInitialized)<br>
+    : size_(0),<br>
+      first_(0),<br>
+      last_(0) {}<br>
+<br>
   void clear() {<br>
     first_ = last_ = 0;<br>
     size_ = 0;<br>
diff --git a/lib/sanitizer_common/sanitizer_mutex.h b/lib/sanitizer_common/sanitizer_mutex.h<br>
index d06fc45..01890c2 100644<br>
--- a/lib/sanitizer_common/sanitizer_mutex.h<br>
+++ b/lib/sanitizer_common/sanitizer_mutex.h<br>
@@ -22,6 +22,11 @@ namespace __sanitizer {<br>
<br>
 class StaticSpinMutex {<br>
  public:<br>
+  StaticSpinMutex() {}<br>
+<br>
+  explicit constexpr StaticSpinMutex(LinkerInitialized)<br>
+    : state_(0) {}<br>
+<br>
   void Init() {<br>
     atomic_store(&state_, 0, memory_order_relaxed);<br>
   }<br>
@@ -66,6 +71,9 @@ class SpinMutex : public StaticSpinMutex {<br>
     Init();<br>
   }<br>
<br>
+  explicit constexpr SpinMutex(LinkerInitialized)<br>
+    : StaticSpinMutex(LINKER_INITIALIZED) {}<br>
+<br>
  private:<br>
   SpinMutex(const SpinMutex&);<br>
   void operator=(const SpinMutex&);<br>
@@ -96,6 +104,9 @@ class RWMutex {<br>
     atomic_store(&state_, kUnlocked, memory_order_relaxed);<br>
   }<br>
<br>
+  explicit constexpr RWMutex(LinkerInitialized)<br>
+    : state_(0) {}<br>
+<br>
   ~RWMutex() {<br>
     CHECK_EQ(atomic_load(&state_, memory_order_relaxed), kUnlocked);<br>
   }<br>
diff --git a/lib/sanitizer_common/sanitizer_persistent_allocator.cc b/lib/sanitizer_common/sanitizer_persistent_allocator.cc<br>
index 5fa533a..e7cd2e4 100644<br>
--- a/lib/sanitizer_common/sanitizer_persistent_allocator.cc<br>
+++ b/lib/sanitizer_common/sanitizer_persistent_allocator.cc<br>
@@ -14,6 +14,6 @@<br>
<br>
 namespace __sanitizer {<br>
<br>
-PersistentAllocator thePersistentAllocator;<br>
+PersistentAllocator thePersistentAllocator(LINKER_INITIALIZED);<br>
<br>
 }  // namespace __sanitizer<br>
diff --git a/lib/sanitizer_common/sanitizer_persistent_allocator.h b/lib/sanitizer_common/sanitizer_persistent_allocator.h<br>
index 326406b..54c05bf 100644<br>
--- a/lib/sanitizer_common/sanitizer_persistent_allocator.h<br>
+++ b/lib/sanitizer_common/sanitizer_persistent_allocator.h<br>
@@ -22,6 +22,10 @@ namespace __sanitizer {<br>
<br>
 class PersistentAllocator {<br>
  public:<br>
+  explicit constexpr PersistentAllocator(LinkerInitialized)<br>
+    : mtx(LINKER_INITIALIZED),<br>
+      region_pos(0),<br>
+      region_end(0) {}<br>
   void *alloc(uptr size);<br>
<br>
  private:<br>
diff --git a/lib/sanitizer_common/sanitizer_printf.cc b/lib/sanitizer_common/sanitizer_printf.cc<br>
index 3be6723..fd299e3 100644<br>
--- a/lib/sanitizer_common/sanitizer_printf.cc<br>
+++ b/lib/sanitizer_common/sanitizer_printf.cc<br>
@@ -29,7 +29,7 @@<br>
<br>
 namespace __sanitizer {<br>
<br>
-StaticSpinMutex CommonSanitizerReportMutex;<br>
+StaticSpinMutex CommonSanitizerReportMutex(LINKER_INITIALIZED);<br>
<br>
 static int AppendChar(char **buff, const char *buff_end, char c) {<br>
   if (*buff < buff_end) {<br>
diff --git a/lib/sanitizer_common/sanitizer_procmaps_common.cc b/lib/sanitizer_common/sanitizer_procmaps_common.cc<br>
index 2ec08d7..b2efde2 100644<br>
--- a/lib/sanitizer_common/sanitizer_procmaps_common.cc<br>
+++ b/lib/sanitizer_common/sanitizer_procmaps_common.cc<br>
@@ -20,7 +20,7 @@ namespace __sanitizer {<br>
<br>
 // Linker initialized.<br>
 ProcSelfMapsBuff MemoryMappingLayout::cached_proc_self_maps_;<br>
-StaticSpinMutex MemoryMappingLayout::cache_lock_;  // Linker initialized.<br>
+StaticSpinMutex MemoryMappingLayout::cache_lock_(LINKER_INITIALIZED);<br>
<br>
 static int TranslateDigit(char c) {<br>
   if (c >= '0' && c <= '9')<br>
diff --git a/lib/sanitizer_common/sanitizer_quarantine.h b/lib/sanitizer_common/sanitizer_quarantine.h<br>
index 404d375..0628064 100644<br>
--- a/lib/sanitizer_common/sanitizer_quarantine.h<br>
+++ b/lib/sanitizer_common/sanitizer_quarantine.h<br>
@@ -44,9 +44,13 @@ class Quarantine {<br>
  public:<br>
   typedef QuarantineCache<Callback> Cache;<br>
<br>
-  explicit Quarantine(LinkerInitialized)<br>
-      : cache_(LINKER_INITIALIZED) {<br>
-  }<br>
+  explicit constexpr Quarantine(LinkerInitialized)<br>
+      : max_size_(0),<br>
+        min_size_(0),<br>
+        max_cache_size_(0),<br>
+        cache_mutex_(LINKER_INITIALIZED),<br>
+        recycle_mutex_(LINKER_INITIALIZED),<br>
+        cache_(LINKER_INITIALIZED) {}<br>
<br>
   void Init(uptr size, uptr cache_size) {<br>
     atomic_store(&max_size_, size, memory_order_release);<br>
@@ -74,15 +78,15 @@ class Quarantine {<br>
<br>
  private:<br>
   // Read-only data.<br>
-  char pad0_[kCacheLineSize];<br>
+  LinkerInitializedArray<char, kCacheLineSize> pad0_;<br>
   atomic_uintptr_t max_size_;<br>
   atomic_uintptr_t min_size_;<br>
   uptr max_cache_size_;<br>
-  char pad1_[kCacheLineSize];<br>
+  LinkerInitializedArray<char, kCacheLineSize> pad1_;<br>
   SpinMutex cache_mutex_;<br>
   SpinMutex recycle_mutex_;<br>
   Cache cache_;<br>
-  char pad2_[kCacheLineSize];<br>
+  LinkerInitializedArray<char, kCacheLineSize> pad2_;<br>
<br>
   void NOINLINE Recycle(Callback cb) {<br>
     Cache tmp;<br>
@@ -116,8 +120,9 @@ class Quarantine {<br>
 template<typename Callback><br>
 class QuarantineCache {<br>
  public:<br>
-  explicit QuarantineCache(LinkerInitialized) {<br>
-  }<br>
+  explicit constexpr QuarantineCache(LinkerInitialized)<br>
+    : list_(LINKER_INITIALIZED),<br>
+      size_(0) {}<br>
<br>
   QuarantineCache()<br>
       : size_() {<br>
diff --git a/lib/sanitizer_common/sanitizer_stackdepot.cc b/lib/sanitizer_common/sanitizer_stackdepot.cc<br>
index 59b53f4..0e3fbb5 100644<br>
--- a/lib/sanitizer_common/sanitizer_stackdepot.cc<br>
+++ b/lib/sanitizer_common/sanitizer_stackdepot.cc<br>
@@ -102,7 +102,7 @@ void StackDepotHandle::inc_use_count_unsafe() {<br>
 // FIXME(dvyukov): this single reserved bit is used in TSan.<br>
 typedef StackDepotBase<StackDepotNode, 1, StackDepotNode::kTabSizeLog><br>
     StackDepot;<br>
-static StackDepot theDepot;<br>
+static StackDepot theDepot(LINKER_INITIALIZED);<br>
<br>
 StackDepotStats *StackDepotGetStats() {<br>
   return theDepot.GetStats();<br>
diff --git a/lib/sanitizer_common/sanitizer_stackdepotbase.h b/lib/sanitizer_common/sanitizer_stackdepotbase.h<br>
index 5de2e71..a8418b0 100644<br>
--- a/lib/sanitizer_common/sanitizer_stackdepotbase.h<br>
+++ b/lib/sanitizer_common/sanitizer_stackdepotbase.h<br>
@@ -23,6 +23,9 @@ namespace __sanitizer {<br>
 template <class Node, int kReservedBits, int kTabSizeLog><br>
 class StackDepotBase {<br>
  public:<br>
+  explicit constexpr StackDepotBase(LinkerInitialized)<br>
+    : stats { 0, 0 } {}<br>
+<br>
   typedef typename Node::args_type args_type;<br>
   typedef typename Node::handle_type handle_type;<br>
   // Maps stack trace to an unique id.<br>
@@ -48,8 +51,11 @@ class StackDepotBase {<br>
   static const int kPartSize = kTabSize / kPartCount;<br>
   static const int kMaxId = 1 << kPartShift;<br>
<br>
-  atomic_uintptr_t tab[kTabSize];   // Hash table of Node's.<br>
-  atomic_uint32_t seq[kPartCount];  // Unique id generators.<br>
+  // Hash table of Node's.<br>
+  LinkerInitializedStructArray<atomic_uintptr_t, kTabSize> tab;<br>
+<br>
+  // Unique id generators.<br>
+  LinkerInitializedStructArray<atomic_uint32_t, kPartCount> seq;<br>
<br>
   StackDepotStats stats;<br>
<br>
diff --git a/lib/sanitizer_common/sanitizer_symbolizer.cc b/lib/sanitizer_common/sanitizer_symbolizer.cc<br>
index 135720e..b7194d3 100644<br>
--- a/lib/sanitizer_common/sanitizer_symbolizer.cc<br>
+++ b/lib/sanitizer_common/sanitizer_symbolizer.cc<br>
@@ -67,8 +67,8 @@ void DataInfo::Clear() {<br>
 }<br>
<br>
 Symbolizer *Symbolizer::symbolizer_;<br>
-StaticSpinMutex Symbolizer::init_mu_;<br>
-LowLevelAllocator Symbolizer::symbolizer_allocator_;<br>
+StaticSpinMutex Symbolizer::init_mu_(LINKER_INITIALIZED);<br>
+LowLevelAllocator Symbolizer::symbolizer_allocator_(LINKER_INITIALIZED);<br>
<br>
 Symbolizer *Symbolizer::Disable() {<br>
   CHECK_EQ(0, symbolizer_);<br>
diff --git a/lib/sanitizer_common/sanitizer_tls_get_addr.cc b/lib/sanitizer_common/sanitizer_tls_get_addr.cc<br>
index 6142ce5..98033d4 100644<br>
--- a/lib/sanitizer_common/sanitizer_tls_get_addr.cc<br>
+++ b/lib/sanitizer_common/sanitizer_tls_get_addr.cc<br>
@@ -39,7 +39,7 @@ static __thread DTLS dtls;<br>
<br>
 // Make sure we properly destroy the DTLS objects:<br>
 // this counter should never get too large.<br>
-static atomic_uintptr_t number_of_live_dtls;<br>
+static atomic_uintptr_t number_of_live_dtls(0);<br>
<br>
 static const uptr kDestroyedThread = -1;<br>
<br>
diff --git a/lib/sanitizer_common/tests/sanitizer_atomic_test.cc b/lib/sanitizer_common/tests/sanitizer_atomic_test.cc<br>
index 56bcd35..c42a145 100644<br>
--- a/lib/sanitizer_common/tests/sanitizer_atomic_test.cc<br>
+++ b/lib/sanitizer_common/tests/sanitizer_atomic_test.cc<br>
@@ -41,11 +41,11 @@ void CheckStoreLoad() {<br>
     v |= v << 8;<br>
     v |= v << 16;<br>
     v |= v << 32;<br>
-    val.a.val_dont_use = (Type)v;<br>
+    val.a.val() = (Type)v;<br>
     EXPECT_EQ(atomic_load(&val.a, load_mo), (Type)v);<br>
-    val.a.val_dont_use = (Type)-1;<br>
+    val.a.val() = (Type)-1;<br>
     atomic_store(&val.a, (Type)v, store_mo);<br>
-    EXPECT_EQ(val.a.val_dont_use, (Type)v);<br>
+    EXPECT_EQ(val.a.val(), (Type)v);<br>
   }<br>
   EXPECT_EQ(val.magic0, (Type)-3);<br>
   EXPECT_EQ(val.magic1, (Type)-3);<br>
diff --git a/lib/sanitizer_common/tests/sanitizer_thread_registry_test.cc b/lib/sanitizer_common/tests/sanitizer_thread_registry_test.cc<br>
index 58c627a..0962c20 100644<br>
--- a/lib/sanitizer_common/tests/sanitizer_thread_registry_test.cc<br>
+++ b/lib/sanitizer_common/tests/sanitizer_thread_registry_test.cc<br>
@@ -21,7 +21,7 @@<br>
 namespace __sanitizer {<br>
<br>
 static BlockingMutex tctx_allocator_lock(LINKER_INITIALIZED);<br>
-static LowLevelAllocator tctx_allocator;<br>
+static LowLevelAllocator tctx_allocator(LINKER_INITIALIZED);<br>
<br>
 template<typename TCTX><br>
 static ThreadContextBase *GetThreadContext(u32 tid) {<br>
diff --git a/lib/tsan/rtl/tsan_fd.cc b/lib/tsan/rtl/tsan_fd.cc<br>
index d18502f..b61f9c0 100644<br>
--- a/lib/tsan/rtl/tsan_fd.cc<br>
+++ b/lib/tsan/rtl/tsan_fd.cc<br>
@@ -23,6 +23,8 @@ const int kTableSize = kTableSizeL1 * kTableSizeL2;<br>
<br>
 struct FdSync {<br>
   atomic_uint64_t rc;<br>
+  constexpr FdSync()<br>
+    : rc(0) {}<br>
 };<br>
<br>
 struct FdDesc {<br>
@@ -32,12 +34,14 @@ struct FdDesc {<br>
 };<br>
<br>
 struct FdContext {<br>
-  atomic_uintptr_t tab[kTableSizeL1];<br>
+  LinkerInitializedStructArray<atomic_uintptr_t, kTableSizeL1> tab;<br>
   // Addresses used for synchronization.<br>
   FdSync globsync;<br>
   FdSync filesync;<br>
   FdSync socksync;<br>
   u64 connectsync;<br>
+  explicit constexpr FdContext()<br>
+    : connectsync(0) {}<br>
 };<br>
<br>
 static FdContext fdctx;<br>
diff --git a/lib/tsan/rtl/tsan_interface_atomic.cc b/lib/tsan/rtl/tsan_interface_atomic.cc<br>
index 9b69951..acd3d1b 100644<br>
--- a/lib/tsan/rtl/tsan_interface_atomic.cc<br>
+++ b/lib/tsan/rtl/tsan_interface_atomic.cc<br>
@@ -42,7 +42,7 @@ __extension__ typedef __int128 a128;<br>
<br>
 #ifndef SANITIZER_GO<br>
 // Protects emulation of 128-bit atomic operations.<br>
-static StaticSpinMutex mutex128;<br>
+static StaticSpinMutex mutex128(LINKER_INITIALIZED);<br>
 #endif<br>
<br>
 // Part of ABI, do not change.<br>
diff --git a/lib/tsan/rtl/tsan_rtl.cc b/lib/tsan/rtl/tsan_rtl.cc<br>
index b3320aa..71f7d5f 100644<br>
--- a/lib/tsan/rtl/tsan_rtl.cc<br>
+++ b/lib/tsan/rtl/tsan_rtl.cc<br>
@@ -103,6 +103,8 @@ ThreadState::ThreadState(Context *ctx, int tid, int unique_id, u64 epoch,<br>
   // , ignore_interceptors()<br>
   , clock(tid, reuse_count)<br>
 #ifndef SANITIZER_GO<br>
+  , alloc_cache(LINKER_INITIALIZED)<br>
+  , internal_alloc_cache(LINKER_INITIALIZED)<br>
   , jmp_bufs(MBlockJmpBuf)<br>
 #endif<br>
   , tid(tid)<br>
diff --git a/lib/tsan/tests/rtl/tsan_test_util_linux.cc b/lib/tsan/tests/rtl/tsan_test_util_linux.cc<br>
index 9298bf0..d2e40c9 100644<br>
--- a/lib/tsan/tests/rtl/tsan_test_util_linux.cc<br>
+++ b/lib/tsan/tests/rtl/tsan_test_util_linux.cc<br>
@@ -77,7 +77,7 @@ bool OnReport(const ReportDesc *rep, bool suppressed) {<br>
<br>
 static void* allocate_addr(int size, int offset_from_aligned = 0) {<br>
   static uintptr_t foo;<br>
-  static atomic_uintptr_t uniq = {(uintptr_t)&foo};  // Some real address.<br>
+  static atomic_uintptr_t uniq((uintptr_t)&foo);  // Some real address.<br>
   const int kAlign = 16;<br>
   CHECK(offset_from_aligned < kAlign);<br>
   size = (size + 2 * kAlign) & ~(kAlign - 1);<br>
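One closing note for reviewers skimming the patch: with
DEFINE_ATOMIC_TYPE, the u32 atomic expands (by hand, roughly) to the
struct below. This is why zero values are now spelled explicitly, e.g.
"static atomic_uint32_t num_calls(0);", and why raw accesses go through
val(): the field itself is no longer volatile, since per the cover note
volatile members can't be linker-initialized.

    typedef unsigned u32;  // as in sanitizer_internal_defs.h

    struct atomic_uint32_t {
      typedef u32 Type;
      atomic_uint32_t() {}
      explicit constexpr atomic_uint32_t(u32 v) : val_(v) {}
      volatile Type &val() volatile { return val_; }
      const volatile Type &val() const volatile { return val_; }
     private:
      Type val_;
    };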
-- 
Alexey Samsonov
vonosmas@gmail.com