[PATCH] [ASan] Make BlockingMutex really linker initialized.

Alexey Samsonov vonosmas at gmail.com
Thu Feb 19 18:19:06 PST 2015


On Wed, Feb 18, 2015 at 5:29 AM, Yury Gribov <y.gribov at samsung.com> wrote:

> On 02/06/2015 09:10 AM, Yury Gribov wrote:
>
>> On 02/04/2015 07:23 PM, David Blaikie wrote:
>>
>>> On Wed, Feb 4, 2015 at 3:56 AM, Yury Gribov <tetra2005 at gmail.com> wrote:
>>>
>>>>> Yes, I think enabling -Wglobal-constructors for ASan (and for all the
>>>>> rest of the sanitizers) will be great.
>>>>
>>>>
>>>> FYI it was relatively easy to get this working on Linux (with ~500 lines
>>>> of changes). Unfortunately the Windows compiler lacks too many of the
>>>> necessary features: explicit initialization of array members, constexpr,
>>>> and unrestricted unions (all still missing in VS2013, and we still use
>>>> VS2012). Having #if WINDOWS all over the place isn't an option either, so
>>>> I'm afraid we are out of luck.
>>>>
>>>>
>>> Perhaps there's a relatively small number of structures (probably
>>> templates) we could create to keep all of this logic (and Windows
>>> workarounds/suppressions) in one place so it could scale to all the use
>>> cases we have?
>>>
>>
>> Thanks for the idea, I'll check this next week.
>>
>
> Folks,
>
> Here is a draft version (not a full-blown patch but rather a PoC). The main
> part is in sanitizer_internal_defs.h (classes LinkerInitializedArray and
> LinkerInitializedStructArray). I also slightly changed the atomic types in
> sanitizer_atomic.h (volatile members can't be linker-initialized). Does
> this look sane in general?
>

+Timur for Windows parts.
Overall, yes, although the missing constexpr is such a pain (sigh). Why is it
necessary to initialize all fields to 0 in the LinkerInitialized constructor?
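
For reference, a C++11 constexpr constructor has to initialize every
non-static data member, otherwise the constexpr specifier is ill-formed; and
without a constexpr constructor the global needs a dynamic initializer, which
is exactly what -Wglobal-constructors flags. A minimal sketch of that rule
(Counter is a made-up class; LinkerInitialized, LINKER_INITIALIZED and uptr
are the existing sanitizer names):

  struct Counter {
    // OK as constexpr: every non-static member gets an initializer.
    explicit constexpr Counter(LinkerInitialized) : value_(0) {}
    // Not valid as constexpr in C++11: value_ would be left uninitialized.
    // explicit constexpr Counter(LinkerInitialized) {}
   private:
    uptr value_;
  };
  static Counter counter(LINKER_INITIALIZED);  // no dynamic initializer needed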


>
> I've done a full rebuild and regtests on Linux (compiler-rt built by trunk
> Clang, GCC 4.8 and Clang 3.1). I haven't done full testing on Windows and Mac,
> but at least the relevant parts of sanitizer_internal_defs.h compiled fine.
>
> If all this makes sense, how should I proceed with a refactoring of this
> scale? Is it mandatory to verify _all_ platforms, or could I to some extent
> rely on the respective maintainers to fix errors? The former would be quite a
> burden.


Well, at least for LLVM, we have buildbots (for Linux/Mac/Windows), so it's
fine to test your change on the available platforms and then watch the
buildbots closely.
It would be awesome to roll this out gradually: first introduce
LinkerInitializedArray, then convert global objects to constexpr constructors
one by one, and enable -Wglobal-constructors as the final change.
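
To make the intermediate state concrete, here is a rough sketch of what the
warning distinguishes (DynamicInit/StaticInit are invented names, not types
from the patch):

  struct DynamicInit {
    DynamicInit() {}  // not constexpr: the global below needs startup code
  };
  struct StaticInit {
    constexpr StaticInit() : x_(0) {}  // constant-initialized at compile time
    int x_;
  };
  static DynamicInit a;  // warning: declaration requires a global constructor
  static StaticInit b;   // fine: no static initializer is emitted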


>
>
> -Y
>
>
> commit e30fa79972637958739bf4d16758adeb95d301e2
> Author: Yury Gribov <y.gribov at samsung.com>
> Date:   Tue Feb 3 13:13:08 2015 +0300
>
>     Draft support for -Wglobal-constructors in ASan.
>
> diff --git a/lib/asan/CMakeLists.txt b/lib/asan/CMakeLists.txt
> index d4c5c17..560ef86 100644
> --- a/lib/asan/CMakeLists.txt
> +++ b/lib/asan/CMakeLists.txt
> @@ -33,6 +33,8 @@ include_directories(..)
>
>  set(ASAN_CFLAGS ${SANITIZER_COMMON_CFLAGS})
>  append_no_rtti_flag(ASAN_CFLAGS)
> +append_list_if(COMPILER_RT_HAS_WGLOBAL_CONSTRUCTORS_FLAG -Wglobal-constructors
> +               ASAN_CFLAGS)
>
>  set(ASAN_COMMON_DEFINITIONS
>    ASAN_HAS_EXCEPTIONS=1)
> @@ -86,6 +88,9 @@ else()
>    endforeach()
>  endif()
>
> +# FIXME: move initializer to separate file
> +set_source_files_properties(asan_rtl.cc PROPERTIES COMPILE_FLAGS -Wno-global-constructors)
> +
>  # Build ASan runtimes shipped with Clang.
>  add_custom_target(asan)
>  if(APPLE)
> diff --git a/lib/asan/asan_allocator.cc b/lib/asan/asan_allocator.cc
> index fd63ac6..229f118 100644
> --- a/lib/asan/asan_allocator.cc
> +++ b/lib/asan/asan_allocator.cc
> @@ -202,7 +202,7 @@ AllocatorCache
> *GetAllocatorCache(AsanThreadLocalMallocStorage *ms) {
>  QuarantineCache *GetQuarantineCache(AsanThreadLocalMallocStorage *ms) {
>    CHECK(ms);
>    CHECK_LE(sizeof(QuarantineCache), sizeof(ms->quarantine_cache));
> -  return reinterpret_cast<QuarantineCache *>(ms->quarantine_cache);
> +  return reinterpret_cast<QuarantineCache *>(ms->quarantine_cache.data());
>  }
>
>  void AllocatorOptions::SetFrom(const Flags *f, const CommonFlags *cf) {
> @@ -239,9 +239,15 @@ struct Allocator {
>    atomic_uint8_t alloc_dealloc_mismatch;
>
>    // ------------------- Initialization ------------------------
> -  explicit Allocator(LinkerInitialized)
> -      : quarantine(LINKER_INITIALIZED),
> -        fallback_quarantine_cache(LINKER_INITIALIZED) {}
> +  explicit constexpr Allocator(LinkerInitialized)
> +      : allocator(LINKER_INITIALIZED),
> +        quarantine(LINKER_INITIALIZED),
> +        fallback_mutex(LINKER_INITIALIZED),
> +        fallback_allocator_cache(LINKER_INITIALIZED),
> +        fallback_quarantine_cache(LINKER_INITIALIZED),
> +        min_redzone(0),
> +        max_redzone(0),
> +        alloc_dealloc_mismatch(0) {}
>
>    void CheckOptions(const AllocatorOptions &options) const {
>      CHECK_GE(options.min_redzone, 16);
> diff --git a/lib/asan/asan_allocator.h b/lib/asan/asan_allocator.h
> index 3208d1f..adec9ac 100644
> --- a/lib/asan/asan_allocator.h
> +++ b/lib/asan/asan_allocator.h
> @@ -142,14 +142,14 @@ typedef LargeMmapAllocator<AsanMapUnmapCallback>
> SecondaryAllocator;
>  typedef CombinedAllocator<PrimaryAllocator, AllocatorCache,
>      SecondaryAllocator> AsanAllocator;
>
> -
>  struct AsanThreadLocalMallocStorage {
> -  uptr quarantine_cache[16];
> +  LinkerInitializedArray<uptr, 16> quarantine_cache;
>    AllocatorCache allocator_cache;
>    void CommitBack();
>   private:
>    // These objects are allocated via mmap() and are zero-initialized.
> -  AsanThreadLocalMallocStorage() {}
> +  explicit constexpr AsanThreadLocalMallocStorage(LinkerInitialized)
> +    : allocator_cache(LINKER_INITIALIZED) {}
>  };
>
>  void *asan_memalign(uptr alignment, uptr size, BufferedStackTrace *stack,
> diff --git a/lib/asan/asan_globals.cc b/lib/asan/asan_globals.cc
> index c457195..47978db 100644
> --- a/lib/asan/asan_globals.cc
> +++ b/lib/asan/asan_globals.cc
> @@ -34,7 +34,7 @@ struct ListOfGlobals {
>  };
>
>  static BlockingMutex mu_for_globals(LINKER_INITIALIZED);
> -static LowLevelAllocator allocator_for_globals;
> +static LowLevelAllocator allocator_for_globals(LINKER_INITIALIZED);
>  static ListOfGlobals *list_of_all_globals;
>
>  static const int kDynamicInitGlobalsInitialCapacity = 512;
> diff --git a/lib/asan/asan_poisoning.cc b/lib/asan/asan_poisoning.cc
> index e2b1f4d..296afe4 100644
> --- a/lib/asan/asan_poisoning.cc
> +++ b/lib/asan/asan_poisoning.cc
> @@ -21,7 +21,7 @@
>
>  namespace __asan {
>
> -static atomic_uint8_t can_poison_memory;
> +static atomic_uint8_t can_poison_memory(0);
>
>  void SetCanPoisonMemory(bool value) {
>    atomic_store(&can_poison_memory, value, memory_order_release);
> diff --git a/lib/asan/asan_report.cc b/lib/asan/asan_report.cc
> index 8706d5d..8c17a52 100644
> --- a/lib/asan/asan_report.cc
> +++ b/lib/asan/asan_report.cc
> @@ -598,8 +598,8 @@ void DescribeThread(AsanThreadContext *context) {
>  class ScopedInErrorReport {
>   public:
>    explicit ScopedInErrorReport(ReportData *report = nullptr) {
> -    static atomic_uint32_t num_calls;
> -    static u32 reporting_thread_tid;
> +    static atomic_uint32_t num_calls(0);
> +    static u32 reporting_thread_tid(0);
>      if (atomic_fetch_add(&num_calls, 1, memory_order_relaxed) != 0) {
>        // Do not print more than one report, otherwise they will mix up.
>        // Error reporting functions shouldn't return at this situation, as
> diff --git a/lib/asan/asan_rtl.cc b/lib/asan/asan_rtl.cc
> index 0b2a23d..1bcb957 100644
> --- a/lib/asan/asan_rtl.cc
> +++ b/lib/asan/asan_rtl.cc
> @@ -37,7 +37,7 @@ namespace __asan {
>  uptr AsanMappingProfile[kAsanMappingProfileSize];
>
>  static void AsanDie() {
> -  static atomic_uint32_t num_calls;
> +  static atomic_uint32_t num_calls(0);
>    if (atomic_fetch_add(&num_calls, 1, memory_order_relaxed) != 0) {
>      // Don't die twice - run a busy loop.
>      while (1) { }
> diff --git a/lib/asan/asan_stack.cc b/lib/asan/asan_stack.cc
> index cf7a587..ea12e34 100644
> --- a/lib/asan/asan_stack.cc
> +++ b/lib/asan/asan_stack.cc
> @@ -17,7 +17,7 @@
>
>  namespace __asan {
>
> -static atomic_uint32_t malloc_context_size;
> +static atomic_uint32_t malloc_context_size(0);
>
>  void SetMallocContextSize(u32 size) {
>    atomic_store(&malloc_context_size, size, memory_order_release);
> diff --git a/lib/asan/asan_stats.cc b/lib/asan/asan_stats.cc
> index a78b7b1..6896c47 100644
> --- a/lib/asan/asan_stats.cc
> +++ b/lib/asan/asan_stats.cc
> @@ -31,7 +31,7 @@ void AsanStats::Clear() {
>  }
>
>  static void PrintMallocStatsArray(const char *prefix,
> -                                  uptr (&array)[kNumberOfSizeClasses]) {
> +                                  const uptr *array) {
>    Printf("%s", prefix);
>    for (uptr i = 0; i < kNumberOfSizeClasses; i++) {
>      if (!array[i]) continue;
> @@ -51,10 +51,11 @@ void AsanStats::Print() {
>               (mmaped-munmaped)>>20, mmaped>>20, munmaped>>20,
>               mmaps, munmaps);
>
> -  PrintMallocStatsArray("  mmaps   by size class: ", mmaped_by_size);
> -  PrintMallocStatsArray("  mallocs by size class: ", malloced_by_size);
> -  PrintMallocStatsArray("  frees   by size class: ", freed_by_size);
> -  PrintMallocStatsArray("  rfrees  by size class: ", really_freed_by_size);
> +  PrintMallocStatsArray("  mmaps   by size class: ", mmaped_by_size.data());
> +  PrintMallocStatsArray("  mallocs by size class: ", malloced_by_size.data());
> +  PrintMallocStatsArray("  frees   by size class: ", freed_by_size.data());
> +  PrintMallocStatsArray("  rfrees  by size class: ",
> +             really_freed_by_size.data());
>    Printf("Stats: malloc large: %zu small slow: %zu\n",
>               malloc_large, malloc_small_slow);
>  }
> diff --git a/lib/asan/asan_stats.h b/lib/asan/asan_stats.h
> index c66848d..4055154 100644
> --- a/lib/asan/asan_stats.h
> +++ b/lib/asan/asan_stats.h
> @@ -39,16 +39,32 @@ struct AsanStats {
>    uptr mmaped;
>    uptr munmaps;
>    uptr munmaped;
> -  uptr mmaped_by_size[kNumberOfSizeClasses];
> -  uptr malloced_by_size[kNumberOfSizeClasses];
> -  uptr freed_by_size[kNumberOfSizeClasses];
> -  uptr really_freed_by_size[kNumberOfSizeClasses];
> +  LinkerInitializedArray<uptr, kNumberOfSizeClasses> mmaped_by_size;
> +  LinkerInitializedArray<uptr, kNumberOfSizeClasses> malloced_by_size;
> +  LinkerInitializedArray<uptr, kNumberOfSizeClasses> freed_by_size;
> +  LinkerInitializedArray<uptr, kNumberOfSizeClasses> really_freed_by_size;
>
>    uptr malloc_large;
>    uptr malloc_small_slow;
>
>    // Ctor for global AsanStats (accumulated stats for dead threads).
> -  explicit AsanStats(LinkerInitialized) { }
> +  explicit constexpr AsanStats(LinkerInitialized)
> +    : mallocs(0),
> +      malloced(0),
> +      malloced_redzones(0),
> +      frees(0),
> +      freed(0),
> +      real_frees(0),
> +      really_freed(0),
> +      really_freed_redzones(0),
> +      reallocs(0),
> +      realloced(0),
> +      mmaps(0),
> +      mmaped(0),
> +      munmaps(0),
> +      munmaped(0),
> +      malloc_large(0),
> +      malloc_small_slow(0) {}
>    // Creates empty stats.
>    AsanStats();
>
> diff --git a/lib/asan/asan_thread.cc b/lib/asan/asan_thread.cc
> index 9af5706..212b490 100644
> --- a/lib/asan/asan_thread.cc
> +++ b/lib/asan/asan_thread.cc
> @@ -50,7 +50,7 @@ static ALIGNED(16) char
> thread_registry_placeholder[sizeof(ThreadRegistry)];
>  static ThreadRegistry *asan_thread_registry;
>
>  static BlockingMutex mu_for_thread_context(LINKER_INITIALIZED);
> -static LowLevelAllocator allocator_for_thread_context;
> +static LowLevelAllocator allocator_for_thread_context(LINKER_INITIALIZED);
>
>  static ThreadContextBase *GetAsanThreadContext(u32 tid) {
>    BlockingMutexLock lock(&mu_for_thread_context);
> diff --git a/lib/lsan/lsan_allocator.cc b/lib/lsan/lsan_allocator.cc
> index 96a2b04..34e6224 100644
> --- a/lib/lsan/lsan_allocator.cc
> +++ b/lib/lsan/lsan_allocator.cc
> @@ -43,8 +43,8 @@ typedef LargeMmapAllocator<> SecondaryAllocator;
>  typedef CombinedAllocator<PrimaryAllocator, AllocatorCache,
>            SecondaryAllocator> Allocator;
>
> -static Allocator allocator;
> -static THREADLOCAL AllocatorCache cache;
> +static Allocator allocator(LINKER_INITIALIZED);
> +static THREADLOCAL AllocatorCache cache(LINKER_INITIALIZED);
>
>  void InitializeAllocator() {
>
>  allocator.InitLinkerInitialized(common_flags()->allocator_may_return_null);
> diff --git a/lib/msan/msan_allocator.cc b/lib/msan/msan_allocator.cc
> index 698b6cd..f8761bc 100644
> --- a/lib/msan/msan_allocator.cc
> +++ b/lib/msan/msan_allocator.cc
> @@ -64,9 +64,9 @@ typedef LargeMmapAllocator<MsanMapUnmapCallback>
> SecondaryAllocator;
>  typedef CombinedAllocator<PrimaryAllocator, AllocatorCache,
>                            SecondaryAllocator> Allocator;
>
> -static Allocator allocator;
> -static AllocatorCache fallback_allocator_cache;
> -static SpinMutex fallback_mutex;
> +static Allocator allocator(LINKER_INITIALIZED);
> +static AllocatorCache fallback_allocator_cache(LINKER_INITIALIZED);
> +static SpinMutex fallback_mutex(LINKER_INITIALIZED);
>
>  static int inited = 0;
>
> diff --git a/lib/msan/msan_chained_origin_depot.cc
> b/lib/msan/msan_chained_origin_depot.cc
> index c21e8e8..1258c50 100644
> --- a/lib/msan/msan_chained_origin_depot.cc
> +++ b/lib/msan/msan_chained_origin_depot.cc
> @@ -94,7 +94,8 @@ struct ChainedOriginDepotNode {
>    typedef Handle handle_type;
>  };
>
> -static StackDepotBase<ChainedOriginDepotNode, 4, 20> chainedOriginDepot;
> +static StackDepotBase<ChainedOriginDepotNode, 4, 20>
> +    chainedOriginDepot(LINKER_INITIALIZED);
>
>  StackDepotStats *ChainedOriginDepotGetStats() {
>    return chainedOriginDepot.GetStats();
> diff --git a/lib/sanitizer_common/sanitizer_allocator.cc
> b/lib/sanitizer_common/sanitizer_allocator.cc
> index 03b3e83..f2699b0 100644
> --- a/lib/sanitizer_common/sanitizer_allocator.cc
> +++ b/lib/sanitizer_common/sanitizer_allocator.cc
> @@ -47,11 +47,11 @@ InternalAllocator *internal_allocator() {
>  #else  // SANITIZER_GO
>
>  static ALIGNED(64) char internal_alloc_placeholder[sizeof(InternalAllocator)];
> -static atomic_uint8_t internal_allocator_initialized;
> -static StaticSpinMutex internal_alloc_init_mu;
> +static atomic_uint8_t internal_allocator_initialized(0);
> +static StaticSpinMutex internal_alloc_init_mu(LINKER_INITIALIZED);
>
> -static InternalAllocatorCache internal_allocator_cache;
> -static StaticSpinMutex internal_allocator_cache_mu;
> +static InternalAllocatorCache internal_allocator_cache(LINKER_INITIALIZED);
> +static StaticSpinMutex internal_allocator_cache_mu(LINKER_INITIALIZED);
>
>  InternalAllocator *internal_allocator() {
>    InternalAllocator *internal_allocator_instance =
> diff --git a/lib/sanitizer_common/sanitizer_allocator.h
> b/lib/sanitizer_common/sanitizer_allocator.h
> index b5105f8..5cee177 100644
> --- a/lib/sanitizer_common/sanitizer_allocator.h
> +++ b/lib/sanitizer_common/sanitizer_allocator.h
> @@ -208,39 +208,51 @@ typedef uptr
> AllocatorStatCounters[AllocatorStatCount];
>  // Per-thread stats, live in per-thread cache.
>  class AllocatorStats {
>   public:
> +  AllocatorStats() {}
> +
> +  explicit constexpr AllocatorStats(LinkerInitialized)
> +    : next_(0), prev_(0) {}
> +
>    void Init() {
>      internal_memset(this, 0, sizeof(*this));
>    }
>    void InitLinkerInitialized() {}
>
>    void Add(AllocatorStat i, uptr v) {
> -    v += atomic_load(&stats_[i], memory_order_relaxed);
> -    atomic_store(&stats_[i], v, memory_order_relaxed);
> +    v += atomic_load((atomic_uintptr_t *)&stats_[i], memory_order_relaxed);
> +    atomic_store((atomic_uintptr_t *)&stats_[i], v, memory_order_relaxed);
>    }
>
>    void Sub(AllocatorStat i, uptr v) {
> -    v = atomic_load(&stats_[i], memory_order_relaxed) - v;
> -    atomic_store(&stats_[i], v, memory_order_relaxed);
> +    v = atomic_load((atomic_uintptr_t *)&stats_[i], memory_order_relaxed) - v;
> +    atomic_store((atomic_uintptr_t *)&stats_[i], v, memory_order_relaxed);
>    }
>
>    void Set(AllocatorStat i, uptr v) {
> -    atomic_store(&stats_[i], v, memory_order_relaxed);
> +    atomic_store((atomic_uintptr_t *)&stats_[i], v, memory_order_relaxed);
>    }
>
>    uptr Get(AllocatorStat i) const {
> -    return atomic_load(&stats_[i], memory_order_relaxed);
> +    return atomic_load((const atomic_uintptr_t *)&stats_[i],
> +                       memory_order_relaxed);
>    }
>
>   private:
>    friend class AllocatorGlobalStats;
>    AllocatorStats *next_;
>    AllocatorStats *prev_;
> -  atomic_uintptr_t stats_[AllocatorStatCount];
> +  LinkerInitializedStructArray<atomic_uintptr_t, AllocatorStatCount> stats_;
>  };
>
>  // Global stats, used for aggregation and querying.
>  class AllocatorGlobalStats : public AllocatorStats {
>   public:
> +  AllocatorGlobalStats() {}
> +
> +  explicit constexpr AllocatorGlobalStats(LinkerInitialized)
> +    : AllocatorStats(LINKER_INITIALIZED),
> +      mu_(LINKER_INITIALIZED) {}
> +
>    void InitLinkerInitialized() {
>      next_ = this;
>      prev_ = this;
> @@ -321,6 +333,10 @@ class SizeClassAllocator64 {
>        SizeClassMap, MapUnmapCallback> ThisT;
>    typedef SizeClassAllocatorLocalCache<ThisT> AllocatorCache;
>
> +  SizeClassAllocator64() {}
> +
> +  explicit constexpr SizeClassAllocator64(LinkerInitialized) {}
> +
>    void Init() {
>      CHECK_EQ(kSpaceBeg,
>               reinterpret_cast<uptr>(Mprotect(kSpaceBeg, kSpaceSize)));
> @@ -579,8 +595,12 @@ class SizeClassAllocator64 {
>  template<u64 kSize>
>  class FlatByteMap {
>   public:
> +  FlatByteMap() {}
> +
> +  explicit constexpr FlatByteMap(LinkerInitialized) {}
> +
>    void TestOnlyInit() {
> -    internal_memset(map_, 0, sizeof(map_));
> +    internal_memset(map_.data(), 0, sizeof(map_));
>    }
>
>    void set(uptr idx, u8 val) {
> @@ -594,7 +614,7 @@ class FlatByteMap {
>      return map_[idx];
>    }
>   private:
> -  u8 map_[kSize];
> +  LinkerInitializedArray<u8, kSize> map_;
>  };
>
>  // TwoLevelByteMap maps integers in range [0, kSize1*kSize2) to u8 values.
> @@ -693,6 +713,11 @@ class SizeClassAllocator32 {
>        SizeClassMap, kRegionSizeLog, ByteMap, MapUnmapCallback> ThisT;
>    typedef SizeClassAllocatorLocalCache<ThisT> AllocatorCache;
>
> +  SizeClassAllocator32() {}
> +
> +  explicit constexpr SizeClassAllocator32(LinkerInitialized)
> +    : possible_regions(LINKER_INITIALIZED) {}
> +
>    void Init() {
>      possible_regions.TestOnlyInit();
>      internal_memset(size_class_info_array, 0, sizeof(size_class_info_array));
> @@ -832,7 +857,12 @@ class SizeClassAllocator32 {
>    struct SizeClassInfo {
>      SpinMutex mutex;
>      IntrusiveList<Batch> free_list;
> -    char padding[kCacheLineSize - sizeof(uptr) - sizeof(IntrusiveList<Batch>)];
> +    static const uptr padding_size =
> +      kCacheLineSize - sizeof(uptr) - sizeof(IntrusiveList<Batch>);
> +    LinkerInitializedArray<char, padding_size> padding;
> +    constexpr SizeClassInfo()
> +      : mutex(LINKER_INITIALIZED),
> +        free_list(LINKER_INITIALIZED) {}
>    };
>    COMPILER_CHECK(sizeof(SizeClassInfo) == kCacheLineSize);
>
> @@ -902,6 +932,11 @@ struct SizeClassAllocatorLocalCache {
>    typedef SizeClassAllocator Allocator;
>    static const uptr kNumClasses = SizeClassAllocator::kNumClasses;
>
> +  SizeClassAllocatorLocalCache() {}
> +
> +  explicit constexpr SizeClassAllocatorLocalCache(LinkerInitialized)
> +    : stats_(LINKER_INITIALIZED) {}
> +
>    void Init(AllocatorGlobalStats *s) {
>      stats_.Init();
>      if (s)
> @@ -956,7 +991,7 @@ struct SizeClassAllocatorLocalCache {
>      uptr max_count;
>      void *batch[2 * SizeClassMap::kMaxNumCached];
>    };
> -  PerClass per_class_[kNumClasses];
> +  LinkerInitializedArray<PerClass, kNumClasses> per_class_;
>    AllocatorStats stats_;
>
>    void InitCache() {
> @@ -1006,6 +1041,17 @@ struct SizeClassAllocatorLocalCache {
>  template <class MapUnmapCallback = NoOpMapUnmapCallback>
>  class LargeMmapAllocator {
>   public:
> +  LargeMmapAllocator() {}
> +
> +  explicit constexpr LargeMmapAllocator(LinkerInitialized)
> +    : page_size_(0),
> +      n_chunks_(0),
> +      min_mmap_(0),
> +      max_mmap_(0),
> +      chunks_sorted_(false),
> +      may_return_null_(0),
> +      mutex_(LINKER_INITIALIZED) {}
> +
>    void InitLinkerInitialized(bool may_return_null) {
>      page_size_ = GetPageSizeCached();
>      atomic_store(&may_return_null_, may_return_null, memory_order_relaxed);
> @@ -1149,7 +1195,7 @@ class LargeMmapAllocator {
>      if (!n) return 0;
>      if (!chunks_sorted_) {
>        // Do one-time sort. chunks_sorted_ is reset in Allocate/Deallocate.
> -      SortArray(reinterpret_cast<uptr*>(chunks_), n);
> +      SortArray(reinterpret_cast<uptr*>(chunks_.data()), n);
>        for (uptr i = 0; i < n; i++)
>          chunks_[i]->chunk_idx = i;
>        chunks_sorted_ = true;
> @@ -1240,12 +1286,18 @@ class LargeMmapAllocator {
>    }
>
>    uptr page_size_;
> -  Header *chunks_[kMaxNumChunks];
> +  LinkerInitializedArray<Header *, kMaxNumChunks> chunks_;
>    uptr n_chunks_;
>    uptr min_mmap_, max_mmap_;
>    bool chunks_sorted_;
>    struct Stats {
> -    uptr n_allocs, n_frees, currently_allocated, max_allocated, by_size_log[64];
> +    uptr n_allocs, n_frees, currently_allocated, max_allocated;
> +    LinkerInitializedArray<uptr, 64> by_size_log;
> +    constexpr Stats()
> +      : n_allocs(0),
> +        n_frees(0),
> +        currently_allocated(0),
> +        max_allocated(0) {}
>    } stats;
>    atomic_uint8_t may_return_null_;
>    SpinMutex mutex_;
> @@ -1261,6 +1313,15 @@ template <class PrimaryAllocator, class
> AllocatorCache,
>            class SecondaryAllocator>  // NOLINT
>  class CombinedAllocator {
>   public:
> +  CombinedAllocator() {}
> +
> +  explicit constexpr CombinedAllocator(LinkerInitialized)
> +    : primary_(LINKER_INITIALIZED),
> +      secondary_(LINKER_INITIALIZED),
> +      stats_(LINKER_INITIALIZED),
> +      may_return_null_(0),
> +      rss_limit_is_exceeded_(0) {}
> +
>    void InitCommon(bool may_return_null) {
>      primary_.Init();
>      atomic_store(&may_return_null_, may_return_null, memory_order_relaxed);
> diff --git a/lib/sanitizer_common/sanitizer_atomic.h
> b/lib/sanitizer_common/sanitizer_atomic.h
> index 6643c54..9091d74 100644
> --- a/lib/sanitizer_common/sanitizer_atomic.h
> +++ b/lib/sanitizer_common/sanitizer_atomic.h
> @@ -27,31 +27,25 @@ enum memory_order {
>    memory_order_seq_cst = 1 << 5
>  };
>
> -struct atomic_uint8_t {
> -  typedef u8 Type;
> -  volatile Type val_dont_use;
> -};
> -
> -struct atomic_uint16_t {
> -  typedef u16 Type;
> -  volatile Type val_dont_use;
> -};
> -
> -struct atomic_uint32_t {
> -  typedef u32 Type;
> -  volatile Type val_dont_use;
> -};
> -
> -struct atomic_uint64_t {
> -  typedef u64 Type;
> -  // On 32-bit platforms u64 is not necessary aligned on 8 bytes.
> -  volatile ALIGNED(8) Type val_dont_use;
> -};
> -
> -struct atomic_uintptr_t {
> -  typedef uptr Type;
> -  volatile Type val_dont_use;
> -};
> +#define DEFINE_ATOMIC_TYPE(name, base_type, attr) \
> +  struct name { \
> +    typedef base_type Type; \
> +    name() {} \
> +    explicit constexpr name(base_type v) \
> +      : val_(v) {} \
> +    volatile Type &val() volatile { return val_; } \
> +    const volatile Type &val() const volatile { return val_; } \
> +  private: \
> +    attr Type val_; \
> +  }
> +
> +DEFINE_ATOMIC_TYPE(atomic_uint8_t, u8,);  // NOLINT
> +DEFINE_ATOMIC_TYPE(atomic_uint16_t, u16,);  // NOLINT
> +DEFINE_ATOMIC_TYPE(atomic_uint32_t, u32,);  // NOLINT
> +DEFINE_ATOMIC_TYPE(atomic_uintptr_t, uptr,);  // NOLINT
> +
> +// On 32-bit platforms u64 is not necessary aligned on 8 bytes.
> +DEFINE_ATOMIC_TYPE(atomic_uint64_t, u64, ALIGNED(8));
>
>  }  // namespace __sanitizer
>
> diff --git a/lib/sanitizer_common/sanitizer_atomic_clang.h
> b/lib/sanitizer_common/sanitizer_atomic_clang.h
> index 38363e8..ea786a9 100644
> --- a/lib/sanitizer_common/sanitizer_atomic_clang.h
> +++ b/lib/sanitizer_common/sanitizer_atomic_clang.h
> @@ -48,7 +48,7 @@ INLINE typename T::Type atomic_fetch_add(volatile T *a,
>      typename T::Type v, memory_order mo) {
>    (void)mo;
>    DCHECK(!((uptr)a % sizeof(*a)));
> -  return __sync_fetch_and_add(&a->val_dont_use, v);
> +  return __sync_fetch_and_add(&a->val(), v);
>  }
>
>  template<typename T>
> @@ -56,7 +56,7 @@ INLINE typename T::Type atomic_fetch_sub(volatile T *a,
>      typename T::Type v, memory_order mo) {
>    (void)mo;
>    DCHECK(!((uptr)a % sizeof(*a)));
> -  return __sync_fetch_and_add(&a->val_dont_use, -v);
> +  return __sync_fetch_and_add(&a->val(), -v);
>  }
>
>  template<typename T>
> @@ -65,7 +65,7 @@ INLINE typename T::Type atomic_exchange(volatile T *a,
>    DCHECK(!((uptr)a % sizeof(*a)));
>    if (mo & (memory_order_release | memory_order_acq_rel | memory_order_seq_cst))
>      __sync_synchronize();
> -  v = __sync_lock_test_and_set(&a->val_dont_use, v);
> +  v = __sync_lock_test_and_set(&a->val(), v);
>    if (mo == memory_order_seq_cst)
>      __sync_synchronize();
>    return v;
> @@ -78,7 +78,7 @@ INLINE bool atomic_compare_exchange_strong(volatile T *a,
>                                             memory_order mo) {
>    typedef typename T::Type Type;
>    Type cmpv = *cmp;
> -  Type prev = __sync_val_compare_and_swap(&a->val_dont_use, cmpv, xchg);
> +  Type prev = __sync_val_compare_and_swap(&a->val(), cmpv, xchg);
>    if (prev == cmpv)
>      return true;
>    *cmp = prev;
> diff --git a/lib/sanitizer_common/sanitizer_atomic_clang_other.h
> b/lib/sanitizer_common/sanitizer_atomic_clang_other.h
> index 099b9f7..f4c0028 100644
> --- a/lib/sanitizer_common/sanitizer_atomic_clang_other.h
> +++ b/lib/sanitizer_common/sanitizer_atomic_clang_other.h
> @@ -32,21 +32,21 @@ INLINE typename T::Type atomic_load(
>    if (sizeof(*a) < 8 || sizeof(void*) == 8) {
>      // Assume that aligned loads are atomic.
>      if (mo == memory_order_relaxed) {
> -      v = a->val_dont_use;
> +      v = a->val();
>      } else if (mo == memory_order_consume) {
>        // Assume that processor respects data dependencies
>        // (and that compiler won't break them).
>        __asm__ __volatile__("" ::: "memory");
> -      v = a->val_dont_use;
> +      v = a->val();
>        __asm__ __volatile__("" ::: "memory");
>      } else if (mo == memory_order_acquire) {
>        __asm__ __volatile__("" ::: "memory");
> -      v = a->val_dont_use;
> +      v = a->val();
>        __sync_synchronize();
>      } else {  // seq_cst
>        // E.g. on POWER we need a hw fence even before the store.
>        __sync_synchronize();
> -      v = a->val_dont_use;
> +      v = a->val();
>        __sync_synchronize();
>      }
>    } else {
> @@ -54,7 +54,7 @@ INLINE typename T::Type atomic_load(
>      // Gross, but simple and reliable.
>      // Assume that it is not in read-only memory.
>      v = __sync_fetch_and_add(
> -        const_cast<typename T::Type volatile *>(&a->val_dont_use), 0);
> +        const_cast<typename T::Type volatile *>(&a->val()), 0);
>    }
>    return v;
>  }
> @@ -68,23 +68,23 @@ INLINE void atomic_store(volatile T *a, typename
> T::Type v, memory_order mo) {
>    if (sizeof(*a) < 8 || sizeof(void*) == 8) {
>      // Assume that aligned loads are atomic.
>      if (mo == memory_order_relaxed) {
> -      a->val_dont_use = v;
> +      a->val() = v;
>      } else if (mo == memory_order_release) {
>        __sync_synchronize();
> -      a->val_dont_use = v;
> +      a->val() = v;
>        __asm__ __volatile__("" ::: "memory");
>      } else {  // seq_cst
>        __sync_synchronize();
> -      a->val_dont_use = v;
> +      a->val() = v;
>        __sync_synchronize();
>      }
>    } else {
>      // 64-bit store on 32-bit platform.
>      // Gross, but simple and reliable.
> -    typename T::Type cmp = a->val_dont_use;
> +    typename T::Type cmp = a->val();
>      typename T::Type cur;
>      for (;;) {
> -      cur = __sync_val_compare_and_swap(&a->val_dont_use, cmp, v);
> +      cur = __sync_val_compare_and_swap(&a->val(), cmp, v);
>        if (cmp == v)
>          break;
>        cmp = cur;
> diff --git a/lib/sanitizer_common/sanitizer_atomic_clang_x86.h
> b/lib/sanitizer_common/sanitizer_atomic_clang_x86.h
> index 38feb29..6e4226f 100644
> --- a/lib/sanitizer_common/sanitizer_atomic_clang_x86.h
> +++ b/lib/sanitizer_common/sanitizer_atomic_clang_x86.h
> @@ -35,22 +35,22 @@ INLINE typename T::Type atomic_load(
>    if (sizeof(*a) < 8 || sizeof(void*) == 8) {
>      // Assume that aligned loads are atomic.
>      if (mo == memory_order_relaxed) {
> -      v = a->val_dont_use;
> +      v = a->val();
>      } else if (mo == memory_order_consume) {
>        // Assume that processor respects data dependencies
>        // (and that compiler won't break them).
>        __asm__ __volatile__("" ::: "memory");
> -      v = a->val_dont_use;
> +      v = a->val();
>        __asm__ __volatile__("" ::: "memory");
>      } else if (mo == memory_order_acquire) {
>        __asm__ __volatile__("" ::: "memory");
> -      v = a->val_dont_use;
> +      v = a->val();
>        // On x86 loads are implicitly acquire.
>        __asm__ __volatile__("" ::: "memory");
>      } else {  // seq_cst
>        // On x86 plain MOV is enough for seq_cst store.
>        __asm__ __volatile__("" ::: "memory");
> -      v = a->val_dont_use;
> +      v = a->val();
>        __asm__ __volatile__("" ::: "memory");
>      }
>    } else {
> @@ -60,7 +60,7 @@ INLINE typename T::Type atomic_load(
>          "movq %%mm0, %0;"  // (ptr could be read-only)
>          "emms;"            // Empty mmx state/Reset FP regs
>          : "=m" (v)
> -        : "m" (a->val_dont_use)
> +        : "m" (a->val())
>          : // mark the FP stack and mmx registers as clobbered
>            "st", "st(1)", "st(2)", "st(3)", "st(4)", "st(5)", "st(6)",
> "st(7)",
>  #ifdef __MMX__
> @@ -80,16 +80,16 @@ INLINE void atomic_store(volatile T *a, typename
> T::Type v, memory_order mo) {
>    if (sizeof(*a) < 8 || sizeof(void*) == 8) {
>      // Assume that aligned loads are atomic.
>      if (mo == memory_order_relaxed) {
> -      a->val_dont_use = v;
> +      a->val() = v;
>      } else if (mo == memory_order_release) {
>        // On x86 stores are implicitly release.
>        __asm__ __volatile__("" ::: "memory");
> -      a->val_dont_use = v;
> +      a->val() = v;
>        __asm__ __volatile__("" ::: "memory");
>      } else {  // seq_cst
>        // On x86 stores are implicitly release.
>        __asm__ __volatile__("" ::: "memory");
> -      a->val_dont_use = v;
> +      a->val() = v;
>        __sync_synchronize();
>      }
>    } else {
> @@ -98,7 +98,7 @@ INLINE void atomic_store(volatile T *a, typename T::Type
> v, memory_order mo) {
>          "movq %1, %%mm0;"  // Use mmx reg for 64-bit atomic moves
>          "movq %%mm0, %0;"
>          "emms;"            // Empty mmx state/Reset FP regs
> -        : "=m" (a->val_dont_use)
> +        : "=m" (a->val())
>          : "m" (v)
>          : // mark the FP stack and mmx registers as clobbered
>            "st", "st(1)", "st(2)", "st(3)", "st(4)", "st(5)", "st(6)",
> "st(7)",
> diff --git a/lib/sanitizer_common/sanitizer_atomic_msvc.h
> b/lib/sanitizer_common/sanitizer_atomic_msvc.h
> index 12ffef3..c52cde0 100644
> --- a/lib/sanitizer_common/sanitizer_atomic_msvc.h
> +++ b/lib/sanitizer_common/sanitizer_atomic_msvc.h
> @@ -73,10 +73,10 @@ INLINE typename T::Type atomic_load(
>    typename T::Type v;
>    // FIXME(dvyukov): 64-bit load is not atomic on 32-bits.
>    if (mo == memory_order_relaxed) {
> -    v = a->val_dont_use;
> +    v = a->val();
>    } else {
>      atomic_signal_fence(memory_order_seq_cst);
> -    v = a->val_dont_use;
> +    v = a->val();
>      atomic_signal_fence(memory_order_seq_cst);
>    }
>    return v;
> @@ -89,10 +89,10 @@ INLINE void atomic_store(volatile T *a, typename
> T::Type v, memory_order mo) {
>    DCHECK(!((uptr)a % sizeof(*a)));
>    // FIXME(dvyukov): 64-bit store is not atomic on 32-bits.
>    if (mo == memory_order_relaxed) {
> -    a->val_dont_use = v;
> +    a->val() = v;
>    } else {
>      atomic_signal_fence(memory_order_seq_cst);
> -    a->val_dont_use = v;
> +    a->val() = v;
>      atomic_signal_fence(memory_order_seq_cst);
>    }
>    if (mo == memory_order_seq_cst)
> @@ -104,7 +104,7 @@ INLINE u32 atomic_fetch_add(volatile atomic_uint32_t
> *a,
>    (void)mo;
>    DCHECK(!((uptr)a % sizeof(*a)));
>    return (u32)_InterlockedExchangeAdd(
> -      (volatile long*)&a->val_dont_use, (long)v);  // NOLINT
> +      (volatile long*)&a->val(), (long)v);  // NOLINT
>  }
>
>  INLINE uptr atomic_fetch_add(volatile atomic_uintptr_t *a,
> @@ -113,10 +113,10 @@ INLINE uptr atomic_fetch_add(volatile
> atomic_uintptr_t *a,
>    DCHECK(!((uptr)a % sizeof(*a)));
>  #ifdef _WIN64
>    return (uptr)_InterlockedExchangeAdd64(
> -      (volatile long long*)&a->val_dont_use, (long long)v);  // NOLINT
> +      (volatile long long*)&a->val(), (long long)v);  // NOLINT
>  #else
>    return (uptr)_InterlockedExchangeAdd(
> -      (volatile long*)&a->val_dont_use, (long)v);  // NOLINT
> +      (volatile long*)&a->val(), (long)v);  // NOLINT
>  #endif
>  }
>
> @@ -125,7 +125,7 @@ INLINE u32 atomic_fetch_sub(volatile atomic_uint32_t
> *a,
>    (void)mo;
>    DCHECK(!((uptr)a % sizeof(*a)));
>    return (u32)_InterlockedExchangeAdd(
> -      (volatile long*)&a->val_dont_use, -(long)v);  // NOLINT
> +      (volatile long*)&a->val(), -(long)v);  // NOLINT
>  }
>
>  INLINE uptr atomic_fetch_sub(volatile atomic_uintptr_t *a,
> @@ -134,10 +134,10 @@ INLINE uptr atomic_fetch_sub(volatile
> atomic_uintptr_t *a,
>    DCHECK(!((uptr)a % sizeof(*a)));
>  #ifdef _WIN64
>    return (uptr)_InterlockedExchangeAdd64(
> -      (volatile long long*)&a->val_dont_use, -(long long)v);  // NOLINT
> +      (volatile long long*)&a->val(), -(long long)v);  // NOLINT
>  #else
>    return (uptr)_InterlockedExchangeAdd(
> -      (volatile long*)&a->val_dont_use, -(long)v);  // NOLINT
> +      (volatile long*)&a->val(), -(long)v);  // NOLINT
>  #endif
>  }
>
> @@ -194,7 +194,7 @@ INLINE bool atomic_compare_exchange_strong(volatile
> atomic_uintptr_t *a,
>                                             memory_order mo) {
>    uptr cmpv = *cmp;
>    uptr prev = (uptr)_InterlockedCompareExchangePointer(
> -      (void*volatile*)&a->val_dont_use, (void*)xchg, (void*)cmpv);
> +      (void*volatile*)&a->val(), (void*)xchg, (void*)cmpv);
>    if (prev == cmpv)
>      return true;
>    *cmp = prev;
> @@ -207,7 +207,7 @@ INLINE bool atomic_compare_exchange_strong(volatile
> atomic_uint16_t *a,
>                                             memory_order mo) {
>    u16 cmpv = *cmp;
>    u16 prev = (u16)_InterlockedCompareExchange16(
> -      (volatile short*)&a->val_dont_use, (short)xchg, (short)cmpv);
> +      (volatile short*)&a->val(), (short)xchg, (short)cmpv);
>    if (prev == cmpv)
>      return true;
>    *cmp = prev;
> @@ -220,7 +220,7 @@ INLINE bool atomic_compare_exchange_strong(volatile
> atomic_uint32_t *a,
>                                             memory_order mo) {
>    u32 cmpv = *cmp;
>    u32 prev = (u32)_InterlockedCompareExchange(
> -      (volatile long*)&a->val_dont_use, (long)xchg, (long)cmpv);
> +      (volatile long*)&a->val(), (long)xchg, (long)cmpv);
>    if (prev == cmpv)
>      return true;
>    *cmp = prev;
> @@ -233,7 +233,7 @@ INLINE bool atomic_compare_exchange_strong(volatile
> atomic_uint64_t *a,
>                                             memory_order mo) {
>    u64 cmpv = *cmp;
>    u64 prev = (u64)_InterlockedCompareExchange64(
> -      (volatile long long*)&a->val_dont_use, (long long)xchg, (long long)cmpv);
> +      (volatile long long*)&a->val(), (long long)xchg, (long long)cmpv);
>    if (prev == cmpv)
>      return true;
>    *cmp = prev;
> diff --git a/lib/sanitizer_common/sanitizer_common.cc
> b/lib/sanitizer_common/sanitizer_common.cc
> index 489081e..8b19ccd 100644
> --- a/lib/sanitizer_common/sanitizer_common.cc
> +++ b/lib/sanitizer_common/sanitizer_common.cc
> @@ -21,7 +21,7 @@ namespace __sanitizer {
>
>  const char *SanitizerToolName = "SanitizerTool";
>
> -atomic_uint32_t current_verbosity;
> +atomic_uint32_t current_verbosity(0);
>
>  uptr GetPageSizeCached() {
>    static uptr PageSize;
> @@ -30,7 +30,7 @@ uptr GetPageSizeCached() {
>    return PageSize;
>  }
>
> -StaticSpinMutex report_file_mu;
> +StaticSpinMutex report_file_mu(LINKER_INITIALIZED);
>  ReportFile report_file = {&report_file_mu, kStderrFd, "", "", 0};
>
>  void RawWrite(const char *buffer) {
> @@ -272,7 +272,7 @@ bool LoadedModule::containsAddress(uptr address) const
> {
>    return false;
>  }
>
> -static atomic_uintptr_t g_total_mmaped;
> +static atomic_uintptr_t g_total_mmaped(0);
>
>  void IncreaseTotalMmap(uptr size) {
>    if (!common_flags()->mmap_limit_mb) return;
> diff --git a/lib/sanitizer_common/sanitizer_common.h
> b/lib/sanitizer_common/sanitizer_common.h
> index 720cd73..e26a6f0 100644
> --- a/lib/sanitizer_common/sanitizer_common.h
> +++ b/lib/sanitizer_common/sanitizer_common.h
> @@ -127,6 +127,8 @@ class InternalScopedString : public
> InternalScopedBuffer<char> {
>  // linker initialized.
>  class LowLevelAllocator {
>   public:
> +  explicit constexpr LowLevelAllocator(LinkerInitialized)
> +    : allocated_end_(0), allocated_current_(0) {}
>    // Requires an external lock.
>    void *Allocate(uptr size);
>   private:
> @@ -399,6 +401,9 @@ INLINE int ToLower(int c) {
>  template<typename T>
>  class InternalMmapVectorNoCtor {
>   public:
> +  InternalMmapVectorNoCtor() {}
> +  explicit constexpr InternalMmapVectorNoCtor(LinkerInitialized)
> +    : data_(0), capacity_(0), size_(0) {}
>    void Initialize(uptr initial_capacity) {
>      capacity_ = Max(initial_capacity, (uptr)1);
>      size_ = 0;
> diff --git a/lib/sanitizer_common/sanitizer_common_interceptors.inc
> b/lib/sanitizer_common/sanitizer_common_interceptors.inc
> index 87c33e1..885524e 100644
> --- a/lib/sanitizer_common/sanitizer_common_interceptors.inc
> +++ b/lib/sanitizer_common/sanitizer_common_interceptors.inc
> @@ -4755,7 +4755,7 @@ INTERCEPTOR(int, timerfd_gettime, int fd, void
> *curr_value) {
>  // Linux kernel has a bug that leads to kernel deadlock if a process
>  // maps TBs of memory and then calls mlock().
>  static void MlockIsUnsupported() {
> -  static atomic_uint8_t printed;
> +  static atomic_uint8_t printed(0);
>    if (atomic_exchange(&printed, 1, memory_order_relaxed))
>      return;
>    VPrintf(1, "INFO: %s ignores mlock/mlockall/munlock/munlockall\n",
> diff --git a/lib/sanitizer_common/sanitizer_coverage_libcdep.cc
> b/lib/sanitizer_common/sanitizer_coverage_libcdep.cc
> index e8f42f6..be15419 100644
> --- a/lib/sanitizer_common/sanitizer_coverage_libcdep.cc
> +++ b/lib/sanitizer_common/sanitizer_coverage_libcdep.cc
> @@ -43,9 +43,10 @@
>  #include "sanitizer_symbolizer.h"
>  #include "sanitizer_flags.h"
>
> -static atomic_uint32_t dump_once_guard;  // Ensure that CovDump runs only once.
> +// Ensure that CovDump runs only once.
> +static atomic_uint32_t dump_once_guard(0);
>
> -static atomic_uintptr_t coverage_counter;
> +static atomic_uintptr_t coverage_counter(0);
>
>  // pc_array is the array containing the covered PCs.
>  // To make the pc_array thread- and async-signal-safe it has to be large enough.
> @@ -65,6 +66,21 @@ namespace __sanitizer {
>
>  class CoverageData {
>   public:
> +  explicit constexpr CoverageData(LinkerInitialized)
> +    : pc_array(0),
> +      pc_array_index(0),
> +      pc_array_size(0),
> +      pc_array_mapped_size(0),
> +      pc_fd(0),
> +      guard_array_vec(LINKER_INITIALIZED),
> +      cc_array(0),
> +      cc_array_index(0),
> +      cc_array_size(0),
> +      tr_event_array(0),
> +      tr_event_array_size(0),
> +      tr_event_pointer(0),
> +      mu(LINKER_INITIALIZED) {}
> +
>    void Init();
>    void Enable();
>    void Disable();
> @@ -134,7 +150,7 @@ class CoverageData {
>    void DirectOpen();
>  };
>
> -static CoverageData coverage_data;
> +static CoverageData coverage_data(LINKER_INITIALIZED);
>
>  void CovUpdateMapping(const char *path, uptr caller_pc = 0);
>
> diff --git a/lib/sanitizer_common/sanitizer_coverage_mapping_libcdep.cc
> b/lib/sanitizer_common/sanitizer_coverage_mapping_libcdep.cc
> index 6b5e91f..8f0ab7f 100644
> --- a/lib/sanitizer_common/sanitizer_coverage_mapping_libcdep.cc
> +++ b/lib/sanitizer_common/sanitizer_coverage_mapping_libcdep.cc
> @@ -60,7 +60,7 @@ struct CachedMapping {
>  };
>
>  static CachedMapping cached_mapping;
> -static StaticSpinMutex mapping_mu;
> +static StaticSpinMutex mapping_mu(LINKER_INITIALIZED);
>
>  void CovUpdateMapping(const char *coverage_dir, uptr caller_pc) {
>    if (!common_flags()->coverage_direct) return;
> diff --git a/lib/sanitizer_common/sanitizer_flag_parser.cc
> b/lib/sanitizer_common/sanitizer_flag_parser.cc
> index d125002..9f005a2 100644
> --- a/lib/sanitizer_common/sanitizer_flag_parser.cc
> +++ b/lib/sanitizer_common/sanitizer_flag_parser.cc
> @@ -20,7 +20,7 @@
>
>  namespace __sanitizer {
>
> -LowLevelAllocator FlagParser::Alloc;
> +LowLevelAllocator FlagParser::Alloc(LINKER_INITIALIZED);
>
>  class UnknownFlags {
>    static const int kMaxUnknownFlags = 20;
> diff --git a/lib/sanitizer_common/sanitizer_flags.cc
> b/lib/sanitizer_common/sanitizer_flags.cc
> index a296535..52e7aae 100644
> --- a/lib/sanitizer_common/sanitizer_flags.cc
> +++ b/lib/sanitizer_common/sanitizer_flags.cc
> @@ -28,7 +28,7 @@ struct FlagDescription {
>    FlagDescription *next;
>  };
>
> -IntrusiveList<FlagDescription> flag_descriptions;
> +IntrusiveList<FlagDescription> flag_descriptions(LINKER_INITIALIZED);
>
>  // If set, the tool will install its own SEGV signal handler by default.
>  #ifndef SANITIZER_NEEDS_SEGV
> diff --git a/lib/sanitizer_common/sanitizer_internal_defs.h
> b/lib/sanitizer_common/sanitizer_internal_defs.h
> index 2a0b41f..aee9c2e 100644
> --- a/lib/sanitizer_common/sanitizer_internal_defs.h
> +++ b/lib/sanitizer_common/sanitizer_internal_defs.h
> @@ -38,6 +38,10 @@
>  # define SANITIZER_SUPPORTS_WEAK_HOOKS 0
>  #endif
>
> +#if SANITIZER_WINDOWS
> +# define constexpr
> +#endif
> +
>  // We can use .preinit_array section on Linux to call sanitizer initialization
>  // functions very early in the process startup (unless PIC macro is defined).
>  // FIXME: do we have anything like this on Mac?
> @@ -332,4 +336,79 @@ extern "C" void* _ReturnAddress(void);
>      enable_fp = GET_CURRENT_FRAME();                               \
>    } while (0)
>
> +template <typename T, unsigned long N> struct LinkerInitializedArray {
> +  typedef T Type;
> +  Type elements[N];
> +
> +  constexpr LinkerInitializedArray()
> +#if SANITIZER_WINDOWS
> +  {
> +    internal_memset(elements, 0, sizeof(elements));
> +  }
> +#else
> +    : elements {} {}
> +#endif
> +
> +  Type *data() {
> +    return &elements[0];
> +  }
> +
> +  const Type *data() const {
> +    return &elements[0];
> +  }
> +
> +  Type &operator[](unsigned long i) {
> +    return elements[i];
> +  }
> +
> +  const Type &operator[](unsigned long i) const {
> +    return elements[i];
> +  }
> +};
> +
> +template <typename T, unsigned long N> struct LinkerInitializedStructArray {
> +  typedef T Type;
> +
> +#if SANITIZER_WINDOWS
> +  Type val[N];
> +
> +  constexpr LinkerInitializedStructArray() {
> +    internal_memset(val, 0, sizeof(val));
> +  }
> +
> +  Type *data() {
> +    return &val[0];
> +  }
> +
> +  const Type *data() const {
> +    return &val[0];
> +  }
> +#else
> +  union {
> +    char bytes[sizeof(T) * N];
> +    Type elements[N];
> +  } val;
> +
> +  constexpr LinkerInitializedStructArray()
> +    : val {} {}
> +
> +  Type *data() {
> +    return &val.elements[0];
> +  }
> +
> +  const Type *data() const {
> +    return &val.elements[0];
> +  }
> +#endif
> +
> +  Type &operator[](unsigned long i) {
> +    return data()[i];
> +  }
> +
> +  const Type &operator[](unsigned long i) const {
> +    return data()[i];
> +  }
> +};
> +
>  #endif  // SANITIZER_DEFS_H
> +
> diff --git a/lib/sanitizer_common/sanitizer_linux_libcdep.cc
> b/lib/sanitizer_common/sanitizer_linux_libcdep.cc
> index df42c36..5bfc667 100644
> --- a/lib/sanitizer_common/sanitizer_linux_libcdep.cc
> +++ b/lib/sanitizer_common/sanitizer_linux_libcdep.cc
> @@ -186,7 +186,7 @@ void InitTlsSize() {
>
>  #if (defined(__x86_64__) || defined(__i386__)) && SANITIZER_LINUX
>  // sizeof(struct thread) from glibc.
> -static atomic_uintptr_t kThreadDescriptorSize;
> +static atomic_uintptr_t kThreadDescriptorSize(0);
>
>  uptr ThreadDescriptorSize() {
>    uptr val = atomic_load(&kThreadDescriptorSize, memory_order_relaxed);
> diff --git a/lib/sanitizer_common/sanitizer_list.h
> b/lib/sanitizer_common/sanitizer_list.h
> index 6dd9c8f..3fb1522 100644
> --- a/lib/sanitizer_common/sanitizer_list.h
> +++ b/lib/sanitizer_common/sanitizer_list.h
> @@ -28,6 +28,13 @@ template<class Item>
>  struct IntrusiveList {
>    friend class Iterator;
>
> +  IntrusiveList() {}
> +
> +  explicit constexpr IntrusiveList(LinkerInitialized)
> +    : size_(0),
> +      first_(0),
> +      last_(0) {}
> +
>    void clear() {
>      first_ = last_ = 0;
>      size_ = 0;
> diff --git a/lib/sanitizer_common/sanitizer_mutex.h
> b/lib/sanitizer_common/sanitizer_mutex.h
> index d06fc45..01890c2 100644
> --- a/lib/sanitizer_common/sanitizer_mutex.h
> +++ b/lib/sanitizer_common/sanitizer_mutex.h
> @@ -22,6 +22,11 @@ namespace __sanitizer {
>
>  class StaticSpinMutex {
>   public:
> +  StaticSpinMutex() {}
> +
> +  explicit constexpr StaticSpinMutex(LinkerInitialized)
> +    : state_(0) {}
> +
>    void Init() {
>      atomic_store(&state_, 0, memory_order_relaxed);
>    }
> @@ -66,6 +71,9 @@ class SpinMutex : public StaticSpinMutex {
>      Init();
>    }
>
> +  explicit constexpr SpinMutex(LinkerInitialized)
> +    : StaticSpinMutex(LINKER_INITIALIZED) {}
> +
>   private:
>    SpinMutex(const SpinMutex&);
>    void operator=(const SpinMutex&);
> @@ -96,6 +104,9 @@ class RWMutex {
>      atomic_store(&state_, kUnlocked, memory_order_relaxed);
>    }
>
> +  explicit constexpr RWMutex(LinkerInitialized)
> +    : state_(0) {}
> +
>    ~RWMutex() {
>      CHECK_EQ(atomic_load(&state_, memory_order_relaxed), kUnlocked);
>    }
> diff --git a/lib/sanitizer_common/sanitizer_persistent_allocator.cc
> b/lib/sanitizer_common/sanitizer_persistent_allocator.cc
> index 5fa533a..e7cd2e4 100644
> --- a/lib/sanitizer_common/sanitizer_persistent_allocator.cc
> +++ b/lib/sanitizer_common/sanitizer_persistent_allocator.cc
> @@ -14,6 +14,6 @@
>
>  namespace __sanitizer {
>
> -PersistentAllocator thePersistentAllocator;
> +PersistentAllocator thePersistentAllocator(LINKER_INITIALIZED);
>
>  }  // namespace __sanitizer
> diff --git a/lib/sanitizer_common/sanitizer_persistent_allocator.h
> b/lib/sanitizer_common/sanitizer_persistent_allocator.h
> index 326406b..54c05bf 100644
> --- a/lib/sanitizer_common/sanitizer_persistent_allocator.h
> +++ b/lib/sanitizer_common/sanitizer_persistent_allocator.h
> @@ -22,6 +22,10 @@ namespace __sanitizer {
>
>  class PersistentAllocator {
>   public:
> +  explicit constexpr PersistentAllocator(LinkerInitialized)
> +    : mtx(LINKER_INITIALIZED),
> +      region_pos(0),
> +      region_end(0) {}
>    void *alloc(uptr size);
>
>   private:
> diff --git a/lib/sanitizer_common/sanitizer_printf.cc
> b/lib/sanitizer_common/sanitizer_printf.cc
> index 3be6723..fd299e3 100644
> --- a/lib/sanitizer_common/sanitizer_printf.cc
> +++ b/lib/sanitizer_common/sanitizer_printf.cc
> @@ -29,7 +29,7 @@
>
>  namespace __sanitizer {
>
> -StaticSpinMutex CommonSanitizerReportMutex;
> +StaticSpinMutex CommonSanitizerReportMutex(LINKER_INITIALIZED);
>
>  static int AppendChar(char **buff, const char *buff_end, char c) {
>    if (*buff < buff_end) {
> diff --git a/lib/sanitizer_common/sanitizer_procmaps_common.cc
> b/lib/sanitizer_common/sanitizer_procmaps_common.cc
> index 2ec08d7..b2efde2 100644
> --- a/lib/sanitizer_common/sanitizer_procmaps_common.cc
> +++ b/lib/sanitizer_common/sanitizer_procmaps_common.cc
> @@ -20,7 +20,7 @@ namespace __sanitizer {
>
>  // Linker initialized.
>  ProcSelfMapsBuff MemoryMappingLayout::cached_proc_self_maps_;
> -StaticSpinMutex MemoryMappingLayout::cache_lock_;  // Linker initialized.
> +StaticSpinMutex MemoryMappingLayout::cache_lock_(LINKER_INITIALIZED);
>
>  static int TranslateDigit(char c) {
>    if (c >= '0' && c <= '9')
> diff --git a/lib/sanitizer_common/sanitizer_quarantine.h
> b/lib/sanitizer_common/sanitizer_quarantine.h
> index 404d375..0628064 100644
> --- a/lib/sanitizer_common/sanitizer_quarantine.h
> +++ b/lib/sanitizer_common/sanitizer_quarantine.h
> @@ -44,9 +44,13 @@ class Quarantine {
>   public:
>    typedef QuarantineCache<Callback> Cache;
>
> -  explicit Quarantine(LinkerInitialized)
> -      : cache_(LINKER_INITIALIZED) {
> -  }
> +  explicit constexpr Quarantine(LinkerInitialized)
> +      : max_size_(0),
> +        min_size_(0),
> +        max_cache_size_(0),
> +        cache_mutex_(LINKER_INITIALIZED),
> +        recycle_mutex_(LINKER_INITIALIZED),
> +        cache_(LINKER_INITIALIZED) {}
>
>    void Init(uptr size, uptr cache_size) {
>      atomic_store(&max_size_, size, memory_order_release);
> @@ -74,15 +78,15 @@ class Quarantine {
>
>   private:
>    // Read-only data.
> -  char pad0_[kCacheLineSize];
> +  LinkerInitializedArray<char, kCacheLineSize> pad0_;
>    atomic_uintptr_t max_size_;
>    atomic_uintptr_t min_size_;
>    uptr max_cache_size_;
> -  char pad1_[kCacheLineSize];
> +  LinkerInitializedArray<char, kCacheLineSize> pad1_;
>    SpinMutex cache_mutex_;
>    SpinMutex recycle_mutex_;
>    Cache cache_;
> -  char pad2_[kCacheLineSize];
> +  LinkerInitializedArray<char, kCacheLineSize> pad2_;
>
>    void NOINLINE Recycle(Callback cb) {
>      Cache tmp;
> @@ -116,8 +120,9 @@ class Quarantine {
>  template<typename Callback>
>  class QuarantineCache {
>   public:
> -  explicit QuarantineCache(LinkerInitialized) {
> -  }
> +  explicit constexpr QuarantineCache(LinkerInitialized)
> +    : list_(LINKER_INITIALIZED),
> +      size_(0) {}
>
>    QuarantineCache()
>        : size_() {
> diff --git a/lib/sanitizer_common/sanitizer_stackdepot.cc
> b/lib/sanitizer_common/sanitizer_stackdepot.cc
> index 59b53f4..0e3fbb5 100644
> --- a/lib/sanitizer_common/sanitizer_stackdepot.cc
> +++ b/lib/sanitizer_common/sanitizer_stackdepot.cc
> @@ -102,7 +102,7 @@ void StackDepotHandle::inc_use_count_unsafe() {
>  // FIXME(dvyukov): this single reserved bit is used in TSan.
>  typedef StackDepotBase<StackDepotNode, 1, StackDepotNode::kTabSizeLog>
>      StackDepot;
> -static StackDepot theDepot;
> +static StackDepot theDepot(LINKER_INITIALIZED);
>
>  StackDepotStats *StackDepotGetStats() {
>    return theDepot.GetStats();
> diff --git a/lib/sanitizer_common/sanitizer_stackdepotbase.h
> b/lib/sanitizer_common/sanitizer_stackdepotbase.h
> index 5de2e71..a8418b0 100644
> --- a/lib/sanitizer_common/sanitizer_stackdepotbase.h
> +++ b/lib/sanitizer_common/sanitizer_stackdepotbase.h
> @@ -23,6 +23,9 @@ namespace __sanitizer {
>  template <class Node, int kReservedBits, int kTabSizeLog>
>  class StackDepotBase {
>   public:
> +  explicit constexpr StackDepotBase(LinkerInitialized)
> +    : stats { 0, 0 } {}
> +
>    typedef typename Node::args_type args_type;
>    typedef typename Node::handle_type handle_type;
>    // Maps stack trace to an unique id.
> @@ -48,8 +51,11 @@ class StackDepotBase {
>    static const int kPartSize = kTabSize / kPartCount;
>    static const int kMaxId = 1 << kPartShift;
>
> -  atomic_uintptr_t tab[kTabSize];   // Hash table of Node's.
> -  atomic_uint32_t seq[kPartCount];  // Unique id generators.
> +  // Hash table of Node's.
> +  LinkerInitializedStructArray<atomic_uintptr_t, kTabSize> tab;
> +
> +  // Unique id generators.
> +  LinkerInitializedStructArray<atomic_uint32_t, kPartCount> seq;
>
>    StackDepotStats stats;
>
> diff --git a/lib/sanitizer_common/sanitizer_symbolizer.cc
> b/lib/sanitizer_common/sanitizer_symbolizer.cc
> index 135720e..b7194d3 100644
> --- a/lib/sanitizer_common/sanitizer_symbolizer.cc
> +++ b/lib/sanitizer_common/sanitizer_symbolizer.cc
> @@ -67,8 +67,8 @@ void DataInfo::Clear() {
>  }
>
>  Symbolizer *Symbolizer::symbolizer_;
> -StaticSpinMutex Symbolizer::init_mu_;
> -LowLevelAllocator Symbolizer::symbolizer_allocator_;
> +StaticSpinMutex Symbolizer::init_mu_(LINKER_INITIALIZED);
> +LowLevelAllocator Symbolizer::symbolizer_allocator_(LINKER_INITIALIZED);
>
>  Symbolizer *Symbolizer::Disable() {
>    CHECK_EQ(0, symbolizer_);
> diff --git a/lib/sanitizer_common/sanitizer_tls_get_addr.cc
> b/lib/sanitizer_common/sanitizer_tls_get_addr.cc
> index 6142ce5..98033d4 100644
> --- a/lib/sanitizer_common/sanitizer_tls_get_addr.cc
> +++ b/lib/sanitizer_common/sanitizer_tls_get_addr.cc
> @@ -39,7 +39,7 @@ static __thread DTLS dtls;
>
>  // Make sure we properly destroy the DTLS objects:
>  // this counter should never get too large.
> -static atomic_uintptr_t number_of_live_dtls;
> +static atomic_uintptr_t number_of_live_dtls(0);
>
>  static const uptr kDestroyedThread = -1;
>
> diff --git a/lib/sanitizer_common/tests/sanitizer_atomic_test.cc
> b/lib/sanitizer_common/tests/sanitizer_atomic_test.cc
> index 56bcd35..c42a145 100644
> --- a/lib/sanitizer_common/tests/sanitizer_atomic_test.cc
> +++ b/lib/sanitizer_common/tests/sanitizer_atomic_test.cc
> @@ -41,11 +41,11 @@ void CheckStoreLoad() {
>      v |= v << 8;
>      v |= v << 16;
>      v |= v << 32;
> -    val.a.val_dont_use = (Type)v;
> +    val.a.val() = (Type)v;
>      EXPECT_EQ(atomic_load(&val.a, load_mo), (Type)v);
> -    val.a.val_dont_use = (Type)-1;
> +    val.a.val() = (Type)-1;
>      atomic_store(&val.a, (Type)v, store_mo);
> -    EXPECT_EQ(val.a.val_dont_use, (Type)v);
> +    EXPECT_EQ(val.a.val(), (Type)v);
>    }
>    EXPECT_EQ(val.magic0, (Type)-3);
>    EXPECT_EQ(val.magic1, (Type)-3);
> diff --git a/lib/sanitizer_common/tests/sanitizer_thread_registry_test.cc
> b/lib/sanitizer_common/tests/sanitizer_thread_registry_test.cc
> index 58c627a..0962c20 100644
> --- a/lib/sanitizer_common/tests/sanitizer_thread_registry_test.cc
> +++ b/lib/sanitizer_common/tests/sanitizer_thread_registry_test.cc
> @@ -21,7 +21,7 @@
>  namespace __sanitizer {
>
>  static BlockingMutex tctx_allocator_lock(LINKER_INITIALIZED);
> -static LowLevelAllocator tctx_allocator;
> +static LowLevelAllocator tctx_allocator(LINKER_INITIALIZED);
>
>  template<typename TCTX>
>  static ThreadContextBase *GetThreadContext(u32 tid) {
> diff --git a/lib/tsan/rtl/tsan_fd.cc b/lib/tsan/rtl/tsan_fd.cc
> index d18502f..b61f9c0 100644
> --- a/lib/tsan/rtl/tsan_fd.cc
> +++ b/lib/tsan/rtl/tsan_fd.cc
> @@ -23,6 +23,8 @@ const int kTableSize = kTableSizeL1 * kTableSizeL2;
>
>  struct FdSync {
>    atomic_uint64_t rc;
> +  constexpr FdSync()
> +    : rc(0) {}
>  };
>
>  struct FdDesc {
> @@ -32,12 +34,14 @@ struct FdDesc {
>  };
>
>  struct FdContext {
> -  atomic_uintptr_t tab[kTableSizeL1];
> +  LinkerInitializedStructArray<atomic_uintptr_t, kTableSizeL1> tab;
>    // Addresses used for synchronization.
>    FdSync globsync;
>    FdSync filesync;
>    FdSync socksync;
>    u64 connectsync;
> +  explicit constexpr FdContext()
> +    : connectsync(0) {}
>  };
>
>  static FdContext fdctx;
> diff --git a/lib/tsan/rtl/tsan_interface_atomic.cc
> b/lib/tsan/rtl/tsan_interface_atomic.cc
> index 9b69951..acd3d1b 100644
> --- a/lib/tsan/rtl/tsan_interface_atomic.cc
> +++ b/lib/tsan/rtl/tsan_interface_atomic.cc
> @@ -42,7 +42,7 @@ __extension__ typedef __int128 a128;
>
>  #ifndef SANITIZER_GO
>  // Protects emulation of 128-bit atomic operations.
> -static StaticSpinMutex mutex128;
> +static StaticSpinMutex mutex128(LINKER_INITIALIZED);
>  #endif
>
>  // Part of ABI, do not change.
> diff --git a/lib/tsan/rtl/tsan_rtl.cc b/lib/tsan/rtl/tsan_rtl.cc
> index b3320aa..71f7d5f 100644
> --- a/lib/tsan/rtl/tsan_rtl.cc
> +++ b/lib/tsan/rtl/tsan_rtl.cc
> @@ -103,6 +103,8 @@ ThreadState::ThreadState(Context *ctx, int tid, int
> unique_id, u64 epoch,
>    // , ignore_interceptors()
>    , clock(tid, reuse_count)
>  #ifndef SANITIZER_GO
> +  , alloc_cache(LINKER_INITIALIZED)
> +  , internal_alloc_cache(LINKER_INITIALIZED)
>    , jmp_bufs(MBlockJmpBuf)
>  #endif
>    , tid(tid)
> diff --git a/lib/tsan/tests/rtl/tsan_test_util_linux.cc
> b/lib/tsan/tests/rtl/tsan_test_util_linux.cc
> index 9298bf0..d2e40c9 100644
> --- a/lib/tsan/tests/rtl/tsan_test_util_linux.cc
> +++ b/lib/tsan/tests/rtl/tsan_test_util_linux.cc
> @@ -77,7 +77,7 @@ bool OnReport(const ReportDesc *rep, bool suppressed) {
>
>  static void* allocate_addr(int size, int offset_from_aligned = 0) {
>    static uintptr_t foo;
> -  static atomic_uintptr_t uniq = {(uintptr_t)&foo};  // Some real address.
> +  static atomic_uintptr_t uniq((uintptr_t)&foo);  // Some real address.
>    const int kAlign = 16;
>    CHECK(offset_from_aligned < kAlign);
>    size = (size + 2 * kAlign) & ~(kAlign - 1);
>
>
>


-- 
Alexey Samsonov
vonosmas at gmail.com