[PATCH] D16689: [tsan] Dynamically detect heap range for aarch64.

Dmitry Vyukov via llvm-commits llvm-commits at lists.llvm.org
Wed Feb 3 07:02:14 PST 2016


dvyukov added a comment.

The mapping becomes more and more messy. But I don't see any simple solution, so I guess I am OK with this.
Are there any objections from ARM people?

We also have problems on x86: without ASLR, modules have moved to 0x5555.
If we agree to have 1 memory access in the mem->shadow translation, we could split the address space into, say, 256 parts and have an arbitrary mapping between these parts. Something along the lines of:

uptr mapping[256];

uptr MemToShadow(uptr x) {
  return (x << 2) ^ mapping[x >> kAddressShift];
}

This would allow a platform to have up to 50 regions with user memory arbitrarily mapped onto 50*4 shadow regions.
Just thinking aloud.


================
Comment at: lib/tsan/rtl/tsan_platform.h:610
@@ +609,3 @@
+#ifndef SANITIZER_GO
+  if (x >= Mapping::kHeapMemBeg && x < Mapping::kHeapMemEnd)
+    return ((x - Mapping::kHeapMemBeg) & ~(kShadowCell - 1)) * kShadowCnt
----------------
It should be faster to use [kHeapMemBeg, kHeapMemBeg + kHeapMemSize) instead of [kHeapMemBeg, kHeapMemEnd), because kHeapMemSize is a const.



http://reviews.llvm.org/D16689
