[compiler-rt] 45f6d55 - [DFSan] Change shadow and origin memory layouts to match MSan.

Andrew Browne via llvm-commits llvm-commits at lists.llvm.org
Fri Jun 25 17:01:25 PDT 2021


Author: Andrew Browne
Date: 2021-06-25T17:00:38-07:00
New Revision: 45f6d5522f8d9b6439a40885ed30ad898089a2cd

URL: https://github.com/llvm/llvm-project/commit/45f6d5522f8d9b6439a40885ed30ad898089a2cd
DIFF: https://github.com/llvm/llvm-project/commit/45f6d5522f8d9b6439a40885ed30ad898089a2cd.diff

LOG: [DFSan] Change shadow and origin memory layouts to match MSan.

Previously on x86_64:

  +--------------------+ 0x800000000000 (top of memory)
  | application memory |
  +--------------------+ 0x700000008000 (kAppAddr)
  |                    |
  |       unused       |
  |                    |
  +--------------------+ 0x300000000000 (kUnusedAddr)
  |       origin       |
  +--------------------+ 0x200000008000 (kOriginAddr)
  |       unused       |
  +--------------------+ 0x200000000000
  |   shadow memory    |
  +--------------------+ 0x100000008000 (kShadowAddr)
  |       unused       |
  +--------------------+ 0x000000010000
  | reserved by kernel |
  +--------------------+ 0x000000000000

  MEM_TO_SHADOW(mem) = mem & ~0x600000000000
  SHADOW_TO_ORIGIN(shadow) = kOriginAddr - kShadowAddr + shadow
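
  For example, under the old mapping an application address had bits 45-46
  cleared to produce its shadow address:

    0x700000009000 & ~0x600000000000 = 0x100000009000

  which lands in the shadow region [0x100000008000, 0x200000000000).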

Now on x86_64:

  +--------------------+ 0x800000000000 (top of memory)
  |    application 3   |
  +--------------------+ 0x700000000000
  |      invalid       |
  +--------------------+ 0x610000000000
  |      origin 1      |
  +--------------------+ 0x600000000000
  |    application 2   |
  +--------------------+ 0x510000000000
  |      shadow 1      |
  +--------------------+ 0x500000000000
  |      invalid       |
  +--------------------+ 0x400000000000
  |      origin 3      |
  +--------------------+ 0x300000000000
  |      shadow 3      |
  +--------------------+ 0x200000000000
  |      origin 2      |
  +--------------------+ 0x110000000000
  |      invalid       |
  +--------------------+ 0x100000000000
  |      shadow 2      |
  +--------------------+ 0x010000000000
  |    application 1   |
  +--------------------+ 0x000000000000

  MEM_TO_SHADOW(mem) = mem ^ 0x500000000000
  SHADOW_TO_ORIGIN(shadow) = shadow + 0x100000000000
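
  For illustration, the new mapping can be exercised as standalone C++. This
  is a minimal sketch, not part of the patch: the constant names are made up
  here, the values mirror the formulas above, and the sample address is an
  arbitrary app-1 address rather than anything the runtime guarantees:

    #include <cstdint>
    #include <cstdio>

    constexpr uint64_t kShadowXorMask = 0x500000000000ULL;
    constexpr uint64_t kOriginOffset = 0x100000000000ULL;

    constexpr uint64_t MemToShadow(uint64_t Mem) { return Mem ^ kShadowXorMask; }
    constexpr uint64_t ShadowToOrigin(uint64_t S) { return S + kOriginOffset; }

    int main() {
      uint64_t App = 0x000012345678ULL;    // an address in app 1
      uint64_t Shadow = MemToShadow(App);  // 0x500012345678, in shadow 1
      // Origins are 4-byte aligned, hence the & ~3.
      uint64_t Origin = ShadowToOrigin(Shadow) & ~3ULL;  // 0x600012345678, origin 1
      printf("shadow=0x%llx origin=0x%llx\n", (unsigned long long)Shadow,
             (unsigned long long)Origin);
      return 0;
    }

  Note that each application region maps to the same-numbered shadow region
  with a single xor, so the shadow mapping is its own inverse.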

Reviewed By: stephan.yichao.zhao, gbalats

Differential Revision: https://reviews.llvm.org/D104896

Added: 
    

Modified: 
    clang/docs/DataFlowSanitizerDesign.rst
    compiler-rt/lib/dfsan/dfsan.cpp
    compiler-rt/lib/dfsan/dfsan.h
    compiler-rt/lib/dfsan/dfsan_allocator.cpp
    compiler-rt/lib/dfsan/dfsan_platform.h
    compiler-rt/test/dfsan/origin_invalid.c
    llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp
    llvm/test/Instrumentation/DataFlowSanitizer/atomics.ll
    llvm/test/Instrumentation/DataFlowSanitizer/basic.ll
    llvm/test/Instrumentation/DataFlowSanitizer/load.ll
    llvm/test/Instrumentation/DataFlowSanitizer/origin_load.ll
    llvm/test/Instrumentation/DataFlowSanitizer/origin_store.ll
    llvm/test/Instrumentation/DataFlowSanitizer/store.ll

Removed: 
    


################################################################################
diff --git a/clang/docs/DataFlowSanitizerDesign.rst b/clang/docs/DataFlowSanitizerDesign.rst
index fc1ca9840f6e..7615a2acc58b 100644
--- a/clang/docs/DataFlowSanitizerDesign.rst
+++ b/clang/docs/DataFlowSanitizerDesign.rst
@@ -76,25 +76,41 @@ The following is the memory layout for Linux/x86\_64:
 +---------------+---------------+--------------------+
 |    Start      |    End        |        Use         |
 +===============+===============+====================+
-| 0x700000008000|0x800000000000 | application memory |
+| 0x700000000000|0x800000000000 |    application 3   |
 +---------------+---------------+--------------------+
-| 0x300000000000|0x700000008000 |       unused       |
+| 0x610000000000|0x700000000000 |       unused       |
 +---------------+---------------+--------------------+
-| 0x200000008000|0x300000000000 |       origin       |
+| 0x600000000000|0x610000000000 |      origin 1      |
 +---------------+---------------+--------------------+
-| 0x200000000000|0x200000008000 |       unused       |
+| 0x510000000000|0x600000000000 |    application 2   |
 +---------------+---------------+--------------------+
-| 0x100000008000|0x200000000000 |   shadow memory    |
+| 0x500000000000|0x510000000000 |      shadow 1      |
 +---------------+---------------+--------------------+
-| 0x000000010000|0x100000008000 |       unused       |
+| 0x400000000000|0x500000000000 |       unused       |
 +---------------+---------------+--------------------+
-| 0x000000000000|0x000000010000 | reserved by kernel |
+| 0x300000000000|0x400000000000 |      origin 3      |
++---------------+---------------+--------------------+
+| 0x200000000000|0x300000000000 |      shadow 3      |
++---------------+---------------+--------------------+
+| 0x110000000000|0x200000000000 |      origin 2      |
++---------------+---------------+--------------------+
+| 0x100000000000|0x110000000000 |       unused       |
++---------------+---------------+--------------------+
+| 0x010000000000|0x100000000000 |      shadow 2      |
++---------------+---------------+--------------------+
+| 0x000000000000|0x010000000000 |    application 1   |
 +---------------+---------------+--------------------+
 
 Each byte of application memory corresponds to a single byte of shadow
-memory, which is used to store its taint label. As for LLVM SSA
-registers, we have not found it necessary to associate a label with
-each byte or bit of data, as some other tools do. Instead, labels are
+memory, which is used to store its taint label. We map memory, shadow, and
+origin regions to each other with these masks and offsets:
+
+* shadow_addr = memory_addr ^ 0x500000000000
+
+* origin_addr = shadow_addr + 0x100000000000
+
+As for LLVM SSA registers, we have not found it necessary to associate a label
+with each byte or bit of data, as some other tools do. Instead, labels are
 associated directly with registers.  Loads will result in a union of
 all shadow labels corresponding to bytes loaded, and stores will
 result in a copy of the label of the stored value to the shadow of all

diff --git a/compiler-rt/lib/dfsan/dfsan.cpp b/compiler-rt/lib/dfsan/dfsan.cpp
index 9f047125126e..a029500c5fa1 100644
--- a/compiler-rt/lib/dfsan/dfsan.cpp
+++ b/compiler-rt/lib/dfsan/dfsan.cpp
@@ -64,30 +64,34 @@ int __dfsan_get_track_origins() {
 
 // On Linux/x86_64, memory is laid out as follows:
 //
-// +--------------------+ 0x800000000000 (top of memory)
-// | application memory |
-// +--------------------+ 0x700000008000 (kAppAddr)
-// |                    |
-// |       unused       |
-// |                    |
-// +--------------------+ 0x300000000000 (kUnusedAddr)
-// |       origin       |
-// +--------------------+ 0x200000008000 (kOriginAddr)
-// |       unused       |
-// +--------------------+ 0x200000000000
-// |   shadow memory    |
-// +--------------------+ 0x100000008000 (kShadowAddr)
-// |       unused       |
-// +--------------------+ 0x000000010000
-// | reserved by kernel |
-// +--------------------+ 0x000000000000
+//  +--------------------+ 0x800000000000 (top of memory)
+//  |    application 3   |
+//  +--------------------+ 0x700000000000
+//  |      invalid       |
+//  +--------------------+ 0x610000000000
+//  |      origin 1      |
+//  +--------------------+ 0x600000000000
+//  |    application 2   |
+//  +--------------------+ 0x510000000000
+//  |      shadow 1      |
+//  +--------------------+ 0x500000000000
+//  |      invalid       |
+//  +--------------------+ 0x400000000000
+//  |      origin 3      |
+//  +--------------------+ 0x300000000000
+//  |      shadow 3      |
+//  +--------------------+ 0x200000000000
+//  |      origin 2      |
+//  +--------------------+ 0x110000000000
+//  |      invalid       |
+//  +--------------------+ 0x100000000000
+//  |      shadow 2      |
+//  +--------------------+ 0x010000000000
+//  |    application 1   |
+//  +--------------------+ 0x000000000000
 //
-// To derive a shadow memory address from an application memory address, bits
-// 45-46 are cleared to bring the address into the range
-// [0x100000008000,0x200000000000).  See the function shadow_for below.
-//
-//
-
+//  MEM_TO_SHADOW(mem) = mem ^ 0x500000000000
+//  SHADOW_TO_ORIGIN(shadow) = shadow + 0x100000000000
 
 extern "C" SANITIZER_INTERFACE_ATTRIBUTE
 dfsan_label __dfsan_union_load(const dfsan_label *ls, uptr n) {
@@ -160,14 +164,12 @@ static uptr OriginAlignDown(uptr u) { return u & kOriginAlignMask; }
 static dfsan_origin GetOriginIfTainted(uptr addr, uptr size) {
   for (uptr i = 0; i < size; ++i, ++addr) {
     dfsan_label *s = shadow_for((void *)addr);
-    if (!is_shadow_addr_valid((uptr)s)) {
-      // The current DFSan memory layout is not always correct. For example,
-      // addresses (0, 0x10000) are mapped to (0, 0x10000). Before fixing the
-      // issue, we ignore such addresses.
-      continue;
-    }
-    if (*s)
+
+    if (*s) {
+      // Validate address region.
+      CHECK(MEM_IS_SHADOW(s));
       return *(dfsan_origin *)origin_for((void *)addr);
+    }
   }
   return 0;
 }
@@ -317,10 +319,12 @@ static void ReverseCopyOrigin(const void *dst, const void *src, uptr size,
 // operation.
 static void MoveOrigin(const void *dst, const void *src, uptr size,
                        StackTrace *stack) {
-  if (!has_valid_shadow_addr(dst) ||
-      !has_valid_shadow_addr((void *)((uptr)dst + size)) ||
-      !has_valid_shadow_addr(src) ||
-      !has_valid_shadow_addr((void *)((uptr)src + size))) {
+  // Validate address regions.
+  if (!MEM_IS_SHADOW(shadow_for(dst)) ||
+      !MEM_IS_SHADOW(shadow_for((void *)((uptr)dst + size))) ||
+      !MEM_IS_SHADOW(shadow_for(src)) ||
+      !MEM_IS_SHADOW(shadow_for((void *)((uptr)src + size)))) {
+    CHECK(false);
     return;
   }
   // If destination origin range overlaps with source origin range, move
@@ -833,8 +837,149 @@ void dfsan_clear_thread_local_state() {
 }
 
 extern "C" void dfsan_flush() {
-  if (!MmapFixedSuperNoReserve(ShadowAddr(), UnusedAddr() - ShadowAddr()))
-    Die();
+  const uptr maxVirtualAddress = GetMaxUserVirtualAddress();
+  for (unsigned i = 0; i < kMemoryLayoutSize; ++i) {
+    uptr start = kMemoryLayout[i].start;
+    uptr end = kMemoryLayout[i].end;
+    uptr size = end - start;
+    MappingDesc::Type type = kMemoryLayout[i].type;
+
+    if (type != MappingDesc::SHADOW && type != MappingDesc::ORIGIN)
+      continue;
+
+    // Check if the segment should be mapped based on platform constraints.
+    if (start >= maxVirtualAddress)
+      continue;
+
+    if (!MmapFixedSuperNoReserve(start, size, kMemoryLayout[i].name)) {
+      Printf("FATAL: DataFlowSanitizer: failed to clear memory region\n");
+      Die();
+    }
+  }
+}
+
+// TODO: CheckMemoryLayoutSanity is based on msan.
+// Consider refactoring these into a shared implementation.
+static void CheckMemoryLayoutSanity() {
+  uptr prev_end = 0;
+  for (unsigned i = 0; i < kMemoryLayoutSize; ++i) {
+    uptr start = kMemoryLayout[i].start;
+    uptr end = kMemoryLayout[i].end;
+    MappingDesc::Type type = kMemoryLayout[i].type;
+    CHECK_LT(start, end);
+    CHECK_EQ(prev_end, start);
+    CHECK(addr_is_type(start, type));
+    CHECK(addr_is_type((start + end) / 2, type));
+    CHECK(addr_is_type(end - 1, type));
+    if (type == MappingDesc::APP) {
+      uptr addr = start;
+      CHECK(MEM_IS_SHADOW(MEM_TO_SHADOW(addr)));
+      CHECK(MEM_IS_ORIGIN(MEM_TO_ORIGIN(addr)));
+      CHECK_EQ(MEM_TO_ORIGIN(addr), SHADOW_TO_ORIGIN(MEM_TO_SHADOW(addr)));
+
+      addr = (start + end) / 2;
+      CHECK(MEM_IS_SHADOW(MEM_TO_SHADOW(addr)));
+      CHECK(MEM_IS_ORIGIN(MEM_TO_ORIGIN(addr)));
+      CHECK_EQ(MEM_TO_ORIGIN(addr), SHADOW_TO_ORIGIN(MEM_TO_SHADOW(addr)));
+
+      addr = end - 1;
+      CHECK(MEM_IS_SHADOW(MEM_TO_SHADOW(addr)));
+      CHECK(MEM_IS_ORIGIN(MEM_TO_ORIGIN(addr)));
+      CHECK_EQ(MEM_TO_ORIGIN(addr), SHADOW_TO_ORIGIN(MEM_TO_SHADOW(addr)));
+    }
+    prev_end = end;
+  }
+}
+
+// TODO: CheckMemoryRangeAvailability is based on msan.
+// Consider refactoring these into a shared implementation.
+static bool CheckMemoryRangeAvailability(uptr beg, uptr size) {
+  if (size > 0) {
+    uptr end = beg + size - 1;
+    if (!MemoryRangeIsAvailable(beg, end)) {
+      Printf("FATAL: Memory range %p - %p is not available.\n", beg, end);
+      return false;
+    }
+  }
+  return true;
+}
+
+// TODO: ProtectMemoryRange is based on msan.
+// Consider refactoring these into a shared implementation.
+static bool ProtectMemoryRange(uptr beg, uptr size, const char *name) {
+  if (size > 0) {
+    void *addr = MmapFixedNoAccess(beg, size, name);
+    if (beg == 0 && addr) {
+      // Depending on the kernel configuration, we may not be able to protect
+      // the page at address zero.
+      uptr gap = 16 * GetPageSizeCached();
+      beg += gap;
+      size -= gap;
+      addr = MmapFixedNoAccess(beg, size, name);
+    }
+    if ((uptr)addr != beg) {
+      uptr end = beg + size - 1;
+      Printf("FATAL: Cannot protect memory range %p - %p (%s).\n", beg, end,
+             name);
+      return false;
+    }
+  }
+  return true;
+}
+
+// TODO: InitShadow is based on msan.
+// Consider refactoring these into a shared implementation.
+bool InitShadow(bool init_origins) {
+  // Let user know mapping parameters first.
+  VPrintf(1, "dfsan_init %p\n", &__dfsan::dfsan_init);
+  for (unsigned i = 0; i < kMemoryLayoutSize; ++i)
+    VPrintf(1, "%s: %zx - %zx\n", kMemoryLayout[i].name, kMemoryLayout[i].start,
+            kMemoryLayout[i].end - 1);
+
+  CheckMemoryLayoutSanity();
+
+  if (!MEM_IS_APP(&__dfsan::dfsan_init)) {
+    Printf("FATAL: Code %p is out of application range. Non-PIE build?\n",
+           (uptr)&__dfsan::dfsan_init);
+    return false;
+  }
+
+  const uptr maxVirtualAddress = GetMaxUserVirtualAddress();
+
+  for (unsigned i = 0; i < kMemoryLayoutSize; ++i) {
+    uptr start = kMemoryLayout[i].start;
+    uptr end = kMemoryLayout[i].end;
+    uptr size = end - start;
+    MappingDesc::Type type = kMemoryLayout[i].type;
+
+    // Check if the segment should be mapped based on platform constraints.
+    if (start >= maxVirtualAddress)
+      continue;
+
+    bool map = type == MappingDesc::SHADOW ||
+               (init_origins && type == MappingDesc::ORIGIN);
+    bool protect = type == MappingDesc::INVALID ||
+                   (!init_origins && type == MappingDesc::ORIGIN);
+    CHECK(!(map && protect));
+    if (!map && !protect)
+      CHECK(type == MappingDesc::APP);
+    if (map) {
+      if (!CheckMemoryRangeAvailability(start, size))
+        return false;
+      if (!MmapFixedSuperNoReserve(start, size, kMemoryLayout[i].name))
+        return false;
+      if (common_flags()->use_madv_dontdump)
+        DontDumpShadowMemory(start, size);
+    }
+    if (protect) {
+      if (!CheckMemoryRangeAvailability(start, size))
+        return false;
+      if (!ProtectMemoryRange(start, size, kMemoryLayout[i].name))
+        return false;
+    }
+  }
+
+  return true;
 }
 
 static void DFsanInit(int argc, char **argv, char **envp) {
@@ -848,18 +993,9 @@ static void DFsanInit(int argc, char **argv, char **envp) {
 
   InitializeFlags();
 
-  dfsan_flush();
-  if (common_flags()->use_madv_dontdump)
-    DontDumpShadowMemory(ShadowAddr(), UnusedAddr() - ShadowAddr());
-
-  // Protect the region of memory we don't use, to preserve the one-to-one
-  // mapping from application to shadow memory. But if ASLR is disabled, Linux
-  // will load our executable in the middle of our unused region. This mostly
-  // works so long as the program doesn't use too much memory. We support this
-  // case by disabling memory protection when ASLR is disabled.
-  uptr init_addr = (uptr)&DFsanInit;
-  if (!(init_addr >= UnusedAddr() && init_addr < AppAddr()))
-    MmapFixedNoAccess(UnusedAddr(), AppAddr() - UnusedAddr());
+  CheckASLR();
+
+  InitShadow(__dfsan_get_track_origins());
 
   initialize_interceptors();
 

diff --git a/compiler-rt/lib/dfsan/dfsan.h b/compiler-rt/lib/dfsan/dfsan.h
index 43dfb008b564..b212298157eb 100644
--- a/compiler-rt/lib/dfsan/dfsan.h
+++ b/compiler-rt/lib/dfsan/dfsan.h
@@ -61,16 +61,14 @@ extern bool dfsan_init_is_running;
 void initialize_interceptors();
 
 inline dfsan_label *shadow_for(void *ptr) {
-  return (dfsan_label *)(((uptr)ptr) & ShadowMask());
+  return (dfsan_label *)MEM_TO_SHADOW(ptr);
 }
 
 inline const dfsan_label *shadow_for(const void *ptr) {
   return shadow_for(const_cast<void *>(ptr));
 }
 
-inline uptr unaligned_origin_for(uptr ptr) {
-  return OriginAddr() - ShadowAddr() + (ptr & ShadowMask());
-}
+inline uptr unaligned_origin_for(uptr ptr) { return MEM_TO_ORIGIN(ptr); }
 
 inline dfsan_origin *origin_for(void *ptr) {
   auto aligned_addr = unaligned_origin_for(reinterpret_cast<uptr>(ptr)) &
@@ -82,24 +80,6 @@ inline const dfsan_origin *origin_for(const void *ptr) {
   return origin_for(const_cast<void *>(ptr));
 }
 
-inline bool is_shadow_addr_valid(uptr shadow_addr) {
-  return (uptr)shadow_addr >= ShadowAddr() && (uptr)shadow_addr < OriginAddr();
-}
-
-inline bool has_valid_shadow_addr(const void *ptr) {
-  const dfsan_label *ptr_s = shadow_for(ptr);
-  return is_shadow_addr_valid((uptr)ptr_s);
-}
-
-inline bool is_origin_addr_valid(uptr origin_addr) {
-  return (uptr)origin_addr >= OriginAddr() && (uptr)origin_addr < UnusedAddr();
-}
-
-inline bool has_valid_origin_addr(const void *ptr) {
-  const dfsan_origin *ptr_orig = origin_for(ptr);
-  return is_origin_addr_valid((uptr)ptr_orig);
-}
-
 void dfsan_copy_memory(void *dst, const void *src, uptr size);
 
 void dfsan_allocator_init();

diff --git a/compiler-rt/lib/dfsan/dfsan_allocator.cpp b/compiler-rt/lib/dfsan/dfsan_allocator.cpp
index a18cdb14b93a..b2e94564446e 100644
--- a/compiler-rt/lib/dfsan/dfsan_allocator.cpp
+++ b/compiler-rt/lib/dfsan/dfsan_allocator.cpp
@@ -33,15 +33,11 @@ struct DFsanMapUnmapCallback {
   void OnUnmap(uptr p, uptr size) const { dfsan_set_label(0, (void *)p, size); }
 };
 
+static const uptr kAllocatorSpace = 0x700000000000ULL;
 static const uptr kMaxAllowedMallocSize = 8UL << 30;
 
 struct AP64 {  // Allocator64 parameters. Deliberately using a short name.
-  // TODO: DFSan assumes application memory starts from 0x700000008000. For
-  // unknown reason, the sanitizer allocator does not support any start address
-  // between 0x701000000000 and 0x700000008000. After switching to fast8labels
-  // mode, DFSan memory layout will be changed to the same to MSan's. Then we
-  // set the start address to 0x700000000000 as MSan.
-  static const uptr kSpaceBeg = 0x701000000000ULL;
+  static const uptr kSpaceBeg = kAllocatorSpace;
   static const uptr kSpaceSize = 0x40000000000;  // 4T.
   static const uptr kMetadataSize = sizeof(Metadata);
   typedef DefaultSizeClassMap SizeClassMap;

diff --git a/compiler-rt/lib/dfsan/dfsan_platform.h b/compiler-rt/lib/dfsan/dfsan_platform.h
index 64a093ff97d1..9b4333ee99d0 100644
--- a/compiler-rt/lib/dfsan/dfsan_platform.h
+++ b/compiler-rt/lib/dfsan/dfsan_platform.h
@@ -15,69 +15,73 @@
 #define DFSAN_PLATFORM_H
 
 #include "sanitizer_common/sanitizer_common.h"
+#include "sanitizer_common/sanitizer_platform.h"
 
 namespace __dfsan {
 
 using __sanitizer::uptr;
 
-struct Mapping {
-  static const uptr kShadowAddr = 0x100000008000;
-  static const uptr kOriginAddr = 0x200000008000;
-  static const uptr kUnusedAddr = 0x300000000000;
-  static const uptr kAppAddr = 0x700000008000;
-  static const uptr kShadowMask = ~0x600000000000;
-};
+// TODO: The memory mapping code to setup a 1:1 shadow is based on msan.
+// Consider refactoring these into a shared implementation.
 
-enum MappingType {
-  MAPPING_SHADOW_ADDR,
-  MAPPING_ORIGIN_ADDR,
-  MAPPING_UNUSED_ADDR,
-  MAPPING_APP_ADDR,
-  MAPPING_SHADOW_MASK
+struct MappingDesc {
+  uptr start;
+  uptr end;
+  enum Type { INVALID, APP, SHADOW, ORIGIN } type;
+  const char *name;
 };
 
-template<typename Mapping, int Type>
-uptr MappingImpl(void) {
-  switch (Type) {
-    case MAPPING_SHADOW_ADDR: return Mapping::kShadowAddr;
-#if defined(__x86_64__)
-    case MAPPING_ORIGIN_ADDR:
-      return Mapping::kOriginAddr;
-#endif
-    case MAPPING_UNUSED_ADDR:
-      return Mapping::kUnusedAddr;
-    case MAPPING_APP_ADDR: return Mapping::kAppAddr;
-    case MAPPING_SHADOW_MASK: return Mapping::kShadowMask;
-  }
-}
+#if SANITIZER_LINUX && SANITIZER_WORDSIZE == 64
 
-template<int Type>
-uptr MappingArchImpl(void) {
-  return MappingImpl<Mapping, Type>();
-}
+// All of the following configurations are supported.
+// ASLR disabled: main executable and DSOs at 0x555550000000
+// PIE and ASLR: main executable and DSOs at 0x7f0000000000
+// non-PIE: main executable below 0x100000000, DSOs at 0x7f0000000000
+// Heap at 0x700000000000.
+const MappingDesc kMemoryLayout[] = {
+    {0x000000000000ULL, 0x010000000000ULL, MappingDesc::APP, "app-1"},
+    {0x010000000000ULL, 0x100000000000ULL, MappingDesc::SHADOW, "shadow-2"},
+    {0x100000000000ULL, 0x110000000000ULL, MappingDesc::INVALID, "invalid"},
+    {0x110000000000ULL, 0x200000000000ULL, MappingDesc::ORIGIN, "origin-2"},
+    {0x200000000000ULL, 0x300000000000ULL, MappingDesc::SHADOW, "shadow-3"},
+    {0x300000000000ULL, 0x400000000000ULL, MappingDesc::ORIGIN, "origin-3"},
+    {0x400000000000ULL, 0x500000000000ULL, MappingDesc::INVALID, "invalid"},
+    {0x500000000000ULL, 0x510000000000ULL, MappingDesc::SHADOW, "shadow-1"},
+    {0x510000000000ULL, 0x600000000000ULL, MappingDesc::APP, "app-2"},
+    {0x600000000000ULL, 0x610000000000ULL, MappingDesc::ORIGIN, "origin-1"},
+    {0x610000000000ULL, 0x700000000000ULL, MappingDesc::INVALID, "invalid"},
+    {0x700000000000ULL, 0x800000000000ULL, MappingDesc::APP, "app-3"}};
+#  define MEM_TO_SHADOW(mem) (((uptr)(mem)) ^ 0x500000000000ULL)
+#  define SHADOW_TO_ORIGIN(mem) (((uptr)(mem)) + 0x100000000000ULL)
 
-ALWAYS_INLINE
-uptr ShadowAddr() {
-  return MappingArchImpl<MAPPING_SHADOW_ADDR>();
-}
+#else
+#  error "Unsupported platform"
+#endif
 
-ALWAYS_INLINE
-uptr OriginAddr() {
-  return MappingArchImpl<MAPPING_ORIGIN_ADDR>();
-}
+const uptr kMemoryLayoutSize = sizeof(kMemoryLayout) / sizeof(kMemoryLayout[0]);
 
-ALWAYS_INLINE
-uptr UnusedAddr() { return MappingArchImpl<MAPPING_UNUSED_ADDR>(); }
+#define MEM_TO_ORIGIN(mem) (SHADOW_TO_ORIGIN(MEM_TO_SHADOW((mem))))
 
-ALWAYS_INLINE
-uptr AppAddr() {
-  return MappingArchImpl<MAPPING_APP_ADDR>();
+#ifndef __clang__
+__attribute__((optimize("unroll-loops")))
+#endif
+inline bool
+addr_is_type(uptr addr, MappingDesc::Type mapping_type) {
+// It is critical for performance that this loop is unrolled (because then it is
+// simplified into just a few constant comparisons).
+#ifdef __clang__
+#  pragma unroll
+#endif
+  for (unsigned i = 0; i < kMemoryLayoutSize; ++i)
+    if (kMemoryLayout[i].type == mapping_type &&
+        addr >= kMemoryLayout[i].start && addr < kMemoryLayout[i].end)
+      return true;
+  return false;
 }
 
-ALWAYS_INLINE
-uptr ShadowMask() {
-  return MappingArchImpl<MAPPING_SHADOW_MASK>();
-}
+#define MEM_IS_APP(mem) addr_is_type((uptr)(mem), MappingDesc::APP)
+#define MEM_IS_SHADOW(mem) addr_is_type((uptr)(mem), MappingDesc::SHADOW)
+#define MEM_IS_ORIGIN(mem) addr_is_type((uptr)(mem), MappingDesc::ORIGIN)
 
 }  // namespace __dfsan
 

diff --git a/compiler-rt/test/dfsan/origin_invalid.c b/compiler-rt/test/dfsan/origin_invalid.c
index 2de5b01dea2f..bc54609156ef 100644
--- a/compiler-rt/test/dfsan/origin_invalid.c
+++ b/compiler-rt/test/dfsan/origin_invalid.c
@@ -9,8 +9,14 @@
 int main(int argc, char *argv[]) {
   uint64_t a = 10;
   dfsan_set_label(1, &a, sizeof(a));
-  size_t origin_addr =
-      (((size_t)&a & ~0x600000000000LL + 0x100000000000LL) & ~0x3UL);
+
+  // Manually compute the origin address for &a.
+  // See the x86_64 MEM_TO_ORIGIN macro for the logic replicated here.
+  // Alignment is also applied after MEM_TO_ORIGIN.
+  uint64_t origin_addr =
+      (((uint64_t)&a ^ 0x500000000000ULL) + 0x100000000000ULL) & ~0x3ULL;
+
+  // Store 0 at the computed origin address to corrupt it.
   asm("mov %0, %%rax": :"r"(origin_addr));
   asm("movq $0, (%rax)");
   dfsan_print_origin_trace(&a, "invalid");

diff --git a/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp b/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp
index b36fd725e80f..6588c88111fc 100644
--- a/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp
+++ b/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp
@@ -23,28 +23,33 @@
 /// laid out as follows:
 ///
 /// +--------------------+ 0x800000000000 (top of memory)
-/// | application memory |
-/// +--------------------+ 0x700000008000 (kAppAddr)
-/// |                    |
-/// |       unused       |
-/// |                    |
-/// +--------------------+ 0x300000000000 (kUnusedAddr)
-/// |       origin       |
-/// +--------------------+ 0x200000008000 (kOriginAddr)
-/// |       unused       |
+/// |    application 3   |
+/// +--------------------+ 0x700000000000
+/// |      invalid       |
+/// +--------------------+ 0x610000000000
+/// |      origin 1      |
+/// +--------------------+ 0x600000000000
+/// |    application 2   |
+/// +--------------------+ 0x510000000000
+/// |      shadow 1      |
+/// +--------------------+ 0x500000000000
+/// |      invalid       |
+/// +--------------------+ 0x400000000000
+/// |      origin 3      |
+/// +--------------------+ 0x300000000000
+/// |      shadow 3      |
 /// +--------------------+ 0x200000000000
-/// |   shadow memory    |
-/// +--------------------+ 0x100000008000 (kShadowAddr)
-/// |       unused       |
-/// +--------------------+ 0x000000010000
-/// | reserved by kernel |
+/// |      origin 2      |
+/// +--------------------+ 0x110000000000
+/// |      invalid       |
+/// +--------------------+ 0x100000000000
+/// |      shadow 2      |
+/// +--------------------+ 0x010000000000
+/// |    application 1   |
 /// +--------------------+ 0x000000000000
 ///
-///
-/// To derive a shadow memory address from an application memory address, bits
-/// 45-46 are cleared to bring the address into the range
-/// [0x100000008000,0x200000000000). See the function
-/// DataFlowSanitizer::getShadowAddress below.
+/// MEM_TO_SHADOW(mem) = mem ^ 0x500000000000
+/// SHADOW_TO_ORIGIN(shadow) = shadow + 0x100000000000
 ///
 /// For more information, please refer to the design document:
 /// http://clang.llvm.org/docs/DataFlowSanitizerDesign.html
@@ -235,6 +240,30 @@ static StringRef getGlobalTypeString(const GlobalValue &G) {
 
 namespace {
 
+// Memory map parameters used in application-to-shadow address calculation.
+// Offset = (Addr & ~AndMask) ^ XorMask
+// Shadow = ShadowBase + Offset
+// Origin = (OriginBase + Offset) & ~3ULL
+struct MemoryMapParams {
+  uint64_t AndMask;
+  uint64_t XorMask;
+  uint64_t ShadowBase;
+  uint64_t OriginBase;
+};
+
+} // end anonymous namespace
+
+// x86_64 Linux
+// NOLINTNEXTLINE(readability-identifier-naming)
+static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
+    0,              // AndMask (not used)
+    0x500000000000, // XorMask
+    0,              // ShadowBase (not used)
+    0x100000000000, // OriginBase
+};
+
+namespace {
+
 class DFSanABIList {
   std::unique_ptr<SpecialCaseList> SCL;
 
@@ -390,9 +419,6 @@ class DataFlowSanitizer {
   PointerType *PrimitiveShadowPtrTy;
   IntegerType *IntptrTy;
   ConstantInt *ZeroPrimitiveShadow;
-  ConstantInt *ShadowPtrMask;
-  ConstantInt *ShadowBase;
-  ConstantInt *OriginBase;
   Constant *ArgTLS;
   ArrayType *ArgOriginTLSTy;
   Constant *ArgOriginTLS;
@@ -432,6 +458,10 @@ class DataFlowSanitizer {
   DenseMap<Value *, Function *> UnwrappedFnMap;
   AttrBuilder ReadOnlyNoneAttrs;
 
+  /// Memory map parameters used to map application addresses to shadow
+  /// addresses and origin addresses.
+  const MemoryMapParams *MapParams;
+
   Value *getShadowOffset(Value *Addr, IRBuilder<> &IRB);
   Value *getShadowAddress(Value *Addr, Instruction *Pos);
   Value *getShadowAddress(Value *Addr, Instruction *Pos, Value *ShadowOffset);
@@ -452,7 +482,7 @@ class DataFlowSanitizer {
   void initializeCallbackFunctions(Module &M);
   void initializeRuntimeFunctions(Module &M);
   void injectMetadataGlobals(Module &M);
-  bool init(Module &M);
+  bool initializeModule(Module &M);
 
   /// Advances \p OriginAddr to point to the next 32-bit origin and then loads
   /// from it. Returns the origin's loaded value.
@@ -1002,7 +1032,7 @@ Type *DataFlowSanitizer::getShadowTy(Value *V) {
   return getShadowTy(V->getType());
 }
 
-bool DataFlowSanitizer::init(Module &M) {
+bool DataFlowSanitizer::initializeModule(Module &M) {
   Triple TargetTriple(M.getTargetTriple());
   const DataLayout &DL = M.getDataLayout();
 
@@ -1010,6 +1040,7 @@ bool DataFlowSanitizer::init(Module &M) {
     report_fatal_error("unsupported operating system");
   if (TargetTriple.getArch() != Triple::x86_64)
     report_fatal_error("unsupported architecture");
+  MapParams = &Linux_X86_64_MemoryMapParams;
 
   Mod = &M;
   Ctx = &M.getContext();
@@ -1022,9 +1053,6 @@ bool DataFlowSanitizer::init(Module &M) {
   ZeroPrimitiveShadow = ConstantInt::getSigned(PrimitiveShadowTy, 0);
   ZeroOrigin = ConstantInt::getSigned(OriginTy, 0);
 
-  ShadowBase = ConstantInt::get(IntptrTy, 0x100000008000LL);
-  OriginBase = ConstantInt::get(IntptrTy, 0x200000008000LL);
-  ShadowPtrMask = ConstantInt::getSigned(IntptrTy, ~0x600000000000LL);
   Type *DFSanUnionLoadArgs[2] = {PrimitiveShadowPtrTy, IntptrTy};
   DFSanUnionLoadFnTy = FunctionType::get(PrimitiveShadowTy, DFSanUnionLoadArgs,
                                          /*isVarArg=*/false);
@@ -1334,7 +1362,7 @@ void DataFlowSanitizer::injectMetadataGlobals(Module &M) {
 }
 
 bool DataFlowSanitizer::runImpl(Module &M) {
-  init(M);
+  initializeModule(M);
 
   if (ABIList.isIn(M, "skip"))
     return false;
@@ -1721,11 +1749,23 @@ void DFSanFunction::setShadow(Instruction *I, Value *Shadow) {
   ValShadowMap[I] = Shadow;
 }
 
+/// Compute the integer shadow offset that corresponds to a given
+/// application address.
+///
+/// Offset = (Addr & ~AndMask) ^ XorMask
 Value *DataFlowSanitizer::getShadowOffset(Value *Addr, IRBuilder<> &IRB) {
-  // Returns Addr & shadow_mask
   assert(Addr != RetvalTLS && "Reinstrumenting?");
-  return IRB.CreateAnd(IRB.CreatePtrToInt(Addr, IntptrTy),
-                       IRB.CreatePtrToInt(ShadowPtrMask, IntptrTy));
+  Value *OffsetLong = IRB.CreatePointerCast(Addr, IntptrTy);
+
+  uint64_t AndMask = MapParams->AndMask;
+  if (AndMask)
+    OffsetLong =
+        IRB.CreateAnd(OffsetLong, ConstantInt::get(IntptrTy, ~AndMask));
+
+  uint64_t XorMask = MapParams->XorMask;
+  if (XorMask)
+    OffsetLong = IRB.CreateXor(OffsetLong, ConstantInt::get(IntptrTy, XorMask));
+  return OffsetLong;
 }
 
 std::pair<Value *, Value *>
@@ -1734,13 +1774,22 @@ DataFlowSanitizer::getShadowOriginAddress(Value *Addr, Align InstAlignment,
   // Returns ((Addr & shadow_mask) + origin_base - shadow_base) & ~4UL
   IRBuilder<> IRB(Pos);
   Value *ShadowOffset = getShadowOffset(Addr, IRB);
-  Value *ShadowPtr = getShadowAddress(Addr, Pos, ShadowOffset);
+  Value *ShadowLong = ShadowOffset;
+  uint64_t ShadowBase = MapParams->ShadowBase;
+  if (ShadowBase != 0) {
+    ShadowLong =
+        IRB.CreateAdd(ShadowLong, ConstantInt::get(IntptrTy, ShadowBase));
+  }
+  IntegerType *ShadowTy = IntegerType::get(*Ctx, ShadowWidthBits);
+  Value *ShadowPtr =
+      IRB.CreateIntToPtr(ShadowLong, PointerType::get(ShadowTy, 0));
   Value *OriginPtr = nullptr;
   if (shouldTrackOrigins()) {
-    static Value *OriginByShadowOffset = ConstantInt::get(
-        IntptrTy, OriginBase->getZExtValue() - ShadowBase->getZExtValue());
-
-    Value *OriginLong = IRB.CreateAdd(ShadowOffset, OriginByShadowOffset);
+    Value *OriginLong = ShadowOffset;
+    uint64_t OriginBase = MapParams->OriginBase;
+    if (OriginBase != 0)
+      OriginLong =
+          IRB.CreateAdd(OriginLong, ConstantInt::get(IntptrTy, OriginBase));
     const Align Alignment = llvm::assumeAligned(InstAlignment.value());
     // When alignment is >= 4, Addr must be aligned to 4, otherwise it is UB.
     // So Mask is unnecessary.
@@ -1750,7 +1799,7 @@ DataFlowSanitizer::getShadowOriginAddress(Value *Addr, Align InstAlignment,
     }
     OriginPtr = IRB.CreateIntToPtr(OriginLong, OriginPtrTy);
   }
-  return {ShadowPtr, OriginPtr};
+  return std::make_pair(ShadowPtr, OriginPtr);
 }
 
 Value *DataFlowSanitizer::getShadowAddress(Value *Addr, Instruction *Pos,
@@ -1760,7 +1809,6 @@ Value *DataFlowSanitizer::getShadowAddress(Value *Addr, Instruction *Pos,
 }
 
 Value *DataFlowSanitizer::getShadowAddress(Value *Addr, Instruction *Pos) {
-  // Returns (Addr & shadow_mask)
   IRBuilder<> IRB(Pos);
   Value *ShadowOffset = getShadowOffset(Addr, IRB);
   return getShadowAddress(Addr, Pos, ShadowOffset);

diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/atomics.ll b/llvm/test/Instrumentation/DataFlowSanitizer/atomics.ll
index 76d4cfe13c28..be60a3e2199a 100644
--- a/llvm/test/Instrumentation/DataFlowSanitizer/atomics.ll
+++ b/llvm/test/Instrumentation/DataFlowSanitizer/atomics.ll
@@ -17,8 +17,8 @@ entry:
   ; CHECK-NOT:         @__dfsan_arg_origin_tls
   ; CHECK-NOT:         @__dfsan_arg_tls
   ; CHECK:             %[[#INTP:]] = ptrtoint i32* %p to i64
-  ; CHECK-NEXT:        %[[#SHADOW_ADDR:INTP+1]] = and i64 %[[#INTP]], [[#%.10d,MASK:]]
-  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:        %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#%.10d,MASK:]]
+  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:        %[[#SHADOW_PTR64:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i[[#NUM_BITS:mul(SBITS,4)]]*
   ; CHECK-NEXT:        store i[[#NUM_BITS]] 0, i[[#NUM_BITS]]* %[[#SHADOW_PTR64]], align [[#SBYTES]]
   ; CHECK-NEXT:        atomicrmw xchg i32* %p, i32 %x seq_cst
@@ -37,8 +37,8 @@ define i32 @AtomicRmwMax(i32* %p, i32 %x) {
   ; CHECK-NOT:         @__dfsan_arg_origin_tls
   ; CHECK-NOT:         @__dfsan_arg_tls
   ; CHECK:             %[[#INTP:]] = ptrtoint i32* %p to i64
-  ; CHECK-NEXT:        %[[#SHADOW_ADDR:INTP+1]] = and i64 %[[#INTP]], [[#%.10d,MASK:]]
-  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:        %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#%.10d,MASK:]]
+  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:        %[[#SHADOW_PTR64:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i[[#NUM_BITS:mul(SBITS,4)]]*
   ; CHECK-NEXT:        store i[[#NUM_BITS]] 0, i[[#NUM_BITS]]* %[[#SHADOW_PTR64]], align [[#SBYTES]]
   ; CHECK-NEXT:        atomicrmw max i32* %p, i32 %x seq_cst
@@ -59,8 +59,8 @@ define i32 @Cmpxchg(i32* %p, i32 %a, i32 %b) {
   ; CHECK-NOT:         @__dfsan_arg_origin_tls
   ; CHECK-NOT:         @__dfsan_arg_tls
   ; CHECK:             %[[#INTP:]] = ptrtoint i32* %p to i64
-  ; CHECK-NEXT:        %[[#SHADOW_ADDR:INTP+1]] = and i64 %[[#INTP]], [[#%.10d,MASK:]]
-  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:        %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#%.10d,MASK:]]
+  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:        %[[#SHADOW_PTR64:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i[[#NUM_BITS:mul(SBITS,4)]]*
   ; CHECK-NEXT:        store i[[#NUM_BITS]] 0, i[[#NUM_BITS]]* %[[#SHADOW_PTR64]], align [[#SBYTES]]
   ; CHECK-NEXT:        %pair = cmpxchg i32* %p, i32 %a, i32 %b seq_cst seq_cst
@@ -82,8 +82,8 @@ define i32 @CmpxchgMonotonic(i32* %p, i32 %a, i32 %b) {
   ; CHECK-NOT:         @__dfsan_arg_origin_tls
   ; CHECK-NOT:         @__dfsan_arg_tls
   ; CHECK:             %[[#INTP:]] = ptrtoint i32* %p to i64
-  ; CHECK-NEXT:        %[[#SHADOW_ADDR:INTP+1]] = and i64 %[[#INTP]], [[#%.10d,MASK:]]
-  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:        %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#%.10d,MASK:]]
+  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:        %[[#SHADOW_PTR64:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i[[#NUM_BITS:mul(SBITS,4)]]*
   ; CHECK-NEXT:        store i[[#NUM_BITS]] 0, i[[#NUM_BITS]]* %[[#SHADOW_PTR64]], align [[#SBYTES]]
   ; CHECK-NEXT:        %pair = cmpxchg i32* %p, i32 %a, i32 %b release monotonic
@@ -205,8 +205,8 @@ define void @AtomicStore(i32* %p, i32 %x) {
   ; CHECK-NOT:         @__dfsan_arg_tls
   ; CHECK_ORIGIN-NOT:  35184372088832
   ; CHECK:             %[[#INTP:]] = ptrtoint i32* %p to i64
-  ; CHECK-NEXT:        %[[#SHADOW_ADDR:INTP+1]] = and i64 %[[#INTP]], [[#%.10d,MASK:]]
-  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:        %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#%.10d,MASK:]]
+  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:        %[[#SHADOW_PTR64:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i[[#NUM_BITS:mul(SBITS,4)]]*
   ; CHECK-NEXT:        store i[[#NUM_BITS]] 0, i[[#NUM_BITS]]* %[[#SHADOW_PTR64]], align [[#SBYTES]]
   ; CHECK:             store atomic i32 %x, i32* %p seq_cst, align 16
@@ -225,8 +225,8 @@ define void @AtomicStoreRelease(i32* %p, i32 %x) {
   ; CHECK-NOT:         @__dfsan_arg_tls
   ; CHECK_ORIGIN-NOT:  35184372088832
   ; CHECK:             %[[#INTP:]] = ptrtoint i32* %p to i64
-  ; CHECK-NEXT:        %[[#SHADOW_ADDR:INTP+1]] = and i64 %[[#INTP]], [[#%.10d,MASK:]]
-  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:        %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#%.10d,MASK:]]
+  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:        %[[#SHADOW_PTR64:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i[[#NUM_BITS:mul(SBITS,4)]]*
   ; CHECK-NEXT:        store i[[#NUM_BITS]] 0, i[[#NUM_BITS]]* %[[#SHADOW_PTR64]], align [[#SBYTES]]
   ; CHECK:             store atomic i32 %x, i32* %p release, align 16
@@ -245,8 +245,8 @@ define void @AtomicStoreMonotonic(i32* %p, i32 %x) {
   ; CHECK-NOT:         @__dfsan_arg_tls
   ; CHECK_ORIGIN-NOT:  35184372088832
   ; CHECK:             %[[#INTP:]] = ptrtoint i32* %p to i64
-  ; CHECK-NEXT:        %[[#SHADOW_ADDR:INTP+1]] = and i64 %[[#INTP]], [[#%.10d,MASK:]]
-  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:        %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#%.10d,MASK:]]
+  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:        %[[#SHADOW_PTR64:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i[[#NUM_BITS:mul(SBITS,4)]]*
   ; CHECK-NEXT:        store i[[#NUM_BITS]] 0, i[[#NUM_BITS]]* %[[#SHADOW_PTR64]], align [[#SBYTES]]
   ; CHECK:             store atomic i32 %x, i32* %p release, align 16
@@ -265,8 +265,8 @@ define void @AtomicStoreUnordered(i32* %p, i32 %x) {
   ; CHECK-NOT:         @__dfsan_arg_tls
   ; CHECK_ORIGIN-NOT:  35184372088832
   ; CHECK:             %[[#INTP:]] = ptrtoint i32* %p to i64
-  ; CHECK-NEXT:        %[[#SHADOW_ADDR:INTP+1]] = and i64 %[[#INTP]], [[#%.10d,MASK:]]
-  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:        %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#%.10d,MASK:]]
+  ; CHECK-NEXT:        %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:        %[[#SHADOW_PTR64:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i[[#NUM_BITS:mul(SBITS,4)]]*
   ; CHECK-NEXT:        store i[[#NUM_BITS]] 0, i[[#NUM_BITS]]* %[[#SHADOW_PTR64]], align [[#SBYTES]]
   ; CHECK:             store atomic i32 %x, i32* %p release, align 16

diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/basic.ll b/llvm/test/Instrumentation/DataFlowSanitizer/basic.ll
index f8d4281aa26e..40e63f6e6362 100644
--- a/llvm/test/Instrumentation/DataFlowSanitizer/basic.ll
+++ b/llvm/test/Instrumentation/DataFlowSanitizer/basic.ll
@@ -1,5 +1,5 @@
-; RUN: opt < %s -dfsan -S | FileCheck %s --check-prefixes=CHECK,CHECK_NO_ORIGIN -DSHADOW_MASK=-105553116266497   --dump-input-context=100
-; RUN: opt < %s -dfsan -dfsan-track-origins=1  -S | FileCheck %s --check-prefixes=CHECK,CHECK_ORIGIN -DSHADOW_MASK=-105553116266497  --dump-input-context=100
+; RUN: opt < %s -dfsan -S | FileCheck %s --check-prefixes=CHECK,CHECK_NO_ORIGIN -DSHADOW_XOR_MASK=87960930222080 --dump-input-context=100
+; RUN: opt < %s -dfsan -dfsan-track-origins=1  -S | FileCheck %s --check-prefixes=CHECK,CHECK_ORIGIN -DSHADOW_XOR_MASK=87960930222080 --dump-input-context=100
 target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
 target triple = "x86_64-unknown-linux-gnu"
 
@@ -14,7 +14,7 @@ target triple = "x86_64-unknown-linux-gnu"
 
 define i8 @load(i8* %p) {
   ; CHECK-LABEL: define i8 @load.dfsan
-  ; CHECK: and i64 {{.*}}, [[SHADOW_MASK]]
+  ; CHECK: xor i64 {{.*}}, [[SHADOW_XOR_MASK]]
   ; CHECK: ret i8 %a
   %a = load i8, i8* %p
   ret i8 %a
@@ -22,7 +22,7 @@ define i8 @load(i8* %p) {
 
 define void @store(i8* %p) {
   ; CHECK-LABEL: define void @store.dfsan
-  ; CHECK: and i64 {{.*}}, [[SHADOW_MASK]]
+  ; CHECK: xor i64 {{.*}}, [[SHADOW_XOR_MASK]]
   ; CHECK: ret void
   store i8 0, i8* %p
   ret void

diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/load.ll b/llvm/test/Instrumentation/DataFlowSanitizer/load.ll
index 38b16fcf90b3..3f405793c858 100644
--- a/llvm/test/Instrumentation/DataFlowSanitizer/load.ll
+++ b/llvm/test/Instrumentation/DataFlowSanitizer/load.ll
@@ -23,8 +23,8 @@ define i8 @load8(i8* %p) {
   ; CHECK-LABEL:           @load8.dfsan
   ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i[[#SBITS]], i[[#SBITS]]* bitcast ([[TLS_ARR]]* @__dfsan_arg_tls to i[[#SBITS]]*), align [[ALIGN]]
   ; CHECK-NEXT:            %[[#INTP:]] = ptrtoint i8* %p to i64
-  ; CHECK-NEXT:            %[[#SHADOW_ADDR:]] = and i64 %[[#INTP]], [[#%.10d,MASK:]]
-  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:            %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#%.10d,MASK:]]
+  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:            %[[#SHADOW:]] = load i[[#SBITS]], i[[#SBITS]]* %[[#SHADOW_PTR]]
   ; COMBINE_LOAD_PTR-NEXT: %[[#SHADOW:]] = or i[[#SBITS]] %[[#SHADOW]], %[[#PS]]
   ; CHECK-NEXT:            %a = load i8, i8* %p
@@ -39,8 +39,8 @@ define i16 @load16(i16* %p) {
   ; CHECK-LABEL:           @load16.dfsan
   ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i[[#SBITS]], i[[#SBITS]]* bitcast ([[TLS_ARR]]* @__dfsan_arg_tls to i[[#SBITS]]*), align [[ALIGN]]
   ; CHECK-NEXT:            %[[#INTP:]] = ptrtoint i16* %p to i64
-  ; CHECK-NEXT:            %[[#SHADOW_ADDR:]] = and i64 %[[#INTP]], [[#MASK]]
-  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:            %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:            %[[#SHADOW_PTR+1]] = getelementptr i[[#SBITS]], i[[#SBITS]]* %[[#SHADOW_PTR]], i64 1
   ; CHECK-NEXT:            %[[#SHADOW:]]  = load i[[#SBITS]], i[[#SBITS]]* %[[#SHADOW_PTR]]
   ; CHECK-NEXT:            %[[#SHADOW+1]] = load i[[#SBITS]], i[[#SBITS]]* %[[#SHADOW_PTR+1]]
@@ -59,8 +59,8 @@ define i32 @load32(i32* %p) {
   ; CHECK-LABEL:           @load32.dfsan
   ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i[[#SBITS]], i[[#SBITS]]* bitcast ([[TLS_ARR]]* @__dfsan_arg_tls to i[[#SBITS]]*), align [[ALIGN]]
   ; CHECK-NEXT:            %[[#INTP:]] = ptrtoint i32* %p to i64
-  ; CHECK-NEXT:            %[[#SHADOW_ADDR:]] = and i64 %[[#INTP]], [[#MASK]]
-  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:            %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:            %[[#WIDE_SHADOW_PTR:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i[[#WSBITS:mul(SBITS,4)]]*
   ; CHECK-NEXT:            %[[#WIDE_SHADOW:]] = load i[[#WSBITS]], i[[#WSBITS]]* %[[#WIDE_SHADOW_PTR]], align [[#SBYTES]]
   ; CHECK-NEXT:            %[[#WIDE_SHADOW_SHIFTED:]] = lshr i[[#WSBITS]] %[[#WIDE_SHADOW]], [[#mul(SBITS,2)]]
@@ -81,8 +81,8 @@ define i64 @load64(i64* %p) {
   ; CHECK-LABEL:           @load64.dfsan
   ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i[[#SBITS]], i[[#SBITS]]* bitcast ([[TLS_ARR]]* @__dfsan_arg_tls to i[[#SBITS]]*), align [[ALIGN]]
   ; CHECK-NEXT:            %[[#INTP:]] = ptrtoint i64* %p to i64
-  ; CHECK-NEXT:            %[[#SHADOW_ADDR:]] = and i64 %[[#INTP]], [[#MASK]]
-  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:            %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:            %[[#WIDE_SHADOW_PTR:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i64*
   ; CHECK-NEXT:            %[[#WIDE_SHADOW:]] = load i64, i64* %[[#WIDE_SHADOW_PTR]], align [[#SBYTES]]
   ; CHECK-NEXT:            %[[#WIDE_SHADOW_SHIFTED:]] = lshr i64 %[[#WIDE_SHADOW]], 32
@@ -105,8 +105,8 @@ define i128 @load128(i128* %p) {
   ; CHECK-LABEL:           @load128.dfsan
   ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i[[#SBITS]], i[[#SBITS]]* bitcast ([[TLS_ARR]]* @__dfsan_arg_tls to i[[#SBITS]]*), align [[ALIGN]]
   ; CHECK-NEXT:            %[[#INTP:]] = ptrtoint i128* %p to i64
-  ; CHECK-NEXT:            %[[#SHADOW_ADDR:]] = and i64 %[[#INTP]], [[#MASK]]
-  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:            %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:            %[[#WIDE_SHADOW_PTR:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i64*
   ; CHECK-NEXT:            %[[#WIDE_SHADOW:]] = load i64, i64* %[[#WIDE_SHADOW_PTR]], align [[#SBYTES]]
   ; CHECK-NEXT:            %[[#WIDE_SHADOW_PTR2:]] = getelementptr i64, i64* %[[#WIDE_SHADOW_PTR]], i64 1
@@ -133,8 +133,8 @@ define i17 @load17(i17* %p) {
   ; CHECK-LABEL:           @load17.dfsan
   ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i[[#SBITS]], i[[#SBITS]]* bitcast ([[TLS_ARR]]* @__dfsan_arg_tls to i[[#SBITS]]*), align [[ALIGN]]
   ; CHECK-NEXT:            %[[#INTP:]] = ptrtoint i17* %p to i64
-  ; CHECK-NEXT:            %[[#SHADOW_ADDR:]] = and i64 %[[#INTP]], [[#MASK]]
-  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
+  ; CHECK-NEXT:            %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
   ; CHECK-NEXT:            %[[#SHADOW:]] = call zeroext i8 @__dfsan_union_load(i[[#SBITS]]* %[[#SHADOW_PTR]], i64 3)
   ; COMBINE_LOAD_PTR-NEXT: %[[#SHADOW:]] = or i[[#SBITS]] %[[#SHADOW]], %[[#PS]]
   ; CHECK-NEXT:            %a = load i17, i17* %p

diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_load.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_load.ll
index fc38216be377..041d4b9b08e5 100644
--- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_load.ll
+++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_load.ll
@@ -35,9 +35,9 @@ define i16 @load_non_escaped_alloca() {
 define i16* @load_escaped_alloca() {
   ; CHECK-LABEL:  @load_escaped_alloca.dfsan
   ; CHECK:        %[[#INTP:]] = ptrtoint i16* %p to i64
-  ; CHECK-NEXT:   %[[#SHADOW_ADDR:]] = and i64 %[[#INTP]], [[#%.10d,MASK:]]
-  ; CHECK-NEXT:   %[[#SHADOW_PTR0:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
-  ; CHECK-NEXT:   %[[#ORIGIN_OFFSET:]] = add i64 %[[#INTP+1]], [[#%.10d,ORIGIN_MASK:]]
+  ; CHECK-NEXT:   %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#%.10d,MASK:]]
+  ; CHECK-NEXT:   %[[#SHADOW_PTR0:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
+  ; CHECK-NEXT:   %[[#ORIGIN_OFFSET:]] = add i64 %[[#SHADOW_OFFSET]], [[#%.10d,ORIGIN_BASE:]]
   ; CHECK-NEXT:   %[[#ORIGIN_ADDR:]] = and i64 %[[#ORIGIN_OFFSET]], -4
   ; CHECK-NEXT:   %[[#ORIGIN_PTR:]] = inttoptr i64 %[[#ORIGIN_ADDR]] to i32*
   ; CHECK-NEXT:   {{%.*}} = load i32, i32* %[[#ORIGIN_PTR]], align 4
@@ -72,9 +72,9 @@ define i1 @load1(i1* %p) {
   ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i[[#SBITS]], i[[#SBITS]]* bitcast ([100 x i64]* @__dfsan_arg_tls to i[[#SBITS]]*), align [[ALIGN]]
 
   ; CHECK-NEXT:            %[[#INTP:]] = ptrtoint i1* %p to i64
-  ; CHECK-NEXT:            %[[#SHADOW_ADDR:]] = and i64 %[[#INTP]], [[#MASK]]
-  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
-  ; CHECK-NEXT:            %[[#ORIGIN_OFFSET:]] = add i64 %[[#INTP+1]], [[#ORIGIN_MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
+  ; CHECK-NEXT:            %[[#ORIGIN_OFFSET:]] = add i64 %[[#SHADOW_OFFSET]], [[#ORIGIN_BASE]]
   ; CHECK-NEXT:            %[[#ORIGIN_ADDR:]] = and i64 %[[#ORIGIN_OFFSET]], -4
   ; CHECK-NEXT:            %[[#ORIGIN_PTR:]] = inttoptr i64 %[[#ORIGIN_ADDR]] to i32*
   ; CHECK-NEXT:            %[[#AO:]] = load i32, i32* %[[#ORIGIN_PTR]], align 4
@@ -99,9 +99,9 @@ define i16 @load16(i1 %i, i16* %p) {
   ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i[[#SBITS]], i[[#SBITS]]* inttoptr (i64 add (i64 ptrtoint ([100 x i64]* @__dfsan_arg_tls to i64), i64 2) to i[[#SBITS]]*), align [[ALIGN]]
 
   ; CHECK-NEXT:            %[[#INTP:]] = ptrtoint i16* %p to i64
-  ; CHECK-NEXT:            %[[#SHADOW_ADDR:]] = and i64 %[[#INTP]], [[#MASK]]
-  ; CHECK-NEXT:            %[[#SHADOW_PTR0:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
-  ; CHECK-NEXT:            %[[#ORIGIN_OFFSET:]] = add i64 %[[#INTP+1]], [[#ORIGIN_MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_PTR0:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
+  ; CHECK-NEXT:            %[[#ORIGIN_OFFSET:]] = add i64 %[[#SHADOW_OFFSET]], [[#ORIGIN_BASE]]
   ; CHECK-NEXT:            %[[#ORIGIN_ADDR:]] = and i64 %[[#ORIGIN_OFFSET]], -4
   ; CHECK-NEXT:            %[[#ORIGIN_PTR:]] = inttoptr i64 %[[#ORIGIN_ADDR]] to i32*
   ; CHECK-NEXT:            %[[#AO:]] = load i32, i32* %[[#ORIGIN_PTR]], align 4
@@ -129,9 +129,9 @@ define i32 @load32(i32* %p) {
   ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i[[#SBITS]], i[[#SBITS]]* bitcast ([100 x i64]* @__dfsan_arg_tls to i[[#SBITS]]*), align [[ALIGN]]
 
   ; CHECK-NEXT:            %[[#INTP:]] = ptrtoint i32* %p to i64
-  ; CHECK-NEXT:            %[[#SHADOW_ADDR:INTP+1]] = and i64 %[[#INTP]], [[#MASK]]
-  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
-  ; CHECK-NEXT:            %[[#ORIGIN_ADDR:]] = add i64 %[[#INTP+1]], [[#ORIGIN_MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
+  ; CHECK-NEXT:            %[[#ORIGIN_ADDR:]] = add i64 %[[#SHADOW_OFFSET]], [[#ORIGIN_BASE]]
   ; CHECK-NEXT:            %[[#ORIGIN_PTR:]] = inttoptr i64 %[[#ORIGIN_ADDR]] to i32*
   ; CHECK-NEXT:            %[[#AO:]] = load i32, i32* %[[#ORIGIN_PTR]], align 4
   ; CHECK-NEXT:            %[[#WIDE_SHADOW_PTR:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i[[#WSBITS:mul(SBITS,4)]]*
@@ -161,9 +161,9 @@ define i64 @load64(i64* %p) {
   ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i[[#SBITS]], i[[#SBITS]]* bitcast ([100 x i64]* @__dfsan_arg_tls to i[[#SBITS]]*), align [[ALIGN]]
 
   ; CHECK-NEXT:            %[[#INTP:]] = ptrtoint i64* %p to i64
-  ; CHECK-NEXT:            %[[#SHADOW_ADDR:INTP+1]] = and i64 %[[#INTP]], [[#MASK]]
-  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
-  ; CHECK-NEXT:            %[[#ORIGIN_ADDR:]] = add i64 %[[#INTP+1]], [[#ORIGIN_MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
+  ; CHECK-NEXT:            %[[#ORIGIN_ADDR:]] = add i64 %[[#SHADOW_OFFSET]], [[#ORIGIN_BASE]]
   ; CHECK-NEXT:            %[[#ORIGIN_PTR:]] = inttoptr i64 %[[#ORIGIN_ADDR]] to i32*
   ; CHECK-NEXT:            %[[#ORIGIN:]] = load i32, i32* %[[#ORIGIN_PTR]], align 8
   ; CHECK-NEXT:            %[[#WIDE_SHADOW_PTR:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i64*
@@ -226,9 +226,9 @@ define i128 @load128(i128* %p) {
   ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i[[#SBITS]], i[[#SBITS]]* bitcast ([100 x i64]* @__dfsan_arg_tls to i[[#SBITS]]*), align [[ALIGN]]
 
   ; CHECK-NEXT:            %[[#INTP:]] = ptrtoint i128* %p to i64
-  ; CHECK-NEXT:            %[[#SHADOW_ADDR:INTP+1]] = and i64 %[[#INTP]], [[#MASK]]
-  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
-  ; CHECK-NEXT:            %[[#ORIGIN_ADDR:]] = add i64 %[[#SHADOW_ADDR]], [[#ORIGIN_MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#MASK]]
+  ; CHECK-NEXT:            %[[#SHADOW_PTR:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
+  ; CHECK-NEXT:            %[[#ORIGIN_ADDR:]] = add i64 %[[#SHADOW_OFFSET]], [[#ORIGIN_BASE]]
   ; CHECK-NEXT:            %[[#ORIGIN1_PTR:]] = inttoptr i64 %[[#ORIGIN_ADDR]] to i32*
   ; CHECK-NEXT:            %[[#ORIGIN1:]] = load i32, i32* %[[#ORIGIN1_PTR]], align 8
   ; CHECK-NEXT:            %[[#WIDE_SHADOW1_PTR:]] = bitcast i[[#SBITS]]* %[[#SHADOW_PTR]] to i64*

diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_store.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_store.ll
index b98fee789741..31ae5356118c 100644
--- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_store.ll
+++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_store.ll
@@ -52,9 +52,9 @@ define void @store_nonzero_to_escaped_alloca(i16 %a) {
   ; CHECK-NEXT:   %[[#AO:]] = load i32, i32* getelementptr inbounds ([200 x i32], [200 x i32]* @__dfsan_arg_origin_tls, i64 0, i64 0), align 4
   ; CHECK-NEXT:   %[[#AS:]] = load i[[#SBITS]], i[[#SBITS]]* bitcast ([100 x i64]* @__dfsan_arg_tls to i[[#SBITS]]*), align [[ALIGN]]
   ; CHECK:        %[[#INTP:]] = ptrtoint i16* %p to i64
-  ; CHECK-NEXT:   %[[#SHADOW_ADDR:]] = and i64 %[[#INTP]], [[#%.10d,MASK:]]
-  ; CHECK-NEXT:   %[[#SHADOW_PTR0:]] = inttoptr i64 %[[#SHADOW_ADDR]] to i[[#SBITS]]*
-  ; CHECK-NEXT:   %[[#ORIGIN_OFFSET:]] = add i64 %[[#INTP+1]], [[#%.10d,ORIGIN_MASK:]]
+  ; CHECK-NEXT:   %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#%.10d,MASK:]]
+  ; CHECK-NEXT:   %[[#SHADOW_PTR0:]] = inttoptr i64 %[[#SHADOW_OFFSET]] to i[[#SBITS]]*
+  ; CHECK-NEXT:   %[[#ORIGIN_OFFSET:]] = add i64 %[[#SHADOW_OFFSET]], [[#%.10d,ORIGIN_BASE:]]
   ; CHECK-NEXT:   %[[#ORIGIN_ADDR:]] = and i64 %[[#ORIGIN_OFFSET]], -4
   ; CHECK-NEXT:   %[[#ORIGIN_PTR:]] = inttoptr i64 %[[#ORIGIN_ADDR]] to i32*
   ; CHECK:        %_dfscmp = icmp ne i[[#SBITS]] %[[#AS]], 0

diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/store.ll b/llvm/test/Instrumentation/DataFlowSanitizer/store.ll
index ab5084417903..679f57e0f5f4 100644
--- a/llvm/test/Instrumentation/DataFlowSanitizer/store.ll
+++ b/llvm/test/Instrumentation/DataFlowSanitizer/store.ll
@@ -22,7 +22,7 @@ define void @store8(i8 %v, i8* %p) {
   ; COMBINE_PTR_LABEL: load i[[#SBITS]], i[[#SBITS]]* {{.*}} @__dfsan_arg_tls
   ; COMBINE_PTR_LABEL: or i[[#SBITS]]
   ; CHECK:             ptrtoint i8* {{.*}} i64
-  ; CHECK-NEXT:        and i64
+  ; CHECK-NEXT:        xor i64
   ; CHECK-NEXT:        inttoptr i64 {{.*}} i[[#SBITS]]*
   ; CHECK-NEXT:        getelementptr i[[#SBITS]], i[[#SBITS]]*
   ; CHECK-NEXT:        store i[[#SBITS]]
@@ -39,7 +39,7 @@ define void @store16(i16 %v, i16* %p) {
   ; COMBINE_PTR_LABEL: load i[[#SBITS]], i[[#SBITS]]* {{.*}} @__dfsan_arg_tls
   ; COMBINE_PTR_LABEL: or i[[#SBITS]]
   ; CHECK:             ptrtoint i16* {{.*}} i64
-  ; CHECK-NEXT:        and i64
+  ; CHECK-NEXT:        xor i64
   ; CHECK-NEXT:        inttoptr i64 {{.*}} i[[#SBITS]]*
   ; CHECK-NEXT:        getelementptr i[[#SBITS]], i[[#SBITS]]*
   ; CHECK-NEXT:        store i[[#SBITS]]
@@ -58,7 +58,7 @@ define void @store32(i32 %v, i32* %p) {
   ; COMBINE_PTR_LABEL: load i[[#SBITS]], i[[#SBITS]]* {{.*}} @__dfsan_arg_tls
   ; COMBINE_PTR_LABEL: or i[[#SBITS]]
   ; CHECK:             ptrtoint i32* {{.*}} i64
-  ; CHECK-NEXT:        and i64
+  ; CHECK-NEXT:        xor i64
   ; CHECK-NEXT:        inttoptr i64 {{.*}} i[[#SBITS]]*
   ; CHECK-NEXT:        getelementptr i[[#SBITS]], i[[#SBITS]]*
   ; CHECK-NEXT:        store i[[#SBITS]]
@@ -81,7 +81,7 @@ define void @store64(i64 %v, i64* %p) {
   ; COMBINE_PTR_LABEL: load i[[#SBITS]], i[[#SBITS]]* {{.*}} @__dfsan_arg_tls
   ; COMBINE_PTR_LABEL: or i[[#SBITS]]
   ; CHECK:             ptrtoint i64* {{.*}} i64
-  ; CHECK-NEXT:        and i64
+  ; CHECK-NEXT:        xor i64
   ; CHECK-NEXT:        inttoptr i64 {{.*}} i[[#SBITS]]*
   ; CHECK-COUNT-8:     insertelement {{.*}} i[[#SBITS]]
   ; CHECK-NEXT:        bitcast i[[#SBITS]]* {{.*}} <8 x i[[#SBITS]]>*
