[clang] [compiler-rt] [llvm] [RFC] [msan] make MSan up to 20x faster on AMD CPUs (PR #171993)

Azat Khuzhin via llvm-commits llvm-commits at lists.llvm.org
Sat Dec 13 13:06:21 PST 2025


https://github.com/azat updated https://github.com/llvm/llvm-project/pull/171993

>From a51eae0d7365bade01af833836094757a9126399 Mon Sep 17 00:00:00 2001
From: Azat Khuzhin <a3at.mail at gmail.com>
Date: Fri, 12 Dec 2025 09:32:15 +0100
Subject: [PATCH 1/4] [msan] make MSan up to 20x faster on AMD CPUs

I noticed that on AMD CPUs (so far I've tested Zen 3 and Zen 4c - AMD
EPYC 9R14) a simple program is up to 20x slower under MSan:

    #include <stdio.h>
    #include <time.h>
    #include <stdint.h>

    uint64_t factorial(int n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }

    int main() {
        const int iterations = 100000000;
        clock_t start = clock();

        for (int i = 0; i < iterations; i++) {
            volatile uint64_t result = factorial(20);
        }

        double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("Direct loop:          %.3f seconds\n", elapsed);
        return 0;
    }

The trigger here is the `volatile`, but the underlying problem is a
cache conflict: `result` and its address in the shadow area collide and
keep evicting each other, so there are tons of cache misses:

   Performance counter stats for './factorial-test-original':

         212,850,471      L1-dcache-loads
         200,634,333      L1-dcache-load-misses            #   94.26% of all L1-dcache accesses
     <not supported>      L1-dcache-stores

         1.232666099 seconds time elapsed

         1.228437000 seconds user
         0.000994000 seconds sys
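To see why the app byte and its shadow collide, here is a minimal sketch (my own illustration, not code from the patch). It assumes AMD Zen's 32 KiB, 8-way, 64-byte-line L1D (64 sets) and MSan's x86-64 XorMask of 0x500000000000; the stack address is hypothetical:

```python
XOR_MASK = 0x500000000000  # MSan x86-64 app-to-shadow XOR mask

def l1_set(addr):
    # L1D set index = address bits 6..11 (64-byte lines, 64 sets)
    return (addr >> 6) & 0x3F

stack_slot = 0x7FFD12345678     # hypothetical address of `result`
shadow = stack_slot ^ XOR_MASK  # corresponding shadow byte

# The XOR mask only touches bits >= 44, leaving the low bits intact,
# so an app byte and its shadow always land in the same L1D set (and,
# on Zen, feed the same way-predictor utag derived from bits 12..27).
assert l1_set(stack_slot) == l1_set(shadow)
```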

To avoid these conflicts we can add a 2 MiB offset to the shadow
addresses, and here are the results - a 20x improvement:

    $ /usr/bin/clang++ -fsanitize=memory -O3 factorial-test.c -o factorial-test-original
    $ ./factorial-test-original
    Direct loop:          1.223 seconds
    $ clang++ -fsanitize=memory -O3 factorial-test.c -o factorial-test-patched
    $ ./factorial-test-patched
    Direct loop:          0.060 seconds
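The effect of the offset on the way predictor can be sketched as follows (again my own illustration; it assumes, per the references in the patch comment, that Zen's μtag is derived from linear address bits 12..27):

```python
XOR_MASK = 0x500000000000  # MSan x86-64 XorMask
SHADOW_OFFSET = 0x200000   # 2 MiB: flips bit 21, inside the utag field

def utag_field(addr):
    # Zen's way predictor hashes linear address bits 12..27
    return (addr >> 12) & 0xFFFF

app = 0x7FFD12345678  # hypothetical app address

old_shadow = app ^ XOR_MASK
new_shadow = (app ^ XOR_MASK) + SHADOW_OFFSET

# Old mapping: the XOR only changes bits >= 44, so app and shadow have
# identical utag inputs -> the predictor picks the same way for both
# lines, effectively degrading the 8-way cache to direct-mapped.
assert utag_field(app) == utag_field(old_shadow)
# New mapping: bit 21 differs, so the utag inputs differ.
assert utag_field(app) != utag_field(new_shadow)
```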

I've also tested performance on an Intel CPU (Intel(R) Xeon(R) Platinum
8124M CPU @ 3.00GHz), and it is unchanged by the patch.

Curious what you think about this!
---
 compiler-rt/lib/msan/msan.h                   | 65 +++++++++++++++----
 compiler-rt/test/msan/strlen_of_shadow.cpp    |  2 +-
 .../Instrumentation/MemorySanitizer.cpp       |  8 +--
 3 files changed, 58 insertions(+), 17 deletions(-)

diff --git a/compiler-rt/lib/msan/msan.h b/compiler-rt/lib/msan/msan.h
index edb26997be07d..2b85c7d7f3754 100644
--- a/compiler-rt/lib/msan/msan.h
+++ b/compiler-rt/lib/msan/msan.h
@@ -201,27 +201,68 @@ const MappingDesc kMemoryLayout[] = {
 
 #elif SANITIZER_NETBSD || (SANITIZER_LINUX && SANITIZER_WORDSIZE == 64)
 
+// Offset applied to shadow addresses to avoid cache line conflicts on AMD Zen
+// (it is not required on Intel, but is harmless there).
+//
+// Problem: AMD Zen's 32 KiB L1D cache is 8-way associative with 64-byte lines
+// (64 sets) [1]. Addresses are partitioned as:
+//
+//   | tag | set_index (bits 6-11) | offset (bits 0-5) |
+//           ^^^^^^^^^^^^^^^^^^^^^   ^^^^^^^^^^^^^^^^^
+//              6 bits set index    6 bits index in set
+//
+// Without offset, app memory and its shadow share the same set_index (bits
+// 6-11) but have different tags. Normally, 8-way associativity accommodates
+// both lines in the same set without conflict.
+//
+// However, AMD Zen uses a Linear address μtag/way-predictor (derived from
+// addr[12:27], see [2]) to save power by predicting which way to check instead
+// of searching all 8 ways. Since XOR preserves addr[0:43], shadow and app share
+// identical μtags, causing them to predict the same way, effectively degrading
+// the 8-way cache to direct-mapped (20x slowdown).
+//
+// Solution: Adding a 2MB offset (bit 21) changes the μtag while maintaining 2MB
+// page alignment for mmap.
+//
+//   [1]: https://lsferreira.net/public/knowledge-base/x86/upos/amd_zen+.PDF
+//   [2]: https://inria.hal.science/hal-02866777/document
+const unsigned long kShadowOffset = 0x200000ULL;
 // All of the following configurations are supported.
 // ASLR disabled: main executable and DSOs at 0x555550000000
 // PIE and ASLR: main executable and DSOs at 0x7f0000000000
 // non-PIE: main executable below 0x100000000, DSOs at 0x7f0000000000
 // Heap at 0x700000000000.
 const MappingDesc kMemoryLayout[] = {
-    {0x000000000000ULL, 0x010000000000ULL, MappingDesc::APP, "app-1"},
-    {0x010000000000ULL, 0x100000000000ULL, MappingDesc::SHADOW, "shadow-2"},
-    {0x100000000000ULL, 0x110000000000ULL, MappingDesc::INVALID, "invalid"},
-    {0x110000000000ULL, 0x200000000000ULL, MappingDesc::ORIGIN, "origin-2"},
-    {0x200000000000ULL, 0x300000000000ULL, MappingDesc::SHADOW, "shadow-3"},
-    {0x300000000000ULL, 0x400000000000ULL, MappingDesc::ORIGIN, "origin-3"},
-    {0x400000000000ULL, 0x500000000000ULL, MappingDesc::INVALID, "invalid"},
-    {0x500000000000ULL, 0x510000000000ULL, MappingDesc::SHADOW, "shadow-1"},
-    {0x510000000000ULL, 0x600000000000ULL, MappingDesc::APP, "app-2"},
-    {0x600000000000ULL, 0x610000000000ULL, MappingDesc::ORIGIN, "origin-1"},
+    {0x000000000000ULL, 0x010000000000ULL - kShadowOffset, MappingDesc::APP,
+     "app-1"},
+    {0x010000000000ULL - kShadowOffset, 0x010000000000ULL + kShadowOffset,
+     MappingDesc::INVALID, "gap"},
+    {0x010000000000ULL + kShadowOffset, 0x100000000000ULL + kShadowOffset,
+     MappingDesc::SHADOW, "shadow-2"},
+    {0x100000000000ULL + kShadowOffset, 0x110000000000ULL + kShadowOffset,
+     MappingDesc::INVALID, "invalid"},
+    {0x110000000000ULL + kShadowOffset, 0x200000000000ULL + kShadowOffset,
+     MappingDesc::ORIGIN, "origin-2"},
+    {0x200000000000ULL + kShadowOffset, 0x300000000000ULL + kShadowOffset,
+     MappingDesc::SHADOW, "shadow-3"},
+    {0x300000000000ULL + kShadowOffset, 0x400000000000ULL + kShadowOffset,
+     MappingDesc::ORIGIN, "origin-3"},
+    {0x400000000000ULL + kShadowOffset, 0x500000000000ULL + kShadowOffset,
+     MappingDesc::INVALID, "invalid"},
+    {0x500000000000ULL + kShadowOffset, 0x510000000000ULL, MappingDesc::SHADOW,
+     "shadow-1"},
+    {0x510000000000ULL, 0x600000000000ULL - kShadowOffset, MappingDesc::APP,
+     "app-2"},
+    {0x600000000000ULL - kShadowOffset, 0x600000000000ULL + kShadowOffset,
+     MappingDesc::INVALID, "gap"},
+    {0x600000000000ULL + kShadowOffset, 0x610000000000ULL, MappingDesc::ORIGIN,
+     "origin-1"},
     {0x610000000000ULL, 0x700000000000ULL, MappingDesc::INVALID, "invalid"},
     {0x700000000000ULL, 0x740000000000ULL, MappingDesc::ALLOCATOR, "allocator"},
     {0x740000000000ULL, 0x800000000000ULL, MappingDesc::APP, "app-3"}};
-#define MEM_TO_SHADOW(mem) (((uptr)(mem)) ^ 0x500000000000ULL)
-#define SHADOW_TO_ORIGIN(mem) (((uptr)(mem)) + 0x100000000000ULL)
+#  define MEM_TO_SHADOW(mem) \
+    ((((uptr)(mem)) ^ 0x500000000000ULL) + kShadowOffset)
+#  define SHADOW_TO_ORIGIN(mem) (((uptr)(mem)) + 0x100000000000ULL)
 
 #else
 #error "Unsupported platform"
diff --git a/compiler-rt/test/msan/strlen_of_shadow.cpp b/compiler-rt/test/msan/strlen_of_shadow.cpp
index 39962481df5ba..4814e9a670298 100644
--- a/compiler-rt/test/msan/strlen_of_shadow.cpp
+++ b/compiler-rt/test/msan/strlen_of_shadow.cpp
@@ -14,7 +14,7 @@
 
 const char *mem_to_shadow(const char *p) {
 #if defined(__x86_64__)
-  return (char *)((uintptr_t)p ^ 0x500000000000ULL);
+  return (char *)((uintptr_t)p ^ 0x500000200000ULL);
 #elif defined(__loongarch_lp64)
   return (char *)((uintptr_t)p ^ 0x500000000000ULL);
 #elif defined (__mips64)
diff --git a/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp b/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp
index 32ee16c89b4fe..57d06233b5c9f 100644
--- a/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp
+++ b/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp
@@ -443,8 +443,8 @@ static const MemoryMapParams Linux_I386_MemoryMapParams = {
 static const MemoryMapParams Linux_X86_64_MemoryMapParams = {
     0,              // AndMask (not used)
     0x500000000000, // XorMask
-    0,              // ShadowBase (not used)
-    0x100000000000, // OriginBase
+    0x200000,       // ShadowBase (== kShadowOffset)
+    0x100000200000, // OriginBase
 };
 
 // mips32 Linux
@@ -531,8 +531,8 @@ static const MemoryMapParams FreeBSD_X86_64_MemoryMapParams = {
 static const MemoryMapParams NetBSD_X86_64_MemoryMapParams = {
     0,              // AndMask
     0x500000000000, // XorMask
-    0,              // ShadowBase
-    0x100000000000, // OriginBase
+    0x200000,       // ShadowBase (== kShadowOffset)
+    0x100000200000, // OriginBase
 };
 
 static const PlatformMemoryMapParams Linux_X86_MemoryMapParams = {

>From 8f509e8c87d977354860ee187e46a1efeab1c5e3 Mon Sep 17 00:00:00 2001
From: Azat Khuzhin <a3at.mail at gmail.com>
Date: Sat, 13 Dec 2025 19:31:44 +0100
Subject: [PATCH 2/4] [clang/test] update MSan shadow area calculation in
 cleanup-destslot-simple.c

---
 clang/test/CodeGen/cleanup-destslot-simple.c | 33 +++++++++++---------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/clang/test/CodeGen/cleanup-destslot-simple.c b/clang/test/CodeGen/cleanup-destslot-simple.c
index 23a70d4a7da25..2b43d0de51a28 100644
--- a/clang/test/CodeGen/cleanup-destslot-simple.c
+++ b/clang/test/CodeGen/cleanup-destslot-simple.c
@@ -41,30 +41,33 @@
 // CHECK-MSAN-NEXT:    [[X:%.*]] = alloca i32, align 4
 // CHECK-MSAN-NEXT:    [[P:%.*]] = alloca ptr, align 8
 // CHECK-MSAN-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[X]]) #[[ATTR3:[0-9]+]], !dbg [[DBG10:![0-9]+]]
-// CHECK-MSAN-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[X]] to i64, !dbg [[DBG10]]
-// CHECK-MSAN-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080, !dbg [[DBG10]]
-// CHECK-MSAN-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr, !dbg [[DBG10]]
-// CHECK-MSAN-NEXT:    store i32 0, ptr [[TMP2]], align 4, !dbg [[DBG11:![0-9]+]]
+// CHECK-MSAN-NEXT:    [[TMP1_0:%.*]] = ptrtoint ptr [[X]] to i64, !dbg [[DBG10]]
+// CHECK-MSAN-NEXT:    [[TMP1_1:%.*]] = xor i64 [[TMP1_0]], 87960930222080, !dbg [[DBG10]]
+// CHECK-MSAN-NEXT:    [[TMP1_2:%.*]] = add i64 [[TMP1_1]], 2097152, !dbg [[DBG10]]
+// CHECK-MSAN-NEXT:    [[TMP1_3:%.*]] = inttoptr i64 [[TMP1_2]] to ptr, !dbg [[DBG10]]
+// CHECK-MSAN-NEXT:    store i32 0, ptr [[TMP1_3]], align 4, !dbg [[DBG11:![0-9]+]]
 // CHECK-MSAN-NEXT:    store i32 3, ptr [[X]], align 4, !dbg [[DBG11]], !tbaa [[INT_TBAA12:![0-9]+]]
 // CHECK-MSAN-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[P]]), !dbg [[DBG16:![0-9]+]]
-// CHECK-MSAN-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[P]] to i64, !dbg [[DBG16]]
-// CHECK-MSAN-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080, !dbg [[DBG16]]
-// CHECK-MSAN-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr, !dbg [[DBG16]]
-// CHECK-MSAN-NEXT:    store i64 0, ptr [[TMP5]], align 8, !dbg [[DBG17:![0-9]+]]
+// CHECK-MSAN-NEXT:    [[TMP1_0:%.*]] = ptrtoint ptr [[P]] to i64, !dbg [[DBG16]]
+// CHECK-MSAN-NEXT:    [[TMP1_1:%.*]] = xor i64 [[TMP1_0]], 87960930222080, !dbg [[DBG16]]
+// CHECK-MSAN-NEXT:    [[TMP1_2:%.*]] = add i64 [[TMP1_1]], 2097152, !dbg [[DBG16]]
+// CHECK-MSAN-NEXT:    [[TMP1_3:%.*]] = inttoptr i64 [[TMP1_2]] to ptr, !dbg [[DBG16]]
+// CHECK-MSAN-NEXT:    store i64 0, ptr [[TMP1_3]], align 8, !dbg [[DBG17:![0-9]+]]
 // CHECK-MSAN-NEXT:    store volatile ptr [[X]], ptr [[P]], align 8, !dbg [[DBG17]], !tbaa [[INTPTR_TBAA18:![0-9]+]]
 // CHECK-MSAN-NEXT:    [[P_0_P_0_P_0_P_0_:%.*]] = load volatile ptr, ptr [[P]], align 8, !dbg [[DBG21:![0-9]+]], !tbaa [[INTPTR_TBAA18]]
-// CHECK-MSAN-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP5]], align 8, !dbg [[DBG21]]
+// CHECK-MSAN-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP1_3]], align 8, !dbg [[DBG21]]
 // CHECK-MSAN-NEXT:    [[_MSCMP_NOT:%.*]] = icmp eq i64 [[_MSLD]], 0, !dbg [[DBG22:![0-9]+]]
 // CHECK-MSAN-NEXT:    br i1 [[_MSCMP_NOT]], label %[[BB7:.*]], label %[[BB6:.*]], !dbg [[DBG22]], !prof [[PROF23:![0-9]+]]
 // CHECK-MSAN:       [[BB6]]:
 // CHECK-MSAN-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4:[0-9]+]], !dbg [[DBG22]]
 // CHECK-MSAN-NEXT:    unreachable, !dbg [[DBG22]]
 // CHECK-MSAN:       [[BB7]]:
-// CHECK-MSAN-NEXT:    [[TMP8:%.*]] = load i32, ptr [[P_0_P_0_P_0_P_0_]], align 4, !dbg [[DBG22]], !tbaa [[INT_TBAA12]]
-// CHECK-MSAN-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[P_0_P_0_P_0_P_0_]] to i64, !dbg [[DBG22]]
-// CHECK-MSAN-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080, !dbg [[DBG22]]
-// CHECK-MSAN-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr, !dbg [[DBG22]]
-// CHECK-MSAN-NEXT:    [[_MSLD1:%.*]] = load i32, ptr [[TMP11]], align 4, !dbg [[DBG22]]
+// CHECK-MSAN-NEXT:    [[TMP2_0:%.*]] = load i32, ptr [[P_0_P_0_P_0_P_0_]], align 4, !dbg [[DBG22]], !tbaa [[INT_TBAA12]]
+// CHECK-MSAN-NEXT:    [[TMP2_1:%.*]] = ptrtoint ptr [[P_0_P_0_P_0_P_0_]] to i64, !dbg [[DBG22]]
+// CHECK-MSAN-NEXT:    [[TMP2_2:%.*]] = xor i64 [[TMP2_1]], 87960930222080, !dbg [[DBG22]]
+// CHECK-MSAN-NEXT:    [[TMP2_3:%.*]] = add i64 [[TMP2_2]], 2097152, !dbg [[DBG22]]
+// CHECK-MSAN-NEXT:    [[TMP2_4:%.*]] = inttoptr i64 [[TMP2_3]] to ptr, !dbg [[DBG22]]
+// CHECK-MSAN-NEXT:    [[_MSLD1:%.*]] = load i32, ptr [[TMP2_4]], align 4, !dbg [[DBG22]]
 // CHECK-MSAN-NEXT:    call void @llvm.lifetime.end.p0(ptr nonnull [[P]]), !dbg [[DBG24:![0-9]+]]
 // CHECK-MSAN-NEXT:    call void @llvm.lifetime.end.p0(ptr nonnull [[X]]) #[[ATTR3]], !dbg [[DBG24]]
 // CHECK-MSAN-NEXT:    [[_MSCMP2_NOT:%.*]] = icmp eq i32 [[_MSLD1]], 0, !dbg [[DBG25:![0-9]+]]
@@ -73,7 +76,7 @@
 // CHECK-MSAN-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]], !dbg [[DBG25]]
 // CHECK-MSAN-NEXT:    unreachable, !dbg [[DBG25]]
 // CHECK-MSAN:       [[BB13]]:
-// CHECK-MSAN-NEXT:    ret i32 [[TMP8]], !dbg [[DBG25]]
+// CHECK-MSAN-NEXT:    ret i32 [[TMP2_0]], !dbg [[DBG25]]
 //
 // CHECK-KMSAN-LABEL: define dso_local i32 @test(
 // CHECK-KMSAN-SAME: ) local_unnamed_addr #[[ATTR0:[0-9]+]] !dbg [[DBG6:![0-9]+]] {

>From c2e92275e099be66d6652219402e59f673b339dc Mon Sep 17 00:00:00 2001
From: Azat Khuzhin <a3at.mail at gmail.com>
Date: Sat, 13 Dec 2025 19:58:53 +0100
Subject: [PATCH 3/4] [llvm/test/Instrumentation/MemorySanitizer] regenerate
 test checks with update_test_checks.py

$ llvm/utils/update_test_checks.py --opt-binary=.cmake-patched-msan/bin/opt $(cat .cmake-patched-msan/tests.log )
---
 .../MemorySanitizer/X86/avx-intrinsics-x86.ll |   99 +-
 .../X86/avx10_2_512ni-intrinsics.ll           |   18 +-
 .../X86/avx2-intrinsics-x86.ll                |   90 +-
 .../X86/avx512-intrinsics-upgrade.ll          | 1450 +++++++++--------
 .../MemorySanitizer/X86/avx512-intrinsics.ll  |  300 ++--
 .../MemorySanitizer/X86/avx512bf16-mov.ll     |   66 +-
 .../X86/avx512bw-intrinsics-upgrade.ll        |  364 +++--
 .../X86/avx512bw-intrinsics.ll                |   57 +-
 .../X86/avx512fp16-arith-intrinsics.ll        |   30 +-
 .../X86/avx512fp16-arith-vl-intrinsics.ll     |   42 +-
 .../X86/avx512fp16-intrinsics.ll              |  153 +-
 .../X86/avx512vl-intrinsics.ll                |   36 +-
 .../X86/avx512vl_vnni-intrinsics-upgrade.ll   |   24 +-
 .../X86/avx512vl_vnni-intrinsics.ll           |   27 +-
 .../X86/avx512vnni-intrinsics-upgrade.ll      |   12 +-
 .../X86/avx512vnni-intrinsics.ll              |   12 +-
 .../X86/avxvnniint8-intrinsics.ll             |   36 +-
 .../X86/f16c-intrinsics-upgrade.ll            |   15 +-
 .../MemorySanitizer/X86/f16c-intrinsics.ll    |   12 +-
 .../MemorySanitizer/X86/mmx-intrinsics.ll     |    9 +-
 .../MemorySanitizer/X86/sse-intrinsics-x86.ll |   18 +-
 .../X86/sse2-intrinsics-x86.ll                |   30 +-
 .../X86/sse41-intrinsics-x86.ll               |   15 +-
 .../MemorySanitizer/X86/vararg_shadow.ll      |  369 +++--
 .../Instrumentation/MemorySanitizer/byval.ll  |  128 +-
 .../MemorySanitizer/disambiguate-origin.ll    |  124 +-
 .../MemorySanitizer/masked-store-load.ll      |  147 +-
 .../MemorySanitizer/msan-pass-second-run.ll   |    3 +-
 .../MemorySanitizer/msan_asm_conservative.ll  |  783 +++++++--
 .../MemorySanitizer/msan_basic.ll             |  571 ++++---
 .../MemorySanitizer/msan_debug_info.ll        |  217 +--
 .../MemorySanitizer/msan_eager.ll             |   49 +-
 .../MemorySanitizer/opaque-ptr.ll             |    6 +-
 .../Instrumentation/MemorySanitizer/reduce.ll |   15 +-
 .../MemorySanitizer/vector-load-store.ll      |  240 ++-
 .../Instrumentation/MemorySanitizer/vscale.ll |  172 +-
 36 files changed, 3558 insertions(+), 2181 deletions(-)

diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx-intrinsics-x86.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx-intrinsics-x86.ll
index e9c7e65c65a67..b9ddcb017b6b9 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx-intrinsics-x86.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx-intrinsics-x86.ll
@@ -489,14 +489,15 @@ define <32 x i8> @test_x86_avx_ldu_dq_256(ptr %a0) #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i8>, ptr [[TMP4]], align 1
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1:![0-9]+]]
-; CHECK:       5:
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1:![0-9]+]]
+; CHECK:       6:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn()
 ; CHECK-NEXT:    unreachable
-; CHECK:       6:
+; CHECK:       7:
 ; CHECK-NEXT:    [[RES:%.*]] = call <32 x i8> @llvm.x86.avx.ldu.dq.256(ptr [[A0]])
 ; CHECK-NEXT:    store <32 x i8> [[_MSLD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <32 x i8> [[RES]]
@@ -513,16 +514,17 @@ define <2 x double> @test_x86_avx_maskload_pd(ptr %a0, <2 x i64> %mask) #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <2 x double> @llvm.x86.avx.maskload.pd(ptr [[TMP4]], <2 x i64> [[MASK:%.*]])
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast <2 x double> [[TMP5]] to <2 x i64>
 ; CHECK-NEXT:    [[TMP3:%.*]] = bitcast <2 x i64> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP3]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; CHECK:       8:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP9:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; CHECK:       9:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn()
 ; CHECK-NEXT:    unreachable
-; CHECK:       9:
+; CHECK:       10:
 ; CHECK-NEXT:    [[RES:%.*]] = call <2 x double> @llvm.x86.avx.maskload.pd(ptr [[A0]], <2 x i64> [[MASK]])
 ; CHECK-NEXT:    store <2 x i64> [[TMP6]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <2 x double> [[RES]]
@@ -539,16 +541,17 @@ define <4 x double> @test_x86_avx_maskload_pd_256(ptr %a0, <4 x i64> %mask) #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <4 x double> @llvm.x86.avx.maskload.pd.256(ptr [[TMP4]], <4 x i64> [[MASK:%.*]])
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast <4 x double> [[TMP5]] to <4 x i64>
 ; CHECK-NEXT:    [[TMP3:%.*]] = bitcast <4 x i64> [[TMP2]] to i256
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i256 [[TMP3]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; CHECK:       8:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP9:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; CHECK:       9:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn()
 ; CHECK-NEXT:    unreachable
-; CHECK:       9:
+; CHECK:       10:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x double> @llvm.x86.avx.maskload.pd.256(ptr [[A0]], <4 x i64> [[MASK]])
 ; CHECK-NEXT:    store <4 x i64> [[TMP6]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x double> [[RES]]
@@ -565,16 +568,17 @@ define <4 x float> @test_x86_avx_maskload_ps(ptr %a0, <4 x i32> %mask) #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <4 x float> @llvm.x86.avx.maskload.ps(ptr [[TMP4]], <4 x i32> [[MASK:%.*]])
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast <4 x float> [[TMP5]] to <4 x i32>
 ; CHECK-NEXT:    [[TMP3:%.*]] = bitcast <4 x i32> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP3]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; CHECK:       8:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP9:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; CHECK:       9:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn()
 ; CHECK-NEXT:    unreachable
-; CHECK:       9:
+; CHECK:       10:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.avx.maskload.ps(ptr [[A0]], <4 x i32> [[MASK]])
 ; CHECK-NEXT:    store <4 x i32> [[TMP6]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES]]
@@ -591,16 +595,17 @@ define <8 x float> @test_x86_avx_maskload_ps_256(ptr %a0, <8 x i32> %mask) #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <8 x float> @llvm.x86.avx.maskload.ps.256(ptr [[TMP4]], <8 x i32> [[MASK:%.*]])
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast <8 x float> [[TMP5]] to <8 x i32>
 ; CHECK-NEXT:    [[TMP3:%.*]] = bitcast <8 x i32> [[TMP2]] to i256
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i256 [[TMP3]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; CHECK:       8:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP9:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; CHECK:       9:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn()
 ; CHECK-NEXT:    unreachable
-; CHECK:       9:
+; CHECK:       10:
 ; CHECK-NEXT:    [[RES:%.*]] = call <8 x float> @llvm.x86.avx.maskload.ps.256(ptr [[A0]], <8 x i32> [[MASK]])
 ; CHECK-NEXT:    store <8 x i32> [[TMP6]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x float> [[RES]]
@@ -619,18 +624,19 @@ define void @test_x86_avx_maskstore_pd(ptr %a0, <2 x i64> %mask, <2 x double> %a
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[TMP7:%.*]] = bitcast <2 x i64> [[TMP3]] to <2 x double>
 ; CHECK-NEXT:    call void @llvm.x86.avx.maskstore.pd(ptr [[TMP6]], <2 x i64> [[MASK:%.*]], <2 x double> [[TMP7]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast <2 x i64> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i128 [[TMP5]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; CHECK:       9:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; CHECK:       10:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn()
 ; CHECK-NEXT:    unreachable
-; CHECK:       10:
+; CHECK:       11:
 ; CHECK-NEXT:    call void @llvm.x86.avx.maskstore.pd(ptr [[A0]], <2 x i64> [[MASK]], <2 x double> [[A2:%.*]])
 ; CHECK-NEXT:    ret void
 ;
@@ -648,18 +654,19 @@ define void @test_x86_avx_maskstore_pd_256(ptr %a0, <4 x i64> %mask, <4 x double
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[TMP7:%.*]] = bitcast <4 x i64> [[TMP3]] to <4 x double>
 ; CHECK-NEXT:    call void @llvm.x86.avx.maskstore.pd.256(ptr [[TMP6]], <4 x i64> [[MASK:%.*]], <4 x double> [[TMP7]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast <4 x i64> [[TMP2]] to i256
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i256 [[TMP5]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; CHECK:       9:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; CHECK:       10:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn()
 ; CHECK-NEXT:    unreachable
-; CHECK:       10:
+; CHECK:       11:
 ; CHECK-NEXT:    call void @llvm.x86.avx.maskstore.pd.256(ptr [[A0]], <4 x i64> [[MASK]], <4 x double> [[A2:%.*]])
 ; CHECK-NEXT:    ret void
 ;
@@ -677,18 +684,19 @@ define void @test_x86_avx_maskstore_ps(ptr %a0, <4 x i32> %mask, <4 x float> %a2
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[TMP7:%.*]] = bitcast <4 x i32> [[TMP3]] to <4 x float>
 ; CHECK-NEXT:    call void @llvm.x86.avx.maskstore.ps(ptr [[TMP6]], <4 x i32> [[MASK:%.*]], <4 x float> [[TMP7]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast <4 x i32> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i128 [[TMP5]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; CHECK:       9:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; CHECK:       10:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn()
 ; CHECK-NEXT:    unreachable
-; CHECK:       10:
+; CHECK:       11:
 ; CHECK-NEXT:    call void @llvm.x86.avx.maskstore.ps(ptr [[A0]], <4 x i32> [[MASK]], <4 x float> [[A2:%.*]])
 ; CHECK-NEXT:    ret void
 ;
@@ -706,18 +714,19 @@ define void @test_x86_avx_maskstore_ps_256(ptr %a0, <8 x i32> %mask, <8 x float>
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[TMP7:%.*]] = bitcast <8 x i32> [[TMP3]] to <8 x float>
 ; CHECK-NEXT:    call void @llvm.x86.avx.maskstore.ps.256(ptr [[TMP6]], <8 x i32> [[MASK:%.*]], <8 x float> [[TMP7]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast <8 x i32> [[TMP2]] to i256
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i256 [[TMP5]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; CHECK:       9:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; CHECK:       10:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn()
 ; CHECK-NEXT:    unreachable
-; CHECK:       10:
+; CHECK:       11:
 ; CHECK-NEXT:    call void @llvm.x86.avx.maskstore.ps.256(ptr [[A0]], <8 x i32> [[MASK]], <8 x float> [[A2:%.*]])
 ; CHECK-NEXT:    ret void
 ;
@@ -1048,7 +1057,8 @@ define <4 x float> @test_x86_avx_vpermilvar_ps_load(<4 x float> %a0, ptr %a1) #0
 ; CHECK-NEXT:    [[A2:%.*]] = load <4 x i32>, ptr [[A1:%.*]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[A1]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = trunc <4 x i32> [[_MSLD]] to <4 x i2>
 ; CHECK-NEXT:    [[A0:%.*]] = bitcast <4 x i32> [[TMP2]] to <4 x float>
@@ -1056,11 +1066,11 @@ define <4 x float> @test_x86_avx_vpermilvar_ps_load(<4 x float> %a0, ptr %a1) #0
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <4 x float> [[RES]] to <4 x i32>
 ; CHECK-NEXT:    [[TMP12:%.*]] = bitcast <4 x i2> [[TMP8]] to i8
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i8 [[TMP12]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
+; CHECK:       14:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn()
 ; CHECK-NEXT:    unreachable
-; CHECK:       14:
+; CHECK:       15:
 ; CHECK-NEXT:    [[RES1:%.*]] = call <4 x float> @llvm.x86.avx.vpermilvar.ps(<4 x float> [[A3:%.*]], <4 x i32> [[A2]])
 ; CHECK-NEXT:    store <4 x i32> [[TMP10]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES1]]
@@ -1366,7 +1376,8 @@ define void @movnt_dq(ptr %p, <2 x i64> %a1) nounwind #0 {
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    store <4 x i64> [[_MSPROP1]], ptr [[TMP7]], align 32
 ; CHECK-NEXT:    store <4 x i64> [[A3]], ptr [[P]], align 32, !nontemporal [[META2:![0-9]+]]
 ; CHECK-NEXT:    ret void
@@ -1391,7 +1402,8 @@ define void @movnt_ps(ptr %p, <8 x float> %a) nounwind #0 {
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    store <8 x i32> [[TMP2]], ptr [[TMP7]], align 32
 ; CHECK-NEXT:    store <8 x float> [[A:%.*]], ptr [[P]], align 32, !nontemporal [[META2]]
 ; CHECK-NEXT:    ret void
@@ -1417,7 +1429,8 @@ define void @movnt_pd(ptr %p, <4 x double> %a1) nounwind #0 {
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    store <4 x i64> [[_MSPROP]], ptr [[TMP7]], align 32
 ; CHECK-NEXT:    store <4 x double> [[A2]], ptr [[P]], align 32, !nontemporal [[META2]]
 ; CHECK-NEXT:    ret void
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx10_2_512ni-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx10_2_512ni-intrinsics.ll
index 9d5cabca5fef8..1849cd6ebbbc1 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx10_2_512ni-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx10_2_512ni-intrinsics.ll
@@ -140,7 +140,8 @@ define <16 x i32> @test_mm512_dpbssd_epi32(<16 x i32> %__W, <64 x i8> %__A, ptr
 ; CHECK-NEXT:    [[TMP10:%.*]] = load <64 x i8>, ptr [[PB]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PB]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    [[TMP9:%.*]] = load <64 x i8>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP14:%.*]] = icmp ne <64 x i8> [[TMP12]], zeroinitializer
 ; CHECK-NEXT:    [[TMP15:%.*]] = icmp ne <64 x i8> [[TMP9]], zeroinitializer
@@ -264,7 +265,8 @@ define <16 x i32> @test_mm512_dpbsud_epi32(<16 x i32> %__W, <64 x i8> %__A, ptr
 ; CHECK-NEXT:    [[__B:%.*]] = load <64 x i8>, ptr [[PB]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PB]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <64 x i8> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <64 x i8> [[_MSLD]], zeroinitializer
@@ -388,7 +390,8 @@ define <16 x i32> @test_mm512_dpbuud_epi32(<16 x i32> %__W, <64 x i8> %__A, ptr
 ; CHECK-NEXT:    [[__B:%.*]] = load <64 x i8>, ptr [[PB]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PB]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <64 x i8> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <64 x i8> [[_MSLD]], zeroinitializer
@@ -513,7 +516,8 @@ define <16 x i32> @test_mm512_dpwsud_epi32(<16 x i32> %__W, <32 x i16> %__A, ptr
 ; CHECK-NEXT:    [[__B:%.*]] = load <32 x i16>, ptr [[PB]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PB]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <32 x i16> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <32 x i16> [[_MSLD]], zeroinitializer
@@ -637,7 +641,8 @@ define <16 x i32> @test_mm512_dpwusd_epi32(<16 x i32> %__W, <32 x i16> %__A, ptr
 ; CHECK-NEXT:    [[__B:%.*]] = load <32 x i16>, ptr [[PB]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PB]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <32 x i16> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <32 x i16> [[_MSLD]], zeroinitializer
@@ -761,7 +766,8 @@ define <16 x i32> @test_mm512_dpwuud_epi32(<16 x i32> %__W, <32 x i16> %__A, ptr
 ; CHECK-NEXT:    [[__B:%.*]] = load <32 x i16>, ptr [[PB]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PB]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <32 x i16> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <32 x i16> [[_MSLD]], zeroinitializer
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx2-intrinsics-x86.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx2-intrinsics-x86.ll
index 2d29b73226134..8502643763f80 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx2-intrinsics-x86.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx2-intrinsics-x86.ll
@@ -506,7 +506,8 @@ define <16 x i16> @test_x86_avx2_psrl_w_load(<16 x i16> %a0, ptr %p) #0 {
 ; CHECK-NEXT:    [[A1:%.*]] = load <8 x i16>, ptr [[P:%.*]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <8 x i16> [[_MSLD]] to i128
 ; CHECK-NEXT:    [[TMP9:%.*]] = trunc i128 [[TMP8]] to i64
@@ -722,7 +723,8 @@ define <16 x i16> @test_x86_avx2_pmadd_ub_sw_load_op0(ptr %ptr, <32 x i8> %a1) #
 ; CHECK-NEXT:    [[A0:%.*]] = load <32 x i8>, ptr [[PTR:%.*]], align 32
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i8>, ptr [[TMP7]], align 32
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <32 x i8> [[_MSLD]], zeroinitializer
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <32 x i8> [[TMP2]], zeroinitializer
@@ -865,18 +867,19 @@ define <16 x i16> @test_x86_avx2_mpsadbw_load_op0(ptr %ptr, <32 x i8> %a1) #0 {
 ; CHECK-NEXT:    [[A0:%.*]] = load <32 x i8>, ptr [[PTR:%.*]], align 32
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i8>, ptr [[TMP7]], align 32
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <32 x i8> [[_MSLD]] to i256
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i256 [[TMP8]], 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <32 x i8> [[TMP2]] to i256
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i256 [[TMP9]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP1]], [[_MSCMP2]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
+; CHECK:       11:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR6]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       11:
+; CHECK:       12:
 ; CHECK-NEXT:    [[RES:%.*]] = call <16 x i16> @llvm.x86.avx2.mpsadbw(<32 x i8> [[A0]], <32 x i8> [[A1:%.*]], i8 7)
 ; CHECK-NEXT:    store <16 x i16> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x i16> [[RES]]
@@ -1039,15 +1042,16 @@ define <2 x i64> @test_x86_avx2_maskload_q(ptr %a0, <2 x i64> %a1) #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <2 x i64> @llvm.x86.avx2.maskload.q(ptr [[TMP4]], <2 x i64> [[A1:%.*]])
 ; CHECK-NEXT:    [[TMP3:%.*]] = bitcast <2 x i64> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP3]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
-; CHECK:       7:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP8:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK:       8:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR6]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       8:
+; CHECK:       9:
 ; CHECK-NEXT:    [[RES:%.*]] = call <2 x i64> @llvm.x86.avx2.maskload.q(ptr [[A0]], <2 x i64> [[A1]])
 ; CHECK-NEXT:    store <2 x i64> [[TMP5]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <2 x i64> [[RES]]
@@ -1064,15 +1068,16 @@ define <4 x i64> @test_x86_avx2_maskload_q_256(ptr %a0, <4 x i64> %a1) #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <4 x i64> @llvm.x86.avx2.maskload.q.256(ptr [[TMP4]], <4 x i64> [[A1:%.*]])
 ; CHECK-NEXT:    [[TMP3:%.*]] = bitcast <4 x i64> [[TMP2]] to i256
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i256 [[TMP3]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
-; CHECK:       7:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP8:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK:       8:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR6]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       8:
+; CHECK:       9:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x i64> @llvm.x86.avx2.maskload.q.256(ptr [[A0]], <4 x i64> [[A1]])
 ; CHECK-NEXT:    store <4 x i64> [[TMP5]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x i64> [[RES]]
@@ -1089,15 +1094,16 @@ define <4 x i32> @test_x86_avx2_maskload_d(ptr %a0, <4 x i32> %a1) #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <4 x i32> @llvm.x86.avx2.maskload.d(ptr [[TMP4]], <4 x i32> [[A1:%.*]])
 ; CHECK-NEXT:    [[TMP3:%.*]] = bitcast <4 x i32> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP3]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
-; CHECK:       7:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP8:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK:       8:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR6]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       8:
+; CHECK:       9:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x i32> @llvm.x86.avx2.maskload.d(ptr [[A0]], <4 x i32> [[A1]])
 ; CHECK-NEXT:    store <4 x i32> [[TMP5]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x i32> [[RES]]
@@ -1114,15 +1120,16 @@ define <8 x i32> @test_x86_avx2_maskload_d_256(ptr %a0, <8 x i32> %a1) #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <8 x i32> @llvm.x86.avx2.maskload.d.256(ptr [[TMP4]], <8 x i32> [[A1:%.*]])
 ; CHECK-NEXT:    [[TMP3:%.*]] = bitcast <8 x i32> [[TMP2]] to i256
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i256 [[TMP3]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
-; CHECK:       7:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP8:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK:       8:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR6]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       8:
+; CHECK:       9:
 ; CHECK-NEXT:    [[RES:%.*]] = call <8 x i32> @llvm.x86.avx2.maskload.d.256(ptr [[A0]], <8 x i32> [[A1]])
 ; CHECK-NEXT:    store <8 x i32> [[TMP5]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x i32> [[RES]]
@@ -1141,17 +1148,18 @@ define void @test_x86_avx2_maskstore_q(ptr %a0, <2 x i64> %a1, <2 x i64> %a2) #0
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    call void @llvm.x86.avx2.maskstore.q(ptr [[TMP6]], <2 x i64> [[A1:%.*]], <2 x i64> [[TMP3]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast <2 x i64> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i128 [[TMP5]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; CHECK:       8:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK:       9:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR6]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       9:
+; CHECK:       10:
 ; CHECK-NEXT:    call void @llvm.x86.avx2.maskstore.q(ptr [[A0]], <2 x i64> [[A1]], <2 x i64> [[A2:%.*]])
 ; CHECK-NEXT:    ret void
 ;
@@ -1169,17 +1177,18 @@ define void @test_x86_avx2_maskstore_q_256(ptr %a0, <4 x i64> %a1, <4 x i64> %a2
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    call void @llvm.x86.avx2.maskstore.q.256(ptr [[TMP6]], <4 x i64> [[A1:%.*]], <4 x i64> [[TMP3]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast <4 x i64> [[TMP2]] to i256
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i256 [[TMP5]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; CHECK:       8:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK:       9:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR6]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       9:
+; CHECK:       10:
 ; CHECK-NEXT:    call void @llvm.x86.avx2.maskstore.q.256(ptr [[A0]], <4 x i64> [[A1]], <4 x i64> [[A2:%.*]])
 ; CHECK-NEXT:    ret void
 ;
@@ -1197,17 +1206,18 @@ define void @test_x86_avx2_maskstore_d(ptr %a0, <4 x i32> %a1, <4 x i32> %a2) #0
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    call void @llvm.x86.avx2.maskstore.d(ptr [[TMP6]], <4 x i32> [[A1:%.*]], <4 x i32> [[TMP3]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast <4 x i32> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i128 [[TMP5]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; CHECK:       8:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK:       9:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR6]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       9:
+; CHECK:       10:
 ; CHECK-NEXT:    call void @llvm.x86.avx2.maskstore.d(ptr [[A0]], <4 x i32> [[A1]], <4 x i32> [[A2:%.*]])
 ; CHECK-NEXT:    ret void
 ;
@@ -1225,17 +1235,18 @@ define void @test_x86_avx2_maskstore_d_256(ptr %a0, <8 x i32> %a1, <8 x i32> %a2
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    call void @llvm.x86.avx2.maskstore.d.256(ptr [[TMP6]], <8 x i32> [[A1:%.*]], <8 x i32> [[TMP3]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast <8 x i32> [[TMP2]] to i256
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i256 [[TMP5]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; CHECK:       8:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK:       9:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR6]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       9:
+; CHECK:       10:
 ; CHECK-NEXT:    call void @llvm.x86.avx2.maskstore.d.256(ptr [[A0]], <8 x i32> [[A1]], <8 x i32> [[A2:%.*]])
 ; CHECK-NEXT:    ret void
 ;
@@ -2151,7 +2162,8 @@ define <8 x float>  @test_gather_mask(<8 x float> %a0, ptr %a, <8 x i32> %idx, <
 ; CHECK:       12:
 ; CHECK-NEXT:    [[TMP13:%.*]] = ptrtoint ptr [[OUT:%.*]] to i64
 ; CHECK-NEXT:    [[TMP14:%.*]] = xor i64 [[TMP13]], 87960930222080
-; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP14]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP14]], 2097152
+; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    store <8 x i32> [[TMP4]], ptr [[TMP15]], align 4
 ; CHECK-NEXT:    store <8 x float> [[MASK]], ptr [[OUT]], align 4
 ; CHECK-NEXT:    store <8 x i32> zeroinitializer, ptr @__msan_retval_tls, align 8
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512-intrinsics-upgrade.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512-intrinsics-upgrade.ll
index f0a1791068d9e..23286ac63c09a 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512-intrinsics-upgrade.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512-intrinsics-upgrade.ll
@@ -731,27 +731,29 @@ define void @test_store1(<16 x float> %data, ptr %ptr, ptr %ptr2, i16 %mask)  #0
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v16i32.p0(<16 x i32> [[TMP2]], ptr align 1 [[TMP9]], <16 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i1> [[TMP5]] to i16
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP10]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1:![0-9]+]]
-; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8:[0-9]+]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1:![0-9]+]]
 ; CHECK:       12:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9:[0-9]+]]
+; CHECK-NEXT:    unreachable
+; CHECK:       13:
 ; CHECK-NEXT:    call void @llvm.masked.store.v16f32.p0(<16 x float> [[DATA:%.*]], ptr align 1 [[PTR]], <16 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP4]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP14:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store <16 x i32> [[TMP2]], ptr [[TMP17]], align 1
 ; CHECK-NEXT:    store <16 x float> [[DATA]], ptr [[PTR2]], align 1
 ; CHECK-NEXT:    ret void
@@ -775,27 +777,29 @@ define void @test_store2(<8 x double> %data, ptr %ptr, ptr %ptr2, i8 %mask)  #0
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v8i64.p0(<8 x i64> [[TMP2]], ptr align 1 [[TMP9]], <8 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <8 x i1> [[TMP5]] to i8
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i8 [[TMP10]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
-; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       12:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       13:
 ; CHECK-NEXT:    call void @llvm.masked.store.v8f64.p0(<8 x double> [[DATA:%.*]], ptr align 1 [[PTR]], <8 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP4]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP14:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store <8 x i64> [[TMP2]], ptr [[TMP17]], align 1
 ; CHECK-NEXT:    store <8 x double> [[DATA]], ptr [[PTR2]], align 1
 ; CHECK-NEXT:    ret void
@@ -819,27 +823,29 @@ define void @test_mask_store_aligned_ps(<16 x float> %data, ptr %ptr, ptr %ptr2,
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v16i32.p0(<16 x i32> [[TMP2]], ptr align 64 [[TMP9]], <16 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i1> [[TMP5]] to i16
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP10]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
-; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       12:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       13:
 ; CHECK-NEXT:    call void @llvm.masked.store.v16f32.p0(<16 x float> [[DATA:%.*]], ptr align 64 [[PTR]], <16 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP4]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP14:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store <16 x i32> [[TMP2]], ptr [[TMP17]], align 64
 ; CHECK-NEXT:    store <16 x float> [[DATA]], ptr [[PTR2]], align 64
 ; CHECK-NEXT:    ret void
@@ -863,27 +869,29 @@ define void @test_mask_store_aligned_pd(<8 x double> %data, ptr %ptr, ptr %ptr2,
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v8i64.p0(<8 x i64> [[TMP2]], ptr align 64 [[TMP9]], <8 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <8 x i1> [[TMP5]] to i8
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i8 [[TMP10]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
-; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       12:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       13:
 ; CHECK-NEXT:    call void @llvm.masked.store.v8f64.p0(<8 x double> [[DATA:%.*]], ptr align 64 [[PTR]], <8 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP4]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP14:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store <8 x i64> [[TMP2]], ptr [[TMP17]], align 64
 ; CHECK-NEXT:    store <8 x double> [[DATA]], ptr [[PTR2]], align 64
 ; CHECK-NEXT:    ret void
@@ -907,27 +915,29 @@ define void @test_int_x86_avx512_mask_storeu_q_512(ptr %ptr1, ptr %ptr2, <8 x i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast i8 [[X2:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR1:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v8i64.p0(<8 x i64> [[TMP2]], ptr align 1 [[TMP9]], <8 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <8 x i1> [[TMP5]] to i8
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i8 [[TMP10]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
-; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       12:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       13:
 ; CHECK-NEXT:    call void @llvm.masked.store.v8i64.p0(<8 x i64> [[X1:%.*]], ptr align 1 [[PTR1]], <8 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP4]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP14:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store <8 x i64> [[TMP2]], ptr [[TMP17]], align 1
 ; CHECK-NEXT:    store <8 x i64> [[X1]], ptr [[PTR2]], align 1
 ; CHECK-NEXT:    ret void
@@ -951,27 +961,29 @@ define void @test_int_x86_avx512_mask_storeu_d_512(ptr %ptr1, ptr %ptr2, <16 x i3
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast i16 [[X2:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR1:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v16i32.p0(<16 x i32> [[TMP2]], ptr align 1 [[TMP9]], <16 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i1> [[TMP5]] to i16
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP10]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
-; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       12:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       13:
 ; CHECK-NEXT:    call void @llvm.masked.store.v16i32.p0(<16 x i32> [[X1:%.*]], ptr align 1 [[PTR1]], <16 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP4]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP14:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store <16 x i32> [[TMP2]], ptr [[TMP17]], align 1
 ; CHECK-NEXT:    store <16 x i32> [[X1]], ptr [[PTR2]], align 1
 ; CHECK-NEXT:    ret void
@@ -995,27 +1007,29 @@ define void @test_int_x86_avx512_mask_store_q_512(ptr %ptr1, ptr %ptr2, <8 x i64>
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast i8 [[X2:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR1:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v8i64.p0(<8 x i64> [[TMP2]], ptr align 64 [[TMP9]], <8 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <8 x i1> [[TMP5]] to i8
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i8 [[TMP10]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
-; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       12:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       13:
 ; CHECK-NEXT:    call void @llvm.masked.store.v8i64.p0(<8 x i64> [[X1:%.*]], ptr align 64 [[PTR1]], <8 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP4]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP14:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store <8 x i64> [[TMP2]], ptr [[TMP17]], align 64
 ; CHECK-NEXT:    store <8 x i64> [[X1]], ptr [[PTR2]], align 64
 ; CHECK-NEXT:    ret void
@@ -1039,27 +1053,29 @@ define void @test_int_x86_avx512_mask_store_d_512(ptr %ptr1, ptr %ptr2, <16 x i32
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast i16 [[X2:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR1:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v16i32.p0(<16 x i32> [[TMP2]], ptr align 64 [[TMP9]], <16 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i1> [[TMP5]] to i16
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP10]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
-; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       12:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       13:
 ; CHECK-NEXT:    call void @llvm.masked.store.v16i32.p0(<16 x i32> [[X1:%.*]], ptr align 64 [[PTR1]], <16 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP4]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP14:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store <16 x i32> [[TMP2]], ptr [[TMP17]], align 64
 ; CHECK-NEXT:    store <16 x i32> [[X1]], ptr [[PTR2]], align 64
 ; CHECK-NEXT:    ret void
@@ -1080,45 +1096,48 @@ define <16 x float> @test_mask_load_aligned_ps(<16 x float> %data, ptr %ptr, i16
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = load <16 x float>, ptr [[PTR:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast i16 [[TMP2]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 64 [[TMP13]], <16 x i1> [[TMP10]], <16 x i32> [[_MSLD]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP14:%.*]] = bitcast <16 x i1> [[TMP9]] to i16
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i16 [[TMP14]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
-; CHECK:       15:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP25:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
+; CHECK:       17:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       16:
+; CHECK:       18:
 ; CHECK-NEXT:    [[TMP17:%.*]] = call <16 x float> @llvm.masked.load.v16f32.p0(ptr align 64 [[PTR]], <16 x i1> [[TMP10]], <16 x float> [[TMP5]])
 ; CHECK-NEXT:    [[TMP18:%.*]] = bitcast i16 [[TMP2]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast i16 [[MASK]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP20:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP21:%.*]] = xor i64 [[TMP20]], 87960930222080
-; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP21]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP21]], 2097152
+; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD1:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 64 [[TMP22]], <16 x i1> [[TMP19]], <16 x i32> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP23:%.*]] = bitcast <16 x i1> [[TMP18]] to i16
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i16 [[TMP23]], 0
 ; CHECK-NEXT:    [[_MSOR6:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
-; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP24:%.*]], label [[TMP25:%.*]], !prof [[PROF1]]
-; CHECK:       24:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP27:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
+; CHECK:       27:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       25:
+; CHECK:       28:
 ; CHECK-NEXT:    [[TMP26:%.*]] = call <16 x float> @llvm.masked.load.v16f32.p0(ptr align 64 [[PTR]], <16 x i1> [[TMP19]], <16 x float> zeroinitializer)
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[_MSMASKEDLD1]], [[_MSMASKEDLD]]
 ; CHECK-NEXT:    [[RES4:%.*]] = fadd <16 x float> [[TMP26]], [[TMP17]]
@@ -1143,45 +1162,48 @@ define <16 x float> @test_mask_load_unaligned_ps(<16 x float> %data, ptr %ptr, i
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = load <16 x float>, ptr [[PTR:%.*]], align 1
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 1
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast i16 [[TMP2]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 1 [[TMP13]], <16 x i1> [[TMP10]], <16 x i32> [[_MSLD]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP14:%.*]] = bitcast <16 x i1> [[TMP9]] to i16
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i16 [[TMP14]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
-; CHECK:       15:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP25:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
+; CHECK:       17:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       16:
+; CHECK:       18:
 ; CHECK-NEXT:    [[TMP17:%.*]] = call <16 x float> @llvm.masked.load.v16f32.p0(ptr align 1 [[PTR]], <16 x i1> [[TMP10]], <16 x float> [[TMP5]])
 ; CHECK-NEXT:    [[TMP18:%.*]] = bitcast i16 [[TMP2]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast i16 [[MASK]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP20:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP21:%.*]] = xor i64 [[TMP20]], 87960930222080
-; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP21]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP21]], 2097152
+; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD1:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 1 [[TMP22]], <16 x i1> [[TMP19]], <16 x i32> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP23:%.*]] = bitcast <16 x i1> [[TMP18]] to i16
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i16 [[TMP23]], 0
 ; CHECK-NEXT:    [[_MSOR6:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
-; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP24:%.*]], label [[TMP25:%.*]], !prof [[PROF1]]
-; CHECK:       24:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP27:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
+; CHECK:       27:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       25:
+; CHECK:       28:
 ; CHECK-NEXT:    [[TMP26:%.*]] = call <16 x float> @llvm.masked.load.v16f32.p0(ptr align 1 [[PTR]], <16 x i1> [[TMP19]], <16 x float> zeroinitializer)
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[_MSMASKEDLD1]], [[_MSMASKEDLD]]
 ; CHECK-NEXT:    [[RES4:%.*]] = fadd <16 x float> [[TMP26]], [[TMP17]]
@@ -1206,45 +1228,48 @@ define <8 x double> @test_mask_load_aligned_pd(<8 x double> %data, ptr %ptr, i8
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = load <8 x double>, ptr [[PTR:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast i8 [[TMP2]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 64 [[TMP13]], <8 x i1> [[TMP10]], <8 x i64> [[_MSLD]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP14:%.*]] = bitcast <8 x i1> [[TMP9]] to i8
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i8 [[TMP14]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
-; CHECK:       15:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP25:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
+; CHECK:       17:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       16:
+; CHECK:       18:
 ; CHECK-NEXT:    [[TMP17:%.*]] = call <8 x double> @llvm.masked.load.v8f64.p0(ptr align 64 [[PTR]], <8 x i1> [[TMP10]], <8 x double> [[TMP5]])
 ; CHECK-NEXT:    [[TMP18:%.*]] = bitcast i8 [[TMP2]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast i8 [[MASK]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP20:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP21:%.*]] = xor i64 [[TMP20]], 87960930222080
-; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP21]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP21]], 2097152
+; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD1:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 64 [[TMP22]], <8 x i1> [[TMP19]], <8 x i64> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP23:%.*]] = bitcast <8 x i1> [[TMP18]] to i8
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i8 [[TMP23]], 0
 ; CHECK-NEXT:    [[_MSOR6:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
-; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP24:%.*]], label [[TMP25:%.*]], !prof [[PROF1]]
-; CHECK:       24:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP27:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
+; CHECK:       27:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       25:
+; CHECK:       28:
 ; CHECK-NEXT:    [[TMP26:%.*]] = call <8 x double> @llvm.masked.load.v8f64.p0(ptr align 64 [[PTR]], <8 x i1> [[TMP19]], <8 x double> zeroinitializer)
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i64> [[_MSMASKEDLD1]], [[_MSMASKEDLD]]
 ; CHECK-NEXT:    [[RES4:%.*]] = fadd <8 x double> [[TMP26]], [[TMP17]]
@@ -1269,45 +1294,48 @@ define <8 x double> @test_mask_load_unaligned_pd(<8 x double> %data, ptr %ptr, i
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = load <8 x double>, ptr [[PTR:%.*]], align 1
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP8]], align 1
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast i8 [[TMP2]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 1 [[TMP13]], <8 x i1> [[TMP10]], <8 x i64> [[_MSLD]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP14:%.*]] = bitcast <8 x i1> [[TMP9]] to i8
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i8 [[TMP14]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
-; CHECK:       15:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP25:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
+; CHECK:       17:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       16:
+; CHECK:       18:
 ; CHECK-NEXT:    [[TMP17:%.*]] = call <8 x double> @llvm.masked.load.v8f64.p0(ptr align 1 [[PTR]], <8 x i1> [[TMP10]], <8 x double> [[TMP5]])
 ; CHECK-NEXT:    [[TMP18:%.*]] = bitcast i8 [[TMP2]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast i8 [[MASK]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP20:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP21:%.*]] = xor i64 [[TMP20]], 87960930222080
-; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP21]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP21]], 2097152
+; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD1:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 1 [[TMP22]], <8 x i1> [[TMP19]], <8 x i64> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP23:%.*]] = bitcast <8 x i1> [[TMP18]] to i8
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i8 [[TMP23]], 0
 ; CHECK-NEXT:    [[_MSOR6:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
-; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP24:%.*]], label [[TMP25:%.*]], !prof [[PROF1]]
-; CHECK:       24:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP27:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
+; CHECK:       27:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       25:
+; CHECK:       28:
 ; CHECK-NEXT:    [[TMP26:%.*]] = call <8 x double> @llvm.masked.load.v8f64.p0(ptr align 1 [[PTR]], <8 x i1> [[TMP19]], <8 x double> zeroinitializer)
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i64> [[_MSMASKEDLD1]], [[_MSMASKEDLD]]
 ; CHECK-NEXT:    [[RES4:%.*]] = fadd <8 x double> [[TMP26]], [[TMP17]]
@@ -1335,45 +1363,48 @@ define <16 x i32> @test_mask_load_unaligned_d(ptr %ptr, ptr %ptr2, <16 x i32> %d
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[TMP6:%.*]] = load <16 x i32>, ptr [[PTR:%.*]], align 1
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 1
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast i16 [[TMP2]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP13]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 1 [[TMP14]], <16 x i1> [[TMP11]], <16 x i32> [[_MSLD]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP15:%.*]] = bitcast <16 x i1> [[TMP10]] to i16
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i16 [[TMP15]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP16:%.*]], label [[TMP17:%.*]], !prof [[PROF1]]
-; CHECK:       16:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP26:%.*]], label [[TMP30:%.*]], !prof [[PROF1]]
+; CHECK:       18:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       17:
+; CHECK:       19:
 ; CHECK-NEXT:    [[TMP18:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 1 [[PTR2]], <16 x i1> [[TMP11]], <16 x i32> [[TMP6]])
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast i16 [[TMP2]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP20:%.*]] = bitcast i16 [[MASK]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD1:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 1 [[TMP23]], <16 x i1> [[TMP20]], <16 x i32> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP24:%.*]] = bitcast <16 x i1> [[TMP19]] to i16
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i16 [[TMP24]], 0
 ; CHECK-NEXT:    [[_MSOR6:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
-; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP25:%.*]], label [[TMP26:%.*]], !prof [[PROF1]]
-; CHECK:       25:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP28:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
+; CHECK:       28:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       26:
+; CHECK:       29:
 ; CHECK-NEXT:    [[TMP27:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 1 [[PTR]], <16 x i1> [[TMP20]], <16 x i32> zeroinitializer)
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[_MSMASKEDLD1]], [[_MSMASKEDLD]]
 ; CHECK-NEXT:    [[RES4:%.*]] = add <16 x i32> [[TMP27]], [[TMP18]]
@@ -1399,45 +1430,48 @@ define <8 x i64> @test_mask_load_unaligned_q(ptr %ptr, ptr %ptr2, <8 x i64> %dat
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[TMP6:%.*]] = load <8 x i64>, ptr [[PTR:%.*]], align 1
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP9]], align 1
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast i8 [[TMP2]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP13]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 1 [[TMP14]], <8 x i1> [[TMP11]], <8 x i64> [[_MSLD]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP15:%.*]] = bitcast <8 x i1> [[TMP10]] to i8
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i8 [[TMP15]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP16:%.*]], label [[TMP17:%.*]], !prof [[PROF1]]
-; CHECK:       16:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP26:%.*]], label [[TMP30:%.*]], !prof [[PROF1]]
+; CHECK:       18:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       17:
+; CHECK:       19:
 ; CHECK-NEXT:    [[TMP18:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 1 [[PTR2]], <8 x i1> [[TMP11]], <8 x i64> [[TMP6]])
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast i8 [[TMP2]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP20:%.*]] = bitcast i8 [[MASK]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD1:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 1 [[TMP23]], <8 x i1> [[TMP20]], <8 x i64> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP24:%.*]] = bitcast <8 x i1> [[TMP19]] to i8
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i8 [[TMP24]], 0
 ; CHECK-NEXT:    [[_MSOR6:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
-; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP25:%.*]], label [[TMP26:%.*]], !prof [[PROF1]]
-; CHECK:       25:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP28:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
+; CHECK:       28:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       26:
+; CHECK:       29:
 ; CHECK-NEXT:    [[TMP27:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 1 [[PTR]], <8 x i1> [[TMP20]], <8 x i64> zeroinitializer)
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i64> [[_MSMASKEDLD1]], [[_MSMASKEDLD]]
 ; CHECK-NEXT:    [[RES4:%.*]] = add <8 x i64> [[TMP27]], [[TMP18]]
@@ -1462,45 +1496,48 @@ define <16 x i32> @test_mask_load_aligned_d(<16 x i32> %data, ptr %ptr, i16 %mas
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = load <16 x i32>, ptr [[PTR:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast i16 [[TMP2]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 64 [[TMP13]], <16 x i1> [[TMP10]], <16 x i32> [[_MSLD]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP14:%.*]] = bitcast <16 x i1> [[TMP9]] to i16
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i16 [[TMP14]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
-; CHECK:       15:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP25:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
+; CHECK:       17:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       16:
+; CHECK:       18:
 ; CHECK-NEXT:    [[TMP17:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 64 [[PTR]], <16 x i1> [[TMP10]], <16 x i32> [[TMP5]])
 ; CHECK-NEXT:    [[TMP18:%.*]] = bitcast i16 [[TMP2]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast i16 [[MASK]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP20:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP21:%.*]] = xor i64 [[TMP20]], 87960930222080
-; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP21]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP21]], 2097152
+; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD1:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 64 [[TMP22]], <16 x i1> [[TMP19]], <16 x i32> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP23:%.*]] = bitcast <16 x i1> [[TMP18]] to i16
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i16 [[TMP23]], 0
 ; CHECK-NEXT:    [[_MSOR6:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
-; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP24:%.*]], label [[TMP25:%.*]], !prof [[PROF1]]
-; CHECK:       24:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP27:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
+; CHECK:       27:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       25:
+; CHECK:       28:
 ; CHECK-NEXT:    [[TMP26:%.*]] = call <16 x i32> @llvm.masked.load.v16i32.p0(ptr align 64 [[PTR]], <16 x i1> [[TMP19]], <16 x i32> zeroinitializer)
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[_MSMASKEDLD1]], [[_MSMASKEDLD]]
 ; CHECK-NEXT:    [[RES4:%.*]] = add <16 x i32> [[TMP26]], [[TMP17]]
@@ -1525,45 +1562,48 @@ define <8 x i64> @test_mask_load_aligned_q(<8 x i64> %data, ptr %ptr, i8 %mask)
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = load <8 x i64>, ptr [[PTR:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast i8 [[TMP2]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 64 [[TMP13]], <8 x i1> [[TMP10]], <8 x i64> [[_MSLD]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP14:%.*]] = bitcast <8 x i1> [[TMP9]] to i8
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i8 [[TMP14]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
-; CHECK:       15:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP25:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
+; CHECK:       17:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       16:
+; CHECK:       18:
 ; CHECK-NEXT:    [[TMP17:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 64 [[PTR]], <8 x i1> [[TMP10]], <8 x i64> [[TMP5]])
 ; CHECK-NEXT:    [[TMP18:%.*]] = bitcast i8 [[TMP2]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast i8 [[MASK]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP20:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP21:%.*]] = xor i64 [[TMP20]], 87960930222080
-; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP21]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP21]], 2097152
+; CHECK-NEXT:    [[TMP22:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD1:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 64 [[TMP22]], <8 x i1> [[TMP19]], <8 x i64> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP23:%.*]] = bitcast <8 x i1> [[TMP18]] to i8
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i8 [[TMP23]], 0
 ; CHECK-NEXT:    [[_MSOR6:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
-; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP24:%.*]], label [[TMP25:%.*]], !prof [[PROF1]]
-; CHECK:       24:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP27:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
+; CHECK:       27:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       25:
+; CHECK:       28:
 ; CHECK-NEXT:    [[TMP26:%.*]] = call <8 x i64> @llvm.masked.load.v8i64.p0(ptr align 64 [[PTR]], <8 x i1> [[TMP19]], <8 x i64> zeroinitializer)
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i64> [[_MSMASKEDLD1]], [[_MSMASKEDLD]]
 ; CHECK-NEXT:    [[RES4:%.*]] = add <8 x i64> [[TMP26]], [[TMP17]]
@@ -2770,12 +2810,13 @@ define void at test_storent_q_512(<8 x i64> %data, ptr %ptr)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    store <8 x i64> [[TMP2]], ptr [[TMP7]], align 64
 ; CHECK-NEXT:    store <8 x i64> [[DATA:%.*]], ptr [[PTR]], align 64, !nontemporal [[META2:![0-9]+]]
 ; CHECK-NEXT:    ret void
@@ -2795,12 +2836,13 @@ define void @test_storent_pd_512(<8 x double> %data, ptr %ptr)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    store <8 x i64> [[TMP2]], ptr [[TMP7]], align 64
 ; CHECK-NEXT:    store <8 x double> [[DATA:%.*]], ptr [[PTR]], align 64, !nontemporal [[META2]]
 ; CHECK-NEXT:    ret void
@@ -2820,12 +2862,13 @@ define void @test_storent_ps_512(<16 x float> %data, ptr %ptr)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    store <16 x i32> [[TMP2]], ptr [[TMP7]], align 64
 ; CHECK-NEXT:    store <16 x float> [[DATA:%.*]], ptr [[PTR]], align 64, !nontemporal [[META2]]
 ; CHECK-NEXT:    ret void
@@ -3192,13 +3235,14 @@ define <16 x i32> @test_mask_add_epi32_rm(<16 x i32> %a, ptr %ptr_b)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = add <16 x i32> [[A:%.*]], [[B]]
@@ -3221,13 +3265,14 @@ define <16 x i32> @test_mask_add_epi32_rmk(<16 x i32> %a, ptr %ptr_b, <16 x i32>
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = add <16 x i32> [[A:%.*]], [[B]]
@@ -3257,13 +3302,14 @@ define <16 x i32> @test_mask_add_epi32_rmkz(<16 x i32> %a, ptr %ptr_b, i16 %mask
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = add <16 x i32> [[A:%.*]], [[B]]
@@ -3292,15 +3338,16 @@ define <16 x i32> @test_mask_add_epi32_rmb(<16 x i32> %a, ptr %ptr_b, <16 x i32>
 ; CHECK-NEXT:    [[TMP2:%.*]] = load <16 x i32>, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> [[TMP3]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> [[EXTRA_PARAM:%.*]], i32 [[Q]], i32 0
@@ -3331,13 +3378,14 @@ define <16 x i32> @test_mask_add_epi32_rmbk(<16 x i32> %a, ptr %ptr_b, <16 x i32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP18:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> [[TMP5]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> [[EXTRA_PARAM:%.*]], i32 [[Q]], i32 0
@@ -3375,13 +3423,14 @@ define <16 x i32> @test_mask_add_epi32_rmbkz(<16 x i32> %a, ptr %ptr_b, i16 %mas
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP17:%.*]], label [[TMP18:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP19:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> [[TMP4]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> [[EXTRA_PARAM:%.*]], i32 [[Q]], i32 0
@@ -3481,13 +3530,14 @@ define <16 x i32> @test_mask_sub_epi32_rm(<16 x i32> %a, ptr %ptr_b)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = sub <16 x i32> [[A:%.*]], [[B]]
@@ -3510,13 +3560,14 @@ define <16 x i32> @test_mask_sub_epi32_rmk(<16 x i32> %a, ptr %ptr_b, <16 x i32>
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = sub <16 x i32> [[A:%.*]], [[B]]
@@ -3546,13 +3597,14 @@ define <16 x i32> @test_mask_sub_epi32_rmkz(<16 x i32> %a, ptr %ptr_b, i16 %mask
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = sub <16 x i32> [[A:%.*]], [[B]]
@@ -3581,15 +3633,16 @@ define <16 x i32> @test_mask_sub_epi32_rmb(<16 x i32> %a, ptr %ptr_b, <16 x i32>
 ; CHECK-NEXT:    [[TMP2:%.*]] = load <16 x i32>, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> [[TMP3]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> [[EXTRA_PARAM:%.*]], i32 [[Q]], i32 0
@@ -3620,13 +3673,14 @@ define <16 x i32> @test_mask_sub_epi32_rmbk(<16 x i32> %a, ptr %ptr_b, <16 x i32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP18:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> [[TMP5]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> [[EXTRA_PARAM:%.*]], i32 [[Q]], i32 0
@@ -3663,13 +3717,14 @@ define <16 x i32> @test_mask_sub_epi32_rmbkz(<16 x i32> %a, ptr %ptr_b, i16 %mas
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP17:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> [[TMP4]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> [[EXTRA_PARAM:%.*]], i32 [[Q]], i32 0
@@ -3769,13 +3824,14 @@ define <8 x i64> @test_mask_add_epi64_rm(<8 x i64> %a, ptr %ptr_b)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <8 x i64>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i64> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = add <8 x i64> [[A:%.*]], [[B]]
@@ -3798,13 +3854,14 @@ define <8 x i64> @test_mask_add_epi64_rmk(<8 x i64> %a, ptr %ptr_b, <8 x i64> %p
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <8 x i64>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i64> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = add <8 x i64> [[A:%.*]], [[B]]
@@ -3834,13 +3891,14 @@ define <8 x i64> @test_mask_add_epi64_rmkz(<8 x i64> %a, ptr %ptr_b, i8 %mask)
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <8 x i64>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i64> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = add <8 x i64> [[A:%.*]], [[B]]
@@ -3869,15 +3927,16 @@ define <8 x i64> @test_mask_add_epi64_rmb(<8 x i64> %a, ptr %ptr_b, <8 x i64> %e
 ; CHECK-NEXT:    [[TMP2:%.*]] = load <8 x i64>, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP7]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP3]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -3908,13 +3967,14 @@ define <8 x i64> @test_mask_add_epi64_rmbk(<8 x i64> %a, ptr %ptr_b, <8 x i64> %
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP18:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP5]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -3952,13 +4012,14 @@ define <8 x i64> @test_mask_add_epi64_rmbkz(<8 x i64> %a, ptr %ptr_b, i8 %mask,
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP17:%.*]], label [[TMP18:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP19:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -4058,13 +4119,14 @@ define <8 x i64> @test_mask_sub_epi64_rm(<8 x i64> %a, ptr %ptr_b)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <8 x i64>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i64> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = sub <8 x i64> [[A:%.*]], [[B]]
@@ -4087,13 +4149,14 @@ define <8 x i64> @test_mask_sub_epi64_rmk(<8 x i64> %a, ptr %ptr_b, <8 x i64> %p
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <8 x i64>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i64> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = sub <8 x i64> [[A:%.*]], [[B]]
@@ -4123,13 +4186,14 @@ define <8 x i64> @test_mask_sub_epi64_rmkz(<8 x i64> %a, ptr %ptr_b, i8 %mask)
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <8 x i64>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i64> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = sub <8 x i64> [[A:%.*]], [[B]]
@@ -4158,15 +4222,16 @@ define <8 x i64> @test_mask_sub_epi64_rmb(<8 x i64> %a, ptr %ptr_b, <8 x i64> %e
 ; CHECK-NEXT:    [[TMP2:%.*]] = load <8 x i64>, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP7]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP3]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -4197,13 +4262,14 @@ define <8 x i64> @test_mask_sub_epi64_rmbk(<8 x i64> %a, ptr %ptr_b, <8 x i64> %
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP18:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP5]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -4241,13 +4307,14 @@ define <8 x i64> @test_mask_sub_epi64_rmbkz(<8 x i64> %a, ptr %ptr_b, i8 %mask,
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP17:%.*]], label [[TMP18:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP19:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -4347,13 +4414,14 @@ define <16 x i32> @test_mask_mullo_epi32_rm_512(<16 x i32> %a, ptr %ptr_b)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = mul <16 x i32> [[A:%.*]], [[B]]
@@ -4376,13 +4444,14 @@ define <16 x i32> @test_mask_mullo_epi32_rmk_512(<16 x i32> %a, ptr %ptr_b, <16
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = mul <16 x i32> [[A:%.*]], [[B]]
@@ -4412,13 +4481,14 @@ define <16 x i32> @test_mask_mullo_epi32_rmkz_512(<16 x i32> %a, ptr %ptr_b, i16
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i32> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = mul <16 x i32> [[A:%.*]], [[B]]
@@ -4447,15 +4517,16 @@ define <16 x i32> @test_mask_mullo_epi32_rmb_512(<16 x i32> %a, ptr %ptr_b, <16
 ; CHECK-NEXT:    [[TMP2:%.*]] = load <16 x i32>, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> [[TMP3]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> [[EXTRA_PARAM:%.*]], i32 [[Q]], i32 0
@@ -4486,13 +4557,14 @@ define <16 x i32> @test_mask_mullo_epi32_rmbk_512(<16 x i32> %a, ptr %ptr_b, <16
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP18:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> [[TMP5]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> [[EXTRA_PARAM:%.*]], i32 [[Q]], i32 0
@@ -4530,13 +4602,14 @@ define <16 x i32> @test_mask_mullo_epi32_rmbkz_512(<16 x i32> %a, ptr %ptr_b, i1
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP17:%.*]], label [[TMP18:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP19:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> [[TMP4]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> [[EXTRA_PARAM:%.*]], i32 [[Q]], i32 0
@@ -7853,13 +7926,14 @@ define <8 x i64> @test_x86_avx512_psrlv_q_memop(<8 x i64> %a0, ptr %ptr)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <8 x i64>, ptr [[PTR:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = icmp ne <8 x i64> [[_MSLD]], zeroinitializer
 ; CHECK-NEXT:    [[TMP9:%.*]] = sext <8 x i1> [[TMP8]] to <8 x i64>
@@ -7964,7 +8038,7 @@ define <16 x float> @test_x86_vcvtph2ps_512(<16 x i16> %a0)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i256 [[TMP2]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[RES:%.*]] = call <16 x float> @llvm.x86.avx512.mask.vcvtph2ps.512(<16 x i16> [[A0:%.*]], <16 x float> zeroinitializer, i16 -1, i32 4)
@@ -7983,7 +8057,7 @@ define <16 x float> @test_x86_vcvtph2ps_512_sae(<16 x i16> %a0)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i256 [[TMP2]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[RES:%.*]] = call <16 x float> @llvm.x86.avx512.mask.vcvtph2ps.512(<16 x i16> [[A0:%.*]], <16 x float> zeroinitializer, i16 -1, i32 8)
@@ -8010,7 +8084,7 @@ define <16 x float> @test_x86_vcvtph2ps_512_rrk(<16 x i16> %a0,<16 x float> %a1,
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[RES:%.*]] = call <16 x float> @llvm.x86.avx512.mask.vcvtph2ps.512(<16 x i16> [[A0:%.*]], <16 x float> [[A1:%.*]], i16 [[MASK:%.*]], i32 4)
@@ -8033,7 +8107,7 @@ define <16 x float> @test_x86_vcvtph2ps_512_sae_rrkz(<16 x i16> %a0, i16 %mask)
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[RES:%.*]] = call <16 x float> @llvm.x86.avx512.mask.vcvtph2ps.512(<16 x i16> [[A0:%.*]], <16 x float> zeroinitializer, i16 [[MASK:%.*]], i32 8)
@@ -8056,7 +8130,7 @@ define <16 x float> @test_x86_vcvtph2ps_512_rrkz(<16 x i16> %a0, i16 %mask)  #0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[RES:%.*]] = call <16 x float> @llvm.x86.avx512.mask.vcvtph2ps.512(<16 x i16> [[A0:%.*]], <16 x float> zeroinitializer, i16 [[MASK:%.*]], i32 4)
@@ -8151,7 +8225,7 @@ define <8 x double>@test_int_x86_avx512_vpermilvar_pd_512(<8 x double> %x0, <8 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i24 [[TMP6]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <8 x double> @llvm.x86.avx512.vpermilvar.pd.512(<8 x double> [[X3:%.*]], <8 x i64> [[X2]])
@@ -8178,7 +8252,7 @@ define <8 x double>@test_int_x86_avx512_mask_vpermilvar_pd_512(<8 x double> %x0,
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i24 [[TMP8]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP19:%.*]], label [[TMP20:%.*]], !prof [[PROF1]]
 ; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       11:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <8 x double> @llvm.x86.avx512.vpermilvar.pd.512(<8 x double> [[X5:%.*]], <8 x i64> [[X4]])
@@ -8214,7 +8288,7 @@ define <8 x double>@test_int_x86_avx512_maskz_vpermilvar_pd_512(<8 x double> %x0
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i24 [[TMP7]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP17:%.*]], label [[TMP18:%.*]], !prof [[PROF1]]
 ; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       10:
 ; CHECK-NEXT:    [[TMP6:%.*]] = call <8 x double> @llvm.x86.avx512.vpermilvar.pd.512(<8 x double> [[X4:%.*]], <8 x i64> [[X2]])
@@ -8249,7 +8323,7 @@ define <16 x float>@test_int_x86_avx512_vpermilvar_ps_512(<16 x float> %x0, <16
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP6]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <16 x float> @llvm.x86.avx512.vpermilvar.ps.512(<16 x float> [[X3:%.*]], <16 x i32> [[X2]])
@@ -8276,7 +8350,7 @@ define <16 x float>@test_int_x86_avx512_mask_vpermilvar_ps_512(<16 x float> %x0,
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP8]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP19:%.*]], label [[TMP20:%.*]], !prof [[PROF1]]
 ; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       11:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.vpermilvar.ps.512(<16 x float> [[X5:%.*]], <16 x i32> [[X4]])
@@ -8313,7 +8387,7 @@ define <16 x float>@test_int_x86_avx512_maskz_vpermilvar_ps_512(<16 x float> %x0
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP7]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP17:%.*]], label [[TMP18:%.*]], !prof [[PROF1]]
 ; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       10:
 ; CHECK-NEXT:    [[TMP6:%.*]] = call <16 x float> @llvm.x86.avx512.vpermilvar.ps.512(<16 x float> [[X4:%.*]], <16 x i32> [[X2]])
@@ -8507,13 +8581,14 @@ define <8 x i64> @test_mask_mul_epi32_rm(<16 x i32> %a, ptr %ptr_b)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -8552,13 +8627,14 @@ define <8 x i64> @test_mask_mul_epi32_rmk(<16 x i32> %a, ptr %ptr_b, <8 x i64> %
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -8604,13 +8680,14 @@ define <8 x i64> @test_mask_mul_epi32_rmkz(<16 x i32> %a, ptr %ptr_b, i8 %mask)
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP33:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP33]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -8657,13 +8734,14 @@ define <8 x i64> @test_mask_mul_epi32_rmb(<16 x i32> %a, ptr %ptr_b, <8 x i64> %
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP26:%.*]], label [[TMP27:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP28:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP28]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP7]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP3]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -8713,13 +8791,14 @@ define <8 x i64> @test_mask_mul_epi32_rmbk(<16 x i32> %a, ptr %ptr_b, <8 x i64>
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP35:%.*]], label [[TMP36:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP37:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP37]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP5]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -8776,13 +8855,14 @@ define <8 x i64> @test_mask_mul_epi32_rmbk_buildvector(<16 x i32> %a, ptr %ptr_b
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP35:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP36:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP36]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP5]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -8857,13 +8937,14 @@ define <8 x i64> @test_mask_mul_epi32_rmbkz(<16 x i32> %a, ptr %ptr_b, i8 %mask,
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP34:%.*]], label [[TMP35:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP36:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP36]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -8919,13 +9000,14 @@ define <8 x i64> @test_mask_mul_epi32_rmbkz_buildvector(<16 x i32> %a, ptr %ptr_
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP34:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP35:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP35]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -9110,13 +9192,14 @@ define <8 x i64> @test_mask_mul_epu32_rm(<16 x i32> %a, ptr %ptr_b)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -9155,13 +9238,14 @@ define <8 x i64> @test_mask_mul_epu32_rmk(<16 x i32> %a, ptr %ptr_b, <8 x i64> %
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -9207,13 +9291,14 @@ define <8 x i64> @test_mask_mul_epu32_rmkz(<16 x i32> %a, ptr %ptr_b, i8 %mask)
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP33:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP33]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -9260,13 +9345,14 @@ define <8 x i64> @test_mask_mul_epu32_rmb(<16 x i32> %a, ptr %ptr_b, <8 x i64> %
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP26:%.*]], label [[TMP27:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP28:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP28]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP7]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP3]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -9316,13 +9402,14 @@ define <8 x i64> @test_mask_mul_epu32_rmbk(<16 x i32> %a, ptr %ptr_b, <8 x i64>
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP35:%.*]], label [[TMP36:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP37:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP37]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP5]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -9379,13 +9466,14 @@ define <8 x i64> @test_mask_mul_epu32_rmbkz(<16 x i32> %a, ptr %ptr_b, i8 %mask,
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP34:%.*]], label [[TMP35:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP36:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP36]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -9830,13 +9918,14 @@ define <8 x i64> @test_x86_avx512_movntdqa(ptr %a0)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP2:%.*]], label [[TMP3:%.*]], !prof [[PROF1]]
 ; CHECK:       2:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       3:
 ; CHECK-NEXT:    [[TMP4:%.*]] = load <8 x i64>, ptr [[A0:%.*]], align 64, !nontemporal [[META2]]
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[A0]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    store <8 x i64> [[_MSLD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x i64> [[TMP4]]
@@ -11122,13 +11211,14 @@ define <16 x float>@test_int_x86_avx512_mask_broadcastf32x4_512_load(ptr %x0ptr,
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[X0:%.*]] = load <4 x float>, ptr [[X0PTR:%.*]], align 16
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[X0PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP19:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP8]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = shufflevector <4 x i32> [[_MSLD]], <4 x i32> [[_MSLD]], <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
 ; CHECK-NEXT:    [[TMP9:%.*]] = shufflevector <4 x float> [[X0]], <4 x float> [[X0]], <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
@@ -11225,13 +11315,14 @@ define <8 x double>@test_int_x86_avx512_mask_broadcastf64x4_512_load(ptr %x0ptr,
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[X0:%.*]] = load <4 x double>, ptr [[X0PTR:%.*]], align 32
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[X0PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP19:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i64>, ptr [[TMP8]], align 32
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = shufflevector <4 x i64> [[_MSLD]], <4 x i64> [[_MSLD]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
 ; CHECK-NEXT:    [[TMP9:%.*]] = shufflevector <4 x double> [[X0]], <4 x double> [[X0]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
@@ -11312,13 +11403,14 @@ define <16 x i32>@test_int_x86_avx512_mask_broadcasti32x4_512_load(ptr %x0ptr, <
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[X0:%.*]] = load <4 x i32>, ptr [[X0PTR:%.*]], align 16
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[X0PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP8]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = shufflevector <4 x i32> [[_MSLD]], <4 x i32> [[_MSLD]], <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
 ; CHECK-NEXT:    [[TMP9:%.*]] = shufflevector <4 x i32> [[X0]], <4 x i32> [[X0]], <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
@@ -11410,13 +11502,14 @@ define <8 x i64>@test_int_x86_avx512_mask_broadcasti64x4_512_load(ptr %x0ptr, <8
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[X0:%.*]] = load <4 x i64>, ptr [[X0PTR:%.*]], align 32
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[X0PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i64>, ptr [[TMP8]], align 32
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = shufflevector <4 x i64> [[_MSLD]], <4 x i64> [[_MSLD]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
 ; CHECK-NEXT:    [[TMP9:%.*]] = shufflevector <4 x i64> [[X0]], <4 x i64> [[X0]], <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3>
@@ -12127,7 +12220,7 @@ define i16 @test_cmpps(<16 x float> %a, <16 x float> %b)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[RES:%.*]] = call <16 x i1> @llvm.x86.avx512.mask.cmp.ps.512(<16 x float> [[A:%.*]], <16 x float> [[B:%.*]], i32 2, <16 x i1> splat (i1 true), i32 8)
@@ -12152,7 +12245,7 @@ define i8 @test_cmppd(<8 x double> %a, <8 x double> %b)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[RES:%.*]] = call <8 x i1> @llvm.x86.avx512.mask.cmp.pd.512(<8 x double> [[A:%.*]], <8 x double> [[B:%.*]], i32 4, <8 x i1> splat (i1 true), i32 4)
@@ -12289,13 +12382,14 @@ define <8 x i64> @test_mul_epi32_rm(<16 x i32> %a, ptr %ptr_b)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -12334,13 +12428,14 @@ define <8 x i64> @test_mul_epi32_rmk(<16 x i32> %a, ptr %ptr_b, <8 x i64> %passT
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP32:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP32]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -12388,13 +12483,14 @@ define <8 x i64> @test_mul_epi32_rmkz(<16 x i32> %a, ptr %ptr_b, i8 %mask)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP31:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP31]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -12442,13 +12538,14 @@ define <8 x i64> @test_mul_epi32_rmb(<16 x i32> %a, ptr %ptr_b, <8 x i64> %extra
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP26:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP27:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP27]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP7]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP3]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -12497,13 +12594,14 @@ define <8 x i64> @test_mul_epi32_rmbk(<16 x i32> %a, ptr %ptr_b, <8 x i64> %pass
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP33:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP5]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -12561,13 +12659,14 @@ define <8 x i64> @test_mul_epi32_rmbkz(<16 x i32> %a, ptr %ptr_b, i8 %mask, <8 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP32:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP33:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP33]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -12740,13 +12839,14 @@ define <8 x i64> @test_mul_epu32_rm(<16 x i32> %a, ptr %ptr_b)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -12785,13 +12885,14 @@ define <8 x i64> @test_mul_epu32_rmk(<16 x i32> %a, ptr %ptr_b, <8 x i64> %passT
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP32:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP32]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -12839,13 +12940,14 @@ define <8 x i64> @test_mul_epu32_rmkz(<16 x i32> %a, ptr %ptr_b, i8 %mask)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP31:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP31]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i32> [[TMP2]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i32> [[A:%.*]] to <8 x i64>
@@ -12893,13 +12995,14 @@ define <8 x i64> @test_mul_epu32_rmb(<16 x i32> %a, ptr %ptr_b, <8 x i64> %extra
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP26:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP27:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP27]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP7]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP3]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -12948,13 +13051,14 @@ define <8 x i64> @test_mul_epu32_rmbk(<16 x i32> %a, ptr %ptr_b, <8 x i64> %pass
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP33:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP5]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -13012,13 +13116,14 @@ define <8 x i64> @test_mul_epu32_rmbkz(<16 x i32> %a, ptr %ptr_b, i8 %mask, <8 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP32:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i64, ptr [[PTR_B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP33:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP33]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <8 x i64> [[EXTRA_PARAM:%.*]], i64 [[Q]], i32 0
@@ -13094,13 +13199,14 @@ define <16 x float> @test_x86_vbroadcast_ss_512(ptr %a0)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP2:%.*]], label [[TMP3:%.*]], !prof [[PROF1]]
 ; CHECK:       2:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       3:
 ; CHECK-NEXT:    [[TMP4:%.*]] = load float, ptr [[A0:%.*]], align 4
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[A0]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[TMP8:%.*]] = insertelement <16 x float> poison, float [[TMP4]], i32 0
@@ -13150,13 +13256,14 @@ define <8 x double> @test_x86_vbroadcast_sd_512(ptr %a0)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP2:%.*]], label [[TMP3:%.*]], !prof [[PROF1]]
 ; CHECK:       2:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       3:
 ; CHECK-NEXT:    [[TMP4:%.*]] = load double, ptr [[A0:%.*]], align 8
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[A0]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP7]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> splat (i64 -1), i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[TMP8:%.*]] = insertelement <8 x double> poison, double [[TMP4]], i32 0
@@ -13196,7 +13303,7 @@ define <8 x double>@test_int_x86_avx512_permvar_df_512(<8 x double> %x0, <8 x i6
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <8 x double> @llvm.x86.avx512.permvar.df.512(<8 x double> [[X0:%.*]], <8 x i64> [[X1:%.*]])
@@ -13222,7 +13329,7 @@ define <8 x double>@test_int_x86_avx512_mask_permvar_df_512(<8 x double> %x0, <8
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <8 x double> @llvm.x86.avx512.permvar.df.512(<8 x double> [[X0:%.*]], <8 x i64> [[X1:%.*]])
@@ -13257,7 +13364,7 @@ define <8 x double>@test_int_x86_avx512_maskz_permvar_df_512(<8 x double> %x0, <
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <8 x double> @llvm.x86.avx512.permvar.df.512(<8 x double> [[X0:%.*]], <8 x i64> [[X1:%.*]])
@@ -13356,7 +13463,7 @@ define <16 x float>@test_int_x86_avx512_permvar_sf_512(<16 x float> %x0, <16 x i
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.permvar.sf.512(<16 x float> [[X0:%.*]], <16 x i32> [[X1:%.*]])
@@ -13382,7 +13489,7 @@ define <16 x float>@test_int_x86_avx512_mask_permvar_sf_512(<16 x float> %x0, <1
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.permvar.sf.512(<16 x float> [[X0:%.*]], <16 x i32> [[X1:%.*]])
@@ -13417,7 +13524,7 @@ define <16 x float>@test_int_x86_avx512_maskz_permvar_sf_512(<16 x float> %x0, <
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.permvar.sf.512(<16 x float> [[X0:%.*]], <16 x i32> [[X1:%.*]])
@@ -13520,7 +13627,7 @@ define <16 x i32>@test_int_x86_avx512_pternlog_d_512(<16 x i32> %x0, <16 x i32>
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x i32> @llvm.x86.avx512.pternlog.d.512(<16 x i32> [[X0:%.*]], <16 x i32> [[X1:%.*]], <16 x i32> [[X2:%.*]], i32 33)
@@ -13549,7 +13656,7 @@ define <16 x i32>@test_int_x86_avx512_mask_pternlog_d_512(<16 x i32> %x0, <16 x
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <16 x i32> @llvm.x86.avx512.pternlog.d.512(<16 x i32> [[X0:%.*]], <16 x i32> [[X1:%.*]], <16 x i32> [[X2:%.*]], i32 33)
@@ -13588,7 +13695,7 @@ define <16 x i32>@test_int_x86_avx512_maskz_pternlog_d_512(<16 x i32> %x0, <16 x
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <16 x i32> @llvm.x86.avx512.pternlog.d.512(<16 x i32> [[X0:%.*]], <16 x i32> [[X1:%.*]], <16 x i32> [[X2:%.*]], i32 33)
@@ -13625,7 +13732,7 @@ define <8 x i64>@test_int_x86_avx512_pternlog_q_512(<8 x i64> %x0, <8 x i64> %x1
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <8 x i64> @llvm.x86.avx512.pternlog.q.512(<8 x i64> [[X0:%.*]], <8 x i64> [[X1:%.*]], <8 x i64> [[X2:%.*]], i32 33)
@@ -13654,7 +13761,7 @@ define <8 x i64>@test_int_x86_avx512_mask_pternlog_q_512(<8 x i64> %x0, <8 x i64
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <8 x i64> @llvm.x86.avx512.pternlog.q.512(<8 x i64> [[X0:%.*]], <8 x i64> [[X1:%.*]], <8 x i64> [[X2:%.*]], i32 33)
@@ -13693,7 +13800,7 @@ define <8 x i64>@test_int_x86_avx512_maskz_pternlog_q_512(<8 x i64> %x0, <8 x i6
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <8 x i64> @llvm.x86.avx512.pternlog.q.512(<8 x i64> [[X0:%.*]], <8 x i64> [[X1:%.*]], <8 x i64> [[X2:%.*]], i32 33)
@@ -13724,23 +13831,24 @@ define <16 x i32>@test_int_x86_avx512_vpermi2var_d_512(<16 x i32> %x0, <16 x i32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP14:%.*]] = trunc <16 x i32> [[X1]] to <16 x i4>
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[TMP2]], <16 x i32> [[X3:%.*]], <16 x i32> [[TMP4]])
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast <16 x i4> [[TMP14]] to i64
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i64 [[TMP11]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP13:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP16:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[X0:%.*]], <16 x i32> [[X3]], <16 x i32> [[X4:%.*]])
 ; CHECK-NEXT:    store <16 x i32> [[_MSPROP1]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x i32> [[TMP10]]
@@ -13761,23 +13869,24 @@ define <16 x i32>@test_int_x86_avx512_mask_vpermi2var_d_512(<16 x i32> %x0, <16
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP18:%.*]] = trunc <16 x i32> [[TMP3]] to <16 x i4>
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[TMP2]], <16 x i32> [[X1:%.*]], <16 x i32> [[_MSLD]])
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast <16 x i4> [[TMP18]] to i64
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i64 [[TMP19]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP20:%.*]], label [[TMP21:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP21:%.*]], label [[TMP22:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[X0:%.*]], <16 x i32> [[X1]], <16 x i32> [[X2]])
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast i16 [[TMP4]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP12:%.*]] = bitcast i16 [[X3:%.*]] to <16 x i1>
@@ -13812,7 +13921,7 @@ define <8 x double>@test_int_x86_avx512_vpermi2var_pd_512(<8 x double> %x0, <8 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i24 [[TMP12]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
 ; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       11:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <8 x double> @llvm.x86.avx512.vpermi2var.pd.512(<8 x double> [[X0:%.*]], <8 x i64> [[X1]], <8 x double> [[X2:%.*]])
@@ -13841,7 +13950,7 @@ define <8 x double>@test_int_x86_avx512_mask_vpermi2var_pd_512(<8 x double> %x0,
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i24 [[TMP21]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP22:%.*]], label [[TMP23:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       12:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <8 x double> @llvm.x86.avx512.vpermi2var.pd.512(<8 x double> [[X0:%.*]], <8 x i64> [[X1]], <8 x double> [[X2:%.*]])
@@ -13880,7 +13989,7 @@ define <16 x float>@test_int_x86_avx512_vpermi2var_ps_512(<16 x float> %x0, <16
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP12]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
 ; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       11:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.vpermi2var.ps.512(<16 x float> [[X0:%.*]], <16 x i32> [[X1]], <16 x float> [[X2:%.*]])
@@ -13909,7 +14018,7 @@ define <16 x float>@test_int_x86_avx512_mask_vpermi2var_ps_512(<16 x float> %x0,
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP21]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP22:%.*]], label [[TMP23:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       12:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <16 x float> @llvm.x86.avx512.vpermi2var.ps.512(<16 x float> [[X0:%.*]], <16 x i32> [[X1]], <16 x float> [[X2:%.*]])
@@ -13945,7 +14054,7 @@ define <8 x i64>@test_int_x86_avx512_vpermi2var_q_512(<8 x i64> %x0, <8 x i64> %
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i24 [[TMP5]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP4:%.*]] = call <8 x i64> @llvm.x86.avx512.vpermi2var.q.512(<8 x i64> [[X0:%.*]], <8 x i64> [[X3]], <8 x i64> [[X2:%.*]])
@@ -13970,7 +14079,7 @@ define <8 x i64>@test_int_x86_avx512_mask_vpermi2var_q_512(<8 x i64> %x0, <8 x i
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i24 [[TMP14]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <8 x i64> @llvm.x86.avx512.vpermi2var.q.512(<8 x i64> [[X0:%.*]], <8 x i64> [[X1]], <8 x i64> [[X2:%.*]])
@@ -14002,23 +14111,24 @@ define <16 x i32>@test_int_x86_avx512_maskz_vpermt2var_d_512(<16 x i32> %x0, <16
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP18:%.*]] = trunc <16 x i32> [[X0]] to <16 x i4>
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[TMP2]], <16 x i32> [[X4:%.*]], <16 x i32> [[_MSLD]])
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast <16 x i4> [[TMP18]] to i64
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i64 [[TMP19]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP20:%.*]], label [[TMP21:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP21:%.*]], label [[TMP22:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[X1:%.*]], <16 x i32> [[X4]], <16 x i32> [[X2]])
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast i16 [[TMP4]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP12:%.*]] = bitcast i16 [[X3:%.*]] to <16 x i1>
@@ -14050,13 +14160,14 @@ define <8 x double>@test_int_x86_avx512_maskz_vpermt2var_pd_512(<8 x i64> %x0, <
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[X2S:%.*]] = load double, ptr [[X2PTR:%.*]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2PTR]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP26:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP26]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP5]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[X2INS:%.*]] = insertelement <8 x double> [[EXTRA_PARAM:%.*]], double [[X2S]], i32 0
@@ -14069,11 +14180,11 @@ define <8 x double>@test_int_x86_avx512_maskz_vpermt2var_pd_512(<8 x i64> %x0, <
 ; CHECK-NEXT:    [[TMP14:%.*]] = bitcast <8 x double> [[TMP13]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP25:%.*]] = bitcast <8 x i3> [[TMP10]] to i24
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i24 [[TMP25]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP26:%.*]], label [[TMP27:%.*]], !prof [[PROF1]]
-; CHECK:       17:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP27:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
 ; CHECK:       18:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       19:
 ; CHECK-NEXT:    [[TMP15:%.*]] = call <8 x double> @llvm.x86.avx512.vpermi2var.pd.512(<8 x double> [[X1:%.*]], <8 x i64> [[X4]], <8 x double> [[X2]])
 ; CHECK-NEXT:    [[TMP16:%.*]] = bitcast i8 [[TMP4]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP17:%.*]] = bitcast i8 [[X3:%.*]] to <8 x i1>
@@ -14113,7 +14224,7 @@ define <16 x float>@test_int_x86_avx512_maskz_vpermt2var_ps_512(<16 x i32> %x0,
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP9]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP20:%.*]], label [[TMP21:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       12:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <16 x float> @llvm.x86.avx512.vpermi2var.ps.512(<16 x float> [[X1:%.*]], <16 x i32> [[X4]], <16 x float> [[X2:%.*]])
@@ -14150,7 +14261,7 @@ define <8 x i64>@test_int_x86_avx512_maskz_vpermt2var_q_512(<8 x i64> %x0, <8 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i24 [[TMP14]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <8 x i64> @llvm.x86.avx512.vpermi2var.q.512(<8 x i64> [[X1:%.*]], <8 x i64> [[X4]], <8 x i64> [[X2:%.*]])
@@ -14183,7 +14294,7 @@ define <16 x i32>@test_int_x86_avx512_vpermt2var_d_512(<16 x i32> %x0, <16 x i32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP5]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP4:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[X1:%.*]], <16 x i32> [[X3]], <16 x i32> [[X2:%.*]])
@@ -14208,7 +14319,7 @@ define <16 x i32>@test_int_x86_avx512_mask_vpermt2var_d_512(<16 x i32> %x0, <16
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP14]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[X1:%.*]], <16 x i32> [[X4]], <16 x i32> [[X2:%.*]])
@@ -14243,7 +14354,7 @@ define <16 x float> @test_vsubps_rn(<16 x float> %a0, <16 x float> %a1)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -14267,7 +14378,7 @@ define <16 x float> @test_vsubps_rd(<16 x float> %a0, <16 x float> %a1)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -14291,7 +14402,7 @@ define <16 x float> @test_vsubps_ru(<16 x float> %a0, <16 x float> %a1)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -14315,7 +14426,7 @@ define <16 x float> @test_vsubps_rz(<16 x float> %a0, <16 x float> %a1)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -14339,7 +14450,7 @@ define <16 x float> @test_vmulps_rn(<16 x float> %a0, <16 x float> %a1)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -14363,7 +14474,7 @@ define <16 x float> @test_vmulps_rd(<16 x float> %a0, <16 x float> %a1)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -14387,7 +14498,7 @@ define <16 x float> @test_vmulps_ru(<16 x float> %a0, <16 x float> %a1)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -14411,7 +14522,7 @@ define <16 x float> @test_vmulps_rz(<16 x float> %a0, <16 x float> %a1)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -14438,7 +14549,7 @@ define <16 x float> @test_vmulps_mask_rn(<16 x float> %a0, <16 x float> %a1, i16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -14473,7 +14584,7 @@ define <16 x float> @test_vmulps_mask_rd(<16 x float> %a0, <16 x float> %a1, i16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -14508,7 +14619,7 @@ define <16 x float> @test_vmulps_mask_ru(<16 x float> %a0, <16 x float> %a1, i16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -14543,7 +14654,7 @@ define <16 x float> @test_vmulps_mask_rz(<16 x float> %a0, <16 x float> %a1, i16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -14580,7 +14691,7 @@ define <16 x float> @test_vmulps_mask_passthru_rn(<16 x float> %a0, <16 x float>
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -14617,7 +14728,7 @@ define <16 x float> @test_vmulps_mask_passthru_rd(<16 x float> %a0, <16 x float>
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -14654,7 +14765,7 @@ define <16 x float> @test_vmulps_mask_passthru_ru(<16 x float> %a0, <16 x float>
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -14691,7 +14802,7 @@ define <16 x float> @test_vmulps_mask_passthru_rz(<16 x float> %a0, <16 x float>
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.mul.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -14728,7 +14839,7 @@ define <8 x double> @test_vmulpd_mask_rn(<8 x double> %a0, <8 x double> %a1, i8
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <8 x double> @llvm.x86.avx512.mul.pd.512(<8 x double> [[A0:%.*]], <8 x double> [[A1:%.*]], i32 8)
@@ -14763,7 +14874,7 @@ define <8 x double> @test_vmulpd_mask_rd(<8 x double> %a0, <8 x double> %a1, i8
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <8 x double> @llvm.x86.avx512.mul.pd.512(<8 x double> [[A0:%.*]], <8 x double> [[A1:%.*]], i32 9)
@@ -14798,7 +14909,7 @@ define <8 x double> @test_vmulpd_mask_ru(<8 x double> %a0, <8 x double> %a1, i8
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <8 x double> @llvm.x86.avx512.mul.pd.512(<8 x double> [[A0:%.*]], <8 x double> [[A1:%.*]], i32 10)
@@ -14833,7 +14944,7 @@ define <8 x double> @test_vmulpd_mask_rz(<8 x double> %a0, <8 x double> %a1, i8
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <8 x double> @llvm.x86.avx512.mul.pd.512(<8 x double> [[A0:%.*]], <8 x double> [[A1:%.*]], i32 11)
@@ -14868,7 +14979,7 @@ define <16 x float> @test_mm512_maskz_add_round_ps_rn_sae(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -14901,7 +15012,7 @@ define <16 x float> @test_mm512_maskz_add_round_ps_rd_sae(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -14934,7 +15045,7 @@ define <16 x float> @test_mm512_maskz_add_round_ps_ru_sae(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -14968,7 +15079,7 @@ define <16 x float> @test_mm512_maskz_add_round_ps_rz_sae(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -15003,7 +15114,7 @@ define <16 x float> @test_mm512_maskz_add_round_ps_current(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 4)
@@ -15038,7 +15149,7 @@ define <16 x float> @test_mm512_mask_add_round_ps_rn_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -15073,7 +15184,7 @@ define <16 x float> @test_mm512_mask_add_round_ps_rd_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -15108,7 +15219,7 @@ define <16 x float> @test_mm512_mask_add_round_ps_ru_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -15144,7 +15255,7 @@ define <16 x float> @test_mm512_mask_add_round_ps_rz_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -15181,7 +15292,7 @@ define <16 x float> @test_mm512_mask_add_round_ps_current(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 4)
@@ -15215,7 +15326,7 @@ define <16 x float> @test_mm512_add_round_ps_rn_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -15237,7 +15348,7 @@ define <16 x float> @test_mm512_add_round_ps_rd_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -15259,7 +15370,7 @@ define <16 x float> @test_mm512_add_round_ps_ru_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -15282,7 +15393,7 @@ define <16 x float> @test_mm512_add_round_ps_rz_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -15305,7 +15416,7 @@ define <16 x float> @test_mm512_add_round_ps_current(<16 x float> %a0, <16 x flo
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.add.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 4)
@@ -15332,7 +15443,7 @@ define <16 x float> @test_mm512_mask_sub_round_ps_rn_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -15367,7 +15478,7 @@ define <16 x float> @test_mm512_mask_sub_round_ps_rd_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -15402,7 +15513,7 @@ define <16 x float> @test_mm512_mask_sub_round_ps_ru_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -15438,7 +15549,7 @@ define <16 x float> @test_mm512_mask_sub_round_ps_rz_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -15475,7 +15586,7 @@ define <16 x float> @test_mm512_mask_sub_round_ps_current(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 4)
@@ -15508,7 +15619,7 @@ define <16 x float> @test_mm512_sub_round_ps_rn_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -15530,7 +15641,7 @@ define <16 x float> @test_mm512_sub_round_ps_rd_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -15552,7 +15663,7 @@ define <16 x float> @test_mm512_sub_round_ps_ru_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -15575,7 +15686,7 @@ define <16 x float> @test_mm512_sub_round_ps_rz_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -15598,7 +15709,7 @@ define <16 x float> @test_mm512_sub_round_ps_current(<16 x float> %a0, <16 x flo
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.sub.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 4)
@@ -15623,7 +15734,7 @@ define <16 x float> @test_mm512_maskz_div_round_ps_rn_sae(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -15656,7 +15767,7 @@ define <16 x float> @test_mm512_maskz_div_round_ps_rd_sae(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -15689,7 +15800,7 @@ define <16 x float> @test_mm512_maskz_div_round_ps_ru_sae(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -15723,7 +15834,7 @@ define <16 x float> @test_mm512_maskz_div_round_ps_rz_sae(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -15758,7 +15869,7 @@ define <16 x float> @test_mm512_maskz_div_round_ps_current(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 4)
@@ -15793,7 +15904,7 @@ define <16 x float> @test_mm512_mask_div_round_ps_rn_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -15828,7 +15939,7 @@ define <16 x float> @test_mm512_mask_div_round_ps_rd_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -15863,7 +15974,7 @@ define <16 x float> @test_mm512_mask_div_round_ps_ru_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -15899,7 +16010,7 @@ define <16 x float> @test_mm512_mask_div_round_ps_rz_sae(<16 x float> %a0, <16 x
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -15936,7 +16047,7 @@ define <16 x float> @test_mm512_mask_div_round_ps_current(<16 x float> %a0, <16
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 4)
@@ -15970,7 +16081,7 @@ define <16 x float> @test_mm512_div_round_ps_rn_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 8)
@@ -15992,7 +16103,7 @@ define <16 x float> @test_mm512_div_round_ps_rd_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 9)
@@ -16014,7 +16125,7 @@ define <16 x float> @test_mm512_div_round_ps_ru_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 10)
@@ -16037,7 +16148,7 @@ define <16 x float> @test_mm512_div_round_ps_rz_sae(<16 x float> %a0, <16 x floa
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 11)
@@ -16060,7 +16171,7 @@ define <16 x float> @test_mm512_div_round_ps_current(<16 x float> %a0, <16 x flo
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.div.ps.512(<16 x float> [[A0:%.*]], <16 x float> [[A1:%.*]], i32 4)
@@ -16083,17 +16194,18 @@ define void @test_mask_compress_store_pd_512(ptr %addr, <8 x double> %data, i8 %
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v8i64(<8 x i64> [[TMP3]], ptr [[TMP8]], <8 x i1> [[TMP5]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <8 x i1> [[TMP4]] to i8
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i8 [[TMP9]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v8f64(<8 x double> [[DATA:%.*]], ptr [[ADDR]], <8 x i1> [[TMP5]])
 ; CHECK-NEXT:    ret void
 ;
@@ -16111,14 +16223,15 @@ define void @test_compress_store_pd_512(ptr %addr, <8 x double> %data)  #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v8i64(<8 x i64> [[TMP2]], ptr [[TMP5]], <8 x i1> splat (i1 true))
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
-; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       8:
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v8f64(<8 x double> [[DATA:%.*]], ptr [[ADDR]], <8 x i1> splat (i1 true))
 ; CHECK-NEXT:    ret void
 ;
@@ -16137,17 +16250,18 @@ define void @test_mask_compress_store_ps_512(ptr %addr, <16 x float> %data, i16
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> [[TMP3]], ptr [[TMP8]], <16 x i1> [[TMP5]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i1> [[TMP4]] to i16
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP9]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16f32(<16 x float> [[DATA:%.*]], ptr [[ADDR]], <16 x i1> [[TMP5]])
 ; CHECK-NEXT:    ret void
 ;
@@ -16165,14 +16279,15 @@ define void @test_compress_store_ps_512(ptr %addr, <16 x float> %data)  #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> [[TMP2]], ptr [[TMP5]], <16 x i1> splat (i1 true))
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
-; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       8:
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16f32(<16 x float> [[DATA:%.*]], ptr [[ADDR]], <16 x i1> splat (i1 true))
 ; CHECK-NEXT:    ret void
 ;
@@ -16191,17 +16306,18 @@ define void @test_mask_compress_store_q_512(ptr %addr, <8 x i64> %data, i8 %mask
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v8i64(<8 x i64> [[TMP3]], ptr [[TMP8]], <8 x i1> [[TMP5]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <8 x i1> [[TMP4]] to i8
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i8 [[TMP9]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v8i64(<8 x i64> [[DATA:%.*]], ptr [[ADDR]], <8 x i1> [[TMP5]])
 ; CHECK-NEXT:    ret void
 ;
@@ -16219,14 +16335,15 @@ define void @test_compress_store_q_512(ptr %addr, <8 x i64> %data)  #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v8i64(<8 x i64> [[TMP2]], ptr [[TMP5]], <8 x i1> splat (i1 true))
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
-; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       8:
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v8i64(<8 x i64> [[DATA:%.*]], ptr [[ADDR]], <8 x i1> splat (i1 true))
 ; CHECK-NEXT:    ret void
 ;
@@ -16245,17 +16362,18 @@ define void @test_mask_compress_store_d_512(ptr %addr, <16 x i32> %data, i16 %ma
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> [[TMP3]], ptr [[TMP8]], <16 x i1> [[TMP5]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i1> [[TMP4]] to i16
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP9]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> [[DATA:%.*]], ptr [[ADDR]], <16 x i1> [[TMP5]])
 ; CHECK-NEXT:    ret void
 ;
@@ -16273,14 +16391,15 @@ define void @test_compress_store_d_512(ptr %addr, <16 x i32> %data)  #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> [[TMP2]], ptr [[TMP5]], <16 x i1> splat (i1 true))
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
-; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       8:
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> [[DATA:%.*]], ptr [[ADDR]], <16 x i1> splat (i1 true))
 ; CHECK-NEXT:    ret void
 ;
@@ -16299,17 +16418,18 @@ define <8 x double> @test_mask_expand_load_pd_512(ptr %addr, <8 x double> %data,
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <8 x i64> @llvm.masked.expandload.v8i64(ptr [[TMP8]], <8 x i1> [[TMP5]], <8 x i64> [[TMP3]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <8 x i1> [[TMP4]] to i8
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i8 [[TMP9]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    [[TMP12:%.*]] = call <8 x double> @llvm.masked.expandload.v8f64(ptr [[ADDR]], <8 x i1> [[TMP5]], <8 x double> [[DATA:%.*]])
 ; CHECK-NEXT:    store <8 x i64> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x double> [[TMP12]]
@@ -16328,17 +16448,18 @@ define <8 x double> @test_maskz_expand_load_pd_512(ptr %addr, i8 %mask)  #0 {
 ; CHECK-NEXT:    [[TMP4:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <8 x i64> @llvm.masked.expandload.v8i64(ptr [[TMP7]], <8 x i1> [[TMP4]], <8 x i64> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <8 x i1> [[TMP3]] to i8
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i8 [[TMP8]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
 ; CHECK:       10:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       11:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <8 x double> @llvm.masked.expandload.v8f64(ptr [[ADDR]], <8 x i1> [[TMP4]], <8 x double> zeroinitializer)
 ; CHECK-NEXT:    store <8 x i64> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x double> [[TMP11]]
@@ -16357,14 +16478,15 @@ define <8 x double> @test_expand_load_pd_512(ptr %addr, <8 x double> %data)  #0
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <8 x i64> @llvm.masked.expandload.v8i64(ptr [[TMP5]], <8 x i1> splat (i1 true), <8 x i64> [[TMP2]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
-; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       8:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <8 x double> @llvm.masked.expandload.v8f64(ptr [[ADDR]], <8 x i1> splat (i1 true), <8 x double> [[DATA:%.*]])
 ; CHECK-NEXT:    store <8 x i64> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x double> [[TMP8]]
@@ -16381,14 +16503,15 @@ define <8 x double> @test_zero_mask_expand_load_pd_512(ptr %addr, <8 x double> %
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <8 x i64> @llvm.masked.expandload.v8i64(ptr [[TMP5]], <8 x i1> zeroinitializer, <8 x i64> [[TMP2]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
-; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       8:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <8 x double> @llvm.masked.expandload.v8f64(ptr [[ADDR]], <8 x i1> zeroinitializer, <8 x double> [[DATA:%.*]])
 ; CHECK-NEXT:    store <8 x i64> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x double> [[TMP8]]
@@ -16408,17 +16531,18 @@ define <16 x float> @test_mask_expand_load_ps_512(ptr %addr, <16 x float> %data,
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[TMP8]], <16 x i1> [[TMP5]], <16 x i32> [[TMP3]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i1> [[TMP4]] to i16
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP9]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    [[TMP12:%.*]] = call <16 x float> @llvm.masked.expandload.v16f32(ptr [[ADDR]], <16 x i1> [[TMP5]], <16 x float> [[DATA:%.*]])
 ; CHECK-NEXT:    store <16 x i32> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x float> [[TMP12]]
@@ -16437,17 +16561,18 @@ define <16 x float> @test_maskz_expand_load_ps_512(ptr %addr, i16 %mask)  #0 {
 ; CHECK-NEXT:    [[TMP4:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[TMP7]], <16 x i1> [[TMP4]], <16 x i32> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <16 x i1> [[TMP3]] to i16
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP8]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
 ; CHECK:       10:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       11:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <16 x float> @llvm.masked.expandload.v16f32(ptr [[ADDR]], <16 x i1> [[TMP4]], <16 x float> zeroinitializer)
 ; CHECK-NEXT:    store <16 x i32> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x float> [[TMP11]]
@@ -16466,14 +16591,15 @@ define <16 x float> @test_expand_load_ps_512(ptr %addr, <16 x float> %data)  #0
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[TMP5]], <16 x i1> splat (i1 true), <16 x i32> [[TMP2]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
-; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       8:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x float> @llvm.masked.expandload.v16f32(ptr [[ADDR]], <16 x i1> splat (i1 true), <16 x float> [[DATA:%.*]])
 ; CHECK-NEXT:    store <16 x i32> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x float> [[TMP8]]
@@ -16493,17 +16619,18 @@ define <8 x i64> @test_mask_expand_load_q_512(ptr %addr, <8 x i64> %data, i8 %ma
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <8 x i64> @llvm.masked.expandload.v8i64(ptr [[TMP8]], <8 x i1> [[TMP5]], <8 x i64> [[TMP3]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <8 x i1> [[TMP4]] to i8
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i8 [[TMP9]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    [[TMP12:%.*]] = call <8 x i64> @llvm.masked.expandload.v8i64(ptr [[ADDR]], <8 x i1> [[TMP5]], <8 x i64> [[DATA:%.*]])
 ; CHECK-NEXT:    store <8 x i64> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x i64> [[TMP12]]
@@ -16522,17 +16649,18 @@ define <8 x i64> @test_maskz_expand_load_q_512(ptr %addr, i8 %mask)  #0 {
 ; CHECK-NEXT:    [[TMP4:%.*]] = bitcast i8 [[MASK:%.*]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <8 x i64> @llvm.masked.expandload.v8i64(ptr [[TMP7]], <8 x i1> [[TMP4]], <8 x i64> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <8 x i1> [[TMP3]] to i8
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i8 [[TMP8]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
 ; CHECK:       10:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       11:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <8 x i64> @llvm.masked.expandload.v8i64(ptr [[ADDR]], <8 x i1> [[TMP4]], <8 x i64> zeroinitializer)
 ; CHECK-NEXT:    store <8 x i64> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x i64> [[TMP11]]
@@ -16551,14 +16679,15 @@ define <8 x i64> @test_expand_load_q_512(ptr %addr, <8 x i64> %data)  #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <8 x i64> @llvm.masked.expandload.v8i64(ptr [[TMP5]], <8 x i1> splat (i1 true), <8 x i64> [[TMP2]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
-; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       8:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <8 x i64> @llvm.masked.expandload.v8i64(ptr [[ADDR]], <8 x i1> splat (i1 true), <8 x i64> [[DATA:%.*]])
 ; CHECK-NEXT:    store <8 x i64> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x i64> [[TMP8]]
@@ -16578,17 +16707,18 @@ define <16 x i32> @test_mask_expand_load_d_512(ptr %addr, <16 x i32> %data, i16
 ; CHECK-NEXT:    [[TMP5:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[TMP8]], <16 x i1> [[TMP5]], <16 x i32> [[TMP3]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i1> [[TMP4]] to i16
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP9]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    [[TMP12:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[ADDR]], <16 x i1> [[TMP5]], <16 x i32> [[DATA:%.*]])
 ; CHECK-NEXT:    store <16 x i32> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x i32> [[TMP12]]
@@ -16607,17 +16737,18 @@ define <16 x i32> @test_maskz_expand_load_d_512(ptr %addr, i16 %mask)  #0 {
 ; CHECK-NEXT:    [[TMP4:%.*]] = bitcast i16 [[MASK:%.*]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[TMP7]], <16 x i1> [[TMP4]], <16 x i32> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP2]], 0
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <16 x i1> [[TMP3]] to i16
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP8]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
 ; CHECK:       10:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       11:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[ADDR]], <16 x i1> [[TMP4]], <16 x i32> zeroinitializer)
 ; CHECK-NEXT:    store <16 x i32> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x i32> [[TMP11]]
@@ -16636,14 +16767,15 @@ define <16 x i32> @test_expand_load_d_512(ptr %addr, <16 x i32> %data)  #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ADDR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[TMP5]], <16 x i1> splat (i1 true), <16 x i32> [[TMP2]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
-; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       8:
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[ADDR]], <16 x i1> splat (i1 true), <16 x i32> [[DATA:%.*]])
 ; CHECK-NEXT:    store <16 x i32> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x i32> [[TMP8]]
@@ -16995,7 +17127,7 @@ define <8 x double> @test_sqrt_round_pd_512(<8 x double> %a0, <8 x double> %extr
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP2]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <8 x double> @llvm.x86.avx512.sqrt.pd.512(<8 x double> [[A0:%.*]], i32 11)
@@ -17016,7 +17148,7 @@ define <8 x double> @test_mask_sqrt_round_pd_512(<8 x double> %a0, <8 x double>
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP4]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <8 x double> @llvm.x86.avx512.sqrt.pd.512(<8 x double> [[A0:%.*]], i32 11)
@@ -17046,7 +17178,7 @@ define <8 x double> @test_maskz_sqrt_round_pd_512(<8 x double> %a0, i8 %mask)  #
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP3]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[TMP6:%.*]] = call <8 x double> @llvm.x86.avx512.sqrt.pd.512(<8 x double> [[A0:%.*]], i32 11)
@@ -17132,7 +17264,7 @@ define <16 x float> @test_sqrt_round_ps_512(<16 x float> %a0)  #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP2]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[TMP5:%.*]] = call <16 x float> @llvm.x86.avx512.sqrt.ps.512(<16 x float> [[A0:%.*]], i32 11)
@@ -17153,7 +17285,7 @@ define <16 x float> @test_mask_sqrt_round_ps_512(<16 x float> %a0, <16 x float>
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP4]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP7:%.*]] = call <16 x float> @llvm.x86.avx512.sqrt.ps.512(<16 x float> [[A0:%.*]], i32 11)
@@ -17183,7 +17315,7 @@ define <16 x float> @test_maskz_sqrt_round_ps_512(<16 x float> %a0, i16 %mask)
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP3]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[TMP6:%.*]] = call <16 x float> @llvm.x86.avx512.sqrt.ps.512(<16 x float> [[A0:%.*]], i32 11)
@@ -17754,7 +17886,7 @@ define <2 x double>@test_int_x86_avx512_mask_vfmadd_sd(<2 x double> %x0, <2 x do
 ; CHECK-NEXT:    [[_MSOR21:%.*]] = or i1 [[_MSOR]], [[_MSCMP20]]
 ; CHECK-NEXT:    br i1 [[_MSOR21]], label [[TMP23:%.*]], label [[TMP24:%.*]], !prof [[PROF1]]
 ; CHECK:       23:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       24:
 ; CHECK-NEXT:    [[TMP25:%.*]] = call double @llvm.x86.avx512.vfmadd.f64(double [[TMP20]], double [[TMP21]], double [[TMP22]], i32 11)
@@ -17773,7 +17905,7 @@ define <2 x double>@test_int_x86_avx512_mask_vfmadd_sd(<2 x double> %x0, <2 x do
 ; CHECK-NEXT:    [[_MSOR26:%.*]] = or i1 [[_MSOR24]], [[_MSCMP25]]
 ; CHECK-NEXT:    br i1 [[_MSOR26]], label [[TMP30:%.*]], label [[TMP31:%.*]], !prof [[PROF1]]
 ; CHECK:       30:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       31:
 ; CHECK-NEXT:    [[TMP32:%.*]] = call double @llvm.x86.avx512.vfmadd.f64(double [[TMP27]], double [[TMP28]], double [[TMP29]], i32 10)
@@ -17852,7 +17984,7 @@ define <4 x float>@test_int_x86_avx512_mask_vfmadd_ss(<4 x float> %x0, <4 x floa
 ; CHECK-NEXT:    [[_MSOR21:%.*]] = or i1 [[_MSOR]], [[_MSCMP20]]
 ; CHECK-NEXT:    br i1 [[_MSOR21]], label [[TMP23:%.*]], label [[TMP24:%.*]], !prof [[PROF1]]
 ; CHECK:       23:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       24:
 ; CHECK-NEXT:    [[TMP25:%.*]] = call float @llvm.x86.avx512.vfmadd.f32(float [[TMP20]], float [[TMP21]], float [[TMP22]], i32 11)
@@ -17871,7 +18003,7 @@ define <4 x float>@test_int_x86_avx512_mask_vfmadd_ss(<4 x float> %x0, <4 x floa
 ; CHECK-NEXT:    [[_MSOR26:%.*]] = or i1 [[_MSOR24]], [[_MSCMP25]]
 ; CHECK-NEXT:    br i1 [[_MSOR26]], label [[TMP30:%.*]], label [[TMP31:%.*]], !prof [[PROF1]]
 ; CHECK:       30:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       31:
 ; CHECK-NEXT:    [[TMP32:%.*]] = call float @llvm.x86.avx512.vfmadd.f32(float [[TMP27]], float [[TMP28]], float [[TMP29]], i32 10)
@@ -17949,7 +18081,7 @@ define <2 x double>@test_int_x86_avx512_maskz_vfmadd_sd(<2 x double> %x0, <2 x d
 ; CHECK-NEXT:    [[_MSOR16:%.*]] = or i1 [[_MSOR]], [[_MSCMP15]]
 ; CHECK-NEXT:    br i1 [[_MSOR16]], label [[TMP22:%.*]], label [[TMP23:%.*]], !prof [[PROF1]]
 ; CHECK:       22:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       23:
 ; CHECK-NEXT:    [[TMP24:%.*]] = call double @llvm.x86.avx512.vfmadd.f64(double [[TMP19]], double [[TMP20]], double [[TMP21]], i32 11)
@@ -18022,7 +18154,7 @@ define <4 x float>@test_int_x86_avx512_maskz_vfmadd_ss(<4 x float> %x0, <4 x flo
 ; CHECK-NEXT:    [[_MSOR16:%.*]] = or i1 [[_MSOR]], [[_MSCMP15]]
 ; CHECK-NEXT:    br i1 [[_MSOR16]], label [[TMP22:%.*]], label [[TMP23:%.*]], !prof [[PROF1]]
 ; CHECK:       22:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       23:
 ; CHECK-NEXT:    [[TMP24:%.*]] = call float @llvm.x86.avx512.vfmadd.f32(float [[TMP19]], float [[TMP20]], float [[TMP21]], i32 11)
@@ -18095,7 +18227,7 @@ define <2 x double>@test_int_x86_avx512_mask3_vfmadd_sd(<2 x double> %x0, <2 x d
 ; CHECK-NEXT:    [[_MSOR21:%.*]] = or i1 [[_MSOR]], [[_MSCMP20]]
 ; CHECK-NEXT:    br i1 [[_MSOR21]], label [[TMP23:%.*]], label [[TMP24:%.*]], !prof [[PROF1]]
 ; CHECK:       23:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       24:
 ; CHECK-NEXT:    [[TMP25:%.*]] = call double @llvm.x86.avx512.vfmadd.f64(double [[TMP20]], double [[TMP21]], double [[TMP22]], i32 11)
@@ -18114,7 +18246,7 @@ define <2 x double>@test_int_x86_avx512_mask3_vfmadd_sd(<2 x double> %x0, <2 x d
 ; CHECK-NEXT:    [[_MSOR26:%.*]] = or i1 [[_MSOR24]], [[_MSCMP25]]
 ; CHECK-NEXT:    br i1 [[_MSOR26]], label [[TMP30:%.*]], label [[TMP31:%.*]], !prof [[PROF1]]
 ; CHECK:       30:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       31:
 ; CHECK-NEXT:    [[TMP32:%.*]] = call double @llvm.x86.avx512.vfmadd.f64(double [[TMP27]], double [[TMP28]], double [[TMP29]], i32 10)
@@ -18193,7 +18325,7 @@ define <4 x float>@test_int_x86_avx512_mask3_vfmadd_ss(<4 x float> %x0, <4 x flo
 ; CHECK-NEXT:    [[_MSOR21:%.*]] = or i1 [[_MSOR]], [[_MSCMP20]]
 ; CHECK-NEXT:    br i1 [[_MSOR21]], label [[TMP23:%.*]], label [[TMP24:%.*]], !prof [[PROF1]]
 ; CHECK:       23:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       24:
 ; CHECK-NEXT:    [[TMP25:%.*]] = call float @llvm.x86.avx512.vfmadd.f32(float [[TMP20]], float [[TMP21]], float [[TMP22]], i32 11)
@@ -18212,7 +18344,7 @@ define <4 x float>@test_int_x86_avx512_mask3_vfmadd_ss(<4 x float> %x0, <4 x flo
 ; CHECK-NEXT:    [[_MSOR26:%.*]] = or i1 [[_MSOR24]], [[_MSCMP25]]
 ; CHECK-NEXT:    br i1 [[_MSOR26]], label [[TMP30:%.*]], label [[TMP31:%.*]], !prof [[PROF1]]
 ; CHECK:       30:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       31:
 ; CHECK-NEXT:    [[TMP32:%.*]] = call float @llvm.x86.avx512.vfmadd.f32(float [[TMP27]], float [[TMP28]], float [[TMP29]], i32 10)
@@ -18254,15 +18386,16 @@ define void @fmadd_ss_mask_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_par
 ; CHECK-NEXT:    [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__msan_param_tls, i64 16), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[A_VAL:%.*]] = load float, ptr [[A:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP4]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[AV0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[A_VAL]], i32 0
@@ -18273,15 +18406,16 @@ define void @fmadd_ss_mask_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_par
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <4 x i32> [[_MSPROP2]], i32 0, i32 3
 ; CHECK-NEXT:    [[AV:%.*]] = insertelement <4 x float> [[AV2]], float 0.000000e+00, i32 3
 ; CHECK-NEXT:    [[_MSCMP17:%.*]] = icmp ne i64 [[TMP2]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP17]], label [[TMP10:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP17]], label [[TMP29:%.*]], label [[TMP30:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    [[B_VAL:%.*]] = load float, ptr [[B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[_MSLD4:%.*]] = load i32, ptr [[TMP13]], align 4
 ; CHECK-NEXT:    [[_MSPROP5:%.*]] = insertelement <4 x i32> [[TMP4]], i32 [[_MSLD4]], i32 0
 ; CHECK-NEXT:    [[BV0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM]], float [[B_VAL]], i32 0
@@ -18317,14 +18451,15 @@ define void @fmadd_ss_mask_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_par
 ; CHECK-NEXT:    [[_MSPROP16:%.*]] = extractelement <4 x i32> [[_MSPROP15]], i32 0
 ; CHECK-NEXT:    [[SR:%.*]] = extractelement <4 x float> [[TMP28]], i32 0
 ; CHECK-NEXT:    [[_MSCMP18:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP18]], label [[TMP30:%.*]], label [[TMP34:%.*]], !prof [[PROF1]]
-; CHECK:       30:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSCMP18]], label [[TMP35:%.*]], label [[TMP37:%.*]], !prof [[PROF1]]
+; CHECK:       32:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       31:
+; CHECK:       33:
 ; CHECK-NEXT:    [[TMP31:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP32:%.*]] = xor i64 [[TMP31]], 87960930222080
-; CHECK-NEXT:    [[TMP33:%.*]] = inttoptr i64 [[TMP32]] to ptr
+; CHECK-NEXT:    [[TMP36:%.*]] = add i64 [[TMP32]], 2097152
+; CHECK-NEXT:    [[TMP33:%.*]] = inttoptr i64 [[TMP36]] to ptr
 ; CHECK-NEXT:    store i32 [[_MSPROP16]], ptr [[TMP33]], align 4
 ; CHECK-NEXT:    store float [[SR]], ptr [[A]], align 4
 ; CHECK-NEXT:    ret void
@@ -18357,15 +18492,16 @@ define void @fmadd_ss_maskz_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_pa
 ; CHECK-NEXT:    [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__msan_param_tls, i64 16), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[A_VAL:%.*]] = load float, ptr [[A:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP4]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[AV0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[A_VAL]], i32 0
@@ -18376,15 +18512,16 @@ define void @fmadd_ss_maskz_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_pa
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <4 x i32> [[_MSPROP2]], i32 0, i32 3
 ; CHECK-NEXT:    [[AV:%.*]] = insertelement <4 x float> [[AV2]], float 0.000000e+00, i32 3
 ; CHECK-NEXT:    [[_MSCMP17:%.*]] = icmp ne i64 [[TMP2]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP17]], label [[TMP10:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP17]], label [[TMP28:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    [[B_VAL:%.*]] = load float, ptr [[B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP33:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP33]] to ptr
 ; CHECK-NEXT:    [[_MSLD4:%.*]] = load i32, ptr [[TMP13]], align 4
 ; CHECK-NEXT:    [[_MSPROP5:%.*]] = insertelement <4 x i32> [[TMP4]], i32 [[_MSLD4]], i32 0
 ; CHECK-NEXT:    [[BV0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM]], float [[B_VAL]], i32 0
@@ -18419,14 +18556,15 @@ define void @fmadd_ss_maskz_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_pa
 ; CHECK-NEXT:    [[_MSPROP16:%.*]] = extractelement <4 x i32> [[_MSPROP15]], i32 0
 ; CHECK-NEXT:    [[SR:%.*]] = extractelement <4 x float> [[TMP27]], i32 0
 ; CHECK-NEXT:    [[_MSCMP18:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP18]], label [[TMP29:%.*]], label [[TMP33:%.*]], !prof [[PROF1]]
-; CHECK:       29:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSCMP18]], label [[TMP34:%.*]], label [[TMP36:%.*]], !prof [[PROF1]]
+; CHECK:       31:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       30:
+; CHECK:       32:
 ; CHECK-NEXT:    [[TMP30:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP31:%.*]] = xor i64 [[TMP30]], 87960930222080
-; CHECK-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP31]] to ptr
+; CHECK-NEXT:    [[TMP35:%.*]] = add i64 [[TMP31]], 2097152
+; CHECK-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP35]] to ptr
 ; CHECK-NEXT:    store i32 [[_MSPROP16]], ptr [[TMP32]], align 4
 ; CHECK-NEXT:    store float [[SR]], ptr [[A]], align 4
 ; CHECK-NEXT:    ret void
@@ -18459,30 +18597,32 @@ define void @fmadd_sd_mask_memfold(ptr %a, ptr %b, i8 %c, <2 x double> %extra_pa
 ; CHECK-NEXT:    [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__msan_param_tls, i64 16), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[A_VAL:%.*]] = load double, ptr [[A:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <2 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[AV0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM:%.*]], double [[A_VAL]], i32 0
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = insertelement <2 x i64> [[_MSPROP]], i64 0, i32 1
 ; CHECK-NEXT:    [[AV:%.*]] = insertelement <2 x double> [[AV0]], double 0.000000e+00, i32 1
 ; CHECK-NEXT:    [[_MSCMP13:%.*]] = icmp ne i64 [[TMP2]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP13]], label [[TMP10:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP13]], label [[TMP29:%.*]], label [[TMP30:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    [[B_VAL:%.*]] = load double, ptr [[B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[_MSLD2:%.*]] = load i64, ptr [[TMP13]], align 8
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <2 x i64> [[TMP4]], i64 [[_MSLD2]], i32 0
 ; CHECK-NEXT:    [[BV0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM]], double [[B_VAL]], i32 0
@@ -18514,14 +18654,15 @@ define void @fmadd_sd_mask_memfold(ptr %a, ptr %b, i8 %c, <2 x double> %extra_pa
 ; CHECK-NEXT:    [[_MSPROP12:%.*]] = extractelement <2 x i64> [[_MSPROP11]], i32 0
 ; CHECK-NEXT:    [[SR:%.*]] = extractelement <2 x double> [[TMP28]], i32 0
 ; CHECK-NEXT:    [[_MSCMP14:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP14]], label [[TMP30:%.*]], label [[TMP34:%.*]], !prof [[PROF1]]
-; CHECK:       30:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSCMP14]], label [[TMP35:%.*]], label [[TMP37:%.*]], !prof [[PROF1]]
+; CHECK:       32:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       31:
+; CHECK:       33:
 ; CHECK-NEXT:    [[TMP31:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP32:%.*]] = xor i64 [[TMP31]], 87960930222080
-; CHECK-NEXT:    [[TMP33:%.*]] = inttoptr i64 [[TMP32]] to ptr
+; CHECK-NEXT:    [[TMP36:%.*]] = add i64 [[TMP32]], 2097152
+; CHECK-NEXT:    [[TMP33:%.*]] = inttoptr i64 [[TMP36]] to ptr
 ; CHECK-NEXT:    store i64 [[_MSPROP12]], ptr [[TMP33]], align 8
 ; CHECK-NEXT:    store double [[SR]], ptr [[A]], align 8
 ; CHECK-NEXT:    ret void
@@ -18550,30 +18691,32 @@ define void @fmadd_sd_maskz_memfold(ptr %a, ptr %b, i8 %c, <2 x double> %extra_p
 ; CHECK-NEXT:    [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__msan_param_tls, i64 16), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[A_VAL:%.*]] = load double, ptr [[A:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <2 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[AV0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM:%.*]], double [[A_VAL]], i32 0
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = insertelement <2 x i64> [[_MSPROP]], i64 0, i32 1
 ; CHECK-NEXT:    [[AV:%.*]] = insertelement <2 x double> [[AV0]], double 0.000000e+00, i32 1
 ; CHECK-NEXT:    [[_MSCMP13:%.*]] = icmp ne i64 [[TMP2]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP13]], label [[TMP10:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
-; CHECK:       10:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP13]], label [[TMP28:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
 ; CHECK:       11:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
+; CHECK-NEXT:    unreachable
+; CHECK:       12:
 ; CHECK-NEXT:    [[B_VAL:%.*]] = load double, ptr [[B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP33:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP33]] to ptr
 ; CHECK-NEXT:    [[_MSLD2:%.*]] = load i64, ptr [[TMP13]], align 8
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <2 x i64> [[TMP4]], i64 [[_MSLD2]], i32 0
 ; CHECK-NEXT:    [[BV0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM]], double [[B_VAL]], i32 0
@@ -18604,14 +18747,15 @@ define void @fmadd_sd_maskz_memfold(ptr %a, ptr %b, i8 %c, <2 x double> %extra_p
 ; CHECK-NEXT:    [[_MSPROP12:%.*]] = extractelement <2 x i64> [[_MSPROP11]], i32 0
 ; CHECK-NEXT:    [[SR:%.*]] = extractelement <2 x double> [[TMP27]], i32 0
 ; CHECK-NEXT:    [[_MSCMP14:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP14]], label [[TMP29:%.*]], label [[TMP33:%.*]], !prof [[PROF1]]
-; CHECK:       29:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    br i1 [[_MSCMP14]], label [[TMP34:%.*]], label [[TMP36:%.*]], !prof [[PROF1]]
+; CHECK:       31:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       30:
+; CHECK:       32:
 ; CHECK-NEXT:    [[TMP30:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP31:%.*]] = xor i64 [[TMP30]], 87960930222080
-; CHECK-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP31]] to ptr
+; CHECK-NEXT:    [[TMP35:%.*]] = add i64 [[TMP31]], 2097152
+; CHECK-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP35]] to ptr
 ; CHECK-NEXT:    store i64 [[_MSPROP12]], ptr [[TMP32]], align 8
 ; CHECK-NEXT:    store double [[SR]], ptr [[A]], align 8
 ; CHECK-NEXT:    ret void
@@ -18681,7 +18825,7 @@ define <2 x double>@test_int_x86_avx512_mask3_vfmsub_sd(<2 x double> %x0, <2 x d
 ; CHECK-NEXT:    [[_MSOR24:%.*]] = or i1 [[_MSOR]], [[_MSCMP23]]
 ; CHECK-NEXT:    br i1 [[_MSOR24]], label [[TMP26:%.*]], label [[TMP27:%.*]], !prof [[PROF1]]
 ; CHECK:       26:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       27:
 ; CHECK-NEXT:    [[TMP28:%.*]] = call double @llvm.x86.avx512.vfmadd.f64(double [[TMP23]], double [[TMP24]], double [[TMP25]], i32 11)
@@ -18703,7 +18847,7 @@ define <2 x double>@test_int_x86_avx512_mask3_vfmsub_sd(<2 x double> %x0, <2 x d
 ; CHECK-NEXT:    [[_MSOR29:%.*]] = or i1 [[_MSOR27]], [[_MSCMP28]]
 ; CHECK-NEXT:    br i1 [[_MSOR29]], label [[TMP35:%.*]], label [[TMP36:%.*]], !prof [[PROF1]]
 ; CHECK:       35:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       36:
 ; CHECK-NEXT:    [[TMP37:%.*]] = call double @llvm.x86.avx512.vfmadd.f64(double [[TMP32]], double [[TMP33]], double [[TMP34]], i32 10)
@@ -18788,7 +18932,7 @@ define <4 x float>@test_int_x86_avx512_mask3_vfmsub_ss(<4 x float> %x0, <4 x flo
 ; CHECK-NEXT:    [[_MSOR24:%.*]] = or i1 [[_MSOR]], [[_MSCMP23]]
 ; CHECK-NEXT:    br i1 [[_MSOR24]], label [[TMP26:%.*]], label [[TMP27:%.*]], !prof [[PROF1]]
 ; CHECK:       26:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       27:
 ; CHECK-NEXT:    [[TMP28:%.*]] = call float @llvm.x86.avx512.vfmadd.f32(float [[TMP23]], float [[TMP24]], float [[TMP25]], i32 11)
@@ -18810,7 +18954,7 @@ define <4 x float>@test_int_x86_avx512_mask3_vfmsub_ss(<4 x float> %x0, <4 x flo
 ; CHECK-NEXT:    [[_MSOR29:%.*]] = or i1 [[_MSOR27]], [[_MSCMP28]]
 ; CHECK-NEXT:    br i1 [[_MSOR29]], label [[TMP35:%.*]], label [[TMP36:%.*]], !prof [[PROF1]]
 ; CHECK:       35:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       36:
 ; CHECK-NEXT:    [[TMP37:%.*]] = call float @llvm.x86.avx512.vfmadd.f32(float [[TMP32]], float [[TMP33]], float [[TMP34]], i32 10)
@@ -18897,7 +19041,7 @@ define <2 x double>@test_int_x86_avx512_mask3_vfnmsub_sd(<2 x double> %x0, <2 x
 ; CHECK-NEXT:    [[_MSOR24:%.*]] = or i1 [[_MSOR]], [[_MSCMP23]]
 ; CHECK-NEXT:    br i1 [[_MSOR24]], label [[TMP28:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
 ; CHECK:       28:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       29:
 ; CHECK-NEXT:    [[TMP30:%.*]] = call double @llvm.x86.avx512.vfmadd.f64(double [[TMP25]], double [[TMP26]], double [[TMP27]], i32 11)
@@ -18920,7 +19064,7 @@ define <2 x double>@test_int_x86_avx512_mask3_vfnmsub_sd(<2 x double> %x0, <2 x
 ; CHECK-NEXT:    [[_MSOR29:%.*]] = or i1 [[_MSOR27]], [[_MSCMP28]]
 ; CHECK-NEXT:    br i1 [[_MSOR29]], label [[TMP38:%.*]], label [[TMP39:%.*]], !prof [[PROF1]]
 ; CHECK:       38:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       39:
 ; CHECK-NEXT:    [[TMP40:%.*]] = call double @llvm.x86.avx512.vfmadd.f64(double [[TMP35]], double [[TMP36]], double [[TMP37]], i32 10)
@@ -19007,7 +19151,7 @@ define <4 x float>@test_int_x86_avx512_mask3_vfnmsub_ss(<4 x float> %x0, <4 x fl
 ; CHECK-NEXT:    [[_MSOR24:%.*]] = or i1 [[_MSOR]], [[_MSCMP23]]
 ; CHECK-NEXT:    br i1 [[_MSOR24]], label [[TMP28:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
 ; CHECK:       28:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       29:
 ; CHECK-NEXT:    [[TMP30:%.*]] = call float @llvm.x86.avx512.vfmadd.f32(float [[TMP25]], float [[TMP26]], float [[TMP27]], i32 11)
@@ -19030,7 +19174,7 @@ define <4 x float>@test_int_x86_avx512_mask3_vfnmsub_ss(<4 x float> %x0, <4 x fl
 ; CHECK-NEXT:    [[_MSOR29:%.*]] = or i1 [[_MSOR27]], [[_MSCMP28]]
 ; CHECK-NEXT:    br i1 [[_MSOR29]], label [[TMP38:%.*]], label [[TMP39:%.*]], !prof [[PROF1]]
 ; CHECK:       38:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       39:
 ; CHECK-NEXT:    [[TMP40:%.*]] = call float @llvm.x86.avx512.vfmadd.f32(float [[TMP35]], float [[TMP36]], float [[TMP37]], i32 10)
@@ -19077,13 +19221,14 @@ define <4 x float>@test_int_x86_avx512_mask3_vfmadd_ss_rm(<4 x float> %x0, <4 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP25:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[Q:%.*]] = load float, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP26:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP26]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP5]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[Q]], i32 0
@@ -19131,13 +19276,14 @@ define <4 x float>@test_int_x86_avx512_mask_vfmadd_ss_rm(<4 x float> %x0, <4 x f
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP25:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       7:
 ; CHECK-NEXT:    [[Q:%.*]] = load float, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP26:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP26]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP5]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[Q]], i32 0
@@ -19184,13 +19330,14 @@ define <4 x float>@test_int_x86_avx512_maskz_vfmadd_ss_rm(<4 x float> %x0, <4 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP20:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load float, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP21:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP21]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP4]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[Q]], i32 0
@@ -19307,7 +19454,7 @@ define <16 x float> @test_int_x86_avx512_mask_cvt_dq2ps_512(<16 x i32> %x0, <16
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP13]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       15:
 ; CHECK-NEXT:    [[TMP16:%.*]] = call <16 x float> @llvm.x86.avx512.sitofp.round.v16f32.v16i32(<16 x i32> [[X0]], i32 8)
@@ -19346,7 +19493,7 @@ define <16 x float> @test_int_x86_avx512_mask_cvt_udq2ps_512(<16 x i32> %x0, <16
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i512 [[TMP13]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       15:
 ; CHECK-NEXT:    [[TMP16:%.*]] = call <16 x float> @llvm.x86.avx512.uitofp.round.v16f32.v16i32(<16 x i32> [[X0]], i32 8)
@@ -19380,7 +19527,7 @@ define <8 x double> @test_mask_compress_pd_512(<8 x double> %data, <8 x double>
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       10:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <8 x double> @llvm.x86.avx512.mask.compress.v8f64(<8 x double> [[DATA:%.*]], <8 x double> [[PASSTHRU:%.*]], <8 x i1> [[TMP5]])
@@ -19406,7 +19553,7 @@ define <8 x double> @test_maskz_compress_pd_512(<8 x double> %data, i8 %mask)  #
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <8 x double> @llvm.x86.avx512.mask.compress.v8f64(<8 x double> [[DATA:%.*]], <8 x double> zeroinitializer, <8 x i1> [[TMP4]])
@@ -19429,7 +19576,7 @@ define <8 x double> @test_compress_pd_512(<8 x double> %data, <8 x double> %extr
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP2:%.*]] = call <8 x double> @llvm.x86.avx512.mask.compress.v8f64(<8 x double> [[DATA:%.*]], <8 x double> [[EXTRA_PARAM:%.*]], <8 x i1> splat (i1 true))
@@ -19461,7 +19608,7 @@ define <16 x float> @test_mask_compress_ps_512(<16 x float> %data, <16 x float>
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       10:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <16 x float> @llvm.x86.avx512.mask.compress.v16f32(<16 x float> [[DATA:%.*]], <16 x float> [[PASSTHRU:%.*]], <16 x i1> [[TMP5]])
@@ -19487,7 +19634,7 @@ define <16 x float> @test_maskz_compress_ps_512(<16 x float> %data, i16 %mask)
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.mask.compress.v16f32(<16 x float> [[DATA:%.*]], <16 x float> zeroinitializer, <16 x i1> [[TMP4]])
@@ -19510,7 +19657,7 @@ define <16 x float> @test_compress_ps_512(<16 x float> %data, <16 x float> %extr
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP2:%.*]] = call <16 x float> @llvm.x86.avx512.mask.compress.v16f32(<16 x float> [[DATA:%.*]], <16 x float> [[EXTRA_PARAM:%.*]], <16 x i1> splat (i1 true))
@@ -19542,7 +19689,7 @@ define <8 x i64> @test_mask_compress_q_512(<8 x i64> %data, <8 x i64> %passthru,
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       10:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <8 x i64> @llvm.x86.avx512.mask.compress.v8i64(<8 x i64> [[DATA:%.*]], <8 x i64> [[PASSTHRU:%.*]], <8 x i1> [[TMP5]])
@@ -19568,7 +19715,7 @@ define <8 x i64> @test_maskz_compress_q_512(<8 x i64> %data, i8 %mask)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <8 x i64> @llvm.x86.avx512.mask.compress.v8i64(<8 x i64> [[DATA:%.*]], <8 x i64> zeroinitializer, <8 x i1> [[TMP4]])
@@ -19591,7 +19738,7 @@ define <8 x i64> @test_compress_q_512(<8 x i64> %data, <8 x i64> %extra_param)
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP2:%.*]] = call <8 x i64> @llvm.x86.avx512.mask.compress.v8i64(<8 x i64> [[DATA:%.*]], <8 x i64> [[EXTRA_PARAM:%.*]], <8 x i1> splat (i1 true))
@@ -19623,7 +19770,7 @@ define <16 x i32> @test_mask_compress_d_512(<16 x i32> %data, <16 x i32> %passth
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       10:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <16 x i32> @llvm.x86.avx512.mask.compress.v16i32(<16 x i32> [[DATA:%.*]], <16 x i32> [[PASSTHRU:%.*]], <16 x i1> [[TMP5]])
@@ -19649,7 +19796,7 @@ define <16 x i32> @test_maskz_compress_d_512(<16 x i32> %data, i16 %mask)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x i32> @llvm.x86.avx512.mask.compress.v16i32(<16 x i32> [[DATA:%.*]], <16 x i32> zeroinitializer, <16 x i1> [[TMP4]])
@@ -19672,7 +19819,7 @@ define <16 x i32> @test_compress_d_512(<16 x i32> %data, <16 x i32> %extra_param
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP2:%.*]] = call <16 x i32> @llvm.x86.avx512.mask.compress.v16i32(<16 x i32> [[DATA:%.*]], <16 x i32> [[EXTRA_PARAM:%.*]], <16 x i1> splat (i1 true))
@@ -19697,7 +19844,7 @@ define <8 x double> @test_expand_pd_512(<8 x double> %data, <8 x double> %extra_
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP2:%.*]] = call <8 x double> @llvm.x86.avx512.mask.expand.v8f64(<8 x double> [[DATA:%.*]], <8 x double> [[EXTRA_PARAM:%.*]], <8 x i1> splat (i1 true))
@@ -19727,7 +19874,7 @@ define <8 x double> @test_mask_expand_pd_512(<8 x double> %data, <8 x double> %p
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       10:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <8 x double> @llvm.x86.avx512.mask.expand.v8f64(<8 x double> [[DATA:%.*]], <8 x double> [[PASSTHRU:%.*]], <8 x i1> [[TMP5]])
@@ -19753,7 +19900,7 @@ define <8 x double> @test_maskz_expand_pd_512(<8 x double> %data, i8 %mask)  #0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <8 x double> @llvm.x86.avx512.mask.expand.v8f64(<8 x double> [[DATA:%.*]], <8 x double> zeroinitializer, <8 x i1> [[TMP4]])
@@ -19778,7 +19925,7 @@ define <16 x float> @test_expand_ps_512(<16 x float> %data, <16 x float> %extra_
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP2:%.*]] = call <16 x float> @llvm.x86.avx512.mask.expand.v16f32(<16 x float> [[DATA:%.*]], <16 x float> [[EXTRA_PARAM:%.*]], <16 x i1> splat (i1 true))
@@ -19808,7 +19955,7 @@ define <16 x float> @test_mask_expand_ps_512(<16 x float> %data, <16 x float> %p
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       10:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <16 x float> @llvm.x86.avx512.mask.expand.v16f32(<16 x float> [[DATA:%.*]], <16 x float> [[PASSTHRU:%.*]], <16 x i1> [[TMP5]])
@@ -19834,7 +19981,7 @@ define <16 x float> @test_maskz_expand_ps_512(<16 x float> %data, i16 %mask)  #0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x float> @llvm.x86.avx512.mask.expand.v16f32(<16 x float> [[DATA:%.*]], <16 x float> zeroinitializer, <16 x i1> [[TMP4]])
@@ -19859,7 +20006,7 @@ define <8 x i64> @test_expand_q_512(<8 x i64> %data, <8 x i64> %extra_param)  #0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP2:%.*]] = call <8 x i64> @llvm.x86.avx512.mask.expand.v8i64(<8 x i64> [[DATA:%.*]], <8 x i64> [[EXTRA_PARAM:%.*]], <8 x i1> splat (i1 true))
@@ -19889,7 +20036,7 @@ define <8 x i64> @test_mask_expand_q_512(<8 x i64> %data, <8 x i64> %passthru, i
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       10:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <8 x i64> @llvm.x86.avx512.mask.expand.v8i64(<8 x i64> [[DATA:%.*]], <8 x i64> [[PASSTHRU:%.*]], <8 x i1> [[TMP5]])
@@ -19915,7 +20062,7 @@ define <8 x i64> @test_maskz_expand_q_512(<8 x i64> %data, i8 %mask)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <8 x i64> @llvm.x86.avx512.mask.expand.v8i64(<8 x i64> [[DATA:%.*]], <8 x i64> zeroinitializer, <8 x i1> [[TMP4]])
@@ -19940,7 +20087,7 @@ define <16 x i32> @test_expand_d_512(<16 x i32> %data, <16 x i32> %extra_param)
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[TMP2:%.*]] = call <16 x i32> @llvm.x86.avx512.mask.expand.v16i32(<16 x i32> [[DATA:%.*]], <16 x i32> [[EXTRA_PARAM:%.*]], <16 x i1> splat (i1 true))
@@ -19970,7 +20117,7 @@ define <16 x i32> @test_mask_expand_d_512(<16 x i32> %data, <16 x i32> %passthru
 ; CHECK-NEXT:    [[_MSOR3:%.*]] = or i1 [[_MSOR]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR3]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
 ; CHECK:       9:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       10:
 ; CHECK-NEXT:    [[TMP11:%.*]] = call <16 x i32> @llvm.x86.avx512.mask.expand.v16i32(<16 x i32> [[DATA:%.*]], <16 x i32> [[PASSTHRU:%.*]], <16 x i1> [[TMP5]])
@@ -19996,7 +20143,7 @@ define <16 x i32> @test_maskz_expand_d_512(<16 x i32> %data, i16 %mask)  #0 {
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x i32> @llvm.x86.avx512.mask.expand.v16i32(<16 x i32> [[DATA:%.*]], <16 x i32> zeroinitializer, <16 x i1> [[TMP4]])
@@ -20026,7 +20173,7 @@ define <16 x float> @test_cmp_512(<16 x float> %a, <16 x float> %b, <16 x float>
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x i1> @llvm.x86.avx512.mask.cmp.ps.512(<16 x float> [[A:%.*]], <16 x float> [[B:%.*]], i32 1, <16 x i1> splat (i1 true), i32 8)
@@ -20037,20 +20184,21 @@ define <16 x float> @test_cmp_512(<16 x float> %a, <16 x float> %b, <16 x float>
 ; CHECK-NEXT:    [[_MSOR4:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
 ; CHECK-NEXT:    br i1 [[_MSOR4]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       12:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       13:
 ; CHECK-NEXT:    [[TMP14:%.*]] = call <16 x i1> @llvm.x86.avx512.mask.cmp.ps.512(<16 x float> [[C:%.*]], <16 x float> [[D:%.*]], i32 1, <16 x i1> splat (i1 true), i32 4)
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i64 [[TMP4]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP5]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
 ; CHECK:       15:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR9]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       16:
 ; CHECK-NEXT:    [[TMP17:%.*]] = load <16 x float>, ptr [[P:%.*]], align 64
 ; CHECK-NEXT:    [[TMP18:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP19:%.*]] = xor i64 [[TMP18]], 87960930222080
-; CHECK-NEXT:    [[TMP20:%.*]] = inttoptr i64 [[TMP19]] to ptr
+; CHECK-NEXT:    [[TMP28:%.*]] = add i64 [[TMP19]], 2097152
+; CHECK-NEXT:    [[TMP20:%.*]] = inttoptr i64 [[TMP28]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP20]], align 64
 ; CHECK-NEXT:    [[TMP21:%.*]] = xor <16 x i1> [[TMP9]], [[TMP14]]
 ; CHECK-NEXT:    [[TMP22:%.*]] = select <16 x i1> [[TMP21]], <16 x i32> zeroinitializer, <16 x i32> [[_MSLD]]
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512-intrinsics.ll
index 03624380aac0a..e4bc2590738e2 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512-intrinsics.ll
@@ -783,7 +783,8 @@ define <2 x double> @test_rndscale_sd_mask_load(<2 x double> %a, ptr %bptr, <2 x
 ; CHECK-NEXT:    [[B:%.*]] = load <2 x double>, ptr [[BPTR:%.*]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[BPTR]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <2 x i64>, ptr [[TMP9]], align 16
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <2 x i64> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP10]], 0
@@ -795,11 +796,11 @@ define <2 x double> @test_rndscale_sd_mask_load(<2 x double> %a, ptr %bptr, <2 x
 ; CHECK-NEXT:    [[_MSOR4:%.*]] = or i1 [[_MSOR]], [[_MSCMP3]]
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i8 [[TMP4]], 0
 ; CHECK-NEXT:    [[_MSOR6:%.*]] = or i1 [[_MSOR4]], [[_MSCMP5]]
-; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
+; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
+; CHECK:       14:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       14:
+; CHECK:       15:
 ; CHECK-NEXT:    [[RES:%.*]] = call <2 x double> @llvm.x86.avx512.mask.rndscale.sd(<2 x double> [[A:%.*]], <2 x double> [[B]], <2 x double> [[C:%.*]], i8 [[MASK:%.*]], i32 11, i32 4)
 ; CHECK-NEXT:    store <2 x i64> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <2 x double> [[RES]]
@@ -879,7 +880,8 @@ define <4 x float> @test_rndscale_ss_load(<4 x float> %a, ptr %bptr, <4 x float>
 ; CHECK-NEXT:    [[B:%.*]] = load <4 x float>, ptr [[BPTR:%.*]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[BPTR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <4 x i32> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP8]], 0
@@ -889,11 +891,11 @@ define <4 x float> @test_rndscale_ss_load(<4 x float> %a, ptr %bptr, <4 x float>
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast <4 x i32> [[TMP3]] to i128
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i128 [[TMP11]], 0
 ; CHECK-NEXT:    [[_MSOR4:%.*]] = or i1 [[_MSOR]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR4]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; CHECK:       12:
+; CHECK-NEXT:    br i1 [[_MSOR4]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
+; CHECK:       13:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       13:
+; CHECK:       14:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.avx512.mask.rndscale.ss(<4 x float> [[A:%.*]], <4 x float> [[B]], <4 x float> [[EXTRA_PARAM:%.*]], i8 -1, i32 11, i32 4)
 ; CHECK-NEXT:    store <4 x i32> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES]]
@@ -1650,15 +1652,16 @@ define i32 @test_x86_avx512_cvttss2si_load(ptr %a0) #0 {
 ; CHECK-NEXT:    [[A1:%.*]] = load <4 x float>, ptr [[A0:%.*]], align 16
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A0]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP6]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = bitcast <4 x i32> [[_MSLD]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP7]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; CHECK:       8:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK:       9:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       9:
+; CHECK:       10:
 ; CHECK-NEXT:    [[RES:%.*]] = call i32 @llvm.x86.avx512.cvttss2si(<4 x float> [[A1]], i32 4)
 ; CHECK-NEXT:    store i32 0, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret i32 [[RES]]
@@ -1894,21 +1897,22 @@ define <16 x i16> @test_x86_vcvtps2ph_256(<16 x float> %a0, <16 x i16> %src, i16
 ; CHECK-NEXT:    [[TMP27:%.*]] = sext <16 x i1> [[TMP26]] to <16 x i16>
 ; CHECK-NEXT:    [[TMP20:%.*]] = select <16 x i1> [[TMP25]], <16 x i16> [[TMP27]], <16 x i16> [[TMP3]]
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i16 [[TMP2]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP6]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP6]], label [[TMP15:%.*]], label [[TMP21:%.*]], !prof [[PROF1]]
 ; CHECK:       10:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       11:
 ; CHECK-NEXT:    [[RES3:%.*]] = call <16 x i16> @llvm.x86.avx512.mask.vcvtps2ph.512(<16 x float> [[A0]], i32 12, <16 x i16> [[SRC:%.*]], i16 [[MASK]])
 ; CHECK-NEXT:    [[_MSCMP8:%.*]] = icmp ne i64 [[TMP4]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP8]], label [[TMP21:%.*]], label [[TMP22:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP8]], label [[TMP22:%.*]], label [[TMP23:%.*]], !prof [[PROF1]]
 ; CHECK:       12:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       13:
 ; CHECK-NEXT:    [[TMP17:%.*]] = ptrtoint ptr [[DST:%.*]] to i64
 ; CHECK-NEXT:    [[TMP18:%.*]] = xor i64 [[TMP17]], 87960930222080
-; CHECK-NEXT:    [[TMP19:%.*]] = inttoptr i64 [[TMP18]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP18]], 2097152
+; CHECK-NEXT:    [[TMP19:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    store <16 x i16> [[TMP8]], ptr [[TMP19]], align 32
 ; CHECK-NEXT:    store <16 x i16> [[RES1]], ptr [[DST]], align 32
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i16> [[TMP13]], [[TMP20]]
@@ -2030,17 +2034,18 @@ define void @test_mask_store_ss(ptr %ptr, <4 x float> %data, i8 %mask) #0 {
 ; CHECK-NEXT:    [[EXTRACT:%.*]] = shufflevector <8 x i1> [[TMP11]], <8 x i1> [[TMP11]], <4 x i32> <i32 0, i32 1, i32 2, i32 3>
 ; CHECK-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP13]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v4i32.p0(<4 x i32> [[TMP2]], ptr align 1 [[TMP14]], <4 x i1> [[EXTRACT]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP15:%.*]] = bitcast <4 x i1> [[_MSPROP]] to i4
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i4 [[TMP15]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP16:%.*]], label [[TMP17:%.*]], !prof [[PROF1]]
-; CHECK:       16:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP17:%.*]], label [[TMP18:%.*]], !prof [[PROF1]]
+; CHECK:       17:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       17:
+; CHECK:       18:
 ; CHECK-NEXT:    call void @llvm.masked.store.v4f32.p0(<4 x float> [[DATA:%.*]], ptr align 1 [[PTR]], <4 x i1> [[EXTRACT]])
 ; CHECK-NEXT:    ret void
 ;
@@ -4425,7 +4430,8 @@ define <4 x float> @test_mask_add_ss_current_memfold(<4 x float> %a0, ptr %a1, <
 ; CHECK-NEXT:    [[A1_VAL:%.*]] = load float, ptr [[A1:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[A1]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP5]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[A1V0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[A1_VAL]], i32 0
@@ -4445,11 +4451,11 @@ define <4 x float> @test_mask_add_ss_current_memfold(<4 x float> %a0, ptr %a1, <
 ; CHECK-NEXT:    [[_MSOR7:%.*]] = or i1 [[_MSOR]], [[_MSCMP6]]
 ; CHECK-NEXT:    [[_MSCMP8:%.*]] = icmp ne i8 [[TMP4]], 0
 ; CHECK-NEXT:    [[_MSOR9:%.*]] = or i1 [[_MSOR7]], [[_MSCMP8]]
-; CHECK-NEXT:    br i1 [[_MSOR9]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
-; CHECK:       14:
+; CHECK-NEXT:    br i1 [[_MSOR9]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
+; CHECK:       15:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       15:
+; CHECK:       16:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.avx512.mask.add.ss.round(<4 x float> [[A0:%.*]], <4 x float> [[A1V]], <4 x float> [[A2:%.*]], i8 [[MASK:%.*]], i32 4)
 ; CHECK-NEXT:    store <4 x i32> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES]]
@@ -4479,7 +4485,8 @@ define <4 x float> @test_maskz_add_ss_current_memfold(<4 x float> %a0, ptr %a1,
 ; CHECK-NEXT:    [[A1_VAL:%.*]] = load float, ptr [[A1:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A1]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP4]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[A1V0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[A1_VAL]], i32 0
@@ -4496,11 +4503,11 @@ define <4 x float> @test_maskz_add_ss_current_memfold(<4 x float> %a0, ptr %a1,
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i8 [[TMP3]], 0
 ; CHECK-NEXT:    [[_MSOR7:%.*]] = or i1 [[_MSOR]], [[_MSCMP6]]
-; CHECK-NEXT:    br i1 [[_MSOR7]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; CHECK:       12:
+; CHECK-NEXT:    br i1 [[_MSOR7]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
+; CHECK:       13:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       13:
+; CHECK:       14:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.avx512.mask.add.ss.round(<4 x float> [[A0:%.*]], <4 x float> [[A1V]], <4 x float> zeroinitializer, i8 [[MASK:%.*]], i32 4)
 ; CHECK-NEXT:    store <4 x i32> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES]]
@@ -4732,7 +4739,8 @@ define <2 x double> @test_mask_add_sd_current_memfold(<2 x double> %a0, ptr %a1,
 ; CHECK-NEXT:    [[A1_VAL:%.*]] = load double, ptr [[A1:%.*]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[A1]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <2 x i64> [[TMP5]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[A1V0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM:%.*]], double [[A1_VAL]], i32 0
@@ -4748,11 +4756,11 @@ define <2 x double> @test_mask_add_sd_current_memfold(<2 x double> %a0, ptr %a1,
 ; CHECK-NEXT:    [[_MSOR5:%.*]] = or i1 [[_MSOR]], [[_MSCMP4]]
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i8 [[TMP4]], 0
 ; CHECK-NEXT:    [[_MSOR7:%.*]] = or i1 [[_MSOR5]], [[_MSCMP6]]
-; CHECK-NEXT:    br i1 [[_MSOR7]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
-; CHECK:       14:
+; CHECK-NEXT:    br i1 [[_MSOR7]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
+; CHECK:       15:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       15:
+; CHECK:       16:
 ; CHECK-NEXT:    [[RES:%.*]] = call <2 x double> @llvm.x86.avx512.mask.add.sd.round(<2 x double> [[A0:%.*]], <2 x double> [[A1V]], <2 x double> [[A2:%.*]], i8 [[MASK:%.*]], i32 4)
 ; CHECK-NEXT:    store <2 x i64> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <2 x double> [[RES]]
@@ -4780,7 +4788,8 @@ define <2 x double> @test_maskz_add_sd_current_memfold(<2 x double> %a0, ptr %a1
 ; CHECK-NEXT:    [[A1_VAL:%.*]] = load double, ptr [[A1:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A1]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <2 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[A1V0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM:%.*]], double [[A1_VAL]], i32 0
@@ -4793,11 +4802,11 @@ define <2 x double> @test_maskz_add_sd_current_memfold(<2 x double> %a0, ptr %a1
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i8 [[TMP3]], 0
 ; CHECK-NEXT:    [[_MSOR5:%.*]] = or i1 [[_MSOR]], [[_MSCMP4]]
-; CHECK-NEXT:    br i1 [[_MSOR5]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; CHECK:       12:
+; CHECK-NEXT:    br i1 [[_MSOR5]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
+; CHECK:       13:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       13:
+; CHECK:       14:
 ; CHECK-NEXT:    [[RES:%.*]] = call <2 x double> @llvm.x86.avx512.mask.add.sd.round(<2 x double> [[A0:%.*]], <2 x double> [[A1V]], <2 x double> zeroinitializer, i8 [[MASK:%.*]], i32 4)
 ; CHECK-NEXT:    store <2 x i64> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <2 x double> [[RES]]
@@ -4986,7 +4995,8 @@ define <4 x float> @test_mask_max_ss_memfold(<4 x float> %a0, ptr %a1, <4 x floa
 ; CHECK-NEXT:    [[A1_VAL:%.*]] = load float, ptr [[A1:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[A1]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP5]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[A1V0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[A1_VAL]], i32 0
@@ -5006,11 +5016,11 @@ define <4 x float> @test_mask_max_ss_memfold(<4 x float> %a0, ptr %a1, <4 x floa
 ; CHECK-NEXT:    [[_MSOR7:%.*]] = or i1 [[_MSOR]], [[_MSCMP6]]
 ; CHECK-NEXT:    [[_MSCMP8:%.*]] = icmp ne i8 [[TMP4]], 0
 ; CHECK-NEXT:    [[_MSOR9:%.*]] = or i1 [[_MSOR7]], [[_MSCMP8]]
-; CHECK-NEXT:    br i1 [[_MSOR9]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
-; CHECK:       14:
+; CHECK-NEXT:    br i1 [[_MSOR9]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
+; CHECK:       15:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       15:
+; CHECK:       16:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.avx512.mask.max.ss.round(<4 x float> [[A0:%.*]], <4 x float> [[A1V]], <4 x float> [[A2:%.*]], i8 [[MASK:%.*]], i32 4)
 ; CHECK-NEXT:    store <4 x i32> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES]]
@@ -5040,7 +5050,8 @@ define <4 x float> @test_maskz_max_ss_memfold(<4 x float> %a0, ptr %a1, i8 %mask
 ; CHECK-NEXT:    [[A1_VAL:%.*]] = load float, ptr [[A1:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A1]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP4]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[A1V0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[A1_VAL]], i32 0
@@ -5057,11 +5068,11 @@ define <4 x float> @test_maskz_max_ss_memfold(<4 x float> %a0, ptr %a1, i8 %mask
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i8 [[TMP3]], 0
 ; CHECK-NEXT:    [[_MSOR7:%.*]] = or i1 [[_MSOR]], [[_MSCMP6]]
-; CHECK-NEXT:    br i1 [[_MSOR7]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; CHECK:       12:
+; CHECK-NEXT:    br i1 [[_MSOR7]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
+; CHECK:       13:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       13:
+; CHECK:       14:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.avx512.mask.max.ss.round(<4 x float> [[A0:%.*]], <4 x float> [[A1V]], <4 x float> zeroinitializer, i8 [[MASK:%.*]], i32 4)
 ; CHECK-NEXT:    store <4 x i32> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES]]
@@ -5251,7 +5262,8 @@ define <2 x double> @test_mask_max_sd_memfold(<2 x double> %a0, ptr %a1, <2 x do
 ; CHECK-NEXT:    [[A1_VAL:%.*]] = load double, ptr [[A1:%.*]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[A1]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <2 x i64> [[TMP5]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[A1V0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM:%.*]], double [[A1_VAL]], i32 0
@@ -5267,11 +5279,11 @@ define <2 x double> @test_mask_max_sd_memfold(<2 x double> %a0, ptr %a1, <2 x do
 ; CHECK-NEXT:    [[_MSOR5:%.*]] = or i1 [[_MSOR]], [[_MSCMP4]]
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i8 [[TMP4]], 0
 ; CHECK-NEXT:    [[_MSOR7:%.*]] = or i1 [[_MSOR5]], [[_MSCMP6]]
-; CHECK-NEXT:    br i1 [[_MSOR7]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
-; CHECK:       14:
+; CHECK-NEXT:    br i1 [[_MSOR7]], label [[TMP15:%.*]], label [[TMP16:%.*]], !prof [[PROF1]]
+; CHECK:       15:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       15:
+; CHECK:       16:
 ; CHECK-NEXT:    [[RES:%.*]] = call <2 x double> @llvm.x86.avx512.mask.max.sd.round(<2 x double> [[A0:%.*]], <2 x double> [[A1V]], <2 x double> [[A2:%.*]], i8 [[MASK:%.*]], i32 4)
 ; CHECK-NEXT:    store <2 x i64> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <2 x double> [[RES]]
@@ -5299,7 +5311,8 @@ define <2 x double> @test_maskz_max_sd_memfold(<2 x double> %a0, ptr %a1, i8 %ma
 ; CHECK-NEXT:    [[A1_VAL:%.*]] = load double, ptr [[A1:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A1]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <2 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[A1V0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM:%.*]], double [[A1_VAL]], i32 0
@@ -5312,11 +5325,11 @@ define <2 x double> @test_maskz_max_sd_memfold(<2 x double> %a0, ptr %a1, i8 %ma
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i8 [[TMP3]], 0
 ; CHECK-NEXT:    [[_MSOR5:%.*]] = or i1 [[_MSOR]], [[_MSCMP4]]
-; CHECK-NEXT:    br i1 [[_MSOR5]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; CHECK:       12:
+; CHECK-NEXT:    br i1 [[_MSOR5]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
+; CHECK:       13:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       13:
+; CHECK:       14:
 ; CHECK-NEXT:    [[RES:%.*]] = call <2 x double> @llvm.x86.avx512.mask.max.sd.round(<2 x double> [[A0:%.*]], <2 x double> [[A1V]], <2 x double> zeroinitializer, i8 [[MASK:%.*]], i32 4)
 ; CHECK-NEXT:    store <2 x i64> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <2 x double> [[RES]]
@@ -5385,15 +5398,16 @@ define <4 x float> @test_x86_avx512__mm_cvt_roundu32_ss_mem(<4 x float> %a, ptr
 ; CHECK-NEXT:    [[B:%.*]] = load i32, ptr [[PTR:%.*]], align 4
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 4
 ; CHECK-NEXT:    [[TMP8:%.*]] = insertelement <4 x i32> [[TMP2]], i32 0, i32 0
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i32 [[_MSLD]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; CHECK:       9:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; CHECK:       10:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       10:
+; CHECK:       11:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.avx512.cvtusi2ss(<4 x float> [[A:%.*]], i32 [[B]], i32 9)
 ; CHECK-NEXT:    store <4 x i32> [[TMP8]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES]]
@@ -5437,15 +5451,16 @@ define <4 x float> @test_x86_avx512__mm_cvtu32_ss_mem(<4 x float> %a, ptr %ptr)
 ; CHECK-NEXT:    [[B:%.*]] = load i32, ptr [[PTR:%.*]], align 4
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 4
 ; CHECK-NEXT:    [[TMP8:%.*]] = insertelement <4 x i32> [[TMP2]], i32 0, i32 0
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i32 [[_MSLD]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; CHECK:       9:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; CHECK:       10:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       10:
+; CHECK:       11:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.avx512.cvtusi2ss(<4 x float> [[A:%.*]], i32 [[B]], i32 4)
 ; CHECK-NEXT:    store <4 x i32> [[TMP8]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES]]
@@ -5473,17 +5488,18 @@ define <16 x i32>@test_int_x86_avx512_vpermi2var_d_512(<16 x i32> %x0, <16 x i32
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP13:%.*]] = trunc <16 x i32> [[X1]] to <16 x i4>
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[TMP2]], <16 x i32> [[X3:%.*]], <16 x i32> [[_MSLD]])
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <16 x i4> [[TMP13]] to i64
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i64 [[TMP10]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP12:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       12:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP15:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
+; CHECK:       13:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       13:
+; CHECK:       14:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[X0:%.*]], <16 x i32> [[X3]], <16 x i32> [[X2]])
 ; CHECK-NEXT:    store <16 x i32> [[_MSPROP1]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x i32> [[TMP9]]
@@ -5509,17 +5525,18 @@ define <16 x i32>@test_int_x86_avx512_mask_vpermi2var_d_512(<16 x i32> %x0, <16
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP18:%.*]] = trunc <16 x i32> [[TMP3]] to <16 x i4>
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[TMP2]], <16 x i32> [[X1:%.*]], <16 x i32> [[_MSLD]])
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast <16 x i4> [[TMP18]] to i64
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i64 [[TMP19]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP20:%.*]], label [[TMP21:%.*]], !prof [[PROF1]]
-; CHECK:       13:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP21:%.*]], label [[TMP22:%.*]], !prof [[PROF1]]
+; CHECK:       14:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       14:
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[X0:%.*]], <16 x i32> [[X1]], <16 x i32> [[X2]])
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast i16 [[TMP4]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP12:%.*]] = bitcast i16 [[X3:%.*]] to <16 x i1>
@@ -5752,17 +5769,18 @@ define <16 x i32>@test_int_x86_avx512_maskz_vpermt2var_d_512(<16 x i32> %x0, <16
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP18:%.*]] = trunc <16 x i32> [[X0]] to <16 x i4>
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[TMP2]], <16 x i32> [[X4:%.*]], <16 x i32> [[_MSLD]])
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast <16 x i4> [[TMP18]] to i64
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i64 [[TMP19]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP20:%.*]], label [[TMP21:%.*]], !prof [[PROF1]]
-; CHECK:       13:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP21:%.*]], label [[TMP22:%.*]], !prof [[PROF1]]
+; CHECK:       14:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       14:
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <16 x i32> @llvm.x86.avx512.vpermi2var.d.512(<16 x i32> [[X1:%.*]], <16 x i32> [[X4]], <16 x i32> [[X2]])
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast i16 [[TMP4]] to <16 x i1>
 ; CHECK-NEXT:    [[TMP12:%.*]] = bitcast i16 [[X3:%.*]] to <16 x i1>
@@ -5800,7 +5818,8 @@ define <8 x double>@test_int_x86_avx512_maskz_vpermt2var_pd_512(<8 x i64> %x0, <
 ; CHECK-NEXT:    [[X2S:%.*]] = load double, ptr [[X2PTR:%.*]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2PTR]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP27:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP27]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i64> [[TMP5]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[X2INS:%.*]] = insertelement <8 x double> [[EXTRA_PARAM:%.*]], double [[X2S]], i32 0
@@ -5813,11 +5832,11 @@ define <8 x double>@test_int_x86_avx512_maskz_vpermt2var_pd_512(<8 x i64> %x0, <
 ; CHECK-NEXT:    [[TMP25:%.*]] = bitcast <8 x double> [[TMP14]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP26:%.*]] = bitcast <8 x i3> [[TMP11]] to i24
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i24 [[TMP26]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP27:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
-; CHECK:       18:
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP28:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
+; CHECK:       19:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       19:
+; CHECK:       20:
 ; CHECK-NEXT:    [[TMP15:%.*]] = call <8 x double> @llvm.x86.avx512.vpermi2var.pd.512(<8 x double> [[X1:%.*]], <8 x i64> [[X4]], <8 x double> [[X2]])
 ; CHECK-NEXT:    [[TMP16:%.*]] = bitcast i8 [[TMP4]] to <8 x i1>
 ; CHECK-NEXT:    [[TMP17:%.*]] = bitcast i8 [[X3:%.*]] to <8 x i1>
@@ -8426,7 +8445,8 @@ define <4 x float> @test_int_x86_avx512_mask_getmant_ss_load(<4 x float> %x0, pt
 ; CHECK-NEXT:    [[X1:%.*]] = load <4 x float>, ptr [[X1P:%.*]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[X1P]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <4 x i32> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP8]], 0
@@ -8436,11 +8456,11 @@ define <4 x float> @test_int_x86_avx512_mask_getmant_ss_load(<4 x float> %x0, pt
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast <4 x i32> [[TMP3]] to i128
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i128 [[TMP11]], 0
 ; CHECK-NEXT:    [[_MSOR4:%.*]] = or i1 [[_MSOR]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR4]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; CHECK:       12:
+; CHECK-NEXT:    br i1 [[_MSOR4]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
+; CHECK:       13:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       13:
+; CHECK:       14:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.avx512.mask.getmant.ss(<4 x float> [[X0:%.*]], <4 x float> [[X1]], i32 11, <4 x float> [[EXTRA_PARAM:%.*]], i8 -1, i32 4)
 ; CHECK-NEXT:    store <4 x i32> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES]]
@@ -9648,7 +9668,8 @@ define <8 x double>@test_int_x86_avx512_mask_fixupimm_pd_512_load(<8 x double> %
 ; CHECK-NEXT:    [[X2:%.*]] = load <8 x i64>, ptr [[X2PTR:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[X2PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <8 x i64> [[TMP2]] to i512
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i512 [[TMP9]], 0
@@ -9658,11 +9679,11 @@ define <8 x double>@test_int_x86_avx512_mask_fixupimm_pd_512_load(<8 x double> %
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast <8 x i64> [[_MSLD]] to i512
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i512 [[TMP11]], 0
 ; CHECK-NEXT:    [[_MSOR4:%.*]] = or i1 [[_MSOR]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR4]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; CHECK:       12:
+; CHECK-NEXT:    br i1 [[_MSOR4]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
+; CHECK:       13:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       13:
+; CHECK:       14:
 ; CHECK-NEXT:    [[RES:%.*]] = call <8 x double> @llvm.x86.avx512.mask.fixupimm.pd.512(<8 x double> [[X0:%.*]], <8 x double> [[X1:%.*]], <8 x i64> [[X2]], i32 3, i8 -1, i32 4)
 ; CHECK-NEXT:    store <8 x i64> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x double> [[RES]]
@@ -9947,7 +9968,8 @@ define <16 x float>@test_int_x86_avx512_mask_fixupimm_ps_512_load(<16 x float> %
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2PTR:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[X2PTR]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i32> [[TMP2]] to i512
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i512 [[TMP9]], 0
@@ -9957,11 +9979,11 @@ define <16 x float>@test_int_x86_avx512_mask_fixupimm_ps_512_load(<16 x float> %
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast <16 x i32> [[_MSLD]] to i512
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i512 [[TMP11]], 0
 ; CHECK-NEXT:    [[_MSOR4:%.*]] = or i1 [[_MSOR]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR4]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; CHECK:       12:
+; CHECK-NEXT:    br i1 [[_MSOR4]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
+; CHECK:       13:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       13:
+; CHECK:       14:
 ; CHECK-NEXT:    [[RES:%.*]] = call <16 x float> @llvm.x86.avx512.mask.fixupimm.ps.512(<16 x float> [[X0:%.*]], <16 x float> [[X1:%.*]], <16 x i32> [[X2]], i32 5, i16 -1, i32 4)
 ; CHECK-NEXT:    store <16 x i32> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <16 x float> [[RES]]
@@ -10526,7 +10548,8 @@ define <4 x float> @test_int_x86_avx512_maskz_vfmadd_ss_load0(i8 zeroext %0, ptr
 ; CHECK-NEXT:    [[TMP11:%.*]] = load <4 x float>, ptr [[TMP1:%.*]], align 16
 ; CHECK-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[TMP1]] to i64
 ; CHECK-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP27:%.*]] = add i64 [[TMP13]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP27]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP14]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = extractelement <4 x i32> [[_MSLD]], i64 0
 ; CHECK-NEXT:    [[TMP15:%.*]] = extractelement <4 x float> [[TMP11]], i64 0
@@ -10794,7 +10817,7 @@ define void @fmadd_ss_mask_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_par
 ; CHECK-NEXT:    [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__msan_param_tls, i64 16), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
@@ -10802,7 +10825,8 @@ define void @fmadd_ss_mask_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_par
 ; CHECK-NEXT:    [[A_VAL:%.*]] = load float, ptr [[A:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP4]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[AV0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[A_VAL]], i32 0
@@ -10813,15 +10837,16 @@ define void @fmadd_ss_mask_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_par
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <4 x i32> [[_MSPROP2]], i32 0, i32 3
 ; CHECK-NEXT:    [[AV:%.*]] = insertelement <4 x float> [[AV2]], float 0.000000e+00, i32 3
 ; CHECK-NEXT:    [[_MSCMP17:%.*]] = icmp ne i64 [[TMP2]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP17]], label [[TMP29:%.*]], label [[TMP30:%.*]], !prof [[PROF1]]
-; CHECK:       11:
+; CHECK-NEXT:    br i1 [[_MSCMP17]], label [[TMP30:%.*]], label [[TMP35:%.*]], !prof [[PROF1]]
+; CHECK:       12:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       12:
+; CHECK:       13:
 ; CHECK-NEXT:    [[B_VAL:%.*]] = load float, ptr [[B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP36:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP36]] to ptr
 ; CHECK-NEXT:    [[_MSLD4:%.*]] = load i32, ptr [[TMP13]], align 4
 ; CHECK-NEXT:    [[_MSPROP5:%.*]] = insertelement <4 x i32> [[TMP5]], i32 [[_MSLD4]], i32 0
 ; CHECK-NEXT:    [[BV0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM2:%.*]], float [[B_VAL]], i32 0
@@ -10857,14 +10882,15 @@ define void @fmadd_ss_mask_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_par
 ; CHECK-NEXT:    [[_MSPROP16:%.*]] = extractelement <4 x i32> [[_MSPROP15]], i32 0
 ; CHECK-NEXT:    [[SR:%.*]] = extractelement <4 x float> [[TMP28]], i32 0
 ; CHECK-NEXT:    [[_MSCMP18:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP18]], label [[TMP34:%.*]], label [[TMP35:%.*]], !prof [[PROF1]]
-; CHECK:       31:
+; CHECK-NEXT:    br i1 [[_MSCMP18]], label [[TMP38:%.*]], label [[TMP34:%.*]], !prof [[PROF1]]
+; CHECK:       33:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       32:
+; CHECK:       34:
 ; CHECK-NEXT:    [[TMP31:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP32:%.*]] = xor i64 [[TMP31]], 87960930222080
-; CHECK-NEXT:    [[TMP33:%.*]] = inttoptr i64 [[TMP32]] to ptr
+; CHECK-NEXT:    [[TMP37:%.*]] = add i64 [[TMP32]], 2097152
+; CHECK-NEXT:    [[TMP33:%.*]] = inttoptr i64 [[TMP37]] to ptr
 ; CHECK-NEXT:    store i32 [[_MSPROP16]], ptr [[TMP33]], align 4
 ; CHECK-NEXT:    store float [[SR]], ptr [[A]], align 4
 ; CHECK-NEXT:    ret void
@@ -10902,7 +10928,7 @@ define void @fmadd_ss_maskz_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_pa
 ; CHECK-NEXT:    [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__msan_param_tls, i64 16), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
@@ -10910,7 +10936,8 @@ define void @fmadd_ss_maskz_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_pa
 ; CHECK-NEXT:    [[A_VAL:%.*]] = load float, ptr [[A:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP4]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[AV0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[A_VAL]], i32 0
@@ -10921,15 +10948,16 @@ define void @fmadd_ss_maskz_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_pa
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <4 x i32> [[_MSPROP2]], i32 0, i32 3
 ; CHECK-NEXT:    [[AV:%.*]] = insertelement <4 x float> [[AV2]], float 0.000000e+00, i32 3
 ; CHECK-NEXT:    [[_MSCMP17:%.*]] = icmp ne i64 [[TMP2]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP17]], label [[TMP28:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
-; CHECK:       11:
+; CHECK-NEXT:    br i1 [[_MSCMP17]], label [[TMP29:%.*]], label [[TMP34:%.*]], !prof [[PROF1]]
+; CHECK:       12:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       12:
+; CHECK:       13:
 ; CHECK-NEXT:    [[B_VAL:%.*]] = load float, ptr [[B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP35:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP35]] to ptr
 ; CHECK-NEXT:    [[_MSLD4:%.*]] = load i32, ptr [[TMP13]], align 4
 ; CHECK-NEXT:    [[_MSPROP5:%.*]] = insertelement <4 x i32> [[TMP5]], i32 [[_MSLD4]], i32 0
 ; CHECK-NEXT:    [[BV0:%.*]] = insertelement <4 x float> [[EXTRA_PARAM2:%.*]], float [[B_VAL]], i32 0
@@ -10964,14 +10992,15 @@ define void @fmadd_ss_maskz_memfold(ptr %a, ptr %b, i8 %c, <4 x float> %extra_pa
 ; CHECK-NEXT:    [[_MSPROP16:%.*]] = extractelement <4 x i32> [[_MSPROP15]], i32 0
 ; CHECK-NEXT:    [[SR:%.*]] = extractelement <4 x float> [[TMP27]], i32 0
 ; CHECK-NEXT:    [[_MSCMP18:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP18]], label [[TMP33:%.*]], label [[TMP34:%.*]], !prof [[PROF1]]
-; CHECK:       30:
+; CHECK-NEXT:    br i1 [[_MSCMP18]], label [[TMP37:%.*]], label [[TMP33:%.*]], !prof [[PROF1]]
+; CHECK:       32:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       31:
+; CHECK:       33:
 ; CHECK-NEXT:    [[TMP30:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP31:%.*]] = xor i64 [[TMP30]], 87960930222080
-; CHECK-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP31]] to ptr
+; CHECK-NEXT:    [[TMP36:%.*]] = add i64 [[TMP31]], 2097152
+; CHECK-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP36]] to ptr
 ; CHECK-NEXT:    store i32 [[_MSPROP16]], ptr [[TMP32]], align 4
 ; CHECK-NEXT:    store float [[SR]], ptr [[A]], align 4
 ; CHECK-NEXT:    ret void
@@ -11009,7 +11038,7 @@ define void @fmadd_sd_mask_memfold(ptr %a, ptr %b, i8 %c, <2 x double> %extra_pa
 ; CHECK-NEXT:    [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__msan_param_tls, i64 16), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
@@ -11017,22 +11046,24 @@ define void @fmadd_sd_mask_memfold(ptr %a, ptr %b, i8 %c, <2 x double> %extra_pa
 ; CHECK-NEXT:    [[A_VAL:%.*]] = load double, ptr [[A:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <2 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[AV0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM:%.*]], double [[A_VAL]], i32 0
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = insertelement <2 x i64> [[_MSPROP]], i64 0, i32 1
 ; CHECK-NEXT:    [[AV:%.*]] = insertelement <2 x double> [[AV0]], double 0.000000e+00, i32 1
 ; CHECK-NEXT:    [[_MSCMP13:%.*]] = icmp ne i64 [[TMP2]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP13]], label [[TMP29:%.*]], label [[TMP30:%.*]], !prof [[PROF1]]
-; CHECK:       11:
+; CHECK-NEXT:    br i1 [[_MSCMP13]], label [[TMP30:%.*]], label [[TMP35:%.*]], !prof [[PROF1]]
+; CHECK:       12:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       12:
+; CHECK:       13:
 ; CHECK-NEXT:    [[B_VAL:%.*]] = load double, ptr [[B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP36:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP36]] to ptr
 ; CHECK-NEXT:    [[_MSLD2:%.*]] = load i64, ptr [[TMP13]], align 8
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <2 x i64> [[TMP5]], i64 [[_MSLD2]], i32 0
 ; CHECK-NEXT:    [[BV0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM2:%.*]], double [[B_VAL]], i32 0
@@ -11064,14 +11095,15 @@ define void @fmadd_sd_mask_memfold(ptr %a, ptr %b, i8 %c, <2 x double> %extra_pa
 ; CHECK-NEXT:    [[_MSPROP12:%.*]] = extractelement <2 x i64> [[_MSPROP11]], i32 0
 ; CHECK-NEXT:    [[SR:%.*]] = extractelement <2 x double> [[TMP28]], i32 0
 ; CHECK-NEXT:    [[_MSCMP14:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP14]], label [[TMP34:%.*]], label [[TMP35:%.*]], !prof [[PROF1]]
-; CHECK:       31:
+; CHECK-NEXT:    br i1 [[_MSCMP14]], label [[TMP38:%.*]], label [[TMP34:%.*]], !prof [[PROF1]]
+; CHECK:       33:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       32:
+; CHECK:       34:
 ; CHECK-NEXT:    [[TMP31:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP32:%.*]] = xor i64 [[TMP31]], 87960930222080
-; CHECK-NEXT:    [[TMP33:%.*]] = inttoptr i64 [[TMP32]] to ptr
+; CHECK-NEXT:    [[TMP37:%.*]] = add i64 [[TMP32]], 2097152
+; CHECK-NEXT:    [[TMP33:%.*]] = inttoptr i64 [[TMP37]] to ptr
 ; CHECK-NEXT:    store i64 [[_MSPROP12]], ptr [[TMP33]], align 8
 ; CHECK-NEXT:    store double [[SR]], ptr [[A]], align 8
 ; CHECK-NEXT:    ret void
@@ -11105,7 +11137,7 @@ define void @fmadd_sd_maskz_memfold(ptr %a, ptr %b, i8 %c, <2x double> %extra_pa
 ; CHECK-NEXT:    [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__msan_param_tls, i64 16), align 8
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP9:%.*]], label [[TMP28:%.*]], !prof [[PROF1]]
 ; CHECK:       6:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
@@ -11113,22 +11145,24 @@ define void @fmadd_sd_maskz_memfold(ptr %a, ptr %b, i8 %c, <2x double> %extra_pa
 ; CHECK-NEXT:    [[A_VAL:%.*]] = load double, ptr [[A:%.*]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <2 x i64> [[TMP4]], i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[AV0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM:%.*]], double [[A_VAL]], i32 0
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = insertelement <2 x i64> [[_MSPROP]], i64 0, i32 1
 ; CHECK-NEXT:    [[AV:%.*]] = insertelement <2 x double> [[AV0]], double 0.000000e+00, i32 1
 ; CHECK-NEXT:    [[_MSCMP13:%.*]] = icmp ne i64 [[TMP2]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP13]], label [[TMP28:%.*]], label [[TMP29:%.*]], !prof [[PROF1]]
-; CHECK:       11:
+; CHECK-NEXT:    br i1 [[_MSCMP13]], label [[TMP29:%.*]], label [[TMP34:%.*]], !prof [[PROF1]]
+; CHECK:       12:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       12:
+; CHECK:       13:
 ; CHECK-NEXT:    [[B_VAL:%.*]] = load double, ptr [[B:%.*]], align 8
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP35:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP35]] to ptr
 ; CHECK-NEXT:    [[_MSLD2:%.*]] = load i64, ptr [[TMP13]], align 8
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <2 x i64> [[TMP5]], i64 [[_MSLD2]], i32 0
 ; CHECK-NEXT:    [[BV0:%.*]] = insertelement <2 x double> [[EXTRA_PARAM2:%.*]], double [[B_VAL]], i32 0
@@ -11159,14 +11193,15 @@ define void @fmadd_sd_maskz_memfold(ptr %a, ptr %b, i8 %c, <2x double> %extra_pa
 ; CHECK-NEXT:    [[_MSPROP12:%.*]] = extractelement <2 x i64> [[_MSPROP11]], i32 0
 ; CHECK-NEXT:    [[SR:%.*]] = extractelement <2 x double> [[TMP27]], i32 0
 ; CHECK-NEXT:    [[_MSCMP14:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP14]], label [[TMP33:%.*]], label [[TMP34:%.*]], !prof [[PROF1]]
-; CHECK:       30:
+; CHECK-NEXT:    br i1 [[_MSCMP14]], label [[TMP37:%.*]], label [[TMP33:%.*]], !prof [[PROF1]]
+; CHECK:       32:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR10]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       31:
+; CHECK:       33:
 ; CHECK-NEXT:    [[TMP30:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP31:%.*]] = xor i64 [[TMP30]], 87960930222080
-; CHECK-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP31]] to ptr
+; CHECK-NEXT:    [[TMP36:%.*]] = add i64 [[TMP31]], 2097152
+; CHECK-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP36]] to ptr
 ; CHECK-NEXT:    store i64 [[_MSPROP12]], ptr [[TMP32]], align 8
 ; CHECK-NEXT:    store double [[SR]], ptr [[A]], align 8
 ; CHECK-NEXT:    ret void
@@ -11732,7 +11767,8 @@ define <4 x float>@test_int_x86_avx512_mask3_vfmadd_ss_rm(<4 x float> %x0, <4 x
 ; CHECK-NEXT:    [[Q:%.*]] = load float, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP26:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP26]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP5]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[Q]], i32 0
@@ -11792,7 +11828,8 @@ define <4 x float>@test_int_x86_avx512_mask_vfmadd_ss_rm(<4 x float> %x0, <4 x f
 ; CHECK-NEXT:    [[Q:%.*]] = load float, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP26:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP26]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP5]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[Q]], i32 0
@@ -11852,7 +11889,8 @@ define <4 x float>@test_int_x86_avx512_maskz_vfmadd_ss_rm(<4 x float> %x0, <4 x
 ; CHECK-NEXT:    [[Q:%.*]] = load float, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP21:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP21]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> [[TMP4]], i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <4 x float> [[EXTRA_PARAM:%.*]], float [[Q]], i32 0
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bf16-mov.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bf16-mov.ll
index ac65645a9ec2c..3c7749b842d1c 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bf16-mov.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bf16-mov.ll
@@ -26,83 +26,91 @@ define dso_local void @funbf16(ptr readonly %src, ptr writeonly %dst) sanitize_m
 ; CHECK-NEXT:    [[TMP4:%.*]] = load <8 x bfloat>, ptr [[SRC]], align 1
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[SRC]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP7]], align 1
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP4]], label %[[BB8:.*]], label %[[BB9:.*]], !prof [[PROF1]]
-; CHECK:       [[BB8]]:
+; CHECK-NEXT:    br i1 [[_MSCMP4]], label %[[BB9:.*]], label %[[BB10:.*]], !prof [[PROF1]]
+; CHECK:       [[BB9]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR3]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB9]]:
+; CHECK:       [[BB10]]:
 ; CHECK-NEXT:    [[TMP10:%.*]] = ptrtoint ptr [[DST]] to i64
 ; CHECK-NEXT:    [[TMP11:%.*]] = xor i64 [[TMP10]], 87960930222080
-; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP11]], 2097152
+; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    store <8 x i16> [[_MSLD]], ptr [[TMP12]], align 1
 ; CHECK-NEXT:    store <8 x bfloat> [[TMP4]], ptr [[DST]], align 1
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i64 [[TMP0]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP5]], label %[[BB13:.*]], label %[[BB14:.*]], !prof [[PROF1]]
-; CHECK:       [[BB13]]:
+; CHECK-NEXT:    br i1 [[_MSCMP5]], label %[[BB15:.*]], label %[[BB16:.*]], !prof [[PROF1]]
+; CHECK:       [[BB15]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR3]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB14]]:
+; CHECK:       [[BB16]]:
 ; CHECK-NEXT:    [[TMP15:%.*]] = load <8 x bfloat>, ptr [[SRC]], align 32
 ; CHECK-NEXT:    [[TMP16:%.*]] = ptrtoint ptr [[SRC]] to i64
 ; CHECK-NEXT:    [[TMP17:%.*]] = xor i64 [[TMP16]], 87960930222080
-; CHECK-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP17]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP17]], 2097152
+; CHECK-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    [[_MSLD1:%.*]] = load <8 x i16>, ptr [[TMP18]], align 32
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB19:.*]], label %[[BB20:.*]], !prof [[PROF1]]
-; CHECK:       [[BB19]]:
+; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB22:.*]], label %[[BB23:.*]], !prof [[PROF1]]
+; CHECK:       [[BB22]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR3]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB20]]:
+; CHECK:       [[BB23]]:
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[DST]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP30:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP30]] to ptr
 ; CHECK-NEXT:    store <8 x i16> [[_MSLD1]], ptr [[TMP23]], align 32
 ; CHECK-NEXT:    store <8 x bfloat> [[TMP15]], ptr [[DST]], align 32
 ; CHECK-NEXT:    [[_MSCMP7:%.*]] = icmp ne i64 [[TMP0]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP7]], label %[[BB24:.*]], label %[[BB25:.*]], !prof [[PROF1]]
-; CHECK:       [[BB24]]:
+; CHECK-NEXT:    br i1 [[_MSCMP7]], label %[[BB28:.*]], label %[[BB29:.*]], !prof [[PROF1]]
+; CHECK:       [[BB28]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR3]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB25]]:
+; CHECK:       [[BB29]]:
 ; CHECK-NEXT:    [[TMP26:%.*]] = load <16 x bfloat>, ptr [[SRC]], align 1
 ; CHECK-NEXT:    [[TMP27:%.*]] = ptrtoint ptr [[SRC]] to i64
 ; CHECK-NEXT:    [[TMP28:%.*]] = xor i64 [[TMP27]], 87960930222080
-; CHECK-NEXT:    [[TMP29:%.*]] = inttoptr i64 [[TMP28]] to ptr
+; CHECK-NEXT:    [[TMP35:%.*]] = add i64 [[TMP28]], 2097152
+; CHECK-NEXT:    [[TMP29:%.*]] = inttoptr i64 [[TMP35]] to ptr
 ; CHECK-NEXT:    [[_MSLD2:%.*]] = load <16 x i16>, ptr [[TMP29]], align 1
 ; CHECK-NEXT:    [[_MSCMP8:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP8]], label %[[BB30:.*]], label %[[BB31:.*]], !prof [[PROF1]]
-; CHECK:       [[BB30]]:
+; CHECK-NEXT:    br i1 [[_MSCMP8]], label %[[BB35:.*]], label %[[BB36:.*]], !prof [[PROF1]]
+; CHECK:       [[BB35]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR3]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB31]]:
+; CHECK:       [[BB36]]:
 ; CHECK-NEXT:    [[TMP32:%.*]] = ptrtoint ptr [[DST]] to i64
 ; CHECK-NEXT:    [[TMP33:%.*]] = xor i64 [[TMP32]], 87960930222080
-; CHECK-NEXT:    [[TMP34:%.*]] = inttoptr i64 [[TMP33]] to ptr
+; CHECK-NEXT:    [[TMP41:%.*]] = add i64 [[TMP33]], 2097152
+; CHECK-NEXT:    [[TMP34:%.*]] = inttoptr i64 [[TMP41]] to ptr
 ; CHECK-NEXT:    store <16 x i16> [[_MSLD2]], ptr [[TMP34]], align 1
 ; CHECK-NEXT:    store <16 x bfloat> [[TMP26]], ptr [[DST]], align 1
 ; CHECK-NEXT:    [[_MSCMP9:%.*]] = icmp ne i64 [[TMP0]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB35:.*]], label %[[BB36:.*]], !prof [[PROF1]]
-; CHECK:       [[BB35]]:
+; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB41:.*]], label %[[BB42:.*]], !prof [[PROF1]]
+; CHECK:       [[BB41]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR3]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB36]]:
+; CHECK:       [[BB42]]:
 ; CHECK-NEXT:    [[TMP37:%.*]] = load <16 x bfloat>, ptr [[SRC]], align 32
 ; CHECK-NEXT:    [[TMP38:%.*]] = ptrtoint ptr [[SRC]] to i64
 ; CHECK-NEXT:    [[TMP39:%.*]] = xor i64 [[TMP38]], 87960930222080
-; CHECK-NEXT:    [[TMP40:%.*]] = inttoptr i64 [[TMP39]] to ptr
+; CHECK-NEXT:    [[TMP46:%.*]] = add i64 [[TMP39]], 2097152
+; CHECK-NEXT:    [[TMP40:%.*]] = inttoptr i64 [[TMP46]] to ptr
 ; CHECK-NEXT:    [[_MSLD3:%.*]] = load <16 x i16>, ptr [[TMP40]], align 32
 ; CHECK-NEXT:    [[_MSCMP10:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP10]], label %[[BB41:.*]], label %[[BB42:.*]], !prof [[PROF1]]
-; CHECK:       [[BB41]]:
+; CHECK-NEXT:    br i1 [[_MSCMP10]], label %[[BB48:.*]], label %[[BB49:.*]], !prof [[PROF1]]
+; CHECK:       [[BB48]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR3]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB42]]:
+; CHECK:       [[BB49]]:
 ; CHECK-NEXT:    [[TMP43:%.*]] = ptrtoint ptr [[DST]] to i64
 ; CHECK-NEXT:    [[TMP44:%.*]] = xor i64 [[TMP43]], 87960930222080
-; CHECK-NEXT:    [[TMP45:%.*]] = inttoptr i64 [[TMP44]] to ptr
+; CHECK-NEXT:    [[TMP52:%.*]] = add i64 [[TMP44]], 2097152
+; CHECK-NEXT:    [[TMP45:%.*]] = inttoptr i64 [[TMP52]] to ptr
 ; CHECK-NEXT:    store <16 x i16> [[_MSLD3]], ptr [[TMP45]], align 32
 ; CHECK-NEXT:    store <16 x bfloat> [[TMP37]], ptr [[DST]], align 32
 ; CHECK-NEXT:    ret void
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics-upgrade.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics-upgrade.ll
index fd4ad9604e4f5..b42086f8125cb 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics-upgrade.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics-upgrade.ll
@@ -196,27 +196,29 @@ define void @test_int_x86_avx512_mask_storeu_b_512(ptr %ptr1, ptr %ptr2, <64 x i
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast i64 [[X2:%.*]] to <64 x i1>
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR1:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v64i8.p0(<64 x i8> [[TMP2]], ptr align 1 [[TMP9]], <64 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <64 x i1> [[TMP5]] to i64
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i64 [[TMP10]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1:![0-9]+]]
-; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7:[0-9]+]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1:![0-9]+]]
 ; CHECK:       12:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8:[0-9]+]]
+; CHECK-NEXT:    unreachable
+; CHECK:       13:
 ; CHECK-NEXT:    call void @llvm.masked.store.v64i8.p0(<64 x i8> [[X1:%.*]], ptr align 1 [[PTR1]], <64 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP4]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP14:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store <64 x i8> [[TMP2]], ptr [[TMP17]], align 1
 ; CHECK-NEXT:    store <64 x i8> [[X1]], ptr [[PTR2]], align 1
 ; CHECK-NEXT:    ret void
@@ -239,27 +241,29 @@ define void @test_int_x86_avx512_mask_storeu_w_512(ptr %ptr1, ptr %ptr2, <32 x i
 ; CHECK-NEXT:    [[TMP6:%.*]] = bitcast i32 [[X2:%.*]] to <32 x i1>
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR1:%.*]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v32i16.p0(<32 x i16> [[TMP2]], ptr align 1 [[TMP9]], <32 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast <32 x i1> [[TMP5]] to i32
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i32 [[TMP10]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
-; CHECK:       11:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
 ; CHECK:       12:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    unreachable
+; CHECK:       13:
 ; CHECK-NEXT:    call void @llvm.masked.store.v32i16.p0(<32 x i16> [[X1:%.*]], ptr align 1 [[PTR1]], <32 x i1> [[TMP6]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP4]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP13:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; CHECK:       13:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
-; CHECK-NEXT:    unreachable
+; CHECK-NEXT:    br i1 [[_MSCMP2]], label [[TMP14:%.*]], label [[TMP19:%.*]], !prof [[PROF1]]
 ; CHECK:       14:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
+; CHECK-NEXT:    unreachable
+; CHECK:       15:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store <32 x i16> [[TMP2]], ptr [[TMP17]], align 1
 ; CHECK-NEXT:    store <32 x i16> [[X1]], ptr [[PTR2]], align 1
 ; CHECK-NEXT:    ret void
@@ -280,45 +284,48 @@ define { <32 x i16>, <32 x i16>, <32 x i16> } @test_int_x86_avx512_mask_loadu_w_
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[TMP6:%.*]] = load <32 x i16>, ptr [[PTR:%.*]], align 1
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP9]], align 1
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast i32 [[TMP2]] to <32 x i1>
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast i32 [[MASK:%.*]] to <32 x i1>
 ; CHECK-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP13]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD:%.*]] = call <32 x i16> @llvm.masked.load.v32i16.p0(ptr align 1 [[TMP14]], <32 x i1> [[TMP11]], <32 x i16> [[_MSLD]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP15:%.*]] = bitcast <32 x i1> [[TMP10]] to i32
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i32 [[TMP15]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP16:%.*]], label [[TMP17:%.*]], !prof [[PROF1]]
-; CHECK:       16:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP26:%.*]], label [[TMP31:%.*]], !prof [[PROF1]]
+; CHECK:       18:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       17:
+; CHECK:       19:
 ; CHECK-NEXT:    [[TMP18:%.*]] = call <32 x i16> @llvm.masked.load.v32i16.p0(ptr align 1 [[PTR2]], <32 x i1> [[TMP11]], <32 x i16> [[TMP6]])
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast i32 [[TMP2]] to <32 x i1>
 ; CHECK-NEXT:    [[TMP20:%.*]] = bitcast i32 [[MASK]] to <32 x i1>
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD1:%.*]] = call <32 x i16> @llvm.masked.load.v32i16.p0(ptr align 1 [[TMP23]], <32 x i1> [[TMP20]], <32 x i16> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP24:%.*]] = bitcast <32 x i1> [[TMP19]] to i32
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i32 [[TMP24]], 0
 ; CHECK-NEXT:    [[_MSOR6:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
-; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP25:%.*]], label [[TMP26:%.*]], !prof [[PROF1]]
-; CHECK:       25:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP32:%.*]], label [[TMP33:%.*]], !prof [[PROF1]]
+; CHECK:       28:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       26:
+; CHECK:       29:
 ; CHECK-NEXT:    [[TMP27:%.*]] = call <32 x i16> @llvm.masked.load.v32i16.p0(ptr align 1 [[PTR]], <32 x i1> [[TMP20]], <32 x i16> zeroinitializer)
 ; CHECK-NEXT:    [[TMP28:%.*]] = insertvalue { <32 x i16>, <32 x i16>, <32 x i16> } { <32 x i16> splat (i16 -1), <32 x i16> splat (i16 -1), <32 x i16> splat (i16 -1) }, <32 x i16> [[_MSLD]], 0
 ; CHECK-NEXT:    [[RES3:%.*]] = insertvalue { <32 x i16>, <32 x i16>, <32 x i16> } poison, <32 x i16> [[TMP6]], 0
@@ -349,45 +356,48 @@ define { <64 x i8>, <64 x i8>, <64 x i8> } @test_int_x86_avx512_mask_loadu_b_512
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[TMP6:%.*]] = load <64 x i8>, ptr [[PTR:%.*]], align 1
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP9]], align 1
 ; CHECK-NEXT:    [[TMP10:%.*]] = bitcast i64 [[TMP2]] to <64 x i1>
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast i64 [[MASK:%.*]] to <64 x i1>
 ; CHECK-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[PTR2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP13]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD:%.*]] = call <64 x i8> @llvm.masked.load.v64i8.p0(ptr align 1 [[TMP14]], <64 x i1> [[TMP11]], <64 x i8> [[_MSLD]])
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i64 [[TMP3]], 0
 ; CHECK-NEXT:    [[TMP15:%.*]] = bitcast <64 x i1> [[TMP10]] to i64
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i64 [[TMP15]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP2]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP16:%.*]], label [[TMP17:%.*]], !prof [[PROF1]]
-; CHECK:       16:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP26:%.*]], label [[TMP31:%.*]], !prof [[PROF1]]
+; CHECK:       18:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       17:
+; CHECK:       19:
 ; CHECK-NEXT:    [[TMP18:%.*]] = call <64 x i8> @llvm.masked.load.v64i8.p0(ptr align 1 [[PTR2]], <64 x i1> [[TMP11]], <64 x i8> [[TMP6]])
 ; CHECK-NEXT:    [[TMP19:%.*]] = bitcast i64 [[TMP2]] to <64 x i1>
 ; CHECK-NEXT:    [[TMP20:%.*]] = bitcast i64 [[MASK]] to <64 x i1>
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD1:%.*]] = call <64 x i8> @llvm.masked.load.v64i8.p0(ptr align 1 [[TMP23]], <64 x i1> [[TMP20]], <64 x i8> zeroinitializer)
 ; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[TMP24:%.*]] = bitcast <64 x i1> [[TMP19]] to i64
 ; CHECK-NEXT:    [[_MSCMP5:%.*]] = icmp ne i64 [[TMP24]], 0
 ; CHECK-NEXT:    [[_MSOR6:%.*]] = or i1 [[_MSCMP4]], [[_MSCMP5]]
-; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP25:%.*]], label [[TMP26:%.*]], !prof [[PROF1]]
-; CHECK:       25:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    br i1 [[_MSOR6]], label [[TMP32:%.*]], label [[TMP33:%.*]], !prof [[PROF1]]
+; CHECK:       28:
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       26:
+; CHECK:       29:
 ; CHECK-NEXT:    [[TMP27:%.*]] = call <64 x i8> @llvm.masked.load.v64i8.p0(ptr align 1 [[PTR]], <64 x i1> [[TMP20]], <64 x i8> zeroinitializer)
 ; CHECK-NEXT:    [[TMP28:%.*]] = insertvalue { <64 x i8>, <64 x i8>, <64 x i8> } { <64 x i8> splat (i8 -1), <64 x i8> splat (i8 -1), <64 x i8> splat (i8 -1) }, <64 x i8> [[_MSLD]], 0
 ; CHECK-NEXT:    [[RES3:%.*]] = insertvalue { <64 x i8>, <64 x i8>, <64 x i8> } poison, <64 x i8> [[TMP6]], 0
@@ -446,13 +456,14 @@ define <8 x i64> @test_int_x86_avx512_psll_load_dq_512(ptr %p0) nounwind #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP2:%.*]], label [[TMP3:%.*]], !prof [[PROF1]]
 ; CHECK:       2:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       3:
 ; CHECK-NEXT:    [[X0:%.*]] = load <8 x i64>, ptr [[P0:%.*]], align 64
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P0]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP6]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = bitcast <8 x i64> [[_MSLD]] to <64 x i8>
 ; CHECK-NEXT:    [[CAST:%.*]] = bitcast <8 x i64> [[X0]] to <64 x i8>
@@ -507,13 +518,14 @@ define <8 x i64> @test_int_x86_avx512_psrl_load_dq_512(ptr %p0) nounwind #0 {
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP2:%.*]], label [[TMP3:%.*]], !prof [[PROF1]]
 ; CHECK:       2:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       3:
 ; CHECK-NEXT:    [[X0:%.*]] = load <8 x i64>, ptr [[P0:%.*]], align 64
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P0]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP6]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = bitcast <8 x i64> [[_MSLD]] to <64 x i8>
 ; CHECK-NEXT:    [[CAST:%.*]] = bitcast <8 x i64> [[X0]] to <64 x i8>
@@ -2113,13 +2125,14 @@ define <32 x i16> @test_mask_packs_epi32_rm_512(<16 x i32> %a, ptr %ptr_b) nounw
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP13:%.*]] = sext <16 x i1> [[TMP8]] to <16 x i32>
@@ -2145,13 +2158,14 @@ define <32 x i16> @test_mask_packs_epi32_rmk_512(<16 x i32> %a, ptr %ptr_b, <32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP22:%.*]] = sext <16 x i1> [[TMP10]] to <16 x i32>
@@ -2184,13 +2198,14 @@ define <32 x i16> @test_mask_packs_epi32_rmkz_512(<16 x i32> %a, ptr %ptr_b, i32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP21:%.*]] = sext <16 x i1> [[TMP9]] to <16 x i32>
@@ -2222,13 +2237,14 @@ define <32 x i16> @test_mask_packs_epi32_rmb_512(<16 x i32> %a, ptr %ptr_b) noun
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -2260,13 +2276,14 @@ define <32 x i16> @test_mask_packs_epi32_rmbk_512(<16 x i32> %a, ptr %ptr_b, <32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -2305,13 +2322,14 @@ define <32 x i16> @test_mask_packs_epi32_rmbkz_512(<16 x i32> %a, ptr %ptr_b, i3
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -2424,13 +2442,14 @@ define <64 x i8> @test_mask_packs_epi16_rm_512(<32 x i16> %a, ptr %ptr_b) nounwi
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP13:%.*]] = sext <32 x i1> [[TMP8]] to <32 x i16>
@@ -2456,13 +2475,14 @@ define <64 x i8> @test_mask_packs_epi16_rmk_512(<32 x i16> %a, ptr %ptr_b, <64 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP22:%.*]] = sext <32 x i1> [[TMP10]] to <32 x i16>
@@ -2495,13 +2515,14 @@ define <64 x i8> @test_mask_packs_epi16_rmkz_512(<32 x i16> %a, ptr %ptr_b, i64
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP21:%.*]] = sext <32 x i1> [[TMP9]] to <32 x i16>
@@ -2609,13 +2630,14 @@ define <32 x i16> @test_mask_packus_epi32_rm_512(<16 x i32> %a, ptr %ptr_b) noun
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP13:%.*]] = sext <16 x i1> [[TMP8]] to <16 x i32>
@@ -2641,13 +2663,14 @@ define <32 x i16> @test_mask_packus_epi32_rmk_512(<16 x i32> %a, ptr %ptr_b, <32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP22:%.*]] = sext <16 x i1> [[TMP10]] to <16 x i32>
@@ -2680,13 +2703,14 @@ define <32 x i16> @test_mask_packus_epi32_rmkz_512(<16 x i32> %a, ptr %ptr_b, i3
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP21:%.*]] = sext <16 x i1> [[TMP9]] to <16 x i32>
@@ -2718,13 +2742,14 @@ define <32 x i16> @test_mask_packus_epi32_rmb_512(<16 x i32> %a, ptr %ptr_b) nou
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -2756,13 +2781,14 @@ define <32 x i16> @test_mask_packus_epi32_rmbk_512(<16 x i32> %a, ptr %ptr_b, <3
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -2801,13 +2827,14 @@ define <32 x i16> @test_mask_packus_epi32_rmbkz_512(<16 x i32> %a, ptr %ptr_b, i
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -2920,13 +2947,14 @@ define <64 x i8> @test_mask_packus_epi16_rm_512(<32 x i16> %a, ptr %ptr_b) nounw
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP13:%.*]] = sext <32 x i1> [[TMP8]] to <32 x i16>
@@ -2952,13 +2980,14 @@ define <64 x i8> @test_mask_packus_epi16_rmk_512(<32 x i16> %a, ptr %ptr_b, <64
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP22:%.*]] = sext <32 x i1> [[TMP10]] to <32 x i16>
@@ -2991,13 +3020,14 @@ define <64 x i8> @test_mask_packus_epi16_rmkz_512(<32 x i16> %a, ptr %ptr_b, i64
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP21:%.*]] = sext <32 x i1> [[TMP9]] to <32 x i16>
@@ -4975,7 +5005,7 @@ define <32 x i16> @test_int_x86_avx512_vpermt2var_hi_512(<32 x i16> %x0, <32 x i
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i160 [[TMP5]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP103:%.*]] = call <32 x i16> @llvm.x86.avx512.vpermi2var.hi.512(<32 x i16> [[X1:%.*]], <32 x i16> [[X3]], <32 x i16> [[X2:%.*]])
@@ -4999,7 +5029,7 @@ define <32 x i16> @test_int_x86_avx512_mask_vpermt2var_hi_512(<32 x i16> %x0, <3
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i160 [[TMP6]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP104:%.*]] = call <32 x i16> @llvm.x86.avx512.vpermi2var.hi.512(<32 x i16> [[X1:%.*]], <32 x i16> [[X4]], <32 x i16> [[X2:%.*]])
@@ -5033,7 +5063,7 @@ define <32 x i16> @test_int_x86_avx512_maskz_vpermt2var_hi_512(<32 x i16> %x0, <
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i160 [[TMP6]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP104:%.*]] = call <32 x i16> @llvm.x86.avx512.vpermi2var.hi.512(<32 x i16> [[X1:%.*]], <32 x i16> [[X4]], <32 x i16> [[X2:%.*]])
@@ -5066,7 +5096,7 @@ define <32 x i16> @test_int_x86_avx512_vpermi2var_hi_512(<32 x i16> %x0, <32 x i
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i160 [[TMP5]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP103:%.*]] = call <32 x i16> @llvm.x86.avx512.vpermi2var.hi.512(<32 x i16> [[X0:%.*]], <32 x i16> [[X3]], <32 x i16> [[X2:%.*]])
@@ -5090,7 +5120,7 @@ define <32 x i16> @test_int_x86_avx512_mask_vpermi2var_hi_512(<32 x i16> %x0, <3
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i160 [[TMP7]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
 ; CHECK:       8:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       9:
 ; CHECK-NEXT:    [[TMP104:%.*]] = call <32 x i16> @llvm.x86.avx512.vpermi2var.hi.512(<32 x i16> [[X0:%.*]], <32 x i16> [[X1]], <32 x i16> [[X2:%.*]])
@@ -5125,7 +5155,7 @@ define { <32 x i16>, <32 x i16>, <32 x i16> } @test_int_x86_avx512_mask_dbpsadbw
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
 ; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       8:
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <32 x i16> @llvm.x86.avx512.dbpsadbw.512(<64 x i8> [[X0:%.*]], <64 x i8> [[X1:%.*]], i32 2)
@@ -5144,7 +5174,7 @@ define { <32 x i16>, <32 x i16>, <32 x i16> } @test_int_x86_avx512_mask_dbpsadbw
 ; CHECK-NEXT:    [[_MSOR5:%.*]] = or i1 [[_MSCMP3]], [[_MSCMP4]]
 ; CHECK-NEXT:    br i1 [[_MSOR5]], label [[TMP19:%.*]], label [[TMP20:%.*]], !prof [[PROF1]]
 ; CHECK:       19:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       20:
 ; CHECK-NEXT:    [[TMP21:%.*]] = call <32 x i16> @llvm.x86.avx512.dbpsadbw.512(<64 x i8> [[X0]], <64 x i8> [[X1]], i32 3)
@@ -5163,7 +5193,7 @@ define { <32 x i16>, <32 x i16>, <32 x i16> } @test_int_x86_avx512_mask_dbpsadbw
 ; CHECK-NEXT:    [[_MSOR8:%.*]] = or i1 [[_MSCMP6]], [[_MSCMP7]]
 ; CHECK-NEXT:    br i1 [[_MSOR8]], label [[TMP31:%.*]], label [[TMP32:%.*]], !prof [[PROF1]]
 ; CHECK:       31:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       32:
 ; CHECK-NEXT:    [[TMP33:%.*]] = call <32 x i16> @llvm.x86.avx512.dbpsadbw.512(<64 x i8> [[X0]], <64 x i8> [[X1]], i32 4)
@@ -5254,13 +5284,14 @@ define <32 x i16> @test_mask_adds_epu16_rm_512(<32 x i16> %a, ptr %ptr_b) nounwi
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <32 x i16> @llvm.uadd.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -5282,13 +5313,14 @@ define <32 x i16> @test_mask_adds_epu16_rmk_512(<32 x i16> %a, ptr %ptr_b, <32 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <32 x i16> @llvm.uadd.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -5317,13 +5349,14 @@ define <32 x i16> @test_mask_adds_epu16_rmkz_512(<32 x i16> %a, ptr %ptr_b, i32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <32 x i16> @llvm.uadd.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -5414,13 +5447,14 @@ define <32 x i16> @test_mask_subs_epu16_rm_512(<32 x i16> %a, ptr %ptr_b) nounwi
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <32 x i16> @llvm.usub.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -5442,13 +5476,14 @@ define <32 x i16> @test_mask_subs_epu16_rmk_512(<32 x i16> %a, ptr %ptr_b, <32 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <32 x i16> @llvm.usub.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -5477,13 +5512,14 @@ define <32 x i16> @test_mask_subs_epu16_rmkz_512(<32 x i16> %a, ptr %ptr_b, i32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <32 x i16> @llvm.usub.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -5574,13 +5610,14 @@ define <64 x i8> @test_mask_adds_epu8_rm_512(<64 x i8> %a, ptr %ptr_b) nounwind
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <64 x i8> @llvm.uadd.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
@@ -5602,13 +5639,14 @@ define <64 x i8> @test_mask_adds_epu8_rmk_512(<64 x i8> %a, ptr %ptr_b, <64 x i8
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <64 x i8> @llvm.uadd.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
@@ -5637,13 +5675,14 @@ define <64 x i8> @test_mask_adds_epu8_rmkz_512(<64 x i8> %a, ptr %ptr_b, i64 %ma
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <64 x i8> @llvm.uadd.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
@@ -5734,13 +5773,14 @@ define <64 x i8> @test_mask_subs_epu8_rm_512(<64 x i8> %a, ptr %ptr_b) nounwind
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <64 x i8> @llvm.usub.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
@@ -5762,13 +5802,14 @@ define <64 x i8> @test_mask_subs_epu8_rmk_512(<64 x i8> %a, ptr %ptr_b, <64 x i8
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <64 x i8> @llvm.usub.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
@@ -5797,13 +5838,14 @@ define <64 x i8> @test_mask_subs_epu8_rmkz_512(<64 x i8> %a, ptr %ptr_b, i64 %ma
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <64 x i8> @llvm.usub.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
@@ -5898,13 +5940,14 @@ define <32 x i16> @test_adds_epi16_rm_512(<32 x i16> %a, ptr %ptr_b) nounwind #0
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <32 x i16> @llvm.sadd.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -5926,13 +5969,14 @@ define <32 x i16> @test_adds_epi16_rmk_512(<32 x i16> %a, ptr %ptr_b, <32 x i16>
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <32 x i16> @llvm.sadd.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -5963,13 +6007,14 @@ define <32 x i16> @test_adds_epi16_rmkz_512(<32 x i16> %a, ptr %ptr_b, i32 %mask
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <32 x i16> @llvm.sadd.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -6062,13 +6107,14 @@ define <32 x i16> @test_mask_adds_epi16_rm_512(<32 x i16> %a, ptr %ptr_b) nounwi
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <32 x i16> @llvm.sadd.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -6090,13 +6136,14 @@ define <32 x i16> @test_mask_adds_epi16_rmk_512(<32 x i16> %a, ptr %ptr_b, <32 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <32 x i16> @llvm.sadd.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -6125,13 +6172,14 @@ define <32 x i16> @test_mask_adds_epi16_rmkz_512(<32 x i16> %a, ptr %ptr_b, i32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <32 x i16> @llvm.sadd.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -6226,13 +6274,14 @@ define <32 x i16> @test_subs_epi16_rm_512(<32 x i16> %a, ptr %ptr_b) nounwind #0
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <32 x i16> @llvm.ssub.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -6254,13 +6303,14 @@ define <32 x i16> @test_subs_epi16_rmk_512(<32 x i16> %a, ptr %ptr_b, <32 x i16>
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <32 x i16> @llvm.ssub.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -6291,13 +6341,14 @@ define <32 x i16> @test_subs_epi16_rmkz_512(<32 x i16> %a, ptr %ptr_b, i32 %mask
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <32 x i16> @llvm.ssub.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -6390,13 +6441,14 @@ define <32 x i16> @test_mask_subs_epi16_rm_512(<32 x i16> %a, ptr %ptr_b) nounwi
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <32 x i16> @llvm.ssub.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -6418,13 +6470,14 @@ define <32 x i16> @test_mask_subs_epi16_rmk_512(<32 x i16> %a, ptr %ptr_b, <32 x
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <32 x i16> @llvm.ssub.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -6453,13 +6506,14 @@ define <32 x i16> @test_mask_subs_epi16_rmkz_512(<32 x i16> %a, ptr %ptr_b, i32
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <32 x i16> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <32 x i16> @llvm.ssub.sat.v32i16(<32 x i16> [[A:%.*]], <32 x i16> [[B]])
@@ -6550,13 +6604,14 @@ define <64 x i8> @test_mask_adds_epi8_rm_512(<64 x i8> %a, ptr %ptr_b) nounwind
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <64 x i8> @llvm.sadd.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
@@ -6578,13 +6633,14 @@ define <64 x i8> @test_mask_adds_epi8_rmk_512(<64 x i8> %a, ptr %ptr_b, <64 x i8
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <64 x i8> @llvm.sadd.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
@@ -6613,13 +6669,14 @@ define <64 x i8> @test_mask_adds_epi8_rmkz_512(<64 x i8> %a, ptr %ptr_b, i64 %ma
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <64 x i8> @llvm.sadd.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
@@ -6710,13 +6767,14 @@ define <64 x i8> @test_mask_subs_epi8_rm_512(<64 x i8> %a, ptr %ptr_b) nounwind
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP3:%.*]], label [[TMP4:%.*]], !prof [[PROF1]]
 ; CHECK:       3:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       4:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = call <64 x i8> @llvm.ssub.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
@@ -6738,13 +6796,14 @@ define <64 x i8> @test_mask_subs_epi8_rmk_512(<64 x i8> %a, ptr %ptr_b, <64 x i8
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
 ; CHECK:       5:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       6:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = call <64 x i8> @llvm.ssub.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
@@ -6773,13 +6832,14 @@ define <64 x i8> @test_mask_subs_epi8_rmkz_512(<64 x i8> %a, ptr %ptr_b, i64 %ma
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP4:%.*]], label [[TMP5:%.*]], !prof [[PROF1]]
 ; CHECK:       4:
-; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
+; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
 ; CHECK:       5:
 ; CHECK-NEXT:    [[B:%.*]] = load <64 x i8>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <64 x i8>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <64 x i8> [[TMP2]], [[_MSLD]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = call <64 x i8> @llvm.ssub.sat.v64i8(<64 x i8> [[A:%.*]], <64 x i8> [[B]])
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics.ll
index 481751b25eda1..cf5707448d750 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512bw-intrinsics.ll
@@ -379,7 +379,8 @@ define <32 x i16> @test_mask_packs_epi32_rm_512(<16 x i32> %a, ptr %ptr_b) #0 {
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP13:%.*]] = sext <16 x i1> [[TMP8]] to <16 x i32>
@@ -411,7 +412,8 @@ define <32 x i16> @test_mask_packs_epi32_rmk_512(<16 x i32> %a, ptr %ptr_b, <32
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP22:%.*]] = sext <16 x i1> [[TMP10]] to <16 x i32>
@@ -452,7 +454,8 @@ define <32 x i16> @test_mask_packs_epi32_rmkz_512(<16 x i32> %a, ptr %ptr_b, i32
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP21:%.*]] = sext <16 x i1> [[TMP9]] to <16 x i32>
@@ -492,7 +495,8 @@ define <32 x i16> @test_mask_packs_epi32_rmb_512(<16 x i32> %a, ptr %ptr_b) #0 {
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -530,7 +534,8 @@ define <32 x i16> @test_mask_packs_epi32_rmbk_512(<16 x i32> %a, ptr %ptr_b, <32
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -577,7 +582,8 @@ define <32 x i16> @test_mask_packs_epi32_rmbkz_512(<16 x i32> %a, ptr %ptr_b, i3
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -702,7 +708,8 @@ define <64 x i8> @test_mask_packs_epi16_rm_512(<32 x i16> %a, ptr %ptr_b) #0 {
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP13:%.*]] = sext <32 x i1> [[TMP8]] to <32 x i16>
@@ -734,7 +741,8 @@ define <64 x i8> @test_mask_packs_epi16_rmk_512(<32 x i16> %a, ptr %ptr_b, <64 x
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP22:%.*]] = sext <32 x i1> [[TMP10]] to <32 x i16>
@@ -775,7 +783,8 @@ define <64 x i8> @test_mask_packs_epi16_rmkz_512(<32 x i16> %a, ptr %ptr_b, i64
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP21:%.*]] = sext <32 x i1> [[TMP9]] to <32 x i16>
@@ -895,7 +904,8 @@ define <32 x i16> @test_mask_packus_epi32_rm_512(<16 x i32> %a, ptr %ptr_b) #0 {
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP13:%.*]] = sext <16 x i1> [[TMP8]] to <16 x i32>
@@ -927,7 +937,8 @@ define <32 x i16> @test_mask_packus_epi32_rmk_512(<16 x i32> %a, ptr %ptr_b, <32
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP22:%.*]] = sext <16 x i1> [[TMP10]] to <16 x i32>
@@ -968,7 +979,8 @@ define <32 x i16> @test_mask_packus_epi32_rmkz_512(<16 x i32> %a, ptr %ptr_b, i3
 ; CHECK-NEXT:    [[B:%.*]] = load <16 x i32>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <16 x i32> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP21:%.*]] = sext <16 x i1> [[TMP9]] to <16 x i32>
@@ -1008,7 +1020,8 @@ define <32 x i16> @test_mask_packus_epi32_rmb_512(<16 x i32> %a, ptr %ptr_b) #0
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -1046,7 +1059,8 @@ define <32 x i16> @test_mask_packus_epi32_rmbk_512(<16 x i32> %a, ptr %ptr_b, <3
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -1093,7 +1107,8 @@ define <32 x i16> @test_mask_packus_epi32_rmbkz_512(<16 x i32> %a, ptr %ptr_b, i
 ; CHECK-NEXT:    [[Q:%.*]] = load i32, ptr [[PTR_B:%.*]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <16 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <16 x i32> poison, i32 [[Q]], i32 0
@@ -1218,7 +1233,8 @@ define <64 x i8> @test_mask_packus_epi16_rm_512(<32 x i16> %a, ptr %ptr_b) #0 {
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP13:%.*]] = sext <32 x i1> [[TMP8]] to <32 x i16>
@@ -1250,7 +1266,8 @@ define <64 x i8> @test_mask_packus_epi16_rmk_512(<32 x i16> %a, ptr %ptr_b, <64
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP9]], align 64
 ; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP22:%.*]] = sext <32 x i1> [[TMP10]] to <32 x i16>
@@ -1291,7 +1308,8 @@ define <64 x i8> @test_mask_packus_epi16_rmkz_512(<32 x i16> %a, ptr %ptr_b, i64
 ; CHECK-NEXT:    [[B:%.*]] = load <32 x i16>, ptr [[PTR_B:%.*]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne <32 x i16> [[TMP2]], zeroinitializer
 ; CHECK-NEXT:    [[TMP21:%.*]] = sext <32 x i1> [[TMP9]] to <32 x i16>
@@ -2969,7 +2987,8 @@ define <32 x i16> @test_x86_avx512_psrl_w_512_load(<32 x i16> %a0, ptr %p) #0 {
 ; CHECK-NEXT:    [[A1:%.*]] = load <8 x i16>, ptr [[P:%.*]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <8 x i16> [[_MSLD]] to i128
 ; CHECK-NEXT:    [[TMP9:%.*]] = trunc i128 [[TMP8]] to i64
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-arith-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-arith-intrinsics.ll
index a79e293f54034..7dd97acce2203 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-arith-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-arith-intrinsics.ll
@@ -83,7 +83,8 @@ define <32 x half> @test_int_x86_avx512fp16_maskz_add_ph_512(<32 x half> %src, <
 ; CHECK-NEXT:    [[VAL:%.*]] = load <32 x half>, ptr [[PTR]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[_MSPROP4:%.*]] = or <32 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <32 x i16> [[_MSPROP4]], zeroinitializer
@@ -217,7 +218,8 @@ define <32 x half> @test_int_x86_avx512fp16_maskz_sub_ph_512(<32 x half> %src, <
 ; CHECK-NEXT:    [[VAL:%.*]] = load <32 x half>, ptr [[PTR]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[_MSPROP4:%.*]] = or <32 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <32 x i16> [[_MSPROP4]], zeroinitializer
@@ -351,7 +353,8 @@ define <32 x half> @test_int_x86_avx512fp16_maskz_mul_ph_512(<32 x half> %src, <
 ; CHECK-NEXT:    [[VAL:%.*]] = load <32 x half>, ptr [[PTR]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[_MSPROP4:%.*]] = or <32 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <32 x i16> [[_MSPROP4]], zeroinitializer
@@ -485,7 +488,8 @@ define <32 x half> @test_int_x86_avx512fp16_maskz_div_ph_512(<32 x half> %src, <
 ; CHECK-NEXT:    [[VAL:%.*]] = load <32 x half>, ptr [[PTR]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i16>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[_MSPROP4:%.*]] = or <32 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <32 x i16> [[_MSPROP4]], zeroinitializer
@@ -790,7 +794,8 @@ define <8 x double> @test_int_x86_avx512_mask_vcvt_ph2pd_load(ptr %px0, <8 x dou
 ; CHECK-NEXT:    [[X0:%.*]] = load <8 x half>, ptr [[PX0]], align 16
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PX0]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP8]], align 16
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <8 x i16> [[_MSLD]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP9]], 0
@@ -799,11 +804,11 @@ define <8 x double> @test_int_x86_avx512_mask_vcvt_ph2pd_load(ptr %px0, <8 x dou
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP1]], [[_MSCMP2]]
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i8 [[TMP3]], 0
 ; CHECK-NEXT:    [[_MSOR4:%.*]] = or i1 [[_MSOR]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR4]], label %[[BB11:.*]], label %[[BB12:.*]], !prof [[PROF1]]
-; CHECK:       [[BB11]]:
+; CHECK-NEXT:    br i1 [[_MSOR4]], label %[[BB12:.*]], label %[[BB13:.*]], !prof [[PROF1]]
+; CHECK:       [[BB12]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB12]]:
+; CHECK:       [[BB13]]:
 ; CHECK-NEXT:    [[RES:%.*]] = call <8 x double> @llvm.x86.avx512fp16.mask.vcvtph2pd.512(<8 x half> [[X0]], <8 x double> [[X1]], i8 [[X2]], i32 4)
 ; CHECK-NEXT:    store <8 x i64> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x double> [[RES]]
@@ -885,7 +890,8 @@ define <8 x half> @test_int_x86_avx512_mask_vcvt_pd2ph_load(ptr %px0, <8 x half>
 ; CHECK-NEXT:    [[X0:%.*]] = load <8 x double>, ptr [[PX0]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PX0]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <8 x i64> [[_MSLD]] to i512
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i512 [[TMP9]], 0
@@ -894,11 +900,11 @@ define <8 x half> @test_int_x86_avx512_mask_vcvt_pd2ph_load(ptr %px0, <8 x half>
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP1]], [[_MSCMP2]]
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i8 [[TMP3]], 0
 ; CHECK-NEXT:    [[_MSOR4:%.*]] = or i1 [[_MSOR]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR4]], label %[[BB11:.*]], label %[[BB12:.*]], !prof [[PROF1]]
-; CHECK:       [[BB11]]:
+; CHECK-NEXT:    br i1 [[_MSOR4]], label %[[BB12:.*]], label %[[BB13:.*]], !prof [[PROF1]]
+; CHECK:       [[BB12]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB12]]:
+; CHECK:       [[BB13]]:
 ; CHECK-NEXT:    [[RES:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.vcvtpd2ph.512(<8 x double> [[X0]], <8 x half> [[X1]], i8 [[X2]], i32 4)
 ; CHECK-NEXT:    store <8 x i16> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x half> [[RES]]
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-arith-vl-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-arith-vl-intrinsics.ll
index c0ba3d599807f..fcdca6cf04f95 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-arith-vl-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-arith-vl-intrinsics.ll
@@ -63,7 +63,8 @@ define <16 x half> @test_int_x86_avx512fp16_mask_add_ph_256(<16 x half> %x1, <16
 ; CHECK-NEXT:    [[VAL:%.*]] = load <16 x half>, ptr [[PTR]], align 32
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i16>, ptr [[TMP11]], align 32
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[RES0:%.*]] = fadd <16 x half> [[X1]], [[X2]]
@@ -162,7 +163,8 @@ define <8 x half> @test_int_x86_avx512fp16_mask_add_ph_128(<8 x half> %x1, <8 x
 ; CHECK-NEXT:    [[VAL:%.*]] = load <8 x half>, ptr [[PTR]], align 16
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP11]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[RES0:%.*]] = fadd <8 x half> [[X1]], [[X2]]
@@ -261,7 +263,8 @@ define <16 x half> @test_int_x86_avx512fp16_mask_sub_ph_256(<16 x half> %x1, <16
 ; CHECK-NEXT:    [[VAL:%.*]] = load <16 x half>, ptr [[PTR]], align 32
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i16>, ptr [[TMP11]], align 32
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[RES0:%.*]] = fsub <16 x half> [[X1]], [[X2]]
@@ -360,7 +363,8 @@ define <8 x half> @test_int_x86_avx512fp16_mask_sub_ph_128(<8 x half> %x1, <8 x
 ; CHECK-NEXT:    [[VAL:%.*]] = load <8 x half>, ptr [[PTR]], align 16
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP11]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[RES0:%.*]] = fsub <8 x half> [[X1]], [[X2]]
@@ -459,7 +463,8 @@ define <16 x half> @test_int_x86_avx512fp16_mask_mul_ph_256(<16 x half> %x1, <16
 ; CHECK-NEXT:    [[VAL:%.*]] = load <16 x half>, ptr [[PTR]], align 32
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i16>, ptr [[TMP11]], align 32
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[RES0:%.*]] = fmul <16 x half> [[X1]], [[X2]]
@@ -558,7 +563,8 @@ define <8 x half> @test_int_x86_avx512fp16_mask_mul_ph_128(<8 x half> %x1, <8 x
 ; CHECK-NEXT:    [[VAL:%.*]] = load <8 x half>, ptr [[PTR]], align 16
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP11]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[RES0:%.*]] = fmul <8 x half> [[X1]], [[X2]]
@@ -672,7 +678,8 @@ define <16 x half> @test_int_x86_avx512fp16_mask_div_ph_256(<16 x half> %x1, <16
 ; CHECK-NEXT:    [[VAL:%.*]] = load <16 x half>, ptr [[PTR]], align 32
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i16>, ptr [[TMP11]], align 32
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <16 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[RES0:%.*]] = fdiv <16 x half> [[X1]], [[X2]]
@@ -786,7 +793,8 @@ define <8 x half> @test_int_x86_avx512fp16_mask_div_ph_128(<8 x half> %x1, <8 x
 ; CHECK-NEXT:    [[VAL:%.*]] = load <8 x half>, ptr [[PTR]], align 16
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP11]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <8 x i16> [[TMP3]], [[TMP4]]
 ; CHECK-NEXT:    [[RES0:%.*]] = fdiv <8 x half> [[X1]], [[X2]]
@@ -1167,7 +1175,8 @@ define <8 x half> @test_int_x86_avx512_mask_vcvt_pd2ph_256_load(ptr %px0, <8 x h
 ; CHECK-NEXT:    [[X0:%.*]] = load <4 x double>, ptr [[PX0]], align 32
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PX0]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i64>, ptr [[TMP8]], align 32
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <4 x i64> [[_MSLD]] to i256
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i256 [[TMP9]], 0
@@ -1176,11 +1185,11 @@ define <8 x half> @test_int_x86_avx512_mask_vcvt_pd2ph_256_load(ptr %px0, <8 x h
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP1]], [[_MSCMP2]]
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i8 [[TMP3]], 0
 ; CHECK-NEXT:    [[_MSOR4:%.*]] = or i1 [[_MSOR]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR4]], label %[[BB11:.*]], label %[[BB12:.*]], !prof [[PROF1]]
-; CHECK:       [[BB11]]:
+; CHECK-NEXT:    br i1 [[_MSOR4]], label %[[BB12:.*]], label %[[BB13:.*]], !prof [[PROF1]]
+; CHECK:       [[BB12]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB12]]:
+; CHECK:       [[BB13]]:
 ; CHECK-NEXT:    [[RES:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.vcvtpd2ph.256(<4 x double> [[X0]], <8 x half> [[X1]], i8 [[X2]])
 ; CHECK-NEXT:    store <8 x i16> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x half> [[RES]]
@@ -1235,7 +1244,8 @@ define <8 x half> @test_int_x86_avx512_mask_vcvt_pd2ph_128_load(ptr %px0, <8 x h
 ; CHECK-NEXT:    [[X0:%.*]] = load <2 x double>, ptr [[PX0]], align 16
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PX0]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <2 x i64>, ptr [[TMP8]], align 16
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <2 x i64> [[_MSLD]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP9]], 0
@@ -1244,11 +1254,11 @@ define <8 x half> @test_int_x86_avx512_mask_vcvt_pd2ph_128_load(ptr %px0, <8 x h
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP1]], [[_MSCMP2]]
 ; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i8 [[TMP3]], 0
 ; CHECK-NEXT:    [[_MSOR4:%.*]] = or i1 [[_MSOR]], [[_MSCMP3]]
-; CHECK-NEXT:    br i1 [[_MSOR4]], label %[[BB11:.*]], label %[[BB12:.*]], !prof [[PROF1]]
-; CHECK:       [[BB11]]:
+; CHECK-NEXT:    br i1 [[_MSOR4]], label %[[BB12:.*]], label %[[BB13:.*]], !prof [[PROF1]]
+; CHECK:       [[BB12]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB12]]:
+; CHECK:       [[BB13]]:
 ; CHECK-NEXT:    [[RES:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.vcvtpd2ph.128(<2 x double> [[X0]], <8 x half> [[X1]], i8 [[X2]])
 ; CHECK-NEXT:    store <8 x i16> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x half> [[RES]]
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-intrinsics.ll
index e5d1af3841f10..5207f2bdb0202 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512fp16-intrinsics.ll
@@ -493,7 +493,8 @@ define <8 x half> @test_rsqrt_sh_load(<8 x half> %a0, ptr %a1ptr) #0 {
 ; CHECK-NEXT:    [[A1:%.*]] = load <8 x half>, ptr [[A1PTR]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[A1PTR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <8 x i16> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP8]], 0
@@ -652,15 +653,16 @@ define i8 @test_int_x86_avx512_mask_fpclass_sh_load(ptr %x0ptr) #0 {
 ; CHECK-NEXT:    [[X0:%.*]] = load <8 x half>, ptr [[X0PTR]], align 16
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[X0PTR]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP6]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = bitcast <8 x i16> [[_MSLD]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP7]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label %[[BB8:.*]], label %[[BB9:.*]], !prof [[PROF1]]
-; CHECK:       [[BB8]]:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label %[[BB9:.*]], label %[[BB10:.*]], !prof [[PROF1]]
+; CHECK:       [[BB9]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB9]]:
+; CHECK:       [[BB10]]:
 ; CHECK-NEXT:    [[RES:%.*]] = call i8 @llvm.x86.avx512fp16.mask.fpclass.sh(<8 x half> [[X0]], i32 4, i8 -1)
 ; CHECK-NEXT:    store i8 0, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret i8 [[RES]]
@@ -737,18 +739,19 @@ define <8 x half> @test_rcp_sh_load(<8 x half> %a0, ptr %a1ptr) #0 {
 ; CHECK-NEXT:    [[A1:%.*]] = load <8 x half>, ptr [[A1PTR]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[A1PTR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <8 x i16> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP8]], 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <8 x i16> [[_MSLD]] to i128
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i128 [[TMP9]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP1]], [[_MSCMP2]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label %[[BB10:.*]], label %[[BB11:.*]], !prof [[PROF1]]
-; CHECK:       [[BB10]]:
+; CHECK-NEXT:    br i1 [[_MSOR]], label %[[BB11:.*]], label %[[BB12:.*]], !prof [[PROF1]]
+; CHECK:       [[BB11]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB11]]:
+; CHECK:       [[BB12]]:
 ; CHECK-NEXT:    [[RES:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.rcp.sh(<8 x half> [[A0]], <8 x half> [[A1]], <8 x half> zeroinitializer, i8 -1)
 ; CHECK-NEXT:    store <8 x i16> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x half> [[RES]]
@@ -1074,7 +1077,8 @@ define <8 x half>@test_int_x86_avx512_mask_getexp_sh_load(<8 x half> %x0, ptr %x
 ; CHECK-NEXT:    [[X1:%.*]] = load <8 x half>, ptr [[X1PTR]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[X1PTR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <8 x i16> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP8]], 0
@@ -1348,7 +1352,8 @@ define <8 x half>@test_int_x86_avx512_mask_scalef_sh_load(<8 x half> %x0, ptr %x
 ; CHECK-NEXT:    [[X1:%.*]] = load <8 x half>, ptr [[X1PTR]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[X1PTR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <8 x i16> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP8]], 0
@@ -1385,7 +1390,8 @@ define <8 x half> @test_int_x86_avx512fp16_mask_add_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[VAL_HALF:%.*]] = load half, ptr [[PTR]], align 2
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i16, ptr [[TMP10]], align 2
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i16> splat (i16 -1), i16 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VAL:%.*]] = insertelement <8 x half> poison, half [[VAL_HALF]], i32 0
@@ -1404,11 +1410,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_add_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP21:%.*]] = select i1 [[TMP20]], i16 [[TMP18]], i16 [[TMP19]]
 ; CHECK-NEXT:    [[_MSPROP2:%.*]] = insertelement <8 x i16> [[_MSPROP1]], i16 [[TMP21]], i32 0
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB22:.*]], label %[[BB23:.*]], !prof [[PROF1]]
-; CHECK:       [[BB22]]:
+; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB23:.*]], label %[[BB24:.*]], !prof [[PROF1]]
+; CHECK:       [[BB23]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB23]]:
+; CHECK:       [[BB24]]:
 ; CHECK-NEXT:    [[RES1:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.add.sh.round(<8 x half> [[RES0]], <8 x half> [[X2]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP25:%.*]] = extractelement <8 x i16> [[_MSPROP2]], i32 0
 ; CHECK-NEXT:    [[TMP26:%.*]] = extractelement <8 x i16> [[TMP3]], i32 0
@@ -1418,11 +1424,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_add_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP29:%.*]] = select i1 [[TMP28]], i16 [[TMP27]], i16 0
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <8 x i16> [[_MSPROP2]], i16 [[TMP29]], i32 0
 ; CHECK-NEXT:    [[_MSCMP9:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB30:.*]], label %[[BB31:.*]], !prof [[PROF1]]
-; CHECK:       [[BB30]]:
+; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB31:.*]], label %[[BB32:.*]], !prof [[PROF1]]
+; CHECK:       [[BB31]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB31]]:
+; CHECK:       [[BB32]]:
 ; CHECK-NEXT:    [[RES2:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.add.sh.round(<8 x half> [[RES1]], <8 x half> [[X2]], <8 x half> zeroinitializer, i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP33:%.*]] = extractelement <8 x i16> [[_MSPROP3]], i32 0
 ; CHECK-NEXT:    [[TMP34:%.*]] = extractelement <8 x i16> [[_MSPROP]], i32 0
@@ -1433,11 +1439,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_add_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP38:%.*]] = select i1 [[TMP37]], i16 [[TMP35]], i16 [[TMP36]]
 ; CHECK-NEXT:    [[_MSPROP4:%.*]] = insertelement <8 x i16> [[_MSPROP3]], i16 [[TMP38]], i32 0
 ; CHECK-NEXT:    [[_MSCMP14:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB39:.*]], label %[[BB40:.*]], !prof [[PROF1]]
-; CHECK:       [[BB39]]:
+; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB40:.*]], label %[[BB41:.*]], !prof [[PROF1]]
+; CHECK:       [[BB40]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB40]]:
+; CHECK:       [[BB41]]:
 ; CHECK-NEXT:    [[RES3:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.add.sh.round(<8 x half> [[RES2]], <8 x half> [[VAL]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    store <8 x i16> [[_MSPROP4]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x half> [[RES3]]
@@ -1471,7 +1477,8 @@ define <8 x half> @test_int_x86_avx512fp16_mask_sub_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[VAL_HALF:%.*]] = load half, ptr [[PTR]], align 2
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i16, ptr [[TMP10]], align 2
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i16> splat (i16 -1), i16 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VAL:%.*]] = insertelement <8 x half> poison, half [[VAL_HALF]], i32 0
@@ -1490,11 +1497,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_sub_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP21:%.*]] = select i1 [[TMP20]], i16 [[TMP18]], i16 [[TMP19]]
 ; CHECK-NEXT:    [[_MSPROP2:%.*]] = insertelement <8 x i16> [[_MSPROP1]], i16 [[TMP21]], i32 0
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB22:.*]], label %[[BB23:.*]], !prof [[PROF1]]
-; CHECK:       [[BB22]]:
+; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB23:.*]], label %[[BB24:.*]], !prof [[PROF1]]
+; CHECK:       [[BB23]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB23]]:
+; CHECK:       [[BB24]]:
 ; CHECK-NEXT:    [[RES1:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.sub.sh.round(<8 x half> [[RES0]], <8 x half> [[X2]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP25:%.*]] = extractelement <8 x i16> [[_MSPROP2]], i32 0
 ; CHECK-NEXT:    [[TMP26:%.*]] = extractelement <8 x i16> [[TMP3]], i32 0
@@ -1504,11 +1511,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_sub_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP29:%.*]] = select i1 [[TMP28]], i16 [[TMP27]], i16 0
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <8 x i16> [[_MSPROP2]], i16 [[TMP29]], i32 0
 ; CHECK-NEXT:    [[_MSCMP9:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB30:.*]], label %[[BB31:.*]], !prof [[PROF1]]
-; CHECK:       [[BB30]]:
+; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB31:.*]], label %[[BB32:.*]], !prof [[PROF1]]
+; CHECK:       [[BB31]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB31]]:
+; CHECK:       [[BB32]]:
 ; CHECK-NEXT:    [[RES2:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.sub.sh.round(<8 x half> [[RES1]], <8 x half> [[X2]], <8 x half> zeroinitializer, i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP33:%.*]] = extractelement <8 x i16> [[_MSPROP3]], i32 0
 ; CHECK-NEXT:    [[TMP34:%.*]] = extractelement <8 x i16> [[_MSPROP]], i32 0
@@ -1519,11 +1526,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_sub_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP38:%.*]] = select i1 [[TMP37]], i16 [[TMP35]], i16 [[TMP36]]
 ; CHECK-NEXT:    [[_MSPROP4:%.*]] = insertelement <8 x i16> [[_MSPROP3]], i16 [[TMP38]], i32 0
 ; CHECK-NEXT:    [[_MSCMP14:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB39:.*]], label %[[BB40:.*]], !prof [[PROF1]]
-; CHECK:       [[BB39]]:
+; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB40:.*]], label %[[BB41:.*]], !prof [[PROF1]]
+; CHECK:       [[BB40]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB40]]:
+; CHECK:       [[BB41]]:
 ; CHECK-NEXT:    [[RES3:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.sub.sh.round(<8 x half> [[RES2]], <8 x half> [[VAL]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    store <8 x i16> [[_MSPROP4]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x half> [[RES3]]
@@ -1557,7 +1564,8 @@ define <8 x half> @test_int_x86_avx512fp16_mask_mul_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[VAL_HALF:%.*]] = load half, ptr [[PTR]], align 2
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i16, ptr [[TMP10]], align 2
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i16> splat (i16 -1), i16 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VAL:%.*]] = insertelement <8 x half> poison, half [[VAL_HALF]], i32 0
@@ -1576,11 +1584,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_mul_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP21:%.*]] = select i1 [[TMP20]], i16 [[TMP18]], i16 [[TMP19]]
 ; CHECK-NEXT:    [[_MSPROP2:%.*]] = insertelement <8 x i16> [[_MSPROP1]], i16 [[TMP21]], i32 0
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB22:.*]], label %[[BB23:.*]], !prof [[PROF1]]
-; CHECK:       [[BB22]]:
+; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB23:.*]], label %[[BB24:.*]], !prof [[PROF1]]
+; CHECK:       [[BB23]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB23]]:
+; CHECK:       [[BB24]]:
 ; CHECK-NEXT:    [[RES1:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.mul.sh.round(<8 x half> [[RES0]], <8 x half> [[X2]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP25:%.*]] = extractelement <8 x i16> [[_MSPROP2]], i32 0
 ; CHECK-NEXT:    [[TMP26:%.*]] = extractelement <8 x i16> [[TMP3]], i32 0
@@ -1590,11 +1598,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_mul_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP29:%.*]] = select i1 [[TMP28]], i16 [[TMP27]], i16 0
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <8 x i16> [[_MSPROP2]], i16 [[TMP29]], i32 0
 ; CHECK-NEXT:    [[_MSCMP9:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB30:.*]], label %[[BB31:.*]], !prof [[PROF1]]
-; CHECK:       [[BB30]]:
+; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB31:.*]], label %[[BB32:.*]], !prof [[PROF1]]
+; CHECK:       [[BB31]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB31]]:
+; CHECK:       [[BB32]]:
 ; CHECK-NEXT:    [[RES2:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.mul.sh.round(<8 x half> [[RES1]], <8 x half> [[X2]], <8 x half> zeroinitializer, i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP33:%.*]] = extractelement <8 x i16> [[_MSPROP3]], i32 0
 ; CHECK-NEXT:    [[TMP34:%.*]] = extractelement <8 x i16> [[_MSPROP]], i32 0
@@ -1605,11 +1613,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_mul_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP38:%.*]] = select i1 [[TMP37]], i16 [[TMP35]], i16 [[TMP36]]
 ; CHECK-NEXT:    [[_MSPROP4:%.*]] = insertelement <8 x i16> [[_MSPROP3]], i16 [[TMP38]], i32 0
 ; CHECK-NEXT:    [[_MSCMP14:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB39:.*]], label %[[BB40:.*]], !prof [[PROF1]]
-; CHECK:       [[BB39]]:
+; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB40:.*]], label %[[BB41:.*]], !prof [[PROF1]]
+; CHECK:       [[BB40]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB40]]:
+; CHECK:       [[BB41]]:
 ; CHECK-NEXT:    [[RES3:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.mul.sh.round(<8 x half> [[RES2]], <8 x half> [[VAL]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    store <8 x i16> [[_MSPROP4]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x half> [[RES3]]
@@ -1643,7 +1651,8 @@ define <8 x half> @test_int_x86_avx512fp16_mask_div_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[VAL_HALF:%.*]] = load half, ptr [[PTR]], align 2
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i16, ptr [[TMP10]], align 2
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i16> splat (i16 -1), i16 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VAL:%.*]] = insertelement <8 x half> poison, half [[VAL_HALF]], i32 0
@@ -1662,11 +1671,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_div_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP21:%.*]] = select i1 [[TMP20]], i16 [[TMP18]], i16 [[TMP19]]
 ; CHECK-NEXT:    [[_MSPROP2:%.*]] = insertelement <8 x i16> [[_MSPROP1]], i16 [[TMP21]], i32 0
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB22:.*]], label %[[BB23:.*]], !prof [[PROF1]]
-; CHECK:       [[BB22]]:
+; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB23:.*]], label %[[BB24:.*]], !prof [[PROF1]]
+; CHECK:       [[BB23]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB23]]:
+; CHECK:       [[BB24]]:
 ; CHECK-NEXT:    [[RES1:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.div.sh.round(<8 x half> [[RES0]], <8 x half> [[X2]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP25:%.*]] = extractelement <8 x i16> [[_MSPROP2]], i32 0
 ; CHECK-NEXT:    [[TMP26:%.*]] = extractelement <8 x i16> [[TMP3]], i32 0
@@ -1676,11 +1685,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_div_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP29:%.*]] = select i1 [[TMP28]], i16 [[TMP27]], i16 0
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <8 x i16> [[_MSPROP2]], i16 [[TMP29]], i32 0
 ; CHECK-NEXT:    [[_MSCMP9:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB30:.*]], label %[[BB31:.*]], !prof [[PROF1]]
-; CHECK:       [[BB30]]:
+; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB31:.*]], label %[[BB32:.*]], !prof [[PROF1]]
+; CHECK:       [[BB31]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB31]]:
+; CHECK:       [[BB32]]:
 ; CHECK-NEXT:    [[RES2:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.div.sh.round(<8 x half> [[RES1]], <8 x half> [[X2]], <8 x half> zeroinitializer, i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP33:%.*]] = extractelement <8 x i16> [[_MSPROP3]], i32 0
 ; CHECK-NEXT:    [[TMP34:%.*]] = extractelement <8 x i16> [[_MSPROP]], i32 0
@@ -1691,11 +1700,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_div_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP38:%.*]] = select i1 [[TMP37]], i16 [[TMP35]], i16 [[TMP36]]
 ; CHECK-NEXT:    [[_MSPROP4:%.*]] = insertelement <8 x i16> [[_MSPROP3]], i16 [[TMP38]], i32 0
 ; CHECK-NEXT:    [[_MSCMP14:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB39:.*]], label %[[BB40:.*]], !prof [[PROF1]]
-; CHECK:       [[BB39]]:
+; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB40:.*]], label %[[BB41:.*]], !prof [[PROF1]]
+; CHECK:       [[BB40]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB40]]:
+; CHECK:       [[BB41]]:
 ; CHECK-NEXT:    [[RES3:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.div.sh.round(<8 x half> [[RES2]], <8 x half> [[VAL]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    store <8 x i16> [[_MSPROP4]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x half> [[RES3]]
@@ -1729,7 +1738,8 @@ define <8 x half> @test_int_x86_avx512fp16_mask_min_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[VAL_HALF:%.*]] = load half, ptr [[PTR]], align 2
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i16, ptr [[TMP10]], align 2
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i16> splat (i16 -1), i16 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VAL:%.*]] = insertelement <8 x half> poison, half [[VAL_HALF]], i32 0
@@ -1748,11 +1758,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_min_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP21:%.*]] = select i1 [[TMP20]], i16 [[TMP18]], i16 [[TMP19]]
 ; CHECK-NEXT:    [[_MSPROP2:%.*]] = insertelement <8 x i16> [[_MSPROP1]], i16 [[TMP21]], i32 0
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB22:.*]], label %[[BB23:.*]], !prof [[PROF1]]
-; CHECK:       [[BB22]]:
+; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB23:.*]], label %[[BB24:.*]], !prof [[PROF1]]
+; CHECK:       [[BB23]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB23]]:
+; CHECK:       [[BB24]]:
 ; CHECK-NEXT:    [[RES1:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.min.sh.round(<8 x half> [[RES0]], <8 x half> [[X2]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP25:%.*]] = extractelement <8 x i16> [[_MSPROP2]], i32 0
 ; CHECK-NEXT:    [[TMP26:%.*]] = extractelement <8 x i16> [[TMP3]], i32 0
@@ -1762,11 +1772,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_min_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP29:%.*]] = select i1 [[TMP28]], i16 [[TMP27]], i16 0
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <8 x i16> [[_MSPROP2]], i16 [[TMP29]], i32 0
 ; CHECK-NEXT:    [[_MSCMP9:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB30:.*]], label %[[BB31:.*]], !prof [[PROF1]]
-; CHECK:       [[BB30]]:
+; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB31:.*]], label %[[BB32:.*]], !prof [[PROF1]]
+; CHECK:       [[BB31]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB31]]:
+; CHECK:       [[BB32]]:
 ; CHECK-NEXT:    [[RES2:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.min.sh.round(<8 x half> [[RES1]], <8 x half> [[X2]], <8 x half> zeroinitializer, i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP33:%.*]] = extractelement <8 x i16> [[_MSPROP3]], i32 0
 ; CHECK-NEXT:    [[TMP34:%.*]] = extractelement <8 x i16> [[_MSPROP]], i32 0
@@ -1777,11 +1787,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_min_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP38:%.*]] = select i1 [[TMP37]], i16 [[TMP35]], i16 [[TMP36]]
 ; CHECK-NEXT:    [[_MSPROP4:%.*]] = insertelement <8 x i16> [[_MSPROP3]], i16 [[TMP38]], i32 0
 ; CHECK-NEXT:    [[_MSCMP14:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB39:.*]], label %[[BB40:.*]], !prof [[PROF1]]
-; CHECK:       [[BB39]]:
+; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB40:.*]], label %[[BB41:.*]], !prof [[PROF1]]
+; CHECK:       [[BB40]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB40]]:
+; CHECK:       [[BB41]]:
 ; CHECK-NEXT:    [[RES3:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.min.sh.round(<8 x half> [[RES2]], <8 x half> [[VAL]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    store <8 x i16> [[_MSPROP4]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x half> [[RES3]]
@@ -1815,7 +1825,8 @@ define <8 x half> @test_int_x86_avx512fp16_mask_max_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[VAL_HALF:%.*]] = load half, ptr [[PTR]], align 2
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i16, ptr [[TMP10]], align 2
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <8 x i16> splat (i16 -1), i16 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VAL:%.*]] = insertelement <8 x half> poison, half [[VAL_HALF]], i32 0
@@ -1834,11 +1845,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_max_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP21:%.*]] = select i1 [[TMP20]], i16 [[TMP18]], i16 [[TMP19]]
 ; CHECK-NEXT:    [[_MSPROP2:%.*]] = insertelement <8 x i16> [[_MSPROP1]], i16 [[TMP21]], i32 0
 ; CHECK-NEXT:    [[_MSCMP6:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB22:.*]], label %[[BB23:.*]], !prof [[PROF1]]
-; CHECK:       [[BB22]]:
+; CHECK-NEXT:    br i1 [[_MSCMP6]], label %[[BB23:.*]], label %[[BB24:.*]], !prof [[PROF1]]
+; CHECK:       [[BB23]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB23]]:
+; CHECK:       [[BB24]]:
 ; CHECK-NEXT:    [[RES1:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.max.sh.round(<8 x half> [[RES0]], <8 x half> [[X2]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP25:%.*]] = extractelement <8 x i16> [[_MSPROP2]], i32 0
 ; CHECK-NEXT:    [[TMP26:%.*]] = extractelement <8 x i16> [[TMP3]], i32 0
@@ -1848,11 +1859,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_max_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP29:%.*]] = select i1 [[TMP28]], i16 [[TMP27]], i16 0
 ; CHECK-NEXT:    [[_MSPROP3:%.*]] = insertelement <8 x i16> [[_MSPROP2]], i16 [[TMP29]], i32 0
 ; CHECK-NEXT:    [[_MSCMP9:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB30:.*]], label %[[BB31:.*]], !prof [[PROF1]]
-; CHECK:       [[BB30]]:
+; CHECK-NEXT:    br i1 [[_MSCMP9]], label %[[BB31:.*]], label %[[BB32:.*]], !prof [[PROF1]]
+; CHECK:       [[BB31]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB31]]:
+; CHECK:       [[BB32]]:
 ; CHECK-NEXT:    [[RES2:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.max.sh.round(<8 x half> [[RES1]], <8 x half> [[X2]], <8 x half> zeroinitializer, i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    [[TMP33:%.*]] = extractelement <8 x i16> [[_MSPROP3]], i32 0
 ; CHECK-NEXT:    [[TMP34:%.*]] = extractelement <8 x i16> [[_MSPROP]], i32 0
@@ -1863,11 +1874,11 @@ define <8 x half> @test_int_x86_avx512fp16_mask_max_sh(<8 x half> %x1, <8 x half
 ; CHECK-NEXT:    [[TMP38:%.*]] = select i1 [[TMP37]], i16 [[TMP35]], i16 [[TMP36]]
 ; CHECK-NEXT:    [[_MSPROP4:%.*]] = insertelement <8 x i16> [[_MSPROP3]], i16 [[TMP38]], i32 0
 ; CHECK-NEXT:    [[_MSCMP14:%.*]] = icmp ne i8 [[TMP5]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB39:.*]], label %[[BB40:.*]], !prof [[PROF1]]
-; CHECK:       [[BB39]]:
+; CHECK-NEXT:    br i1 [[_MSCMP14]], label %[[BB40:.*]], label %[[BB41:.*]], !prof [[PROF1]]
+; CHECK:       [[BB40]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR8]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB40]]:
+; CHECK:       [[BB41]]:
 ; CHECK-NEXT:    [[RES3:%.*]] = call <8 x half> @llvm.x86.avx512fp16.mask.max.sh.round(<8 x half> [[RES2]], <8 x half> [[VAL]], <8 x half> [[SRC]], i8 [[MASK]], i32 4)
 ; CHECK-NEXT:    store <8 x i16> [[_MSPROP4]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x half> [[RES3]]
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl-intrinsics.ll
index f20d368e9abbc..b59de75e8ed05 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl-intrinsics.ll
@@ -11907,7 +11907,8 @@ define <4 x float> @test_mask_vfmadd128_ps_rmk(<4 x float> %a0, <4 x float> %a1,
 ; CHECK-NEXT:    [[A2:%.*]] = load <4 x float>, ptr [[PTR_A2]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP15]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <4 x i32> [[TMP12]], [[TMP3]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <4 x i32> [[_MSPROP]], [[_MSLD]]
@@ -11953,7 +11954,8 @@ define <4 x float> @test_mask_vfmadd128_ps_rmka(<4 x float> %a0, <4 x float> %a1
 ; CHECK-NEXT:    [[A2:%.*]] = load <4 x float>, ptr [[PTR_A2]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP15]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <4 x i32> [[TMP12]], [[TMP3]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <4 x i32> [[_MSPROP]], [[_MSLD]]
@@ -11998,7 +12000,8 @@ define <4 x float> @test_mask_vfmadd128_ps_rmkz(<4 x float> %a0, <4 x float> %a1
 ; CHECK-NEXT:    [[A2:%.*]] = load <4 x float>, ptr [[PTR_A2]], align 16
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP8]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <4 x i32> [[TMP2]], [[TMP3]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <4 x i32> [[_MSPROP]], [[_MSLD]]
@@ -12028,7 +12031,8 @@ define <4 x float> @test_mask_vfmadd128_ps_rmkza(<4 x float> %a0, <4 x float> %a
 ; CHECK-NEXT:    [[A2:%.*]] = load <4 x float>, ptr [[PTR_A2]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <4 x i32> [[TMP2]], [[TMP3]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <4 x i32> [[_MSPROP]], [[_MSLD]]
@@ -12059,7 +12063,8 @@ define <4 x float> @test_mask_vfmadd128_ps_rmb(<4 x float> %a0, <4 x float> %a1,
 ; CHECK-NEXT:    [[Q:%.*]] = load float, ptr [[PTR_A2]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP15]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <4 x float> poison, float [[Q]], i32 0
@@ -12117,7 +12122,8 @@ define <4 x float> @test_mask_vfmadd128_ps_rmba(<4 x float> %a0, <4 x float> %a1
 ; CHECK-NEXT:    [[Q:%.*]] = load float, ptr [[PTR_A2]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP15]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <4 x float> poison, float [[Q]], i32 0
@@ -12174,7 +12180,8 @@ define <4 x float> @test_mask_vfmadd128_ps_rmbz(<4 x float> %a0, <4 x float> %a1
 ; CHECK-NEXT:    [[Q:%.*]] = load float, ptr [[PTR_A2]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <4 x float> poison, float [[Q]], i32 0
@@ -12216,7 +12223,8 @@ define <4 x float> @test_mask_vfmadd128_ps_rmbza(<4 x float> %a0, <4 x float> %a
 ; CHECK-NEXT:    [[Q:%.*]] = load float, ptr [[PTR_A2]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP8]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <4 x i32> splat (i32 -1), i32 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[VECINIT_I:%.*]] = insertelement <4 x float> poison, float [[Q]], i32 0
@@ -12259,7 +12267,8 @@ define <2 x double> @test_mask_vfmadd128_pd_rmk(<2 x double> %a0, <2 x double> %
 ; CHECK-NEXT:    [[A2:%.*]] = load <2 x double>, ptr [[PTR_A2]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <2 x i64>, ptr [[TMP15]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <2 x i64> [[TMP12]], [[TMP3]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <2 x i64> [[_MSPROP]], [[_MSLD]]
@@ -12304,7 +12313,8 @@ define <2 x double> @test_mask_vfmadd128_pd_rmkz(<2 x double> %a0, <2 x double>
 ; CHECK-NEXT:    [[A2:%.*]] = load <2 x double>, ptr [[PTR_A2]], align 16
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <2 x i64>, ptr [[TMP8]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <2 x i64> [[TMP2]], [[TMP3]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <2 x i64> [[_MSPROP]], [[_MSLD]]
@@ -12335,7 +12345,8 @@ define <4 x double> @test_mask_vfmadd256_pd_rmk(<4 x double> %a0, <4 x double> %
 ; CHECK-NEXT:    [[A2:%.*]] = load <4 x double>, ptr [[PTR_A2]], align 32
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i64>, ptr [[TMP15]], align 32
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <4 x i64> [[TMP12]], [[TMP3]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <4 x i64> [[_MSPROP]], [[_MSLD]]
@@ -12380,7 +12391,8 @@ define <4 x double> @test_mask_vfmadd256_pd_rmkz(<4 x double> %a0, <4 x double>
 ; CHECK-NEXT:    [[A2:%.*]] = load <4 x double>, ptr [[PTR_A2]], align 32
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[PTR_A2]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i64>, ptr [[TMP8]], align 32
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <4 x i64> [[TMP2]], [[TMP3]]
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or <4 x i64> [[_MSPROP]], [[_MSLD]]
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl_vnni-intrinsics-upgrade.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl_vnni-intrinsics-upgrade.ll
index d0daea5e68fea..a29f47ac426b5 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl_vnni-intrinsics-upgrade.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl_vnni-intrinsics-upgrade.ll
@@ -64,7 +64,8 @@ define { <8 x i32>, <8 x i32> } @test_int_x86_avx512_mask_vpdpbusd_256(<8 x i32>
 ; CHECK-NEXT:    [[X2:%.*]] = load <8 x i32>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i32>, ptr [[TMP10]], align 32
 ; CHECK-NEXT:    [[TMP32:%.*]] = bitcast <8 x i32> [[TMP3]] to <32 x i8>
 ; CHECK-NEXT:    [[TMP30:%.*]] = bitcast <8 x i32> [[X1]] to <32 x i8>
@@ -189,7 +190,8 @@ define { <4 x i32>, <4 x i32> } @test_int_x86_avx512_mask_vpdpbusd_128(<4 x i32>
 ; CHECK-NEXT:    [[X2:%.*]] = load <4 x i32>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP10]], align 16
 ; CHECK-NEXT:    [[TMP32:%.*]] = bitcast <4 x i32> [[TMP3]] to <16 x i8>
 ; CHECK-NEXT:    [[TMP30:%.*]] = bitcast <4 x i32> [[X1]] to <16 x i8>
@@ -318,7 +320,8 @@ define { <8 x i32>, <8 x i32> } @test_int_x86_avx512_mask_vpdpbusds_256(<8 x i32
 ; CHECK-NEXT:    [[X2:%.*]] = load <8 x i32>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i32>, ptr [[TMP10]], align 32
 ; CHECK-NEXT:    [[TMP32:%.*]] = bitcast <8 x i32> [[TMP3]] to <32 x i8>
 ; CHECK-NEXT:    [[TMP30:%.*]] = bitcast <8 x i32> [[X1]] to <32 x i8>
@@ -443,7 +446,8 @@ define { <4 x i32>, <4 x i32> } @test_int_x86_avx512_mask_vpdpbusds_128(<4 x i32
 ; CHECK-NEXT:    [[X2:%.*]] = load <4 x i32>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP10]], align 16
 ; CHECK-NEXT:    [[TMP32:%.*]] = bitcast <4 x i32> [[TMP3]] to <16 x i8>
 ; CHECK-NEXT:    [[TMP30:%.*]] = bitcast <4 x i32> [[X1]] to <16 x i8>
@@ -572,7 +576,8 @@ define { <8 x i32>, <8 x i32> } @test_int_x86_avx512_mask_vpdpwssd_256(<8 x i32>
 ; CHECK-NEXT:    [[X2:%.*]] = load <8 x i32>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i32>, ptr [[TMP10]], align 32
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <8 x i32> [[TMP3]] to <16 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <8 x i32> [[X1]] to <16 x i16>
@@ -697,7 +702,8 @@ define { <4 x i32>, <4 x i32> } @test_int_x86_avx512_mask_vpdpwssd_128(<4 x i32>
 ; CHECK-NEXT:    [[X2:%.*]] = load <4 x i32>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP10]], align 16
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <4 x i32> [[TMP3]] to <8 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <4 x i32> [[X1]] to <8 x i16>
@@ -827,7 +833,8 @@ define { <8 x i32>, <8 x i32> } @test_int_x86_avx512_mask_vpdpwssds_256(<8 x i32
 ; CHECK-NEXT:    [[X2:%.*]] = load <8 x i32>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i32>, ptr [[TMP10]], align 32
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <8 x i32> [[TMP3]] to <16 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <8 x i32> [[X1]] to <16 x i16>
@@ -952,7 +959,8 @@ define { <4 x i32>, <4 x i32> } @test_int_x86_avx512_mask_vpdpwssds_128(<4 x i32
 ; CHECK-NEXT:    [[X2:%.*]] = load <4 x i32>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP10]], align 16
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <4 x i32> [[TMP3]] to <8 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <4 x i32> [[X1]] to <8 x i16>
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl_vnni-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl_vnni-intrinsics.ll
index f2b1e16e3d5c2..472f57cec226e 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl_vnni-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vl_vnni-intrinsics.ll
@@ -59,7 +59,8 @@ define { <8 x i32>, <8 x i32> } @test_int_x86_avx512_mask_vpdpbusd_256(<8 x i32>
 ; CHECK-NEXT:    [[X2:%.*]] = load <32 x i8>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[TMP30:%.*]] = load <32 x i8>, ptr [[TMP10]], align 32
 ; CHECK-NEXT:    [[TMP35:%.*]] = icmp ne <32 x i8> [[TMP33]], zeroinitializer
 ; CHECK-NEXT:    [[TMP36:%.*]] = icmp ne <32 x i8> [[TMP30]], zeroinitializer
@@ -175,7 +176,8 @@ define { <4 x i32>, <4 x i32> } @test_int_x86_avx512_mask_vpdpbusd_128(<4 x i32>
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i8>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[TMP30:%.*]] = load <16 x i8>, ptr [[TMP10]], align 16
 ; CHECK-NEXT:    [[TMP35:%.*]] = icmp ne <16 x i8> [[TMP33]], zeroinitializer
 ; CHECK-NEXT:    [[TMP36:%.*]] = icmp ne <16 x i8> [[TMP30]], zeroinitializer
@@ -297,7 +299,8 @@ define { <8 x i32>, <8 x i32> } @test_int_x86_avx512_mask_vpdpbusds_256(<8 x i32
 ; CHECK-NEXT:    [[X2:%.*]] = load <32 x i8>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[TMP30:%.*]] = load <32 x i8>, ptr [[TMP10]], align 32
 ; CHECK-NEXT:    [[TMP35:%.*]] = icmp ne <32 x i8> [[TMP33]], zeroinitializer
 ; CHECK-NEXT:    [[TMP36:%.*]] = icmp ne <32 x i8> [[TMP30]], zeroinitializer
@@ -413,7 +416,8 @@ define { <4 x i32>, <4 x i32> } @test_int_x86_avx512_mask_vpdpbusds_128(<4 x i32
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i8>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[TMP30:%.*]] = load <16 x i8>, ptr [[TMP10]], align 16
 ; CHECK-NEXT:    [[TMP35:%.*]] = icmp ne <16 x i8> [[TMP33]], zeroinitializer
 ; CHECK-NEXT:    [[TMP36:%.*]] = icmp ne <16 x i8> [[TMP30]], zeroinitializer
@@ -539,7 +543,8 @@ define { <8 x i32>, <8 x i32> } @test_int_x86_avx512_mask_vpdpwssd_256(<8 x i32>
 ; CHECK-NEXT:    [[X2:%.*]] = load <8 x i32>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i32>, ptr [[TMP10]], align 32
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <8 x i32> [[TMP3]] to <16 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <8 x i32> [[X1]] to <16 x i16>
@@ -667,7 +672,8 @@ define { <4 x i32>, <4 x i32> } @test_int_x86_avx512_mask_vpdpwssd_128(<4 x i32>
 ; CHECK-NEXT:    [[X2:%.*]] = load <4 x i32>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP10]], align 16
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <4 x i32> [[TMP3]] to <8 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <4 x i32> [[X1]] to <8 x i16>
@@ -801,7 +807,8 @@ define { <8 x i32>, <8 x i32> } @test_int_x86_avx512_mask_vpdpwssds_256(<8 x i32
 ; CHECK-NEXT:    [[X2:%.*]] = load <8 x i32>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i32>, ptr [[TMP10]], align 32
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <8 x i32> [[TMP3]] to <16 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <8 x i32> [[X1]] to <16 x i16>
@@ -894,7 +901,8 @@ define <4 x i32>@test_int_x86_avx512_vpdpwssds_128(<4 x i32> %x0, <4 x i32> %x1,
 ; CHECK-NEXT:    [[X2:%.*]] = load <4 x i32>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP27:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP27]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP8]], align 16
 ; CHECK-NEXT:    [[TMP11:%.*]] = bitcast <4 x i32> [[TMP3]] to <8 x i16>
 ; CHECK-NEXT:    [[TMP26:%.*]] = bitcast <4 x i32> [[X1]] to <8 x i16>
@@ -941,7 +949,8 @@ define { <4 x i32>, <4 x i32> } @test_int_x86_avx512_mask_vpdpwssds_128(<4 x i32
 ; CHECK-NEXT:    [[X2:%.*]] = load <4 x i32>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP10]], align 16
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <4 x i32> [[TMP3]] to <8 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <4 x i32> [[X1]] to <8 x i16>
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vnni-intrinsics-upgrade.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vnni-intrinsics-upgrade.ll
index 4e7598b92abcf..9db69355efcd3 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vnni-intrinsics-upgrade.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vnni-intrinsics-upgrade.ll
@@ -64,7 +64,8 @@ define { <16 x i32>, <16 x i32> } @test_int_x86_avx512_mask_vpdpbusd_512(<16 x i
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[TMP32:%.*]] = bitcast <16 x i32> [[TMP3]] to <64 x i8>
 ; CHECK-NEXT:    [[TMP30:%.*]] = bitcast <16 x i32> [[X1]] to <64 x i8>
@@ -189,7 +190,8 @@ define { <16 x i32>, <16 x i32> } @test_int_x86_avx512_mask_vpdpbusds_512(<16 x
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[TMP32:%.*]] = bitcast <16 x i32> [[TMP3]] to <64 x i8>
 ; CHECK-NEXT:    [[TMP30:%.*]] = bitcast <16 x i32> [[X1]] to <64 x i8>
@@ -314,7 +316,8 @@ define { <16 x i32>, <16 x i32> } @test_int_x86_avx512_mask_vpdpwssd_512(<16 x i
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <16 x i32> [[TMP3]] to <32 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <16 x i32> [[X1]] to <32 x i16>
@@ -439,7 +442,8 @@ define { <16 x i32>, <16 x i32> } @test_int_x86_avx512_mask_vpdpwssds_512(<16 x
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <16 x i32> [[TMP3]] to <32 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <16 x i32> [[X1]] to <32 x i16>
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vnni-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vnni-intrinsics.ll
index 0fd60149cc15e..d1ac9c847ace8 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vnni-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avx512vnni-intrinsics.ll
@@ -59,7 +59,8 @@ define { <16 x i32>, <16 x i32> } @test_int_x86_avx512_mask_vpdpbusd_512(<16 x i
 ; CHECK-NEXT:    [[TMP31:%.*]] = load <64 x i8>, ptr [[X2P]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[TMP30:%.*]] = load <64 x i8>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[TMP35:%.*]] = icmp ne <64 x i8> [[TMP33]], zeroinitializer
 ; CHECK-NEXT:    [[TMP36:%.*]] = icmp ne <64 x i8> [[TMP30]], zeroinitializer
@@ -175,7 +176,8 @@ define { <16 x i32>, <16 x i32> } @test_int_x86_avx512_mask_vpdpbusds_512(<16 x
 ; CHECK-NEXT:    [[TMP31:%.*]] = load <64 x i8>, ptr [[X2P]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP34]] to ptr
 ; CHECK-NEXT:    [[TMP30:%.*]] = load <64 x i8>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[TMP35:%.*]] = icmp ne <64 x i8> [[TMP33]], zeroinitializer
 ; CHECK-NEXT:    [[TMP36:%.*]] = icmp ne <64 x i8> [[TMP30]], zeroinitializer
@@ -295,7 +297,8 @@ define { <16 x i32>, <16 x i32> } @test_int_x86_avx512_mask_vpdpwssd_512(<16 x i
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <16 x i32> [[TMP3]] to <32 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <16 x i32> [[X1]] to <32 x i16>
@@ -423,7 +426,8 @@ define { <16 x i32>, <16 x i32> } @test_int_x86_avx512_mask_vpdpwssds_512(<16 x
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i32>, ptr [[X2P]], align 64
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP63:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP63]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP10]], align 64
 ; CHECK-NEXT:    [[TMP31:%.*]] = bitcast <16 x i32> [[TMP3]] to <32 x i16>
 ; CHECK-NEXT:    [[TMP29:%.*]] = bitcast <16 x i32> [[X1]] to <32 x i16>
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/avxvnniint8-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/avxvnniint8-intrinsics.ll
index 4a7050790007b..019d0aae134dc 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/avxvnniint8-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/avxvnniint8-intrinsics.ll
@@ -29,7 +29,8 @@ define <4 x i32>@test_int_x86_avx2_vpdpbssd_128(<4 x i32> %x0, <16 x i8> %x1, pt
 ; CHECK-NEXT:    [[TMP30:%.*]] = load <16 x i8>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[TMP29:%.*]] = load <16 x i8>, ptr [[TMP9]], align 16
 ; CHECK-NEXT:    [[TMP15:%.*]] = icmp ne <16 x i8> [[TMP13]], zeroinitializer
 ; CHECK-NEXT:    [[TMP16:%.*]] = icmp ne <16 x i8> [[TMP29]], zeroinitializer
@@ -92,7 +93,8 @@ define <4 x i32>@test_int_x86_avx2_vpdpbssds_128(<4 x i32> %x0, <16 x i8> %x1, p
 ; CHECK-NEXT:    [[TMP30:%.*]] = load <16 x i8>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[TMP29:%.*]] = load <16 x i8>, ptr [[TMP9]], align 16
 ; CHECK-NEXT:    [[TMP15:%.*]] = icmp ne <16 x i8> [[TMP13]], zeroinitializer
 ; CHECK-NEXT:    [[TMP16:%.*]] = icmp ne <16 x i8> [[TMP29]], zeroinitializer
@@ -155,7 +157,8 @@ define <8 x i32>@test_int_x86_avx2_vpdpbssd_256(<8 x i32> %x0, <32 x i8> %x1, pt
 ; CHECK-NEXT:    [[TMP30:%.*]] = load <32 x i8>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[TMP29:%.*]] = load <32 x i8>, ptr [[TMP9]], align 32
 ; CHECK-NEXT:    [[TMP15:%.*]] = icmp ne <32 x i8> [[TMP13]], zeroinitializer
 ; CHECK-NEXT:    [[TMP16:%.*]] = icmp ne <32 x i8> [[TMP29]], zeroinitializer
@@ -218,7 +221,8 @@ define <8 x i32>@test_int_x86_avx2_vpdpbssds_256(<8 x i32> %x0, <32 x i8> %x1, p
 ; CHECK-NEXT:    [[TMP30:%.*]] = load <32 x i8>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    [[TMP29:%.*]] = load <32 x i8>, ptr [[TMP9]], align 32
 ; CHECK-NEXT:    [[TMP15:%.*]] = icmp ne <32 x i8> [[TMP13]], zeroinitializer
 ; CHECK-NEXT:    [[TMP16:%.*]] = icmp ne <32 x i8> [[TMP29]], zeroinitializer
@@ -281,7 +285,8 @@ define <4 x i32>@test_int_x86_avx2_vpdpbsud_128(<4 x i32> %x0, <16 x i8> %x1, pt
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i8>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP38:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP38]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i8>, ptr [[TMP9]], align 16
 ; CHECK-NEXT:    [[TMP23:%.*]] = icmp ne <16 x i8> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP24:%.*]] = icmp ne <16 x i8> [[_MSLD]], zeroinitializer
@@ -344,7 +349,8 @@ define <4 x i32>@test_int_x86_avx2_vpdpbsuds_128(<4 x i32> %x0, <16 x i8> %x1, p
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i8>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP38:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP38]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i8>, ptr [[TMP9]], align 16
 ; CHECK-NEXT:    [[TMP23:%.*]] = icmp ne <16 x i8> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP24:%.*]] = icmp ne <16 x i8> [[_MSLD]], zeroinitializer
@@ -407,7 +413,8 @@ define <8 x i32>@test_int_x86_avx2_vpdpbsud_256(<8 x i32> %x0, <32 x i8> %x1, pt
 ; CHECK-NEXT:    [[X2:%.*]] = load <32 x i8>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP38:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP38]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i8>, ptr [[TMP9]], align 32
 ; CHECK-NEXT:    [[TMP23:%.*]] = icmp ne <32 x i8> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP24:%.*]] = icmp ne <32 x i8> [[_MSLD]], zeroinitializer
@@ -470,7 +477,8 @@ define <8 x i32>@test_int_x86_avx2_vpdpbsuds_256(<8 x i32> %x0, <32 x i8> %x1, p
 ; CHECK-NEXT:    [[X2:%.*]] = load <32 x i8>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP38:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP38]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i8>, ptr [[TMP9]], align 32
 ; CHECK-NEXT:    [[TMP23:%.*]] = icmp ne <32 x i8> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP24:%.*]] = icmp ne <32 x i8> [[_MSLD]], zeroinitializer
@@ -533,7 +541,8 @@ define <4 x i32>@test_int_x86_avx2_vpdpbuud_128(<4 x i32> %x0, <16 x i8> %x1, pt
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i8>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP38:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP38]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i8>, ptr [[TMP9]], align 16
 ; CHECK-NEXT:    [[TMP23:%.*]] = icmp ne <16 x i8> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP24:%.*]] = icmp ne <16 x i8> [[_MSLD]], zeroinitializer
@@ -596,7 +605,8 @@ define <4 x i32>@test_int_x86_avx2_vpdpbuuds_128(<4 x i32> %x0, <16 x i8> %x1, p
 ; CHECK-NEXT:    [[X2:%.*]] = load <16 x i8>, ptr [[X2P]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP38:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP38]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i8>, ptr [[TMP9]], align 16
 ; CHECK-NEXT:    [[TMP23:%.*]] = icmp ne <16 x i8> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP24:%.*]] = icmp ne <16 x i8> [[_MSLD]], zeroinitializer
@@ -659,7 +669,8 @@ define <8 x i32>@test_int_x86_avx2_vpdpbuud_256(<8 x i32> %x0, <32 x i8> %x1, pt
 ; CHECK-NEXT:    [[X2:%.*]] = load <32 x i8>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP38:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP38]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i8>, ptr [[TMP9]], align 32
 ; CHECK-NEXT:    [[TMP23:%.*]] = icmp ne <32 x i8> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP24:%.*]] = icmp ne <32 x i8> [[_MSLD]], zeroinitializer
@@ -722,7 +733,8 @@ define <8 x i32>@test_int_x86_avx2_vpdpbuuds_256(<8 x i32> %x0, <32 x i8> %x1, p
 ; CHECK-NEXT:    [[X2:%.*]] = load <32 x i8>, ptr [[X2P]], align 32
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X2P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP38:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP38]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <32 x i8>, ptr [[TMP9]], align 32
 ; CHECK-NEXT:    [[TMP23:%.*]] = icmp ne <32 x i8> [[TMP3]], zeroinitializer
 ; CHECK-NEXT:    [[TMP24:%.*]] = icmp ne <32 x i8> [[_MSLD]], zeroinitializer
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/f16c-intrinsics-upgrade.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/f16c-intrinsics-upgrade.ll
index a169ead8ea20a..d7bed9659dd75 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/f16c-intrinsics-upgrade.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/f16c-intrinsics-upgrade.ll
@@ -41,7 +41,8 @@ define <4 x float> @test_x86_vcvtph2ps_128_m(ptr nocapture %a) #0 {
 ; CHECK-NEXT:    [[LOAD:%.*]] = load <8 x i16>, ptr [[A]], align 16
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP6]], align 16
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = shufflevector <8 x i16> [[_MSLD]], <8 x i16> [[_MSLD]], <4 x i32> <i32 0, i32 1, i32 2, i32 3>
 ; CHECK-NEXT:    [[TMP7:%.*]] = shufflevector <8 x i16> [[LOAD]], <8 x i16> [[LOAD]], <4 x i32> <i32 0, i32 1, i32 2, i32 3>
@@ -86,7 +87,8 @@ define <8 x float> @test_x86_vcvtph2ps_256_m(ptr nocapture %a) nounwind #0 {
 ; CHECK-NEXT:    [[LOAD:%.*]] = load <8 x i16>, ptr [[A]], align 16
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP6]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = bitcast <8 x i16> [[LOAD]] to <8 x half>
 ; CHECK-NEXT:    [[TMP8:%.*]] = zext <8 x i16> [[_MSLD]] to <8 x i32>
@@ -113,7 +115,8 @@ define <4 x float> @test_x86_vcvtph2ps_128_scalar(ptr %ptr) #0 {
 ; CHECK-NEXT:    [[LOAD:%.*]] = load i64, ptr [[PTR]], align 8
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP6]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <2 x i64> splat (i64 -1), i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[INS1:%.*]] = insertelement <2 x i64> poison, i64 [[LOAD]], i32 0
@@ -151,7 +154,8 @@ define <4 x float> @test_x86_vcvtph2ps_128_scalar2(ptr %ptr) #0 {
 ; CHECK-NEXT:    [[LOAD:%.*]] = load i64, ptr [[PTR]], align 8
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP6]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = insertelement <2 x i64> splat (i64 -1), i64 [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[INS:%.*]] = insertelement <2 x i64> poison, i64 [[LOAD]], i32 0
@@ -173,3 +177,6 @@ define <4 x float> @test_x86_vcvtph2ps_128_scalar2(ptr %ptr) #0 {
 }
 
 attributes #0 = { sanitize_memory }
+;.
+; CHECK: [[PROF1]] = !{!"branch_weights", i32 1, i32 1048575}
+;.
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/f16c-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/f16c-intrinsics.ll
index cd2ccaf32e946..f428b2589d79a 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/f16c-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/f16c-intrinsics.ll
@@ -61,7 +61,8 @@ define void @test_x86_vcvtps2ph_256_m(ptr nocapture %d, <8 x float> %a) nounwind
 ; CHECK:       [[BB6]]:
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[D]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    store <8 x i16> [[TMP21]], ptr [[TMP3]], align 16
 ; CHECK-NEXT:    store <8 x i16> [[TMP0]], ptr [[D]], align 16
 ; CHECK-NEXT:    ret void
@@ -93,7 +94,8 @@ define void @test_x86_vcvtps2ph_128_m(ptr nocapture %d, <4 x float> %a) nounwind
 ; CHECK:       [[BB8]]:
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[D]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    store <4 x i16> [[_MSPROP]], ptr [[TMP4]], align 8
 ; CHECK-NEXT:    store <4 x i16> [[TMP1]], ptr [[D]], align 8
 ; CHECK-NEXT:    ret void
@@ -128,7 +130,8 @@ define void @test_x86_vcvtps2ph_128_m2(ptr nocapture %hf4x16, <4 x float> %f4X86
 ; CHECK:       [[BB9]]:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[HF4X16]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store i64 [[_MSPROP]], ptr [[TMP17]], align 8
 ; CHECK-NEXT:    store double [[VECEXT]], ptr [[HF4X16]], align 8
 ; CHECK-NEXT:    ret void
@@ -164,7 +167,8 @@ define void @test_x86_vcvtps2ph_128_m3(ptr nocapture %hf4x16, <4 x float> %f4X86
 ; CHECK:       [[BB9]]:
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[HF4X16]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    store i64 [[VECEXT]], ptr [[TMP17]], align 8
 ; CHECK-NEXT:    store i64 [[VECEXT1]], ptr [[HF4X16]], align 8
 ; CHECK-NEXT:    ret void
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/mmx-intrinsics.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/mmx-intrinsics.ll
index d62fd7e8d1a89..cf77d0b2a35ca 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/mmx-intrinsics.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/mmx-intrinsics.ll
@@ -2648,14 +2648,15 @@ define void @test25(ptr %p, <1 x i64> %a) nounwind optsize ssp #0 {
 ; CHECK-NEXT:    [[MMX_VAR_I:%.*]] = bitcast i64 [[TMP0]] to <1 x i64>
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; CHECK-NEXT:    store <1 x i64> [[TMP3]], ptr [[TMP6]], align 1
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1:![0-9]+]]
-; CHECK:       7:
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1:![0-9]+]]
+; CHECK:       8:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR6:[0-9]+]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       8:
+; CHECK:       9:
 ; CHECK-NEXT:    tail call void @llvm.x86.mmx.movnt.dq(ptr [[P]], <1 x i64> [[MMX_VAR_I]]) #[[ATTR2]]
 ; CHECK-NEXT:    ret void
 ;
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/sse-intrinsics-x86.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/sse-intrinsics-x86.ll
index 46e814806f383..36c6120d3afb2 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/sse-intrinsics-x86.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/sse-intrinsics-x86.ll
@@ -204,16 +204,17 @@ define void @test_x86_sse_ldmxcsr(ptr %a0) #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_LDMXCSR:%.*]] = load i32, ptr [[TMP4]], align 1
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i32 [[_LDMXCSR]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
-; CHECK:       5:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
+; CHECK:       6:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       6:
+; CHECK:       7:
 ; CHECK-NEXT:    call void @llvm.x86.sse.ldmxcsr(ptr [[A0]])
 ; CHECK-NEXT:    ret void
 ;
@@ -374,14 +375,15 @@ define void @test_x86_sse_stmxcsr(ptr %a0) #0 {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A0:%.*]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    store i32 0, ptr [[TMP4]], align 4
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP5:%.*]], label [[TMP6:%.*]], !prof [[PROF1]]
-; CHECK:       5:
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !prof [[PROF1]]
+; CHECK:       6:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       6:
+; CHECK:       7:
 ; CHECK-NEXT:    call void @llvm.x86.sse.stmxcsr(ptr [[A0]])
 ; CHECK-NEXT:    ret void
 ;
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/sse2-intrinsics-x86.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/sse2-intrinsics-x86.ll
index fc7b01b034f33..f1e7eb1ad3f17 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/sse2-intrinsics-x86.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/sse2-intrinsics-x86.ll
@@ -216,7 +216,8 @@ define <2 x i64> @test_mm_cvtpd_epi32_zext_load(ptr %p0) nounwind #0 {
 ; CHECK-NEXT:    [[A0:%.*]] = load <2 x double>, ptr [[P0:%.*]], align 16
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P0]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <2 x i64>, ptr [[TMP6]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = icmp ne <2 x i64> [[_MSLD]], zeroinitializer
 ; CHECK-NEXT:    [[TMP8:%.*]] = sext <2 x i1> [[TMP7]] to <2 x i32>
@@ -284,7 +285,8 @@ define <4 x float> @test_x86_sse2_cvtpd2ps_zext_load(ptr %p0) nounwind #0 {
 ; CHECK-NEXT:    [[A0:%.*]] = load <2 x double>, ptr [[P0:%.*]], align 16
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P0]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <2 x i64>, ptr [[TMP6]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = icmp ne <2 x i64> [[_MSLD]], zeroinitializer
 ; CHECK-NEXT:    [[TMP8:%.*]] = sext <2 x i1> [[TMP7]] to <2 x i32>
@@ -375,16 +377,17 @@ define <4 x float> @test_x86_sse2_cvtsd2ss_load(<4 x float> %a0, ptr %p1) #0 {
 ; CHECK-NEXT:    [[A1:%.*]] = load <2 x double>, ptr [[P1:%.*]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P1]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <2 x i64>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = extractelement <2 x i64> [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = insertelement <4 x i32> [[TMP2]], i32 0, i32 0
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i64 [[TMP8]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
+; CHECK:       11:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR5]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       11:
+; CHECK:       12:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.sse2.cvtsd2ss(<4 x float> [[A0:%.*]], <2 x double> [[A1]])
 ; CHECK-NEXT:    store <4 x i32> [[TMP9]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES]]
@@ -409,16 +412,17 @@ define <4 x float> @test_x86_sse2_cvtsd2ss_load_optsize(<4 x float> %a0, ptr %p1
 ; CHECK-NEXT:    [[A1:%.*]] = load <2 x double>, ptr [[P1:%.*]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P1]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <2 x i64>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = extractelement <2 x i64> [[_MSLD]], i32 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = insertelement <4 x i32> [[TMP2]], i32 0, i32 0
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i64 [[TMP8]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
+; CHECK-NEXT:    br i1 [[_MSCMP1]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
+; CHECK:       11:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR5]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       11:
+; CHECK:       12:
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.sse2.cvtsd2ss(<4 x float> [[A0:%.*]], <2 x double> [[A1]])
 ; CHECK-NEXT:    store <4 x i32> [[TMP9]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <4 x float> [[RES]]
@@ -481,7 +485,8 @@ define <2 x i64> @test_mm_cvttpd_epi32_zext_load(ptr %p0) nounwind #0 {
 ; CHECK-NEXT:    [[A0:%.*]] = load <2 x double>, ptr [[P0:%.*]], align 16
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P0]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <2 x i64>, ptr [[TMP6]], align 16
 ; CHECK-NEXT:    [[TMP7:%.*]] = icmp ne <2 x i64> [[_MSLD]], zeroinitializer
 ; CHECK-NEXT:    [[TMP8:%.*]] = sext <2 x i1> [[TMP7]] to <2 x i32>
@@ -1128,7 +1133,8 @@ define <8 x i16> @test_x86_sse2_psrl_w_load(<8 x i16> %a0, ptr %p) #0 {
 ; CHECK-NEXT:    [[A1:%.*]] = load <8 x i16>, ptr [[P:%.*]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i16>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <8 x i16> [[_MSLD]] to i128
 ; CHECK-NEXT:    [[TMP9:%.*]] = trunc i128 [[TMP8]] to i64
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/sse41-intrinsics-x86.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/sse41-intrinsics-x86.ll
index 618dde9b3dac6..cb8725307d2c0 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/sse41-intrinsics-x86.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/sse41-intrinsics-x86.ll
@@ -166,18 +166,19 @@ define <8 x i16> @test_x86_sse41_mpsadbw_load_op0(ptr %ptr, <16 x i8> %a1) #0 {
 ; CHECK-NEXT:    [[A0:%.*]] = load <16 x i8>, ptr [[PTR:%.*]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[PTR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i8>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = bitcast <16 x i8> [[_MSLD]] to i128
 ; CHECK-NEXT:    [[_MSCMP1:%.*]] = icmp ne i128 [[TMP8]], 0
 ; CHECK-NEXT:    [[TMP9:%.*]] = bitcast <16 x i8> [[TMP2]] to i128
 ; CHECK-NEXT:    [[_MSCMP2:%.*]] = icmp ne i128 [[TMP9]], 0
 ; CHECK-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP1]], [[_MSCMP2]]
-; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
-; CHECK:       10:
+; CHECK-NEXT:    br i1 [[_MSOR]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
+; CHECK:       11:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn()
 ; CHECK-NEXT:    unreachable
-; CHECK:       11:
+; CHECK:       12:
 ; CHECK-NEXT:    [[RES:%.*]] = call <8 x i16> @llvm.x86.sse41.mpsadbw(<16 x i8> [[A0]], <16 x i8> [[A1:%.*]], i8 7)
 ; CHECK-NEXT:    store <8 x i16> zeroinitializer, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x i16> [[RES]]
@@ -374,7 +375,8 @@ define <2 x double> @test_x86_sse41_round_sd_load(<2 x double> %a0, ptr %a1) #0
 ; CHECK-NEXT:    [[A1B:%.*]] = load <2 x double>, ptr [[A1:%.*]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[A1]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <2 x i64>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = shufflevector <2 x i64> [[TMP2]], <2 x i64> [[_MSLD]], <2 x i32> <i32 2, i32 1>
 ; CHECK-NEXT:    [[RES:%.*]] = call <2 x double> @llvm.x86.sse41.round.sd(<2 x double> [[A0:%.*]], <2 x double> [[A1B]], i32 7)
@@ -401,7 +403,8 @@ define <4 x float> @test_x86_sse41_round_ss_load(<4 x float> %a0, ptr %a1) #0 {
 ; CHECK-NEXT:    [[A1B:%.*]] = load <4 x float>, ptr [[A1:%.*]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[A1]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP7]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = shufflevector <4 x i32> [[TMP2]], <4 x i32> [[_MSLD]], <4 x i32> <i32 4, i32 1, i32 2, i32 3>
 ; CHECK-NEXT:    [[RES:%.*]] = call <4 x float> @llvm.x86.sse41.round.ss(<4 x float> [[A0:%.*]], <4 x float> [[A1B]], i32 7)
diff --git a/llvm/test/Instrumentation/MemorySanitizer/X86/vararg_shadow.ll b/llvm/test/Instrumentation/MemorySanitizer/X86/vararg_shadow.ll
index c549c165ee966..d2c5d44a12b5e 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/X86/vararg_shadow.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/X86/vararg_shadow.ll
@@ -22,11 +22,13 @@ define linkonce_odr dso_local void @_Z4testIcEvT_(i8 noundef signext %arg) sanit
 ; CHECK-NEXT:    [[ARG_ADDR:%.*]] = alloca i8, align 1
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 1 [[TMP3]], i8 -1, i64 1, i1 false)
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    store i8 [[TMP0]], ptr [[TMP6]], align 1
 ; CHECK-NEXT:    store i8 [[ARG]], ptr [[ARG_ADDR]], align 1
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -34,7 +36,8 @@ define linkonce_odr dso_local void @_Z4testIcEvT_(i8 noundef signext %arg) sanit
 ; CHECK-NEXT:    [[TMP7:%.*]] = load i8, ptr [[ARG_ADDR]], align 1
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i8, ptr [[TMP10]], align 1
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = sext i8 [[_MSLD]] to i32
 ; CHECK-NEXT:    [[CONV:%.*]] = sext i8 [[TMP7]] to i32
@@ -65,11 +68,13 @@ define linkonce_odr dso_local void @_Z4testIiEvT_(i32 noundef %arg) sanitize_mem
 ; CHECK-NEXT:    [[ARG_ADDR:%.*]] = alloca i32, align 4
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP3]], i8 -1, i64 4, i1 false)
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    store i32 [[TMP0]], ptr [[TMP6]], align 4
 ; CHECK-NEXT:    store i32 [[ARG]], ptr [[ARG_ADDR]], align 4
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -77,7 +82,8 @@ define linkonce_odr dso_local void @_Z4testIiEvT_(i32 noundef %arg) sanitize_mem
 ; CHECK-NEXT:    [[TMP7:%.*]] = load i32, ptr [[ARG_ADDR]], align 4
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP10]], align 4
 ; CHECK-NEXT:    store i32 [[_MSLD]], ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    store i32 0, ptr getelementptr (i8, ptr @__msan_param_tls, i64 8), align 8
@@ -105,11 +111,13 @@ define linkonce_odr dso_local void @_Z4testIfEvT_(float noundef %arg) sanitize_m
 ; CHECK-NEXT:    [[ARG_ADDR:%.*]] = alloca float, align 4
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP3]], i8 -1, i64 4, i1 false)
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    store i32 [[TMP0]], ptr [[TMP6]], align 4
 ; CHECK-NEXT:    store float [[ARG]], ptr [[ARG_ADDR]], align 4
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -117,7 +125,8 @@ define linkonce_odr dso_local void @_Z4testIfEvT_(float noundef %arg) sanitize_m
 ; CHECK-NEXT:    [[TMP7:%.*]] = load float, ptr [[ARG_ADDR]], align 4
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP10]], align 4
 ; CHECK-NEXT:    [[TMP11:%.*]] = zext i32 [[_MSLD]] to i64
 ; CHECK-NEXT:    [[CONV:%.*]] = fpext float [[TMP7]] to double
@@ -148,11 +157,13 @@ define linkonce_odr dso_local void @_Z4testIdEvT_(double noundef %arg) sanitize_
 ; CHECK-NEXT:    [[ARG_ADDR:%.*]] = alloca double, align 8
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP3]], i8 -1, i64 8, i1 false)
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    store i64 [[TMP0]], ptr [[TMP6]], align 8
 ; CHECK-NEXT:    store double [[ARG]], ptr [[ARG_ADDR]], align 8
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -160,7 +171,8 @@ define linkonce_odr dso_local void @_Z4testIdEvT_(double noundef %arg) sanitize_
 ; CHECK-NEXT:    [[TMP7:%.*]] = load double, ptr [[ARG_ADDR]], align 8
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP10]], align 8
 ; CHECK-NEXT:    store i64 [[_MSLD]], ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    store i32 0, ptr getelementptr (i8, ptr @__msan_param_tls, i64 8), align 8
@@ -188,11 +200,13 @@ define linkonce_odr dso_local void @_Z4testIeEvT_(x86_fp80 noundef %arg) sanitiz
 ; CHECK-NEXT:    [[ARG_ADDR:%.*]] = alloca x86_fp80, align 16
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP3]], i8 -1, i64 16, i1 false)
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    store i80 [[TMP0]], ptr [[TMP6]], align 16
 ; CHECK-NEXT:    store x86_fp80 [[ARG]], ptr [[ARG_ADDR]], align 16
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -200,7 +214,8 @@ define linkonce_odr dso_local void @_Z4testIeEvT_(x86_fp80 noundef %arg) sanitiz
 ; CHECK-NEXT:    [[TMP7:%.*]] = load x86_fp80, ptr [[ARG_ADDR]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[ARG_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i80, ptr [[TMP10]], align 16
 ; CHECK-NEXT:    store i80 [[_MSLD]], ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    store i32 0, ptr getelementptr (i8, ptr @__msan_param_tls, i64 16), align 8
@@ -228,11 +243,13 @@ define linkonce_odr dso_local void @_Z4testI6IntIntEvT_(i64 %arg.coerce) sanitiz
 ; CHECK-NEXT:    [[ARG:%.*]] = alloca [[STRUCT_INTINT:%.*]], align 8
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP3]], i8 -1, i64 8, i1 false)
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    store i64 [[TMP0]], ptr [[TMP6]], align 8
 ; CHECK-NEXT:    store i64 [[ARG_COERCE]], ptr [[ARG]], align 8
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -240,7 +257,8 @@ define linkonce_odr dso_local void @_Z4testI6IntIntEvT_(i64 %arg.coerce) sanitiz
 ; CHECK-NEXT:    [[AGG_TMP_SROA_0_0_COPYLOAD:%.*]] = load i64, ptr [[ARG]], align 8
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP9]], align 8
 ; CHECK-NEXT:    store i64 [[_MSLD]], ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    store i32 0, ptr getelementptr (i8, ptr @__msan_param_tls, i64 8), align 8
@@ -269,17 +287,20 @@ define linkonce_odr dso_local void @_Z4testI10Int64Int64EvT_(i64 %arg.coerce0, i
 ; CHECK-NEXT:    [[ARG:%.*]] = alloca [[STRUCT_INT64INT64:%.*]], align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP4]], i8 -1, i64 16, i1 false)
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP19:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CHECK-NEXT:    store i64 [[TMP0]], ptr [[TMP7]], align 8
 ; CHECK-NEXT:    store i64 [[ARG_COERCE0]], ptr [[ARG]], align 8
 ; CHECK-NEXT:    [[TMP8:%.*]] = getelementptr inbounds { i64, i64 }, ptr [[ARG]], i64 0, i32 1
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    store i64 [[TMP1]], ptr [[TMP11]], align 8
 ; CHECK-NEXT:    store i64 [[ARG_COERCE1]], ptr [[TMP8]], align 8
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -287,12 +308,14 @@ define linkonce_odr dso_local void @_Z4testI10Int64Int64EvT_(i64 %arg.coerce0, i
 ; CHECK-NEXT:    [[AGG_TMP_SROA_0_0_COPYLOAD:%.*]] = load i64, ptr [[ARG]], align 8
 ; CHECK-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP13]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP14]], align 8
 ; CHECK-NEXT:    [[AGG_TMP_SROA_2_0_COPYLOAD:%.*]] = load i64, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[TMP8]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP21:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP21]] to ptr
 ; CHECK-NEXT:    [[_MSLD1:%.*]] = load i64, ptr [[TMP17]], align 8
 ; CHECK-NEXT:    store i64 [[_MSLD]], ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    store i64 [[_MSLD1]], ptr getelementptr (i8, ptr @__msan_param_tls, i64 8), align 8
@@ -327,17 +350,20 @@ define linkonce_odr dso_local void @_Z4testI12DoubleDoubleEvT_(double %arg.coerc
 ; CHECK-NEXT:    [[ARG:%.*]] = alloca [[STRUCT_DOUBLEDOUBLE:%.*]], align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP4]], i8 -1, i64 16, i1 false)
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP19:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CHECK-NEXT:    store i64 [[TMP0]], ptr [[TMP7]], align 8
 ; CHECK-NEXT:    store double [[ARG_COERCE0]], ptr [[ARG]], align 8
 ; CHECK-NEXT:    [[TMP8:%.*]] = getelementptr inbounds { double, double }, ptr [[ARG]], i64 0, i32 1
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    store i64 [[TMP1]], ptr [[TMP11]], align 8
 ; CHECK-NEXT:    store double [[ARG_COERCE1]], ptr [[TMP8]], align 8
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -345,12 +371,14 @@ define linkonce_odr dso_local void @_Z4testI12DoubleDoubleEvT_(double %arg.coerc
 ; CHECK-NEXT:    [[AGG_TMP_SROA_0_0_COPYLOAD:%.*]] = load double, ptr [[ARG]], align 8
 ; CHECK-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP13]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP14]], align 8
 ; CHECK-NEXT:    [[AGG_TMP_SROA_2_0_COPYLOAD:%.*]] = load double, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[TMP8]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP21:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP21]] to ptr
 ; CHECK-NEXT:    [[_MSLD1:%.*]] = load i64, ptr [[TMP17]], align 8
 ; CHECK-NEXT:    store i64 [[_MSLD]], ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    store i64 [[_MSLD1]], ptr getelementptr (i8, ptr @__msan_param_tls, i64 8), align 8
@@ -381,23 +409,27 @@ define linkonce_odr dso_local void @_Z4testI7Double4EvT_(ptr noundef byval(%stru
 ; CHECK-NEXT:  entry:
 ; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP1]], 2097152
+; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_param_tls, i64 32, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    call void @_Z3usePv(ptr noundef nonnull [[ARG]])
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 @__msan_param_tls, ptr align 8 [[TMP5]], i64 32, i1 false)
 ; CHECK-NEXT:    store i32 0, ptr getelementptr (i8, ptr @__msan_param_tls, i64 32), align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 40), ptr align 8 [[TMP8]], i64 32, i1 false)
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 176), ptr align 8 [[TMP11]], i64 32, i1 false)
 ; CHECK-NEXT:    store i64 32, ptr @__msan_va_arg_overflow_size_tls, align 8
 ; CHECK-NEXT:    call void (ptr, i32, ...) @_Z5test2I7Double4EvT_iz(ptr noundef nonnull byval([[STRUCT_DOUBLE4]]) align 8 [[ARG]], i32 noundef 1, ptr noundef nonnull byval([[STRUCT_DOUBLE4]]) align 8 [[ARG]])
@@ -421,17 +453,20 @@ define linkonce_odr dso_local void @_Z4testI11DoubleFloatEvT_(double %arg.coerce
 ; CHECK-NEXT:    [[ARG:%.*]] = alloca [[STRUCT_DOUBLEFLOAT:%.*]], align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP4]], i8 -1, i64 16, i1 false)
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP19:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CHECK-NEXT:    store i64 [[TMP0]], ptr [[TMP7]], align 8
 ; CHECK-NEXT:    store double [[ARG_COERCE0]], ptr [[ARG]], align 8
 ; CHECK-NEXT:    [[TMP8:%.*]] = getelementptr inbounds { double, float }, ptr [[ARG]], i64 0, i32 1
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    store i32 [[TMP1]], ptr [[TMP11]], align 8
 ; CHECK-NEXT:    store float [[ARG_COERCE1]], ptr [[TMP8]], align 8
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -439,12 +474,14 @@ define linkonce_odr dso_local void @_Z4testI11DoubleFloatEvT_(double %arg.coerce
 ; CHECK-NEXT:    [[AGG_TMP_SROA_0_0_COPYLOAD:%.*]] = load double, ptr [[ARG]], align 8
 ; CHECK-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP13]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP14]], align 8
 ; CHECK-NEXT:    [[AGG_TMP_SROA_2_0_COPYLOAD:%.*]] = load float, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[TMP8]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP21:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP21]] to ptr
 ; CHECK-NEXT:    [[_MSLD1:%.*]] = load i32, ptr [[TMP17]], align 8
 ; CHECK-NEXT:    store i64 [[_MSLD]], ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    store i32 [[_MSLD1]], ptr getelementptr (i8, ptr @__msan_param_tls, i64 8), align 8
@@ -475,23 +512,27 @@ define linkonce_odr dso_local void @_Z4testI11LongDouble2EvT_(ptr noundef byval(
 ; CHECK-NEXT:  entry:
 ; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP1]], 2097152
+; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_param_tls, i64 32, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    call void @_Z3usePv(ptr noundef nonnull [[ARG]])
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 @__msan_param_tls, ptr align 8 [[TMP5]], i64 32, i1 false)
 ; CHECK-NEXT:    store i32 0, ptr getelementptr (i8, ptr @__msan_param_tls, i64 32), align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 40), ptr align 8 [[TMP8]], i64 32, i1 false)
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 176), ptr align 8 [[TMP11]], i64 32, i1 false)
 ; CHECK-NEXT:    store i64 32, ptr @__msan_va_arg_overflow_size_tls, align 8
 ; CHECK-NEXT:    call void (ptr, i32, ...) @_Z5test2I11LongDouble2EvT_iz(ptr noundef nonnull byval([[STRUCT_LONGDOUBLE2]]) align 16 [[ARG]], i32 noundef 1, ptr noundef nonnull byval([[STRUCT_LONGDOUBLE2]]) align 16 [[ARG]])
@@ -509,23 +550,27 @@ define linkonce_odr dso_local void @_Z4testI11LongDouble4EvT_(ptr noundef byval(
 ; CHECK-NEXT:  entry:
 ; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP1]], 2097152
+; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_param_tls, i64 64, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    call void @_Z3usePv(ptr noundef nonnull [[ARG]])
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 @__msan_param_tls, ptr align 8 [[TMP5]], i64 64, i1 false)
 ; CHECK-NEXT:    store i32 0, ptr getelementptr (i8, ptr @__msan_param_tls, i64 64), align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 72), ptr align 8 [[TMP8]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 176), ptr align 8 [[TMP11]], i64 64, i1 false)
 ; CHECK-NEXT:    store i64 64, ptr @__msan_va_arg_overflow_size_tls, align 8
 ; CHECK-NEXT:    call void (ptr, i32, ...) @_Z5test2I11LongDouble4EvT_iz(ptr noundef nonnull byval([[STRUCT_LONGDOUBLE4]]) align 16 [[ARG]], i32 noundef 1, ptr noundef nonnull byval([[STRUCT_LONGDOUBLE4]]) align 16 [[ARG]])
@@ -550,28 +595,32 @@ define linkonce_odr dso_local void @_Z5test2IcEvT_iz(i8 noundef signext %t, i32
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -609,28 +658,32 @@ define linkonce_odr dso_local void @_Z5test2IiEvT_iz(i32 noundef %t, i32 noundef
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -660,28 +713,32 @@ define linkonce_odr dso_local void @_Z5test2IfEvT_iz(float noundef %t, i32 nound
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -711,28 +768,32 @@ define linkonce_odr dso_local void @_Z5test2IdEvT_iz(double noundef %t, i32 noun
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -762,28 +823,32 @@ define linkonce_odr dso_local void @_Z5test2IeEvT_iz(x86_fp80 noundef %t, i32 no
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -813,28 +878,32 @@ define linkonce_odr dso_local void @_Z5test2I6IntIntEvT_iz(i64 %t.coerce, i32 no
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -864,28 +933,32 @@ define linkonce_odr dso_local void @_Z5test2I10Int64Int64EvT_iz(i64 %t.coerce0,
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -915,28 +988,32 @@ define linkonce_odr dso_local void @_Z5test2I12DoubleDoubleEvT_iz(double %t.coer
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -966,28 +1043,32 @@ define linkonce_odr dso_local void @_Z5test2I7Double4EvT_iz(ptr noundef byval(%s
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -1017,28 +1098,32 @@ define linkonce_odr dso_local void @_Z5test2I11DoubleFloatEvT_iz(double %t.coerc
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -1068,28 +1153,32 @@ define linkonce_odr dso_local void @_Z5test2I11LongDouble2EvT_iz(ptr noundef byv
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -1119,28 +1208,32 @@ define linkonce_odr dso_local void @_Z5test2I11LongDouble4EvT_iz(ptr noundef byv
 ; CHECK-NEXT:    [[TMP3:%.*]] = call i64 @llvm.umin.i64(i64 [[TMP1]], i64 800)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_va_arg_tls, i64 [[TMP3]], i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
-; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[ARGS:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP6]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[ARGS]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP9]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr nonnull [[ARGS]])
 ; CHECK-NEXT:    [[TMP12:%.*]] = getelementptr i8, ptr [[ARGS]], i64 16
 ; CHECK-NEXT:    [[TMP13:%.*]] = load ptr, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[TMP13]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP16]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[ARGS]], i64 8
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    [[TMP24:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP24]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -1165,95 +1258,117 @@ define linkonce_odr dso_local void @_Z4test3I11LongDouble4EvT_(ptr noundef byval
 ; CHECK-NEXT:  entry:
 ; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
+; CHECK-NEXT:    [[TMP67:%.*]] = add i64 [[TMP1]], 2097152
+; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP67]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP2]], ptr align 8 @__msan_param_tls, i64 64, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    call void @_Z3usePv(ptr noundef nonnull [[ARG]])
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP68:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP68]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 @__msan_param_tls, ptr align 8 [[TMP5]], i64 64, i1 false)
 ; CHECK-NEXT:    store i32 0, ptr getelementptr (i8, ptr @__msan_param_tls, i64 64), align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP69:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP69]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 72), ptr align 8 [[TMP8]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP71:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP71]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 136), ptr align 8 [[TMP11]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP72:%.*]] = add i64 [[TMP13]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP72]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 200), ptr align 8 [[TMP14]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP73:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP73]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 264), ptr align 8 [[TMP17]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP18:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP19:%.*]] = xor i64 [[TMP18]], 87960930222080
-; CHECK-NEXT:    [[TMP20:%.*]] = inttoptr i64 [[TMP19]] to ptr
+; CHECK-NEXT:    [[TMP75:%.*]] = add i64 [[TMP19]], 2097152
+; CHECK-NEXT:    [[TMP20:%.*]] = inttoptr i64 [[TMP75]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 328), ptr align 8 [[TMP20]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP76:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP76]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 392), ptr align 8 [[TMP23]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP24:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP25:%.*]] = xor i64 [[TMP24]], 87960930222080
-; CHECK-NEXT:    [[TMP26:%.*]] = inttoptr i64 [[TMP25]] to ptr
+; CHECK-NEXT:    [[TMP77:%.*]] = add i64 [[TMP25]], 2097152
+; CHECK-NEXT:    [[TMP26:%.*]] = inttoptr i64 [[TMP77]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 456), ptr align 8 [[TMP26]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP27:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP28:%.*]] = xor i64 [[TMP27]], 87960930222080
-; CHECK-NEXT:    [[TMP29:%.*]] = inttoptr i64 [[TMP28]] to ptr
+; CHECK-NEXT:    [[TMP79:%.*]] = add i64 [[TMP28]], 2097152
+; CHECK-NEXT:    [[TMP29:%.*]] = inttoptr i64 [[TMP79]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 520), ptr align 8 [[TMP29]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP30:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP31:%.*]] = xor i64 [[TMP30]], 87960930222080
-; CHECK-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP31]] to ptr
+; CHECK-NEXT:    [[TMP80:%.*]] = add i64 [[TMP31]], 2097152
+; CHECK-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP80]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 584), ptr align 8 [[TMP32]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP33:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP34:%.*]] = xor i64 [[TMP33]], 87960930222080
-; CHECK-NEXT:    [[TMP35:%.*]] = inttoptr i64 [[TMP34]] to ptr
+; CHECK-NEXT:    [[TMP81:%.*]] = add i64 [[TMP34]], 2097152
+; CHECK-NEXT:    [[TMP35:%.*]] = inttoptr i64 [[TMP81]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 648), ptr align 8 [[TMP35]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP36:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP37:%.*]] = xor i64 [[TMP36]], 87960930222080
-; CHECK-NEXT:    [[TMP38:%.*]] = inttoptr i64 [[TMP37]] to ptr
+; CHECK-NEXT:    [[TMP83:%.*]] = add i64 [[TMP37]], 2097152
+; CHECK-NEXT:    [[TMP38:%.*]] = inttoptr i64 [[TMP83]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 712), ptr align 8 [[TMP38]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP39:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP40:%.*]] = xor i64 [[TMP39]], 87960930222080
-; CHECK-NEXT:    [[TMP41:%.*]] = inttoptr i64 [[TMP40]] to ptr
+; CHECK-NEXT:    [[TMP84:%.*]] = add i64 [[TMP40]], 2097152
+; CHECK-NEXT:    [[TMP41:%.*]] = inttoptr i64 [[TMP84]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 176), ptr align 8 [[TMP41]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP42:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP43:%.*]] = xor i64 [[TMP42]], 87960930222080
-; CHECK-NEXT:    [[TMP44:%.*]] = inttoptr i64 [[TMP43]] to ptr
+; CHECK-NEXT:    [[TMP85:%.*]] = add i64 [[TMP43]], 2097152
+; CHECK-NEXT:    [[TMP44:%.*]] = inttoptr i64 [[TMP85]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 240), ptr align 8 [[TMP44]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP45:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP46:%.*]] = xor i64 [[TMP45]], 87960930222080
-; CHECK-NEXT:    [[TMP47:%.*]] = inttoptr i64 [[TMP46]] to ptr
+; CHECK-NEXT:    [[TMP87:%.*]] = add i64 [[TMP46]], 2097152
+; CHECK-NEXT:    [[TMP47:%.*]] = inttoptr i64 [[TMP87]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 304), ptr align 8 [[TMP47]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP48:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP49:%.*]] = xor i64 [[TMP48]], 87960930222080
-; CHECK-NEXT:    [[TMP50:%.*]] = inttoptr i64 [[TMP49]] to ptr
+; CHECK-NEXT:    [[TMP66:%.*]] = add i64 [[TMP49]], 2097152
+; CHECK-NEXT:    [[TMP50:%.*]] = inttoptr i64 [[TMP66]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 368), ptr align 8 [[TMP50]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP51:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP52:%.*]] = xor i64 [[TMP51]], 87960930222080
-; CHECK-NEXT:    [[TMP53:%.*]] = inttoptr i64 [[TMP52]] to ptr
+; CHECK-NEXT:    [[TMP70:%.*]] = add i64 [[TMP52]], 2097152
+; CHECK-NEXT:    [[TMP53:%.*]] = inttoptr i64 [[TMP70]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 432), ptr align 8 [[TMP53]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP54:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP55:%.*]] = xor i64 [[TMP54]], 87960930222080
-; CHECK-NEXT:    [[TMP56:%.*]] = inttoptr i64 [[TMP55]] to ptr
+; CHECK-NEXT:    [[TMP74:%.*]] = add i64 [[TMP55]], 2097152
+; CHECK-NEXT:    [[TMP56:%.*]] = inttoptr i64 [[TMP74]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 496), ptr align 8 [[TMP56]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP57:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP58:%.*]] = xor i64 [[TMP57]], 87960930222080
-; CHECK-NEXT:    [[TMP59:%.*]] = inttoptr i64 [[TMP58]] to ptr
+; CHECK-NEXT:    [[TMP78:%.*]] = add i64 [[TMP58]], 2097152
+; CHECK-NEXT:    [[TMP59:%.*]] = inttoptr i64 [[TMP78]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 560), ptr align 8 [[TMP59]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP60:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP61:%.*]] = xor i64 [[TMP60]], 87960930222080
-; CHECK-NEXT:    [[TMP62:%.*]] = inttoptr i64 [[TMP61]] to ptr
+; CHECK-NEXT:    [[TMP82:%.*]] = add i64 [[TMP61]], 2097152
+; CHECK-NEXT:    [[TMP62:%.*]] = inttoptr i64 [[TMP82]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 624), ptr align 8 [[TMP62]], i64 64, i1 false)
 ; CHECK-NEXT:    [[TMP63:%.*]] = ptrtoint ptr [[ARG]] to i64
 ; CHECK-NEXT:    [[TMP64:%.*]] = xor i64 [[TMP63]], 87960930222080
-; CHECK-NEXT:    [[TMP65:%.*]] = inttoptr i64 [[TMP64]] to ptr
+; CHECK-NEXT:    [[TMP86:%.*]] = add i64 [[TMP64]], 2097152
+; CHECK-NEXT:    [[TMP65:%.*]] = inttoptr i64 [[TMP86]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 688), ptr align 8 [[TMP65]], i64 64, i1 false)
 ; CHECK-NEXT:    call void @llvm.memset.p0.i32(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 752), i8 0, i32 48, i1 false)
 ; CHECK-NEXT:    store i64 1280, ptr @__msan_va_arg_overflow_size_tls, align 8
diff --git a/llvm/test/Instrumentation/MemorySanitizer/byval.ll b/llvm/test/Instrumentation/MemorySanitizer/byval.ll
index 9f6a7cb189547..6805617b16882 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/byval.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/byval.ll
@@ -13,22 +13,24 @@ define i128 @ByValArgument(i32, ptr byval(i128) %p) sanitize_memory {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP3]], ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 16, i1 false)
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 [[TMP5]], ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 8), i64 16, i1 false)
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP2]], 17592188141568
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP4]], ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 16, i1 false)
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 [[TMP6]], ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 8), i64 16, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[X:%.*]] = load i128, ptr [[P]], align 8
-; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[P]] to i64
-; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
-; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 17592186044416
+; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[P]] to i64
+; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP8]], 2097152
 ; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
-; CHECK-NEXT:    [[_MSLD:%.*]] = load i128, ptr [[TMP8]], align 8
-; CHECK-NEXT:    [[TMP11:%.*]] = load i32, ptr [[TMP10]], align 8
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 17592188141568
+; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
+; CHECK-NEXT:    [[_MSLD:%.*]] = load i128, ptr [[TMP10]], align 8
+; CHECK-NEXT:    [[TMP13:%.*]] = load i32, ptr [[TMP12]], align 8
 ; CHECK-NEXT:    store i128 [[_MSLD]], ptr @__msan_retval_tls, align 8
-; CHECK-NEXT:    store i32 [[TMP11]], ptr @__msan_retval_origin_tls, align 4
+; CHECK-NEXT:    store i32 [[TMP13]], ptr @__msan_retval_origin_tls, align 4
 ; CHECK-NEXT:    ret i128 [[X]]
 ;
 entry:
@@ -42,10 +44,11 @@ define i128 @ByValArgumentNoSanitize(i32, ptr byval(i128) %p) {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP3]], i8 0, i64 16, i1 false)
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP2]], 17592188141568
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP4]], i8 0, i64 16, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[X:%.*]] = load i128, ptr [[P]], align 8
 ; CHECK-NEXT:    store i128 0, ptr @__msan_retval_tls, align 8
@@ -63,11 +66,12 @@ define void @ByValForward(i32, ptr byval(i128) %p) sanitize_memory {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP3]], ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 16, i1 false)
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 [[TMP5]], ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 8), i64 16, i1 false)
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP2]], 17592188141568
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP4]], ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 16, i1 false)
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 [[TMP6]], ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 8), i64 16, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    call void @Fn(ptr [[P]])
@@ -84,10 +88,11 @@ define void @ByValForwardNoSanitize(i32, ptr byval(i128) %p) {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP3]], i8 0, i64 16, i1 false)
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP2]], 17592188141568
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP4]], i8 0, i64 16, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    call void @Fn(ptr [[P]])
@@ -104,16 +109,18 @@ define void @ByValForwardByVal(i32, ptr byval(i128) %p) sanitize_memory {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP3]], ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 16, i1 false)
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 [[TMP12]], ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 16, i1 false)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 [[TMP5]], ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 8), i64 16, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
-; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 17592186044416
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 17592188141568
 ; CHECK-NEXT:    [[TMP10:%.*]] = and i64 [[TMP9]], -4
 ; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr @__msan_param_tls, ptr [[TMP8]], i64 16, i1 false)
@@ -132,15 +139,17 @@ define void @ByValForwardByValNoSanitize(i32, ptr byval(i128) %p) {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP3]], i8 0, i64 16, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
-; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 17592186044416
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 17592188141568
 ; CHECK-NEXT:    [[TMP10:%.*]] = and i64 [[TMP9]], -4
 ; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr @__msan_param_tls, i8 0, i64 16, i1 false)
@@ -161,18 +170,20 @@ define i8 @ByValArgument8(i32, ptr byval(i8) %p) sanitize_memory {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[TMP3]], ptr align 1 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 1, i1 false)
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[TMP14]], ptr align 1 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 1, i1 false)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 [[TMP6]], ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 8), i64 4, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[X:%.*]] = load i8, ptr [[P]], align 1
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
-; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 17592186044416
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 17592188141568
 ; CHECK-NEXT:    [[TMP11:%.*]] = and i64 [[TMP10]], -4
 ; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i8, ptr [[TMP9]], align 1
@@ -192,8 +203,9 @@ define i8 @ByValArgumentNoSanitize8(i32, ptr byval(i8) %p) {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 1 [[TMP3]], i8 0, i64 1, i1 false)
@@ -214,11 +226,12 @@ define void @ByValForward8(i32, ptr byval(i8) %p) sanitize_memory {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[TMP3]], ptr align 1 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 1, i1 false)
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[TMP7]], ptr align 1 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 1, i1 false)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 [[TMP6]], ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 8), i64 4, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
@@ -236,8 +249,9 @@ define void @ByValForwardNoSanitize8(i32, ptr byval(i8) %p) {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 1 [[TMP3]], i8 0, i64 1, i1 false)
@@ -257,17 +271,19 @@ define void @ByValForwardByVal8(i32, ptr byval(i8) %p) sanitize_memory {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[TMP3]], ptr align 1 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 1, i1 false)
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 1 [[TMP13]], ptr align 1 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 1, i1 false)
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 [[TMP6]], ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 8), i64 4, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
-; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 17592186044416
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP14]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 17592188141568
 ; CHECK-NEXT:    [[TMP11:%.*]] = and i64 [[TMP10]], -4
 ; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr @__msan_param_tls, ptr [[TMP9]], i64 1, i1 false)
@@ -286,16 +302,18 @@ define void @ByValForwardByValNoSanitize8(i32, ptr byval(i8) %p) {
 ; CHECK-NEXT:  [[ENTRY:.*:]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 1 [[TMP3]], i8 0, i64 1, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
-; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 17592186044416
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 17592188141568
 ; CHECK-NEXT:    [[TMP11:%.*]] = and i64 [[TMP10]], -4
 ; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr @__msan_param_tls, i8 0, i64 1, i1 false)
diff --git a/llvm/test/Instrumentation/MemorySanitizer/disambiguate-origin.ll b/llvm/test/Instrumentation/MemorySanitizer/disambiguate-origin.ll
index 8feca5829b3ef..1882b73e9bde7 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/disambiguate-origin.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/disambiguate-origin.ll
@@ -1,5 +1,5 @@
 ; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
-; RUN: opt < %s -msan-track-origins=2 -msan-eager-checks=1 -S -passes=msan 2>&1 | FileCheck %s --implicit-check-not="call void @__msan_"
+; RUN: opt < %s -msan-track-origins=2 -msan-eager-checks=1 -S -passes=msan 2>&1 | FileCheck %s
 
 target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
 target triple = "x86_64-unknown-linux-gnu"
@@ -23,22 +23,23 @@ declare void @ManyArgs(i32 noundef %a, i32 noundef %b, i32 noundef %c) nounwind
 define void @TestOne(ptr noundef %a)  nounwind uwtable sanitize_memory {
 ; CHECK-LABEL: @TestOne(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    call void @llvm.donothing(), !dbg [[DBG1:![0-9]+]]
-; CHECK-NEXT:    [[V:%.*]] = load i32, ptr [[A:%.*]], align 4, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 17592186044416, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr, !dbg [[DBG1]]
-; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP2]], align 4, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP5:%.*]] = load i32, ptr [[TMP4]], align 4, !dbg [[DBG1]]
-; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0, !dbg [[DBG7:![0-9]+]]
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP6:%.*]], label [[TMP7:%.*]], !dbg [[DBG7]], !prof [[PROF8:![0-9]+]]
-; CHECK:       6:
-; CHECK-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP5]]) #[[ATTR3:[0-9]+]], !dbg [[DBG7]]
-; CHECK-NEXT:    unreachable, !dbg [[DBG7]]
+; CHECK-NEXT:    call void @llvm.donothing(), !dbg [[DBG2:![0-9]+]]
+; CHECK-NEXT:    [[V:%.*]] = load i32, ptr [[A:%.*]], align 4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP2:%.*]] = add i64 [[TMP1]], 2097152, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP1]], 17592188141568, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0, !dbg [[DBG8:![0-9]+]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP8:%.*]], !dbg [[DBG8]], !prof [[PROF9:![0-9]+]]
 ; CHECK:       7:
-; CHECK-NEXT:    call void @OneArg(i32 noundef [[V]]), !dbg [[DBG7]]
+; CHECK-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP6]]) #[[ATTR3:[0-9]+]], !dbg [[DBG8]]
+; CHECK-NEXT:    unreachable, !dbg [[DBG8]]
+; CHECK:       8:
+; CHECK-NEXT:    call void @OneArg(i32 noundef [[V]]), !dbg [[DBG8]]
 ; CHECK-NEXT:    ret void
 ;
 entry:
@@ -50,53 +51,56 @@ entry:
 define void @TestMany(ptr noundef %a)  nounwind uwtable sanitize_memory {
 ; CHECK-LABEL: @TestMany(
 ; CHECK-NEXT:  entry:
-; CHECK-NEXT:    call void @llvm.donothing(), !dbg [[DBG1]]
-; CHECK-NEXT:    [[X:%.*]] = load i32, ptr [[A:%.*]], align 4, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 17592186044416, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr, !dbg [[DBG1]]
-; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP2]], align 4, !dbg [[DBG1]]
-; CHECK-NEXT:    [[TMP5:%.*]] = load i32, ptr [[TMP4]], align 4, !dbg [[DBG1]]
-; CHECK-NEXT:    [[Y:%.*]] = load i32, ptr [[A]], align 4, !dbg [[DBG9:![0-9]+]]
-; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[A]] to i64, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP7]], 17592186044416, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr, !dbg [[DBG9]]
-; CHECK-NEXT:    [[_MSLD1:%.*]] = load i32, ptr [[TMP8]], align 4, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP11:%.*]] = load i32, ptr [[TMP10]], align 4, !dbg [[DBG9]]
-; CHECK-NEXT:    [[Z:%.*]] = load i32, ptr [[A]], align 4, !dbg [[DBG10:![0-9]+]]
-; CHECK-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[A]] to i64, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP13]], 17592186044416, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr, !dbg [[DBG10]]
-; CHECK-NEXT:    [[_MSLD2:%.*]] = load i32, ptr [[TMP14]], align 4, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP17:%.*]] = load i32, ptr [[TMP16]], align 4, !dbg [[DBG10]]
-; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0, !dbg [[DBG7]]
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP18:%.*]], label [[TMP20:%.*]], !dbg [[DBG7]], !prof [[PROF8]]
-; CHECK:       18:
-; CHECK-NEXT:    [[TMP19:%.*]] = call i32 @__msan_chain_origin(i32 [[TMP5]]), !dbg [[DBG1]]
-; CHECK-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP19]]) #[[ATTR3]], !dbg [[DBG7]]
-; CHECK-NEXT:    unreachable, !dbg [[DBG7]]
-; CHECK:       20:
-; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i32 [[_MSLD1]], 0, !dbg [[DBG7]]
-; CHECK-NEXT:    br i1 [[_MSCMP3]], label [[TMP21:%.*]], label [[TMP23:%.*]], !dbg [[DBG7]], !prof [[PROF8]]
+; CHECK-NEXT:    call void @llvm.donothing(), !dbg [[DBG2]]
+; CHECK-NEXT:    [[X:%.*]] = load i32, ptr [[A:%.*]], align 4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP2:%.*]] = add i64 [[TMP1]], 2097152, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP1]], 17592188141568, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[Y:%.*]] = load i32, ptr [[A]], align 4, !dbg [[DBG10:![0-9]+]]
+; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[A]] to i64, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP8]], 2097152, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 17592188141568, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr, !dbg [[DBG10]]
+; CHECK-NEXT:    [[_MSLD1:%.*]] = load i32, ptr [[TMP10]], align 4, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP13:%.*]] = load i32, ptr [[TMP12]], align 4, !dbg [[DBG10]]
+; CHECK-NEXT:    [[Z:%.*]] = load i32, ptr [[A]], align 4, !dbg [[DBG11:![0-9]+]]
+; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[A]] to i64, !dbg [[DBG11]]
+; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080, !dbg [[DBG11]]
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP15]], 2097152, !dbg [[DBG11]]
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr, !dbg [[DBG11]]
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP15]], 17592188141568, !dbg [[DBG11]]
+; CHECK-NEXT:    [[TMP19:%.*]] = inttoptr i64 [[TMP18]] to ptr, !dbg [[DBG11]]
+; CHECK-NEXT:    [[_MSLD2:%.*]] = load i32, ptr [[TMP17]], align 4, !dbg [[DBG11]]
+; CHECK-NEXT:    [[TMP20:%.*]] = load i32, ptr [[TMP19]], align 4, !dbg [[DBG11]]
+; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0, !dbg [[DBG8]]
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP21:%.*]], label [[TMP23:%.*]], !dbg [[DBG8]], !prof [[PROF9]]
 ; CHECK:       21:
-; CHECK-NEXT:    [[TMP22:%.*]] = call i32 @__msan_chain_origin(i32 [[TMP11]]), !dbg [[DBG9]]
-; CHECK-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP22]]) #[[ATTR3]], !dbg [[DBG7]]
-; CHECK-NEXT:    unreachable, !dbg [[DBG7]]
+; CHECK-NEXT:    [[TMP22:%.*]] = call i32 @__msan_chain_origin(i32 [[TMP6]]), !dbg [[DBG2]]
+; CHECK-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP22]]) #[[ATTR3]], !dbg [[DBG8]]
+; CHECK-NEXT:    unreachable, !dbg [[DBG8]]
 ; CHECK:       23:
-; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i32 [[_MSLD2]], 0, !dbg [[DBG7]]
-; CHECK-NEXT:    br i1 [[_MSCMP4]], label [[TMP24:%.*]], label [[TMP26:%.*]], !dbg [[DBG7]], !prof [[PROF8]]
+; CHECK-NEXT:    [[_MSCMP3:%.*]] = icmp ne i32 [[_MSLD1]], 0, !dbg [[DBG8]]
+; CHECK-NEXT:    br i1 [[_MSCMP3]], label [[TMP24:%.*]], label [[TMP26:%.*]], !dbg [[DBG8]], !prof [[PROF9]]
 ; CHECK:       24:
-; CHECK-NEXT:    [[TMP25:%.*]] = call i32 @__msan_chain_origin(i32 [[TMP17]]), !dbg [[DBG10]]
-; CHECK-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP25]]) #[[ATTR3]], !dbg [[DBG7]]
-; CHECK-NEXT:    unreachable, !dbg [[DBG7]]
+; CHECK-NEXT:    [[TMP25:%.*]] = call i32 @__msan_chain_origin(i32 [[TMP13]]), !dbg [[DBG10]]
+; CHECK-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP25]]) #[[ATTR3]], !dbg [[DBG8]]
+; CHECK-NEXT:    unreachable, !dbg [[DBG8]]
 ; CHECK:       26:
-; CHECK-NEXT:    call void @ManyArgs(i32 noundef [[X]], i32 noundef [[Y]], i32 noundef [[Z]]), !dbg [[DBG7]]
+; CHECK-NEXT:    [[_MSCMP4:%.*]] = icmp ne i32 [[_MSLD2]], 0, !dbg [[DBG8]]
+; CHECK-NEXT:    br i1 [[_MSCMP4]], label [[TMP27:%.*]], label [[TMP29:%.*]], !dbg [[DBG8]], !prof [[PROF9]]
+; CHECK:       27:
+; CHECK-NEXT:    [[TMP28:%.*]] = call i32 @__msan_chain_origin(i32 [[TMP20]]), !dbg [[DBG11]]
+; CHECK-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP28]]) #[[ATTR3]], !dbg [[DBG8]]
+; CHECK-NEXT:    unreachable, !dbg [[DBG8]]
+; CHECK:       29:
+; CHECK-NEXT:    call void @ManyArgs(i32 noundef [[X]], i32 noundef [[Y]], i32 noundef [[Z]]), !dbg [[DBG8]]
 ; CHECK-NEXT:    ret void
 ;
 entry:
@@ -107,5 +111,3 @@ entry:
   ret void
 }
 
-; CHECK-LABEL: define internal void @msan.module_ctor()
-; CHECK:         call void @__msan_init()
diff --git a/llvm/test/Instrumentation/MemorySanitizer/masked-store-load.ll b/llvm/test/Instrumentation/MemorySanitizer/masked-store-load.ll
index 77b48360a64d8..d70959545ddbe 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/masked-store-load.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/masked-store-load.ll
@@ -23,9 +23,10 @@ define void @Store(ptr %p, <4 x i64> %v, <4 x i1> %mask) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> [[TMP0]], ptr align 1 [[TMP3]], <4 x i1> [[MASK:%.*]])
-; CHECK-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
+; CHECK-NEXT:    tail call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
 ; CHECK-NEXT:    ret void
 ;
 ; ADDR-LABEL: @Store(
@@ -36,18 +37,19 @@ define void @Store(ptr %p, <4 x i64> %v, <4 x i1> %mask) sanitize_memory {
 ; ADDR-NEXT:    call void @llvm.donothing()
 ; ADDR-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; ADDR-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP4]], 2097152
+; ADDR-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> [[TMP0]], ptr align 1 [[TMP5]], <4 x i1> [[MASK:%.*]])
 ; ADDR-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; ADDR-NEXT:    [[TMP6:%.*]] = bitcast <4 x i1> [[TMP2]] to i4
 ; ADDR-NEXT:    [[_MSCMP1:%.*]] = icmp ne i4 [[TMP6]], 0
 ; ADDR-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1:![0-9]+]]
-; ADDR:       7:
+; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1:![0-9]+]]
+; ADDR:       8:
 ; ADDR-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7:[0-9]+]]
 ; ADDR-NEXT:    unreachable
-; ADDR:       8:
-; ADDR-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
+; ADDR:       9:
+; ADDR-NEXT:    tail call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
 ; ADDR-NEXT:    ret void
 ;
 ; ORIGINS-LABEL: @Store(
@@ -57,8 +59,9 @@ define void @Store(ptr %p, <4 x i64> %v, <4 x i1> %mask) sanitize_memory {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP15:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = and i64 [[TMP5]], -4
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; ORIGINS-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> [[TMP0]], ptr align 1 [[TMP4]], <4 x i1> [[MASK:%.*]])
@@ -77,7 +80,7 @@ define void @Store(ptr %p, <4 x i64> %v, <4 x i1> %mask) sanitize_memory {
 ; ORIGINS-NEXT:    store i32 [[TMP1]], ptr [[TMP13]], align 4
 ; ORIGINS-NEXT:    [[TMP14:%.*]] = getelementptr i32, ptr [[TMP7]], i32 7
 ; ORIGINS-NEXT:    store i32 [[TMP1]], ptr [[TMP14]], align 4
-; ORIGINS-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
+; ORIGINS-NEXT:    tail call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
 ; ORIGINS-NEXT:    ret void
 ;
 entry:
@@ -92,7 +95,8 @@ define <4 x double> @Load(ptr %p, <4 x double> %v, <4 x i1> %mask) sanitize_memo
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDLD:%.*]] = call <4 x i64> @llvm.masked.load.v4i64.p0(ptr align 1 [[TMP3]], <4 x i1> [[MASK:%.*]], <4 x i64> [[TMP0]])
 ; CHECK-NEXT:    [[X:%.*]] = call <4 x double> @llvm.masked.load.v4f64.p0(ptr align 1 [[P]], <4 x i1> [[MASK]], <4 x double> [[V:%.*]])
 ; CHECK-NEXT:    store <4 x i64> [[_MSMASKEDLD]], ptr @__msan_retval_tls, align 8
@@ -106,17 +110,18 @@ define <4 x double> @Load(ptr %p, <4 x double> %v, <4 x i1> %mask) sanitize_memo
 ; ADDR-NEXT:    call void @llvm.donothing()
 ; ADDR-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; ADDR-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP4]], 2097152
+; ADDR-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    [[_MSMASKEDLD:%.*]] = call <4 x i64> @llvm.masked.load.v4i64.p0(ptr align 1 [[TMP5]], <4 x i1> [[MASK:%.*]], <4 x i64> [[TMP2]])
 ; ADDR-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP0]], 0
 ; ADDR-NEXT:    [[TMP6:%.*]] = bitcast <4 x i1> [[TMP1]] to i4
 ; ADDR-NEXT:    [[_MSCMP1:%.*]] = icmp ne i4 [[TMP6]], 0
 ; ADDR-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
-; ADDR:       7:
+; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
+; ADDR:       8:
 ; ADDR-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
 ; ADDR-NEXT:    unreachable
-; ADDR:       8:
+; ADDR:       9:
 ; ADDR-NEXT:    [[X:%.*]] = call <4 x double> @llvm.masked.load.v4f64.p0(ptr align 1 [[P]], <4 x i1> [[MASK]], <4 x double> [[V:%.*]])
 ; ADDR-NEXT:    store <4 x i64> [[_MSMASKEDLD]], ptr @__msan_retval_tls, align 8
 ; ADDR-NEXT:    ret <4 x double> [[X]]
@@ -128,8 +133,9 @@ define <4 x double> @Load(ptr %p, <4 x double> %v, <4 x i1> %mask) sanitize_memo
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP14:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP14]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = and i64 [[TMP5]], -4
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; ORIGINS-NEXT:    [[_MSMASKEDLD:%.*]] = call <4 x i64> @llvm.masked.load.v4i64.p0(ptr align 1 [[TMP4]], <4 x i1> [[MASK:%.*]], <4 x i64> [[TMP0]])
@@ -156,9 +162,10 @@ define void @StoreNoSanitize(ptr %p, <4 x i64> %v, <4 x i1> %mask) {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 2097152
+; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP3]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> zeroinitializer, ptr align 1 [[TMP2]], <4 x i1> [[MASK:%.*]])
-; CHECK-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
+; CHECK-NEXT:    tail call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
 ; CHECK-NEXT:    ret void
 ;
 ; ADDR-LABEL: @StoreNoSanitize(
@@ -166,9 +173,10 @@ define void @StoreNoSanitize(ptr %p, <4 x i64> %v, <4 x i1> %mask) {
 ; ADDR-NEXT:    call void @llvm.donothing()
 ; ADDR-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; ADDR-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
+; ADDR-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 2097152
+; ADDR-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP3]] to ptr
 ; ADDR-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> zeroinitializer, ptr align 1 [[TMP2]], <4 x i1> [[MASK:%.*]])
-; ADDR-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
+; ADDR-NEXT:    tail call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
 ; ADDR-NEXT:    ret void
 ;
 ; ORIGINS-LABEL: @StoreNoSanitize(
@@ -176,8 +184,9 @@ define void @StoreNoSanitize(ptr %p, <4 x i64> %v, <4 x i1> %mask) {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; ORIGINS-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
-; ORIGINS-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 17592186044416
+; ORIGINS-NEXT:    [[TMP13:%.*]] = add i64 [[TMP1]], 2097152
+; ORIGINS-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; ORIGINS-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP4:%.*]] = and i64 [[TMP3]], -4
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGINS-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> zeroinitializer, ptr align 1 [[TMP2]], <4 x i1> [[MASK:%.*]])
@@ -196,7 +205,7 @@ define void @StoreNoSanitize(ptr %p, <4 x i64> %v, <4 x i1> %mask) {
 ; ORIGINS-NEXT:    store i32 0, ptr [[TMP11]], align 4
 ; ORIGINS-NEXT:    [[TMP12:%.*]] = getelementptr i32, ptr [[TMP5]], i32 7
 ; ORIGINS-NEXT:    store i32 0, ptr [[TMP12]], align 4
-; ORIGINS-NEXT:    call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
+; ORIGINS-NEXT:    tail call void @llvm.masked.store.v4i64.p0(<4 x i64> [[V:%.*]], ptr align 1 [[P]], <4 x i1> [[MASK]])
 ; ORIGINS-NEXT:    ret void
 ;
 entry:
@@ -239,7 +248,8 @@ define <16 x float> @Gather(<16 x ptr> %ptrs, <16 x i1> %mask, <16 x float> %pas
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint <16 x ptr> [[PTRS:%.*]] to <16 x i64>
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor <16 x i64> [[TMP2]], splat (i64 87960930222080)
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr <16 x i64> [[TMP3]] to <16 x ptr>
+; CHECK-NEXT:    [[TMP5:%.*]] = add <16 x i64> [[TMP3]], splat (i64 2097152)
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr <16 x i64> [[TMP5]] to <16 x ptr>
 ; CHECK-NEXT:    [[_MSMASKEDGATHER:%.*]] = call <16 x i32> @llvm.masked.gather.v16i32.v16p0(<16 x ptr> align 4 [[TMP4]], <16 x i1> [[MASK:%.*]], <16 x i32> [[TMP1]])
 ; CHECK-NEXT:    [[RET:%.*]] = call <16 x float> @llvm.masked.gather.v16f32.v16p0(<16 x ptr> align 4 [[PTRS]], <16 x i1> [[MASK]], <16 x float> [[PASSTHRU:%.*]])
 ; CHECK-NEXT:    store <16 x i32> [[_MSMASKEDGATHER]], ptr @__msan_retval_tls, align 8
@@ -253,18 +263,19 @@ define <16 x float> @Gather(<16 x ptr> %ptrs, <16 x i1> %mask, <16 x float> %pas
 ; ADDR-NEXT:    [[_MSMASKEDPTRS:%.*]] = select <16 x i1> [[MASK:%.*]], <16 x i64> [[TMP2]], <16 x i64> zeroinitializer
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint <16 x ptr> [[PTRS:%.*]] to <16 x i64>
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor <16 x i64> [[TMP4]], splat (i64 87960930222080)
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr <16 x i64> [[TMP5]] to <16 x ptr>
+; ADDR-NEXT:    [[TMP9:%.*]] = add <16 x i64> [[TMP5]], splat (i64 2097152)
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr <16 x i64> [[TMP9]] to <16 x ptr>
 ; ADDR-NEXT:    [[_MSMASKEDGATHER:%.*]] = call <16 x i32> @llvm.masked.gather.v16i32.v16p0(<16 x ptr> align 4 [[TMP6]], <16 x i1> [[MASK]], <16 x i32> [[TMP3]])
 ; ADDR-NEXT:    [[TMP7:%.*]] = bitcast <16 x i1> [[TMP1]] to i16
 ; ADDR-NEXT:    [[_MSCMP:%.*]] = icmp ne i16 [[TMP7]], 0
 ; ADDR-NEXT:    [[TMP8:%.*]] = bitcast <16 x i64> [[_MSMASKEDPTRS]] to i1024
 ; ADDR-NEXT:    [[_MSCMP1:%.*]] = icmp ne i1024 [[TMP8]], 0
 ; ADDR-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; ADDR:       9:
+; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; ADDR:       10:
 ; ADDR-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
 ; ADDR-NEXT:    unreachable
-; ADDR:       10:
+; ADDR:       11:
 ; ADDR-NEXT:    [[RET:%.*]] = call <16 x float> @llvm.masked.gather.v16f32.v16p0(<16 x ptr> align 4 [[PTRS]], <16 x i1> [[MASK]], <16 x float> [[PASSTHRU:%.*]])
 ; ADDR-NEXT:    store <16 x i32> [[_MSMASKEDGATHER]], ptr @__msan_retval_tls, align 8
 ; ADDR-NEXT:    ret <16 x float> [[RET]]
@@ -275,8 +286,9 @@ define <16 x float> @Gather(<16 x ptr> %ptrs, <16 x i1> %mask, <16 x float> %pas
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = ptrtoint <16 x ptr> [[PTRS:%.*]] to <16 x i64>
 ; ORIGINS-NEXT:    [[TMP4:%.*]] = xor <16 x i64> [[TMP3]], splat (i64 87960930222080)
-; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr <16 x i64> [[TMP4]] to <16 x ptr>
-; ORIGINS-NEXT:    [[TMP6:%.*]] = add <16 x i64> [[TMP4]], splat (i64 17592186044416)
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add <16 x i64> [[TMP4]], splat (i64 2097152)
+; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr <16 x i64> [[TMP8]] to <16 x ptr>
+; ORIGINS-NEXT:    [[TMP6:%.*]] = add <16 x i64> [[TMP4]], splat (i64 17592188141568)
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = inttoptr <16 x i64> [[TMP6]] to <16 x ptr>
 ; ORIGINS-NEXT:    [[_MSMASKEDGATHER:%.*]] = call <16 x i32> @llvm.masked.gather.v16i32.v16p0(<16 x ptr> align 4 [[TMP5]], <16 x i1> [[MASK:%.*]], <16 x i32> [[TMP1]])
 ; ORIGINS-NEXT:    [[RET:%.*]] = call <16 x float> @llvm.masked.gather.v16f32.v16p0(<16 x ptr> align 4 [[PTRS]], <16 x i1> [[MASK]], <16 x float> [[PASSTHRU:%.*]])
@@ -320,7 +332,8 @@ define void @Scatter(<8 x i32> %value, <8 x ptr> %ptrs, <8 x i1> %mask) sanitize
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint <8 x ptr> [[PTRS:%.*]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor <8 x i64> [[TMP2]], splat (i64 87960930222080)
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr <8 x i64> [[TMP3]] to <8 x ptr>
+; CHECK-NEXT:    [[TMP5:%.*]] = add <8 x i64> [[TMP3]], splat (i64 2097152)
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr <8 x i64> [[TMP5]] to <8 x ptr>
 ; CHECK-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> [[TMP1]], <8 x ptr> align 8 [[TMP4]], <8 x i1> [[MASK:%.*]])
 ; CHECK-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> [[VALUE:%.*]], <8 x ptr> align 8 [[PTRS]], <8 x i1> [[MASK]])
 ; CHECK-NEXT:    ret void
@@ -333,18 +346,19 @@ define void @Scatter(<8 x i32> %value, <8 x ptr> %ptrs, <8 x i1> %mask) sanitize
 ; ADDR-NEXT:    [[_MSMASKEDPTRS:%.*]] = select <8 x i1> [[MASK:%.*]], <8 x i64> [[TMP2]], <8 x i64> zeroinitializer
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint <8 x ptr> [[PTRS:%.*]] to <8 x i64>
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor <8 x i64> [[TMP4]], splat (i64 87960930222080)
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr <8 x i64> [[TMP5]] to <8 x ptr>
+; ADDR-NEXT:    [[TMP9:%.*]] = add <8 x i64> [[TMP5]], splat (i64 2097152)
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr <8 x i64> [[TMP9]] to <8 x ptr>
 ; ADDR-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> [[TMP3]], <8 x ptr> align 8 [[TMP6]], <8 x i1> [[MASK]])
 ; ADDR-NEXT:    [[TMP7:%.*]] = bitcast <8 x i1> [[TMP1]] to i8
 ; ADDR-NEXT:    [[_MSCMP:%.*]] = icmp ne i8 [[TMP7]], 0
 ; ADDR-NEXT:    [[TMP8:%.*]] = bitcast <8 x i64> [[_MSMASKEDPTRS]] to i512
 ; ADDR-NEXT:    [[_MSCMP1:%.*]] = icmp ne i512 [[TMP8]], 0
 ; ADDR-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
-; ADDR:       9:
+; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP10:%.*]], label [[TMP11:%.*]], !prof [[PROF1]]
+; ADDR:       10:
 ; ADDR-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
 ; ADDR-NEXT:    unreachable
-; ADDR:       10:
+; ADDR:       11:
 ; ADDR-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> [[VALUE:%.*]], <8 x ptr> align 8 [[PTRS]], <8 x i1> [[MASK]])
 ; ADDR-NEXT:    ret void
 ;
@@ -354,8 +368,9 @@ define void @Scatter(<8 x i32> %value, <8 x ptr> %ptrs, <8 x i1> %mask) sanitize
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = ptrtoint <8 x ptr> [[PTRS:%.*]] to <8 x i64>
 ; ORIGINS-NEXT:    [[TMP4:%.*]] = xor <8 x i64> [[TMP3]], splat (i64 87960930222080)
-; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr <8 x i64> [[TMP4]] to <8 x ptr>
-; ORIGINS-NEXT:    [[TMP6:%.*]] = add <8 x i64> [[TMP4]], splat (i64 17592186044416)
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add <8 x i64> [[TMP4]], splat (i64 2097152)
+; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr <8 x i64> [[TMP8]] to <8 x ptr>
+; ORIGINS-NEXT:    [[TMP6:%.*]] = add <8 x i64> [[TMP4]], splat (i64 17592188141568)
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = inttoptr <8 x i64> [[TMP6]] to <8 x ptr>
 ; ORIGINS-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> [[TMP1]], <8 x ptr> align 8 [[TMP5]], <8 x i1> [[MASK:%.*]])
 ; ORIGINS-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> [[VALUE:%.*]], <8 x ptr> align 8 [[PTRS]], <8 x i1> [[MASK]])
@@ -370,7 +385,8 @@ define void @ScatterNoSanitize(<8 x i32> %value, <8 x ptr> %ptrs, <8 x i1> %mask
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint <8 x ptr> [[PTRS:%.*]] to <8 x i64>
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor <8 x i64> [[TMP1]], splat (i64 87960930222080)
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr <8 x i64> [[TMP2]] to <8 x ptr>
+; CHECK-NEXT:    [[TMP4:%.*]] = add <8 x i64> [[TMP2]], splat (i64 2097152)
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr <8 x i64> [[TMP4]] to <8 x ptr>
 ; CHECK-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> zeroinitializer, <8 x ptr> align 8 [[TMP3]], <8 x i1> [[MASK:%.*]])
 ; CHECK-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> [[VALUE:%.*]], <8 x ptr> align 8 [[PTRS]], <8 x i1> [[MASK]])
 ; CHECK-NEXT:    ret void
@@ -380,7 +396,8 @@ define void @ScatterNoSanitize(<8 x i32> %value, <8 x ptr> %ptrs, <8 x i1> %mask
 ; ADDR-NEXT:    [[_MSMASKEDPTRS:%.*]] = select <8 x i1> [[MASK:%.*]], <8 x i64> zeroinitializer, <8 x i64> zeroinitializer
 ; ADDR-NEXT:    [[TMP1:%.*]] = ptrtoint <8 x ptr> [[PTRS:%.*]] to <8 x i64>
 ; ADDR-NEXT:    [[TMP2:%.*]] = xor <8 x i64> [[TMP1]], splat (i64 87960930222080)
-; ADDR-NEXT:    [[TMP3:%.*]] = inttoptr <8 x i64> [[TMP2]] to <8 x ptr>
+; ADDR-NEXT:    [[TMP4:%.*]] = add <8 x i64> [[TMP2]], splat (i64 2097152)
+; ADDR-NEXT:    [[TMP3:%.*]] = inttoptr <8 x i64> [[TMP4]] to <8 x ptr>
 ; ADDR-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> zeroinitializer, <8 x ptr> align 8 [[TMP3]], <8 x i1> [[MASK]])
 ; ADDR-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> [[VALUE:%.*]], <8 x ptr> align 8 [[PTRS]], <8 x i1> [[MASK]])
 ; ADDR-NEXT:    ret void
@@ -389,8 +406,9 @@ define void @ScatterNoSanitize(<8 x i32> %value, <8 x ptr> %ptrs, <8 x i1> %mask
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint <8 x ptr> [[PTRS:%.*]] to <8 x i64>
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor <8 x i64> [[TMP1]], splat (i64 87960930222080)
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr <8 x i64> [[TMP2]] to <8 x ptr>
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add <8 x i64> [[TMP2]], splat (i64 17592186044416)
+; ORIGINS-NEXT:    [[TMP6:%.*]] = add <8 x i64> [[TMP2]], splat (i64 2097152)
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr <8 x i64> [[TMP6]] to <8 x ptr>
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add <8 x i64> [[TMP2]], splat (i64 17592188141568)
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr <8 x i64> [[TMP4]] to <8 x ptr>
 ; ORIGINS-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> zeroinitializer, <8 x ptr> align 8 [[TMP3]], <8 x i1> [[MASK:%.*]])
 ; ORIGINS-NEXT:    call void @llvm.masked.scatter.v8i32.v8p0(<8 x i32> [[VALUE:%.*]], <8 x ptr> align 8 [[PTRS]], <8 x i1> [[MASK]])
@@ -407,7 +425,8 @@ define <16 x float> @ExpandLoad(ptr %ptr, <16 x i1> %mask, <16 x float> %passthr
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[TMP4]], <16 x i1> [[MASK:%.*]], <16 x i32> [[TMP1]])
 ; CHECK-NEXT:    [[RET:%.*]] = call <16 x float> @llvm.masked.expandload.v16f32(ptr [[PTR]], <16 x i1> [[MASK]], <16 x float> [[PASSTHRU:%.*]])
 ; CHECK-NEXT:    store <16 x i32> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
@@ -420,17 +439,18 @@ define <16 x float> @ExpandLoad(ptr %ptr, <16 x i1> %mask, <16 x float> %passthr
 ; ADDR-NEXT:    call void @llvm.donothing()
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[TMP6]], <16 x i1> [[MASK:%.*]], <16 x i32> [[TMP3]])
 ; ADDR-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; ADDR-NEXT:    [[TMP7:%.*]] = bitcast <16 x i1> [[TMP2]] to i16
 ; ADDR-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP7]], 0
 ; ADDR-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; ADDR:       8:
+; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; ADDR:       9:
 ; ADDR-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
 ; ADDR-NEXT:    unreachable
-; ADDR:       9:
+; ADDR:       10:
 ; ADDR-NEXT:    [[RET:%.*]] = call <16 x float> @llvm.masked.expandload.v16f32(ptr [[PTR]], <16 x i1> [[MASK]], <16 x float> [[PASSTHRU:%.*]])
 ; ADDR-NEXT:    store <16 x i32> [[_MSMASKEDEXPLOAD]], ptr @__msan_retval_tls, align 8
 ; ADDR-NEXT:    ret <16 x float> [[RET]]
@@ -441,8 +461,9 @@ define <16 x float> @ExpandLoad(ptr %ptr, <16 x i1> %mask, <16 x float> %passthr
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; ORIGINS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592186044416
+; ORIGINS-NEXT:    [[TMP9:%.*]] = add i64 [[TMP4]], 2097152
+; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; ORIGINS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = and i64 [[TMP6]], -4
 ; ORIGINS-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ORIGINS-NEXT:    [[_MSMASKEDEXPLOAD:%.*]] = call <16 x i32> @llvm.masked.expandload.v16i32(ptr [[TMP5]], <16 x i1> [[MASK:%.*]], <16 x i32> [[TMP1]])
@@ -486,7 +507,8 @@ define void @CompressStore(<16 x float> %value, ptr %ptr, <16 x i1> %mask) sanit
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> [[TMP1]], ptr [[TMP4]], <16 x i1> [[MASK:%.*]])
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16f32(<16 x float> [[VALUE:%.*]], ptr [[PTR]], <16 x i1> [[MASK]])
 ; CHECK-NEXT:    ret void
@@ -498,17 +520,18 @@ define void @CompressStore(<16 x float> %value, ptr %ptr, <16 x i1> %mask) sanit
 ; ADDR-NEXT:    call void @llvm.donothing()
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> [[TMP3]], ptr [[TMP6]], <16 x i1> [[MASK:%.*]])
 ; ADDR-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP1]], 0
 ; ADDR-NEXT:    [[TMP7:%.*]] = bitcast <16 x i1> [[TMP2]] to i16
 ; ADDR-NEXT:    [[_MSCMP1:%.*]] = icmp ne i16 [[TMP7]], 0
 ; ADDR-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP1]]
-; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; ADDR:       8:
+; ADDR-NEXT:    br i1 [[_MSOR]], label [[TMP9:%.*]], label [[TMP10:%.*]], !prof [[PROF1]]
+; ADDR:       9:
 ; ADDR-NEXT:    call void @__msan_warning_noreturn() #[[ATTR7]]
 ; ADDR-NEXT:    unreachable
-; ADDR:       9:
+; ADDR:       10:
 ; ADDR-NEXT:    call void @llvm.masked.compressstore.v16f32(<16 x float> [[VALUE:%.*]], ptr [[PTR]], <16 x i1> [[MASK]])
 ; ADDR-NEXT:    ret void
 ;
@@ -518,8 +541,9 @@ define void @CompressStore(<16 x float> %value, ptr %ptr, <16 x i1> %mask) sanit
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; ORIGINS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592186044416
+; ORIGINS-NEXT:    [[TMP9:%.*]] = add i64 [[TMP4]], 2097152
+; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; ORIGINS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = and i64 [[TMP6]], -4
 ; ORIGINS-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ORIGINS-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> [[TMP1]], ptr [[TMP5]], <16 x i1> [[MASK:%.*]])
@@ -535,7 +559,8 @@ define void @CompressStoreNoSanitize(<16 x float> %value, ptr %ptr, <16 x i1> %m
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> zeroinitializer, ptr [[TMP3]], <16 x i1> [[MASK:%.*]])
 ; CHECK-NEXT:    call void @llvm.masked.compressstore.v16f32(<16 x float> [[VALUE:%.*]], ptr [[PTR]], <16 x i1> [[MASK]])
 ; CHECK-NEXT:    ret void
@@ -544,7 +569,8 @@ define void @CompressStoreNoSanitize(<16 x float> %value, ptr %ptr, <16 x i1> %m
 ; ADDR-NEXT:    call void @llvm.donothing()
 ; ADDR-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; ADDR-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ADDR-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; ADDR-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; ADDR-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ADDR-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> zeroinitializer, ptr [[TMP3]], <16 x i1> [[MASK:%.*]])
 ; ADDR-NEXT:    call void @llvm.masked.compressstore.v16f32(<16 x float> [[VALUE:%.*]], ptr [[PTR]], <16 x i1> [[MASK]])
 ; ADDR-NEXT:    ret void
@@ -553,8 +579,9 @@ define void @CompressStoreNoSanitize(<16 x float> %value, ptr %ptr, <16 x i1> %m
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[PTR:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGINS-NEXT:    [[TMP7:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGINS-NEXT:    call void @llvm.masked.compressstore.v16i32(<16 x i32> zeroinitializer, ptr [[TMP3]], <16 x i1> [[MASK:%.*]])
diff --git a/llvm/test/Instrumentation/MemorySanitizer/msan-pass-second-run.ll b/llvm/test/Instrumentation/MemorySanitizer/msan-pass-second-run.ll
index 5c135d63e56fd..98ea74c41222e 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/msan-pass-second-run.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/msan-pass-second-run.ll
@@ -25,7 +25,8 @@ define void @array() sanitize_memory {
 ; CHECK-NEXT:    [[X:%.*]] = alloca i32, i64 5, align 4
 ; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[X]] to i64
 ; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 2097152
+; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP3]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP2]], i8 -1, i64 20, i1 false)
 ; CHECK-NEXT:    ret void
 ;
diff --git a/llvm/test/Instrumentation/MemorySanitizer/msan_asm_conservative.ll b/llvm/test/Instrumentation/MemorySanitizer/msan_asm_conservative.ll
index 1b9ffaf9e505a..7f48633dfee61 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/msan_asm_conservative.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/msan_asm_conservative.ll
@@ -1,3 +1,4 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6
 ; Test for handling of asm constraints in MSan instrumentation.
 ; RUN: opt < %s -msan-check-access-address=0 -msan-handle-asm-conservative=0 -S -passes=msan 2>&1 | FileCheck %s
 ; RUN: opt < %s -msan-check-access-address=0 -S -passes=msan 2>&1 | FileCheck --check-prefixes=CHECK,USER-CONS %s
@@ -35,6 +36,55 @@ target triple = "x86_64-unknown-linux-gnu"
 ; One input register, one output register:
 ;   asm("" : "=r" (id1) : "r" (is1));
 define dso_local void @f_1i_1o_reg() sanitize_memory {
+; USER-CONS-LABEL: define dso_local void @f_1i_1o_reg(
+; USER-CONS-SAME: ) #[[ATTR0:[0-9]+]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    [[TMP0:%.*]] = load i32, ptr @is1, align 4
+; USER-CONS-NEXT:    [[_MSLD:%.*]] = load i32, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @is1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; USER-CONS-NEXT:    br i1 [[_MSCMP]], label %[[BB1:.*]], label %[[BB2:.*]], !prof [[PROF1:![0-9]+]]
+; USER-CONS:       [[BB1]]:
+; USER-CONS-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4:[0-9]+]]
+; USER-CONS-NEXT:    unreachable
+; USER-CONS:       [[BB2]]:
+; USER-CONS-NEXT:    [[TMP3:%.*]] = call i32 asm "", "=r,r,~{dirflag},~{fpsr},~{flags}"(i32 [[TMP0]])
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[TMP3]], ptr @id1, align 4
+; USER-CONS-NEXT:    ret void
+;
+; KMSAN-LABEL: define dso_local void @f_1i_1o_reg(
+; KMSAN-SAME: ) #[[ATTR0:[0-9]+]] {
+; KMSAN-NEXT:  [[ENTRY:.*:]]
+; KMSAN-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; KMSAN-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; KMSAN-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; KMSAN-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; KMSAN-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; KMSAN-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; KMSAN-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; KMSAN-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; KMSAN-NEXT:    call void @llvm.donothing()
+; KMSAN-NEXT:    [[TMP1:%.*]] = load i32, ptr @is1, align 4
+; KMSAN-NEXT:    [[TMP2:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_4(ptr @is1)
+; KMSAN-NEXT:    [[TMP3:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 0
+; KMSAN-NEXT:    [[TMP4:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 1
+; KMSAN-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
+; KMSAN-NEXT:    [[TMP5:%.*]] = load i32, ptr [[TMP4]], align 4
+; KMSAN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; KMSAN-NEXT:    br i1 [[_MSCMP]], label %[[BB6:.*]], label %[[BB7:.*]], !prof [[PROF1:![0-9]+]]
+; KMSAN:       [[BB6]]:
+; KMSAN-NEXT:    call void @__msan_warning(i32 [[TMP5]]) #[[ATTR2:[0-9]+]]
+; KMSAN-NEXT:    br label %[[BB7]]
+; KMSAN:       [[BB7]]:
+; KMSAN-NEXT:    [[TMP8:%.*]] = call i32 asm "", "=r,r,~{dirflag},~{fpsr},~{flags}"(i32 [[TMP1]])
+; KMSAN-NEXT:    [[TMP9:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id1)
+; KMSAN-NEXT:    [[TMP10:%.*]] = extractvalue { ptr, ptr } [[TMP9]], 0
+; KMSAN-NEXT:    [[TMP11:%.*]] = extractvalue { ptr, ptr } [[TMP9]], 1
+; KMSAN-NEXT:    store i32 0, ptr [[TMP10]], align 4
+; KMSAN-NEXT:    store i32 [[TMP8]], ptr @id1, align 4
+; KMSAN-NEXT:    ret void
+;
 entry:
   %0 = load i32, ptr @is1, align 4
   %1 = call i32 asm "", "=r,r,~{dirflag},~{fpsr},~{flags}"(i32 %0)
@@ -42,18 +92,87 @@ entry:
   ret void
 }
 
-; CHECK-LABEL: @f_1i_1o_reg
-; CHECK: [[IS1_F1:%.*]] = load i32, ptr @is1, align 4
-; CHECK: call void @__msan_warning
-; CHECK: call i32 asm "",{{.*}}(i32 [[IS1_F1]])
-; KMSAN: [[PACK1_F1:%.*]] = call {{.*}} @__msan_metadata_ptr_for_store_4({{.*}}@id1{{.*}})
-; KMSAN: [[EXT1_F1:%.*]] = extractvalue { ptr, ptr } [[PACK1_F1]], 0
-; KMSAN: store i32 0, ptr [[EXT1_F1]]
 
 
 ; Two input registers, two output registers:
 ;   asm("" : "=r" (id1), "=r" (id2) : "r" (is1), "r"(is2));
 define dso_local void @f_2i_2o_reg() sanitize_memory {
+; USER-CONS-LABEL: define dso_local void @f_2i_2o_reg(
+; USER-CONS-SAME: ) #[[ATTR0]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    [[TMP0:%.*]] = load i32, ptr @is1, align 4
+; USER-CONS-NEXT:    [[_MSLD:%.*]] = load i32, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @is1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    [[TMP1:%.*]] = load i32, ptr @is2, align 4
+; USER-CONS-NEXT:    [[_MSLD1:%.*]] = load i32, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @is2 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; USER-CONS-NEXT:    [[_MSCMP2:%.*]] = icmp ne i32 [[_MSLD1]], 0
+; USER-CONS-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
+; USER-CONS-NEXT:    br i1 [[_MSOR]], label %[[BB2:.*]], label %[[BB3:.*]], !prof [[PROF1]]
+; USER-CONS:       [[BB2]]:
+; USER-CONS-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
+; USER-CONS-NEXT:    unreachable
+; USER-CONS:       [[BB3]]:
+; USER-CONS-NEXT:    [[TMP4:%.*]] = call { i32, i32 } asm "", "=r,=r,r,r,~{dirflag},~{fpsr},~{flags}"(i32 [[TMP0]], i32 [[TMP1]])
+; USER-CONS-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i32, i32 } [[TMP4]], 0
+; USER-CONS-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i32, i32 } [[TMP4]], 1
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[ASMRESULT]], ptr @id1, align 4
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id2 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[ASMRESULT1]], ptr @id2, align 4
+; USER-CONS-NEXT:    ret void
+;
+; KMSAN-LABEL: define dso_local void @f_2i_2o_reg(
+; KMSAN-SAME: ) #[[ATTR0]] {
+; KMSAN-NEXT:  [[ENTRY:.*:]]
+; KMSAN-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; KMSAN-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; KMSAN-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; KMSAN-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; KMSAN-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; KMSAN-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; KMSAN-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; KMSAN-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; KMSAN-NEXT:    call void @llvm.donothing()
+; KMSAN-NEXT:    [[TMP1:%.*]] = load i32, ptr @is1, align 4
+; KMSAN-NEXT:    [[TMP2:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_4(ptr @is1)
+; KMSAN-NEXT:    [[TMP3:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 0
+; KMSAN-NEXT:    [[TMP4:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 1
+; KMSAN-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
+; KMSAN-NEXT:    [[TMP5:%.*]] = load i32, ptr [[TMP4]], align 4
+; KMSAN-NEXT:    [[TMP6:%.*]] = load i32, ptr @is2, align 4
+; KMSAN-NEXT:    [[TMP7:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_4(ptr @is2)
+; KMSAN-NEXT:    [[TMP8:%.*]] = extractvalue { ptr, ptr } [[TMP7]], 0
+; KMSAN-NEXT:    [[TMP9:%.*]] = extractvalue { ptr, ptr } [[TMP7]], 1
+; KMSAN-NEXT:    [[_MSLD1:%.*]] = load i32, ptr [[TMP8]], align 4
+; KMSAN-NEXT:    [[TMP10:%.*]] = load i32, ptr [[TMP9]], align 4
+; KMSAN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; KMSAN-NEXT:    br i1 [[_MSCMP]], label %[[BB11:.*]], label %[[BB12:.*]], !prof [[PROF1]]
+; KMSAN:       [[BB11]]:
+; KMSAN-NEXT:    call void @__msan_warning(i32 [[TMP5]]) #[[ATTR2]]
+; KMSAN-NEXT:    br label %[[BB12]]
+; KMSAN:       [[BB12]]:
+; KMSAN-NEXT:    [[_MSCMP2:%.*]] = icmp ne i32 [[_MSLD1]], 0
+; KMSAN-NEXT:    br i1 [[_MSCMP2]], label %[[BB13:.*]], label %[[BB14:.*]], !prof [[PROF1]]
+; KMSAN:       [[BB13]]:
+; KMSAN-NEXT:    call void @__msan_warning(i32 [[TMP10]]) #[[ATTR2]]
+; KMSAN-NEXT:    br label %[[BB14]]
+; KMSAN:       [[BB14]]:
+; KMSAN-NEXT:    [[TMP15:%.*]] = call { i32, i32 } asm "", "=r,=r,r,r,~{dirflag},~{fpsr},~{flags}"(i32 [[TMP1]], i32 [[TMP6]])
+; KMSAN-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i32, i32 } [[TMP15]], 0
+; KMSAN-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i32, i32 } [[TMP15]], 1
+; KMSAN-NEXT:    [[TMP16:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id1)
+; KMSAN-NEXT:    [[TMP17:%.*]] = extractvalue { ptr, ptr } [[TMP16]], 0
+; KMSAN-NEXT:    [[TMP18:%.*]] = extractvalue { ptr, ptr } [[TMP16]], 1
+; KMSAN-NEXT:    store i32 0, ptr [[TMP17]], align 4
+; KMSAN-NEXT:    store i32 [[ASMRESULT]], ptr @id1, align 4
+; KMSAN-NEXT:    [[TMP19:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id2)
+; KMSAN-NEXT:    [[TMP20:%.*]] = extractvalue { ptr, ptr } [[TMP19]], 0
+; KMSAN-NEXT:    [[TMP21:%.*]] = extractvalue { ptr, ptr } [[TMP19]], 1
+; KMSAN-NEXT:    store i32 0, ptr [[TMP20]], align 4
+; KMSAN-NEXT:    store i32 [[ASMRESULT1]], ptr @id2, align 4
+; KMSAN-NEXT:    ret void
+;
 entry:
   %0 = load i32, ptr @is1, align 4
   %1 = load i32, ptr @is2, align 4
@@ -65,22 +184,86 @@ entry:
   ret void
 }
 
-; CHECK-LABEL: @f_2i_2o_reg
-; CHECK: [[IS1_F2:%.*]] = load i32, ptr @is1, align 4
-; CHECK: [[IS2_F2:%.*]] = load i32, ptr @is2, align 4
-; CHECK: call void @__msan_warning
-; KMSAN: call void @__msan_warning
-; CHECK: call { i32, i32 } asm "",{{.*}}(i32 [[IS1_F2]], i32 [[IS2_F2]])
-; KMSAN: [[PACK1_F2:%.*]] = call {{.*}} @__msan_metadata_ptr_for_store_4({{.*}}@id1{{.*}})
-; KMSAN: [[EXT1_F2:%.*]] = extractvalue { ptr, ptr } [[PACK1_F2]], 0
-; KMSAN: store i32 0, ptr [[EXT1_F2]]
-; KMSAN: [[PACK2_F2:%.*]] = call {{.*}} @__msan_metadata_ptr_for_store_4({{.*}}@id2{{.*}})
-; KMSAN: [[EXT2_F2:%.*]] = extractvalue { ptr, ptr } [[PACK2_F2]], 0
-; KMSAN: store i32 0, ptr [[EXT2_F2]]
 
 ; Input same as output, used twice:
 ;   asm("" : "=r" (id1), "=r" (id2) : "r" (id1), "r" (id2));
 define dso_local void @f_2i_2o_reuse2_reg() sanitize_memory {
+; USER-CONS-LABEL: define dso_local void @f_2i_2o_reuse2_reg(
+; USER-CONS-SAME: ) #[[ATTR0]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    [[TMP0:%.*]] = load i32, ptr @id1, align 4
+; USER-CONS-NEXT:    [[_MSLD:%.*]] = load i32, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    [[TMP1:%.*]] = load i32, ptr @id2, align 4
+; USER-CONS-NEXT:    [[_MSLD1:%.*]] = load i32, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id2 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; USER-CONS-NEXT:    [[_MSCMP2:%.*]] = icmp ne i32 [[_MSLD1]], 0
+; USER-CONS-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
+; USER-CONS-NEXT:    br i1 [[_MSOR]], label %[[BB2:.*]], label %[[BB3:.*]], !prof [[PROF1]]
+; USER-CONS:       [[BB2]]:
+; USER-CONS-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
+; USER-CONS-NEXT:    unreachable
+; USER-CONS:       [[BB3]]:
+; USER-CONS-NEXT:    [[TMP4:%.*]] = call { i32, i32 } asm "", "=r,=r,r,r,~{dirflag},~{fpsr},~{flags}"(i32 [[TMP0]], i32 [[TMP1]])
+; USER-CONS-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i32, i32 } [[TMP4]], 0
+; USER-CONS-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i32, i32 } [[TMP4]], 1
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[ASMRESULT]], ptr @id1, align 4
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id2 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[ASMRESULT1]], ptr @id2, align 4
+; USER-CONS-NEXT:    ret void
+;
+; KMSAN-LABEL: define dso_local void @f_2i_2o_reuse2_reg(
+; KMSAN-SAME: ) #[[ATTR0]] {
+; KMSAN-NEXT:  [[ENTRY:.*:]]
+; KMSAN-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; KMSAN-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; KMSAN-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; KMSAN-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; KMSAN-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; KMSAN-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; KMSAN-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; KMSAN-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; KMSAN-NEXT:    call void @llvm.donothing()
+; KMSAN-NEXT:    [[TMP1:%.*]] = load i32, ptr @id1, align 4
+; KMSAN-NEXT:    [[TMP2:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_4(ptr @id1)
+; KMSAN-NEXT:    [[TMP3:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 0
+; KMSAN-NEXT:    [[TMP4:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 1
+; KMSAN-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
+; KMSAN-NEXT:    [[TMP5:%.*]] = load i32, ptr [[TMP4]], align 4
+; KMSAN-NEXT:    [[TMP6:%.*]] = load i32, ptr @id2, align 4
+; KMSAN-NEXT:    [[TMP7:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_4(ptr @id2)
+; KMSAN-NEXT:    [[TMP8:%.*]] = extractvalue { ptr, ptr } [[TMP7]], 0
+; KMSAN-NEXT:    [[TMP9:%.*]] = extractvalue { ptr, ptr } [[TMP7]], 1
+; KMSAN-NEXT:    [[_MSLD1:%.*]] = load i32, ptr [[TMP8]], align 4
+; KMSAN-NEXT:    [[TMP10:%.*]] = load i32, ptr [[TMP9]], align 4
+; KMSAN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; KMSAN-NEXT:    br i1 [[_MSCMP]], label %[[BB11:.*]], label %[[BB12:.*]], !prof [[PROF1]]
+; KMSAN:       [[BB11]]:
+; KMSAN-NEXT:    call void @__msan_warning(i32 [[TMP5]]) #[[ATTR2]]
+; KMSAN-NEXT:    br label %[[BB12]]
+; KMSAN:       [[BB12]]:
+; KMSAN-NEXT:    [[_MSCMP2:%.*]] = icmp ne i32 [[_MSLD1]], 0
+; KMSAN-NEXT:    br i1 [[_MSCMP2]], label %[[BB13:.*]], label %[[BB14:.*]], !prof [[PROF1]]
+; KMSAN:       [[BB13]]:
+; KMSAN-NEXT:    call void @__msan_warning(i32 [[TMP10]]) #[[ATTR2]]
+; KMSAN-NEXT:    br label %[[BB14]]
+; KMSAN:       [[BB14]]:
+; KMSAN-NEXT:    [[TMP15:%.*]] = call { i32, i32 } asm "", "=r,=r,r,r,~{dirflag},~{fpsr},~{flags}"(i32 [[TMP1]], i32 [[TMP6]])
+; KMSAN-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i32, i32 } [[TMP15]], 0
+; KMSAN-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i32, i32 } [[TMP15]], 1
+; KMSAN-NEXT:    [[TMP16:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id1)
+; KMSAN-NEXT:    [[TMP17:%.*]] = extractvalue { ptr, ptr } [[TMP16]], 0
+; KMSAN-NEXT:    [[TMP18:%.*]] = extractvalue { ptr, ptr } [[TMP16]], 1
+; KMSAN-NEXT:    store i32 0, ptr [[TMP17]], align 4
+; KMSAN-NEXT:    store i32 [[ASMRESULT]], ptr @id1, align 4
+; KMSAN-NEXT:    [[TMP19:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id2)
+; KMSAN-NEXT:    [[TMP20:%.*]] = extractvalue { ptr, ptr } [[TMP19]], 0
+; KMSAN-NEXT:    [[TMP21:%.*]] = extractvalue { ptr, ptr } [[TMP19]], 1
+; KMSAN-NEXT:    store i32 0, ptr [[TMP20]], align 4
+; KMSAN-NEXT:    store i32 [[ASMRESULT1]], ptr @id2, align 4
+; KMSAN-NEXT:    ret void
+;
 entry:
   %0 = load i32, ptr @id1, align 4
   %1 = load i32, ptr @id2, align 4
@@ -92,23 +275,87 @@ entry:
   ret void
 }
 
-; CHECK-LABEL: @f_2i_2o_reuse2_reg
-; CHECK: [[ID1_F3:%.*]] = load i32, ptr @id1, align 4
-; CHECK: [[ID2_F3:%.*]] = load i32, ptr @id2, align 4
-; CHECK: call void @__msan_warning
-; KMSAN: call void @__msan_warning
-; CHECK: call { i32, i32 } asm "",{{.*}}(i32 [[ID1_F3]], i32 [[ID2_F3]])
-; KMSAN: [[PACK1_F3:%.*]] = call {{.*}} @__msan_metadata_ptr_for_store_4({{.*}}@id1{{.*}})
-; KMSAN: [[EXT1_F3:%.*]] = extractvalue { ptr, ptr } [[PACK1_F3]], 0
-; KMSAN: store i32 0, ptr [[EXT1_F3]]
-; KMSAN: [[PACK2_F3:%.*]] = call {{.*}} @__msan_metadata_ptr_for_store_4({{.*}}@id2{{.*}})
-; KMSAN: [[EXT2_F3:%.*]] = extractvalue { ptr, ptr } [[PACK2_F3]], 0
-; KMSAN: store i32 0, ptr [[EXT2_F3]]
 
 
 ; One of the input registers is also an output:
 ;   asm("" : "=r" (id1), "=r" (id2) : "r" (id1), "r"(is1));
 define dso_local void @f_2i_2o_reuse1_reg() sanitize_memory {
+; USER-CONS-LABEL: define dso_local void @f_2i_2o_reuse1_reg(
+; USER-CONS-SAME: ) #[[ATTR0]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    [[TMP0:%.*]] = load i32, ptr @id1, align 4
+; USER-CONS-NEXT:    [[_MSLD:%.*]] = load i32, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    [[TMP1:%.*]] = load i32, ptr @is1, align 4
+; USER-CONS-NEXT:    [[_MSLD1:%.*]] = load i32, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @is1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; USER-CONS-NEXT:    [[_MSCMP2:%.*]] = icmp ne i32 [[_MSLD1]], 0
+; USER-CONS-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP2]]
+; USER-CONS-NEXT:    br i1 [[_MSOR]], label %[[BB2:.*]], label %[[BB3:.*]], !prof [[PROF1]]
+; USER-CONS:       [[BB2]]:
+; USER-CONS-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
+; USER-CONS-NEXT:    unreachable
+; USER-CONS:       [[BB3]]:
+; USER-CONS-NEXT:    [[TMP4:%.*]] = call { i32, i32 } asm "", "=r,=r,r,r,~{dirflag},~{fpsr},~{flags}"(i32 [[TMP0]], i32 [[TMP1]])
+; USER-CONS-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i32, i32 } [[TMP4]], 0
+; USER-CONS-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i32, i32 } [[TMP4]], 1
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[ASMRESULT]], ptr @id1, align 4
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id2 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[ASMRESULT1]], ptr @id2, align 4
+; USER-CONS-NEXT:    ret void
+;
+; KMSAN-LABEL: define dso_local void @f_2i_2o_reuse1_reg(
+; KMSAN-SAME: ) #[[ATTR0]] {
+; KMSAN-NEXT:  [[ENTRY:.*:]]
+; KMSAN-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; KMSAN-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; KMSAN-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; KMSAN-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; KMSAN-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; KMSAN-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; KMSAN-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; KMSAN-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; KMSAN-NEXT:    call void @llvm.donothing()
+; KMSAN-NEXT:    [[TMP1:%.*]] = load i32, ptr @id1, align 4
+; KMSAN-NEXT:    [[TMP2:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_4(ptr @id1)
+; KMSAN-NEXT:    [[TMP3:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 0
+; KMSAN-NEXT:    [[TMP4:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 1
+; KMSAN-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
+; KMSAN-NEXT:    [[TMP5:%.*]] = load i32, ptr [[TMP4]], align 4
+; KMSAN-NEXT:    [[TMP6:%.*]] = load i32, ptr @is1, align 4
+; KMSAN-NEXT:    [[TMP7:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_4(ptr @is1)
+; KMSAN-NEXT:    [[TMP8:%.*]] = extractvalue { ptr, ptr } [[TMP7]], 0
+; KMSAN-NEXT:    [[TMP9:%.*]] = extractvalue { ptr, ptr } [[TMP7]], 1
+; KMSAN-NEXT:    [[_MSLD1:%.*]] = load i32, ptr [[TMP8]], align 4
+; KMSAN-NEXT:    [[TMP10:%.*]] = load i32, ptr [[TMP9]], align 4
+; KMSAN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; KMSAN-NEXT:    br i1 [[_MSCMP]], label %[[BB11:.*]], label %[[BB12:.*]], !prof [[PROF1]]
+; KMSAN:       [[BB11]]:
+; KMSAN-NEXT:    call void @__msan_warning(i32 [[TMP5]]) #[[ATTR2]]
+; KMSAN-NEXT:    br label %[[BB12]]
+; KMSAN:       [[BB12]]:
+; KMSAN-NEXT:    [[_MSCMP2:%.*]] = icmp ne i32 [[_MSLD1]], 0
+; KMSAN-NEXT:    br i1 [[_MSCMP2]], label %[[BB13:.*]], label %[[BB14:.*]], !prof [[PROF1]]
+; KMSAN:       [[BB13]]:
+; KMSAN-NEXT:    call void @__msan_warning(i32 [[TMP10]]) #[[ATTR2]]
+; KMSAN-NEXT:    br label %[[BB14]]
+; KMSAN:       [[BB14]]:
+; KMSAN-NEXT:    [[TMP15:%.*]] = call { i32, i32 } asm "", "=r,=r,r,r,~{dirflag},~{fpsr},~{flags}"(i32 [[TMP1]], i32 [[TMP6]])
+; KMSAN-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i32, i32 } [[TMP15]], 0
+; KMSAN-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i32, i32 } [[TMP15]], 1
+; KMSAN-NEXT:    [[TMP16:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id1)
+; KMSAN-NEXT:    [[TMP17:%.*]] = extractvalue { ptr, ptr } [[TMP16]], 0
+; KMSAN-NEXT:    [[TMP18:%.*]] = extractvalue { ptr, ptr } [[TMP16]], 1
+; KMSAN-NEXT:    store i32 0, ptr [[TMP17]], align 4
+; KMSAN-NEXT:    store i32 [[ASMRESULT]], ptr @id1, align 4
+; KMSAN-NEXT:    [[TMP19:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id2)
+; KMSAN-NEXT:    [[TMP20:%.*]] = extractvalue { ptr, ptr } [[TMP19]], 0
+; KMSAN-NEXT:    [[TMP21:%.*]] = extractvalue { ptr, ptr } [[TMP19]], 1
+; KMSAN-NEXT:    store i32 0, ptr [[TMP20]], align 4
+; KMSAN-NEXT:    store i32 [[ASMRESULT1]], ptr @id2, align 4
+; KMSAN-NEXT:    ret void
+;
 entry:
   %0 = load i32, ptr @id1, align 4
   %1 = load i32, ptr @is1, align 4
@@ -120,23 +367,80 @@ entry:
   ret void
 }
 
-; CHECK-LABEL: @f_2i_2o_reuse1_reg
-; CHECK: [[ID1_F4:%.*]] = load i32, ptr @id1, align 4
-; CHECK: [[IS1_F4:%.*]] = load i32, ptr @is1, align 4
-; CHECK: call void @__msan_warning
-; KMSAN: call void @__msan_warning
-; CHECK: call { i32, i32 } asm "",{{.*}}(i32 [[ID1_F4]], i32 [[IS1_F4]])
-; KMSAN: [[PACK1_F4:%.*]] = call {{.*}} @__msan_metadata_ptr_for_store_4({{.*}}@id1{{.*}})
-; KMSAN: [[EXT1_F4:%.*]] = extractvalue { ptr, ptr } [[PACK1_F4]], 0
-; KMSAN: store i32 0, ptr [[EXT1_F4]]
-; KMSAN: [[PACK2_F4:%.*]] = call {{.*}} @__msan_metadata_ptr_for_store_4({{.*}}@id2{{.*}})
-; KMSAN: [[EXT2_F4:%.*]] = extractvalue { ptr, ptr } [[PACK2_F4]], 0
-; KMSAN: store i32 0, ptr [[EXT2_F4]]
 
 
 ; One input register, three output registers:
 ;   asm("" : "=r" (id1), "=r" (id2), "=r" (id3) : "r" (is1));
 define dso_local void @f_1i_3o_reg() sanitize_memory {
+; USER-CONS-LABEL: define dso_local void @f_1i_3o_reg(
+; USER-CONS-SAME: ) #[[ATTR0]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    [[TMP0:%.*]] = load i32, ptr @is1, align 4
+; USER-CONS-NEXT:    [[_MSLD:%.*]] = load i32, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @is1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; USER-CONS-NEXT:    br i1 [[_MSCMP]], label %[[BB1:.*]], label %[[BB2:.*]], !prof [[PROF1]]
+; USER-CONS:       [[BB1]]:
+; USER-CONS-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
+; USER-CONS-NEXT:    unreachable
+; USER-CONS:       [[BB2]]:
+; USER-CONS-NEXT:    [[TMP3:%.*]] = call { i32, i32, i32 } asm "", "=r,=r,=r,r,~{dirflag},~{fpsr},~{flags}"(i32 [[TMP0]])
+; USER-CONS-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i32, i32, i32 } [[TMP3]], 0
+; USER-CONS-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i32, i32, i32 } [[TMP3]], 1
+; USER-CONS-NEXT:    [[ASMRESULT2:%.*]] = extractvalue { i32, i32, i32 } [[TMP3]], 2
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[ASMRESULT]], ptr @id1, align 4
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id2 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[ASMRESULT1]], ptr @id2, align 4
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id3 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[ASMRESULT2]], ptr @id3, align 4
+; USER-CONS-NEXT:    ret void
+;
+; KMSAN-LABEL: define dso_local void @f_1i_3o_reg(
+; KMSAN-SAME: ) #[[ATTR0]] {
+; KMSAN-NEXT:  [[ENTRY:.*:]]
+; KMSAN-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; KMSAN-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; KMSAN-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; KMSAN-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; KMSAN-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; KMSAN-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; KMSAN-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; KMSAN-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; KMSAN-NEXT:    call void @llvm.donothing()
+; KMSAN-NEXT:    [[TMP1:%.*]] = load i32, ptr @is1, align 4
+; KMSAN-NEXT:    [[TMP2:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_4(ptr @is1)
+; KMSAN-NEXT:    [[TMP3:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 0
+; KMSAN-NEXT:    [[TMP4:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 1
+; KMSAN-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
+; KMSAN-NEXT:    [[TMP5:%.*]] = load i32, ptr [[TMP4]], align 4
+; KMSAN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; KMSAN-NEXT:    br i1 [[_MSCMP]], label %[[BB6:.*]], label %[[BB7:.*]], !prof [[PROF1]]
+; KMSAN:       [[BB6]]:
+; KMSAN-NEXT:    call void @__msan_warning(i32 [[TMP5]]) #[[ATTR2]]
+; KMSAN-NEXT:    br label %[[BB7]]
+; KMSAN:       [[BB7]]:
+; KMSAN-NEXT:    [[TMP8:%.*]] = call { i32, i32, i32 } asm "", "=r,=r,=r,r,~{dirflag},~{fpsr},~{flags}"(i32 [[TMP1]])
+; KMSAN-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i32, i32, i32 } [[TMP8]], 0
+; KMSAN-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i32, i32, i32 } [[TMP8]], 1
+; KMSAN-NEXT:    [[ASMRESULT2:%.*]] = extractvalue { i32, i32, i32 } [[TMP8]], 2
+; KMSAN-NEXT:    [[TMP9:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id1)
+; KMSAN-NEXT:    [[TMP10:%.*]] = extractvalue { ptr, ptr } [[TMP9]], 0
+; KMSAN-NEXT:    [[TMP11:%.*]] = extractvalue { ptr, ptr } [[TMP9]], 1
+; KMSAN-NEXT:    store i32 0, ptr [[TMP10]], align 4
+; KMSAN-NEXT:    store i32 [[ASMRESULT]], ptr @id1, align 4
+; KMSAN-NEXT:    [[TMP12:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id2)
+; KMSAN-NEXT:    [[TMP13:%.*]] = extractvalue { ptr, ptr } [[TMP12]], 0
+; KMSAN-NEXT:    [[TMP14:%.*]] = extractvalue { ptr, ptr } [[TMP12]], 1
+; KMSAN-NEXT:    store i32 0, ptr [[TMP13]], align 4
+; KMSAN-NEXT:    store i32 [[ASMRESULT1]], ptr @id2, align 4
+; KMSAN-NEXT:    [[TMP15:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id3)
+; KMSAN-NEXT:    [[TMP16:%.*]] = extractvalue { ptr, ptr } [[TMP15]], 0
+; KMSAN-NEXT:    [[TMP17:%.*]] = extractvalue { ptr, ptr } [[TMP15]], 1
+; KMSAN-NEXT:    store i32 0, ptr [[TMP16]], align 4
+; KMSAN-NEXT:    store i32 [[ASMRESULT2]], ptr @id3, align 4
+; KMSAN-NEXT:    ret void
+;
 entry:
   %0 = load i32, ptr @is1, align 4
   %1 = call { i32, i32, i32 } asm "", "=r,=r,=r,r,~{dirflag},~{fpsr},~{flags}"(i32 %0)
@@ -149,40 +453,98 @@ entry:
   ret void
 }
 
-; CHECK-LABEL: @f_1i_3o_reg
-; CHECK: [[IS1_F5:%.*]] = load i32, ptr @is1, align 4
-; CHECK: call void @__msan_warning
-; CHECK: call { i32, i32, i32 } asm "",{{.*}}(i32 [[IS1_F5]])
-; KMSAN: [[PACK1_F5:%.*]] = call {{.*}} @__msan_metadata_ptr_for_store_4({{.*}}@id1{{.*}})
-; KMSAN: [[EXT1_F5:%.*]] = extractvalue { ptr, ptr } [[PACK1_F5]], 0
-; KMSAN: store i32 0, ptr [[EXT1_F5]]
-; KMSAN: [[PACK2_F5:%.*]] = call {{.*}} @__msan_metadata_ptr_for_store_4({{.*}}@id2{{.*}})
-; KMSAN: [[EXT2_F5:%.*]] = extractvalue { ptr, ptr } [[PACK2_F5]], 0
-; KMSAN: store i32 0, ptr [[EXT2_F5]]
-; KMSAN: [[PACK3_F5:%.*]] = call {{.*}} @__msan_metadata_ptr_for_store_4({{.*}}@id3{{.*}})
-; KMSAN: [[EXT3_F5:%.*]] = extractvalue { ptr, ptr } [[PACK3_F5]], 0
-; KMSAN: store i32 0, ptr [[EXT3_F5]]
 
 
 ; 2 input memory args, 2 output memory args:
 ;  asm("" : "=m" (id1), "=m" (id2) : "m" (is1), "m"(is2))
 define dso_local void @f_2i_2o_mem() sanitize_memory {
+; USER-CONS-LABEL: define dso_local void @f_2i_2o_mem(
+; USER-CONS-SAME: ) #[[ATTR0]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 1
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id2 to i64), i64 87960930222080), i64 2097152) to ptr), align 1
+; USER-CONS-NEXT:    call void asm "", "=*m,=*m,*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id1, ptr elementtype(i32) @id2, ptr elementtype(i32) @is1, ptr elementtype(i32) @is2)
+; USER-CONS-NEXT:    ret void
+;
+; CHECK-CONS-LABEL: define dso_local void @f_2i_2o_mem(
+; CHECK-CONS-SAME: ) #[[ATTR0]] {
+; CHECK-CONS-NEXT:  [[ENTRY:.*:]]
+; CHECK-CONS-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; CHECK-CONS-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; CHECK-CONS-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; CHECK-CONS-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; CHECK-CONS-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; CHECK-CONS-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; CHECK-CONS-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; CHECK-CONS-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; CHECK-CONS-NEXT:    call void @llvm.donothing()
+; CHECK-CONS-NEXT:    call void @__msan_instrument_asm_store(ptr @id1, i64 4)
+; CHECK-CONS-NEXT:    call void @__msan_instrument_asm_store(ptr @id2, i64 4)
+; CHECK-CONS-NEXT:    call void asm "", "=*m,=*m,*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id1, ptr elementtype(i32) @id2, ptr elementtype(i32) @is1, ptr elementtype(i32) @is2)
+; CHECK-CONS-NEXT:    ret void
+;
 entry:
   call void asm "", "=*m,=*m,*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id1, ptr elementtype(i32) @id2, ptr elementtype(i32) @is1, ptr elementtype(i32) @is2)
   ret void
 }
 
-; CHECK-LABEL: @f_2i_2o_mem
-; USER-CONS:  store i32 0, ptr inttoptr (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080) to ptr), align 1
-; USER-CONS:  store i32 0, ptr inttoptr (i64 xor (i64 ptrtoint (ptr @id2 to i64), i64 87960930222080) to ptr), align 1
-; CHECK-CONS: call void @__msan_instrument_asm_store({{.*}}@id1{{.*}}, i64 4)
-; CHECK-CONS: call void @__msan_instrument_asm_store({{.*}}@id2{{.*}}, i64 4)
-; CHECK: call void asm "", "=*m,=*m,*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id1, ptr elementtype(i32) @id2, ptr elementtype(i32) @is1, ptr elementtype(i32) @is2)
 
 
 ; Same input and output passed as both memory and register:
 ;  asm("" : "=r" (id1), "=m"(id1) : "r"(is1), "m"(is1));
 define dso_local void @f_1i_1o_memreg() sanitize_memory {
+; USER-CONS-LABEL: define dso_local void @f_1i_1o_memreg(
+; USER-CONS-SAME: ) #[[ATTR0]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    [[TMP0:%.*]] = load i32, ptr @is1, align 4
+; USER-CONS-NEXT:    [[_MSLD:%.*]] = load i32, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @is1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 1
+; USER-CONS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; USER-CONS-NEXT:    br i1 [[_MSCMP]], label %[[BB1:.*]], label %[[BB2:.*]], !prof [[PROF1]]
+; USER-CONS:       [[BB1]]:
+; USER-CONS-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
+; USER-CONS-NEXT:    unreachable
+; USER-CONS:       [[BB2]]:
+; USER-CONS-NEXT:    [[TMP3:%.*]] = call i32 asm "", "=r,=*m,r,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id1, i32 [[TMP0]], ptr elementtype(i32) @is1)
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[TMP3]], ptr @id1, align 4
+; USER-CONS-NEXT:    ret void
+;
+; CHECK-CONS-LABEL: define dso_local void @f_1i_1o_memreg(
+; CHECK-CONS-SAME: ) #[[ATTR0]] {
+; CHECK-CONS-NEXT:  [[ENTRY:.*:]]
+; CHECK-CONS-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; CHECK-CONS-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; CHECK-CONS-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; CHECK-CONS-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; CHECK-CONS-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; CHECK-CONS-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; CHECK-CONS-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; CHECK-CONS-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; CHECK-CONS-NEXT:    call void @llvm.donothing()
+; CHECK-CONS-NEXT:    [[TMP1:%.*]] = load i32, ptr @is1, align 4
+; CHECK-CONS-NEXT:    [[TMP2:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_4(ptr @is1)
+; CHECK-CONS-NEXT:    [[TMP3:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 0
+; CHECK-CONS-NEXT:    [[TMP4:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 1
+; CHECK-CONS-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
+; CHECK-CONS-NEXT:    [[TMP5:%.*]] = load i32, ptr [[TMP4]], align 4
+; CHECK-CONS-NEXT:    call void @__msan_instrument_asm_store(ptr @id1, i64 4)
+; CHECK-CONS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
+; CHECK-CONS-NEXT:    br i1 [[_MSCMP]], label %[[BB6:.*]], label %[[BB7:.*]], !prof [[PROF1]]
+; CHECK-CONS:       [[BB6]]:
+; CHECK-CONS-NEXT:    call void @__msan_warning(i32 [[TMP5]]) #[[ATTR2]]
+; CHECK-CONS-NEXT:    br label %[[BB7]]
+; CHECK-CONS:       [[BB7]]:
+; CHECK-CONS-NEXT:    [[TMP8:%.*]] = call i32 asm "", "=r,=*m,r,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id1, i32 [[TMP1]], ptr elementtype(i32) @is1)
+; CHECK-CONS-NEXT:    [[TMP9:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id1)
+; CHECK-CONS-NEXT:    [[TMP10:%.*]] = extractvalue { ptr, ptr } [[TMP9]], 0
+; CHECK-CONS-NEXT:    [[TMP11:%.*]] = extractvalue { ptr, ptr } [[TMP9]], 1
+; CHECK-CONS-NEXT:    store i32 0, ptr [[TMP10]], align 4
+; CHECK-CONS-NEXT:    store i32 [[TMP8]], ptr @id1, align 4
+; CHECK-CONS-NEXT:    ret void
+;
 entry:
   %0 = load i32, ptr @is1, align 4
   %1 = call i32 asm "", "=r,=*m,r,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id1, i32 %0, ptr elementtype(i32) @is1)
@@ -190,16 +552,52 @@ entry:
   ret void
 }
 
-; CHECK-LABEL: @f_1i_1o_memreg
-; CHECK: [[IS1_F7:%.*]] = load i32, ptr @is1, align 4
-; USER-CONS:  store i32 0, ptr inttoptr (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080) to ptr), align 1
-; CHECK-CONS: call void @__msan_instrument_asm_store({{.*}}@id1{{.*}}, i64 4)
-; CHECK: call void @__msan_warning
-; CHECK: call i32 asm "", "=r,=*m,r,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id1, i32 [[IS1_F7]], ptr elementtype(i32) @is1)
 
 ; Three outputs, first and last returned via regs, second via mem:
 ;  asm("" : "=r" (id1), "=m"(id2), "=r" (id3):);
 define dso_local void @f_3o_reg_mem_reg() sanitize_memory {
+; USER-CONS-LABEL: define dso_local void @f_3o_reg_mem_reg(
+; USER-CONS-SAME: ) #[[ATTR0]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id2 to i64), i64 87960930222080), i64 2097152) to ptr), align 1
+; USER-CONS-NEXT:    [[TMP0:%.*]] = call { i32, i32 } asm "", "=r,=*m,=r,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id2)
+; USER-CONS-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i32, i32 } [[TMP0]], 0
+; USER-CONS-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i32, i32 } [[TMP0]], 1
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[ASMRESULT]], ptr @id1, align 4
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id3 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[ASMRESULT1]], ptr @id3, align 4
+; USER-CONS-NEXT:    ret void
+;
+; CHECK-CONS-LABEL: define dso_local void @f_3o_reg_mem_reg(
+; CHECK-CONS-SAME: ) #[[ATTR0]] {
+; CHECK-CONS-NEXT:  [[ENTRY:.*:]]
+; CHECK-CONS-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; CHECK-CONS-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; CHECK-CONS-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; CHECK-CONS-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; CHECK-CONS-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; CHECK-CONS-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; CHECK-CONS-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; CHECK-CONS-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; CHECK-CONS-NEXT:    call void @llvm.donothing()
+; CHECK-CONS-NEXT:    call void @__msan_instrument_asm_store(ptr @id2, i64 4)
+; CHECK-CONS-NEXT:    [[TMP1:%.*]] = call { i32, i32 } asm "", "=r,=*m,=r,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id2)
+; CHECK-CONS-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i32, i32 } [[TMP1]], 0
+; CHECK-CONS-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i32, i32 } [[TMP1]], 1
+; CHECK-CONS-NEXT:    [[TMP2:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id1)
+; CHECK-CONS-NEXT:    [[TMP3:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 0
+; CHECK-CONS-NEXT:    [[TMP4:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 1
+; CHECK-CONS-NEXT:    store i32 0, ptr [[TMP3]], align 4
+; CHECK-CONS-NEXT:    store i32 [[ASMRESULT]], ptr @id1, align 4
+; CHECK-CONS-NEXT:    [[TMP5:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id3)
+; CHECK-CONS-NEXT:    [[TMP6:%.*]] = extractvalue { ptr, ptr } [[TMP5]], 0
+; CHECK-CONS-NEXT:    [[TMP7:%.*]] = extractvalue { ptr, ptr } [[TMP5]], 1
+; CHECK-CONS-NEXT:    store i32 0, ptr [[TMP6]], align 4
+; CHECK-CONS-NEXT:    store i32 [[ASMRESULT1]], ptr @id3, align 4
+; CHECK-CONS-NEXT:    ret void
+;
 entry:
   %0 = call { i32, i32 } asm "", "=r,=*m,=r,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id2)
   %asmresult = extractvalue { i32, i32 } %0, 0
@@ -209,16 +607,106 @@ entry:
   ret void
 }
 
-; CHECK-LABEL: @f_3o_reg_mem_reg
-; USER-CONS:  store i32 0, ptr inttoptr (i64 xor (i64 ptrtoint (ptr @id2 to i64), i64 87960930222080) to ptr), align 1
-; CHECK-CONS: call void @__msan_instrument_asm_store(ptr @id2, i64 4)
-; CHECK: call { i32, i32 } asm "", "=r,=*m,=r,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(i32) @id2)
 
 
 ; Three inputs and three outputs of different types: a pair, a char, a function pointer.
 ; Everything is meant to be passed in registers, but LLVM chooses to return the integer pair by pointer:
 ;  asm("" : "=r" (pair2), "=r" (c2), "=r" (memcpy_d1) : "r"(pair1), "r"(c1), "r"(memcpy_s1));
 define dso_local void @f_3i_3o_complex_reg() sanitize_memory {
+; USER-CONS-LABEL: define dso_local void @f_3i_3o_complex_reg(
+; USER-CONS-SAME: ) #[[ATTR0]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    [[TMP0:%.*]] = load i64, ptr @pair1, align 4
+; USER-CONS-NEXT:    [[_MSLD:%.*]] = load i64, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @pair1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    [[TMP1:%.*]] = load i8, ptr @c1, align 1
+; USER-CONS-NEXT:    [[_MSLD1:%.*]] = load i8, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @c1 to i64), i64 87960930222080), i64 2097152) to ptr), align 1
+; USER-CONS-NEXT:    [[TMP2:%.*]] = load ptr, ptr @memcpy_s1, align 8
+; USER-CONS-NEXT:    [[_MSLD2:%.*]] = load i64, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @memcpy_s1 to i64), i64 87960930222080), i64 2097152) to ptr), align 8
+; USER-CONS-NEXT:    store { i32, i32 } zeroinitializer, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @pair2 to i64), i64 87960930222080), i64 2097152) to ptr), align 1
+; USER-CONS-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[_MSLD]], 0
+; USER-CONS-NEXT:    [[_MSCMP3:%.*]] = icmp ne i8 [[_MSLD1]], 0
+; USER-CONS-NEXT:    [[_MSOR:%.*]] = or i1 [[_MSCMP]], [[_MSCMP3]]
+; USER-CONS-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[_MSLD2]], 0
+; USER-CONS-NEXT:    [[_MSOR5:%.*]] = or i1 [[_MSOR]], [[_MSCMP4]]
+; USER-CONS-NEXT:    br i1 [[_MSOR5]], label %[[BB3:.*]], label %[[BB4:.*]], !prof [[PROF1]]
+; USER-CONS:       [[BB3]]:
+; USER-CONS-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
+; USER-CONS-NEXT:    unreachable
+; USER-CONS:       [[BB4]]:
+; USER-CONS-NEXT:    [[TMP5:%.*]] = call { i8, ptr } asm "", "=*r,=r,=r,r,r,r,~{dirflag},~{fpsr},~{flags}"(ptr elementtype([[STRUCT_PAIR:%.*]]) @pair2, i64 [[TMP0]], i8 [[TMP1]], ptr [[TMP2]])
+; USER-CONS-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i8, ptr } [[TMP5]], 0
+; USER-CONS-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i8, ptr } [[TMP5]], 1
+; USER-CONS-NEXT:    store i8 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @c2 to i64), i64 87960930222080), i64 2097152) to ptr), align 1
+; USER-CONS-NEXT:    store i8 [[ASMRESULT]], ptr @c2, align 1
+; USER-CONS-NEXT:    store i64 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @memcpy_d1 to i64), i64 87960930222080), i64 2097152) to ptr), align 8
+; USER-CONS-NEXT:    store ptr [[ASMRESULT1]], ptr @memcpy_d1, align 8
+; USER-CONS-NEXT:    ret void
+;
+; CHECK-CONS-LABEL: define dso_local void @f_3i_3o_complex_reg(
+; CHECK-CONS-SAME: ) #[[ATTR0]] {
+; CHECK-CONS-NEXT:  [[ENTRY:.*:]]
+; CHECK-CONS-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; CHECK-CONS-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; CHECK-CONS-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; CHECK-CONS-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; CHECK-CONS-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; CHECK-CONS-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; CHECK-CONS-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; CHECK-CONS-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; CHECK-CONS-NEXT:    call void @llvm.donothing()
+; CHECK-CONS-NEXT:    [[TMP1:%.*]] = load i64, ptr @pair1, align 4
+; CHECK-CONS-NEXT:    [[TMP2:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_8(ptr @pair1)
+; CHECK-CONS-NEXT:    [[TMP3:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 0
+; CHECK-CONS-NEXT:    [[TMP4:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 1
+; CHECK-CONS-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP3]], align 4
+; CHECK-CONS-NEXT:    [[TMP5:%.*]] = load i32, ptr [[TMP4]], align 4
+; CHECK-CONS-NEXT:    [[TMP6:%.*]] = load i8, ptr @c1, align 1
+; CHECK-CONS-NEXT:    [[TMP7:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_1(ptr @c1)
+; CHECK-CONS-NEXT:    [[TMP8:%.*]] = extractvalue { ptr, ptr } [[TMP7]], 0
+; CHECK-CONS-NEXT:    [[TMP9:%.*]] = extractvalue { ptr, ptr } [[TMP7]], 1
+; CHECK-CONS-NEXT:    [[_MSLD1:%.*]] = load i8, ptr [[TMP8]], align 1
+; CHECK-CONS-NEXT:    [[TMP10:%.*]] = load i32, ptr [[TMP9]], align 4
+; CHECK-CONS-NEXT:    [[TMP11:%.*]] = load ptr, ptr @memcpy_s1, align 8
+; CHECK-CONS-NEXT:    [[TMP12:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_load_8(ptr @memcpy_s1)
+; CHECK-CONS-NEXT:    [[TMP13:%.*]] = extractvalue { ptr, ptr } [[TMP12]], 0
+; CHECK-CONS-NEXT:    [[TMP14:%.*]] = extractvalue { ptr, ptr } [[TMP12]], 1
+; CHECK-CONS-NEXT:    [[_MSLD2:%.*]] = load i64, ptr [[TMP13]], align 8
+; CHECK-CONS-NEXT:    [[TMP15:%.*]] = load i32, ptr [[TMP14]], align 8
+; CHECK-CONS-NEXT:    call void @__msan_instrument_asm_store(ptr @pair2, i64 8)
+; CHECK-CONS-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[_MSLD]], 0
+; CHECK-CONS-NEXT:    br i1 [[_MSCMP]], label %[[BB16:.*]], label %[[BB17:.*]], !prof [[PROF1]]
+; CHECK-CONS:       [[BB16]]:
+; CHECK-CONS-NEXT:    call void @__msan_warning(i32 [[TMP5]]) #[[ATTR2]]
+; CHECK-CONS-NEXT:    br label %[[BB17]]
+; CHECK-CONS:       [[BB17]]:
+; CHECK-CONS-NEXT:    [[_MSCMP3:%.*]] = icmp ne i8 [[_MSLD1]], 0
+; CHECK-CONS-NEXT:    br i1 [[_MSCMP3]], label %[[BB18:.*]], label %[[BB19:.*]], !prof [[PROF1]]
+; CHECK-CONS:       [[BB18]]:
+; CHECK-CONS-NEXT:    call void @__msan_warning(i32 [[TMP10]]) #[[ATTR2]]
+; CHECK-CONS-NEXT:    br label %[[BB19]]
+; CHECK-CONS:       [[BB19]]:
+; CHECK-CONS-NEXT:    [[_MSCMP4:%.*]] = icmp ne i64 [[_MSLD2]], 0
+; CHECK-CONS-NEXT:    br i1 [[_MSCMP4]], label %[[BB20:.*]], label %[[BB21:.*]], !prof [[PROF1]]
+; CHECK-CONS:       [[BB20]]:
+; CHECK-CONS-NEXT:    call void @__msan_warning(i32 [[TMP15]]) #[[ATTR2]]
+; CHECK-CONS-NEXT:    br label %[[BB21]]
+; CHECK-CONS:       [[BB21]]:
+; CHECK-CONS-NEXT:    [[TMP22:%.*]] = call { i8, ptr } asm "", "=*r,=r,=r,r,r,r,~{dirflag},~{fpsr},~{flags}"(ptr elementtype([[STRUCT_PAIR:%.*]]) @pair2, i64 [[TMP1]], i8 [[TMP6]], ptr [[TMP11]])
+; CHECK-CONS-NEXT:    [[ASMRESULT:%.*]] = extractvalue { i8, ptr } [[TMP22]], 0
+; CHECK-CONS-NEXT:    [[ASMRESULT1:%.*]] = extractvalue { i8, ptr } [[TMP22]], 1
+; CHECK-CONS-NEXT:    [[TMP23:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_1(ptr @c2)
+; CHECK-CONS-NEXT:    [[TMP24:%.*]] = extractvalue { ptr, ptr } [[TMP23]], 0
+; CHECK-CONS-NEXT:    [[TMP25:%.*]] = extractvalue { ptr, ptr } [[TMP23]], 1
+; CHECK-CONS-NEXT:    store i8 0, ptr [[TMP24]], align 1
+; CHECK-CONS-NEXT:    store i8 [[ASMRESULT]], ptr @c2, align 1
+; CHECK-CONS-NEXT:    [[TMP26:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_8(ptr @memcpy_d1)
+; CHECK-CONS-NEXT:    [[TMP27:%.*]] = extractvalue { ptr, ptr } [[TMP26]], 0
+; CHECK-CONS-NEXT:    [[TMP28:%.*]] = extractvalue { ptr, ptr } [[TMP26]], 1
+; CHECK-CONS-NEXT:    store i64 0, ptr [[TMP27]], align 8
+; CHECK-CONS-NEXT:    store ptr [[ASMRESULT1]], ptr @memcpy_d1, align 8
+; CHECK-CONS-NEXT:    ret void
+;
 entry:
   %0 = load i64, ptr @pair1, align 4
   %1 = load i8, ptr @c1, align 1
@@ -231,46 +719,83 @@ entry:
   ret void
 }
 
-; CHECK-LABEL: @f_3i_3o_complex_reg
-; CHECK: [[PAIR1_F9:%.*]] = load {{.*}} @pair1
-; CHECK: [[C1_F9:%.*]] = load {{.*}} @c1
-; CHECK: [[MEMCPY_S1_F9:%.*]] = load {{.*}} @memcpy_s1
-; USER-CONS:  store { i32, i32 } zeroinitializer, ptr inttoptr (i64 xor (i64 ptrtoint (ptr @pair2 to i64), i64 87960930222080) to ptr), align 1
-; CHECK-CONS: call void @__msan_instrument_asm_store({{.*}}@pair2{{.*}}, i64 8)
-; CHECK: call void @__msan_warning
-; KMSAN: call void @__msan_warning
-; KMSAN: call void @__msan_warning
-; CHECK: call { i8, ptr } asm "", "=*r,=r,=r,r,r,r,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(%struct.pair) @pair2, {{.*}}[[PAIR1_F9]], i8 [[C1_F9]], {{.*}} [[MEMCPY_S1_F9]])
 
 ; Three inputs and three outputs of different types: a pair, a char, a function pointer.
 ; Everything is passed in memory:
 ;  asm("" : "=m" (pair2), "=m" (c2), "=m" (memcpy_d1) : "m"(pair1), "m"(c1), "m"(memcpy_s1));
 define dso_local void @f_3i_3o_complex_mem() sanitize_memory {
+; USER-CONS-LABEL: define dso_local void @f_3i_3o_complex_mem(
+; USER-CONS-SAME: ) #[[ATTR0]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    store { i32, i32 } zeroinitializer, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @pair2 to i64), i64 87960930222080), i64 2097152) to ptr), align 1
+; USER-CONS-NEXT:    store i8 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @c2 to i64), i64 87960930222080), i64 2097152) to ptr), align 1
+; USER-CONS-NEXT:    store i64 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @memcpy_d1 to i64), i64 87960930222080), i64 2097152) to ptr), align 1
+; USER-CONS-NEXT:    call void asm "", "=*m,=*m,=*m,*m,*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype([[STRUCT_PAIR:%.*]]) @pair2, ptr elementtype(i8) @c2, ptr elementtype(ptr) @memcpy_d1, ptr elementtype([[STRUCT_PAIR]]) @pair1, ptr elementtype(i8) @c1, ptr elementtype(ptr) @memcpy_s1)
+; USER-CONS-NEXT:    ret void
+;
+; CHECK-CONS-LABEL: define dso_local void @f_3i_3o_complex_mem(
+; CHECK-CONS-SAME: ) #[[ATTR0]] {
+; CHECK-CONS-NEXT:  [[ENTRY:.*:]]
+; CHECK-CONS-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; CHECK-CONS-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; CHECK-CONS-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; CHECK-CONS-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; CHECK-CONS-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; CHECK-CONS-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; CHECK-CONS-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; CHECK-CONS-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; CHECK-CONS-NEXT:    call void @llvm.donothing()
+; CHECK-CONS-NEXT:    call void @__msan_instrument_asm_store(ptr @pair2, i64 8)
+; CHECK-CONS-NEXT:    call void @__msan_instrument_asm_store(ptr @c2, i64 1)
+; CHECK-CONS-NEXT:    call void @__msan_instrument_asm_store(ptr @memcpy_d1, i64 8)
+; CHECK-CONS-NEXT:    call void asm "", "=*m,=*m,=*m,*m,*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype([[STRUCT_PAIR:%.*]]) @pair2, ptr elementtype(i8) @c2, ptr elementtype(ptr) @memcpy_d1, ptr elementtype([[STRUCT_PAIR]]) @pair1, ptr elementtype(i8) @c1, ptr elementtype(ptr) @memcpy_s1)
+; CHECK-CONS-NEXT:    ret void
+;
 entry:
   call void asm "", "=*m,=*m,=*m,*m,*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(%struct.pair) @pair2, ptr elementtype(i8) @c2, ptr elementtype(ptr) @memcpy_d1, ptr elementtype(%struct.pair) @pair1, ptr elementtype(i8) @c1, ptr elementtype(ptr) @memcpy_s1)
   ret void
 }
 
-; CHECK-LABEL: @f_3i_3o_complex_mem
-; USER-CONS:       store { i32, i32 } zeroinitializer, ptr inttoptr (i64 xor (i64 ptrtoint (ptr @pair2 to i64), i64 87960930222080) to ptr), align 1
-; USER-CONS-NEXT:  store i8 0, ptr inttoptr (i64 xor (i64 ptrtoint (ptr @c2 to i64), i64 87960930222080) to ptr), align 1
-; USER-CONS-NEXT:  store i64 0, ptr inttoptr (i64 xor (i64 ptrtoint (ptr @memcpy_d1 to i64), i64 87960930222080) to ptr), align 1
-; CHECK-CONS:      call void @__msan_instrument_asm_store({{.*}}@pair2{{.*}}, i64 8)
-; CHECK-CONS:      call void @__msan_instrument_asm_store({{.*}}@c2{{.*}}, i64 1)
-; CHECK-CONS:      call void @__msan_instrument_asm_store({{.*}}@memcpy_d1{{.*}}, i64 8)
-; CHECK: call void asm "", "=*m,=*m,=*m,*m,*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(%struct.pair) @pair2, ptr elementtype(i8) @c2, ptr elementtype(ptr) @memcpy_d1, ptr elementtype(%struct.pair) @pair1, ptr elementtype(i8) @c1, ptr elementtype(ptr) @memcpy_s1)
 
 ; Use memset when the size is larger.
 define dso_local void @f_1i_1o_mem_large() sanitize_memory {
+; USER-CONS-LABEL: define dso_local void @f_1i_1o_mem_large(
+; USER-CONS-SAME: ) #[[ATTR0]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    call void @llvm.memset.p0.i64(ptr align 1 inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @large to i64), i64 87960930222080), i64 2097152) to ptr), i8 0, i64 33, i1 false)
+; USER-CONS-NEXT:    [[TMP0:%.*]] = call i32 asm "", "=r,=*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype([[STRUCT_LARGE:%.*]]) @large, ptr elementtype([[STRUCT_LARGE]]) @large)
+; USER-CONS-NEXT:    store i32 0, ptr inttoptr (i64 add (i64 xor (i64 ptrtoint (ptr @id1 to i64), i64 87960930222080), i64 2097152) to ptr), align 4
+; USER-CONS-NEXT:    store i32 [[TMP0]], ptr @id1, align 4
+; USER-CONS-NEXT:    ret void
+;
+; CHECK-CONS-LABEL: define dso_local void @f_1i_1o_mem_large(
+; CHECK-CONS-SAME: ) #[[ATTR0]] {
+; CHECK-CONS-NEXT:  [[ENTRY:.*:]]
+; CHECK-CONS-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; CHECK-CONS-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; CHECK-CONS-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; CHECK-CONS-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; CHECK-CONS-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; CHECK-CONS-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; CHECK-CONS-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; CHECK-CONS-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; CHECK-CONS-NEXT:    call void @llvm.donothing()
+; CHECK-CONS-NEXT:    call void @__msan_instrument_asm_store(ptr @large, i64 33)
+; CHECK-CONS-NEXT:    [[TMP1:%.*]] = call i32 asm "", "=r,=*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype([[STRUCT_LARGE:%.*]]) @large, ptr elementtype([[STRUCT_LARGE]]) @large)
+; CHECK-CONS-NEXT:    [[TMP2:%.*]] = call { ptr, ptr } @__msan_metadata_ptr_for_store_4(ptr @id1)
+; CHECK-CONS-NEXT:    [[TMP3:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 0
+; CHECK-CONS-NEXT:    [[TMP4:%.*]] = extractvalue { ptr, ptr } [[TMP2]], 1
+; CHECK-CONS-NEXT:    store i32 0, ptr [[TMP3]], align 4
+; CHECK-CONS-NEXT:    store i32 [[TMP1]], ptr @id1, align 4
+; CHECK-CONS-NEXT:    ret void
+;
 entry:
   %0 = call i32 asm "", "=r,=*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(%struct.large) @large, ptr elementtype(%struct.large) @large)
   store i32 %0, ptr @id1, align 4
   ret void
 }
-; CHECK-LABEL: @f_1i_1o_mem_large(
-; USER-CONS:  call void @llvm.memset.p0.i64(ptr align 1 inttoptr (i64 xor (i64 ptrtoint (ptr @large to i64), i64 87960930222080) to ptr), i8 0, i64 33, i1 false)
-; CHECK-CONS: call void @__msan_instrument_asm_store({{.*}}@large{{.*}}, i64 33)
-; CHECK: call i32 asm "", "=r,=*m,*m,~{dirflag},~{fpsr},~{flags}"(ptr elementtype(%struct.large) @large, ptr elementtype(%struct.large) @large)
 
 ; A simple asm goto construct to check that callbr is handled correctly:
 ;  int asm_goto(int n) {
@@ -283,9 +808,64 @@ entry:
 ; asm goto statements can't have outputs, so just make sure we check the input
 ; and the compiler doesn't crash.
 define dso_local i32 @asm_goto(i32 %n) sanitize_memory {
+; USER-CONS-LABEL: define dso_local i32 @asm_goto(
+; USER-CONS-SAME: i32 [[N:%.*]]) #[[ATTR0]] {
+; USER-CONS-NEXT:  [[ENTRY:.*:]]
+; USER-CONS-NEXT:    [[TMP0:%.*]] = load i32, ptr @__msan_param_tls, align 8
+; USER-CONS-NEXT:    call void @llvm.donothing()
+; USER-CONS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP0]], 0
+; USER-CONS-NEXT:    br i1 [[_MSCMP]], label %[[BB1:.*]], label %[[BB2:.*]], !prof [[PROF1]]
+; USER-CONS:       [[BB1]]:
+; USER-CONS-NEXT:    call void @__msan_warning_noreturn() #[[ATTR4]]
+; USER-CONS-NEXT:    unreachable
+; USER-CONS:       [[BB2]]:
+; USER-CONS-NEXT:    callbr void asm sideeffect "cmp $0, $1
+; USER-CONS-NEXT:            to label %[[CLEANUP:.*]] [label %[[SKIP_LABEL:.*]]]
+; USER-CONS:       [[SKIP_LABEL]]:
+; USER-CONS-NEXT:    br label %[[CLEANUP]]
+; USER-CONS:       [[CLEANUP]]:
+; USER-CONS-NEXT:    [[_MSPHI_S:%.*]] = phi i32 [ 0, %[[SKIP_LABEL]] ], [ 0, %[[BB2]] ]
+; USER-CONS-NEXT:    [[RETVAL_0:%.*]] = phi i32 [ 2, %[[SKIP_LABEL]] ], [ 1, %[[BB2]] ]
+; USER-CONS-NEXT:    store i32 [[_MSPHI_S]], ptr @__msan_retval_tls, align 8
+; USER-CONS-NEXT:    ret i32 [[RETVAL_0]]
+;
+; KMSAN-LABEL: define dso_local i32 @asm_goto(
+; KMSAN-SAME: i32 [[N:%.*]]) #[[ATTR0]] {
+; KMSAN-NEXT:  [[ENTRY:.*:]]
+; KMSAN-NEXT:    [[TMP0:%.*]] = call ptr @__msan_get_context_state()
+; KMSAN-NEXT:    [[PARAM_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 0
+; KMSAN-NEXT:    [[RETVAL_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 1
+; KMSAN-NEXT:    [[VA_ARG_SHADOW:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 2
+; KMSAN-NEXT:    [[VA_ARG_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 3
+; KMSAN-NEXT:    [[VA_ARG_OVERFLOW_SIZE:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 4
+; KMSAN-NEXT:    [[PARAM_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 5
+; KMSAN-NEXT:    [[RETVAL_ORIGIN:%.*]] = getelementptr { [100 x i64], [100 x i64], [100 x i64], [100 x i64], i64, [200 x i32], i32, i32 }, ptr [[TMP0]], i32 0, i32 6
+; KMSAN-NEXT:    [[_MSARG:%.*]] = getelementptr i8, ptr [[PARAM_SHADOW]], i64 0
+; KMSAN-NEXT:    [[TMP1:%.*]] = load i32, ptr [[_MSARG]], align 8
+; KMSAN-NEXT:    [[_MSARG_O:%.*]] = getelementptr i8, ptr [[PARAM_ORIGIN]], i64 0
+; KMSAN-NEXT:    [[TMP2:%.*]] = load i32, ptr [[_MSARG_O]], align 4
+; KMSAN-NEXT:    call void @llvm.donothing()
+; KMSAN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP1]], 0
+; KMSAN-NEXT:    br i1 [[_MSCMP]], label %[[BB3:.*]], label %[[BB4:.*]], !prof [[PROF1]]
+; KMSAN:       [[BB3]]:
+; KMSAN-NEXT:    call void @__msan_warning(i32 [[TMP2]]) #[[ATTR2]]
+; KMSAN-NEXT:    br label %[[BB4]]
+; KMSAN:       [[BB4]]:
+; KMSAN-NEXT:    callbr void asm sideeffect "cmp $0, $1
+; KMSAN-NEXT:            to label %[[CLEANUP:.*]] [label %[[SKIP_LABEL:.*]]]
+; KMSAN:       [[SKIP_LABEL]]:
+; KMSAN-NEXT:    br label %[[CLEANUP]]
+; KMSAN:       [[CLEANUP]]:
+; KMSAN-NEXT:    [[_MSPHI_S:%.*]] = phi i32 [ 0, %[[SKIP_LABEL]] ], [ 0, %[[BB4]] ]
+; KMSAN-NEXT:    [[_MSPHI_O:%.*]] = phi i32 [ 0, %[[SKIP_LABEL]] ], [ 0, %[[BB4]] ]
+; KMSAN-NEXT:    [[RETVAL_0:%.*]] = phi i32 [ 2, %[[SKIP_LABEL]] ], [ 1, %[[BB4]] ]
+; KMSAN-NEXT:    store i32 [[_MSPHI_S]], ptr [[RETVAL_SHADOW]], align 8
+; KMSAN-NEXT:    store i32 [[_MSPHI_O]], ptr [[RETVAL_ORIGIN]], align 4
+; KMSAN-NEXT:    ret i32 [[RETVAL_0]]
+;
 entry:
   callbr void asm sideeffect "cmp $0, $1; jnz ${2:l}", "r,r,!i,~{dirflag},~{fpsr},~{flags}"(i32 %n, i32 1)
-          to label %cleanup [label %skip_label]
+  to label %cleanup [label %skip_label]
 
 skip_label:                                       ; preds = %entry
   br label %cleanup
@@ -295,9 +875,10 @@ cleanup:                                          ; preds = %entry, %skip_label
   ret i32 %retval.0
 }
 
-; CHECK-LABEL: @asm_goto
-; KMSAN: [[LOAD_ARG:%.*]] = load {{.*}} %_msarg
-; KMSAN: [[CMP:%.*]] = icmp ne {{.*}} [[LOAD_ARG]], 0
-; KMSAN: br {{.*}} [[CMP]], label %[[LABEL:.*]], label
-; KMSAN: [[LABEL]]:
-; KMSAN-NEXT: call void @__msan_warning
+;.
+; USER-CONS: [[PROF1]] = !{!"branch_weights", i32 1, i32 1048575}
+;.
+; CHECK-CONS: [[PROF1]] = !{!"branch_weights", i32 1, i32 1048575}
+;.
+;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line:
+; CHECK: {{.*}}
diff --git a/llvm/test/Instrumentation/MemorySanitizer/msan_basic.ll b/llvm/test/Instrumentation/MemorySanitizer/msan_basic.ll
index 0ad9e4dd32adf..4ca6fec291436 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/msan_basic.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/msan_basic.ll
@@ -22,7 +22,8 @@ define void @Store(ptr nocapture %p, i32 %x) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store i32 [[TMP0]], ptr [[TMP3]], align 4
 ; CHECK-NEXT:    store i32 [[X]], ptr [[P]], align 4
 ; CHECK-NEXT:    ret void
@@ -35,16 +36,17 @@ define void @Store(ptr nocapture %p, i32 %x) nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    call void @llvm.donothing()
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP7:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    store i32 [[TMP0]], ptr [[TMP4]], align 4
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP0]], 0
-; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB7:.*]], label %[[BB8:.*]], !prof [[PROF1:![0-9]+]]
-; ORIGIN:       [[BB7]]:
-; ORIGIN-NEXT:    store i32 [[TMP1]], ptr [[TMP6]], align 4
-; ORIGIN-NEXT:    br label %[[BB8]]
+; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB8:.*]], label %[[BB9:.*]], !prof [[PROF1:![0-9]+]]
 ; ORIGIN:       [[BB8]]:
+; ORIGIN-NEXT:    store i32 [[TMP1]], ptr [[TMP6]], align 4
+; ORIGIN-NEXT:    br label %[[BB9]]
+; ORIGIN:       [[BB9]]:
 ; ORIGIN-NEXT:    store i32 [[X]], ptr [[P]], align 4
 ; ORIGIN-NEXT:    ret void
 ;
@@ -59,8 +61,9 @@ define void @Store(ptr nocapture %p, i32 %x) nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP0]], i32 zeroext [[TMP1]])
 ; CALLS-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P]] to i64
 ; CALLS-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CALLS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
-; CALLS-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 17592186044416
+; CALLS-NEXT:    [[TMP9:%.*]] = add i64 [[TMP5]], 2097152
+; CALLS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CALLS-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 17592188141568
 ; CALLS-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; CALLS-NEXT:    store i32 [[TMP2]], ptr [[TMP6]], align 4
 ; CALLS-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[TMP2]], ptr [[P]], i32 zeroext [[TMP3]])
@@ -84,7 +87,8 @@ define void @AlignedStore(ptr nocapture %p, i32 %x) nounwind uwtable sanitize_me
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store i32 [[TMP0]], ptr [[TMP3]], align 32
 ; CHECK-NEXT:    store i32 [[X]], ptr [[P]], align 32
 ; CHECK-NEXT:    ret void
@@ -97,19 +101,20 @@ define void @AlignedStore(ptr nocapture %p, i32 %x) nounwind uwtable sanitize_me
 ; ORIGIN-NEXT:    call void @llvm.donothing()
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP7:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    store i32 [[TMP0]], ptr [[TMP4]], align 32
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP0]], 0
-; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB7:.*]], label %[[BB11:.*]], !prof [[PROF1]]
-; ORIGIN:       [[BB7]]:
+; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB8:.*]], label %[[BB12:.*]], !prof [[PROF1]]
+; ORIGIN:       [[BB8]]:
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = zext i32 [[TMP1]] to i64
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = shl i64 [[TMP8]], 32
 ; ORIGIN-NEXT:    [[TMP10:%.*]] = or i64 [[TMP8]], [[TMP9]]
 ; ORIGIN-NEXT:    store i32 [[TMP1]], ptr [[TMP6]], align 32
-; ORIGIN-NEXT:    br label %[[BB11]]
-; ORIGIN:       [[BB11]]:
+; ORIGIN-NEXT:    br label %[[BB12]]
+; ORIGIN:       [[BB12]]:
 ; ORIGIN-NEXT:    store i32 [[X]], ptr [[P]], align 32
 ; ORIGIN-NEXT:    ret void
 ;
@@ -124,8 +129,9 @@ define void @AlignedStore(ptr nocapture %p, i32 %x) nounwind uwtable sanitize_me
 ; CALLS-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP0]], i32 zeroext [[TMP1]])
 ; CALLS-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P]] to i64
 ; CALLS-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CALLS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
-; CALLS-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 17592186044416
+; CALLS-NEXT:    [[TMP9:%.*]] = add i64 [[TMP5]], 2097152
+; CALLS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CALLS-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 17592188141568
 ; CALLS-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; CALLS-NEXT:    store i32 [[TMP2]], ptr [[TMP6]], align 32
 ; CALLS-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[TMP2]], ptr [[P]], i32 zeroext [[TMP3]])
@@ -146,7 +152,8 @@ define void @LoadAndCmp(ptr nocapture %a) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[TMP0:%.*]] = load i32, ptr [[A]], align 4
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i32 [[TMP0]], 0
 ; CHECK-NEXT:    [[TMP5:%.*]] = or i32 [[_MSLD]], 0
@@ -156,11 +163,11 @@ define void @LoadAndCmp(ptr nocapture %a) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[TMP9:%.*]] = icmp eq i32 [[TMP8]], 0
 ; CHECK-NEXT:    [[_MSPROP_ICMP:%.*]] = and i1 [[TMP6]], [[TMP9]]
 ; CHECK-NEXT:    [[TOBOOL:%.*]] = icmp eq i32 [[TMP0]], 0
-; CHECK-NEXT:    br i1 [[_MSPROP_ICMP]], label %[[BB10:.*]], label %[[BB11:.*]], !prof [[PROF1:![0-9]+]]
-; CHECK:       [[BB10]]:
+; CHECK-NEXT:    br i1 [[_MSPROP_ICMP]], label %[[BB11:.*]], label %[[BB12:.*]], !prof [[PROF1:![0-9]+]]
+; CHECK:       [[BB11]]:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR12:[0-9]+]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       [[BB11]]:
+; CHECK:       [[BB12]]:
 ; CHECK-NEXT:    br i1 [[TOBOOL]], label %[[IF_END:.*]], label %[[IF_THEN:.*]]
 ; CHECK:       [[IF_THEN]]:
 ; CHECK-NEXT:    store i64 0, ptr @__msan_va_arg_overflow_size_tls, align 8
@@ -176,8 +183,9 @@ define void @LoadAndCmp(ptr nocapture %a) nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP0:%.*]] = load i32, ptr [[A]], align 4
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[A]] to i64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGIN-NEXT:    [[TMP13:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP13]] to ptr
+; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 4
@@ -189,11 +197,11 @@ define void @LoadAndCmp(ptr nocapture %a) nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP12:%.*]] = icmp eq i32 [[TMP11]], 0
 ; ORIGIN-NEXT:    [[_MSPROP_ICMP:%.*]] = and i1 [[TMP9]], [[TMP12]]
 ; ORIGIN-NEXT:    [[TOBOOL:%.*]] = icmp eq i32 [[TMP0]], 0
-; ORIGIN-NEXT:    br i1 [[_MSPROP_ICMP]], label %[[BB13:.*]], label %[[BB14:.*]], !prof [[PROF1]]
-; ORIGIN:       [[BB13]]:
+; ORIGIN-NEXT:    br i1 [[_MSPROP_ICMP]], label %[[BB14:.*]], label %[[BB15:.*]], !prof [[PROF1]]
+; ORIGIN:       [[BB14]]:
 ; ORIGIN-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP6]]) #[[ATTR12:[0-9]+]]
 ; ORIGIN-NEXT:    unreachable
-; ORIGIN:       [[BB14]]:
+; ORIGIN:       [[BB15]]:
 ; ORIGIN-NEXT:    br i1 [[TOBOOL]], label %[[IF_END:.*]], label %[[IF_THEN:.*]]
 ; ORIGIN:       [[IF_THEN]]:
 ; ORIGIN-NEXT:    store i64 0, ptr @__msan_va_arg_overflow_size_tls, align 8
@@ -212,8 +220,9 @@ define void @LoadAndCmp(ptr nocapture %a) nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    [[TMP2:%.*]] = load i32, ptr [[A]], align 4
 ; CALLS-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[A]] to i64
 ; CALLS-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CALLS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592186044416
+; CALLS-NEXT:    [[TMP16:%.*]] = add i64 [[TMP4]], 2097152
+; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CALLS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592188141568
 ; CALLS-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CALLS-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP5]], align 4
 ; CALLS-NEXT:    [[TMP8:%.*]] = load i32, ptr [[TMP7]], align 4
@@ -291,7 +300,8 @@ define void @CopyRetVal(ptr nocapture %a) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[_MSRET:%.*]] = load i32, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 2097152
+; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP3]] to ptr
 ; CHECK-NEXT:    store i32 [[_MSRET]], ptr [[TMP2]], align 4
 ; CHECK-NEXT:    store i32 [[CALL]], ptr [[A]], align 4
 ; CHECK-NEXT:    ret void
@@ -306,16 +316,17 @@ define void @CopyRetVal(ptr nocapture %a) nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP0:%.*]] = load i32, ptr @__msan_retval_origin_tls, align 4
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[A]] to i64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGIN-NEXT:    [[TMP6:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGIN-NEXT:    store i32 [[_MSRET]], ptr [[TMP3]], align 4
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSRET]], 0
-; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB6:.*]], label %[[BB7:.*]], !prof [[PROF1]]
-; ORIGIN:       [[BB6]]:
-; ORIGIN-NEXT:    store i32 [[TMP0]], ptr [[TMP5]], align 4
-; ORIGIN-NEXT:    br label %[[BB7]]
+; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB7:.*]], label %[[BB8:.*]], !prof [[PROF1]]
 ; ORIGIN:       [[BB7]]:
+; ORIGIN-NEXT:    store i32 [[TMP0]], ptr [[TMP5]], align 4
+; ORIGIN-NEXT:    br label %[[BB8]]
+; ORIGIN:       [[BB8]]:
 ; ORIGIN-NEXT:    store i32 [[CALL]], ptr [[A]], align 4
 ; ORIGIN-NEXT:    ret void
 ;
@@ -332,8 +343,9 @@ define void @CopyRetVal(ptr nocapture %a) nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP0]], i32 zeroext [[TMP1]])
 ; CALLS-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[A]] to i64
 ; CALLS-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CALLS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592186044416
+; CALLS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP4]], 2097152
+; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CALLS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592188141568
 ; CALLS-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CALLS-NEXT:    store i32 [[_MSRET]], ptr [[TMP5]], align 4
 ; CALLS-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[_MSRET]], ptr [[A]], i32 zeroext [[TMP2]])
@@ -374,14 +386,16 @@ define void @FuncWithPhi(ptr nocapture %a, ptr %b, ptr nocapture %c) nounwind uw
 ; CHECK-NEXT:    [[TMP10:%.*]] = load i32, ptr [[B]], align 4
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP21:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP21]] to ptr
 ; CHECK-NEXT:    [[_MSLD1:%.*]] = load i32, ptr [[TMP13]], align 4
 ; CHECK-NEXT:    br label %[[IF_END:.*]]
 ; CHECK:       [[IF_ELSE]]:
 ; CHECK-NEXT:    [[TMP14:%.*]] = load i32, ptr [[C]], align 4
 ; CHECK-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[C]] to i64
 ; CHECK-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP16]], 2097152
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP23]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP17]], align 4
 ; CHECK-NEXT:    br label %[[IF_END]]
 ; CHECK:       [[IF_END]]:
@@ -389,7 +403,8 @@ define void @FuncWithPhi(ptr nocapture %a, ptr %b, ptr nocapture %c) nounwind uw
 ; CHECK-NEXT:    [[T_0:%.*]] = phi i32 [ [[TMP10]], %[[IF_THEN]] ], [ [[TMP14]], %[[IF_ELSE]] ]
 ; CHECK-NEXT:    [[TMP18:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP19:%.*]] = xor i64 [[TMP18]], 87960930222080
-; CHECK-NEXT:    [[TMP20:%.*]] = inttoptr i64 [[TMP19]] to ptr
+; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP19]], 2097152
+; CHECK-NEXT:    [[TMP20:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CHECK-NEXT:    store i32 [[_MSPHI_S]], ptr [[TMP20]], align 4
 ; CHECK-NEXT:    store i32 [[T_0]], ptr [[A]], align 4
 ; CHECK-NEXT:    ret void
@@ -419,8 +434,9 @@ define void @FuncWithPhi(ptr nocapture %a, ptr %b, ptr nocapture %c) nounwind uw
 ; ORIGIN-NEXT:    [[TMP11:%.*]] = load i32, ptr [[B]], align 4
 ; ORIGIN-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[B]] to i64
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; ORIGIN-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
-; ORIGIN-NEXT:    [[TMP15:%.*]] = add i64 [[TMP13]], 17592186044416
+; ORIGIN-NEXT:    [[TMP30:%.*]] = add i64 [[TMP13]], 2097152
+; ORIGIN-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP30]] to ptr
+; ORIGIN-NEXT:    [[TMP15:%.*]] = add i64 [[TMP13]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD1:%.*]] = load i32, ptr [[TMP14]], align 4
 ; ORIGIN-NEXT:    [[TMP17:%.*]] = load i32, ptr [[TMP16]], align 4
@@ -429,8 +445,9 @@ define void @FuncWithPhi(ptr nocapture %a, ptr %b, ptr nocapture %c) nounwind uw
 ; ORIGIN-NEXT:    [[TMP18:%.*]] = load i32, ptr [[C]], align 4
 ; ORIGIN-NEXT:    [[TMP19:%.*]] = ptrtoint ptr [[C]] to i64
 ; ORIGIN-NEXT:    [[TMP20:%.*]] = xor i64 [[TMP19]], 87960930222080
-; ORIGIN-NEXT:    [[TMP21:%.*]] = inttoptr i64 [[TMP20]] to ptr
-; ORIGIN-NEXT:    [[TMP22:%.*]] = add i64 [[TMP20]], 17592186044416
+; ORIGIN-NEXT:    [[TMP31:%.*]] = add i64 [[TMP20]], 2097152
+; ORIGIN-NEXT:    [[TMP21:%.*]] = inttoptr i64 [[TMP31]] to ptr
+; ORIGIN-NEXT:    [[TMP22:%.*]] = add i64 [[TMP20]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP21]], align 4
 ; ORIGIN-NEXT:    [[TMP24:%.*]] = load i32, ptr [[TMP23]], align 4
@@ -441,16 +458,17 @@ define void @FuncWithPhi(ptr nocapture %a, ptr %b, ptr nocapture %c) nounwind uw
 ; ORIGIN-NEXT:    [[T_0:%.*]] = phi i32 [ [[TMP11]], %[[IF_THEN]] ], [ [[TMP18]], %[[IF_ELSE]] ]
 ; ORIGIN-NEXT:    [[TMP25:%.*]] = ptrtoint ptr [[A]] to i64
 ; ORIGIN-NEXT:    [[TMP26:%.*]] = xor i64 [[TMP25]], 87960930222080
-; ORIGIN-NEXT:    [[TMP27:%.*]] = inttoptr i64 [[TMP26]] to ptr
-; ORIGIN-NEXT:    [[TMP28:%.*]] = add i64 [[TMP26]], 17592186044416
+; ORIGIN-NEXT:    [[TMP32:%.*]] = add i64 [[TMP26]], 2097152
+; ORIGIN-NEXT:    [[TMP27:%.*]] = inttoptr i64 [[TMP32]] to ptr
+; ORIGIN-NEXT:    [[TMP28:%.*]] = add i64 [[TMP26]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP29:%.*]] = inttoptr i64 [[TMP28]] to ptr
 ; ORIGIN-NEXT:    store i32 [[_MSPHI_S]], ptr [[TMP27]], align 4
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSPHI_S]], 0
-; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB30:.*]], label %[[BB31:.*]], !prof [[PROF1]]
-; ORIGIN:       [[BB30]]:
+; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB33:.*]], label %[[BB34:.*]], !prof [[PROF1]]
+; ORIGIN:       [[BB33]]:
 ; ORIGIN-NEXT:    store i32 [[_MSPHI_O]], ptr [[TMP29]], align 4
-; ORIGIN-NEXT:    br label %[[BB31]]
-; ORIGIN:       [[BB31]]:
+; ORIGIN-NEXT:    br label %[[BB34]]
+; ORIGIN:       [[BB34]]:
 ; ORIGIN-NEXT:    store i32 [[T_0]], ptr [[A]], align 4
 ; ORIGIN-NEXT:    ret void
 ;
@@ -481,8 +499,9 @@ define void @FuncWithPhi(ptr nocapture %a, ptr %b, ptr nocapture %c) nounwind uw
 ; CALLS-NEXT:    [[TMP14:%.*]] = load i32, ptr [[B]], align 4
 ; CALLS-NEXT:    [[TMP15:%.*]] = ptrtoint ptr [[B]] to i64
 ; CALLS-NEXT:    [[TMP16:%.*]] = xor i64 [[TMP15]], 87960930222080
-; CALLS-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
-; CALLS-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 17592186044416
+; CALLS-NEXT:    [[TMP33:%.*]] = add i64 [[TMP16]], 2097152
+; CALLS-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP33]] to ptr
+; CALLS-NEXT:    [[TMP18:%.*]] = add i64 [[TMP16]], 17592188141568
 ; CALLS-NEXT:    [[TMP19:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CALLS-NEXT:    [[_MSLD1:%.*]] = load i32, ptr [[TMP17]], align 4
 ; CALLS-NEXT:    [[TMP20:%.*]] = load i32, ptr [[TMP19]], align 4
@@ -492,8 +511,9 @@ define void @FuncWithPhi(ptr nocapture %a, ptr %b, ptr nocapture %c) nounwind uw
 ; CALLS-NEXT:    [[TMP21:%.*]] = load i32, ptr [[C]], align 4
 ; CALLS-NEXT:    [[TMP22:%.*]] = ptrtoint ptr [[C]] to i64
 ; CALLS-NEXT:    [[TMP23:%.*]] = xor i64 [[TMP22]], 87960930222080
-; CALLS-NEXT:    [[TMP24:%.*]] = inttoptr i64 [[TMP23]] to ptr
-; CALLS-NEXT:    [[TMP25:%.*]] = add i64 [[TMP23]], 17592186044416
+; CALLS-NEXT:    [[TMP34:%.*]] = add i64 [[TMP23]], 2097152
+; CALLS-NEXT:    [[TMP24:%.*]] = inttoptr i64 [[TMP34]] to ptr
+; CALLS-NEXT:    [[TMP25:%.*]] = add i64 [[TMP23]], 17592188141568
 ; CALLS-NEXT:    [[TMP26:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CALLS-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP24]], align 4
 ; CALLS-NEXT:    [[TMP27:%.*]] = load i32, ptr [[TMP26]], align 4
@@ -505,8 +525,9 @@ define void @FuncWithPhi(ptr nocapture %a, ptr %b, ptr nocapture %c) nounwind uw
 ; CALLS-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP4]], i32 zeroext [[TMP5]])
 ; CALLS-NEXT:    [[TMP28:%.*]] = ptrtoint ptr [[A]] to i64
 ; CALLS-NEXT:    [[TMP29:%.*]] = xor i64 [[TMP28]], 87960930222080
-; CALLS-NEXT:    [[TMP30:%.*]] = inttoptr i64 [[TMP29]] to ptr
-; CALLS-NEXT:    [[TMP31:%.*]] = add i64 [[TMP29]], 17592186044416
+; CALLS-NEXT:    [[TMP35:%.*]] = add i64 [[TMP29]], 2097152
+; CALLS-NEXT:    [[TMP30:%.*]] = inttoptr i64 [[TMP35]] to ptr
+; CALLS-NEXT:    [[TMP31:%.*]] = add i64 [[TMP29]], 17592188141568
 ; CALLS-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP31]] to ptr
 ; CALLS-NEXT:    store i32 [[_MSPHI_S]], ptr [[TMP30]], align 4
 ; CALLS-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[_MSPHI_S]], ptr [[A]], i32 zeroext [[_MSPHI_O]])
@@ -540,14 +561,16 @@ define void @ShlConst(ptr nocapture %x) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[TMP0:%.*]] = load i32, ptr [[X]], align 4
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[X]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
 ; CHECK-NEXT:    [[TMP4:%.*]] = shl i32 [[_MSLD]], 10
 ; CHECK-NEXT:    [[TMP5:%.*]] = or i32 [[TMP4]], 0
 ; CHECK-NEXT:    [[TMP6:%.*]] = shl i32 [[TMP0]], 10
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP10]] to ptr
 ; CHECK-NEXT:    store i32 [[TMP5]], ptr [[TMP9]], align 4
 ; CHECK-NEXT:    store i32 [[TMP6]], ptr [[X]], align 4
 ; CHECK-NEXT:    ret void
@@ -559,8 +582,9 @@ define void @ShlConst(ptr nocapture %x) nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP0:%.*]] = load i32, ptr [[X]], align 4
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[X]] to i64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGIN-NEXT:    [[TMP15:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 4
@@ -569,16 +593,17 @@ define void @ShlConst(ptr nocapture %x) nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = shl i32 [[TMP0]], 10
 ; ORIGIN-NEXT:    [[TMP10:%.*]] = ptrtoint ptr [[X]] to i64
 ; ORIGIN-NEXT:    [[TMP11:%.*]] = xor i64 [[TMP10]], 87960930222080
-; ORIGIN-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
-; ORIGIN-NEXT:    [[TMP13:%.*]] = add i64 [[TMP11]], 17592186044416
+; ORIGIN-NEXT:    [[TMP16:%.*]] = add i64 [[TMP11]], 2097152
+; ORIGIN-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; ORIGIN-NEXT:    [[TMP13:%.*]] = add i64 [[TMP11]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; ORIGIN-NEXT:    store i32 [[TMP8]], ptr [[TMP12]], align 4
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP8]], 0
-; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB15:.*]], label %[[BB16:.*]], !prof [[PROF1]]
-; ORIGIN:       [[BB15]]:
+; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB17:.*]], label %[[BB18:.*]], !prof [[PROF1]]
+; ORIGIN:       [[BB17]]:
 ; ORIGIN-NEXT:    store i32 [[TMP6]], ptr [[TMP14]], align 4
-; ORIGIN-NEXT:    br label %[[BB16]]
-; ORIGIN:       [[BB16]]:
+; ORIGIN-NEXT:    br label %[[BB18]]
+; ORIGIN:       [[BB18]]:
 ; ORIGIN-NEXT:    store i32 [[TMP9]], ptr [[X]], align 4
 ; ORIGIN-NEXT:    ret void
 ;
@@ -592,8 +617,9 @@ define void @ShlConst(ptr nocapture %x) nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    [[TMP2:%.*]] = load i32, ptr [[X]], align 4
 ; CALLS-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[X]] to i64
 ; CALLS-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CALLS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592186044416
+; CALLS-NEXT:    [[TMP17:%.*]] = add i64 [[TMP4]], 2097152
+; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP17]] to ptr
+; CALLS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592188141568
 ; CALLS-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CALLS-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP5]], align 4
 ; CALLS-NEXT:    [[TMP8:%.*]] = load i32, ptr [[TMP7]], align 4
@@ -603,8 +629,9 @@ define void @ShlConst(ptr nocapture %x) nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP0]], i32 zeroext [[TMP1]])
 ; CALLS-NEXT:    [[TMP12:%.*]] = ptrtoint ptr [[X]] to i64
 ; CALLS-NEXT:    [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080
-; CALLS-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr
-; CALLS-NEXT:    [[TMP15:%.*]] = add i64 [[TMP13]], 17592186044416
+; CALLS-NEXT:    [[TMP18:%.*]] = add i64 [[TMP13]], 2097152
+; CALLS-NEXT:    [[TMP14:%.*]] = inttoptr i64 [[TMP18]] to ptr
+; CALLS-NEXT:    [[TMP15:%.*]] = add i64 [[TMP13]], 17592188141568
 ; CALLS-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CALLS-NEXT:    store i32 [[TMP10]], ptr [[TMP14]], align 4
 ; CALLS-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[TMP10]], ptr [[X]], i32 zeroext [[TMP8]])
@@ -628,7 +655,8 @@ define void @ShlNonConst(ptr nocapture %x) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[TMP0:%.*]] = load i32, ptr [[X]], align 4
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[X]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
 ; CHECK-NEXT:    [[TMP4:%.*]] = icmp ne i32 [[_MSLD]], 0
 ; CHECK-NEXT:    [[TMP5:%.*]] = sext i1 [[TMP4]] to i32
@@ -637,7 +665,8 @@ define void @ShlNonConst(ptr nocapture %x) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[TMP8:%.*]] = shl i32 10, [[TMP0]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[X]] to i64
 ; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP10]], 2097152
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CHECK-NEXT:    store i32 [[TMP7]], ptr [[TMP11]], align 4
 ; CHECK-NEXT:    store i32 [[TMP8]], ptr [[X]], align 4
 ; CHECK-NEXT:    ret void
@@ -649,8 +678,9 @@ define void @ShlNonConst(ptr nocapture %x) nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP0:%.*]] = load i32, ptr [[X]], align 4
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[X]] to i64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGIN-NEXT:    [[TMP19:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP19]] to ptr
+; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 4
@@ -663,16 +693,17 @@ define void @ShlNonConst(ptr nocapture %x) nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = shl i32 10, [[TMP0]]
 ; ORIGIN-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[X]] to i64
 ; ORIGIN-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; ORIGIN-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
-; ORIGIN-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 17592186044416
+; ORIGIN-NEXT:    [[TMP20:%.*]] = add i64 [[TMP15]], 2097152
+; ORIGIN-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP20]] to ptr
+; ORIGIN-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; ORIGIN-NEXT:    store i32 [[TMP10]], ptr [[TMP16]], align 4
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP10]], 0
-; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB19:.*]], label %[[BB20:.*]], !prof [[PROF1]]
-; ORIGIN:       [[BB19]]:
+; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB21:.*]], label %[[BB22:.*]], !prof [[PROF1]]
+; ORIGIN:       [[BB21]]:
 ; ORIGIN-NEXT:    store i32 [[TMP12]], ptr [[TMP18]], align 4
-; ORIGIN-NEXT:    br label %[[BB20]]
-; ORIGIN:       [[BB20]]:
+; ORIGIN-NEXT:    br label %[[BB22]]
+; ORIGIN:       [[BB22]]:
 ; ORIGIN-NEXT:    store i32 [[TMP13]], ptr [[X]], align 4
 ; ORIGIN-NEXT:    ret void
 ;
@@ -686,8 +717,9 @@ define void @ShlNonConst(ptr nocapture %x) nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    [[TMP2:%.*]] = load i32, ptr [[X]], align 4
 ; CALLS-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[X]] to i64
 ; CALLS-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CALLS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592186044416
+; CALLS-NEXT:    [[TMP21:%.*]] = add i64 [[TMP4]], 2097152
+; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP21]] to ptr
+; CALLS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592188141568
 ; CALLS-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CALLS-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP5]], align 4
 ; CALLS-NEXT:    [[TMP8:%.*]] = load i32, ptr [[TMP7]], align 4
@@ -701,8 +733,9 @@ define void @ShlNonConst(ptr nocapture %x) nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP0]], i32 zeroext [[TMP1]])
 ; CALLS-NEXT:    [[TMP16:%.*]] = ptrtoint ptr [[X]] to i64
 ; CALLS-NEXT:    [[TMP17:%.*]] = xor i64 [[TMP16]], 87960930222080
-; CALLS-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP17]] to ptr
-; CALLS-NEXT:    [[TMP19:%.*]] = add i64 [[TMP17]], 17592186044416
+; CALLS-NEXT:    [[TMP22:%.*]] = add i64 [[TMP17]], 2097152
+; CALLS-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CALLS-NEXT:    [[TMP19:%.*]] = add i64 [[TMP17]], 17592188141568
 ; CALLS-NEXT:    [[TMP20:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CALLS-NEXT:    store i32 [[TMP12]], ptr [[TMP18]], align 4
 ; CALLS-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[TMP12]], ptr [[X]], i32 zeroext [[TMP14]])
@@ -726,13 +759,15 @@ define void @SExt(ptr nocapture %a, ptr nocapture %b) nounwind uwtable sanitize_
 ; CHECK-NEXT:    [[TMP0:%.*]] = load i16, ptr [[B]], align 2
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i16, ptr [[TMP3]], align 2
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = sext i16 [[_MSLD]] to i32
 ; CHECK-NEXT:    [[TMP4:%.*]] = sext i16 [[TMP0]] to i32
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    store i32 [[_MSPROP]], ptr [[TMP7]], align 4
 ; CHECK-NEXT:    store i32 [[TMP4]], ptr [[A]], align 4
 ; CHECK-NEXT:    ret void
@@ -744,8 +779,9 @@ define void @SExt(ptr nocapture %a, ptr nocapture %b) nounwind uwtable sanitize_
 ; ORIGIN-NEXT:    [[TMP0:%.*]] = load i16, ptr [[B]], align 2
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[B]] to i64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGIN-NEXT:    [[TMP14:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP14]] to ptr
+; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load i16, ptr [[TMP3]], align 2
@@ -754,16 +790,17 @@ define void @SExt(ptr nocapture %a, ptr nocapture %b) nounwind uwtable sanitize_
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = sext i16 [[TMP0]] to i32
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[A]] to i64
 ; ORIGIN-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; ORIGIN-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
-; ORIGIN-NEXT:    [[TMP12:%.*]] = add i64 [[TMP10]], 17592186044416
+; ORIGIN-NEXT:    [[TMP15:%.*]] = add i64 [[TMP10]], 2097152
+; ORIGIN-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; ORIGIN-NEXT:    [[TMP12:%.*]] = add i64 [[TMP10]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; ORIGIN-NEXT:    store i32 [[_MSPROP]], ptr [[TMP11]], align 4
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSPROP]], 0
-; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB14:.*]], label %[[BB15:.*]], !prof [[PROF1]]
-; ORIGIN:       [[BB14]]:
+; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB16:.*]], label %[[BB17:.*]], !prof [[PROF1]]
+; ORIGIN:       [[BB16]]:
 ; ORIGIN-NEXT:    store i32 [[TMP7]], ptr [[TMP13]], align 4
-; ORIGIN-NEXT:    br label %[[BB15]]
-; ORIGIN:       [[BB15]]:
+; ORIGIN-NEXT:    br label %[[BB17]]
+; ORIGIN:       [[BB17]]:
 ; ORIGIN-NEXT:    store i32 [[TMP8]], ptr [[A]], align 4
 ; ORIGIN-NEXT:    ret void
 ;
@@ -779,8 +816,9 @@ define void @SExt(ptr nocapture %a, ptr nocapture %b) nounwind uwtable sanitize_
 ; CALLS-NEXT:    [[TMP4:%.*]] = load i16, ptr [[B]], align 2
 ; CALLS-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[B]] to i64
 ; CALLS-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CALLS-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
-; CALLS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 17592186044416
+; CALLS-NEXT:    [[TMP18:%.*]] = add i64 [[TMP6]], 2097152
+; CALLS-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP18]] to ptr
+; CALLS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 17592188141568
 ; CALLS-NEXT:    [[TMP9:%.*]] = and i64 [[TMP8]], -4
 ; CALLS-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CALLS-NEXT:    [[_MSLD:%.*]] = load i16, ptr [[TMP7]], align 2
@@ -790,8 +828,9 @@ define void @SExt(ptr nocapture %a, ptr nocapture %b) nounwind uwtable sanitize_
 ; CALLS-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP2]], i32 zeroext [[TMP3]])
 ; CALLS-NEXT:    [[TMP13:%.*]] = ptrtoint ptr [[A]] to i64
 ; CALLS-NEXT:    [[TMP14:%.*]] = xor i64 [[TMP13]], 87960930222080
-; CALLS-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP14]] to ptr
-; CALLS-NEXT:    [[TMP16:%.*]] = add i64 [[TMP14]], 17592186044416
+; CALLS-NEXT:    [[TMP19:%.*]] = add i64 [[TMP14]], 2097152
+; CALLS-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP19]] to ptr
+; CALLS-NEXT:    [[TMP16:%.*]] = add i64 [[TMP14]], 17592188141568
 ; CALLS-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr
 ; CALLS-NEXT:    store i32 [[_MSPROP]], ptr [[TMP15]], align 4
 ; CALLS-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[_MSPROP]], ptr [[A]], i32 zeroext [[TMP11]])
@@ -2273,12 +2312,14 @@ define i32 @ShadowLoadAlignmentLarge() nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[Y:%.*]] = alloca i32, align 64
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[Y]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 64 [[TMP3]], i8 -1, i64 4, i1 false)
 ; CHECK-NEXT:    [[TMP4:%.*]] = load volatile i32, ptr [[Y]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[Y]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 64
 ; CHECK-NEXT:    store i32 [[_MSLD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret i32 [[TMP4]]
@@ -2289,8 +2330,9 @@ define i32 @ShadowLoadAlignmentLarge() nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[Y:%.*]] = alloca i32, align 64
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[Y]] to i64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGIN-NEXT:    [[TMP14:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP14]] to ptr
+; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 64 [[TMP3]], i8 -1, i64 4, i1 false)
@@ -2298,8 +2340,9 @@ define i32 @ShadowLoadAlignmentLarge() nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = load volatile i32, ptr [[Y]], align 64
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[Y]] to i64
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
-; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592186044416
+; ORIGIN-NEXT:    [[TMP15:%.*]] = add i64 [[TMP9]], 2097152
+; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP10]], align 64
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = load i32, ptr [[TMP12]], align 64
@@ -2313,8 +2356,9 @@ define i32 @ShadowLoadAlignmentLarge() nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    [[Y:%.*]] = alloca i32, align 64
 ; CALLS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[Y]] to i64
 ; CALLS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CALLS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CALLS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CALLS-NEXT:    [[TMP14:%.*]] = add i64 [[TMP2]], 2097152
+; CALLS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP14]] to ptr
+; CALLS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CALLS-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; CALLS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CALLS-NEXT:    call void @llvm.memset.p0.i64(ptr align 64 [[TMP3]], i8 -1, i64 4, i1 false)
@@ -2322,8 +2366,9 @@ define i32 @ShadowLoadAlignmentLarge() nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    [[TMP7:%.*]] = load volatile i32, ptr [[Y]], align 64
 ; CALLS-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[Y]] to i64
 ; CALLS-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CALLS-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
-; CALLS-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592186044416
+; CALLS-NEXT:    [[TMP15:%.*]] = add i64 [[TMP9]], 2097152
+; CALLS-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CALLS-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592188141568
 ; CALLS-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CALLS-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP10]], align 64
 ; CALLS-NEXT:    [[TMP13:%.*]] = load i32, ptr [[TMP12]], align 64
@@ -2344,12 +2389,14 @@ define i32 @ShadowLoadAlignmentSmall() nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[Y:%.*]] = alloca i32, align 2
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[Y]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 2 [[TMP3]], i8 -1, i64 4, i1 false)
 ; CHECK-NEXT:    [[TMP4:%.*]] = load volatile i32, ptr [[Y]], align 2
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[Y]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP7]], align 2
 ; CHECK-NEXT:    store i32 [[_MSLD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret i32 [[TMP4]]
@@ -2360,8 +2407,9 @@ define i32 @ShadowLoadAlignmentSmall() nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[Y:%.*]] = alloca i32, align 2
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[Y]] to i64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGIN-NEXT:    [[TMP15:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 2 [[TMP3]], i8 -1, i64 4, i1 false)
@@ -2369,8 +2417,9 @@ define i32 @ShadowLoadAlignmentSmall() nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = load volatile i32, ptr [[Y]], align 2
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[Y]] to i64
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
-; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592186044416
+; ORIGIN-NEXT:    [[TMP16:%.*]] = add i64 [[TMP9]], 2097152
+; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP12:%.*]] = and i64 [[TMP11]], -4
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP10]], align 2
@@ -2385,8 +2434,9 @@ define i32 @ShadowLoadAlignmentSmall() nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    [[Y:%.*]] = alloca i32, align 2
 ; CALLS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[Y]] to i64
 ; CALLS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CALLS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CALLS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CALLS-NEXT:    [[TMP15:%.*]] = add i64 [[TMP2]], 2097152
+; CALLS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CALLS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CALLS-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; CALLS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CALLS-NEXT:    call void @llvm.memset.p0.i64(ptr align 2 [[TMP3]], i8 -1, i64 4, i1 false)
@@ -2394,8 +2444,9 @@ define i32 @ShadowLoadAlignmentSmall() nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    [[TMP7:%.*]] = load volatile i32, ptr [[Y]], align 2
 ; CALLS-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[Y]] to i64
 ; CALLS-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CALLS-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
-; CALLS-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592186044416
+; CALLS-NEXT:    [[TMP16:%.*]] = add i64 [[TMP9]], 2097152
+; CALLS-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP16]] to ptr
+; CALLS-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592188141568
 ; CALLS-NEXT:    [[TMP12:%.*]] = and i64 [[TMP11]], -4
 ; CALLS-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; CALLS-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP10]], align 2
@@ -2634,7 +2685,8 @@ define <8 x ptr> @VectorOfPointers(ptr %p) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[X:%.*]] = load <8 x ptr>, ptr [[P]], align 64
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP3]], align 64
 ; CHECK-NEXT:    store <8 x i64> [[_MSLD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <8 x ptr> [[X]]
@@ -2645,8 +2697,9 @@ define <8 x ptr> @VectorOfPointers(ptr %p) nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    [[X:%.*]] = load <8 x ptr>, ptr [[P]], align 64
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGIN-NEXT:    [[TMP7:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP3]], align 64
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 64
@@ -2663,8 +2716,9 @@ define <8 x ptr> @VectorOfPointers(ptr %p) nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    [[X:%.*]] = load <8 x ptr>, ptr [[P]], align 64
 ; CALLS-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[P]] to i64
 ; CALLS-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CALLS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592186044416
+; CALLS-NEXT:    [[TMP9:%.*]] = add i64 [[TMP4]], 2097152
+; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CALLS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592188141568
 ; CALLS-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CALLS-NEXT:    [[_MSLD:%.*]] = load <8 x i64>, ptr [[TMP5]], align 64
 ; CALLS-NEXT:    [[TMP8:%.*]] = load i32, ptr [[TMP7]], align 64
@@ -2687,7 +2741,8 @@ define void @VACopy(ptr %p1, ptr %p2) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P1]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP3]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_copy.p0(ptr [[P1]], ptr [[P2]]) #[[ATTR5]]
 ; CHECK-NEXT:    ret void
@@ -2697,8 +2752,9 @@ define void @VACopy(ptr %p1, ptr %p2) nounwind uwtable sanitize_memory {
 ; ORIGIN-NEXT:    call void @llvm.donothing()
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P1]] to i64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGIN-NEXT:    [[TMP6:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP3]], i8 0, i64 24, i1 false)
 ; ORIGIN-NEXT:    call void @llvm.va_copy.p0(ptr [[P1]], ptr [[P2]]) #[[ATTR5]]
@@ -2709,8 +2765,9 @@ define void @VACopy(ptr %p1, ptr %p2) nounwind uwtable sanitize_memory {
 ; CALLS-NEXT:    call void @llvm.donothing()
 ; CALLS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P1]] to i64
 ; CALLS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CALLS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CALLS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CALLS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP2]], 2097152
+; CALLS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CALLS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CALLS-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP3]], i8 0, i64 24, i1 false)
 ; CALLS-NEXT:    call void @llvm.va_copy.p0(ptr [[P1]], ptr [[P2]]) #[[ATTR5]]
@@ -2744,34 +2801,40 @@ define void @VAStart(i32 %x, ...) sanitize_memory {
 ; CHECK-NEXT:    [[X_ADDR:%.*]] = alloca i32, align 4
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[X_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP7]], i8 -1, i64 4, i1 false)
-; CHECK-NEXT:    [[VA:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT:    [[VA:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CHECK-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[VA]] to i64
 ; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP9]], 2097152
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP10]], i8 -1, i64 24, i1 false)
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[X_ADDR]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP24:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CHECK-NEXT:    store i32 [[TMP4]], ptr [[TMP13]], align 4
 ; CHECK-NEXT:    store i32 [[X]], ptr [[X_ADDR]], align 4
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[VA]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP32:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP32]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP16]], i8 0, i64 24, i1 false)
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr [[VA]])
 ; CHECK-NEXT:    [[TMP19:%.*]] = getelementptr i8, ptr [[VA]], i64 16
 ; CHECK-NEXT:    [[TMP20:%.*]] = load ptr, ptr [[TMP19]], align 8
 ; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[TMP20]] to i64
 ; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 2097152
+; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP25]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP23]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CHECK-NEXT:    [[TMP26:%.*]] = getelementptr i8, ptr [[VA]], i64 8
 ; CHECK-NEXT:    [[TMP27:%.*]] = load ptr, ptr [[TMP26]], align 8
 ; CHECK-NEXT:    [[TMP28:%.*]] = ptrtoint ptr [[TMP27]] to i64
 ; CHECK-NEXT:    [[TMP29:%.*]] = xor i64 [[TMP28]], 87960930222080
-; CHECK-NEXT:    [[TMP30:%.*]] = inttoptr i64 [[TMP29]] to ptr
+; CHECK-NEXT:    [[TMP33:%.*]] = add i64 [[TMP29]], 2097152
+; CHECK-NEXT:    [[TMP30:%.*]] = inttoptr i64 [[TMP33]] to ptr
 ; CHECK-NEXT:    [[TMP31:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP30]], ptr align 16 [[TMP31]], i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    ret void
@@ -2793,38 +2856,42 @@ define void @VAStart(i32 %x, ...) sanitize_memory {
 ; ORIGIN-NEXT:    [[X_ADDR:%.*]] = alloca i32, align 4
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X_ADDR]] to i64
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; ORIGIN-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
-; ORIGIN-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 17592186044416
+; ORIGIN-NEXT:    [[TMP24:%.*]] = add i64 [[TMP8]], 2097152
+; ORIGIN-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP24]] to ptr
+; ORIGIN-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP11:%.*]] = and i64 [[TMP10]], -4
 ; ORIGIN-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP9]], i8 -1, i64 4, i1 false)
 ; ORIGIN-NEXT:    call void @__msan_set_alloca_origin_with_descr(ptr [[X_ADDR]], i64 4, ptr @[[GLOB4:[0-9]+]], ptr @[[GLOB5:[0-9]+]])
-; ORIGIN-NEXT:    [[VA:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; ORIGIN-NEXT:    [[VA:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = ptrtoint ptr [[VA]] to i64
 ; ORIGIN-NEXT:    [[TMP14:%.*]] = xor i64 [[TMP13]], 87960930222080
-; ORIGIN-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP14]] to ptr
-; ORIGIN-NEXT:    [[TMP16:%.*]] = add i64 [[TMP14]], 17592186044416
+; ORIGIN-NEXT:    [[TMP25:%.*]] = add i64 [[TMP14]], 2097152
+; ORIGIN-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP25]] to ptr
+; ORIGIN-NEXT:    [[TMP16:%.*]] = add i64 [[TMP14]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP17:%.*]] = and i64 [[TMP16]], -4
 ; ORIGIN-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP15]], i8 -1, i64 24, i1 false)
 ; ORIGIN-NEXT:    call void @__msan_set_alloca_origin_with_descr(ptr [[VA]], i64 24, ptr @[[GLOB6:[0-9]+]], ptr @[[GLOB7:[0-9]+]])
 ; ORIGIN-NEXT:    [[TMP19:%.*]] = ptrtoint ptr [[X_ADDR]] to i64
 ; ORIGIN-NEXT:    [[TMP20:%.*]] = xor i64 [[TMP19]], 87960930222080
-; ORIGIN-NEXT:    [[TMP21:%.*]] = inttoptr i64 [[TMP20]] to ptr
-; ORIGIN-NEXT:    [[TMP22:%.*]] = add i64 [[TMP20]], 17592186044416
+; ORIGIN-NEXT:    [[TMP32:%.*]] = add i64 [[TMP20]], 2097152
+; ORIGIN-NEXT:    [[TMP21:%.*]] = inttoptr i64 [[TMP32]] to ptr
+; ORIGIN-NEXT:    [[TMP22:%.*]] = add i64 [[TMP20]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; ORIGIN-NEXT:    store i32 [[TMP5]], ptr [[TMP21]], align 4
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP5]], 0
-; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB24:.*]], label %[[BB25:.*]], !prof [[PROF1]]
-; ORIGIN:       [[BB24]]:
+; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB27:.*]], label %[[BB28:.*]], !prof [[PROF1]]
+; ORIGIN:       [[BB27]]:
 ; ORIGIN-NEXT:    store i32 [[TMP6]], ptr [[TMP23]], align 4
-; ORIGIN-NEXT:    br label %[[BB25]]
-; ORIGIN:       [[BB25]]:
+; ORIGIN-NEXT:    br label %[[BB28]]
+; ORIGIN:       [[BB28]]:
 ; ORIGIN-NEXT:    store i32 [[X]], ptr [[X_ADDR]], align 4
 ; ORIGIN-NEXT:    [[TMP26:%.*]] = ptrtoint ptr [[VA]] to i64
 ; ORIGIN-NEXT:    [[TMP27:%.*]] = xor i64 [[TMP26]], 87960930222080
-; ORIGIN-NEXT:    [[TMP28:%.*]] = inttoptr i64 [[TMP27]] to ptr
-; ORIGIN-NEXT:    [[TMP29:%.*]] = add i64 [[TMP27]], 17592186044416
+; ORIGIN-NEXT:    [[TMP31:%.*]] = add i64 [[TMP27]], 2097152
+; ORIGIN-NEXT:    [[TMP28:%.*]] = inttoptr i64 [[TMP31]] to ptr
+; ORIGIN-NEXT:    [[TMP29:%.*]] = add i64 [[TMP27]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP30:%.*]] = inttoptr i64 [[TMP29]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP28]], i8 0, i64 24, i1 false)
 ; ORIGIN-NEXT:    call void @llvm.va_start.p0(ptr [[VA]])
@@ -2832,8 +2899,9 @@ define void @VAStart(i32 %x, ...) sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP34:%.*]] = load ptr, ptr [[TMP33]], align 8
 ; ORIGIN-NEXT:    [[TMP35:%.*]] = ptrtoint ptr [[TMP34]] to i64
 ; ORIGIN-NEXT:    [[TMP36:%.*]] = xor i64 [[TMP35]], 87960930222080
-; ORIGIN-NEXT:    [[TMP37:%.*]] = inttoptr i64 [[TMP36]] to ptr
-; ORIGIN-NEXT:    [[TMP38:%.*]] = add i64 [[TMP36]], 17592186044416
+; ORIGIN-NEXT:    [[TMP40:%.*]] = add i64 [[TMP36]], 2097152
+; ORIGIN-NEXT:    [[TMP37:%.*]] = inttoptr i64 [[TMP40]] to ptr
+; ORIGIN-NEXT:    [[TMP38:%.*]] = add i64 [[TMP36]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP39:%.*]] = inttoptr i64 [[TMP38]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP37]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; ORIGIN-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP39]], ptr align 16 [[TMP4]], i64 176, i1 false)
@@ -2841,8 +2909,9 @@ define void @VAStart(i32 %x, ...) sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP43:%.*]] = load ptr, ptr [[TMP42]], align 8
 ; ORIGIN-NEXT:    [[TMP44:%.*]] = ptrtoint ptr [[TMP43]] to i64
 ; ORIGIN-NEXT:    [[TMP45:%.*]] = xor i64 [[TMP44]], 87960930222080
-; ORIGIN-NEXT:    [[TMP46:%.*]] = inttoptr i64 [[TMP45]] to ptr
-; ORIGIN-NEXT:    [[TMP47:%.*]] = add i64 [[TMP45]], 17592186044416
+; ORIGIN-NEXT:    [[TMP51:%.*]] = add i64 [[TMP45]], 2097152
+; ORIGIN-NEXT:    [[TMP46:%.*]] = inttoptr i64 [[TMP51]] to ptr
+; ORIGIN-NEXT:    [[TMP47:%.*]] = add i64 [[TMP45]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP48:%.*]] = inttoptr i64 [[TMP47]] to ptr
 ; ORIGIN-NEXT:    [[TMP49:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; ORIGIN-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP46]], ptr align 16 [[TMP49]], i64 [[TMP0]], i1 false)
@@ -2867,33 +2936,37 @@ define void @VAStart(i32 %x, ...) sanitize_memory {
 ; CALLS-NEXT:    [[X_ADDR:%.*]] = alloca i32, align 4
 ; CALLS-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X_ADDR]] to i64
 ; CALLS-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CALLS-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
-; CALLS-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 17592186044416
+; CALLS-NEXT:    [[TMP30:%.*]] = add i64 [[TMP8]], 2097152
+; CALLS-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP30]] to ptr
+; CALLS-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 17592188141568
 ; CALLS-NEXT:    [[TMP11:%.*]] = and i64 [[TMP10]], -4
 ; CALLS-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CALLS-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP9]], i8 -1, i64 4, i1 false)
 ; CALLS-NEXT:    call void @__msan_set_alloca_origin_with_descr(ptr [[X_ADDR]], i64 4, ptr @[[GLOB4:[0-9]+]], ptr @[[GLOB5:[0-9]+]])
-; CALLS-NEXT:    [[VA:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CALLS-NEXT:    [[VA:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
 ; CALLS-NEXT:    [[TMP13:%.*]] = ptrtoint ptr [[VA]] to i64
 ; CALLS-NEXT:    [[TMP14:%.*]] = xor i64 [[TMP13]], 87960930222080
-; CALLS-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP14]] to ptr
-; CALLS-NEXT:    [[TMP16:%.*]] = add i64 [[TMP14]], 17592186044416
+; CALLS-NEXT:    [[TMP38:%.*]] = add i64 [[TMP14]], 2097152
+; CALLS-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP38]] to ptr
+; CALLS-NEXT:    [[TMP16:%.*]] = add i64 [[TMP14]], 17592188141568
 ; CALLS-NEXT:    [[TMP17:%.*]] = and i64 [[TMP16]], -4
 ; CALLS-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CALLS-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP15]], i8 -1, i64 24, i1 false)
 ; CALLS-NEXT:    call void @__msan_set_alloca_origin_with_descr(ptr [[VA]], i64 24, ptr @[[GLOB6:[0-9]+]], ptr @[[GLOB7:[0-9]+]])
 ; CALLS-NEXT:    [[TMP19:%.*]] = ptrtoint ptr [[X_ADDR]] to i64
 ; CALLS-NEXT:    [[TMP20:%.*]] = xor i64 [[TMP19]], 87960930222080
-; CALLS-NEXT:    [[TMP21:%.*]] = inttoptr i64 [[TMP20]] to ptr
-; CALLS-NEXT:    [[TMP22:%.*]] = add i64 [[TMP20]], 17592186044416
+; CALLS-NEXT:    [[TMP39:%.*]] = add i64 [[TMP20]], 2097152
+; CALLS-NEXT:    [[TMP21:%.*]] = inttoptr i64 [[TMP39]] to ptr
+; CALLS-NEXT:    [[TMP22:%.*]] = add i64 [[TMP20]], 17592188141568
 ; CALLS-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; CALLS-NEXT:    store i32 [[TMP5]], ptr [[TMP21]], align 4
 ; CALLS-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[TMP5]], ptr [[X_ADDR]], i32 zeroext [[TMP6]])
 ; CALLS-NEXT:    store i32 [[X]], ptr [[X_ADDR]], align 4
 ; CALLS-NEXT:    [[TMP24:%.*]] = ptrtoint ptr [[VA]] to i64
 ; CALLS-NEXT:    [[TMP25:%.*]] = xor i64 [[TMP24]], 87960930222080
-; CALLS-NEXT:    [[TMP26:%.*]] = inttoptr i64 [[TMP25]] to ptr
-; CALLS-NEXT:    [[TMP27:%.*]] = add i64 [[TMP25]], 17592186044416
+; CALLS-NEXT:    [[TMP29:%.*]] = add i64 [[TMP25]], 2097152
+; CALLS-NEXT:    [[TMP26:%.*]] = inttoptr i64 [[TMP29]] to ptr
+; CALLS-NEXT:    [[TMP27:%.*]] = add i64 [[TMP25]], 17592188141568
 ; CALLS-NEXT:    [[TMP28:%.*]] = inttoptr i64 [[TMP27]] to ptr
 ; CALLS-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP26]], i8 0, i64 24, i1 false)
 ; CALLS-NEXT:    call void @llvm.va_start.p0(ptr [[VA]])
@@ -2901,8 +2974,9 @@ define void @VAStart(i32 %x, ...) sanitize_memory {
 ; CALLS-NEXT:    [[TMP32:%.*]] = load ptr, ptr [[TMP31]], align 8
 ; CALLS-NEXT:    [[TMP33:%.*]] = ptrtoint ptr [[TMP32]] to i64
 ; CALLS-NEXT:    [[TMP34:%.*]] = xor i64 [[TMP33]], 87960930222080
-; CALLS-NEXT:    [[TMP35:%.*]] = inttoptr i64 [[TMP34]] to ptr
-; CALLS-NEXT:    [[TMP36:%.*]] = add i64 [[TMP34]], 17592186044416
+; CALLS-NEXT:    [[TMP49:%.*]] = add i64 [[TMP34]], 2097152
+; CALLS-NEXT:    [[TMP35:%.*]] = inttoptr i64 [[TMP49]] to ptr
+; CALLS-NEXT:    [[TMP36:%.*]] = add i64 [[TMP34]], 17592188141568
 ; CALLS-NEXT:    [[TMP37:%.*]] = inttoptr i64 [[TMP36]] to ptr
 ; CALLS-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP35]], ptr align 16 [[TMP2]], i64 176, i1 false)
 ; CALLS-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP37]], ptr align 16 [[TMP4]], i64 176, i1 false)
@@ -2910,8 +2984,9 @@ define void @VAStart(i32 %x, ...) sanitize_memory {
 ; CALLS-NEXT:    [[TMP41:%.*]] = load ptr, ptr [[TMP40]], align 8
 ; CALLS-NEXT:    [[TMP42:%.*]] = ptrtoint ptr [[TMP41]] to i64
 ; CALLS-NEXT:    [[TMP43:%.*]] = xor i64 [[TMP42]], 87960930222080
-; CALLS-NEXT:    [[TMP44:%.*]] = inttoptr i64 [[TMP43]] to ptr
-; CALLS-NEXT:    [[TMP45:%.*]] = add i64 [[TMP43]], 17592186044416
+; CALLS-NEXT:    [[TMP50:%.*]] = add i64 [[TMP43]], 2097152
+; CALLS-NEXT:    [[TMP44:%.*]] = inttoptr i64 [[TMP50]] to ptr
+; CALLS-NEXT:    [[TMP45:%.*]] = add i64 [[TMP43]], 17592188141568
 ; CALLS-NEXT:    [[TMP46:%.*]] = inttoptr i64 [[TMP45]] to ptr
 ; CALLS-NEXT:    [[TMP47:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176
 ; CALLS-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP44]], ptr align 16 [[TMP47]], i64 [[TMP0]], i1 false)
@@ -2940,7 +3015,8 @@ define void @VolatileStore(ptr nocapture %p, i32 %x) nounwind uwtable sanitize_m
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store i32 [[TMP0]], ptr [[TMP3]], align 4
 ; CHECK-NEXT:    store volatile i32 [[X]], ptr [[P]], align 4
 ; CHECK-NEXT:    ret void
@@ -2953,16 +3029,17 @@ define void @VolatileStore(ptr nocapture %p, i32 %x) nounwind uwtable sanitize_m
 ; ORIGIN-NEXT:    call void @llvm.donothing()
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP7:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    store i32 [[TMP0]], ptr [[TMP4]], align 4
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP0]], 0
-; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB7:.*]], label %[[BB8:.*]], !prof [[PROF1]]
-; ORIGIN:       [[BB7]]:
-; ORIGIN-NEXT:    store i32 [[TMP1]], ptr [[TMP6]], align 4
-; ORIGIN-NEXT:    br label %[[BB8]]
+; ORIGIN-NEXT:    br i1 [[_MSCMP]], label %[[BB8:.*]], label %[[BB9:.*]], !prof [[PROF1]]
 ; ORIGIN:       [[BB8]]:
+; ORIGIN-NEXT:    store i32 [[TMP1]], ptr [[TMP6]], align 4
+; ORIGIN-NEXT:    br label %[[BB9]]
+; ORIGIN:       [[BB9]]:
 ; ORIGIN-NEXT:    store volatile i32 [[X]], ptr [[P]], align 4
 ; ORIGIN-NEXT:    ret void
 ;
@@ -2977,8 +3054,9 @@ define void @VolatileStore(ptr nocapture %p, i32 %x) nounwind uwtable sanitize_m
 ; CALLS-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP0]], i32 zeroext [[TMP1]])
 ; CALLS-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P]] to i64
 ; CALLS-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CALLS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
-; CALLS-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 17592186044416
+; CALLS-NEXT:    [[TMP9:%.*]] = add i64 [[TMP5]], 2097152
+; CALLS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP9]] to ptr
+; CALLS-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 17592188141568
 ; CALLS-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; CALLS-NEXT:    store i32 [[TMP2]], ptr [[TMP6]], align 4
 ; CALLS-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[TMP2]], ptr [[P]], i32 zeroext [[TMP3]])
@@ -3075,7 +3153,8 @@ define i32 @NoSanitizeMemoryAlloca() {
 ; CHECK-NEXT:    [[P:%.*]] = alloca i32, align 4
 ; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 2097152
+; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP3]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP2]], i8 0, i64 4, i1 false)
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    store i32 0, ptr @__msan_retval_tls, align 8
@@ -3090,8 +3169,9 @@ define i32 @NoSanitizeMemoryAlloca() {
 ; ORIGIN-NEXT:    [[P:%.*]] = alloca i32, align 4
 ; ORIGIN-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; ORIGIN-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
-; ORIGIN-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 17592186044416
+; ORIGIN-NEXT:    [[TMP7:%.*]] = add i64 [[TMP1]], 2097152
+; ORIGIN-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; ORIGIN-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP4:%.*]] = and i64 [[TMP3]], -4
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP2]], i8 0, i64 4, i1 false)
@@ -3110,8 +3190,9 @@ define i32 @NoSanitizeMemoryAlloca() {
 ; CALLS-NEXT:    [[P:%.*]] = alloca i32, align 4
 ; CALLS-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[P]] to i64
 ; CALLS-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080
-; CALLS-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr
-; CALLS-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 17592186044416
+; CALLS-NEXT:    [[TMP7:%.*]] = add i64 [[TMP1]], 2097152
+; CALLS-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CALLS-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 17592188141568
 ; CALLS-NEXT:    [[TMP4:%.*]] = and i64 [[TMP3]], -4
 ; CALLS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CALLS-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP2]], i8 0, i64 4, i1 false)
@@ -3429,12 +3510,14 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; CHECK-NEXT:    [[AGG_TMP2:%.*]] = alloca [[STRUCT_STRUCTBYVAL:%.*]], align 8
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP3]], i8 -1, i64 16, i1 false)
 ; CHECK-NEXT:    [[AGG_TMP_SROA_0_0_COPYLOAD:%.*]] = load i64, ptr [[S]], align 4
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[S]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP6]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or i64 [[TMP0]], 0
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or i64 [[_MSPROP]], 0
@@ -3442,7 +3525,8 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; CHECK-NEXT:    [[AGG_TMP_SROA_2_0_COPYLOAD:%.*]] = load i64, ptr [[AGG_TMP_SROA_2_0__SROA_IDX]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[AGG_TMP_SROA_2_0__SROA_IDX]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP19:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CHECK-NEXT:    [[_MSLD2:%.*]] = load i64, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[TMP10:%.*]] = call ptr @__msan_memcpy(ptr [[AGG_TMP2]], ptr [[S]], i64 16)
 ; CHECK-NEXT:    store i32 -1, ptr @__msan_param_tls, align 8
@@ -3452,7 +3536,8 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; CHECK-NEXT:    store i64 [[_MSLD2]], ptr getelementptr (i8, ptr @__msan_param_tls, i64 32), align 8
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP21:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP21]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 40), ptr align 8 [[TMP13]], i64 16, i1 false)
 ; CHECK-NEXT:    store i64 [[_MSLD]], ptr getelementptr (i8, ptr @__msan_va_arg_tls, i64 8), align 8
 ; CHECK-NEXT:    store i64 [[_MSLD2]], ptr getelementptr (i8, ptr @__msan_va_arg_tls, i64 16), align 8
@@ -3460,7 +3545,8 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; CHECK-NEXT:    store i64 [[_MSLD2]], ptr getelementptr (i8, ptr @__msan_va_arg_tls, i64 32), align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 176), ptr align 8 [[TMP16]], i64 16, i1 false)
 ; CHECK-NEXT:    store i64 16, ptr @__msan_va_arg_overflow_size_tls, align 8
 ; CHECK-NEXT:    call void (i32, ...) @VAArgStructFn(i32 undef, i64 [[AGG_TMP_SROA_0_0_COPYLOAD]], i64 [[AGG_TMP_SROA_2_0_COPYLOAD]], i64 [[AGG_TMP_SROA_0_0_COPYLOAD]], i64 [[AGG_TMP_SROA_2_0_COPYLOAD]], ptr byval([[STRUCT_STRUCTBYVAL]]) align 8 [[AGG_TMP2]])
@@ -3475,8 +3561,9 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; ORIGIN-NEXT:    [[AGG_TMP2:%.*]] = alloca [[STRUCT_STRUCTBYVAL:%.*]], align 8
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP43:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP43]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = and i64 [[TMP5]], -4
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP4]], i8 -1, i64 16, i1 false)
@@ -3484,8 +3571,9 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; ORIGIN-NEXT:    [[AGG_TMP_SROA_0_0_COPYLOAD:%.*]] = load i64, ptr [[S]], align 4
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[S]] to i64
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
-; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592186044416
+; ORIGIN-NEXT:    [[TMP45:%.*]] = add i64 [[TMP9]], 2097152
+; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP45]] to ptr
+; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP10]], align 4
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = load i32, ptr [[TMP12]], align 4
@@ -3495,8 +3583,9 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; ORIGIN-NEXT:    [[AGG_TMP_SROA_2_0_COPYLOAD:%.*]] = load i64, ptr [[AGG_TMP_SROA_2_0__SROA_IDX]], align 4
 ; ORIGIN-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[AGG_TMP_SROA_2_0__SROA_IDX]] to i64
 ; ORIGIN-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; ORIGIN-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
-; ORIGIN-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 17592186044416
+; ORIGIN-NEXT:    [[TMP46:%.*]] = add i64 [[TMP15]], 2097152
+; ORIGIN-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP46]] to ptr
+; ORIGIN-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD2:%.*]] = load i64, ptr [[TMP16]], align 4
 ; ORIGIN-NEXT:    [[TMP19:%.*]] = load i32, ptr [[TMP18]], align 4
@@ -3513,8 +3602,9 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; ORIGIN-NEXT:    store i32 [[TMP19]], ptr getelementptr (i8, ptr @__msan_param_origin_tls, i64 32), align 4
 ; ORIGIN-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; ORIGIN-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; ORIGIN-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
-; ORIGIN-NEXT:    [[TMP24:%.*]] = add i64 [[TMP22]], 17592186044416
+; ORIGIN-NEXT:    [[TMP47:%.*]] = add i64 [[TMP22]], 2097152
+; ORIGIN-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP47]] to ptr
+; ORIGIN-NEXT:    [[TMP24:%.*]] = add i64 [[TMP22]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP25:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 40), ptr align 8 [[TMP23]], i64 16, i1 false)
 ; ORIGIN-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 40), ptr align 4 [[TMP25]], i64 16, i1 false)
@@ -3540,8 +3630,9 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; ORIGIN-NEXT:    store i64 [[TMP37]], ptr getelementptr (i8, ptr @__msan_va_arg_origin_tls, i64 32), align 8
 ; ORIGIN-NEXT:    [[TMP38:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; ORIGIN-NEXT:    [[TMP39:%.*]] = xor i64 [[TMP38]], 87960930222080
-; ORIGIN-NEXT:    [[TMP40:%.*]] = inttoptr i64 [[TMP39]] to ptr
-; ORIGIN-NEXT:    [[TMP41:%.*]] = add i64 [[TMP39]], 17592186044416
+; ORIGIN-NEXT:    [[TMP44:%.*]] = add i64 [[TMP39]], 2097152
+; ORIGIN-NEXT:    [[TMP40:%.*]] = inttoptr i64 [[TMP44]] to ptr
+; ORIGIN-NEXT:    [[TMP41:%.*]] = add i64 [[TMP39]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP42:%.*]] = inttoptr i64 [[TMP41]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 176), ptr align 8 [[TMP40]], i64 16, i1 false)
 ; ORIGIN-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_origin_tls, i64 176), ptr align 8 [[TMP42]], i64 16, i1 false)
@@ -3558,8 +3649,9 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; CALLS-NEXT:    [[AGG_TMP2:%.*]] = alloca [[STRUCT_STRUCTBYVAL:%.*]], align 8
 ; CALLS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CALLS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CALLS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; CALLS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; CALLS-NEXT:    [[TMP43:%.*]] = add i64 [[TMP3]], 2097152
+; CALLS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP43]] to ptr
+; CALLS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; CALLS-NEXT:    [[TMP6:%.*]] = and i64 [[TMP5]], -4
 ; CALLS-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CALLS-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP4]], i8 -1, i64 16, i1 false)
@@ -3568,8 +3660,9 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; CALLS-NEXT:    [[AGG_TMP_SROA_0_0_COPYLOAD:%.*]] = load i64, ptr [[S]], align 4
 ; CALLS-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[S]] to i64
 ; CALLS-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CALLS-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
-; CALLS-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592186044416
+; CALLS-NEXT:    [[TMP45:%.*]] = add i64 [[TMP9]], 2097152
+; CALLS-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP45]] to ptr
+; CALLS-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592188141568
 ; CALLS-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CALLS-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP10]], align 4
 ; CALLS-NEXT:    [[TMP13:%.*]] = load i32, ptr [[TMP12]], align 4
@@ -3580,8 +3673,9 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; CALLS-NEXT:    [[AGG_TMP_SROA_2_0_COPYLOAD:%.*]] = load i64, ptr [[AGG_TMP_SROA_2_0__SROA_IDX]], align 4
 ; CALLS-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[AGG_TMP_SROA_2_0__SROA_IDX]] to i64
 ; CALLS-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CALLS-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
-; CALLS-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 17592186044416
+; CALLS-NEXT:    [[TMP46:%.*]] = add i64 [[TMP15]], 2097152
+; CALLS-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP46]] to ptr
+; CALLS-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 17592188141568
 ; CALLS-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CALLS-NEXT:    [[_MSLD2:%.*]] = load i64, ptr [[TMP16]], align 4
 ; CALLS-NEXT:    [[TMP19:%.*]] = load i32, ptr [[TMP18]], align 4
@@ -3598,8 +3692,9 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; CALLS-NEXT:    store i32 [[TMP19]], ptr getelementptr (i8, ptr @__msan_param_origin_tls, i64 32), align 4
 ; CALLS-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CALLS-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CALLS-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
-; CALLS-NEXT:    [[TMP24:%.*]] = add i64 [[TMP22]], 17592186044416
+; CALLS-NEXT:    [[TMP47:%.*]] = add i64 [[TMP22]], 2097152
+; CALLS-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP47]] to ptr
+; CALLS-NEXT:    [[TMP24:%.*]] = add i64 [[TMP22]], 17592188141568
 ; CALLS-NEXT:    [[TMP25:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CALLS-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 40), ptr align 8 [[TMP23]], i64 16, i1 false)
 ; CALLS-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 40), ptr align 4 [[TMP25]], i64 16, i1 false)
@@ -3625,8 +3720,9 @@ define void @VAArgStruct(ptr nocapture %s) sanitize_memory {
 ; CALLS-NEXT:    store i64 [[TMP37]], ptr getelementptr (i8, ptr @__msan_va_arg_origin_tls, i64 32), align 8
 ; CALLS-NEXT:    [[TMP38:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CALLS-NEXT:    [[TMP39:%.*]] = xor i64 [[TMP38]], 87960930222080
-; CALLS-NEXT:    [[TMP40:%.*]] = inttoptr i64 [[TMP39]] to ptr
-; CALLS-NEXT:    [[TMP41:%.*]] = add i64 [[TMP39]], 17592186044416
+; CALLS-NEXT:    [[TMP44:%.*]] = add i64 [[TMP39]], 2097152
+; CALLS-NEXT:    [[TMP40:%.*]] = inttoptr i64 [[TMP44]] to ptr
+; CALLS-NEXT:    [[TMP41:%.*]] = add i64 [[TMP39]], 17592188141568
 ; CALLS-NEXT:    [[TMP42:%.*]] = inttoptr i64 [[TMP41]] to ptr
 ; CALLS-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 176), ptr align 8 [[TMP40]], i64 16, i1 false)
 ; CALLS-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_origin_tls, i64 176), ptr align 8 [[TMP42]], i64 16, i1 false)
@@ -3656,12 +3752,14 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; CHECK-NEXT:    [[AGG_TMP2:%.*]] = alloca [[STRUCT_STRUCTBYVAL:%.*]], align 8
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP17:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP3]], i8 -1, i64 16, i1 false)
 ; CHECK-NEXT:    [[AGG_TMP_SROA_0_0_COPYLOAD:%.*]] = load i64, ptr [[S]], align 4
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[S]] to i64
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP5]], 2097152
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP18]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP6]], align 4
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or i64 [[TMP0]], 0
 ; CHECK-NEXT:    [[_MSPROP1:%.*]] = or i64 [[_MSPROP]], 0
@@ -3669,7 +3767,8 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; CHECK-NEXT:    [[AGG_TMP_SROA_2_0_COPYLOAD:%.*]] = load i64, ptr [[AGG_TMP_SROA_2_0__SROA_IDX]], align 4
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[AGG_TMP_SROA_2_0__SROA_IDX]] to i64
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP19:%.*]] = add i64 [[TMP8]], 2097152
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP19]] to ptr
 ; CHECK-NEXT:    [[_MSLD2:%.*]] = load i64, ptr [[TMP9]], align 4
 ; CHECK-NEXT:    [[TMP10:%.*]] = call ptr @__msan_memcpy(ptr [[AGG_TMP2]], ptr [[S]], i64 16)
 ; CHECK-NEXT:    store i32 -1, ptr @__msan_param_tls, align 8
@@ -3679,7 +3778,8 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; CHECK-NEXT:    store i64 [[_MSLD2]], ptr getelementptr (i8, ptr @__msan_param_tls, i64 32), align 8
 ; CHECK-NEXT:    [[TMP11:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CHECK-NEXT:    [[TMP12:%.*]] = xor i64 [[TMP11]], 87960930222080
-; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP21:%.*]] = add i64 [[TMP12]], 2097152
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP21]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 40), ptr align 8 [[TMP13]], i64 16, i1 false)
 ; CHECK-NEXT:    store i64 [[_MSLD]], ptr getelementptr (i8, ptr @__msan_va_arg_tls, i64 8), align 8
 ; CHECK-NEXT:    store i64 [[_MSLD2]], ptr getelementptr (i8, ptr @__msan_va_arg_tls, i64 16), align 8
@@ -3687,7 +3787,8 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; CHECK-NEXT:    store i64 [[_MSLD2]], ptr getelementptr (i8, ptr @__msan_va_arg_tls, i64 32), align 8
 ; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; CHECK-NEXT:    [[TMP20:%.*]] = add i64 [[TMP15]], 2097152
+; CHECK-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP20]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 48), ptr align 8 [[TMP16]], i64 16, i1 false)
 ; CHECK-NEXT:    store i64 16, ptr @__msan_va_arg_overflow_size_tls, align 8
 ; CHECK-NEXT:    call void (i32, ...) @VAArgStructFn(i32 undef, i64 [[AGG_TMP_SROA_0_0_COPYLOAD]], i64 [[AGG_TMP_SROA_2_0_COPYLOAD]], i64 [[AGG_TMP_SROA_0_0_COPYLOAD]], i64 [[AGG_TMP_SROA_2_0_COPYLOAD]], ptr byval([[STRUCT_STRUCTBYVAL]]) align 8 [[AGG_TMP2]])
@@ -3702,8 +3803,9 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; ORIGIN-NEXT:    [[AGG_TMP2:%.*]] = alloca [[STRUCT_STRUCTBYVAL:%.*]], align 8
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP43:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP43]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = and i64 [[TMP5]], -4
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP4]], i8 -1, i64 16, i1 false)
@@ -3711,8 +3813,9 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; ORIGIN-NEXT:    [[AGG_TMP_SROA_0_0_COPYLOAD:%.*]] = load i64, ptr [[S]], align 4
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[S]] to i64
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
-; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592186044416
+; ORIGIN-NEXT:    [[TMP45:%.*]] = add i64 [[TMP9]], 2097152
+; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP45]] to ptr
+; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP10]], align 4
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = load i32, ptr [[TMP12]], align 4
@@ -3722,8 +3825,9 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; ORIGIN-NEXT:    [[AGG_TMP_SROA_2_0_COPYLOAD:%.*]] = load i64, ptr [[AGG_TMP_SROA_2_0__SROA_IDX]], align 4
 ; ORIGIN-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[AGG_TMP_SROA_2_0__SROA_IDX]] to i64
 ; ORIGIN-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; ORIGIN-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
-; ORIGIN-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 17592186044416
+; ORIGIN-NEXT:    [[TMP46:%.*]] = add i64 [[TMP15]], 2097152
+; ORIGIN-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP46]] to ptr
+; ORIGIN-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD2:%.*]] = load i64, ptr [[TMP16]], align 4
 ; ORIGIN-NEXT:    [[TMP19:%.*]] = load i32, ptr [[TMP18]], align 4
@@ -3740,8 +3844,9 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; ORIGIN-NEXT:    store i32 [[TMP19]], ptr getelementptr (i8, ptr @__msan_param_origin_tls, i64 32), align 4
 ; ORIGIN-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; ORIGIN-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; ORIGIN-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
-; ORIGIN-NEXT:    [[TMP24:%.*]] = add i64 [[TMP22]], 17592186044416
+; ORIGIN-NEXT:    [[TMP47:%.*]] = add i64 [[TMP22]], 2097152
+; ORIGIN-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP47]] to ptr
+; ORIGIN-NEXT:    [[TMP24:%.*]] = add i64 [[TMP22]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP25:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 40), ptr align 8 [[TMP23]], i64 16, i1 false)
 ; ORIGIN-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 40), ptr align 4 [[TMP25]], i64 16, i1 false)
@@ -3767,8 +3872,9 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; ORIGIN-NEXT:    store i64 [[TMP37]], ptr getelementptr (i8, ptr @__msan_va_arg_origin_tls, i64 32), align 8
 ; ORIGIN-NEXT:    [[TMP38:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; ORIGIN-NEXT:    [[TMP39:%.*]] = xor i64 [[TMP38]], 87960930222080
-; ORIGIN-NEXT:    [[TMP40:%.*]] = inttoptr i64 [[TMP39]] to ptr
-; ORIGIN-NEXT:    [[TMP41:%.*]] = add i64 [[TMP39]], 17592186044416
+; ORIGIN-NEXT:    [[TMP44:%.*]] = add i64 [[TMP39]], 2097152
+; ORIGIN-NEXT:    [[TMP40:%.*]] = inttoptr i64 [[TMP44]] to ptr
+; ORIGIN-NEXT:    [[TMP41:%.*]] = add i64 [[TMP39]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP42:%.*]] = inttoptr i64 [[TMP41]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 48), ptr align 8 [[TMP40]], i64 16, i1 false)
 ; ORIGIN-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_origin_tls, i64 48), ptr align 8 [[TMP42]], i64 16, i1 false)
@@ -3785,8 +3891,9 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; CALLS-NEXT:    [[AGG_TMP2:%.*]] = alloca [[STRUCT_STRUCTBYVAL:%.*]], align 8
 ; CALLS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CALLS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CALLS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; CALLS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; CALLS-NEXT:    [[TMP43:%.*]] = add i64 [[TMP3]], 2097152
+; CALLS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP43]] to ptr
+; CALLS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; CALLS-NEXT:    [[TMP6:%.*]] = and i64 [[TMP5]], -4
 ; CALLS-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CALLS-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP4]], i8 -1, i64 16, i1 false)
@@ -3795,8 +3902,9 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; CALLS-NEXT:    [[AGG_TMP_SROA_0_0_COPYLOAD:%.*]] = load i64, ptr [[S]], align 4
 ; CALLS-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[S]] to i64
 ; CALLS-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; CALLS-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
-; CALLS-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592186044416
+; CALLS-NEXT:    [[TMP45:%.*]] = add i64 [[TMP9]], 2097152
+; CALLS-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP45]] to ptr
+; CALLS-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592188141568
 ; CALLS-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; CALLS-NEXT:    [[_MSLD:%.*]] = load i64, ptr [[TMP10]], align 4
 ; CALLS-NEXT:    [[TMP13:%.*]] = load i32, ptr [[TMP12]], align 4
@@ -3807,8 +3915,9 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; CALLS-NEXT:    [[AGG_TMP_SROA_2_0_COPYLOAD:%.*]] = load i64, ptr [[AGG_TMP_SROA_2_0__SROA_IDX]], align 4
 ; CALLS-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[AGG_TMP_SROA_2_0__SROA_IDX]] to i64
 ; CALLS-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080
-; CALLS-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr
-; CALLS-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 17592186044416
+; CALLS-NEXT:    [[TMP46:%.*]] = add i64 [[TMP15]], 2097152
+; CALLS-NEXT:    [[TMP16:%.*]] = inttoptr i64 [[TMP46]] to ptr
+; CALLS-NEXT:    [[TMP17:%.*]] = add i64 [[TMP15]], 17592188141568
 ; CALLS-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP17]] to ptr
 ; CALLS-NEXT:    [[_MSLD2:%.*]] = load i64, ptr [[TMP16]], align 4
 ; CALLS-NEXT:    [[TMP19:%.*]] = load i32, ptr [[TMP18]], align 4
@@ -3825,8 +3934,9 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; CALLS-NEXT:    store i32 [[TMP19]], ptr getelementptr (i8, ptr @__msan_param_origin_tls, i64 32), align 4
 ; CALLS-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CALLS-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080
-; CALLS-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
-; CALLS-NEXT:    [[TMP24:%.*]] = add i64 [[TMP22]], 17592186044416
+; CALLS-NEXT:    [[TMP47:%.*]] = add i64 [[TMP22]], 2097152
+; CALLS-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP47]] to ptr
+; CALLS-NEXT:    [[TMP24:%.*]] = add i64 [[TMP22]], 17592188141568
 ; CALLS-NEXT:    [[TMP25:%.*]] = inttoptr i64 [[TMP24]] to ptr
 ; CALLS-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_param_tls, i64 40), ptr align 8 [[TMP23]], i64 16, i1 false)
 ; CALLS-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 getelementptr (i8, ptr @__msan_param_origin_tls, i64 40), ptr align 4 [[TMP25]], i64 16, i1 false)
@@ -3852,8 +3962,9 @@ define void @VAArgStructNoSSE(ptr nocapture %s) sanitize_memory #0 {
 ; CALLS-NEXT:    store i64 [[TMP37]], ptr getelementptr (i8, ptr @__msan_va_arg_origin_tls, i64 32), align 8
 ; CALLS-NEXT:    [[TMP38:%.*]] = ptrtoint ptr [[AGG_TMP2]] to i64
 ; CALLS-NEXT:    [[TMP39:%.*]] = xor i64 [[TMP38]], 87960930222080
-; CALLS-NEXT:    [[TMP40:%.*]] = inttoptr i64 [[TMP39]] to ptr
-; CALLS-NEXT:    [[TMP41:%.*]] = add i64 [[TMP39]], 17592186044416
+; CALLS-NEXT:    [[TMP44:%.*]] = add i64 [[TMP39]], 2097152
+; CALLS-NEXT:    [[TMP40:%.*]] = inttoptr i64 [[TMP44]] to ptr
+; CALLS-NEXT:    [[TMP41:%.*]] = add i64 [[TMP39]], 17592188141568
 ; CALLS-NEXT:    [[TMP42:%.*]] = inttoptr i64 [[TMP41]] to ptr
 ; CALLS-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_tls, i64 48), ptr align 8 [[TMP40]], i64 16, i1 false)
 ; CALLS-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 8 getelementptr (i8, ptr @__msan_va_arg_origin_tls, i64 48), ptr align 8 [[TMP42]], i64 16, i1 false)
diff --git a/llvm/test/Instrumentation/MemorySanitizer/msan_debug_info.ll b/llvm/test/Instrumentation/MemorySanitizer/msan_debug_info.ll
index 846912ebef54a..a89e8c23c7d49 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/msan_debug_info.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/msan_debug_info.ll
@@ -30,10 +30,11 @@ define void @Store(ptr nocapture %p, i32 %x) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP0]], i32 zeroext [[TMP1]]), !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P:%.*]] to i64, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 17592186044416, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    store i32 [[TMP2]], ptr [[TMP6]], align 4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP5]], 2097152, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP5]], 17592188141568, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    store i32 [[TMP2]], ptr [[TMP7]], align 4, !dbg [[DBG2]]
 ; CHECK-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[TMP2]], ptr [[P]], i32 zeroext [[TMP3]]), !dbg [[DBG2]]
 ; CHECK-NEXT:    store i32 [[X:%.*]], ptr [[P]], align 4, !dbg [[DBG2]]
 ; CHECK-NEXT:    ret void
@@ -53,21 +54,22 @@ define void @LoadAndCmp(ptr nocapture %a) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[TMP2:%.*]] = load i32, ptr [[A:%.*]], align 4, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[A]] to i64, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592186044416, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP5]], align 4, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP8:%.*]] = load i32, ptr [[TMP7]], align 4, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP9:%.*]] = xor i32 [[TMP2]], 0, !dbg [[DBG8:![0-9]+]]
-; CHECK-NEXT:    [[TMP10:%.*]] = or i32 [[_MSLD]], 0, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP11:%.*]] = icmp ne i32 [[TMP10]], 0, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP12:%.*]] = xor i32 [[TMP10]], -1, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP13:%.*]] = and i32 [[TMP12]], [[TMP9]], !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP14:%.*]] = icmp eq i32 [[TMP13]], 0, !dbg [[DBG8]]
-; CHECK-NEXT:    [[_MSPROP_ICMP:%.*]] = and i1 [[TMP11]], [[TMP14]], !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP4]], 2097152, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP4]], 17592188141568, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP6]], align 4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP9:%.*]] = load i32, ptr [[TMP8]], align 4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP10:%.*]] = xor i32 [[TMP2]], 0, !dbg [[DBG8:![0-9]+]]
+; CHECK-NEXT:    [[TMP11:%.*]] = or i32 [[_MSLD]], 0, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP12:%.*]] = icmp ne i32 [[TMP11]], 0, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP13:%.*]] = xor i32 [[TMP11]], -1, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP14:%.*]] = and i32 [[TMP13]], [[TMP10]], !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP15:%.*]] = icmp eq i32 [[TMP14]], 0, !dbg [[DBG8]]
+; CHECK-NEXT:    [[_MSPROP_ICMP:%.*]] = and i1 [[TMP12]], [[TMP15]], !dbg [[DBG8]]
 ; CHECK-NEXT:    [[TOBOOL:%.*]] = icmp eq i32 [[TMP2]], 0, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP15:%.*]] = zext i1 [[_MSPROP_ICMP]] to i8, !dbg [[DBG9:![0-9]+]]
-; CHECK-NEXT:    call void @__msan_maybe_warning_1(i8 zeroext [[TMP15]], i32 zeroext [[TMP8]]), !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP16:%.*]] = zext i1 [[_MSPROP_ICMP]] to i8, !dbg [[DBG9:![0-9]+]]
+; CHECK-NEXT:    call void @__msan_maybe_warning_1(i8 zeroext [[TMP16]], i32 zeroext [[TMP9]]), !dbg [[DBG9]]
 ; CHECK-NEXT:    br i1 [[TOBOOL]], label [[IF_END:%.*]], label [[IF_THEN:%.*]], !dbg [[DBG9]]
 ; CHECK:       if.then:
 ; CHECK-NEXT:    store i64 0, ptr @__msan_va_arg_overflow_size_tls, align 8
@@ -114,10 +116,11 @@ define void @CopyRetVal(ptr nocapture %a) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP0]], i32 zeroext [[TMP1]]), !dbg [[DBG8]]
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[A:%.*]] to i64, !dbg [[DBG8]]
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592186044416, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr, !dbg [[DBG8]]
-; CHECK-NEXT:    store i32 [[_MSRET]], ptr [[TMP5]], align 4, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP4]], 2097152, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP4]], 17592188141568, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr, !dbg [[DBG8]]
+; CHECK-NEXT:    store i32 [[_MSRET]], ptr [[TMP6]], align 4, !dbg [[DBG8]]
 ; CHECK-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[_MSRET]], ptr [[A]], i32 zeroext [[TMP2]]), !dbg [[DBG8]]
 ; CHECK-NEXT:    store i32 [[CALL]], ptr [[A]], align 4, !dbg [[DBG8]]
 ; CHECK-NEXT:    ret void
@@ -142,23 +145,25 @@ define void @SExt(ptr nocapture %a, ptr nocapture %b) nounwind uwtable sanitize_
 ; CHECK-NEXT:    [[TMP4:%.*]] = load i16, ptr [[B:%.*]], align 2, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[B]] to i64, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 17592186044416, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP9:%.*]] = and i64 [[TMP8]], -4, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    [[_MSLD:%.*]] = load i16, ptr [[TMP7]], align 2, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP11:%.*]] = load i32, ptr [[TMP10]], align 4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP6]], 2097152, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP6]], 17592188141568, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP10:%.*]] = and i64 [[TMP9]], -4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[_MSLD:%.*]] = load i16, ptr [[TMP8]], align 2, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP12:%.*]] = load i32, ptr [[TMP11]], align 4, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = sext i16 [[_MSLD]] to i32, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP12:%.*]] = sext i16 [[TMP4]] to i32, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP13:%.*]] = sext i16 [[TMP4]] to i32, !dbg [[DBG8]]
 ; CHECK-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP2]], i32 zeroext [[TMP3]]), !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP13:%.*]] = ptrtoint ptr [[A:%.*]] to i64, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP14:%.*]] = xor i64 [[TMP13]], 87960930222080, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP14]] to ptr, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP14]], 17592186044416, !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[A:%.*]] to i64, !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080, !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP15]], 2097152, !dbg [[DBG9]]
 ; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr, !dbg [[DBG9]]
-; CHECK-NEXT:    store i32 [[_MSPROP]], ptr [[TMP15]], align 4, !dbg [[DBG9]]
-; CHECK-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[_MSPROP]], ptr [[A]], i32 zeroext [[TMP11]]), !dbg [[DBG9]]
-; CHECK-NEXT:    store i32 [[TMP12]], ptr [[A]], align 4, !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP15]], 17592188141568, !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP19:%.*]] = inttoptr i64 [[TMP18]] to ptr, !dbg [[DBG9]]
+; CHECK-NEXT:    store i32 [[_MSPROP]], ptr [[TMP17]], align 4, !dbg [[DBG9]]
+; CHECK-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[_MSPROP]], ptr [[A]], i32 zeroext [[TMP12]]), !dbg [[DBG9]]
+; CHECK-NEXT:    store i32 [[TMP13]], ptr [[A]], align 4, !dbg [[DBG9]]
 ; CHECK-NEXT:    ret void
 ;
 entry:
@@ -402,20 +407,22 @@ define i32 @ShadowLoadAlignmentLarge() nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[Y:%.*]] = alloca i32, align 64, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[Y]] to i64, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 64 [[TMP3]], i8 -1, i64 4, i1 false), !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP2]], 2097152, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP2]], 17592188141568, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP6:%.*]] = and i64 [[TMP5]], -4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 64 [[TMP4]], i8 -1, i64 4, i1 false), !dbg [[DBG2]]
 ; CHECK-NEXT:    call void @__msan_set_alloca_origin_with_descr(ptr [[Y]], i64 4, ptr @[[GLOB0:[0-9]+]], ptr @[[GLOB1:[0-9]+]]), !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = load volatile i32, ptr [[Y]], align 64, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP13:%.*]] = ptrtoint ptr [[Y]] to i64, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP13]], 87960930222080, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592186044416, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[Y]] to i64, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP10]], 2097152, !dbg [[DBG8]]
 ; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr, !dbg [[DBG8]]
-; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP10]], align 64, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP14:%.*]] = load i32, ptr [[TMP12]], align 64, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP10]], 17592188141568, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP13]] to ptr, !dbg [[DBG8]]
+; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP12]], align 64, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP14:%.*]] = load i32, ptr [[TMP15]], align 64, !dbg [[DBG8]]
 ; CHECK-NEXT:    store i32 [[_MSLD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    store i32 [[TMP14]], ptr @__msan_retval_origin_tls, align 4
 ; CHECK-NEXT:    ret i32 [[TMP8]]
@@ -513,56 +520,62 @@ define void @VAStart(i32 %x, ...) sanitize_memory {
 ; CHECK-NEXT:    [[X_ADDR:%.*]] = alloca i32, align 4, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP7:%.*]] = ptrtoint ptr [[X_ADDR]] to i64, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP8:%.*]] = xor i64 [[TMP7]], 87960930222080, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP10:%.*]] = add i64 [[TMP8]], 17592186044416, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP11:%.*]] = and i64 [[TMP10]], -4, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP9]], i8 -1, i64 4, i1 false), !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP8]], 2097152, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP8]], 17592188141568, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP12:%.*]] = and i64 [[TMP11]], -4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP10]], i8 -1, i64 4, i1 false), !dbg [[DBG2]]
 ; CHECK-NEXT:    call void @__msan_set_alloca_origin_with_descr(ptr [[X_ADDR]], i64 4, ptr @[[GLOB2:[0-9]+]], ptr @[[GLOB3:[0-9]+]]), !dbg [[DBG2]]
-; CHECK-NEXT:    [[VA:%.*]] = alloca [1 x %struct.__va_list_tag], align 16, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP13:%.*]] = ptrtoint ptr [[VA]] to i64, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP14:%.*]] = xor i64 [[TMP13]], 87960930222080, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP15:%.*]] = inttoptr i64 [[TMP14]] to ptr, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP14]], 17592186044416, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP17:%.*]] = and i64 [[TMP16]], -4, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP18:%.*]] = inttoptr i64 [[TMP17]] to ptr, !dbg [[DBG8]]
-; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP15]], i8 -1, i64 24, i1 false), !dbg [[DBG8]]
+; CHECK-NEXT:    [[VA:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP14:%.*]] = ptrtoint ptr [[VA]] to i64, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP15:%.*]] = xor i64 [[TMP14]], 87960930222080, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP16:%.*]] = add i64 [[TMP15]], 2097152, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP17:%.*]] = inttoptr i64 [[TMP16]] to ptr, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP18:%.*]] = add i64 [[TMP15]], 17592188141568, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP19:%.*]] = and i64 [[TMP18]], -4, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP20:%.*]] = inttoptr i64 [[TMP19]] to ptr, !dbg [[DBG8]]
+; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 16 [[TMP17]], i8 -1, i64 24, i1 false), !dbg [[DBG8]]
 ; CHECK-NEXT:    call void @__msan_set_alloca_origin_with_descr(ptr [[VA]], i64 24, ptr @[[GLOB4:[0-9]+]], ptr @[[GLOB5:[0-9]+]]), !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP19:%.*]] = ptrtoint ptr [[X_ADDR]] to i64, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP20:%.*]] = xor i64 [[TMP19]], 87960930222080, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP21:%.*]] = inttoptr i64 [[TMP20]] to ptr, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP22:%.*]] = add i64 [[TMP20]], 17592186044416, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr, !dbg [[DBG9]]
-; CHECK-NEXT:    store i32 [[TMP5]], ptr [[TMP21]], align 4, !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP21:%.*]] = ptrtoint ptr [[X_ADDR]] to i64, !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP22:%.*]] = xor i64 [[TMP21]], 87960930222080, !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP23:%.*]] = add i64 [[TMP22]], 2097152, !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP24:%.*]] = inttoptr i64 [[TMP23]] to ptr, !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP25:%.*]] = add i64 [[TMP22]], 17592188141568, !dbg [[DBG9]]
+; CHECK-NEXT:    [[TMP26:%.*]] = inttoptr i64 [[TMP25]] to ptr, !dbg [[DBG9]]
+; CHECK-NEXT:    store i32 [[TMP5]], ptr [[TMP24]], align 4, !dbg [[DBG9]]
 ; CHECK-NEXT:    call void @__msan_maybe_store_origin_4(i32 zeroext [[TMP5]], ptr [[X_ADDR]], i32 zeroext [[TMP6]]), !dbg [[DBG9]]
 ; CHECK-NEXT:    store i32 [[X:%.*]], ptr [[X_ADDR]], align 4, !dbg [[DBG9]]
-; CHECK-NEXT:    [[TMP24:%.*]] = ptrtoint ptr [[VA]] to i64, !dbg [[DBG10:![0-9]+]]
-; CHECK-NEXT:    [[TMP25:%.*]] = xor i64 [[TMP24]], 87960930222080, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP26:%.*]] = inttoptr i64 [[TMP25]] to ptr, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP27:%.*]] = add i64 [[TMP25]], 17592186044416, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP28:%.*]] = inttoptr i64 [[TMP27]] to ptr, !dbg [[DBG10]]
-; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP26]], i8 0, i64 24, i1 false), !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP27:%.*]] = ptrtoint ptr [[VA]] to i64, !dbg [[DBG10:![0-9]+]]
+; CHECK-NEXT:    [[TMP28:%.*]] = xor i64 [[TMP27]], 87960930222080, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP29:%.*]] = add i64 [[TMP28]], 2097152, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP30:%.*]] = inttoptr i64 [[TMP29]] to ptr, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP31:%.*]] = add i64 [[TMP28]], 17592188141568, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP32:%.*]] = inttoptr i64 [[TMP31]] to ptr, !dbg [[DBG10]]
+; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 8 [[TMP30]], i8 0, i64 24, i1 false), !dbg [[DBG10]]
 ; CHECK-NEXT:    call void @llvm.va_start.p0(ptr [[VA]]), !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP29:%.*]] = getelementptr i8, ptr [[VA]], i64 16, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP30:%.*]] = load ptr, ptr [[TMP29]], align 8, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP31:%.*]] = ptrtoint ptr [[TMP30]] to i64, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP32:%.*]] = xor i64 [[TMP31]], 87960930222080, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP33:%.*]] = inttoptr i64 [[TMP32]] to ptr, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP34:%.*]] = add i64 [[TMP32]], 17592186044416, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP35:%.*]] = inttoptr i64 [[TMP34]] to ptr, !dbg [[DBG10]]
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP33]], ptr align 16 [[TMP2]], i64 176, i1 false), !dbg [[DBG10]]
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP35]], ptr align 16 [[TMP4]], i64 176, i1 false), !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP36:%.*]] = getelementptr i8, ptr [[VA]], i64 8, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP37:%.*]] = load ptr, ptr [[TMP36]], align 8, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP38:%.*]] = ptrtoint ptr [[TMP37]] to i64, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP39:%.*]] = xor i64 [[TMP38]], 87960930222080, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP33:%.*]] = getelementptr i8, ptr [[VA]], i64 16, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP34:%.*]] = load ptr, ptr [[TMP33]], align 8, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP35:%.*]] = ptrtoint ptr [[TMP34]] to i64, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP36:%.*]] = xor i64 [[TMP35]], 87960930222080, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP37:%.*]] = add i64 [[TMP36]], 2097152, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP38:%.*]] = inttoptr i64 [[TMP37]] to ptr, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP39:%.*]] = add i64 [[TMP36]], 17592188141568, !dbg [[DBG10]]
 ; CHECK-NEXT:    [[TMP40:%.*]] = inttoptr i64 [[TMP39]] to ptr, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP41:%.*]] = add i64 [[TMP39]], 17592186044416, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP42:%.*]] = inttoptr i64 [[TMP41]] to ptr, !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP43:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176, !dbg [[DBG10]]
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP40]], ptr align 16 [[TMP43]], i64 [[TMP0]], i1 false), !dbg [[DBG10]]
-; CHECK-NEXT:    [[TMP44:%.*]] = getelementptr i8, ptr [[TMP4]], i32 176, !dbg [[DBG10]]
-; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP42]], ptr align 16 [[TMP44]], i64 [[TMP0]], i1 false), !dbg [[DBG10]]
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP38]], ptr align 16 [[TMP2]], i64 176, i1 false), !dbg [[DBG10]]
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP40]], ptr align 16 [[TMP4]], i64 176, i1 false), !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP41:%.*]] = getelementptr i8, ptr [[VA]], i64 8, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP42:%.*]] = load ptr, ptr [[TMP41]], align 8, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP43:%.*]] = ptrtoint ptr [[TMP42]] to i64, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP44:%.*]] = xor i64 [[TMP43]], 87960930222080, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP45:%.*]] = add i64 [[TMP44]], 2097152, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP46:%.*]] = inttoptr i64 [[TMP45]] to ptr, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP47:%.*]] = add i64 [[TMP44]], 17592188141568, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP48:%.*]] = inttoptr i64 [[TMP47]] to ptr, !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP49:%.*]] = getelementptr i8, ptr [[TMP2]], i32 176, !dbg [[DBG10]]
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP46]], ptr align 16 [[TMP49]], i64 [[TMP0]], i1 false), !dbg [[DBG10]]
+; CHECK-NEXT:    [[TMP50:%.*]] = getelementptr i8, ptr [[TMP4]], i32 176, !dbg [[DBG10]]
+; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 16 [[TMP48]], ptr align 16 [[TMP50]], i64 [[TMP0]], i1 false), !dbg [[DBG10]]
 ; CHECK-NEXT:    ret void
 ;
 entry:
@@ -615,11 +628,12 @@ define i32 @NoSanitizeMemoryAlloca() {
 ; CHECK-NEXT:    [[P:%.*]] = alloca i32, align 4, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[P]] to i64, !dbg [[DBG2]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 17592186044416, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP4:%.*]] = and i64 [[TMP3]], -4, !dbg [[DBG2]]
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr, !dbg [[DBG2]]
-; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP2]], i8 0, i64 4, i1 false), !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP2:%.*]] = add i64 [[TMP1]], 2097152, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP1]], 17592188141568, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4, !dbg [[DBG2]]
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP5]] to ptr, !dbg [[DBG2]]
+; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP3]], i8 0, i64 4, i1 false), !dbg [[DBG2]]
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8, !dbg [[DBG8]]
 ; CHECK-NEXT:    store i32 0, ptr @__msan_retval_tls, align 8, !dbg [[DBG8]]
 ; CHECK-NEXT:    [[X:%.*]] = call i32 @NoSanitizeMemoryAllocaHelper(ptr [[P]]), !dbg [[DBG8]]
@@ -673,11 +687,12 @@ define void @msan() sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.lifetime.start.p0(ptr [[TEXT]]), !dbg [[DBG8]]
 ; CHECK-NEXT:    [[TMP0:%.*]] = ptrtoint ptr [[TEXT]] to i64, !dbg [[DBG8]]
 ; CHECK-NEXT:    [[TMP1:%.*]] = xor i64 [[TMP0]], 87960930222080, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP2:%.*]] = inttoptr i64 [[TMP1]] to ptr, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP1]], 17592186044416, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP4:%.*]] = and i64 [[TMP3]], -4, !dbg [[DBG8]]
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr, !dbg [[DBG8]]
-; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 1 [[TMP2]], i8 -1, i64 1, i1 false), !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP2:%.*]] = add i64 [[TMP1]], 2097152, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP1]], 17592188141568, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4, !dbg [[DBG8]]
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr, !dbg [[DBG8]]
+; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 1 [[TMP3]], i8 -1, i64 1, i1 false), !dbg [[DBG8]]
 ; CHECK-NEXT:    call void @__msan_set_alloca_origin_with_descr(ptr [[TEXT]], i64 1, ptr @[[GLOB6:[0-9]+]], ptr @[[GLOB7:[0-9]+]]), !dbg [[DBG8]]
 ; CHECK-NEXT:    store i64 0, ptr @__msan_param_tls, align 8, !dbg [[DBG9]]
 ; CHECK-NEXT:    call void @foo8(ptr [[TEXT]]), !dbg [[DBG9]]
diff --git a/llvm/test/Instrumentation/MemorySanitizer/msan_eager.ll b/llvm/test/Instrumentation/MemorySanitizer/msan_eager.ll
index 13a50c28aa286..2ed1b505d12ff 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/msan_eager.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/msan_eager.ll
@@ -30,17 +30,18 @@ define noundef i32 @LoadedRet() nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[O:%.*]] = load i32, ptr [[P]], align 4
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 4
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1:![0-9]+]]
-; CHECK:       7:
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1:![0-9]+]]
+; CHECK:       8:
 ; CHECK-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP6]]) #[[ATTR3:[0-9]+]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       8:
+; CHECK:       9:
 ; CHECK-NEXT:    ret i32 [[O]]
 ;
   %p = inttoptr i64 0 to ptr
@@ -55,8 +56,9 @@ define void @NormalArg(i32 noundef %a) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[P:%.*]] = inttoptr i64 0 to ptr
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store i32 0, ptr [[TMP3]], align 4
 ; CHECK-NEXT:    store i32 [[A:%.*]], ptr [[P]], align 4
@@ -75,16 +77,17 @@ define void @NormalArgAfterNoUndef(i32 noundef %a, i32 %b) nounwind uwtable sani
 ; CHECK-NEXT:    [[P:%.*]] = inttoptr i64 0 to ptr
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592186044416
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592188141568
 ; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    store i32 [[TMP1]], ptr [[TMP5]], align 4
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; CHECK:       8:
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP10:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
+; CHECK:       9:
 ; CHECK-NEXT:    store i32 [[TMP2]], ptr [[TMP7]], align 4
 ; CHECK-NEXT:    br label [[TMP9]]
-; CHECK:       9:
+; CHECK:       10:
 ; CHECK-NEXT:    store i32 [[B:%.*]], ptr [[P]], align 4
 ; CHECK-NEXT:    ret void
 ;
@@ -101,16 +104,17 @@ define void @PartialArg(i32 %a) nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[P:%.*]] = inttoptr i64 0 to ptr
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592186044416
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP4]], 17592188141568
 ; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    store i32 [[TMP1]], ptr [[TMP5]], align 4
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP1]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
-; CHECK:       8:
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP10:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
+; CHECK:       9:
 ; CHECK-NEXT:    store i32 [[TMP2]], ptr [[TMP7]], align 4
 ; CHECK-NEXT:    br label [[TMP9]]
-; CHECK:       9:
+; CHECK:       10:
 ; CHECK-NEXT:    store i32 [[A:%.*]], ptr [[P]], align 4
 ; CHECK-NEXT:    ret void
 ;
@@ -151,17 +155,18 @@ define void @CallWithLoaded() nounwind uwtable sanitize_memory {
 ; CHECK-NEXT:    [[O:%.*]] = load i32, ptr [[P]], align 4
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load i32, ptr [[TMP3]], align 4
 ; CHECK-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 4
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[_MSLD]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1]]
-; CHECK:       7:
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1]]
+; CHECK:       8:
 ; CHECK-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP6]]) #[[ATTR3]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       8:
+; CHECK:       9:
 ; CHECK-NEXT:    call void @NormalArg(i32 [[O]]) #[[ATTR0]]
 ; CHECK-NEXT:    ret void
 ;
diff --git a/llvm/test/Instrumentation/MemorySanitizer/opaque-ptr.ll b/llvm/test/Instrumentation/MemorySanitizer/opaque-ptr.ll
index e88341663fa3a..21f40e2d878b5 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/opaque-ptr.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/opaque-ptr.ll
@@ -7,7 +7,8 @@ define void @test_memcpy(ptr %p, ptr byval(i32) %p2) sanitize_memory {
 ; CHECK-LABEL: @test_memcpy(
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 [[TMP3]], ptr align 4 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 4, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP4:%.*]] = call ptr @__msan_memcpy(ptr [[P:%.*]], ptr [[P2]], i64 4)
@@ -21,7 +22,8 @@ define void @test_memmove(ptr %p, ptr byval(i32) %p2) sanitize_memory {
 ; CHECK-LABEL: @test_memmove(
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P2:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    call void @llvm.memcpy.p0.p0.i64(ptr align 4 [[TMP3]], ptr align 4 getelementptr (i8, ptr @__msan_param_tls, i64 8), i64 4, i1 false)
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP4:%.*]] = call ptr @__msan_memmove(ptr [[P:%.*]], ptr [[P2]], i64 4)
diff --git a/llvm/test/Instrumentation/MemorySanitizer/reduce.ll b/llvm/test/Instrumentation/MemorySanitizer/reduce.ll
index 54e87b9e10a50..9e339c99ce583 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/reduce.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/reduce.ll
@@ -16,8 +16,9 @@ define i32 @reduce_add() sanitize_memory {
 ; CHECK-NEXT:    [[O:%.*]] = load <3 x i32>, ptr [[P]], align 16
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <3 x i32>, ptr [[TMP3]], align 16
 ; CHECK-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 16
@@ -41,8 +42,9 @@ define i32 @reduce_and() sanitize_memory {
 ; CHECK-NEXT:    [[O:%.*]] = load <3 x i32>, ptr [[P]], align 16
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP11:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP11]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <3 x i32>, ptr [[TMP3]], align 16
 ; CHECK-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 16
@@ -69,8 +71,9 @@ define i32 @reduce_or() sanitize_memory {
 ; CHECK-NEXT:    [[O:%.*]] = load <3 x i32>, ptr [[P]], align 16
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; CHECK-NEXT:    [[TMP12:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP12]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <3 x i32>, ptr [[TMP3]], align 16
 ; CHECK-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 16
diff --git a/llvm/test/Instrumentation/MemorySanitizer/vector-load-store.ll b/llvm/test/Instrumentation/MemorySanitizer/vector-load-store.ll
index d01974016f6c9..1ffb365d66916 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/vector-load-store.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/vector-load-store.ll
@@ -12,7 +12,8 @@ define void @load.v1i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <1 x i32>, ptr [[P:%.*]], align 4
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <1 x i32>, ptr [[TMP4]], align 4
 ; CHECK-NEXT:    ret void
 ;
@@ -28,7 +29,8 @@ define void @load.v1i32(ptr %p) sanitize_memory {
 ; ADDR-NEXT:    [[TMP4:%.*]] = load <1 x i32>, ptr [[P:%.*]], align 4
 ; ADDR-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; ADDR-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    [[_MSLD:%.*]] = load <1 x i32>, ptr [[TMP7]], align 4
 ; ADDR-NEXT:    ret void
 ;
@@ -37,8 +39,9 @@ define void @load.v1i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = load <1 x i32>, ptr [[P:%.*]], align 4
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGINS-NEXT:    [[_MSLD:%.*]] = load <1 x i32>, ptr [[TMP4]], align 4
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 4
@@ -54,7 +57,8 @@ define void @load.v2i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <2 x i32>, ptr [[P:%.*]], align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <2 x i32>, ptr [[TMP4]], align 8
 ; CHECK-NEXT:    ret void
 ;
@@ -70,7 +74,8 @@ define void @load.v2i32(ptr %p) sanitize_memory {
 ; ADDR-NEXT:    [[TMP4:%.*]] = load <2 x i32>, ptr [[P:%.*]], align 8
 ; ADDR-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; ADDR-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    [[_MSLD:%.*]] = load <2 x i32>, ptr [[TMP7]], align 8
 ; ADDR-NEXT:    ret void
 ;
@@ -79,8 +84,9 @@ define void @load.v2i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = load <2 x i32>, ptr [[P:%.*]], align 8
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGINS-NEXT:    [[_MSLD:%.*]] = load <2 x i32>, ptr [[TMP4]], align 8
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 8
@@ -96,7 +102,8 @@ define void @load.v4i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <4 x i32>, ptr [[P:%.*]], align 16
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP4]], align 16
 ; CHECK-NEXT:    ret void
 ;
@@ -112,7 +119,8 @@ define void @load.v4i32(ptr %p) sanitize_memory {
 ; ADDR-NEXT:    [[TMP4:%.*]] = load <4 x i32>, ptr [[P:%.*]], align 16
 ; ADDR-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; ADDR-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP7]], align 16
 ; ADDR-NEXT:    ret void
 ;
@@ -121,8 +129,9 @@ define void @load.v4i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = load <4 x i32>, ptr [[P:%.*]], align 16
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGINS-NEXT:    [[_MSLD:%.*]] = load <4 x i32>, ptr [[TMP4]], align 16
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 16
@@ -138,7 +147,8 @@ define void @load.v8i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <8 x i32>, ptr [[P:%.*]], align 32
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <8 x i32>, ptr [[TMP4]], align 32
 ; CHECK-NEXT:    ret void
 ;
@@ -154,7 +164,8 @@ define void @load.v8i32(ptr %p) sanitize_memory {
 ; ADDR-NEXT:    [[TMP4:%.*]] = load <8 x i32>, ptr [[P:%.*]], align 32
 ; ADDR-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; ADDR-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    [[_MSLD:%.*]] = load <8 x i32>, ptr [[TMP7]], align 32
 ; ADDR-NEXT:    ret void
 ;
@@ -163,8 +174,9 @@ define void @load.v8i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = load <8 x i32>, ptr [[P:%.*]], align 32
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGINS-NEXT:    [[_MSLD:%.*]] = load <8 x i32>, ptr [[TMP4]], align 32
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 32
@@ -180,7 +192,8 @@ define void @load.v16i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <16 x i32>, ptr [[P:%.*]], align 64
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP4]], align 64
 ; CHECK-NEXT:    ret void
 ;
@@ -196,7 +209,8 @@ define void @load.v16i32(ptr %p) sanitize_memory {
 ; ADDR-NEXT:    [[TMP4:%.*]] = load <16 x i32>, ptr [[P:%.*]], align 64
 ; ADDR-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; ADDR-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP7]], align 64
 ; ADDR-NEXT:    ret void
 ;
@@ -205,8 +219,9 @@ define void @load.v16i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = load <16 x i32>, ptr [[P:%.*]], align 64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGINS-NEXT:    [[_MSLD:%.*]] = load <16 x i32>, ptr [[TMP4]], align 64
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 64
@@ -222,7 +237,8 @@ define void @store.v1i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store <1 x i32> zeroinitializer, ptr [[TMP3]], align 4
 ; CHECK-NEXT:    store <1 x i32> zeroinitializer, ptr [[P]], align 4
 ; CHECK-NEXT:    ret void
@@ -238,7 +254,8 @@ define void @store.v1i32(ptr %p) sanitize_memory {
 ; ADDR:       3:
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    store <1 x i32> zeroinitializer, ptr [[TMP6]], align 4
 ; ADDR-NEXT:    store <1 x i32> zeroinitializer, ptr [[P]], align 4
 ; ADDR-NEXT:    ret void
@@ -247,8 +264,9 @@ define void @store.v1i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGINS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGINS-NEXT:    store <1 x i32> zeroinitializer, ptr [[TMP3]], align 4
 ; ORIGINS-NEXT:    store <1 x i32> zeroinitializer, ptr [[P]], align 4
@@ -263,7 +281,8 @@ define void @store.v2i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store <2 x i32> zeroinitializer, ptr [[TMP3]], align 8
 ; CHECK-NEXT:    store <2 x i32> zeroinitializer, ptr [[P]], align 8
 ; CHECK-NEXT:    ret void
@@ -279,7 +298,8 @@ define void @store.v2i32(ptr %p) sanitize_memory {
 ; ADDR:       3:
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    store <2 x i32> zeroinitializer, ptr [[TMP6]], align 8
 ; ADDR-NEXT:    store <2 x i32> zeroinitializer, ptr [[P]], align 8
 ; ADDR-NEXT:    ret void
@@ -288,8 +308,9 @@ define void @store.v2i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGINS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGINS-NEXT:    store <2 x i32> zeroinitializer, ptr [[TMP3]], align 8
 ; ORIGINS-NEXT:    store <2 x i32> zeroinitializer, ptr [[P]], align 8
@@ -304,7 +325,8 @@ define void @store.v4i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store <4 x i32> zeroinitializer, ptr [[TMP3]], align 16
 ; CHECK-NEXT:    store <4 x i32> zeroinitializer, ptr [[P]], align 16
 ; CHECK-NEXT:    ret void
@@ -320,7 +342,8 @@ define void @store.v4i32(ptr %p) sanitize_memory {
 ; ADDR:       3:
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    store <4 x i32> zeroinitializer, ptr [[TMP6]], align 16
 ; ADDR-NEXT:    store <4 x i32> zeroinitializer, ptr [[P]], align 16
 ; ADDR-NEXT:    ret void
@@ -329,8 +352,9 @@ define void @store.v4i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGINS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGINS-NEXT:    store <4 x i32> zeroinitializer, ptr [[TMP3]], align 16
 ; ORIGINS-NEXT:    store <4 x i32> zeroinitializer, ptr [[P]], align 16
@@ -345,7 +369,8 @@ define void @store.v8i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store <8 x i32> zeroinitializer, ptr [[TMP3]], align 32
 ; CHECK-NEXT:    store <8 x i32> zeroinitializer, ptr [[P]], align 32
 ; CHECK-NEXT:    ret void
@@ -361,7 +386,8 @@ define void @store.v8i32(ptr %p) sanitize_memory {
 ; ADDR:       3:
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    store <8 x i32> zeroinitializer, ptr [[TMP6]], align 32
 ; ADDR-NEXT:    store <8 x i32> zeroinitializer, ptr [[P]], align 32
 ; ADDR-NEXT:    ret void
@@ -370,8 +396,9 @@ define void @store.v8i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGINS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGINS-NEXT:    store <8 x i32> zeroinitializer, ptr [[TMP3]], align 32
 ; ORIGINS-NEXT:    store <8 x i32> zeroinitializer, ptr [[P]], align 32
@@ -386,7 +413,8 @@ define void @store.v16i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store <16 x i32> zeroinitializer, ptr [[TMP3]], align 64
 ; CHECK-NEXT:    store <16 x i32> zeroinitializer, ptr [[P]], align 64
 ; CHECK-NEXT:    ret void
@@ -402,7 +430,8 @@ define void @store.v16i32(ptr %p) sanitize_memory {
 ; ADDR:       3:
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    store <16 x i32> zeroinitializer, ptr [[TMP6]], align 64
 ; ADDR-NEXT:    store <16 x i32> zeroinitializer, ptr [[P]], align 64
 ; ADDR-NEXT:    ret void
@@ -411,8 +440,9 @@ define void @store.v16i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGINS-NEXT:    [[TMP6:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGINS-NEXT:    store <16 x i32> zeroinitializer, ptr [[TMP3]], align 64
 ; ORIGINS-NEXT:    store <16 x i32> zeroinitializer, ptr [[P]], align 64
@@ -428,7 +458,8 @@ define void @load.nxv1i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <vscale x 1 x i32>, ptr [[P:%.*]], align 4
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <vscale x 1 x i32>, ptr [[TMP4]], align 4
 ; CHECK-NEXT:    ret void
 ;
@@ -444,7 +475,8 @@ define void @load.nxv1i32(ptr %p) sanitize_memory {
 ; ADDR-NEXT:    [[TMP4:%.*]] = load <vscale x 1 x i32>, ptr [[P:%.*]], align 4
 ; ADDR-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; ADDR-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    [[_MSLD:%.*]] = load <vscale x 1 x i32>, ptr [[TMP7]], align 4
 ; ADDR-NEXT:    ret void
 ;
@@ -453,8 +485,9 @@ define void @load.nxv1i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = load <vscale x 1 x i32>, ptr [[P:%.*]], align 4
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGINS-NEXT:    [[_MSLD:%.*]] = load <vscale x 1 x i32>, ptr [[TMP4]], align 4
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 4
@@ -470,7 +503,8 @@ define void @load.nxv2i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <vscale x 2 x i32>, ptr [[P:%.*]], align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <vscale x 2 x i32>, ptr [[TMP4]], align 8
 ; CHECK-NEXT:    ret void
 ;
@@ -486,7 +520,8 @@ define void @load.nxv2i32(ptr %p) sanitize_memory {
 ; ADDR-NEXT:    [[TMP4:%.*]] = load <vscale x 2 x i32>, ptr [[P:%.*]], align 8
 ; ADDR-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; ADDR-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    [[_MSLD:%.*]] = load <vscale x 2 x i32>, ptr [[TMP7]], align 8
 ; ADDR-NEXT:    ret void
 ;
@@ -495,8 +530,9 @@ define void @load.nxv2i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = load <vscale x 2 x i32>, ptr [[P:%.*]], align 8
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGINS-NEXT:    [[_MSLD:%.*]] = load <vscale x 2 x i32>, ptr [[TMP4]], align 8
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 8
@@ -512,7 +548,8 @@ define void @load.nxv4i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <vscale x 4 x i32>, ptr [[P:%.*]], align 16
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <vscale x 4 x i32>, ptr [[TMP4]], align 16
 ; CHECK-NEXT:    ret void
 ;
@@ -528,7 +565,8 @@ define void @load.nxv4i32(ptr %p) sanitize_memory {
 ; ADDR-NEXT:    [[TMP4:%.*]] = load <vscale x 4 x i32>, ptr [[P:%.*]], align 16
 ; ADDR-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; ADDR-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    [[_MSLD:%.*]] = load <vscale x 4 x i32>, ptr [[TMP7]], align 16
 ; ADDR-NEXT:    ret void
 ;
@@ -537,8 +575,9 @@ define void @load.nxv4i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = load <vscale x 4 x i32>, ptr [[P:%.*]], align 16
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGINS-NEXT:    [[_MSLD:%.*]] = load <vscale x 4 x i32>, ptr [[TMP4]], align 16
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 16
@@ -554,7 +593,8 @@ define void @load.nxv8i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <vscale x 8 x i32>, ptr [[P:%.*]], align 32
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <vscale x 8 x i32>, ptr [[TMP4]], align 32
 ; CHECK-NEXT:    ret void
 ;
@@ -570,7 +610,8 @@ define void @load.nxv8i32(ptr %p) sanitize_memory {
 ; ADDR-NEXT:    [[TMP4:%.*]] = load <vscale x 8 x i32>, ptr [[P:%.*]], align 32
 ; ADDR-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; ADDR-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    [[_MSLD:%.*]] = load <vscale x 8 x i32>, ptr [[TMP7]], align 32
 ; ADDR-NEXT:    ret void
 ;
@@ -579,8 +620,9 @@ define void @load.nxv8i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = load <vscale x 8 x i32>, ptr [[P:%.*]], align 32
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGINS-NEXT:    [[_MSLD:%.*]] = load <vscale x 8 x i32>, ptr [[TMP4]], align 32
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 32
@@ -596,7 +638,8 @@ define void @load.nxv16i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <vscale x 16 x i32>, ptr [[P:%.*]], align 64
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <vscale x 16 x i32>, ptr [[TMP4]], align 64
 ; CHECK-NEXT:    ret void
 ;
@@ -612,7 +655,8 @@ define void @load.nxv16i32(ptr %p) sanitize_memory {
 ; ADDR-NEXT:    [[TMP4:%.*]] = load <vscale x 16 x i32>, ptr [[P:%.*]], align 64
 ; ADDR-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[P]] to i64
 ; ADDR-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; ADDR-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; ADDR-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ADDR-NEXT:    [[_MSLD:%.*]] = load <vscale x 16 x i32>, ptr [[TMP7]], align 64
 ; ADDR-NEXT:    ret void
 ;
@@ -621,8 +665,9 @@ define void @load.nxv16i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = load <vscale x 16 x i32>, ptr [[P:%.*]], align 64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64
 ; ORIGINS-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGINS-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGINS-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGINS-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGINS-NEXT:    [[_MSLD:%.*]] = load <vscale x 16 x i32>, ptr [[TMP4]], align 64
 ; ORIGINS-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 64
@@ -638,7 +683,8 @@ define void @store.nxv1i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store <vscale x 1 x i32> zeroinitializer, ptr [[TMP3]], align 4
 ; CHECK-NEXT:    store <vscale x 1 x i32> zeroinitializer, ptr [[P]], align 4
 ; CHECK-NEXT:    ret void
@@ -654,7 +700,8 @@ define void @store.nxv1i32(ptr %p) sanitize_memory {
 ; ADDR:       3:
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    store <vscale x 1 x i32> zeroinitializer, ptr [[TMP6]], align 4
 ; ADDR-NEXT:    store <vscale x 1 x i32> zeroinitializer, ptr [[P]], align 4
 ; ADDR-NEXT:    ret void
@@ -663,14 +710,15 @@ define void @store.nxv1i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGINS-NEXT:    [[TMP14:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP14]] to ptr
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGINS-NEXT:    store <vscale x 1 x i32> zeroinitializer, ptr [[TMP3]], align 4
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = call i32 @llvm.vector.reduce.or.nxv1i32(<vscale x 1 x i32> zeroinitializer)
 ; ORIGINS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP6]], 0
 ; ORIGINS-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP13:%.*]], !prof [[PROF1:![0-9]+]]
-; ORIGINS:       7:
+; ORIGINS:       8:
 ; ORIGINS-NEXT:    [[TMP8:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGINS-NEXT:    [[TMP9:%.*]] = mul nuw i64 [[TMP8]], 4
 ; ORIGINS-NEXT:    [[TMP10:%.*]] = add i64 [[TMP9]], 3
@@ -685,7 +733,7 @@ define void @store.nxv1i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
 ; ORIGINS:       .split.split:
 ; ORIGINS-NEXT:    br label [[TMP13]]
-; ORIGINS:       13:
+; ORIGINS:       14:
 ; ORIGINS-NEXT:    store <vscale x 1 x i32> zeroinitializer, ptr [[P]], align 4
 ; ORIGINS-NEXT:    ret void
 ;
@@ -698,7 +746,8 @@ define void @store.nxv2i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store <vscale x 2 x i32> zeroinitializer, ptr [[TMP3]], align 8
 ; CHECK-NEXT:    store <vscale x 2 x i32> zeroinitializer, ptr [[P]], align 8
 ; CHECK-NEXT:    ret void
@@ -714,7 +763,8 @@ define void @store.nxv2i32(ptr %p) sanitize_memory {
 ; ADDR:       3:
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    store <vscale x 2 x i32> zeroinitializer, ptr [[TMP6]], align 8
 ; ADDR-NEXT:    store <vscale x 2 x i32> zeroinitializer, ptr [[P]], align 8
 ; ADDR-NEXT:    ret void
@@ -723,14 +773,15 @@ define void @store.nxv2i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGINS-NEXT:    [[TMP14:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP14]] to ptr
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGINS-NEXT:    store <vscale x 2 x i32> zeroinitializer, ptr [[TMP3]], align 8
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = call i32 @llvm.vector.reduce.or.nxv2i32(<vscale x 2 x i32> zeroinitializer)
 ; ORIGINS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP6]], 0
 ; ORIGINS-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; ORIGINS:       7:
+; ORIGINS:       8:
 ; ORIGINS-NEXT:    [[TMP8:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGINS-NEXT:    [[TMP9:%.*]] = mul nuw i64 [[TMP8]], 8
 ; ORIGINS-NEXT:    [[TMP10:%.*]] = add i64 [[TMP9]], 3
@@ -745,7 +796,7 @@ define void @store.nxv2i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
 ; ORIGINS:       .split.split:
 ; ORIGINS-NEXT:    br label [[TMP13]]
-; ORIGINS:       13:
+; ORIGINS:       14:
 ; ORIGINS-NEXT:    store <vscale x 2 x i32> zeroinitializer, ptr [[P]], align 8
 ; ORIGINS-NEXT:    ret void
 ;
@@ -758,7 +809,8 @@ define void @store.nxv4i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store <vscale x 4 x i32> zeroinitializer, ptr [[TMP3]], align 16
 ; CHECK-NEXT:    store <vscale x 4 x i32> zeroinitializer, ptr [[P]], align 16
 ; CHECK-NEXT:    ret void
@@ -774,7 +826,8 @@ define void @store.nxv4i32(ptr %p) sanitize_memory {
 ; ADDR:       3:
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    store <vscale x 4 x i32> zeroinitializer, ptr [[TMP6]], align 16
 ; ADDR-NEXT:    store <vscale x 4 x i32> zeroinitializer, ptr [[P]], align 16
 ; ADDR-NEXT:    ret void
@@ -783,14 +836,15 @@ define void @store.nxv4i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGINS-NEXT:    [[TMP14:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP14]] to ptr
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGINS-NEXT:    store <vscale x 4 x i32> zeroinitializer, ptr [[TMP3]], align 16
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = call i32 @llvm.vector.reduce.or.nxv4i32(<vscale x 4 x i32> zeroinitializer)
 ; ORIGINS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP6]], 0
 ; ORIGINS-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; ORIGINS:       7:
+; ORIGINS:       8:
 ; ORIGINS-NEXT:    [[TMP8:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGINS-NEXT:    [[TMP9:%.*]] = mul nuw i64 [[TMP8]], 16
 ; ORIGINS-NEXT:    [[TMP10:%.*]] = add i64 [[TMP9]], 3
@@ -805,7 +859,7 @@ define void @store.nxv4i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
 ; ORIGINS:       .split.split:
 ; ORIGINS-NEXT:    br label [[TMP13]]
-; ORIGINS:       13:
+; ORIGINS:       14:
 ; ORIGINS-NEXT:    store <vscale x 4 x i32> zeroinitializer, ptr [[P]], align 16
 ; ORIGINS-NEXT:    ret void
 ;
@@ -818,7 +872,8 @@ define void @store.nxv8i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store <vscale x 8 x i32> zeroinitializer, ptr [[TMP3]], align 32
 ; CHECK-NEXT:    store <vscale x 8 x i32> zeroinitializer, ptr [[P]], align 32
 ; CHECK-NEXT:    ret void
@@ -834,7 +889,8 @@ define void @store.nxv8i32(ptr %p) sanitize_memory {
 ; ADDR:       3:
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    store <vscale x 8 x i32> zeroinitializer, ptr [[TMP6]], align 32
 ; ADDR-NEXT:    store <vscale x 8 x i32> zeroinitializer, ptr [[P]], align 32
 ; ADDR-NEXT:    ret void
@@ -843,14 +899,15 @@ define void @store.nxv8i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGINS-NEXT:    [[TMP14:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP14]] to ptr
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGINS-NEXT:    store <vscale x 8 x i32> zeroinitializer, ptr [[TMP3]], align 32
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = call i32 @llvm.vector.reduce.or.nxv8i32(<vscale x 8 x i32> zeroinitializer)
 ; ORIGINS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP6]], 0
 ; ORIGINS-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; ORIGINS:       7:
+; ORIGINS:       8:
 ; ORIGINS-NEXT:    [[TMP8:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGINS-NEXT:    [[TMP9:%.*]] = mul nuw i64 [[TMP8]], 32
 ; ORIGINS-NEXT:    [[TMP10:%.*]] = add i64 [[TMP9]], 3
@@ -865,7 +922,7 @@ define void @store.nxv8i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
 ; ORIGINS:       .split.split:
 ; ORIGINS-NEXT:    br label [[TMP13]]
-; ORIGINS:       13:
+; ORIGINS:       14:
 ; ORIGINS-NEXT:    store <vscale x 8 x i32> zeroinitializer, ptr [[P]], align 32
 ; ORIGINS-NEXT:    ret void
 ;
@@ -878,7 +935,8 @@ define void @store.nxv16i32(ptr %p) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store <vscale x 16 x i32> zeroinitializer, ptr [[TMP3]], align 64
 ; CHECK-NEXT:    store <vscale x 16 x i32> zeroinitializer, ptr [[P]], align 64
 ; CHECK-NEXT:    ret void
@@ -894,7 +952,8 @@ define void @store.nxv16i32(ptr %p) sanitize_memory {
 ; ADDR:       3:
 ; ADDR-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ADDR-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; ADDR-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 2097152
+; ADDR-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ADDR-NEXT:    store <vscale x 16 x i32> zeroinitializer, ptr [[TMP6]], align 64
 ; ADDR-NEXT:    store <vscale x 16 x i32> zeroinitializer, ptr [[P]], align 64
 ; ADDR-NEXT:    ret void
@@ -903,14 +962,15 @@ define void @store.nxv16i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    call void @llvm.donothing()
 ; ORIGINS-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P:%.*]] to i64
 ; ORIGINS-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGINS-NEXT:    [[TMP14:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGINS-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP14]] to ptr
+; ORIGINS-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGINS-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGINS-NEXT:    store <vscale x 16 x i32> zeroinitializer, ptr [[TMP3]], align 64
 ; ORIGINS-NEXT:    [[TMP6:%.*]] = call i32 @llvm.vector.reduce.or.nxv16i32(<vscale x 16 x i32> zeroinitializer)
 ; ORIGINS-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP6]], 0
 ; ORIGINS-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
-; ORIGINS:       7:
+; ORIGINS:       8:
 ; ORIGINS-NEXT:    [[TMP8:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGINS-NEXT:    [[TMP9:%.*]] = mul nuw i64 [[TMP8]], 64
 ; ORIGINS-NEXT:    [[TMP10:%.*]] = add i64 [[TMP9]], 3
@@ -925,7 +985,7 @@ define void @store.nxv16i32(ptr %p) sanitize_memory {
 ; ORIGINS-NEXT:    br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
 ; ORIGINS:       .split.split:
 ; ORIGINS-NEXT:    br label [[TMP13]]
-; ORIGINS:       13:
+; ORIGINS:       14:
 ; ORIGINS-NEXT:    store <vscale x 16 x i32> zeroinitializer, ptr [[P]], align 64
 ; ORIGINS-NEXT:    ret void
 ;
diff --git a/llvm/test/Instrumentation/MemorySanitizer/vscale.ll b/llvm/test/Instrumentation/MemorySanitizer/vscale.ll
index 514abedf8fe1a..6c38edfbc8bba 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/vscale.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/vscale.ll
@@ -12,11 +12,13 @@ define void @test_load_store_i32(ptr %a, ptr %b) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <vscale x 4 x i32>, ptr [[A]], align 16
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <vscale x 4 x i32>, ptr [[TMP4]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    store <vscale x 4 x i32> [[_MSLD]], ptr [[TMP7]], align 16
 ; CHECK-NEXT:    store <vscale x 4 x i32> [[TMP1]], ptr [[B]], align 16
 ; CHECK-NEXT:    ret void
@@ -27,21 +29,23 @@ define void @test_load_store_i32(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = load <vscale x 4 x i32>, ptr [[A]], align 16
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP22:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load <vscale x 4 x i32>, ptr [[TMP4]], align 16
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 16
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[B]] to i64
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
-; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592186044416
+; ORIGIN-NEXT:    [[TMP23:%.*]] = add i64 [[TMP9]], 2097152
+; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP23]] to ptr
+; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; ORIGIN-NEXT:    store <vscale x 4 x i32> [[_MSLD]], ptr [[TMP10]], align 16
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = call i32 @llvm.vector.reduce.or.nxv4i32(<vscale x 4 x i32> [[_MSLD]])
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP13]], 0
 ; ORIGIN-NEXT:    br i1 [[_MSCMP]], label [[TMP14:%.*]], label [[TMP21:%.*]], !prof [[PROF1:![0-9]+]]
-; ORIGIN:       14:
+; ORIGIN:       16:
 ; ORIGIN-NEXT:    [[TMP15:%.*]] = call i32 @__msan_chain_origin(i32 [[TMP7]])
 ; ORIGIN-NEXT:    [[TMP16:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGIN-NEXT:    [[TMP17:%.*]] = mul nuw i64 [[TMP16]], 16
@@ -57,7 +61,7 @@ define void @test_load_store_i32(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
 ; ORIGIN:       .split.split:
 ; ORIGIN-NEXT:    br label [[TMP21]]
-; ORIGIN:       21:
+; ORIGIN:       23:
 ; ORIGIN-NEXT:    store <vscale x 4 x i32> [[TMP1]], ptr [[B]], align 16
 ; ORIGIN-NEXT:    ret void
 ;
@@ -73,18 +77,21 @@ define void @test_load_store_add_int(ptr %a, ptr %b) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <vscale x 8 x i64>, ptr [[A]], align 64
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <vscale x 8 x i64>, ptr [[TMP4]], align 64
 ; CHECK-NEXT:    [[TMP5:%.*]] = load <vscale x 8 x i64>, ptr [[B]], align 64
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD1:%.*]] = load <vscale x 8 x i64>, ptr [[TMP8]], align 64
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <vscale x 8 x i64> [[_MSLD]], [[_MSLD1]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = add <vscale x 8 x i64> [[TMP1]], [[TMP5]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP11:%.*]] = xor i64 [[TMP10]], 87960930222080
-; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP11]], 2097152
+; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    store <vscale x 8 x i64> [[_MSLD1]], ptr [[TMP12]], align 64
 ; CHECK-NEXT:    store <vscale x 8 x i64> [[TMP5]], ptr [[B]], align 64
 ; CHECK-NEXT:    ret void
@@ -95,16 +102,18 @@ define void @test_load_store_add_int(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = load <vscale x 8 x i64>, ptr [[A]], align 64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP33:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP33]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load <vscale x 8 x i64>, ptr [[TMP4]], align 64
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 64
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = load <vscale x 8 x i64>, ptr [[B]], align 64
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[B]] to i64
 ; ORIGIN-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; ORIGIN-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
-; ORIGIN-NEXT:    [[TMP12:%.*]] = add i64 [[TMP10]], 17592186044416
+; ORIGIN-NEXT:    [[TMP34:%.*]] = add i64 [[TMP10]], 2097152
+; ORIGIN-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP34]] to ptr
+; ORIGIN-NEXT:    [[TMP12:%.*]] = add i64 [[TMP10]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD1:%.*]] = load <vscale x 8 x i64>, ptr [[TMP11]], align 64
 ; ORIGIN-NEXT:    [[TMP14:%.*]] = load i32, ptr [[TMP13]], align 64
@@ -115,14 +124,15 @@ define void @test_load_store_add_int(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP18:%.*]] = add <vscale x 8 x i64> [[TMP1]], [[TMP8]]
 ; ORIGIN-NEXT:    [[TMP19:%.*]] = ptrtoint ptr [[B]] to i64
 ; ORIGIN-NEXT:    [[TMP20:%.*]] = xor i64 [[TMP19]], 87960930222080
-; ORIGIN-NEXT:    [[TMP21:%.*]] = inttoptr i64 [[TMP20]] to ptr
-; ORIGIN-NEXT:    [[TMP22:%.*]] = add i64 [[TMP20]], 17592186044416
+; ORIGIN-NEXT:    [[TMP35:%.*]] = add i64 [[TMP20]], 2097152
+; ORIGIN-NEXT:    [[TMP21:%.*]] = inttoptr i64 [[TMP35]] to ptr
+; ORIGIN-NEXT:    [[TMP22:%.*]] = add i64 [[TMP20]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; ORIGIN-NEXT:    store <vscale x 8 x i64> [[_MSLD1]], ptr [[TMP21]], align 64
 ; ORIGIN-NEXT:    [[TMP24:%.*]] = call i64 @llvm.vector.reduce.or.nxv8i64(<vscale x 8 x i64> [[_MSLD1]])
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i64 [[TMP24]], 0
 ; ORIGIN-NEXT:    br i1 [[_MSCMP]], label [[TMP25:%.*]], label [[TMP32:%.*]], !prof [[PROF1]]
-; ORIGIN:       25:
+; ORIGIN:       28:
 ; ORIGIN-NEXT:    [[TMP26:%.*]] = call i32 @__msan_chain_origin(i32 [[TMP14]])
 ; ORIGIN-NEXT:    [[TMP27:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGIN-NEXT:    [[TMP28:%.*]] = mul nuw i64 [[TMP27]], 64
@@ -138,7 +148,7 @@ define void @test_load_store_add_int(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
 ; ORIGIN:       .split.split:
 ; ORIGIN-NEXT:    br label [[TMP32]]
-; ORIGIN:       32:
+; ORIGIN:       35:
 ; ORIGIN-NEXT:    store <vscale x 8 x i64> [[TMP8]], ptr [[B]], align 64
 ; ORIGIN-NEXT:    ret void
 ;
@@ -156,11 +166,13 @@ define void @test_load_store_float(ptr %a, ptr %b) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <vscale x 4 x float>, ptr [[A]], align 16
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP9:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP9]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <vscale x 4 x i32>, ptr [[TMP4]], align 16
 ; CHECK-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
+; CHECK-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 2097152
+; CHECK-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; CHECK-NEXT:    store <vscale x 4 x i32> [[_MSLD]], ptr [[TMP7]], align 16
 ; CHECK-NEXT:    store <vscale x 4 x float> [[TMP1]], ptr [[B]], align 16
 ; CHECK-NEXT:    ret void
@@ -171,21 +183,23 @@ define void @test_load_store_float(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = load <vscale x 4 x float>, ptr [[A]], align 16
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP22:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP22]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load <vscale x 4 x i32>, ptr [[TMP4]], align 16
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 16
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = ptrtoint ptr [[B]] to i64
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
-; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
-; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592186044416
+; ORIGIN-NEXT:    [[TMP23:%.*]] = add i64 [[TMP9]], 2097152
+; ORIGIN-NEXT:    [[TMP10:%.*]] = inttoptr i64 [[TMP23]] to ptr
+; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP9]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
 ; ORIGIN-NEXT:    store <vscale x 4 x i32> [[_MSLD]], ptr [[TMP10]], align 16
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = call i32 @llvm.vector.reduce.or.nxv4i32(<vscale x 4 x i32> [[_MSLD]])
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP13]], 0
 ; ORIGIN-NEXT:    br i1 [[_MSCMP]], label [[TMP14:%.*]], label [[TMP21:%.*]], !prof [[PROF1]]
-; ORIGIN:       14:
+; ORIGIN:       16:
 ; ORIGIN-NEXT:    [[TMP15:%.*]] = call i32 @__msan_chain_origin(i32 [[TMP7]])
 ; ORIGIN-NEXT:    [[TMP16:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGIN-NEXT:    [[TMP17:%.*]] = mul nuw i64 [[TMP16]], 16
@@ -201,7 +215,7 @@ define void @test_load_store_float(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
 ; ORIGIN:       .split.split:
 ; ORIGIN-NEXT:    br label [[TMP21]]
-; ORIGIN:       21:
+; ORIGIN:       23:
 ; ORIGIN-NEXT:    store <vscale x 4 x float> [[TMP1]], ptr [[B]], align 16
 ; ORIGIN-NEXT:    ret void
 ;
@@ -217,18 +231,21 @@ define void @test_load_store_add_float(ptr %a, ptr %b) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <vscale x 2 x float>, ptr [[A]], align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP13:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP13]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <vscale x 2 x i32>, ptr [[TMP4]], align 8
 ; CHECK-NEXT:    [[TMP5:%.*]] = load <vscale x 2 x float>, ptr [[B]], align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080
-; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; CHECK-NEXT:    [[TMP15:%.*]] = add i64 [[TMP7]], 2097152
+; CHECK-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP15]] to ptr
 ; CHECK-NEXT:    [[_MSLD1:%.*]] = load <vscale x 2 x i32>, ptr [[TMP8]], align 8
 ; CHECK-NEXT:    [[_MSPROP:%.*]] = or <vscale x 2 x i32> [[_MSLD]], [[_MSLD1]]
 ; CHECK-NEXT:    [[TMP9:%.*]] = fadd <vscale x 2 x float> [[TMP1]], [[TMP5]]
 ; CHECK-NEXT:    [[TMP10:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP11:%.*]] = xor i64 [[TMP10]], 87960930222080
-; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
+; CHECK-NEXT:    [[TMP14:%.*]] = add i64 [[TMP11]], 2097152
+; CHECK-NEXT:    [[TMP12:%.*]] = inttoptr i64 [[TMP14]] to ptr
 ; CHECK-NEXT:    store <vscale x 2 x i32> [[_MSLD1]], ptr [[TMP12]], align 8
 ; CHECK-NEXT:    store <vscale x 2 x float> [[TMP5]], ptr [[B]], align 8
 ; CHECK-NEXT:    ret void
@@ -239,16 +256,18 @@ define void @test_load_store_add_float(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = load <vscale x 2 x float>, ptr [[A]], align 8
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP33:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP33]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load <vscale x 2 x i32>, ptr [[TMP4]], align 8
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 8
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = load <vscale x 2 x float>, ptr [[B]], align 8
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = ptrtoint ptr [[B]] to i64
 ; ORIGIN-NEXT:    [[TMP10:%.*]] = xor i64 [[TMP9]], 87960930222080
-; ORIGIN-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP10]] to ptr
-; ORIGIN-NEXT:    [[TMP12:%.*]] = add i64 [[TMP10]], 17592186044416
+; ORIGIN-NEXT:    [[TMP34:%.*]] = add i64 [[TMP10]], 2097152
+; ORIGIN-NEXT:    [[TMP11:%.*]] = inttoptr i64 [[TMP34]] to ptr
+; ORIGIN-NEXT:    [[TMP12:%.*]] = add i64 [[TMP10]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = inttoptr i64 [[TMP12]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD1:%.*]] = load <vscale x 2 x i32>, ptr [[TMP11]], align 8
 ; ORIGIN-NEXT:    [[TMP14:%.*]] = load i32, ptr [[TMP13]], align 8
@@ -259,14 +278,15 @@ define void @test_load_store_add_float(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP18:%.*]] = fadd <vscale x 2 x float> [[TMP1]], [[TMP8]]
 ; ORIGIN-NEXT:    [[TMP19:%.*]] = ptrtoint ptr [[B]] to i64
 ; ORIGIN-NEXT:    [[TMP20:%.*]] = xor i64 [[TMP19]], 87960930222080
-; ORIGIN-NEXT:    [[TMP21:%.*]] = inttoptr i64 [[TMP20]] to ptr
-; ORIGIN-NEXT:    [[TMP22:%.*]] = add i64 [[TMP20]], 17592186044416
+; ORIGIN-NEXT:    [[TMP35:%.*]] = add i64 [[TMP20]], 2097152
+; ORIGIN-NEXT:    [[TMP21:%.*]] = inttoptr i64 [[TMP35]] to ptr
+; ORIGIN-NEXT:    [[TMP22:%.*]] = add i64 [[TMP20]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP23:%.*]] = inttoptr i64 [[TMP22]] to ptr
 ; ORIGIN-NEXT:    store <vscale x 2 x i32> [[_MSLD1]], ptr [[TMP21]], align 8
 ; ORIGIN-NEXT:    [[TMP24:%.*]] = call i32 @llvm.vector.reduce.or.nxv2i32(<vscale x 2 x i32> [[_MSLD1]])
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP24]], 0
 ; ORIGIN-NEXT:    br i1 [[_MSCMP]], label [[TMP25:%.*]], label [[TMP32:%.*]], !prof [[PROF1]]
-; ORIGIN:       25:
+; ORIGIN:       28:
 ; ORIGIN-NEXT:    [[TMP26:%.*]] = call i32 @__msan_chain_origin(i32 [[TMP14]])
 ; ORIGIN-NEXT:    [[TMP27:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGIN-NEXT:    [[TMP28:%.*]] = mul nuw i64 [[TMP27]], 8
@@ -282,7 +302,7 @@ define void @test_load_store_add_float(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
 ; ORIGIN:       .split.split:
 ; ORIGIN-NEXT:    br label [[TMP32]]
-; ORIGIN:       32:
+; ORIGIN:       35:
 ; ORIGIN-NEXT:    store <vscale x 2 x float> [[TMP8]], ptr [[B]], align 8
 ; ORIGIN-NEXT:    ret void
 ;
@@ -300,7 +320,8 @@ define <vscale x 2 x float> @fn_ret(ptr %a) sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = load <vscale x 2 x float>, ptr [[A]], align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <vscale x 2 x i32>, ptr [[TMP4]], align 8
 ; CHECK-NEXT:    store <vscale x 2 x i32> [[_MSLD]], ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    ret <vscale x 2 x float> [[TMP1]]
@@ -311,8 +332,9 @@ define <vscale x 2 x float> @fn_ret(ptr %a) sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = load <vscale x 2 x float>, ptr [[A]], align 8
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[A]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load <vscale x 2 x i32>, ptr [[TMP4]], align 8
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 8
@@ -335,7 +357,8 @@ define void @test_ret(ptr %a, ptr %b) sanitize_memory {
 ; CHECK-NEXT:    [[_MSRET:%.*]] = load <vscale x 2 x i32>, ptr @__msan_retval_tls, align 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP6:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; CHECK-NEXT:    store <vscale x 2 x i32> [[_MSRET]], ptr [[TMP4]], align 8
 ; CHECK-NEXT:    store <vscale x 2 x float> [[TMP5]], ptr [[B]], align 8
 ; CHECK-NEXT:    ret void
@@ -353,14 +376,15 @@ define void @test_ret(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP4:%.*]] = load i32, ptr @__msan_retval_origin_tls, align 4
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = ptrtoint ptr [[B]] to i64
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
-; ORIGIN-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
-; ORIGIN-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 17592186044416
+; ORIGIN-NEXT:    [[TMP19:%.*]] = add i64 [[TMP6]], 2097152
+; ORIGIN-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP19]] to ptr
+; ORIGIN-NEXT:    [[TMP8:%.*]] = add i64 [[TMP6]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = inttoptr i64 [[TMP8]] to ptr
 ; ORIGIN-NEXT:    store <vscale x 2 x i32> [[_MSRET]], ptr [[TMP7]], align 8
 ; ORIGIN-NEXT:    [[TMP10:%.*]] = call i32 @llvm.vector.reduce.or.nxv2i32(<vscale x 2 x i32> [[_MSRET]])
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP10]], 0
 ; ORIGIN-NEXT:    br i1 [[_MSCMP]], label [[TMP11:%.*]], label [[TMP18:%.*]], !prof [[PROF1]]
-; ORIGIN:       11:
+; ORIGIN:       12:
 ; ORIGIN-NEXT:    [[TMP12:%.*]] = call i32 @__msan_chain_origin(i32 [[TMP4]])
 ; ORIGIN-NEXT:    [[TMP13:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGIN-NEXT:    [[TMP14:%.*]] = mul nuw i64 [[TMP13]], 8
@@ -376,7 +400,7 @@ define void @test_ret(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
 ; ORIGIN:       .split.split:
 ; ORIGIN-NEXT:    br label [[TMP18]]
-; ORIGIN:       18:
+; ORIGIN:       19:
 ; ORIGIN-NEXT:    store <vscale x 2 x float> [[TMP3]], ptr [[B]], align 8
 ; ORIGIN-NEXT:    ret void
 ;
@@ -391,7 +415,8 @@ define void @fn_param(<vscale x 2 x float> %a, ptr %b) sanitize_memory {
 ; CHECK-NEXT:    call void @llvm.donothing()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[B]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    store <vscale x 2 x i32> zeroinitializer, ptr [[TMP3]], align 8
 ; CHECK-NEXT:    store <vscale x 2 x float> [[A]], ptr [[B]], align 8
 ; CHECK-NEXT:    ret void
@@ -401,14 +426,15 @@ define void @fn_param(<vscale x 2 x float> %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    call void @llvm.donothing()
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[B]] to i64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGIN-NEXT:    [[TMP15:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP15]] to ptr
+; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; ORIGIN-NEXT:    store <vscale x 2 x i32> zeroinitializer, ptr [[TMP3]], align 8
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = call i32 @llvm.vector.reduce.or.nxv2i32(<vscale x 2 x i32> zeroinitializer)
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP6]], 0
 ; ORIGIN-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP14:%.*]], !prof [[PROF1]]
-; ORIGIN:       7:
+; ORIGIN:       8:
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = call i32 @__msan_chain_origin(i32 0)
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGIN-NEXT:    [[TMP10:%.*]] = mul nuw i64 [[TMP9]], 8
@@ -424,7 +450,7 @@ define void @fn_param(<vscale x 2 x float> %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
 ; ORIGIN:       .split.split:
 ; ORIGIN-NEXT:    br label [[TMP14]]
-; ORIGIN:       14:
+; ORIGIN:       15:
 ; ORIGIN-NEXT:    store <vscale x 2 x float> [[A]], ptr [[B]], align 8
 ; ORIGIN-NEXT:    ret void
 ;
@@ -440,16 +466,17 @@ define void @test_param(ptr %a, ptr %b) sanitize_memory {
 ; CHECK-NEXT:    [[TMP2:%.*]] = load <vscale x 2 x float>, ptr [[A]], align 8
 ; CHECK-NEXT:    [[TMP3:%.*]] = ptrtoint ptr [[A]] to i64
 ; CHECK-NEXT:    [[TMP4:%.*]] = xor i64 [[TMP3]], 87960930222080
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT:    [[TMP7:%.*]] = add i64 [[TMP4]], 2097152
+; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; CHECK-NEXT:    [[_MSLD:%.*]] = load <vscale x 2 x i32>, ptr [[TMP5]], align 8
 ; CHECK-NEXT:    store i64 [[TMP1]], ptr @__msan_param_tls, align 8
 ; CHECK-NEXT:    [[TMP6:%.*]] = call i32 @llvm.vector.reduce.or.nxv2i32(<vscale x 2 x i32> [[_MSLD]])
 ; CHECK-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP6]], 0
-; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP7:%.*]], label [[TMP8:%.*]], !prof [[PROF1:![0-9]+]]
-; CHECK:       7:
+; CHECK-NEXT:    br i1 [[_MSCMP]], label [[TMP8:%.*]], label [[TMP9:%.*]], !prof [[PROF1:![0-9]+]]
+; CHECK:       8:
 ; CHECK-NEXT:    call void @__msan_warning_noreturn() #[[ATTR5:[0-9]+]]
 ; CHECK-NEXT:    unreachable
-; CHECK:       8:
+; CHECK:       9:
 ; CHECK-NEXT:    call void @fn_param(<vscale x 2 x float> [[TMP2]], ptr [[B]])
 ; CHECK-NEXT:    ret void
 ;
@@ -461,8 +488,9 @@ define void @test_param(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = load <vscale x 2 x float>, ptr [[A]], align 8
 ; ORIGIN-NEXT:    [[TMP4:%.*]] = ptrtoint ptr [[A]] to i64
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = xor i64 [[TMP4]], 87960930222080
-; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
-; ORIGIN-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 17592186044416
+; ORIGIN-NEXT:    [[TMP11:%.*]] = add i64 [[TMP5]], 2097152
+; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP11]] to ptr
+; ORIGIN-NEXT:    [[TMP7:%.*]] = add i64 [[TMP5]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr
 ; ORIGIN-NEXT:    [[_MSLD:%.*]] = load <vscale x 2 x i32>, ptr [[TMP6]], align 8
 ; ORIGIN-NEXT:    [[TMP9:%.*]] = load i32, ptr [[TMP8]], align 8
@@ -470,11 +498,11 @@ define void @test_param(ptr %a, ptr %b) sanitize_memory {
 ; ORIGIN-NEXT:    store i32 [[TMP2]], ptr @__msan_param_origin_tls, align 4
 ; ORIGIN-NEXT:    [[TMP10:%.*]] = call i32 @llvm.vector.reduce.or.nxv2i32(<vscale x 2 x i32> [[_MSLD]])
 ; ORIGIN-NEXT:    [[_MSCMP:%.*]] = icmp ne i32 [[TMP10]], 0
-; ORIGIN-NEXT:    br i1 [[_MSCMP]], label [[TMP11:%.*]], label [[TMP12:%.*]], !prof [[PROF1]]
-; ORIGIN:       11:
+; ORIGIN-NEXT:    br i1 [[_MSCMP]], label [[TMP12:%.*]], label [[TMP13:%.*]], !prof [[PROF1]]
+; ORIGIN:       12:
 ; ORIGIN-NEXT:    call void @__msan_warning_with_origin_noreturn(i32 [[TMP9]]) #[[ATTR5:[0-9]+]]
 ; ORIGIN-NEXT:    unreachable
-; ORIGIN:       12:
+; ORIGIN:       13:
 ; ORIGIN-NEXT:    call void @fn_param(<vscale x 2 x float> [[TMP3]], ptr [[B]])
 ; ORIGIN-NEXT:    ret void
 ;
@@ -493,7 +521,8 @@ define void @test_alloca1() sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = mul nuw i64 [[TMP0]], 8
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[X]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP4]], i8 -1, i64 [[TMP1]], i1 false)
 ; CHECK-NEXT:    ret void
 ;
@@ -506,8 +535,9 @@ define void @test_alloca1() sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = mul nuw i64 [[TMP0]], 8
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[X]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = and i64 [[TMP5]], -4
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP4]], i8 -1, i64 [[TMP1]], i1 false)
@@ -529,7 +559,8 @@ define void @test_alloca2() sanitize_memory {
 ; CHECK-NEXT:    [[TMP1:%.*]] = mul nuw i64 [[TMP0]], 512
 ; CHECK-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[X]] to i64
 ; CHECK-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP4]], i8 -1, i64 [[TMP1]], i1 false)
 ; CHECK-NEXT:    ret void
 ;
@@ -542,8 +573,9 @@ define void @test_alloca2() sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = mul nuw i64 [[TMP0]], 512
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = ptrtoint ptr [[X]] to i64
 ; ORIGIN-NEXT:    [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080
-; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
-; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592186044416
+; ORIGIN-NEXT:    [[TMP8:%.*]] = add i64 [[TMP3]], 2097152
+; ORIGIN-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP8]] to ptr
+; ORIGIN-NEXT:    [[TMP5:%.*]] = add i64 [[TMP3]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = and i64 [[TMP5]], -4
 ; ORIGIN-NEXT:    [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP4]], i8 -1, i64 [[TMP1]], i1 false)
@@ -564,7 +596,8 @@ define void @test_alloca3() sanitize_memory {
 ; CHECK-NEXT:    [[TMP0:%.*]] = call i64 @llvm.vscale.i64()
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[X]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
+; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP4]] to ptr
 ; CHECK-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP3]], i8 -1, i64 [[TMP0]], i1 false)
 ; CHECK-NEXT:    ret void
 ;
@@ -576,8 +609,9 @@ define void @test_alloca3() sanitize_memory {
 ; ORIGIN-NEXT:    [[TMP0:%.*]] = call i64 @llvm.vscale.i64()
 ; ORIGIN-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[X]] to i64
 ; ORIGIN-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
+; ORIGIN-NEXT:    [[TMP7:%.*]] = add i64 [[TMP2]], 2097152
+; ORIGIN-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP7]] to ptr
+; ORIGIN-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592188141568
 ; ORIGIN-NEXT:    [[TMP5:%.*]] = and i64 [[TMP4]], -4
 ; ORIGIN-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
 ; ORIGIN-NEXT:    call void @llvm.memset.p0.i64(ptr align 4 [[TMP3]], i8 -1, i64 [[TMP0]], i1 false)

>From e2d0e28a7f822ba0a7a9162ef4a5a19c060a331e Mon Sep 17 00:00:00 2001
From: Azat Khuzhin <a3at.mail at gmail.com>
Date: Sat, 13 Dec 2025 20:13:09 +0100
Subject: [PATCH 4/4] [llvm/test/Instrumentation/MemorySanitizer] update
 vector-track-origins-struct.ll manually

---
 .../vector-track-origins-struct.ll            | 25 ++++++++++---------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/llvm/test/Instrumentation/MemorySanitizer/vector-track-origins-struct.ll b/llvm/test/Instrumentation/MemorySanitizer/vector-track-origins-struct.ll
index c6d0f8fcebc03..9caf55bbd1cfe 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/vector-track-origins-struct.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/vector-track-origins-struct.ll
@@ -20,19 +20,20 @@ define { i32, i8 } @main() sanitize_memory {
 ; CHECK-NEXT:    [[O:%.*]] = load { i32, i8 }, ptr [[P]], align 4
 ; CHECK-NEXT:    [[TMP1:%.*]] = ptrtoint ptr [[P]] to i64
 ; CHECK-NEXT:    [[TMP2:%.*]] = xor i64 [[TMP1]], 87960930222080
-; CHECK-NEXT:    [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
-; CHECK-NEXT:    [[TMP4:%.*]] = add i64 [[TMP2]], 17592186044416
-; CHECK-NEXT:    [[TMP5:%.*]] = inttoptr i64 [[TMP4]] to ptr
-; CHECK-NEXT:    [[_MSLD:%.*]] = load { i32, i8 }, ptr [[TMP3]], align 4
-; CHECK-NEXT:    [[TMP6:%.*]] = load i32, ptr [[TMP5]], align 4
+; CHECK-NEXT:    [[TMP3:%.*]] = add i64 [[TMP2]], 2097152
+; CHECK-NEXT:    [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr
+; CHECK-NEXT:    [[TMP5:%.*]] = add i64 [[TMP2]], 17592188141568
+; CHECK-NEXT:    [[TMP6:%.*]] = inttoptr i64 [[TMP5]] to ptr
+; CHECK-NEXT:    [[_MSLD:%.*]] = load { i32, i8 }, ptr [[TMP4]], align 4
+; CHECK-NEXT:    [[TMP7:%.*]] = load i32, ptr [[TMP6]], align 4
 ; CHECK-NEXT:    store { i32, i8 } zeroinitializer, ptr @__msan_retval_tls, align 8
-; CHECK-NEXT:    [[TMP7:%.*]] = extractvalue { i32, i8 } [[_MSLD]], 0
-; CHECK-NEXT:    [[TMP8:%.*]] = icmp ne i32 [[TMP7]], 0
-; CHECK-NEXT:    [[TMP9:%.*]] = extractvalue { i32, i8 } [[_MSLD]], 1
-; CHECK-NEXT:    [[TMP10:%.*]] = icmp ne i8 [[TMP9]], 0
-; CHECK-NEXT:    [[TMP11:%.*]] = or i1 [[TMP8]], [[TMP10]]
-; CHECK-NEXT:    [[TMP12:%.*]] = zext i1 [[TMP11]] to i64
-; CHECK-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP12]], i32 zeroext [[TMP6]])
+; CHECK-NEXT:    [[TMP8:%.*]] = extractvalue { i32, i8 } [[_MSLD]], 0
+; CHECK-NEXT:    [[TMP9:%.*]] = icmp ne i32 [[TMP8]], 0
+; CHECK-NEXT:    [[TMP10:%.*]] = extractvalue { i32, i8 } [[_MSLD]], 1
+; CHECK-NEXT:    [[TMP11:%.*]] = icmp ne i8 [[TMP10]], 0
+; CHECK-NEXT:    [[TMP12:%.*]] = or i1 [[TMP9]], [[TMP11]]
+; CHECK-NEXT:    [[TMP13:%.*]] = zext i1 [[TMP12]] to i64
+; CHECK-NEXT:    call void @__msan_maybe_warning_8(i64 zeroext [[TMP13]], i32 zeroext [[TMP7]])
 ; CHECK-NEXT:    ret { i32, i8 } [[O]]
 ;
   %p = inttoptr i64 0 to ptr


