[clang] [lld] [llvm] [WIP][IR][Constants] Change the semantic of `ConstantPointerNull` to represent an actual `nullptr` instead of a zero-value pointer (PR #166667)
Shilei Tian via llvm-commits
llvm-commits at lists.llvm.org
Sun Nov 16 21:16:53 PST 2025
https://github.com/shiltian updated https://github.com/llvm/llvm-project/pull/166667
>From c52a3b6b8b1cd3fa4c7e58d4d27265482ea837d2 Mon Sep 17 00:00:00 2001
From: Shilei Tian <i at tianshilei.me>
Date: Sat, 15 Nov 2025 23:19:05 -0500
Subject: [PATCH] [WIP][IR][Constants] Change the semantic of
`ConstantPointerNull` to represent an actual `nullptr` instead of a
zero-value pointer
The value of a `nullptr` is not always `0`. For example, on AMDGPU, the `nullptr` in address spaces 3 and 5 is `0xffffffff`. Currently, there is no target-independent way to get this information, making it difficult and error-prone to handle null pointers in target-agnostic code.
We do have `ConstantPointerNull`, but it is a little confusing and misleading: it represents a pointer with an all-zero value rather than necessarily a real `nullptr`. Therefore, to represent a real `nullptr` in address space `N`, we need to use `addrspacecast ptr null to ptr addrspace(N)`, which cannot be constant-folded.
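For illustration, here is a sketch of the status quo (assuming AMDGPU's data layout, where the nullptr of the private address space 5 is `0xffffffff`):

```llvm
; Today, `ptr addrspace(5) null` is the all-zero pointer, so the real nullptr
; of AMDGPU's private address space (0xffffffff) can only be spelled as a
; cast that cannot be constant-folded:
@p = global ptr addrspace(5) addrspacecast (ptr null to ptr addrspace(5))
```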
In this PR, we change the semantics of `ConstantPointerNull` to represent an actual `nullptr` instead of a zero-value pointer. Here are the detailed changes:
* `ptr addrspace(N) null` will represent the actual `nullptr` in address space `N`.
* `ptr addrspace(N) zeroinitializer` will represent a zero-value pointer in address space `N`.
* `Constant::getNullValue` will return a _null_ value. This is the same as the current semantics, except for `PointerType`, for which it returns a real `nullptr`.
* `Constant::getZeroValue` will return a zero-value constant. This is exactly the same as the current semantics. To represent a zero-value pointer, a `ConstantExpr` will be used (effectively `inttoptr (i8 0 to ptr addrspace(N))`).
* Correspondingly, there will be both `Constant::isNullValue` and `Constant::isZeroValue`.
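With the proposed semantics, both notions become directly expressible in IR. An illustrative sketch (the exact printed form may differ in the final patch):

```llvm
; Sketch of the proposed semantics for AMDGPU address space 5:
; `null` is the actual nullptr of the address space (0xffffffff here), and
; `zeroinitializer` is the pointer whose bit pattern is all zeros.
@real_null = global ptr addrspace(5) null
@zero_bits = global ptr addrspace(5) zeroinitializer
```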
The RFC is https://discourse.llvm.org/t/rfc-introduce-sentinel-pointer-value-to-datalayout/85265. It is a little old and the title may look different, but everything eventually converged to this change. An early attempt can be found in https://github.com/llvm/llvm-project/pull/131557, which contains much valuable discussion as well.
This PR is still WIP, but any early feedback is welcome. I'll include as many of the necessary code changes as possible in this PR, but eventually it will need to be carefully split into multiple PRs, which I'll do after the changes look good to everyone.
---
clang/lib/Basic/Targets/AMDGPU.cpp | 2 +-
clang/test/CodeGen/target-data.c | 4 +-
clang/test/CodeGenOpenCL/amdgpu-env-amdgcn.cl | 2 +-
lld/test/ELF/lto/amdgcn-oses.ll | 6 +-
lld/test/ELF/lto/amdgcn.ll | 2 +-
llvm/docs/LangRef.rst | 8 +-
llvm/include/llvm/IR/Constant.h | 2 +
llvm/include/llvm/IR/Constants.h | 9 ++
llvm/include/llvm/IR/DataLayout.h | 13 +-
llvm/lib/Analysis/ConstantFolding.cpp | 41 ++++-
llvm/lib/AsmParser/LLParser.cpp | 2 +-
llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp | 11 +-
llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp | 15 +-
.../SelectionDAG/SelectionDAGBuilder.cpp | 14 +-
llvm/lib/IR/AsmWriter.cpp | 9 ++
llvm/lib/IR/ConstantFold.cpp | 34 ++++
llvm/lib/IR/Constants.cpp | 97 ++++++++++--
llvm/lib/IR/DataLayout.cpp | 41 ++++-
llvm/lib/IR/Verifier.cpp | 2 +-
llvm/lib/TargetParser/TargetDataLayout.cpp | 8 +-
.../Transforms/Vectorize/SLPVectorizer.cpp | 4 +-
llvm/test/Assembler/ConstantExprFoldCast.ll | 4 +-
.../AMDGPU/32-bit-local-address-space.ll | 4 +-
.../bug_shuffle_vector_to_scalar.ll | 2 +-
.../GlobalISel/irtranslator-amdgpu_kernel.ll | 4 +-
.../GlobalISel/llvm.amdgcn.set.inactive.ll | 4 +-
.../AMDGPU/addrspacecast-constantexpr.ll | 8 +-
.../test/CodeGen/AMDGPU/addrspacecast.r600.ll | 2 +-
.../AMDGPU/agpr-copy-no-free-registers.ll | 90 +++++------
.../AMDGPU/amdgpu-late-codegenprepare.ll | 8 +-
...der-no-live-segment-at-def-implicit-def.ll | 11 +-
.../branch-folding-implicit-def-subreg.ll | 127 ++++++++-------
.../CodeGen/AMDGPU/cf-loop-on-constant.ll | 4 +-
llvm/test/CodeGen/AMDGPU/chain-hi-to-lo.ll | 34 ++--
llvm/test/CodeGen/AMDGPU/collapse-endcf.ll | 30 ++--
...cannot-create-empty-or-backward-segment.ll | 7 +-
.../hazard-recognizer-src-shared-base.ll | 11 +-
.../issue130120-eliminate-frame-index.ll | 30 ++--
...llvm.amdgcn.iglp.AFLCustomIRMutator.opt.ll | 35 ++--
.../AMDGPU/llvm.amdgcn.set.inactive.ll | 4 +-
llvm/test/CodeGen/AMDGPU/load-hi16.ll | 16 +-
llvm/test/CodeGen/AMDGPU/load-lo16.ll | 14 +-
.../lower-buffer-fat-pointers-constants.ll | 2 +-
.../CodeGen/AMDGPU/lower-mem-intrinsics.ll | 2 +-
...p-var-out-of-divergent-loop-swdev407790.ll | 19 +--
.../AMDGPU/promote-constOffset-to-imm.ll | 77 ++++-----
...emove-incompatible-extended-image-insts.ll | 2 +-
.../AMDGPU/remove-incompatible-functions.ll | 2 +-
.../CodeGen/AMDGPU/remove-incompatible-gws.ll | 2 +-
.../AMDGPU/remove-incompatible-s-time.ll | 4 +-
.../AMDGPU/sdwa-peephole-instr-combine-sel.ll | 3 +-
.../test/CodeGen/AMDGPU/setcc-multiple-use.ll | 2 +-
.../CodeGen/AMDGPU/stacksave_stackrestore.ll | 10 +-
llvm/test/CodeGen/AMDGPU/swdev282079.ll | 4 +-
...-call-inreg-arguments.convergencetokens.ll | 2 +-
.../AMDGPU/tail-call-inreg-arguments.ll | 2 +-
...-in-vgprs-issue110930.convergencetokens.ll | 2 +-
...all-uniform-target-in-vgprs-issue110930.ll | 2 +-
.../AMDGPU/tuple-allocation-failure.ll | 122 +++++++-------
.../AMDGPU/undef-handling-crash-in-ra.ll | 3 +-
.../AMDGPU/unstructured-cfg-def-use-issue.ll | 149 +++++++++---------
llvm/test/CodeGen/AMDGPU/v_cmp_gfx11.ll | 6 +-
.../CodeGen/AMDGPU/waterfall_kills_scc.ll | 9 +-
llvm/test/CodeGen/AMDGPU/wqm.ll | 18 ++-
.../AddressSanitizer/asan-scalable-vector.ll | 13 +-
.../HWAddressSanitizer/RISCV/alloca.ll | 6 +-
.../HWAddressSanitizer/RISCV/basic.ll | 52 +++---
.../HWAddressSanitizer/X86/alloca-array.ll | 2 +-
.../X86/alloca-with-calls.ll | 2 +-
.../HWAddressSanitizer/X86/alloca.ll | 12 +-
.../HWAddressSanitizer/X86/atomic.ll | 4 +-
.../HWAddressSanitizer/X86/basic.ll | 82 +++++-----
.../HWAddressSanitizer/alloca.ll | 6 +-
.../HWAddressSanitizer/basic.ll | 52 +++---
.../hwasan-pass-second-run.ll | 2 +-
.../HWAddressSanitizer/kernel-alloca.ll | 2 +-
.../HWAddressSanitizer/prologue.ll | 16 +-
.../HWAddressSanitizer/zero-ptr.ll | 2 +-
.../MemorySanitizer/msan_basic.ll | 24 +--
.../TypeSanitizer/access-with-offset.ll | 2 +-
...-addrspacecast-with-constantpointernull.ll | 8 +-
llvm/test/Transforms/GlobalOpt/issue62384.ll | 22 +--
.../InferAddressSpaces/AMDGPU/basic.ll | 2 +-
.../InferAddressSpaces/AMDGPU/issue110433.ll | 2 +-
.../InferAddressSpaces/AMDGPU/phi-poison.ll | 4 +-
.../Transforms/InstCombine/addrspacecast.ll | 4 +-
llvm/test/Transforms/InstCombine/assume.ll | 5 +-
.../InstCombine/gep-inbounds-null.ll | 7 +-
.../InstCombine/or-select-zero-icmp.ll | 2 +-
.../InstCombine/ptrtoint-nullgep.ll | 22 ++-
.../ConstProp/inttoptr-gep-nonintegral.ll | 2 +-
.../AMDGPU/lsr-invalid-ptr-extend.ll | 6 +-
.../Transforms/LoopStrengthReduce/funclet.ll | 32 ++--
.../Transforms/LoopStrengthReduce/pr27056.ll | 4 +-
.../non-literal-type.ll | 10 +-
.../heap-to-shared-missing-declarations.ll | 11 +-
.../OpenMP/spmdization_kernel_env_dep.ll | 2 +-
.../RewriteStatepointsForGC/pr56493.ll | 2 +-
.../SLPVectorizer/X86/stacksave-dependence.ll | 2 +-
.../X86/vectorize-widest-phis.ll | 2 +-
llvm/test/Transforms/SROA/basictest.ll | 12 +-
llvm/tools/llvm-stress/llvm-stress.cpp | 2 +-
llvm/unittests/IR/InstructionsTest.cpp | 2 +-
103 files changed, 973 insertions(+), 692 deletions(-)
diff --git a/clang/lib/Basic/Targets/AMDGPU.cpp b/clang/lib/Basic/Targets/AMDGPU.cpp
index d4d696b8456b6..a0f76a4239e11 100644
--- a/clang/lib/Basic/Targets/AMDGPU.cpp
+++ b/clang/lib/Basic/Targets/AMDGPU.cpp
@@ -31,7 +31,7 @@ static const char *const DataLayoutStringR600 =
"-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5-G1";
static const char *const DataLayoutStringAMDGCN =
- "e-m:e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32"
+ "e-m:e-p:64:64-p1:64:64-po2:32:32-po3:32:32-p4:64:64-po5:32:32-p6:32:32"
"-p7:160:256:256:32-p8:128:128:128:48-p9:192:256:256:32-i64:64-"
"v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-"
"v2048:2048-n32:64-S32-A5-G1-ni:7:8:9";
diff --git a/clang/test/CodeGen/target-data.c b/clang/test/CodeGen/target-data.c
index e95079490bd3c..17314dac23851 100644
--- a/clang/test/CodeGen/target-data.c
+++ b/clang/test/CodeGen/target-data.c
@@ -160,12 +160,12 @@
// RUN: %clang_cc1 -triple amdgcn-unknown -target-cpu hawaii -o - -emit-llvm %s \
// RUN: | FileCheck %s -check-prefix=R600SI
-// R600SI: target datalayout = "e-m:e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-p7:160:256:256:32-p8:128:128:128:48-p9:192:256:256:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5-G1-ni:7:8:9"
+// R600SI: target datalayout = "e-m:e-p:64:64-p1:64:64-po2:32:32-po3:32:32-p4:64:64-po5:32:32-p6:32:32-p7:160:256:256:32-p8:128:128:128:48-p9:192:256:256:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5-G1-ni:7:8:9"
// Test default -target-cpu
// RUN: %clang_cc1 -triple amdgcn-unknown -o - -emit-llvm %s \
// RUN: | FileCheck %s -check-prefix=R600SIDefault
-// R600SIDefault: target datalayout = "e-m:e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-p7:160:256:256:32-p8:128:128:128:48-p9:192:256:256:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5-G1-ni:7:8:9"
+// R600SIDefault: target datalayout = "e-m:e-p:64:64-p1:64:64-po2:32:32-po3:32:32-p4:64:64-po5:32:32-p6:32:32-p7:160:256:256:32-p8:128:128:128:48-p9:192:256:256:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5-G1-ni:7:8:9"
// RUN: %clang_cc1 -triple arm64-unknown -o - -emit-llvm %s | \
// RUN: FileCheck %s -check-prefix=AARCH64
diff --git a/clang/test/CodeGenOpenCL/amdgpu-env-amdgcn.cl b/clang/test/CodeGenOpenCL/amdgpu-env-amdgcn.cl
index 72ce72644b8ea..b1200c98d550a 100644
--- a/clang/test/CodeGenOpenCL/amdgpu-env-amdgcn.cl
+++ b/clang/test/CodeGenOpenCL/amdgpu-env-amdgcn.cl
@@ -1,5 +1,5 @@
// RUN: %clang_cc1 %s -O0 -triple amdgcn -emit-llvm -o - | FileCheck %s
// RUN: %clang_cc1 %s -O0 -triple amdgcn---opencl -emit-llvm -o - | FileCheck %s
-// CHECK: target datalayout = "e-m:e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-p7:160:256:256:32-p8:128:128:128:48-p9:192:256:256:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5-G1-ni:7:8:9"
+// CHECK: target datalayout = "e-m:e-p:64:64-p1:64:64-po2:32:32-po3:32:32-p4:64:64-po5:32:32-p6:32:32-p7:160:256:256:32-p8:128:128:128:48-p9:192:256:256:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5-G1-ni:7:8:9"
void foo(void) {}
diff --git a/lld/test/ELF/lto/amdgcn-oses.ll b/lld/test/ELF/lto/amdgcn-oses.ll
index b3caf0f0de3b9..151cd664dcebb 100644
--- a/lld/test/ELF/lto/amdgcn-oses.ll
+++ b/lld/test/ELF/lto/amdgcn-oses.ll
@@ -25,7 +25,7 @@
;--- amdhsa.ll
target triple = "amdgcn-amd-amdhsa"
-target datalayout = "e-m:e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5"
+target datalayout = "e-m:e-p:64:64-p1:64:64-po2:32:32-po3:32:32-p4:64:64-po5:32:32-p6:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5"
!llvm.module.flags = !{!0}
!0 = !{i32 1, !"amdhsa_code_object_version", i32 500}
@@ -36,7 +36,7 @@ define void @_start() {
;--- amdpal.ll
target triple = "amdgcn-amd-amdpal"
-target datalayout = "e-m:e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5"
+target datalayout = "e-m:e-p:64:64-p1:64:64-po2:32:32-po3:32:32-p4:64:64-po5:32:32-p6:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5"
define amdgpu_cs void @_start() {
ret void
@@ -44,7 +44,7 @@ define amdgpu_cs void @_start() {
;--- mesa3d.ll
target triple = "amdgcn-amd-mesa3d"
-target datalayout = "e-m:e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5"
+target datalayout = "e-m:e-p:64:64-p1:64:64-po2:32:32-po3:32:32-p4:64:64-po5:32:32-p6:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5"
define void @_start() {
ret void
diff --git a/lld/test/ELF/lto/amdgcn.ll b/lld/test/ELF/lto/amdgcn.ll
index 186185c44a2c2..8cec59611d187 100644
--- a/lld/test/ELF/lto/amdgcn.ll
+++ b/lld/test/ELF/lto/amdgcn.ll
@@ -5,7 +5,7 @@
; Make sure the amdgcn triple is handled
target triple = "amdgcn-amd-amdhsa"
-target datalayout = "e-m:e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5"
+target datalayout = "e-m:e-p:64:64-p1:64:64-po2:32:32-po3:32:32-p4:64:64-po5:32:32-p6:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5"
define void @_start() {
ret void
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 1a8886dd79c9c..3a5763d4747cd 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -3322,8 +3322,12 @@ as follows:
default address space 0. The value of ``<as>`` must be in the range [1,2^24).
The optional ``<flags>`` are used to specify properties of pointers in this
address space: the character ``u`` marks pointers as having an unstable
- representation, and ``e`` marks pointers having external state. See
- :ref:`Non-Integral Pointer Types <nointptrtype>`.
representation; ``e`` marks pointers as having external state; ``z`` marks
the null pointer value as all-zeros (the default when the flag is omitted);
``o`` marks the null pointer value as all-ones; ``c`` marks the null pointer
value as custom (neither all-zeros nor all-ones), in which case LLVM cannot
fold various casts involving null pointers.
+ See :ref:`Non-Integral Pointer Types <nointptrtype>`.
``i<size>:<abi>[:<pref>]``
This specifies the alignment for an integer type of a given bit
diff --git a/llvm/include/llvm/IR/Constant.h b/llvm/include/llvm/IR/Constant.h
index 0be1fc172ebd4..ae666baa74240 100644
--- a/llvm/include/llvm/IR/Constant.h
+++ b/llvm/include/llvm/IR/Constant.h
@@ -189,6 +189,8 @@ class Constant : public User {
LLVM_ABI static Constant *getNullValue(Type *Ty);
+ LLVM_ABI static Constant *getZeroValue(Type *Ty);
+
/// @returns the value for an integer or vector of integer constant of the
/// given type that has all its bits set to true.
/// Get the all ones value
diff --git a/llvm/include/llvm/IR/Constants.h b/llvm/include/llvm/IR/Constants.h
index e06e6adbc3130..4806987d12918 100644
--- a/llvm/include/llvm/IR/Constants.h
+++ b/llvm/include/llvm/IR/Constants.h
@@ -414,6 +414,9 @@ class ConstantAggregate : public Constant {
/// Transparently provide more efficient getOperand methods.
DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Constant);
+ /// Return true if the constant aggregate contains all null values.
+ inline bool isNullValue() const;
+
/// Methods for support type inquiry through isa, cast, and dyn_cast:
static bool classof(const Value *V) {
return V->getValueID() >= ConstantAggregateFirstVal &&
@@ -443,6 +446,8 @@ class ConstantArray final : public ConstantAggregate {
// ConstantArray accessors
LLVM_ABI static Constant *get(ArrayType *T, ArrayRef<Constant *> V);
+ LLVM_ABI static Constant *getNullValue(ArrayType *T);
+
private:
static Constant *getImpl(ArrayType *T, ArrayRef<Constant *> V);
@@ -475,6 +480,8 @@ class ConstantStruct final : public ConstantAggregate {
// ConstantStruct accessors
LLVM_ABI static Constant *get(StructType *T, ArrayRef<Constant *> V);
+ LLVM_ABI static Constant *getNullValue(StructType *T);
+
template <typename... Csts>
static std::enable_if_t<are_base_of<Constant, Csts...>::value, Constant *>
get(StructType *T, Csts *...Vs) {
@@ -527,6 +534,8 @@ class ConstantVector final : public ConstantAggregate {
// ConstantVector accessors
LLVM_ABI static Constant *get(ArrayRef<Constant *> V);
+ LLVM_ABI static Constant *getNullValue(VectorType *T);
+
private:
static Constant *getImpl(ArrayRef<Constant *> V);
diff --git a/llvm/include/llvm/IR/DataLayout.h b/llvm/include/llvm/IR/DataLayout.h
index 54458201af0b3..537666023f4a8 100644
--- a/llvm/include/llvm/IR/DataLayout.h
+++ b/llvm/include/llvm/IR/DataLayout.h
@@ -83,6 +83,11 @@ class DataLayout {
/// for additional metadata (e.g. AMDGPU buffer fat pointers with bounds
/// and other flags or CHERI capabilities that contain bounds+permissions).
uint32_t IndexBitWidth;
+  /// The value of the null pointer in this address space: all-zeros,
+  /// all-ones, or std::nullopt. Since we don't have a way to represent an
+  /// arbitrary bit pattern, std::nullopt represents the case where the null
+  /// pointer value is neither 0 nor -1.
+ std::optional<APInt> NullPtrValue;
/// Pointers in this address space don't have a well-defined bitwise
/// representation (e.g. they may be relocated by a copying garbage
/// collector and thus have different addresses at different times).
@@ -158,7 +163,8 @@ class DataLayout {
/// Sets or updates the specification for pointer in the given address space.
void setPointerSpec(uint32_t AddrSpace, uint32_t BitWidth, Align ABIAlign,
Align PrefAlign, uint32_t IndexBitWidth,
- bool HasUnstableRepr, bool HasExternalState);
+ std::optional<APInt> NullPtrValue, bool HasUnstableRepr,
+ bool HasExternalState);
/// Internal helper to get alignment for integer of given bitwidth.
LLVM_ABI Align getIntegerAlignment(uint32_t BitWidth, bool abi_or_pref) const;
@@ -697,6 +703,11 @@ class DataLayout {
///
/// This includes an explicitly requested alignment (if the global has one).
LLVM_ABI Align getPreferredAlign(const GlobalVariable *GV) const;
+
+ /// Returns the value of the nullptr in the given address space.
+ LLVM_ABI std::optional<APInt> getNullPtrValue(unsigned AddrSpace) const {
+ return getPointerSpec(AddrSpace).NullPtrValue;
+ }
};
inline DataLayout *unwrap(LLVMTargetDataRef P) {
diff --git a/llvm/lib/Analysis/ConstantFolding.cpp b/llvm/lib/Analysis/ConstantFolding.cpp
index da32542cf7870..adb09e37c4ccf 100755
--- a/llvm/lib/Analysis/ConstantFolding.cpp
+++ b/llvm/lib/Analysis/ConstantFolding.cpp
@@ -1497,6 +1497,21 @@ Constant *llvm::ConstantFoldCastOperand(unsigned Opcode, Constant *C,
llvm_unreachable("Missing case");
case Instruction::PtrToAddr:
case Instruction::PtrToInt:
+  // If the input is a null pointer, we can fold it to the integer value of
+  // the null pointer in its address space.
+ if (Opcode == Instruction::PtrToInt && C->isNullValue()) {
+ if (std::optional<APInt> NullPtrValue = DL.getNullPtrValue(
+ C->getType()->getScalarType()->getPointerAddressSpace())) {
+ if (NullPtrValue->isZero()) {
+ return Constant::getZeroValue(DestTy);
+ } else if (NullPtrValue->isAllOnes()) {
+ return ConstantInt::get(
+ DestTy, NullPtrValue->zextOrTrunc(DestTy->getScalarSizeInBits()));
+ } else {
+ llvm_unreachable("invalid nullptr value");
+ }
+ }
+ }
if (auto *CE = dyn_cast<ConstantExpr>(C)) {
Constant *FoldedValue = nullptr;
// If the input is an inttoptr, eliminate the pair. This requires knowing
@@ -1543,6 +1558,13 @@ Constant *llvm::ConstantFoldCastOperand(unsigned Opcode, Constant *C,
}
break;
case Instruction::IntToPtr:
+ // We can fold it to a null pointer if the input is the nullptr value.
+ if (std::optional<APInt> NullPtrValue = DL.getNullPtrValue(
+ DestTy->getScalarType()->getPointerAddressSpace())) {
+ if ((NullPtrValue->isZero() && C->isZeroValue()) ||
+ (NullPtrValue->isAllOnes() && C->isAllOnesValue()))
+ return Constant::getNullValue(DestTy);
+ }
// If the input is a ptrtoint, turn the pair into a ptr to ptr bitcast if
// the int size is >= the ptr size and the address spaces are the same.
// This requires knowing the width of a pointer, so it can't be done in
@@ -1561,6 +1583,24 @@ Constant *llvm::ConstantFoldCastOperand(unsigned Opcode, Constant *C,
}
}
break;
+ case Instruction::AddrSpaceCast:
+  // A null pointer (`ptr addrspace(N) null` in IR, `ConstantPointerNull` in
+  // the C++ API, not C/C++ `nullptr`) used to represent a zero-value pointer
+  // in the corresponding address space, so an address space cast of a null
+  // pointer could not simply be folded: on some targets, the nullptr of an
+  // address space is non-zero.
+  //
+  // With the new semantics, `ptr addrspace(N) null` represents the actual
+  // nullptr in the corresponding address space, which can be zero or
+  // non-zero depending on the target, so such a cast can now be folded.
+
+ // If the input is a nullptr, we can fold it to the corresponding
+ // nullptr in the destination address space.
+ if (C->isNullValue())
+ return Constant::getNullValue(DestTy);
+ [[fallthrough]];
case Instruction::Trunc:
case Instruction::ZExt:
case Instruction::SExt:
@@ -1570,7 +1610,6 @@ Constant *llvm::ConstantFoldCastOperand(unsigned Opcode, Constant *C,
case Instruction::SIToFP:
case Instruction::FPToUI:
case Instruction::FPToSI:
- case Instruction::AddrSpaceCast:
break;
case Instruction::BitCast:
return FoldBitCast(C, DestTy, DL);
diff --git a/llvm/lib/AsmParser/LLParser.cpp b/llvm/lib/AsmParser/LLParser.cpp
index 8e3ce4990f437..00748a7151d2b 100644
--- a/llvm/lib/AsmParser/LLParser.cpp
+++ b/llvm/lib/AsmParser/LLParser.cpp
@@ -6575,7 +6575,7 @@ bool LLParser::convertValIDToValue(Type *Ty, ValID &ID, Value *&V,
if (auto *TETy = dyn_cast<TargetExtType>(Ty))
if (!TETy->hasProperty(TargetExtType::HasZeroInit))
return error(ID.Loc, "invalid type for null constant");
- V = Constant::getNullValue(Ty);
+ V = Constant::getZeroValue(Ty);
return false;
case ValID::t_None:
if (!Ty->isTokenTy())
diff --git a/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp b/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
index 3aa245b7f3f1e..35e1920027a89 100644
--- a/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
@@ -4275,8 +4275,15 @@ static void emitGlobalConstantImpl(const DataLayout &DL, const Constant *CV,
return emitGlobalConstantFP(CFP, AP);
}
- if (isa<ConstantPointerNull>(CV)) {
- AP.OutStreamer->emitIntValue(0, Size);
+ if (auto *NullPtr = dyn_cast<ConstantPointerNull>(CV)) {
+ if (std::optional<APInt> NullPtrVal =
+ DL.getNullPtrValue(NullPtr->getType()->getPointerAddressSpace())) {
+ AP.OutStreamer->emitIntValue(NullPtrVal->getSExtValue(), Size);
+ } else {
+ // We fall back to the default behavior of emitting a zero value if we
+ // can't get the null pointer value from the data layout.
+ AP.OutStreamer->emitIntValue(0, Size);
+ }
return;
}
diff --git a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
index 2ec138b6e186d..f339226e70404 100644
--- a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
@@ -3768,9 +3768,18 @@ bool IRTranslator::translate(const Constant &C, Register Reg) {
EntryBuilder->buildFConstant(Reg, *CF);
} else if (isa<UndefValue>(C))
EntryBuilder->buildUndef(Reg);
- else if (isa<ConstantPointerNull>(C))
- EntryBuilder->buildConstant(Reg, 0);
- else if (auto GV = dyn_cast<GlobalValue>(&C))
+ else if (auto *NullPtr = dyn_cast<ConstantPointerNull>(&C)) {
+ const DataLayout &DL = EntryBuilder->getMF().getDataLayout();
+ if (std::optional<APInt> NullPtrValue =
+ DL.getNullPtrValue(NullPtr->getType()->getAddressSpace())) {
+ if (NullPtrValue->isZero())
+ EntryBuilder->buildConstant(Reg, 0);
+ else if (NullPtrValue->isAllOnes())
+ EntryBuilder->buildConstant(Reg, -1);
+ else
+ llvm_unreachable("unknown null pointer value");
+ }
+ } else if (auto GV = dyn_cast<GlobalValue>(&C))
EntryBuilder->buildGlobalValue(Reg, GV);
else if (auto CPA = dyn_cast<ConstantPtrAuth>(&C)) {
Register Addr = getOrCreateVReg(*CPA->getPointer());
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 4f13f3b128ea4..d0caf1dfebc4a 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -1872,8 +1872,18 @@ SDValue SelectionDAGBuilder::getValueImpl(const Value *V) {
getValue(CPA->getDiscriminator()));
}
- if (isa<ConstantPointerNull>(C))
- return DAG.getConstant(0, getCurSDLoc(), VT);
+ if (auto *NullPtr = dyn_cast<ConstantPointerNull>(C)) {
+ const DataLayout &DL = DAG.getDataLayout();
+ if (std::optional<APInt> NullPtrValue =
+ DL.getNullPtrValue(NullPtr->getType()->getAddressSpace())) {
+ if (NullPtrValue->isZero())
+ return DAG.getConstant(0, getCurSDLoc(), VT);
+ else if (NullPtrValue->isAllOnes())
+ return DAG.getAllOnesConstant(getCurSDLoc(), VT);
+ else
+ llvm_unreachable("unknown null pointer value");
+ }
+ }
if (match(C, m_VScale()))
return DAG.getVScale(getCurSDLoc(), VT, APInt(VT.getSizeInBits(), 1));
diff --git a/llvm/lib/IR/AsmWriter.cpp b/llvm/lib/IR/AsmWriter.cpp
index 4d4ffe93a8067..b7106fea22486 100644
--- a/llvm/lib/IR/AsmWriter.cpp
+++ b/llvm/lib/IR/AsmWriter.cpp
@@ -1806,6 +1806,15 @@ static void writeConstantInternal(raw_ostream &Out, const Constant *CV,
}
}
+  // Use zeroinitializer for an inttoptr(0) constant expression.
+ if (CE->getOpcode() == Instruction::IntToPtr) {
+ Constant *SrcCI = cast<Constant>(CE->getOperand(0));
+ if (SrcCI->isZeroValue()) {
+ Out << "zeroinitializer";
+ return;
+ }
+ }
+
Out << CE->getOpcodeName();
writeOptimizationInfo(Out, CE);
Out << " (";
diff --git a/llvm/lib/IR/ConstantFold.cpp b/llvm/lib/IR/ConstantFold.cpp
index 6a9ef2efa321f..c77f47c2ed996 100644
--- a/llvm/lib/IR/ConstantFold.cpp
+++ b/llvm/lib/IR/ConstantFold.cpp
@@ -126,6 +126,40 @@ Constant *llvm::ConstantFoldCastInstruction(unsigned opc, Constant *V,
if (isa<PoisonValue>(V))
return PoisonValue::get(DestTy);
+ if (opc == Instruction::IntToPtr) {
+  // We can't fold inttoptr(0) to ConstantPointerNull without consulting the
+  // target data layout, which is not available here.
+ if (V->isZeroValue())
+ return nullptr;
+  // If the input is a ptrtoint of a zero-value pointer, we can fold it to
+  // the corresponding zero-value pointer.
+ if (auto *CE = dyn_cast<ConstantExpr>(V)) {
+ if (CE->getOpcode() == Instruction::PtrToInt) {
+ if (CE->getOperand(0)->isZeroValue())
+ return Constant::getZeroValue(DestTy);
+ }
+ }
+ }
+ if (opc == Instruction::PtrToInt) {
+  // Similarly, we can't fold ptrtoint(nullptr) to an integer without the
+  // data layout.
+ if (V->isNullValue())
+ return nullptr;
+  // If the input is an inttoptr(0), we can fold it to the corresponding
+  // zero value.
+ if (auto *CE = dyn_cast<ConstantExpr>(V)) {
+ if (CE->getOpcode() == Instruction::IntToPtr) {
+ if (CE->getOperand(0)->isZeroValue())
+ return Constant::getZeroValue(DestTy);
+ }
+ }
+ }
+  // With the new semantics of `ptr addrspace(N) null`, we can fold an
+  // address space cast of a nullptr to the corresponding nullptr in the
+  // destination address space.
+ if (opc == Instruction::AddrSpaceCast && V->isNullValue())
+ return Constant::getNullValue(DestTy);
+
if (isa<UndefValue>(V)) {
// zext(undef) = 0, because the top bits will be zero.
// sext(undef) = 0, because the top bits will all be the same.
diff --git a/llvm/lib/IR/Constants.cpp b/llvm/lib/IR/Constants.cpp
index cbce8bd736102..87f2952ed5d29 100644
--- a/llvm/lib/IR/Constants.cpp
+++ b/llvm/lib/IR/Constants.cpp
@@ -72,17 +72,35 @@ bool Constant::isNegativeZeroValue() const {
}
// Return true iff this constant is positive zero (floating point), negative
-// zero (floating point), or a null value.
+// zero (floating point), zero-value pointer, or a null value.
bool Constant::isZeroValue() const {
+ // We can no longer safely say that a ConstantPointerNull is a zero value.
+ if (isa<ConstantPointerNull>(this))
+ return false;
+
// Floating point values have an explicit -0.0 value.
if (const ConstantFP *CFP = dyn_cast<ConstantFP>(this))
return CFP->isZero();
+  // A zero-value pointer is an inttoptr(0) constant expression.
+ if (const auto *CE = dyn_cast<ConstantExpr>(this)) {
+ if (CE->getOpcode() == Instruction::IntToPtr) {
+ Constant *SrcCI = cast<Constant>(CE->getOperand(0));
+      // The source bit width doesn't matter here: zero-extending or
+      // truncating a zero integer still yields zero.
+ if (SrcCI->isZeroValue())
+ return true;
+ }
+ }
+
// Check for constant splat vectors of 1 values.
if (getType()->isVectorTy())
if (const auto *SplatCFP = dyn_cast_or_null<ConstantFP>(getSplatValue()))
return SplatCFP->isZero();
+ if (isa<ConstantAggregateZero>(this))
+ return true;
+
// Otherwise, just use +0.0.
return isNullValue();
}
@@ -100,8 +118,14 @@ bool Constant::isNullValue() const {
// constant zero is zero for aggregates, cpnull is null for pointers, none for
// tokens.
- return isa<ConstantAggregateZero>(this) || isa<ConstantPointerNull>(this) ||
- isa<ConstantTokenNone>(this) || isa<ConstantTargetNone>(this);
+ if (isa<ConstantPointerNull>(this) || isa<ConstantTokenNone>(this) ||
+ isa<ConstantTargetNone>(this))
+ return true;
+
+ if (auto *CA = dyn_cast<ConstantAggregate>(this))
+ return CA->isNullValue();
+
+ return false;
}
bool Constant::isAllOnesValue() const {
@@ -369,7 +393,10 @@ bool Constant::containsConstantExpression() const {
return false;
}
-/// Constructor to create a '0' constant of arbitrary type.
+/// Constructor that creates a null constant of any type. For most types this
+/// means a constant with value '0'; for pointer types it creates a nullptr
+/// constant, which is not always a zero-value pointer (e.g. in some address
+/// spaces on some targets).
Constant *Constant::getNullValue(Type *Ty) {
switch (Ty->getTypeID()) {
case Type::IntegerTyID:
@@ -386,10 +413,12 @@ Constant *Constant::getNullValue(Type *Ty) {
case Type::PointerTyID:
return ConstantPointerNull::get(cast<PointerType>(Ty));
case Type::StructTyID:
+ return ConstantStruct::getNullValue(cast<StructType>(Ty));
case Type::ArrayTyID:
+ return ConstantArray::getNullValue(cast<ArrayType>(Ty));
case Type::FixedVectorTyID:
case Type::ScalableVectorTyID:
- return ConstantAggregateZero::get(Ty);
+ return ConstantVector::getNullValue(cast<VectorType>(Ty));
case Type::TokenTyID:
return ConstantTokenNone::get(Ty->getContext());
case Type::TargetExtTyID:
@@ -400,6 +429,24 @@ Constant *Constant::getNullValue(Type *Ty) {
}
}
+/// Constructor that creates a zero constant of any type. For most types, this
+/// is equivalent to getNullValue. For pointer types, it creates an inttoptr
+/// constant expression.
+Constant *Constant::getZeroValue(Type *Ty) {
+ switch (Ty->getTypeID()) {
+ case Type::PointerTyID:
+ return ConstantExpr::getIntToPtr(
+ ConstantInt::get(Type::getInt8Ty(Ty->getContext()), 0), Ty);
+ case Type::StructTyID:
+ case Type::ArrayTyID:
+ case Type::FixedVectorTyID:
+ case Type::ScalableVectorTyID:
+ return ConstantAggregateZero::get(Ty);
+ default:
+ return Constant::getNullValue(Ty);
+ }
+}
+
Constant *Constant::getIntegerValue(Type *Ty, const APInt &V) {
Type *ScalarTy = Ty->getScalarType();
@@ -735,7 +782,7 @@ static bool constantIsDead(const Constant *C, bool RemoveDeadUsers) {
ReplaceableMetadataImpl::SalvageDebugInfo(*C);
const_cast<Constant *>(C)->destroyConstant();
}
-
+
return true;
}
@@ -1307,6 +1354,19 @@ ConstantAggregate::ConstantAggregate(Type *T, ValueTy VT,
}
}
+bool ConstantAggregate::isNullValue() const {
+ if (getType()->isVectorTy()) {
+ Constant *V = getSplatValue();
+ return V && V->isNullValue();
+ }
+
+ for (unsigned I = 0; I < getNumOperands(); ++I) {
+ if (!getOperand(I)->isNullValue())
+ return false;
+ }
+ return true;
+}
+
ConstantArray::ConstantArray(ArrayType *T, ArrayRef<Constant *> V,
AllocInfo AllocInfo)
: ConstantAggregate(T, ConstantArrayVal, V, AllocInfo) {
@@ -1392,11 +1452,11 @@ Constant *ConstantStruct::get(StructType *ST, ArrayRef<Constant*> V) {
if (!V.empty()) {
isUndef = isa<UndefValue>(V[0]);
isPoison = isa<PoisonValue>(V[0]);
- isZero = V[0]->isNullValue();
+ isZero = V[0]->isZeroValue();
// PoisonValue inherits UndefValue, so its check is not necessary.
if (isUndef || isZero) {
for (Constant *C : V) {
- if (!C->isNullValue())
+ if (!C->isZeroValue())
isZero = false;
if (!isa<PoisonValue>(C))
isPoison = false;
@@ -1437,7 +1497,7 @@ Constant *ConstantVector::getImpl(ArrayRef<Constant*> V) {
// If this is an all-undef or all-zero vector, return a
// ConstantAggregateZero or UndefValue.
Constant *C = V[0];
- bool isZero = C->isNullValue();
+ bool isZero = C->isZeroValue();
bool isUndef = isa<UndefValue>(C);
bool isPoison = isa<PoisonValue>(C);
bool isSplatFP = UseConstantFPForFixedLengthSplat && isa<ConstantFP>(C);
@@ -3329,6 +3389,12 @@ Value *ConstantArray::handleOperandChangeImpl(Value *From, Value *To) {
Values, this, From, ToC, NumUpdated, OperandNo);
}
+Constant *ConstantArray::getNullValue(ArrayType *T) {
+ return ConstantArray::get(
+ T, SmallVector<Constant *>(T->getNumElements(),
+ Constant::getNullValue(T->getElementType())));
+}
+
Value *ConstantStruct::handleOperandChangeImpl(Value *From, Value *To) {
assert(isa<Constant>(To) && "Cannot make Constant refer to non-constant!");
Constant *ToC = cast<Constant>(To);
@@ -3365,6 +3431,14 @@ Value *ConstantStruct::handleOperandChangeImpl(Value *From, Value *To) {
Values, this, From, ToC, NumUpdated, OperandNo);
}
+Constant *ConstantStruct::getNullValue(StructType *T) {
+ SmallVector<Constant *> Values;
+ Values.reserve(T->getNumElements());
+ for (unsigned I = 0; I < T->getNumElements(); ++I)
+ Values.push_back(Constant::getNullValue(T->getElementType(I)));
+ return ConstantStruct::get(T, Values);
+}
+
Value *ConstantVector::handleOperandChangeImpl(Value *From, Value *To) {
assert(isa<Constant>(To) && "Cannot make Constant refer to non-constant!");
Constant *ToC = cast<Constant>(To);
@@ -3391,6 +3465,11 @@ Value *ConstantVector::handleOperandChangeImpl(Value *From, Value *To) {
Values, this, From, ToC, NumUpdated, OperandNo);
}
+Constant *ConstantVector::getNullValue(VectorType *T) {
+ return ConstantVector::getSplat(T->getElementCount(),
+ Constant::getNullValue(T->getElementType()));
+}
+
Value *ConstantExpr::handleOperandChangeImpl(Value *From, Value *ToV) {
assert(isa<Constant>(ToV) && "Cannot make Constant refer to non-constant!");
Constant *To = cast<Constant>(ToV);
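
As a rough sketch of the distinction the Constants.cpp changes above introduce (this is illustrative standalone C++, not LLVM API code; the per-address-space table and names are assumptions modeled on AMDGPU, where null in address spaces 3 and 5 is all-ones):

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Hypothetical model: each address space records the bit pattern of its
// null pointer. AS 3/5 use all-ones, as on AMDGPU; others use zero.
struct AddressSpaceInfo {
  unsigned BitWidth;
  uint64_t NullValue; // bit pattern of nullptr in this address space
};

static const std::map<unsigned, AddressSpaceInfo> Specs = {
    {0, {64, 0}},
    {3, {32, 0xffffffffu}}, // LDS: null is all-ones
    {5, {32, 0xffffffffu}}, // private: null is all-ones
};

// Analogue of Constant::getNullValue for pointers: the *null* bit pattern,
// which is target- and address-space-dependent.
uint64_t getNullValue(unsigned AS) { return Specs.at(AS).NullValue; }

// Analogue of Constant::getZeroValue: always the all-zero bit pattern.
uint64_t getZeroValue(unsigned AS) {
  (void)AS;
  return 0;
}
```

With this split, `getNullValue(5) != getZeroValue(5)`, which is exactly why call sites that mean "all-zero bits" (e.g. the common-linkage check in the Verifier) are migrated to `isZeroValue` below, while call sites that mean "null pointer" keep the null-value semantics.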
diff --git a/llvm/lib/IR/DataLayout.cpp b/llvm/lib/IR/DataLayout.cpp
index 49e1f898ca594..86573c8e0942d 100644
--- a/llvm/lib/IR/DataLayout.cpp
+++ b/llvm/lib/IR/DataLayout.cpp
@@ -193,9 +193,12 @@ constexpr DataLayout::PrimitiveSpec DefaultVectorSpecs[] = {
};
// Default pointer type specifications.
-constexpr DataLayout::PointerSpec DefaultPointerSpecs[] = {
+const DataLayout::PointerSpec DefaultPointerSpecs[] = {
// p0:64:64:64:64
- {0, 64, Align::Constant<8>(), Align::Constant<8>(), 64, false, false},
+ {/*AddrSpace=*/0, /*BitWidth=*/64, /*ABIAlign=*/Align::Constant<8>(),
+ /*PrefAlign=*/Align::Constant<8>(), /*IndexBitWidth=*/64,
+ /*NullPtrValue=*/APInt(64, 0), /*HasUnstableRepr=*/false,
+ /*HasExternalState=*/false},
};
DataLayout::DataLayout()
@@ -408,6 +411,15 @@ Error DataLayout::parsePointerSpec(StringRef Spec) {
unsigned AddrSpace = 0;
bool ExternalState = false;
bool UnstableRepr = false;
+  // The default nullptr value kind is all-zeros, which matches the previous
+  // behavior.
+ enum {
+ AllZeros,
+ AllOnes,
+    // The nullptr value is neither all-zeros nor all-ones. Its bit pattern is
+    // not recorded, so LLVM will not be able to fold such a nullptr.
+ Custom
+ } NullPtrValueKind = AllZeros;
StringRef AddrSpaceStr = Components[0];
while (!AddrSpaceStr.empty()) {
char C = AddrSpaceStr.front();
@@ -415,6 +427,12 @@ Error DataLayout::parsePointerSpec(StringRef Spec) {
ExternalState = true;
} else if (C == 'u') {
UnstableRepr = true;
+ } else if (C == 'z') {
+ NullPtrValueKind = AllZeros;
+ } else if (C == 'o') {
+ NullPtrValueKind = AllOnes;
+ } else if (C == 'c') {
+ NullPtrValueKind = Custom;
} else if (isAlpha(C)) {
return createStringError("'%c' is not a valid pointer specification flag",
C);
@@ -461,8 +479,14 @@ Error DataLayout::parsePointerSpec(StringRef Spec) {
return createStringError(
"index size cannot be larger than the pointer size");
+ std::optional<APInt> NullPtrValue;
+ if (NullPtrValueKind == AllZeros)
+ NullPtrValue = APInt::getZero(BitWidth);
+ else if (NullPtrValueKind == AllOnes)
+ NullPtrValue = APInt::getAllOnes(BitWidth);
+
setPointerSpec(AddrSpace, BitWidth, ABIAlign, PrefAlign, IndexBitWidth,
- UnstableRepr, ExternalState);
+ NullPtrValue, UnstableRepr, ExternalState);
return Error::success();
}
@@ -638,6 +662,7 @@ Error DataLayout::parseLayoutString(StringRef LayoutString) {
// the spec for AS0, and we then update that to mark it non-integral.
const PointerSpec &PS = getPointerSpec(AS);
setPointerSpec(AS, PS.BitWidth, PS.ABIAlign, PS.PrefAlign, PS.IndexBitWidth,
+ PS.NullPtrValue,
/*HasUnstableRepr=*/true, /*HasExternalState=*/false);
}
@@ -686,18 +711,20 @@ DataLayout::getPointerSpec(uint32_t AddrSpace) const {
void DataLayout::setPointerSpec(uint32_t AddrSpace, uint32_t BitWidth,
Align ABIAlign, Align PrefAlign,
- uint32_t IndexBitWidth, bool HasUnstableRepr,
- bool HasExternalState) {
+ uint32_t IndexBitWidth,
+ std::optional<APInt> NullPtrValue,
+ bool HasUnstableRepr, bool HasExternalState) {
auto I = lower_bound(PointerSpecs, AddrSpace, LessPointerAddrSpace());
if (I == PointerSpecs.end() || I->AddrSpace != AddrSpace) {
PointerSpecs.insert(I, PointerSpec{AddrSpace, BitWidth, ABIAlign, PrefAlign,
- IndexBitWidth, HasUnstableRepr,
- HasExternalState});
+ IndexBitWidth, NullPtrValue,
+ HasUnstableRepr, HasExternalState});
} else {
I->BitWidth = BitWidth;
I->ABIAlign = ABIAlign;
I->PrefAlign = PrefAlign;
I->IndexBitWidth = IndexBitWidth;
+ I->NullPtrValue = NullPtrValue;
I->HasUnstableRepresentation = HasUnstableRepr;
I->HasExternalState = HasExternalState;
}
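
The flag handling added to `parsePointerSpec` above can be sketched in isolation as follows (a standalone approximation under stated assumptions: the real parser walks the leading alphabetic flags of the address-space component; `parseNullPtrFlags` and `nullValueFor` are hypothetical helper names):

```cpp
#include <cassert>
#include <cctype>
#include <cstdint>
#include <optional>
#include <string>

enum class NullPtrKind { AllZeros, AllOnes, Custom };

// Leading alphabetic flags on the address-space component select how the
// null pointer value is materialized: 'z' = all zeros (the default and the
// previous behavior), 'o' = all ones, 'c' = custom (bit pattern unknown).
NullPtrKind parseNullPtrFlags(const std::string &AddrSpaceStr) {
  NullPtrKind Kind = NullPtrKind::AllZeros;
  for (char C : AddrSpaceStr) {
    if (!std::isalpha(static_cast<unsigned char>(C)))
      break; // digits start the address-space number
    if (C == 'z')
      Kind = NullPtrKind::AllZeros;
    else if (C == 'o')
      Kind = NullPtrKind::AllOnes;
    else if (C == 'c')
      Kind = NullPtrKind::Custom;
  }
  return Kind;
}

// Materialize the null value for a given bit width; Custom yields no value,
// mirroring the std::optional<APInt> NullPtrValue in the patch.
std::optional<uint64_t> nullValueFor(NullPtrKind K, unsigned BitWidth) {
  if (K == NullPtrKind::AllZeros)
    return 0;
  if (K == NullPtrKind::AllOnes)
    return BitWidth >= 64 ? ~uint64_t(0) : ((uint64_t(1) << BitWidth) - 1);
  return std::nullopt; // Custom: unknown bit pattern, nothing to fold
}
```

So a spec component like `po5:32:32` resolves to an all-ones 32-bit null value, while a `c`-flagged space leaves `NullPtrValue` empty and disables null-pointer folding.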
diff --git a/llvm/lib/IR/Verifier.cpp b/llvm/lib/IR/Verifier.cpp
index fa18c3cd0f404..e99c5e5056bbe 100644
--- a/llvm/lib/IR/Verifier.cpp
+++ b/llvm/lib/IR/Verifier.cpp
@@ -842,7 +842,7 @@ void Verifier::visitGlobalVariable(const GlobalVariable &GV) {
// If the global has common linkage, it must have a zero initializer and
// cannot be constant.
if (GV.hasCommonLinkage()) {
- Check(GV.getInitializer()->isNullValue(),
+ Check(GV.getInitializer()->isZeroValue(),
"'common' global must have a zero initializer!", &GV);
Check(!GV.isConstant(), "'common' global may not be marked constant!",
&GV);
diff --git a/llvm/lib/TargetParser/TargetDataLayout.cpp b/llvm/lib/TargetParser/TargetDataLayout.cpp
index d7359234b02f7..6b4dd7a73382f 100644
--- a/llvm/lib/TargetParser/TargetDataLayout.cpp
+++ b/llvm/lib/TargetParser/TargetDataLayout.cpp
@@ -269,10 +269,10 @@ static std::string computeAMDDataLayout(const Triple &TT) {
// (address space 7), and 128-bit non-integral buffer resourcees (address
// space 8) which cannot be non-trivilally accessed by LLVM memory operations
// like getelementptr.
- return "e-m:e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32"
- "-p7:160:256:256:32-p8:128:128:128:48-p9:192:256:256:32-i64:64-"
- "v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-"
- "v1024:1024-v2048:2048-n32:64-S32-A5-G1-ni:7:8:9";
+ return "e-m:e-p:64:64-p1:64:64-po2:32:32-po3:32:32-p4:64:64-po5:32:32-p6:32:"
+ "32-p7:160:256:256:32-p8:128:128:128:48-p9:192:256:256:32-i64:64-v16:"
+ "16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:"
+ "1024-v2048:2048-n32:64-S32-A5-G1-ni:7:8:9";
}
static std::string computeRISCVDataLayout(const Triple &TT, StringRef ABIName) {
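
To make the AMDGPU layout-string change above concrete: the components for address spaces 2, 3, and 5 now carry the `o` flag (`po2`, `po3`, `po5`), marking their null pointer as all-ones. A small standalone scanner (illustrative only; the helper name is an assumption, not LLVM API) shows which spaces a layout string flags this way:

```cpp
#include <cassert>
#include <cctype>
#include <set>
#include <sstream>
#include <string>

// Sketch: scan a datalayout string and collect the address spaces whose
// pointer component carries the 'o' (all-ones null) flag, e.g. "po5:32:32".
std::set<unsigned> allOnesNullSpaces(const std::string &Layout) {
  std::set<unsigned> Result;
  std::stringstream SS(Layout);
  std::string Component;
  while (std::getline(SS, Component, '-')) {
    if (Component.rfind("po", 0) != 0)
      continue; // not a pointer spec with the all-ones flag
    // The address-space number follows the flags.
    size_t Pos = 2;
    unsigned AS = 0;
    while (Pos < Component.size() &&
           std::isdigit(static_cast<unsigned char>(Component[Pos])))
      AS = AS * 10 + (Component[Pos++] - '0');
    Result.insert(AS);
  }
  return Result;
}
```

Running this over the new AMDGPU prefix yields {2, 3, 5}, matching the codegen test updates later in the patch where null in those spaces lowers to `-1` rather than `0`.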
diff --git a/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp b/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
index ff7149044d199..0b3c95e946c1a 100644
--- a/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
+++ b/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp
@@ -25493,8 +25493,8 @@ class HorizontalReduction {
dbgs() << "> of " << VectorizedValue << ". (HorRdx)\n");
if (NeedShuffle)
VectorizedValue = Builder.CreateShuffleVector(
- VectorizedValue,
- ConstantVector::getNullValue(VectorizedValue->getType()), Mask);
+ VectorizedValue, Constant::getNullValue(VectorizedValue->getType()),
+ Mask);
return VectorizedValue;
}
case RecurKind::FAdd: {
diff --git a/llvm/test/Assembler/ConstantExprFoldCast.ll b/llvm/test/Assembler/ConstantExprFoldCast.ll
index 2e1782a4c34f7..7bb2fa2b5cdca 100644
--- a/llvm/test/Assembler/ConstantExprFoldCast.ll
+++ b/llvm/test/Assembler/ConstantExprFoldCast.ll
@@ -5,8 +5,8 @@
; CHECK-NOT: bitcast
; CHECK-NOT: trunc
-; CHECK: addrspacecast
-; CHECK: addrspacecast
+; CHECK: ptr addrspace(1) null
+; CHECK: ptr null
@A = global ptr null ; Cast null -> fold
@B = global ptr @A ; Cast to same type -> fold
diff --git a/llvm/test/CodeGen/AMDGPU/32-bit-local-address-space.ll b/llvm/test/CodeGen/AMDGPU/32-bit-local-address-space.ll
index 4b53f66b379a4..3f8e8a9314878 100644
--- a/llvm/test/CodeGen/AMDGPU/32-bit-local-address-space.ll
+++ b/llvm/test/CodeGen/AMDGPU/32-bit-local-address-space.ll
@@ -164,7 +164,7 @@ define amdgpu_kernel void @null_32bit_lds_ptr(ptr addrspace(1) %out, ptr addrspa
; GFX7-NEXT: s_mov_b32 s3, 0xf000
; GFX7-NEXT: s_mov_b32 s2, -1
; GFX7-NEXT: s_waitcnt lgkmcnt(0)
-; GFX7-NEXT: s_cmp_lg_u32 s6, 0
+; GFX7-NEXT: s_cmp_lg_u32 s6, -1
; GFX7-NEXT: s_cselect_b32 s4, s4, 0x1c8
; GFX7-NEXT: v_mov_b32_e32 v0, s4
; GFX7-NEXT: buffer_store_dword v0, off, s[0:3], 0
@@ -178,7 +178,7 @@ define amdgpu_kernel void @null_32bit_lds_ptr(ptr addrspace(1) %out, ptr addrspa
; GFX8-NEXT: s_mov_b32 s3, 0xf000
; GFX8-NEXT: s_mov_b32 s2, -1
; GFX8-NEXT: s_waitcnt lgkmcnt(0)
-; GFX8-NEXT: s_cmp_lg_u32 s6, 0
+; GFX8-NEXT: s_cmp_lg_u32 s6, -1
; GFX8-NEXT: s_cselect_b32 s4, s4, 0x1c8
; GFX8-NEXT: v_mov_b32_e32 v0, s4
; GFX8-NEXT: buffer_store_dword v0, off, s[0:3], 0
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/bug_shuffle_vector_to_scalar.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/bug_shuffle_vector_to_scalar.ll
index b19872aba2cca..451b9394c2f8e 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/bug_shuffle_vector_to_scalar.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/bug_shuffle_vector_to_scalar.ll
@@ -7,7 +7,7 @@
define amdgpu_gs <4 x float> @_amdgpu_gs_main() {
; CHECK-LABEL: _amdgpu_gs_main:
; CHECK: ; %bb.0: ; %bb
-; CHECK-NEXT: v_mov_b32_e32 v0, 16
+; CHECK-NEXT: v_mov_b32_e32 v0, 15
; CHECK-NEXT: ds_read2_b32 v[0:1], v0 offset1:1
; CHECK-NEXT: s_mov_b32 s0, 0
; CHECK-NEXT: s_mov_b32 s1, s0
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-amdgpu_kernel.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-amdgpu_kernel.ll
index 11153bbbba612..6dfe8560c91b5 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-amdgpu_kernel.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-amdgpu_kernel.ll
@@ -1990,7 +1990,7 @@ define amdgpu_kernel void @p1i8_arg(ptr addrspace(1) %arg) nounwind {
; HSA-VI-NEXT: {{ $}}
; HSA-VI-NEXT: [[COPY:%[0-9]+]]:_(p4) = COPY $sgpr8_sgpr9
; HSA-VI-NEXT: [[C:%[0-9]+]]:_(s8) = G_CONSTANT i8 9
- ; HSA-VI-NEXT: [[C1:%[0-9]+]]:_(p3) = G_CONSTANT i32 0
+ ; HSA-VI-NEXT: [[C1:%[0-9]+]]:_(p3) = G_CONSTANT i32 -1
; HSA-VI-NEXT: G_STORE [[C]](s8), [[C1]](p3) :: (store (s8) into `ptr addrspace(3) null`, addrspace 3)
; HSA-VI-NEXT: S_ENDPGM 0
;
@@ -2000,7 +2000,7 @@ define amdgpu_kernel void @p1i8_arg(ptr addrspace(1) %arg) nounwind {
; LEGACY-MESA-VI-NEXT: {{ $}}
; LEGACY-MESA-VI-NEXT: [[COPY:%[0-9]+]]:_(p4) = COPY $sgpr4_sgpr5
; LEGACY-MESA-VI-NEXT: [[C:%[0-9]+]]:_(s8) = G_CONSTANT i8 9
- ; LEGACY-MESA-VI-NEXT: [[C1:%[0-9]+]]:_(p3) = G_CONSTANT i32 0
+ ; LEGACY-MESA-VI-NEXT: [[C1:%[0-9]+]]:_(p3) = G_CONSTANT i32 -1
; LEGACY-MESA-VI-NEXT: G_STORE [[C]](s8), [[C1]](p3) :: (store (s8) into `ptr addrspace(3) null`, addrspace 3)
; LEGACY-MESA-VI-NEXT: S_ENDPGM 0
store i8 9, ptr addrspace(3) null
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.set.inactive.ll b/llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.set.inactive.ll
index 7b5621ff3b5a9..d405b52bd58e2 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.set.inactive.ll
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.set.inactive.ll
@@ -433,7 +433,7 @@ define amdgpu_kernel void @set_inactive_p3(ptr addrspace(1) %out, ptr addrspace(
; GCN-NEXT: s_waitcnt lgkmcnt(0)
; GCN-NEXT: v_mov_b32_e32 v1, s3
; GCN-NEXT: s_or_saveexec_b64 s[4:5], -1
-; GCN-NEXT: v_cndmask_b32_e64 v0, 0, v1, s[4:5]
+; GCN-NEXT: v_cndmask_b32_e64 v0, -1, v1, s[4:5]
; GCN-NEXT: s_mov_b64 exec, s[4:5]
; GCN-NEXT: v_mov_b32_e32 v1, v0
; GCN-NEXT: s_mov_b32 s3, 0xf000
@@ -454,7 +454,7 @@ define amdgpu_kernel void @set_inactive_p5(ptr addrspace(1) %out, ptr addrspace(
; GCN-NEXT: s_waitcnt lgkmcnt(0)
; GCN-NEXT: v_mov_b32_e32 v1, s3
; GCN-NEXT: s_or_saveexec_b64 s[4:5], -1
-; GCN-NEXT: v_cndmask_b32_e64 v0, 0, v1, s[4:5]
+; GCN-NEXT: v_cndmask_b32_e64 v0, -1, v1, s[4:5]
; GCN-NEXT: s_mov_b64 exec, s[4:5]
; GCN-NEXT: v_mov_b32_e32 v1, v0
; GCN-NEXT: s_mov_b32 s3, 0xf000
diff --git a/llvm/test/CodeGen/AMDGPU/addrspacecast-constantexpr.ll b/llvm/test/CodeGen/AMDGPU/addrspacecast-constantexpr.ll
index 98fbbe1a515ed..59b36abc4b21c 100644
--- a/llvm/test/CodeGen/AMDGPU/addrspacecast-constantexpr.ll
+++ b/llvm/test/CodeGen/AMDGPU/addrspacecast-constantexpr.ll
@@ -18,7 +18,7 @@ declare void @llvm.memcpy.p1.p4.i32(ptr addrspace(1) nocapture, ptr addrspace(4)
define amdgpu_kernel void @store_cast_0_flat_to_group_addrspacecast() #1 {
; HSA-LABEL: define {{[^@]+}}@store_cast_0_flat_to_group_addrspacecast
; HSA-SAME: () #[[ATTR1:[0-9]+]] {
-; HSA-NEXT: store i32 7, ptr addrspace(3) addrspacecast (ptr addrspace(4) null to ptr addrspace(3)), align 4
+; HSA-NEXT: store i32 7, ptr addrspace(3) null, align 4
; HSA-NEXT: ret void
;
store i32 7, ptr addrspace(3) addrspacecast (ptr addrspace(4) null to ptr addrspace(3))
@@ -27,8 +27,8 @@ define amdgpu_kernel void @store_cast_0_flat_to_group_addrspacecast() #1 {
define amdgpu_kernel void @store_cast_0_group_to_flat_addrspacecast() #1 {
; HSA-LABEL: define {{[^@]+}}@store_cast_0_group_to_flat_addrspacecast
-; HSA-SAME: () #[[ATTR2:[0-9]+]] {
-; HSA-NEXT: store i32 7, ptr addrspace(4) addrspacecast (ptr addrspace(3) null to ptr addrspace(4)), align 4
+; HSA-SAME: () #[[ATTR1]] {
+; HSA-NEXT: store i32 7, ptr addrspace(4) null, align 4
; HSA-NEXT: ret void
;
store i32 7, ptr addrspace(4) addrspacecast (ptr addrspace(3) null to ptr addrspace(4))
@@ -37,7 +37,7 @@ define amdgpu_kernel void @store_cast_0_group_to_flat_addrspacecast() #1 {
define amdgpu_kernel void @store_constant_cast_group_gv_to_flat() #1 {
; HSA-LABEL: define {{[^@]+}}@store_constant_cast_group_gv_to_flat
-; HSA-SAME: () #[[ATTR2]] {
+; HSA-SAME: () #[[ATTR2:[0-9]+]] {
; HSA-NEXT: store i32 7, ptr addrspace(4) addrspacecast (ptr addrspace(3) @lds.i32 to ptr addrspace(4)), align 4
; HSA-NEXT: ret void
;
diff --git a/llvm/test/CodeGen/AMDGPU/addrspacecast.r600.ll b/llvm/test/CodeGen/AMDGPU/addrspacecast.r600.ll
index 95a3263a58a2b..a0693a7dbfc8c 100644
--- a/llvm/test/CodeGen/AMDGPU/addrspacecast.r600.ll
+++ b/llvm/test/CodeGen/AMDGPU/addrspacecast.r600.ll
@@ -42,7 +42,7 @@ define amdgpu_kernel void @addrspacecast_flat_null_to_local(ptr addrspace(1) %ou
; CHECK-NEXT: PAD
; CHECK-NEXT: ALU clause starting at 4:
; CHECK-NEXT: MOV * T0.X, literal.x,
-; CHECK-NEXT: -1(nan), 0(0.000000e+00)
+; CHECK-NEXT: 0(0.000000e+00), 0(0.000000e+00)
; CHECK-NEXT: LSHR * T1.X, KC0[2].Y, literal.x,
; CHECK-NEXT: 2(2.802597e-45), 0(0.000000e+00)
store ptr addrspace(3) addrspacecast (ptr null to ptr addrspace(3)), ptr addrspace(1) %out
diff --git a/llvm/test/CodeGen/AMDGPU/agpr-copy-no-free-registers.ll b/llvm/test/CodeGen/AMDGPU/agpr-copy-no-free-registers.ll
index ef7a13819a799..ce3ad1b8c6444 100644
--- a/llvm/test/CodeGen/AMDGPU/agpr-copy-no-free-registers.ll
+++ b/llvm/test/CodeGen/AMDGPU/agpr-copy-no-free-registers.ll
@@ -520,10 +520,13 @@ define amdgpu_kernel void @introduced_copy_to_sgpr(i64 %arg, i32 %arg1, i32 %arg
; GFX908-NEXT: s_sub_i32 s1, 0, s7
; GFX908-NEXT: v_cvt_f32_f16_e32 v18, s0
; GFX908-NEXT: v_mov_b32_e32 v17, 0
-; GFX908-NEXT: v_rcp_iflag_f32_e32 v0, v0
-; GFX908-NEXT: v_mul_f32_e32 v0, 0x4f7ffffe, v0
-; GFX908-NEXT: v_cvt_u32_f32_e32 v0, v0
-; GFX908-NEXT: v_readfirstlane_b32 s2, v0
+; GFX908-NEXT: v_rcp_iflag_f32_e32 v2, v0
+; GFX908-NEXT: v_mov_b32_e32 v0, 0
+; GFX908-NEXT: v_mov_b32_e32 v1, 0
+; GFX908-NEXT: v_mov_b32_e32 v20, -1
+; GFX908-NEXT: v_mul_f32_e32 v2, 0x4f7ffffe, v2
+; GFX908-NEXT: v_cvt_u32_f32_e32 v2, v2
+; GFX908-NEXT: v_readfirstlane_b32 s2, v2
; GFX908-NEXT: s_mul_i32 s1, s1, s2
; GFX908-NEXT: s_mul_hi_u32 s1, s2, s1
; GFX908-NEXT: s_add_i32 s2, s2, s1
@@ -541,11 +544,9 @@ define amdgpu_kernel void @introduced_copy_to_sgpr(i64 %arg, i32 %arg1, i32 %arg
; GFX908-NEXT: s_lshr_b32 s2, s0, 16
; GFX908-NEXT: v_cvt_f32_f16_e32 v19, s2
; GFX908-NEXT: s_lshl_b64 s[6:7], s[4:5], 5
-; GFX908-NEXT: v_mov_b32_e32 v0, 0
; GFX908-NEXT: s_lshl_b64 s[14:15], s[10:11], 5
; GFX908-NEXT: s_and_b64 s[0:1], exec, s[0:1]
; GFX908-NEXT: s_lshl_b64 s[16:17], s[8:9], 5
-; GFX908-NEXT: v_mov_b32_e32 v1, 0
; GFX908-NEXT: s_waitcnt vmcnt(0)
; GFX908-NEXT: v_readfirstlane_b32 s2, v16
; GFX908-NEXT: s_and_b32 s2, 0xffff, s2
@@ -609,37 +610,37 @@ define amdgpu_kernel void @introduced_copy_to_sgpr(i64 %arg, i32 %arg1, i32 %arg
; GFX908-NEXT: ; => This Inner Loop Header: Depth=2
; GFX908-NEXT: s_add_u32 s22, s20, s9
; GFX908-NEXT: s_addc_u32 s23, s21, s13
-; GFX908-NEXT: global_load_dword v21, v17, s[22:23] offset:16 glc
+; GFX908-NEXT: global_load_dword v22, v17, s[22:23] offset:16 glc
; GFX908-NEXT: s_waitcnt vmcnt(0)
-; GFX908-NEXT: global_load_dword v20, v17, s[22:23] offset:20 glc
+; GFX908-NEXT: global_load_dword v21, v17, s[22:23] offset:20 glc
; GFX908-NEXT: s_waitcnt vmcnt(0)
; GFX908-NEXT: global_load_dword v12, v17, s[22:23] offset:24 glc
; GFX908-NEXT: s_waitcnt vmcnt(0)
; GFX908-NEXT: global_load_dword v12, v17, s[22:23] offset:28 glc
; GFX908-NEXT: s_waitcnt vmcnt(0)
-; GFX908-NEXT: ds_read_b64 v[12:13], v17
+; GFX908-NEXT: ds_read_b64 v[12:13], v20
; GFX908-NEXT: ds_read_b64 v[14:15], v0
; GFX908-NEXT: s_and_b64 vcc, exec, s[2:3]
; GFX908-NEXT: s_waitcnt lgkmcnt(0)
; GFX908-NEXT: s_cbranch_vccnz .LBB3_7
; GFX908-NEXT: ; %bb.6: ; %bb51
; GFX908-NEXT: ; in Loop: Header=BB3_5 Depth=2
-; GFX908-NEXT: v_cvt_f32_f16_sdwa v22, v21 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
+; GFX908-NEXT: v_cvt_f32_f16_sdwa v23, v22 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
+; GFX908-NEXT: v_cvt_f32_f16_e32 v22, v22
+; GFX908-NEXT: v_cvt_f32_f16_sdwa v24, v21 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
; GFX908-NEXT: v_cvt_f32_f16_e32 v21, v21
-; GFX908-NEXT: v_cvt_f32_f16_sdwa v23, v20 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
-; GFX908-NEXT: v_cvt_f32_f16_e32 v20, v20
-; GFX908-NEXT: v_add_f32_e32 v24, v18, v12
-; GFX908-NEXT: v_add_f32_e32 v25, v19, v13
-; GFX908-NEXT: v_add_f32_e32 v26, 0, v12
-; GFX908-NEXT: v_add_f32_e32 v27, 0, v13
-; GFX908-NEXT: v_add_f32_e32 v15, v22, v15
-; GFX908-NEXT: v_add_f32_e32 v14, v21, v14
-; GFX908-NEXT: v_add_f32_e32 v13, v23, v13
-; GFX908-NEXT: v_add_f32_e32 v12, v20, v12
-; GFX908-NEXT: v_add_f32_e32 v5, v5, v25
-; GFX908-NEXT: v_add_f32_e32 v4, v4, v24
-; GFX908-NEXT: v_add_f32_e32 v7, v7, v27
-; GFX908-NEXT: v_add_f32_e32 v6, v6, v26
+; GFX908-NEXT: v_add_f32_e32 v25, v18, v12
+; GFX908-NEXT: v_add_f32_e32 v26, v19, v13
+; GFX908-NEXT: v_add_f32_e32 v27, 0, v12
+; GFX908-NEXT: v_add_f32_e32 v28, 0, v13
+; GFX908-NEXT: v_add_f32_e32 v15, v23, v15
+; GFX908-NEXT: v_add_f32_e32 v14, v22, v14
+; GFX908-NEXT: v_add_f32_e32 v13, v24, v13
+; GFX908-NEXT: v_add_f32_e32 v12, v21, v12
+; GFX908-NEXT: v_add_f32_e32 v5, v5, v26
+; GFX908-NEXT: v_add_f32_e32 v4, v4, v25
+; GFX908-NEXT: v_add_f32_e32 v7, v7, v28
+; GFX908-NEXT: v_add_f32_e32 v6, v6, v27
; GFX908-NEXT: v_add_f32_e32 v8, v8, v14
; GFX908-NEXT: v_add_f32_e32 v9, v9, v15
; GFX908-NEXT: v_add_f32_e32 v10, v10, v12
@@ -684,12 +685,13 @@ define amdgpu_kernel void @introduced_copy_to_sgpr(i64 %arg, i32 %arg1, i32 %arg
; GFX90A-NEXT: v_cvt_f32_u32_e32 v0, s7
; GFX90A-NEXT: s_sub_i32 s1, 0, s7
; GFX90A-NEXT: v_mov_b32_e32 v19, 0
-; GFX90A-NEXT: v_pk_mov_b32 v[2:3], 0, 0
-; GFX90A-NEXT: v_rcp_iflag_f32_e32 v0, v0
-; GFX90A-NEXT: v_mul_f32_e32 v0, 0x4f7ffffe, v0
-; GFX90A-NEXT: v_cvt_u32_f32_e32 v1, v0
-; GFX90A-NEXT: v_cvt_f32_f16_e32 v0, s0
-; GFX90A-NEXT: v_readfirstlane_b32 s2, v1
+; GFX90A-NEXT: v_mov_b32_e32 v20, -1
+; GFX90A-NEXT: v_rcp_iflag_f32_e32 v2, v0
+; GFX90A-NEXT: v_pk_mov_b32 v[0:1], 0, 0
+; GFX90A-NEXT: v_mul_f32_e32 v2, 0x4f7ffffe, v2
+; GFX90A-NEXT: v_cvt_u32_f32_e32 v3, v2
+; GFX90A-NEXT: v_cvt_f32_f16_e32 v2, s0
+; GFX90A-NEXT: v_readfirstlane_b32 s2, v3
; GFX90A-NEXT: s_mul_i32 s1, s1, s2
; GFX90A-NEXT: s_mul_hi_u32 s1, s2, s1
; GFX90A-NEXT: s_add_i32 s2, s2, s1
@@ -705,7 +707,7 @@ define amdgpu_kernel void @introduced_copy_to_sgpr(i64 %arg, i32 %arg1, i32 %arg
; GFX90A-NEXT: s_cmp_ge_u32 s2, s7
; GFX90A-NEXT: s_cselect_b32 s8, s3, s1
; GFX90A-NEXT: s_lshr_b32 s2, s0, 16
-; GFX90A-NEXT: v_cvt_f32_f16_e32 v1, s2
+; GFX90A-NEXT: v_cvt_f32_f16_e32 v3, s2
; GFX90A-NEXT: s_lshl_b64 s[6:7], s[4:5], 5
; GFX90A-NEXT: s_lshl_b64 s[14:15], s[10:11], 5
; GFX90A-NEXT: s_and_b64 s[0:1], exec, s[0:1]
@@ -731,7 +733,7 @@ define amdgpu_kernel void @introduced_copy_to_sgpr(i64 %arg, i32 %arg1, i32 %arg
; GFX90A-NEXT: s_cbranch_vccz .LBB3_10
; GFX90A-NEXT: ; %bb.3: ; %bb14
; GFX90A-NEXT: ; in Loop: Header=BB3_2 Depth=1
-; GFX90A-NEXT: global_load_dwordx2 v[4:5], v[2:3], off
+; GFX90A-NEXT: global_load_dwordx2 v[4:5], v[0:1], off
; GFX90A-NEXT: v_cmp_gt_i64_e64 s[2:3], s[10:11], -1
; GFX90A-NEXT: s_mov_b32 s13, s12
; GFX90A-NEXT: v_cndmask_b32_e64 v8, 0, 1, s[2:3]
@@ -769,15 +771,15 @@ define amdgpu_kernel void @introduced_copy_to_sgpr(i64 %arg, i32 %arg1, i32 %arg
; GFX90A-NEXT: ; => This Inner Loop Header: Depth=2
; GFX90A-NEXT: s_add_u32 s22, s20, s9
; GFX90A-NEXT: s_addc_u32 s23, s21, s13
-; GFX90A-NEXT: global_load_dword v21, v19, s[22:23] offset:16 glc
+; GFX90A-NEXT: global_load_dword v22, v19, s[22:23] offset:16 glc
; GFX90A-NEXT: s_waitcnt vmcnt(0)
-; GFX90A-NEXT: global_load_dword v20, v19, s[22:23] offset:20 glc
+; GFX90A-NEXT: global_load_dword v21, v19, s[22:23] offset:20 glc
; GFX90A-NEXT: s_waitcnt vmcnt(0)
; GFX90A-NEXT: global_load_dword v14, v19, s[22:23] offset:24 glc
; GFX90A-NEXT: s_waitcnt vmcnt(0)
; GFX90A-NEXT: global_load_dword v14, v19, s[22:23] offset:28 glc
; GFX90A-NEXT: s_waitcnt vmcnt(0)
-; GFX90A-NEXT: ds_read_b64 v[14:15], v19
+; GFX90A-NEXT: ds_read_b64 v[14:15], v20
; GFX90A-NEXT: ds_read_b64 v[16:17], v0
; GFX90A-NEXT: s_and_b64 vcc, exec, s[2:3]
; GFX90A-NEXT: ; kill: killed $sgpr22 killed $sgpr23
@@ -785,16 +787,16 @@ define amdgpu_kernel void @introduced_copy_to_sgpr(i64 %arg, i32 %arg1, i32 %arg
; GFX90A-NEXT: s_cbranch_vccnz .LBB3_7
; GFX90A-NEXT: ; %bb.6: ; %bb51
; GFX90A-NEXT: ; in Loop: Header=BB3_5 Depth=2
-; GFX90A-NEXT: v_cvt_f32_f16_sdwa v23, v21 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
-; GFX90A-NEXT: v_cvt_f32_f16_e32 v22, v21
-; GFX90A-NEXT: v_cvt_f32_f16_sdwa v21, v20 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
-; GFX90A-NEXT: v_cvt_f32_f16_e32 v20, v20
-; GFX90A-NEXT: v_pk_add_f32 v[24:25], v[0:1], v[14:15]
-; GFX90A-NEXT: v_pk_add_f32 v[26:27], v[14:15], 0 op_sel_hi:[1,0]
+; GFX90A-NEXT: v_cvt_f32_f16_sdwa v23, v22 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
+; GFX90A-NEXT: v_cvt_f32_f16_e32 v22, v22
+; GFX90A-NEXT: v_cvt_f32_f16_sdwa v25, v21 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_1
+; GFX90A-NEXT: v_cvt_f32_f16_e32 v24, v21
+; GFX90A-NEXT: v_pk_add_f32 v[26:27], v[2:3], v[14:15]
+; GFX90A-NEXT: v_pk_add_f32 v[28:29], v[14:15], 0 op_sel_hi:[1,0]
; GFX90A-NEXT: v_pk_add_f32 v[16:17], v[22:23], v[16:17]
-; GFX90A-NEXT: v_pk_add_f32 v[14:15], v[20:21], v[14:15]
-; GFX90A-NEXT: v_pk_add_f32 v[6:7], v[6:7], v[24:25]
-; GFX90A-NEXT: v_pk_add_f32 v[8:9], v[8:9], v[26:27]
+; GFX90A-NEXT: v_pk_add_f32 v[14:15], v[24:25], v[14:15]
+; GFX90A-NEXT: v_pk_add_f32 v[6:7], v[6:7], v[26:27]
+; GFX90A-NEXT: v_pk_add_f32 v[8:9], v[8:9], v[28:29]
; GFX90A-NEXT: v_pk_add_f32 v[10:11], v[10:11], v[16:17]
; GFX90A-NEXT: v_pk_add_f32 v[12:13], v[12:13], v[14:15]
; GFX90A-NEXT: s_branch .LBB3_4
diff --git a/llvm/test/CodeGen/AMDGPU/amdgpu-late-codegenprepare.ll b/llvm/test/CodeGen/AMDGPU/amdgpu-late-codegenprepare.ll
index 3e232bb1914f8..3fed0c978f5d1 100644
--- a/llvm/test/CodeGen/AMDGPU/amdgpu-late-codegenprepare.ll
+++ b/llvm/test/CodeGen/AMDGPU/amdgpu-late-codegenprepare.ll
@@ -7,14 +7,14 @@
; address spaces
define amdgpu_kernel void @constant_from_offset_cast_generic_null() {
; GFX9-LABEL: @constant_from_offset_cast_generic_null(
-; GFX9-NEXT: [[TMP1:%.*]] = load i32, ptr addrspace(4) getelementptr (i8, ptr addrspace(4) addrspacecast (ptr null to ptr addrspace(4)), i64 4), align 4
+; GFX9-NEXT: [[TMP1:%.*]] = load i32, ptr addrspace(4) getelementptr (i8, ptr addrspace(4) null, i64 4), align 4
; GFX9-NEXT: [[TMP2:%.*]] = lshr i32 [[TMP1]], 16
; GFX9-NEXT: [[TMP3:%.*]] = trunc i32 [[TMP2]] to i8
; GFX9-NEXT: store i8 [[TMP3]], ptr addrspace(1) poison, align 1
; GFX9-NEXT: ret void
;
; GFX12-LABEL: @constant_from_offset_cast_generic_null(
-; GFX12-NEXT: [[LOAD:%.*]] = load i8, ptr addrspace(4) getelementptr inbounds (i8, ptr addrspace(4) addrspacecast (ptr null to ptr addrspace(4)), i64 6), align 1
+; GFX12-NEXT: [[LOAD:%.*]] = load i8, ptr addrspace(4) getelementptr inbounds (i8, ptr addrspace(4) null, i64 6), align 1
; GFX12-NEXT: store i8 [[LOAD]], ptr addrspace(1) poison, align 1
; GFX12-NEXT: ret void
;
@@ -25,14 +25,14 @@ define amdgpu_kernel void @constant_from_offset_cast_generic_null() {
define amdgpu_kernel void @constant_from_offset_cast_global_null() {
; GFX9-LABEL: @constant_from_offset_cast_global_null(
-; GFX9-NEXT: [[TMP1:%.*]] = load i32, ptr addrspace(4) getelementptr (i8, ptr addrspace(4) addrspacecast (ptr addrspace(1) null to ptr addrspace(4)), i64 4), align 4
+; GFX9-NEXT: [[TMP1:%.*]] = load i32, ptr addrspace(4) getelementptr (i8, ptr addrspace(4) null, i64 4), align 4
; GFX9-NEXT: [[TMP2:%.*]] = lshr i32 [[TMP1]], 16
; GFX9-NEXT: [[TMP3:%.*]] = trunc i32 [[TMP2]] to i8
; GFX9-NEXT: store i8 [[TMP3]], ptr addrspace(1) poison, align 1
; GFX9-NEXT: ret void
;
; GFX12-LABEL: @constant_from_offset_cast_global_null(
-; GFX12-NEXT: [[LOAD:%.*]] = load i8, ptr addrspace(4) getelementptr inbounds (i8, ptr addrspace(4) addrspacecast (ptr addrspace(1) null to ptr addrspace(4)), i64 6), align 1
+; GFX12-NEXT: [[LOAD:%.*]] = load i8, ptr addrspace(4) getelementptr inbounds (i8, ptr addrspace(4) null, i64 6), align 1
; GFX12-NEXT: store i8 [[LOAD]], ptr addrspace(1) poison, align 1
; GFX12-NEXT: ret void
;
diff --git a/llvm/test/CodeGen/AMDGPU/blender-no-live-segment-at-def-implicit-def.ll b/llvm/test/CodeGen/AMDGPU/blender-no-live-segment-at-def-implicit-def.ll
index ad0d6d8016ad6..0bcbfd213b5e6 100644
--- a/llvm/test/CodeGen/AMDGPU/blender-no-live-segment-at-def-implicit-def.ll
+++ b/llvm/test/CodeGen/AMDGPU/blender-no-live-segment-at-def-implicit-def.ll
@@ -73,11 +73,12 @@ define amdgpu_kernel void @blender_no_live_segment_at_def_error(<4 x float> %ext
; CHECK-NEXT: s_mov_b32 s50, s48
; CHECK-NEXT: s_mov_b32 s51, s48
; CHECK-NEXT: .LBB0_8: ; %if.end294.i.i
-; CHECK-NEXT: v_mov_b32_e32 v0, 0
-; CHECK-NEXT: buffer_store_dword v0, off, s[0:3], 0 offset:12
-; CHECK-NEXT: buffer_store_dword v0, off, s[0:3], 0 offset:8
-; CHECK-NEXT: buffer_store_dword v0, off, s[0:3], 0 offset:4
-; CHECK-NEXT: buffer_store_dword v0, off, s[0:3], 0
+; CHECK-NEXT: v_mov_b32_e32 v0, -1
+; CHECK-NEXT: v_mov_b32_e32 v1, 0
+; CHECK-NEXT: buffer_store_dword v1, v0, s[0:3], 0 offen
+; CHECK-NEXT: buffer_store_dword v1, off, s[0:3], 0 offset:11
+; CHECK-NEXT: buffer_store_dword v1, off, s[0:3], 0 offset:7
+; CHECK-NEXT: buffer_store_dword v1, off, s[0:3], 0 offset:3
; CHECK-NEXT: .LBB0_9: ; %kernel_direct_lighting.exit
; CHECK-NEXT: s_load_dwordx2 s[4:5], s[8:9], 0x20
; CHECK-NEXT: v_mov_b32_e32 v0, s48
diff --git a/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll b/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll
index 5c526c78afcd7..9f80463578172 100644
--- a/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll
+++ b/llvm/test/CodeGen/AMDGPU/branch-folding-implicit-def-subreg.ll
@@ -25,8 +25,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: S_BITCMP1_B32 renamable $sgpr17, 8, implicit-def $scc
; GFX90A-NEXT: renamable $sgpr30_sgpr31 = S_CSELECT_B64 -1, 0, implicit killed $scc
; GFX90A-NEXT: renamable $sgpr30_sgpr31 = S_XOR_B64 killed renamable $sgpr30_sgpr31, -1, implicit-def dead $scc
+ ; GFX90A-NEXT: renamable $vgpr0 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr4 = DS_READ_B32_gfx9 killed renamable $vgpr0, 0, 0, implicit $exec :: (load (s32) from `ptr addrspace(3) null`, align 8, addrspace 3)
; GFX90A-NEXT: renamable $vgpr5 = AV_MOV_B32_IMM_PSEUDO 0, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr4 = DS_READ_B32_gfx9 renamable $vgpr5, 0, 0, implicit $exec :: (load (s32) from `ptr addrspace(3) null`, align 8, addrspace 3)
; GFX90A-NEXT: renamable $sgpr40_sgpr41 = S_MOV_B64 0
; GFX90A-NEXT: renamable $vcc = S_AND_B64 $exec, renamable $sgpr28_sgpr29, implicit-def dead $scc
; GFX90A-NEXT: S_CBRANCH_VCCZ %bb.2, implicit $vcc
@@ -142,8 +143,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.10(0x80000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr34_sgpr35, $sgpr38_sgpr39, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr52_sgpr53, $sgpr54_sgpr55, $sgpr64_sgpr65, $sgpr66_sgpr67, $sgpr68_sgpr69, $vgpr0_vgpr1:0x000000000000000F, $vgpr8_vgpr9:0x000000000000000F, $vgpr10_vgpr11:0x000000000000000F, $vgpr12_vgpr13:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x000000000000000F, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET renamable $vgpr13, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr12, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr2 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN renamable $vgpr12, killed renamable $vgpr2, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr13, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: renamable $sgpr68_sgpr69 = S_OR_B64 killed renamable $sgpr68_sgpr69, $exec, implicit-def dead $scc
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.10.Flow33:
@@ -159,8 +161,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.12(0x80000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr34_sgpr35, $sgpr38_sgpr39, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr52_sgpr53, $sgpr54_sgpr55, $sgpr64_sgpr65, $sgpr66_sgpr67, $sgpr68_sgpr69, $vgpr0_vgpr1:0x000000000000000F, $vgpr8_vgpr9:0x000000000000000F, $vgpr10_vgpr11:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x000000000000000F, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET renamable $vgpr11, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr10, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr2 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN renamable $vgpr10, killed renamable $vgpr2, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr11, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: renamable $sgpr68_sgpr69 = S_OR_B64 killed renamable $sgpr68_sgpr69, $exec, implicit-def dead $scc
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.12.Flow34:
@@ -176,8 +179,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.14(0x80000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr34_sgpr35, $sgpr38_sgpr39, $sgpr42_sgpr43, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr52_sgpr53, $sgpr54_sgpr55, $sgpr64_sgpr65, $sgpr66_sgpr67, $sgpr68_sgpr69, $vgpr0_vgpr1:0x000000000000000F, $vgpr8_vgpr9:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x000000000000000F, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET renamable $vgpr9, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr8, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr2 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN renamable $vgpr8, killed renamable $vgpr2, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr9, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: renamable $sgpr68_sgpr69 = S_OR_B64 killed renamable $sgpr68_sgpr69, $exec, implicit-def dead $scc
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.14.Flow35:
@@ -216,8 +220,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.18(0x80000000)
; GFX90A-NEXT: liveins: $sgpr4_sgpr5, $sgpr34_sgpr35, $sgpr38_sgpr39, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr52_sgpr53, $sgpr54_sgpr55, $sgpr64_sgpr65, $sgpr68_sgpr69, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x000000000000000F, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET renamable $vgpr47, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr46, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr0 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN renamable $vgpr46, killed renamable $vgpr0, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr47, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: renamable $sgpr68_sgpr69 = S_OR_B64 killed renamable $sgpr68_sgpr69, $exec, implicit-def dead $scc
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.18.Flow37:
@@ -233,8 +238,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.20(0x80000000)
; GFX90A-NEXT: liveins: $sgpr4_sgpr5, $sgpr34_sgpr35, $sgpr38_sgpr39, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr52_sgpr53, $sgpr54_sgpr55, $sgpr68_sgpr69, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x000000000000000F, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET renamable $vgpr63, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr62, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr0 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN renamable $vgpr62, killed renamable $vgpr0, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr63, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: renamable $sgpr68_sgpr69 = S_OR_B64 killed renamable $sgpr68_sgpr69, $exec, implicit-def dead $scc
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.20.Flow38:
@@ -250,8 +256,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.22(0x80000000)
; GFX90A-NEXT: liveins: $sgpr4_sgpr5, $sgpr34_sgpr35, $sgpr38_sgpr39, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr52_sgpr53, $sgpr68_sgpr69, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x000000000000000F, $vgpr60_vgpr61:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET renamable $vgpr61, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr60, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr0 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN renamable $vgpr60, killed renamable $vgpr0, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr61, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: renamable $sgpr68_sgpr69 = S_OR_B64 killed renamable $sgpr68_sgpr69, $exec, implicit-def dead $scc
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.22.Flow39:
@@ -267,8 +274,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.24(0x80000000)
; GFX90A-NEXT: liveins: $sgpr4_sgpr5, $sgpr34_sgpr35, $sgpr38_sgpr39, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr68_sgpr69, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET renamable $vgpr59, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr58, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr0 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN renamable $vgpr58, killed renamable $vgpr0, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr59, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: renamable $sgpr68_sgpr69 = S_OR_B64 killed renamable $sgpr68_sgpr69, $exec, implicit-def dead $scc
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.24.Flow40:
@@ -284,8 +292,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.26(0x80000000)
; GFX90A-NEXT: liveins: $sgpr4_sgpr5, $sgpr34_sgpr35, $sgpr38_sgpr39, $sgpr48_sgpr49, $sgpr68_sgpr69, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET renamable $vgpr57, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr56, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr0 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN renamable $vgpr56, killed renamable $vgpr0, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr57, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: renamable $sgpr68_sgpr69 = S_OR_B64 killed renamable $sgpr68_sgpr69, $exec, implicit-def dead $scc
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.26.Flow41:
@@ -301,8 +310,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.28(0x80000000)
; GFX90A-NEXT: liveins: $sgpr4_sgpr5, $sgpr34_sgpr35, $sgpr38_sgpr39, $sgpr68_sgpr69, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET renamable $vgpr45, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr44, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr0 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN renamable $vgpr44, killed renamable $vgpr0, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr45, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: renamable $sgpr68_sgpr69 = S_OR_B64 killed renamable $sgpr68_sgpr69, $exec, implicit-def dead $scc
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.28.Flow42:
@@ -326,8 +336,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.31(0x80000000)
; GFX90A-NEXT: liveins: $sgpr68_sgpr69, $vgpr40_vgpr41:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET renamable $vgpr41, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr40, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr0 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN renamable $vgpr40, killed renamable $vgpr0, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr41, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: renamable $sgpr68_sgpr69 = S_OR_B64 killed renamable $sgpr68_sgpr69, $exec, implicit-def dead $scc
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.31.Flow44:
@@ -352,8 +363,9 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.29(0x80000000)
; GFX90A-NEXT: liveins: $sgpr4_sgpr5, $sgpr34_sgpr35, $sgpr68_sgpr69, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET renamable $vgpr43, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr42, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr0 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN renamable $vgpr42, killed renamable $vgpr0, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr43, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: renamable $sgpr68_sgpr69 = S_OR_B64 killed renamable $sgpr68_sgpr69, $exec, implicit-def dead $scc
; GFX90A-NEXT: S_BRANCH %bb.29
; GFX90A-NEXT: {{ $}}
@@ -763,7 +775,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr24_sgpr25, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr34_sgpr35, $sgpr36_sgpr37, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr54_sgpr55, $sgpr56_sgpr57:0x000000000000000F, $sgpr62_sgpr63, $sgpr64_sgpr65, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr0_vgpr1:0x000000000000000F, $vgpr2_vgpr3:0x0000000000000003, $vgpr4_vgpr5:0x000000000000000F, $vgpr6_vgpr7:0x0000000000000003, $vgpr8_vgpr9:0x000000000000000F, $vgpr10_vgpr11:0x000000000000000F, $vgpr12_vgpr13:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x000000000000000F, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $vgpr30 = V_CNDMASK_B32_e64 0, 0, 0, 1, killed $sgpr64_sgpr65, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr3 = AV_MOV_B32_IMM_PSEUDO 0, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr3 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
; GFX90A-NEXT: renamable $vgpr7 = COPY renamable $sgpr21, implicit $exec
; GFX90A-NEXT: renamable $vgpr24_vgpr25 = DS_READ_B64_gfx9 killed renamable $vgpr7, 0, 0, implicit $exec :: (load (s64) from %ir.7, addrspace 3)
; GFX90A-NEXT: renamable $vgpr22_vgpr23 = DS_READ_B64_gfx9 killed renamable $vgpr3, 0, 0, implicit $exec :: (load (s64) from `ptr addrspace(3) null`, addrspace 3)
@@ -819,7 +831,7 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.3(0x80000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $sgpr17, $sgpr33, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr30_sgpr31, $sgpr40_sgpr41, $sgpr56_sgpr57:0x000000000000000F, $sgpr20_sgpr21_sgpr22_sgpr23:0x00000000000000FF, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000FF, $vgpr4_vgpr5:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: renamable $vgpr0 = AV_MOV_B32_IMM_PSEUDO 0, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr0 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
; GFX90A-NEXT: renamable $vgpr28_vgpr29 = DS_READ_B64_gfx9 killed renamable $vgpr0, 0, 0, implicit $exec :: (load (s64) from `ptr addrspace(3) null`, addrspace 3)
; GFX90A-NEXT: renamable $vgpr0 = COPY renamable $sgpr23, implicit $exec
; GFX90A-NEXT: renamable $vgpr26_vgpr27 = DS_READ_B64_gfx9 killed renamable $vgpr0, 0, 0, implicit $exec :: (load (s64) from %ir.419, addrspace 3)
@@ -927,7 +939,8 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: renamable $vcc = V_CMP_EQ_U32_sdwa 0, killed $vgpr7, 0, $vgpr3, 0, 0, 6, implicit $exec
; GFX90A-NEXT: renamable $vgpr2 = V_CNDMASK_B32_e64 0, 0, 0, killed $vgpr2, killed $vcc, implicit $exec
; GFX90A-NEXT: renamable $vgpr2 = V_OR_B32_e32 killed $vgpr2, killed $vgpr14, implicit $exec
- ; GFX90A-NEXT: DS_WRITE2_B32_gfx9 killed renamable $vgpr3, killed renamable $vgpr2, renamable $vgpr3, 0, 1, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, align 4, addrspace 3)
+ ; GFX90A-NEXT: renamable $vgpr4 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: DS_WRITE2_B32_gfx9 killed renamable $vgpr4, killed renamable $vgpr2, killed renamable $vgpr3, 0, 1, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, align 4, addrspace 3)
; GFX90A-NEXT: S_BRANCH %bb.65
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.68.bb174:
@@ -935,14 +948,14 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr7, $vgpr30, $vgpr31, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr12_sgpr13, $sgpr18_sgpr19, $sgpr28_sgpr29, $sgpr34_sgpr35, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr52_sgpr53, $sgpr54_sgpr55, $sgpr64_sgpr65, $sgpr66_sgpr67, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr0_vgpr1:0x000000000000000F, $vgpr2_vgpr3:0x000000000000000F, $vgpr4_vgpr5:0x000000000000000F, $vgpr6_vgpr7:0x0000000000000003, $vgpr8_vgpr9:0x000000000000000F, $vgpr10_vgpr11:0x000000000000000F, $vgpr12_vgpr13:0x000000000000000F, $vgpr14_vgpr15:0x0000000000000003, $vgpr16_vgpr17:0x000000000000000F, $vgpr18_vgpr19:0x0000000000000003, $vgpr20_vgpr21:0x000000000000000F, $vgpr22_vgpr23:0x0000000000000003, $vgpr24_vgpr25:0x0000000000000003, $vgpr26_vgpr27:0x000000000000000F, $vgpr28_vgpr29:0x000000000000000F, $vgpr32_vgpr33:0x000000000000000F, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x000000000000000F, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: renamable $agpr0 = COPY killed renamable $vgpr14, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr34 = V_OR_B32_e32 1, $vgpr32, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr54 = V_OR_B32_e32 $vgpr34, $vgpr28, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr48 = V_OR_B32_e32 $vgpr54, $vgpr26, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr36 = V_CNDMASK_B32_e64 0, $vgpr48, 0, 0, $sgpr12_sgpr13, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr52 = V_OR_B32_e32 $vgpr36, $vgpr2, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr50 = V_OR_B32_e32 $vgpr52, $vgpr16, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr38 = V_OR_B32_e32 $vgpr50, $vgpr20, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr14 = V_CNDMASK_B32_e64 0, 0, 0, $vgpr38, killed $sgpr12_sgpr13, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr48 = V_OR_B32_e32 1, $vgpr32, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr52 = V_OR_B32_e32 $vgpr48, $vgpr28, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr38 = V_OR_B32_e32 $vgpr52, $vgpr26, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr34 = V_CNDMASK_B32_e64 0, $vgpr38, 0, 0, $sgpr12_sgpr13, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr50 = V_OR_B32_e32 $vgpr34, $vgpr2, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr14 = V_OR_B32_e32 $vgpr50, $vgpr16, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr36 = V_OR_B32_e32 $vgpr14, $vgpr20, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr54 = V_CNDMASK_B32_e64 0, 0, 0, $vgpr36, killed $sgpr12_sgpr13, implicit $exec
; GFX90A-NEXT: renamable $sgpr12_sgpr13 = S_MOV_B64 -1
; GFX90A-NEXT: renamable $vcc = S_AND_B64 $exec, killed renamable $sgpr28_sgpr29, implicit-def dead $scc
; GFX90A-NEXT: S_CBRANCH_VCCNZ %bb.72, implicit $vcc
@@ -962,26 +975,27 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: renamable $vgpr2 = COPY renamable $sgpr27, implicit $exec
; GFX90A-NEXT: renamable $vgpr4, renamable $vcc = V_ADD_CO_U32_e64 killed $sgpr26, $vgpr4, 0, implicit $exec
; GFX90A-NEXT: renamable $vgpr2, dead renamable $vcc = V_ADDC_U32_e64 killed $vgpr2, killed $vgpr5, killed $vcc, 0, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr35 = AV_MOV_B32_IMM_PSEUDO 0, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr55 = COPY renamable $vgpr35, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr49 = COPY renamable $vgpr35, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr53 = COPY renamable $vgpr35, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr51 = COPY renamable $vgpr35, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr37 = COPY renamable $vgpr35, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr15 = COPY renamable $vgpr35, implicit $exec
- ; GFX90A-NEXT: renamable $vgpr39 = COPY renamable $vgpr35, implicit $exec
- ; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr35, renamable $vgpr34_vgpr35, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
- ; GFX90A-NEXT: renamable $vgpr5 = COPY renamable $sgpr21, implicit $exec
- ; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr5, killed renamable $vgpr54_vgpr55, 0, 0, implicit $exec :: (store (s64) into %ir.7, addrspace 3)
- ; GFX90A-NEXT: renamable $vgpr16 = COPY killed renamable $sgpr22, implicit $exec
- ; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr16, killed renamable $vgpr48_vgpr49, 0, 0, implicit $exec :: (store (s64) into %ir.8, addrspace 3)
- ; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr35, killed renamable $vgpr52_vgpr53, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
- ; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr5, killed renamable $vgpr50_vgpr51, 0, 0, implicit $exec :: (store (s64) into %ir.7, addrspace 3)
- ; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr35, killed renamable $vgpr36_vgpr37, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
- ; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr5, killed renamable $vgpr14_vgpr15, 0, 0, implicit $exec :: (store (s64) into %ir.7, addrspace 3)
- ; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr35, killed renamable $vgpr38_vgpr39, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr2, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 4, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
- ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr4, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: renamable $vgpr49 = AV_MOV_B32_IMM_PSEUDO 0, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr53 = COPY renamable $vgpr49, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr39 = COPY renamable $vgpr49, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr51 = COPY renamable $vgpr49, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr15 = COPY renamable $vgpr49, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr35 = COPY renamable $vgpr49, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr55 = COPY renamable $vgpr49, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr37 = COPY renamable $vgpr49, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr5 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr5, killed renamable $vgpr48_vgpr49, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
+ ; GFX90A-NEXT: renamable $vgpr16 = COPY renamable $sgpr21, implicit $exec
+ ; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr16, killed renamable $vgpr52_vgpr53, 0, 0, implicit $exec :: (store (s64) into %ir.7, addrspace 3)
+ ; GFX90A-NEXT: renamable $vgpr19 = COPY killed renamable $sgpr22, implicit $exec
+ ; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr19, killed renamable $vgpr38_vgpr39, 0, 0, implicit $exec :: (store (s64) into %ir.8, addrspace 3)
+ ; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr5, killed renamable $vgpr50_vgpr51, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
+ ; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr16, killed renamable $vgpr14_vgpr15, 0, 0, implicit $exec :: (store (s64) into %ir.7, addrspace 3)
+ ; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr5, killed renamable $vgpr34_vgpr35, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
+ ; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr16, killed renamable $vgpr54_vgpr55, 0, 0, implicit $exec :: (store (s64) into %ir.7, addrspace 3)
+ ; GFX90A-NEXT: DS_WRITE_B64_gfx9 renamable $vgpr5, killed renamable $vgpr36_vgpr37, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFEN killed renamable $vgpr4, killed renamable $vgpr5, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 0, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null`, align 8, addrspace 5)
+ ; GFX90A-NEXT: BUFFER_STORE_DWORD_OFFSET killed renamable $vgpr2, $sgpr0_sgpr1_sgpr2_sgpr3, 0, 3, 0, 0, implicit $exec :: (store (s32) into `ptr addrspace(5) null` + 4, basealign 8, addrspace 5)
; GFX90A-NEXT: {{ $}}
; GFX90A-NEXT: bb.71.Flow9:
; GFX90A-NEXT: successors: %bb.63(0x80000000)
@@ -995,10 +1009,11 @@ define amdgpu_kernel void @f1(ptr addrspace(1) %arg, ptr addrspace(1) %arg1, i64
; GFX90A-NEXT: successors: %bb.69(0x80000000)
; GFX90A-NEXT: liveins: $sgpr14, $sgpr15, $sgpr16, $vgpr7, $vgpr30, $vgpr31, $agpr0_agpr1:0x0000000000000003, $sgpr4_sgpr5, $sgpr6_sgpr7, $sgpr8_sgpr9:0x000000000000000F, $sgpr10_sgpr11, $sgpr18_sgpr19, $sgpr34_sgpr35, $sgpr38_sgpr39, $sgpr40_sgpr41, $sgpr42_sgpr43, $sgpr44_sgpr45, $sgpr46_sgpr47, $sgpr48_sgpr49, $sgpr50_sgpr51, $sgpr52_sgpr53, $sgpr54_sgpr55, $sgpr64_sgpr65, $sgpr66_sgpr67, $sgpr20_sgpr21_sgpr22_sgpr23:0x000000000000003C, $sgpr24_sgpr25_sgpr26_sgpr27:0x00000000000000F0, $vgpr0_vgpr1:0x000000000000000F, $vgpr2_vgpr3:0x000000000000000C, $vgpr4_vgpr5:0x000000000000000F, $vgpr6_vgpr7:0x0000000000000003, $vgpr8_vgpr9:0x000000000000000F, $vgpr10_vgpr11:0x000000000000000F, $vgpr12_vgpr13:0x000000000000000F, $vgpr14_vgpr15:0x0000000000000003, $vgpr16_vgpr17:0x000000000000000C, $vgpr18_vgpr19:0x0000000000000003, $vgpr20_vgpr21:0x000000000000000C, $vgpr22_vgpr23:0x0000000000000003, $vgpr24_vgpr25:0x0000000000000003, $vgpr26_vgpr27:0x000000000000000C, $vgpr28_vgpr29:0x000000000000000C, $vgpr32_vgpr33:0x000000000000000C, $vgpr34_vgpr35:0x0000000000000003, $vgpr36_vgpr37:0x0000000000000003, $vgpr38_vgpr39:0x0000000000000003, $vgpr40_vgpr41:0x000000000000000F, $vgpr42_vgpr43:0x000000000000000F, $vgpr44_vgpr45:0x000000000000000F, $vgpr46_vgpr47:0x000000000000000F, $vgpr48_vgpr49:0x0000000000000003, $vgpr50_vgpr51:0x0000000000000003, $vgpr52_vgpr53:0x0000000000000003, $vgpr54_vgpr55:0x0000000000000003, $vgpr56_vgpr57:0x000000000000000F, $vgpr58_vgpr59:0x000000000000000F, $vgpr60_vgpr61:0x000000000000000F, $vgpr62_vgpr63:0x000000000000000F, $sgpr0_sgpr1_sgpr2_sgpr3
; GFX90A-NEXT: {{ $}}
- ; GFX90A-NEXT: renamable $vgpr2 = V_OR_B32_e32 $vgpr14, killed $vgpr24, implicit $exec
+ ; GFX90A-NEXT: renamable $vgpr2 = V_OR_B32_e32 $vgpr54, killed $vgpr24, implicit $exec
; GFX90A-NEXT: renamable $vgpr22 = V_OR_B32_e32 killed $vgpr2, killed $vgpr22, implicit $exec
; GFX90A-NEXT: renamable $vgpr23 = AV_MOV_B32_IMM_PSEUDO 0, implicit $exec
- ; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr23, renamable $vgpr22_vgpr23, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
+ ; GFX90A-NEXT: renamable $vgpr2 = AV_MOV_B32_IMM_PSEUDO -1, implicit $exec
+ ; GFX90A-NEXT: DS_WRITE_B64_gfx9 killed renamable $vgpr2, killed renamable $vgpr22_vgpr23, 0, 0, implicit $exec :: (store (s64) into `ptr addrspace(3) null`, addrspace 3)
; GFX90A-NEXT: renamable $sgpr12_sgpr13 = S_MOV_B64 0
; GFX90A-NEXT: S_BRANCH %bb.69
bb:
diff --git a/llvm/test/CodeGen/AMDGPU/cf-loop-on-constant.ll b/llvm/test/CodeGen/AMDGPU/cf-loop-on-constant.ll
index b5352bef50b1e..55d65b3f19541 100644
--- a/llvm/test/CodeGen/AMDGPU/cf-loop-on-constant.ll
+++ b/llvm/test/CodeGen/AMDGPU/cf-loop-on-constant.ll
@@ -365,7 +365,7 @@ for.body:
define amdgpu_kernel void @loop_arg_0(ptr addrspace(3) %ptr, i32 %n) nounwind {
; GCN-LABEL: loop_arg_0:
; GCN: ; %bb.0: ; %entry
-; GCN-NEXT: v_mov_b32_e32 v0, 0
+; GCN-NEXT: v_mov_b32_e32 v0, -1
; GCN-NEXT: s_mov_b32 m0, -1
; GCN-NEXT: ds_read_u8 v0, v0
; GCN-NEXT: s_load_dword s4, s[4:5], 0x9
@@ -401,7 +401,7 @@ define amdgpu_kernel void @loop_arg_0(ptr addrspace(3) %ptr, i32 %n) nounwind {
; GCN_DBG-NEXT: ; implicit-def: $vgpr2 : SGPR spill to VGPR lane
; GCN_DBG-NEXT: s_waitcnt lgkmcnt(0)
; GCN_DBG-NEXT: v_writelane_b32 v2, s0, 0
-; GCN_DBG-NEXT: v_mov_b32_e32 v0, 0
+; GCN_DBG-NEXT: v_mov_b32_e32 v0, -1
; GCN_DBG-NEXT: s_mov_b32 m0, -1
; GCN_DBG-NEXT: ds_read_u8 v0, v0
; GCN_DBG-NEXT: s_waitcnt lgkmcnt(0)
diff --git a/llvm/test/CodeGen/AMDGPU/chain-hi-to-lo.ll b/llvm/test/CodeGen/AMDGPU/chain-hi-to-lo.ll
index dae77d19c1235..6c7a72de5eb26 100644
--- a/llvm/test/CodeGen/AMDGPU/chain-hi-to-lo.ll
+++ b/llvm/test/CodeGen/AMDGPU/chain-hi-to-lo.ll
@@ -13,8 +13,8 @@ define <2 x half> @chain_hi_to_lo_private() {
; GFX900: ; %bb.0: ; %bb
; GFX900-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX900-NEXT: buffer_load_ushort v0, off, s[0:3], 0 offset:2
-; GFX900-NEXT: s_nop 0
-; GFX900-NEXT: buffer_load_short_d16_hi v0, off, s[0:3], 0
+; GFX900-NEXT: v_mov_b32_e32 v1, -1
+; GFX900-NEXT: buffer_load_short_d16_hi v0, v1, s[0:3], 0 offen
; GFX900-NEXT: s_waitcnt vmcnt(0)
; GFX900-NEXT: s_setpc_b64 s[30:31]
;
@@ -23,7 +23,7 @@ define <2 x half> @chain_hi_to_lo_private() {
; FLATSCR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; FLATSCR-NEXT: s_mov_b32 s0, 2
; FLATSCR-NEXT: scratch_load_ushort v0, off, s0
-; FLATSCR-NEXT: s_mov_b32 s0, 0
+; FLATSCR-NEXT: s_mov_b32 s0, -1
; FLATSCR-NEXT: scratch_load_short_d16_hi v0, off, s0
; FLATSCR-NEXT: s_waitcnt vmcnt(0)
; FLATSCR-NEXT: s_setpc_b64 s[30:31]
@@ -31,9 +31,9 @@ define <2 x half> @chain_hi_to_lo_private() {
; GFX10_DEFAULT-LABEL: chain_hi_to_lo_private:
; GFX10_DEFAULT: ; %bb.0: ; %bb
; GFX10_DEFAULT-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX10_DEFAULT-NEXT: s_clause 0x1
; GFX10_DEFAULT-NEXT: buffer_load_ushort v0, off, s[0:3], 0 offset:2
-; GFX10_DEFAULT-NEXT: buffer_load_short_d16_hi v0, off, s[0:3], 0
+; GFX10_DEFAULT-NEXT: v_mov_b32_e32 v1, -1
+; GFX10_DEFAULT-NEXT: buffer_load_short_d16_hi v0, v1, s[0:3], 0 offen
; GFX10_DEFAULT-NEXT: s_waitcnt vmcnt(0)
; GFX10_DEFAULT-NEXT: s_setpc_b64 s[30:31]
;
@@ -43,7 +43,7 @@ define <2 x half> @chain_hi_to_lo_private() {
; FLATSCR_GFX10-NEXT: s_mov_b32 s0, 2
; FLATSCR_GFX10-NEXT: scratch_load_ushort v0, off, s0
; FLATSCR_GFX10-NEXT: s_waitcnt_depctr 0xffe3
-; FLATSCR_GFX10-NEXT: s_mov_b32 s0, 0
+; FLATSCR_GFX10-NEXT: s_mov_b32 s0, -1
; FLATSCR_GFX10-NEXT: scratch_load_short_d16_hi v0, off, s0
; FLATSCR_GFX10-NEXT: s_waitcnt vmcnt(0)
; FLATSCR_GFX10-NEXT: s_setpc_b64 s[30:31]
@@ -53,7 +53,7 @@ define <2 x half> @chain_hi_to_lo_private() {
; GFX11-TRUE16-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX11-TRUE16-NEXT: s_mov_b32 s0, 2
; GFX11-TRUE16-NEXT: scratch_load_d16_b16 v0, off, s0
-; GFX11-TRUE16-NEXT: s_mov_b32 s0, 0
+; GFX11-TRUE16-NEXT: s_mov_b32 s0, -1
; GFX11-TRUE16-NEXT: scratch_load_d16_hi_b16 v0, off, s0
; GFX11-TRUE16-NEXT: s_waitcnt vmcnt(0)
; GFX11-TRUE16-NEXT: s_setpc_b64 s[30:31]
@@ -63,7 +63,7 @@ define <2 x half> @chain_hi_to_lo_private() {
; GFX11-FAKE16-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX11-FAKE16-NEXT: s_mov_b32 s0, 2
; GFX11-FAKE16-NEXT: scratch_load_u16 v0, off, s0
-; GFX11-FAKE16-NEXT: s_mov_b32 s0, 0
+; GFX11-FAKE16-NEXT: s_mov_b32 s0, -1
; GFX11-FAKE16-NEXT: scratch_load_d16_hi_b16 v0, off, s0
; GFX11-FAKE16-NEXT: s_waitcnt vmcnt(0)
; GFX11-FAKE16-NEXT: s_setpc_b64 s[30:31]
@@ -207,8 +207,9 @@ define <2 x half> @chain_hi_to_lo_group() {
; GCN-LABEL: chain_hi_to_lo_group:
; GCN: ; %bb.0: ; %bb
; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GCN-NEXT: v_mov_b32_e32 v1, 0
-; GCN-NEXT: ds_read_u16 v0, v1 offset:2
+; GCN-NEXT: v_mov_b32_e32 v0, 0
+; GCN-NEXT: ds_read_u16 v0, v0 offset:2
+; GCN-NEXT: v_mov_b32_e32 v1, -1
; GCN-NEXT: s_waitcnt lgkmcnt(0)
; GCN-NEXT: ds_read_u16_d16_hi v0, v1
; GCN-NEXT: s_waitcnt lgkmcnt(0)
@@ -217,8 +218,9 @@ define <2 x half> @chain_hi_to_lo_group() {
; GFX10-LABEL: chain_hi_to_lo_group:
; GFX10: ; %bb.0: ; %bb
; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX10-NEXT: v_mov_b32_e32 v1, 0
-; GFX10-NEXT: ds_read_u16 v0, v1 offset:2
+; GFX10-NEXT: v_mov_b32_e32 v0, 0
+; GFX10-NEXT: v_mov_b32_e32 v1, -1
+; GFX10-NEXT: ds_read_u16 v0, v0 offset:2
; GFX10-NEXT: s_waitcnt lgkmcnt(0)
; GFX10-NEXT: ds_read_u16_d16_hi v0, v1
; GFX10-NEXT: s_waitcnt lgkmcnt(0)
@@ -227,8 +229,8 @@ define <2 x half> @chain_hi_to_lo_group() {
; GFX11-TRUE16-LABEL: chain_hi_to_lo_group:
; GFX11-TRUE16: ; %bb.0: ; %bb
; GFX11-TRUE16-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX11-TRUE16-NEXT: v_mov_b32_e32 v1, 0
-; GFX11-TRUE16-NEXT: ds_load_u16_d16 v0, v1 offset:2
+; GFX11-TRUE16-NEXT: v_dual_mov_b32 v0, 0 :: v_dual_mov_b32 v1, -1
+; GFX11-TRUE16-NEXT: ds_load_u16_d16 v0, v0 offset:2
; GFX11-TRUE16-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-TRUE16-NEXT: ds_load_u16_d16_hi v0, v1
; GFX11-TRUE16-NEXT: s_waitcnt lgkmcnt(0)
@@ -237,8 +239,8 @@ define <2 x half> @chain_hi_to_lo_group() {
; GFX11-FAKE16-LABEL: chain_hi_to_lo_group:
; GFX11-FAKE16: ; %bb.0: ; %bb
; GFX11-FAKE16-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX11-FAKE16-NEXT: v_mov_b32_e32 v1, 0
-; GFX11-FAKE16-NEXT: ds_load_u16 v0, v1 offset:2
+; GFX11-FAKE16-NEXT: v_dual_mov_b32 v0, 0 :: v_dual_mov_b32 v1, -1
+; GFX11-FAKE16-NEXT: ds_load_u16 v0, v0 offset:2
; GFX11-FAKE16-NEXT: s_waitcnt lgkmcnt(0)
; GFX11-FAKE16-NEXT: ds_load_u16_d16_hi v0, v1
; GFX11-FAKE16-NEXT: s_waitcnt lgkmcnt(0)
diff --git a/llvm/test/CodeGen/AMDGPU/collapse-endcf.ll b/llvm/test/CodeGen/AMDGPU/collapse-endcf.ll
index b1557b33110b9..f8b3281d8c629 100644
--- a/llvm/test/CodeGen/AMDGPU/collapse-endcf.ll
+++ b/llvm/test/CodeGen/AMDGPU/collapse-endcf.ll
@@ -35,7 +35,7 @@ define amdgpu_kernel void @simple_nested_if(ptr addrspace(1) nocapture %arg) {
; GCN-NEXT: .LBB0_3: ; %bb.outer.end
; GCN-NEXT: s_or_b64 exec, exec, s[6:7]
; GCN-NEXT: v_mov_b32_e32 v0, 3
-; GCN-NEXT: v_mov_b32_e32 v1, 0
+; GCN-NEXT: v_mov_b32_e32 v1, -1
; GCN-NEXT: s_mov_b32 m0, -1
; GCN-NEXT: ds_write_b32 v1, v0
; GCN-NEXT: s_endpgm
@@ -142,7 +142,7 @@ define amdgpu_kernel void @simple_nested_if(ptr addrspace(1) nocapture %arg) {
; GCN-O0-NEXT: v_readlane_b32 s1, v4, 3
; GCN-O0-NEXT: s_or_b64 exec, exec, s[0:1]
; GCN-O0-NEXT: v_mov_b32_e32 v1, 3
-; GCN-O0-NEXT: v_mov_b32_e32 v0, 0
+; GCN-O0-NEXT: v_mov_b32_e32 v0, -1
; GCN-O0-NEXT: s_mov_b32 m0, -1
; GCN-O0-NEXT: ds_write_b32 v0, v1
; GCN-O0-NEXT: s_endpgm
@@ -204,7 +204,7 @@ define amdgpu_kernel void @uncollapsable_nested_if(ptr addrspace(1) nocapture %a
; GCN-NEXT: s_or_b64 exec, exec, s[6:7]
; GCN-NEXT: s_waitcnt expcnt(0)
; GCN-NEXT: v_mov_b32_e32 v0, 3
-; GCN-NEXT: v_mov_b32_e32 v1, 0
+; GCN-NEXT: v_mov_b32_e32 v1, -1
; GCN-NEXT: s_mov_b32 m0, -1
; GCN-NEXT: ds_write_b32 v1, v0
; GCN-NEXT: s_endpgm
@@ -332,7 +332,7 @@ define amdgpu_kernel void @uncollapsable_nested_if(ptr addrspace(1) nocapture %a
; GCN-O0-NEXT: s_branch .LBB1_3
; GCN-O0-NEXT: .LBB1_5: ; %bb.outer.end
; GCN-O0-NEXT: v_mov_b32_e32 v1, 3
-; GCN-O0-NEXT: v_mov_b32_e32 v0, 0
+; GCN-O0-NEXT: v_mov_b32_e32 v0, -1
; GCN-O0-NEXT: s_mov_b32 m0, -1
; GCN-O0-NEXT: ds_write_b32 v0, v1
; GCN-O0-NEXT: s_endpgm
@@ -378,9 +378,10 @@ define amdgpu_kernel void @nested_if_if_else(ptr addrspace(1) nocapture %arg) {
; GCN-NEXT: s_and_saveexec_b64 s[2:3], vcc
; GCN-NEXT: s_cbranch_execz .LBB2_5
; GCN-NEXT: ; %bb.1: ; %bb.outer.then
-; GCN-NEXT: v_mov_b32_e32 v4, s1
-; GCN-NEXT: v_add_i32_e32 v3, vcc, s0, v1
-; GCN-NEXT: v_addc_u32_e32 v4, vcc, 0, v4, vcc
+; GCN-NEXT: s_waitcnt expcnt(0)
+; GCN-NEXT: v_mov_b32_e32 v2, s1
+; GCN-NEXT: v_add_i32_e32 v1, vcc, s0, v1
+; GCN-NEXT: v_addc_u32_e32 v2, vcc, 0, v2, vcc
; GCN-NEXT: v_cmp_ne_u32_e32 vcc, 2, v0
; GCN-NEXT: s_and_saveexec_b64 s[0:1], vcc
; GCN-NEXT: s_xor_b64 s[0:1], exec, s[0:1]
@@ -391,8 +392,8 @@ define amdgpu_kernel void @nested_if_if_else(ptr addrspace(1) nocapture %arg) {
; GCN-NEXT: s_mov_b32 s4, s6
; GCN-NEXT: s_mov_b32 s5, s6
; GCN-NEXT: v_mov_b32_e32 v0, 2
-; GCN-NEXT: buffer_store_dword v0, v[3:4], s[4:7], 0 addr64 offset:8
-; GCN-NEXT: ; implicit-def: $vgpr3_vgpr4
+; GCN-NEXT: buffer_store_dword v0, v[1:2], s[4:7], 0 addr64 offset:8
+; GCN-NEXT: ; implicit-def: $vgpr1_vgpr2
; GCN-NEXT: .LBB2_3: ; %Flow
; GCN-NEXT: s_andn2_saveexec_b64 s[0:1], s[0:1]
; GCN-NEXT: s_cbranch_execz .LBB2_5
@@ -403,13 +404,14 @@ define amdgpu_kernel void @nested_if_if_else(ptr addrspace(1) nocapture %arg) {
; GCN-NEXT: s_mov_b32 s5, s6
; GCN-NEXT: s_waitcnt expcnt(0)
; GCN-NEXT: v_mov_b32_e32 v0, 1
-; GCN-NEXT: buffer_store_dword v0, v[3:4], s[4:7], 0 addr64 offset:4
+; GCN-NEXT: buffer_store_dword v0, v[1:2], s[4:7], 0 addr64 offset:4
; GCN-NEXT: .LBB2_5: ; %bb.outer.end
; GCN-NEXT: s_or_b64 exec, exec, s[2:3]
; GCN-NEXT: s_waitcnt expcnt(0)
; GCN-NEXT: v_mov_b32_e32 v0, 3
+; GCN-NEXT: v_mov_b32_e32 v1, -1
; GCN-NEXT: s_mov_b32 m0, -1
-; GCN-NEXT: ds_write_b32 v2, v0
+; GCN-NEXT: ds_write_b32 v1, v0
; GCN-NEXT: s_endpgm
;
; GCN-O0-LABEL: nested_if_if_else:
@@ -558,7 +560,7 @@ define amdgpu_kernel void @nested_if_if_else(ptr addrspace(1) nocapture %arg) {
; GCN-O0-NEXT: v_readlane_b32 s1, v4, 3
; GCN-O0-NEXT: s_or_b64 exec, exec, s[0:1]
; GCN-O0-NEXT: v_mov_b32_e32 v1, 3
-; GCN-O0-NEXT: v_mov_b32_e32 v0, 0
+; GCN-O0-NEXT: v_mov_b32_e32 v0, -1
; GCN-O0-NEXT: s_mov_b32 m0, -1
; GCN-O0-NEXT: ds_write_b32 v0, v1
; GCN-O0-NEXT: s_endpgm
@@ -647,7 +649,7 @@ define amdgpu_kernel void @nested_if_else_if(ptr addrspace(1) nocapture %arg) {
; GCN-NEXT: s_or_b64 exec, exec, s[4:5]
; GCN-NEXT: s_waitcnt expcnt(0)
; GCN-NEXT: v_mov_b32_e32 v0, 3
-; GCN-NEXT: v_mov_b32_e32 v1, 0
+; GCN-NEXT: v_mov_b32_e32 v1, -1
; GCN-NEXT: s_mov_b32 m0, -1
; GCN-NEXT: ds_write_b32 v1, v0
; GCN-NEXT: s_endpgm
@@ -838,7 +840,7 @@ define amdgpu_kernel void @nested_if_else_if(ptr addrspace(1) nocapture %arg) {
; GCN-O0-NEXT: v_readlane_b32 s1, v6, 3
; GCN-O0-NEXT: s_or_b64 exec, exec, s[0:1]
; GCN-O0-NEXT: v_mov_b32_e32 v1, 3
-; GCN-O0-NEXT: v_mov_b32_e32 v0, 0
+; GCN-O0-NEXT: v_mov_b32_e32 v0, -1
; GCN-O0-NEXT: s_mov_b32 m0, -1
; GCN-O0-NEXT: ds_write_b32 v0, v1
; GCN-O0-NEXT: s_endpgm
diff --git a/llvm/test/CodeGen/AMDGPU/exec-mask-opt-cannot-create-empty-or-backward-segment.ll b/llvm/test/CodeGen/AMDGPU/exec-mask-opt-cannot-create-empty-or-backward-segment.ll
index 72913d2596ebf..436ec6aed5353 100644
--- a/llvm/test/CodeGen/AMDGPU/exec-mask-opt-cannot-create-empty-or-backward-segment.ll
+++ b/llvm/test/CodeGen/AMDGPU/exec-mask-opt-cannot-create-empty-or-backward-segment.ll
@@ -31,7 +31,8 @@ define amdgpu_kernel void @cannot_create_empty_or_backwards_segment(i1 %arg, i1
; CHECK-NEXT: s_and_b64 s[4:5], exec, s[4:5]
; CHECK-NEXT: s_and_b64 s[6:7], exec, s[10:11]
; CHECK-NEXT: v_cmp_ne_u32_e64 s[0:1], 1, v0
-; CHECK-NEXT: v_mov_b32_e32 v0, 0
+; CHECK-NEXT: v_mov_b32_e32 v0, -1
+; CHECK-NEXT: v_mov_b32_e32 v1, 0
; CHECK-NEXT: s_branch .LBB0_3
; CHECK-NEXT: .LBB0_1: ; in Loop: Header=BB0_3 Depth=1
; CHECK-NEXT: s_mov_b64 s[18:19], 0
@@ -98,8 +99,8 @@ define amdgpu_kernel void @cannot_create_empty_or_backwards_segment(i1 %arg, i1
; CHECK-NEXT: s_cbranch_vccnz .LBB0_15
; CHECK-NEXT: ; %bb.14: ; %bb15
; CHECK-NEXT: ; in Loop: Header=BB0_3 Depth=1
-; CHECK-NEXT: buffer_store_dword v0, off, s[24:27], 0 offset:4
-; CHECK-NEXT: buffer_store_dword v0, off, s[24:27], 0
+; CHECK-NEXT: buffer_store_dword v1, v0, s[24:27], 0 offen
+; CHECK-NEXT: buffer_store_dword v1, off, s[24:27], 0 offset:3
; CHECK-NEXT: .LBB0_15: ; %Flow
; CHECK-NEXT: ; in Loop: Header=BB0_3 Depth=1
; CHECK-NEXT: s_mov_b64 s[20:21], 0
diff --git a/llvm/test/CodeGen/AMDGPU/hazard-recognizer-src-shared-base.ll b/llvm/test/CodeGen/AMDGPU/hazard-recognizer-src-shared-base.ll
index 1db476300c261..5e95ac9b619fe 100644
--- a/llvm/test/CodeGen/AMDGPU/hazard-recognizer-src-shared-base.ll
+++ b/llvm/test/CodeGen/AMDGPU/hazard-recognizer-src-shared-base.ll
@@ -4,11 +4,12 @@
define amdgpu_kernel void @foo() {
; CHECK-LABEL: foo:
; CHECK: ; %bb.0: ; %entry
-; CHECK-NEXT: s_mov_b64 s[0:1], src_shared_base
-; CHECK-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
-; CHECK-NEXT: v_dual_mov_b32 v1, 0 :: v_dual_mov_b32 v2, s1
-; CHECK-NEXT: v_mov_b32_e32 v0, v1
-; CHECK-NEXT: flat_store_b64 v[1:2], v[0:1]
+; CHECK-NEXT: v_mov_b32_e32 v0, 0
+; CHECK-NEXT: v_mov_b32_e32 v2, 0
+; CHECK-NEXT: v_mov_b32_e32 v3, 0
+; CHECK-NEXT: s_delay_alu instid0(VALU_DEP_3)
+; CHECK-NEXT: v_mov_b32_e32 v1, v0
+; CHECK-NEXT: flat_store_b64 v[2:3], v[0:1]
; CHECK-NEXT: s_endpgm
entry:
br label %bb1
diff --git a/llvm/test/CodeGen/AMDGPU/issue130120-eliminate-frame-index.ll b/llvm/test/CodeGen/AMDGPU/issue130120-eliminate-frame-index.ll
index 300124848c1aa..bb28526b8407d 100644
--- a/llvm/test/CodeGen/AMDGPU/issue130120-eliminate-frame-index.ll
+++ b/llvm/test/CodeGen/AMDGPU/issue130120-eliminate-frame-index.ll
@@ -38,24 +38,24 @@ define amdgpu_gfx [13 x i32] @issue130120() {
; CHECK-NEXT: s_cmp_eq_u32 s46, 0
; CHECK-NEXT: s_mov_b32 s49, s48
; CHECK-NEXT: s_mov_b32 s50, s48
-; CHECK-NEXT: s_cselect_b32 s51, 0, s1
-; CHECK-NEXT: s_cselect_b32 s55, 0, s35
+; CHECK-NEXT: s_cselect_b32 s51, -1, s1
+; CHECK-NEXT: s_cselect_b32 s55, -1, s35
; CHECK-NEXT: v_dual_mov_b32 v2, s48 :: v_dual_mov_b32 v3, s49
-; CHECK-NEXT: s_cselect_b32 s52, 0, s2
-; CHECK-NEXT: s_cselect_b32 s56, 0, s36
-; CHECK-NEXT: s_cselect_b32 vcc_lo, 0, s43
+; CHECK-NEXT: s_cselect_b32 s52, -1, s2
+; CHECK-NEXT: s_cselect_b32 s56, -1, s36
+; CHECK-NEXT: s_cselect_b32 vcc_lo, -1, s43
; CHECK-NEXT: v_mov_b32_e32 v4, s50
; CHECK-NEXT: s_cselect_b32 s47, s45, 0xf0
-; CHECK-NEXT: s_cselect_b32 s53, 0, s3
-; CHECK-NEXT: s_cselect_b32 s54, 0, s34
-; CHECK-NEXT: s_cselect_b32 s57, 0, s37
-; CHECK-NEXT: s_cselect_b32 s58, 0, s38
-; CHECK-NEXT: s_cselect_b32 s59, 0, s0
-; CHECK-NEXT: s_cselect_b32 s60, 0, s39
-; CHECK-NEXT: s_cselect_b32 s61, 0, s40
-; CHECK-NEXT: s_cselect_b32 s62, 0, s41
-; CHECK-NEXT: s_cselect_b32 s63, 0, s42
-; CHECK-NEXT: s_cselect_b32 vcc_hi, 0, s44
+; CHECK-NEXT: s_cselect_b32 s53, -1, s3
+; CHECK-NEXT: s_cselect_b32 s54, -1, s34
+; CHECK-NEXT: s_cselect_b32 s57, -1, s37
+; CHECK-NEXT: s_cselect_b32 s58, -1, s38
+; CHECK-NEXT: s_cselect_b32 s59, -1, s0
+; CHECK-NEXT: s_cselect_b32 s60, -1, s39
+; CHECK-NEXT: s_cselect_b32 s61, -1, s40
+; CHECK-NEXT: s_cselect_b32 s62, -1, s41
+; CHECK-NEXT: s_cselect_b32 s63, -1, s42
+; CHECK-NEXT: s_cselect_b32 vcc_hi, -1, s44
; CHECK-NEXT: s_mov_b32 s46, s48
; CHECK-NEXT: scratch_store_b32 off, v0, s51
; CHECK-NEXT: scratch_store_b32 off, v0, s52
diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.iglp.AFLCustomIRMutator.opt.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.iglp.AFLCustomIRMutator.opt.ll
index 50bf632533378..9800138f13f19 100644
--- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.iglp.AFLCustomIRMutator.opt.ll
+++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.iglp.AFLCustomIRMutator.opt.ll
@@ -4,29 +4,30 @@
define amdgpu_kernel void @test_iglp_opt_rev_mfma_gemm(<1 x i64> %L1) {
; GCN-LABEL: test_iglp_opt_rev_mfma_gemm:
; GCN: ; %bb.0: ; %entry
-; GCN-NEXT: v_mov_b32_e32 v0, 0
-; GCN-NEXT: ds_read_b128 v[2:5], v0
+; GCN-NEXT: v_mov_b32_e32 v1, 0
+; GCN-NEXT: ds_read_b128 v[30:33], v1 offset:111
+; GCN-NEXT: ds_read_b128 v[26:29], v1 offset:95
+; GCN-NEXT: ds_read_b128 v[22:25], v1 offset:79
+; GCN-NEXT: ds_read_b128 v[2:5], v1 offset:15
+; GCN-NEXT: v_mov_b32_e32 v0, -1
; GCN-NEXT: s_load_dwordx2 s[0:1], s[8:9], 0x0
-; GCN-NEXT: ds_read_b128 v[30:33], v0 offset:112
-; GCN-NEXT: ds_read_b128 v[26:29], v0 offset:96
-; GCN-NEXT: ds_read_b128 v[22:25], v0 offset:80
-; GCN-NEXT: ds_read_b128 v[18:21], v0 offset:64
-; GCN-NEXT: ds_read_b128 v[6:9], v0 offset:16
-; GCN-NEXT: ds_read_b128 v[10:13], v0 offset:32
-; GCN-NEXT: ds_read_b128 v[14:17], v0 offset:48
+; GCN-NEXT: ds_read_b128 v[10:13], v1 offset:31
+; GCN-NEXT: ds_read_b128 v[14:17], v1 offset:47
+; GCN-NEXT: ds_read_b128 v[18:21], v1 offset:63
+; GCN-NEXT: ds_read_b128 v[6:9], v0
; GCN-NEXT: s_waitcnt lgkmcnt(0)
-; GCN-NEXT: ds_write_b128 v0, v[2:5]
+; GCN-NEXT: ds_write_b128 v1, v[2:5] offset:15
; GCN-NEXT: v_mov_b32_e32 v2, 0
; GCN-NEXT: v_mov_b32_e32 v3, 0
; GCN-NEXT: s_cmp_lg_u64 s[0:1], 0
; GCN-NEXT: ; iglp_opt mask(0x00000001)
-; GCN-NEXT: ds_write_b128 v0, v[30:33] offset:112
-; GCN-NEXT: ds_write_b128 v0, v[26:29] offset:96
-; GCN-NEXT: ds_write_b128 v0, v[22:25] offset:80
-; GCN-NEXT: ds_write_b128 v0, v[18:21] offset:64
-; GCN-NEXT: ds_write_b128 v0, v[14:17] offset:48
-; GCN-NEXT: ds_write_b128 v0, v[10:13] offset:32
-; GCN-NEXT: ds_write_b128 v0, v[6:9] offset:16
+; GCN-NEXT: ds_write_b128 v1, v[30:33] offset:111
+; GCN-NEXT: ds_write_b128 v1, v[26:29] offset:95
+; GCN-NEXT: ds_write_b128 v1, v[22:25] offset:79
+; GCN-NEXT: ds_write_b128 v1, v[18:21] offset:63
+; GCN-NEXT: ds_write_b128 v1, v[14:17] offset:47
+; GCN-NEXT: ds_write_b128 v1, v[10:13] offset:31
+; GCN-NEXT: ds_write_b128 v0, v[6:9]
; GCN-NEXT: ds_write_b64 v0, v[2:3]
; GCN-NEXT: s_endpgm
entry:
diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.set.inactive.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.set.inactive.ll
index 32cbe6d9cb73c..55c39c27c908b 100644
--- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.set.inactive.ll
+++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.set.inactive.ll
@@ -451,7 +451,7 @@ define amdgpu_kernel void @set_inactive_p3(ptr addrspace(1) %out, ptr addrspace(
; GCN-NEXT: s_waitcnt lgkmcnt(0)
; GCN-NEXT: v_mov_b32_e32 v1, s6
; GCN-NEXT: s_or_saveexec_b64 s[4:5], -1
-; GCN-NEXT: v_cndmask_b32_e64 v0, 0, v1, s[4:5]
+; GCN-NEXT: v_cndmask_b32_e64 v0, -1, v1, s[4:5]
; GCN-NEXT: s_mov_b64 exec, s[4:5]
; GCN-NEXT: v_mov_b32_e32 v1, v0
; GCN-NEXT: buffer_store_dword v1, off, s[0:3], 0
@@ -472,7 +472,7 @@ define amdgpu_kernel void @set_inactive_p5(ptr addrspace(1) %out, ptr addrspace(
; GCN-NEXT: s_waitcnt lgkmcnt(0)
; GCN-NEXT: v_mov_b32_e32 v1, s6
; GCN-NEXT: s_or_saveexec_b64 s[4:5], -1
-; GCN-NEXT: v_cndmask_b32_e64 v0, 0, v1, s[4:5]
+; GCN-NEXT: v_cndmask_b32_e64 v0, -1, v1, s[4:5]
; GCN-NEXT: s_mov_b64 exec, s[4:5]
; GCN-NEXT: v_mov_b32_e32 v1, v0
; GCN-NEXT: buffer_store_dword v1, off, s[0:3], 0
diff --git a/llvm/test/CodeGen/AMDGPU/load-hi16.ll b/llvm/test/CodeGen/AMDGPU/load-hi16.ll
index 825ae8060aaa6..348025a1ce20b 100644
--- a/llvm/test/CodeGen/AMDGPU/load-hi16.ll
+++ b/llvm/test/CodeGen/AMDGPU/load-hi16.ll
@@ -12,7 +12,7 @@ define <2 x i16> @load_local_lo_hi_v2i16_multi_use_lo(ptr addrspace(3) noalias %
; GFX900-NEXT: s_waitcnt lgkmcnt(0)
; GFX900-NEXT: v_mov_b32_e32 v1, v2
; GFX900-NEXT: ds_read_u16_d16_hi v1, v0 offset:16
-; GFX900-NEXT: v_mov_b32_e32 v0, 0
+; GFX900-NEXT: v_mov_b32_e32 v0, -1
; GFX900-NEXT: ds_write_b16 v0, v2
; GFX900-NEXT: s_waitcnt lgkmcnt(1)
; GFX900-NEXT: v_mov_b32_e32 v0, v1
@@ -25,7 +25,7 @@ define <2 x i16> @load_local_lo_hi_v2i16_multi_use_lo(ptr addrspace(3) noalias %
; GFX906-NEXT: ds_read_u16 v1, v0
; GFX906-NEXT: ds_read_u16 v0, v0 offset:16
; GFX906-NEXT: s_mov_b32 s4, 0x5040100
-; GFX906-NEXT: v_mov_b32_e32 v2, 0
+; GFX906-NEXT: v_mov_b32_e32 v2, -1
; GFX906-NEXT: s_waitcnt lgkmcnt(1)
; GFX906-NEXT: ds_write_b16 v2, v1
; GFX906-NEXT: s_waitcnt lgkmcnt(1)
@@ -39,7 +39,7 @@ define <2 x i16> @load_local_lo_hi_v2i16_multi_use_lo(ptr addrspace(3) noalias %
; GFX803-NEXT: s_mov_b32 m0, -1
; GFX803-NEXT: ds_read_u16 v1, v0
; GFX803-NEXT: ds_read_u16 v0, v0 offset:16
-; GFX803-NEXT: v_mov_b32_e32 v2, 0
+; GFX803-NEXT: v_mov_b32_e32 v2, -1
; GFX803-NEXT: s_waitcnt lgkmcnt(1)
; GFX803-NEXT: ds_write_b16 v2, v1
; GFX803-NEXT: s_waitcnt lgkmcnt(1)
@@ -55,7 +55,7 @@ define <2 x i16> @load_local_lo_hi_v2i16_multi_use_lo(ptr addrspace(3) noalias %
; GFX900-FLATSCR-NEXT: s_waitcnt lgkmcnt(0)
; GFX900-FLATSCR-NEXT: v_mov_b32_e32 v1, v2
; GFX900-FLATSCR-NEXT: ds_read_u16_d16_hi v1, v0 offset:16
-; GFX900-FLATSCR-NEXT: v_mov_b32_e32 v0, 0
+; GFX900-FLATSCR-NEXT: v_mov_b32_e32 v0, -1
; GFX900-FLATSCR-NEXT: ds_write_b16 v0, v2
; GFX900-FLATSCR-NEXT: s_waitcnt lgkmcnt(1)
; GFX900-FLATSCR-NEXT: v_mov_b32_e32 v0, v1
@@ -78,7 +78,7 @@ define <2 x i16> @load_local_lo_hi_v2i16_multi_use_hi(ptr addrspace(3) noalias %
; GFX900-NEXT: ds_read_u16 v1, v0 offset:16
; GFX900-NEXT: ds_read_u16 v0, v0
; GFX900-NEXT: s_mov_b32 s4, 0x5040100
-; GFX900-NEXT: v_mov_b32_e32 v2, 0
+; GFX900-NEXT: v_mov_b32_e32 v2, -1
; GFX900-NEXT: s_waitcnt lgkmcnt(1)
; GFX900-NEXT: ds_write_b16 v2, v1
; GFX900-NEXT: s_waitcnt lgkmcnt(1)
@@ -92,7 +92,7 @@ define <2 x i16> @load_local_lo_hi_v2i16_multi_use_hi(ptr addrspace(3) noalias %
; GFX906-NEXT: ds_read_u16 v1, v0 offset:16
; GFX906-NEXT: ds_read_u16 v0, v0
; GFX906-NEXT: s_mov_b32 s4, 0x5040100
-; GFX906-NEXT: v_mov_b32_e32 v2, 0
+; GFX906-NEXT: v_mov_b32_e32 v2, -1
; GFX906-NEXT: s_waitcnt lgkmcnt(1)
; GFX906-NEXT: ds_write_b16 v2, v1
; GFX906-NEXT: s_waitcnt lgkmcnt(1)
@@ -106,7 +106,7 @@ define <2 x i16> @load_local_lo_hi_v2i16_multi_use_hi(ptr addrspace(3) noalias %
; GFX803-NEXT: s_mov_b32 m0, -1
; GFX803-NEXT: ds_read_u16 v1, v0 offset:16
; GFX803-NEXT: ds_read_u16 v0, v0
-; GFX803-NEXT: v_mov_b32_e32 v2, 0
+; GFX803-NEXT: v_mov_b32_e32 v2, -1
; GFX803-NEXT: s_waitcnt lgkmcnt(1)
; GFX803-NEXT: ds_write_b16 v2, v1
; GFX803-NEXT: v_lshlrev_b32_e32 v1, 16, v1
@@ -121,7 +121,7 @@ define <2 x i16> @load_local_lo_hi_v2i16_multi_use_hi(ptr addrspace(3) noalias %
; GFX900-FLATSCR-NEXT: ds_read_u16 v1, v0 offset:16
; GFX900-FLATSCR-NEXT: ds_read_u16 v0, v0
; GFX900-FLATSCR-NEXT: s_mov_b32 s0, 0x5040100
-; GFX900-FLATSCR-NEXT: v_mov_b32_e32 v2, 0
+; GFX900-FLATSCR-NEXT: v_mov_b32_e32 v2, -1
; GFX900-FLATSCR-NEXT: s_waitcnt lgkmcnt(1)
; GFX900-FLATSCR-NEXT: ds_write_b16 v2, v1
; GFX900-FLATSCR-NEXT: s_waitcnt lgkmcnt(1)
diff --git a/llvm/test/CodeGen/AMDGPU/load-lo16.ll b/llvm/test/CodeGen/AMDGPU/load-lo16.ll
index 5e5c3bcd37f01..c7a1da95075ab 100644
--- a/llvm/test/CodeGen/AMDGPU/load-lo16.ll
+++ b/llvm/test/CodeGen/AMDGPU/load-lo16.ll
@@ -593,7 +593,7 @@ define void @load_local_lo_v2i16_reghi_vreg_multi_use_lo(ptr addrspace(3) %in, <
; GFX900-MUBUF: ; %bb.0: ; %entry
; GFX900-MUBUF-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX900-MUBUF-NEXT: ds_read_u16 v0, v0
-; GFX900-MUBUF-NEXT: v_mov_b32_e32 v2, 0
+; GFX900-MUBUF-NEXT: v_mov_b32_e32 v2, -1
; GFX900-MUBUF-NEXT: s_mov_b32 s4, 0xffff
; GFX900-MUBUF-NEXT: s_waitcnt lgkmcnt(0)
; GFX900-MUBUF-NEXT: ds_write_b16 v2, v0
@@ -606,7 +606,7 @@ define void @load_local_lo_v2i16_reghi_vreg_multi_use_lo(ptr addrspace(3) %in, <
; GFX906: ; %bb.0: ; %entry
; GFX906-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX906-NEXT: ds_read_u16 v0, v0
-; GFX906-NEXT: v_mov_b32_e32 v2, 0
+; GFX906-NEXT: v_mov_b32_e32 v2, -1
; GFX906-NEXT: s_mov_b32 s4, 0xffff
; GFX906-NEXT: s_waitcnt lgkmcnt(0)
; GFX906-NEXT: ds_write_b16 v2, v0
@@ -620,7 +620,7 @@ define void @load_local_lo_v2i16_reghi_vreg_multi_use_lo(ptr addrspace(3) %in, <
; GFX803-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX803-NEXT: s_mov_b32 m0, -1
; GFX803-NEXT: ds_read_u16 v0, v0
-; GFX803-NEXT: v_mov_b32_e32 v2, 0
+; GFX803-NEXT: v_mov_b32_e32 v2, -1
; GFX803-NEXT: s_mov_b32 s4, 0x3020504
; GFX803-NEXT: s_waitcnt lgkmcnt(0)
; GFX803-NEXT: ds_write_b16 v2, v0
@@ -633,7 +633,7 @@ define void @load_local_lo_v2i16_reghi_vreg_multi_use_lo(ptr addrspace(3) %in, <
; GFX900-FLATSCR: ; %bb.0: ; %entry
; GFX900-FLATSCR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX900-FLATSCR-NEXT: ds_read_u16 v0, v0
-; GFX900-FLATSCR-NEXT: v_mov_b32_e32 v2, 0
+; GFX900-FLATSCR-NEXT: v_mov_b32_e32 v2, -1
; GFX900-FLATSCR-NEXT: s_mov_b32 s0, 0xffff
; GFX900-FLATSCR-NEXT: s_waitcnt lgkmcnt(0)
; GFX900-FLATSCR-NEXT: ds_write_b16 v2, v0
@@ -656,7 +656,7 @@ define void @load_local_lo_v2i16_reghi_vreg_multi_use_hi(ptr addrspace(3) %in, <
; GFX900-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; GFX900-NEXT: v_lshrrev_b32_e32 v2, 16, v1
; GFX900-NEXT: ds_read_u16_d16 v1, v0
-; GFX900-NEXT: v_mov_b32_e32 v0, 0
+; GFX900-NEXT: v_mov_b32_e32 v0, -1
; GFX900-NEXT: ds_write_b16 v0, v2
; GFX900-NEXT: s_waitcnt lgkmcnt(1)
; GFX900-NEXT: global_store_dword v[0:1], v1, off
@@ -669,7 +669,7 @@ define void @load_local_lo_v2i16_reghi_vreg_multi_use_hi(ptr addrspace(3) %in, <
; GFX906-NEXT: ds_read_u16 v0, v0
; GFX906-NEXT: s_mov_b32 s4, 0xffff
; GFX906-NEXT: v_lshrrev_b32_e32 v2, 16, v1
-; GFX906-NEXT: v_mov_b32_e32 v3, 0
+; GFX906-NEXT: v_mov_b32_e32 v3, -1
; GFX906-NEXT: ds_write_b16 v3, v2
; GFX906-NEXT: s_waitcnt lgkmcnt(1)
; GFX906-NEXT: v_bfi_b32 v0, s4, v0, v1
@@ -684,7 +684,7 @@ define void @load_local_lo_v2i16_reghi_vreg_multi_use_hi(ptr addrspace(3) %in, <
; GFX803-NEXT: ds_read_u16 v0, v0
; GFX803-NEXT: s_mov_b32 s4, 0x3020504
; GFX803-NEXT: v_lshrrev_b32_e32 v2, 16, v1
-; GFX803-NEXT: v_mov_b32_e32 v3, 0
+; GFX803-NEXT: v_mov_b32_e32 v3, -1
; GFX803-NEXT: ds_write_b16 v3, v2
; GFX803-NEXT: s_waitcnt lgkmcnt(1)
; GFX803-NEXT: v_perm_b32 v0, v0, v1, s4
diff --git a/llvm/test/CodeGen/AMDGPU/lower-buffer-fat-pointers-constants.ll b/llvm/test/CodeGen/AMDGPU/lower-buffer-fat-pointers-constants.ll
index a09e392b89e63..f3eb2c6c34858 100644
--- a/llvm/test/CodeGen/AMDGPU/lower-buffer-fat-pointers-constants.ll
+++ b/llvm/test/CodeGen/AMDGPU/lower-buffer-fat-pointers-constants.ll
@@ -193,7 +193,7 @@ define i32 @ptrtoint_very_short() {
define <2 x i160> @ptrtoint_vec() {
; CHECK-LABEL: define <2 x i160> @ptrtoint_vec
; CHECK-SAME: () #[[ATTR0]] {
-; CHECK-NEXT: ret <2 x i160> zeroinitializer
+; CHECK-NEXT: ret <2 x i160> ptrtoint (<2 x ptr addrspace(7)> zeroinitializer to <2 x i160>)
;
ret <2 x i160> ptrtoint (<2 x ptr addrspace(7)> zeroinitializer to <2 x i160>)
}
diff --git a/llvm/test/CodeGen/AMDGPU/lower-mem-intrinsics.ll b/llvm/test/CodeGen/AMDGPU/lower-mem-intrinsics.ll
index 5a9f53ec0077d..a408f4ca9a0da 100644
--- a/llvm/test/CodeGen/AMDGPU/lower-mem-intrinsics.ll
+++ b/llvm/test/CodeGen/AMDGPU/lower-mem-intrinsics.ll
@@ -2362,7 +2362,7 @@ define void @test_umin(i64 %0, i64 %idxprom, ptr %x, ptr %y) {
; OPT-LABEL: @test_umin(
; OPT-NEXT: entry:
; OPT-NEXT: [[ARRAYIDX:%.*]] = getelementptr [32 x [8 x i64]], ptr [[Y:%.*]], i64 0, i64 [[IDXPROM:%.*]]
-; OPT-NEXT: [[SPEC_SELECT:%.*]] = tail call i64 @llvm.umin.i64(i64 sub (i64 ptrtoint (ptr addrspacecast (ptr addrspace(4) inttoptr (i64 32 to ptr addrspace(4)) to ptr) to i64), i64 ptrtoint (ptr addrspacecast (ptr addrspace(4) null to ptr) to i64)), i64 56)
+; OPT-NEXT: [[SPEC_SELECT:%.*]] = tail call i64 @llvm.umin.i64(i64 sub (i64 ptrtoint (ptr addrspacecast (ptr addrspace(4) inttoptr (i64 32 to ptr addrspace(4)) to ptr) to i64), i64 ptrtoint (ptr null to i64)), i64 56)
; OPT-NEXT: [[TMP2:%.*]] = and i64 [[SPEC_SELECT]], 15
; OPT-NEXT: [[TMP3:%.*]] = sub i64 [[SPEC_SELECT]], [[TMP2]]
; OPT-NEXT: [[TMP4:%.*]] = icmp ne i64 [[TMP3]], 0
diff --git a/llvm/test/CodeGen/AMDGPU/machine-sink-loop-var-out-of-divergent-loop-swdev407790.ll b/llvm/test/CodeGen/AMDGPU/machine-sink-loop-var-out-of-divergent-loop-swdev407790.ll
index 34a9624cb19eb..654f856825c91 100644
--- a/llvm/test/CodeGen/AMDGPU/machine-sink-loop-var-out-of-divergent-loop-swdev407790.ll
+++ b/llvm/test/CodeGen/AMDGPU/machine-sink-loop-var-out-of-divergent-loop-swdev407790.ll
@@ -12,6 +12,7 @@ define void @machinesink_loop_variable_out_of_divergent_loop(i32 %arg, i1 %cmp49
; CHECK-NEXT: v_and_b32_e32 v3, 1, v3
; CHECK-NEXT: s_mov_b32 s6, 0
; CHECK-NEXT: v_cmp_ne_u32_e64 s4, 1, v1
+; CHECK-NEXT: v_mov_b32_e32 v1, -1
; CHECK-NEXT: v_cmp_eq_u32_e32 vcc_lo, 1, v3
; CHECK-NEXT: s_inst_prefetch 0x1
; CHECK-NEXT: s_branch .LBB0_3
@@ -19,11 +20,11 @@ define void @machinesink_loop_variable_out_of_divergent_loop(i32 %arg, i1 %cmp49
; CHECK-NEXT: .LBB0_1: ; %Flow
; CHECK-NEXT: ; in Loop: Header=BB0_3 Depth=1
; CHECK-NEXT: s_or_b32 exec_lo, exec_lo, s8
-; CHECK-NEXT: v_add_nc_u32_e32 v3, -4, v3
+; CHECK-NEXT: v_add_nc_u32_e32 v4, -4, v4
; CHECK-NEXT: .LBB0_2: ; %Flow1
; CHECK-NEXT: ; in Loop: Header=BB0_3 Depth=1
; CHECK-NEXT: s_or_b32 exec_lo, exec_lo, s7
-; CHECK-NEXT: v_cmp_ne_u32_e64 s5, 0, v1
+; CHECK-NEXT: v_cmp_ne_u32_e64 s5, 0, v3
; CHECK-NEXT: ;;#ASMSTART
; CHECK-NEXT: ; j lastloop entry
; CHECK-NEXT: ;;#ASMEND
@@ -33,8 +34,8 @@ define void @machinesink_loop_variable_out_of_divergent_loop(i32 %arg, i1 %cmp49
; CHECK-NEXT: .LBB0_3: ; %for.body33
; CHECK-NEXT: ; =>This Loop Header: Depth=1
; CHECK-NEXT: ; Child Loop BB0_6 Depth 2
+; CHECK-NEXT: v_mov_b32_e32 v4, 0
; CHECK-NEXT: v_mov_b32_e32 v3, 0
-; CHECK-NEXT: v_mov_b32_e32 v1, 0
; CHECK-NEXT: s_and_saveexec_b32 s7, s4
; CHECK-NEXT: s_cbranch_execz .LBB0_2
; CHECK-NEXT: ; %bb.4: ; %for.body51.preheader
@@ -50,23 +51,23 @@ define void @machinesink_loop_variable_out_of_divergent_loop(i32 %arg, i1 %cmp49
; CHECK-NEXT: ;;#ASMSTART
; CHECK-NEXT: ; backedge
; CHECK-NEXT: ;;#ASMEND
-; CHECK-NEXT: v_add_nc_u32_e32 v3, s9, v2
-; CHECK-NEXT: v_cmp_ge_u32_e64 s5, v3, v0
+; CHECK-NEXT: v_add_nc_u32_e32 v4, s9, v2
+; CHECK-NEXT: v_cmp_ge_u32_e64 s5, v4, v0
; CHECK-NEXT: s_or_b32 s8, s5, s8
; CHECK-NEXT: s_andn2_b32 exec_lo, exec_lo, s8
; CHECK-NEXT: s_cbranch_execz .LBB0_1
; CHECK-NEXT: .LBB0_6: ; %for.body51
; CHECK-NEXT: ; Parent Loop BB0_3 Depth=1
; CHECK-NEXT: ; => This Inner Loop Header: Depth=2
-; CHECK-NEXT: v_mov_b32_e32 v1, 1
+; CHECK-NEXT: v_mov_b32_e32 v3, 1
; CHECK-NEXT: s_and_saveexec_b32 s5, vcc_lo
; CHECK-NEXT: s_cbranch_execz .LBB0_5
; CHECK-NEXT: ; %bb.7: ; %if.then112
; CHECK-NEXT: ; in Loop: Header=BB0_6 Depth=2
; CHECK-NEXT: s_add_i32 s10, s9, 4
-; CHECK-NEXT: v_mov_b32_e32 v1, 0
-; CHECK-NEXT: v_mov_b32_e32 v3, s10
-; CHECK-NEXT: ds_write_b32 v1, v3
+; CHECK-NEXT: v_mov_b32_e32 v3, 0
+; CHECK-NEXT: v_mov_b32_e32 v4, s10
+; CHECK-NEXT: ds_write_b32 v1, v4
; CHECK-NEXT: s_branch .LBB0_5
; CHECK-NEXT: .LBB0_8: ; %for.body159.preheader
; CHECK-NEXT: s_inst_prefetch 0x2
diff --git a/llvm/test/CodeGen/AMDGPU/promote-constOffset-to-imm.ll b/llvm/test/CodeGen/AMDGPU/promote-constOffset-to-imm.ll
index 24dcf53939cda..83957f577cb41 100644
--- a/llvm/test/CodeGen/AMDGPU/promote-constOffset-to-imm.ll
+++ b/llvm/test/CodeGen/AMDGPU/promote-constOffset-to-imm.ll
@@ -2559,12 +2559,8 @@ entry:
define amdgpu_kernel void @negativeoffsetnullptr(ptr %buffer) {
; GFX8-LABEL: negativeoffsetnullptr:
; GFX8: ; %bb.0: ; %entry
-; GFX8-NEXT: s_load_dword s1, s[4:5], 0xec
-; GFX8-NEXT: s_add_u32 s0, 0, -1
-; GFX8-NEXT: s_waitcnt lgkmcnt(0)
-; GFX8-NEXT: s_addc_u32 s1, s1, -1
-; GFX8-NEXT: v_mov_b32_e32 v0, s0
-; GFX8-NEXT: v_mov_b32_e32 v1, s1
+; GFX8-NEXT: v_mov_b32_e32 v0, -1
+; GFX8-NEXT: v_mov_b32_e32 v1, -1
; GFX8-NEXT: flat_load_ubyte v0, v[0:1]
; GFX8-NEXT: s_mov_b64 s[0:1], 0
; GFX8-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)
@@ -2578,32 +2574,27 @@ define amdgpu_kernel void @negativeoffsetnullptr(ptr %buffer) {
; GFX8-NEXT: ; %bb.2: ; %end
; GFX8-NEXT: s_endpgm
;
-; GFX9-LABEL: negativeoffsetnullptr:
-; GFX9: ; %bb.0: ; %entry
-; GFX9-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX9-NEXT: v_mov_b32_e32 v1, s1
-; GFX9-NEXT: v_add_co_u32_e64 v0, vcc, -1, 0
-; GFX9-NEXT: v_addc_co_u32_e32 v1, vcc, -1, v1, vcc
-; GFX9-NEXT: flat_load_ubyte v0, v[0:1]
-; GFX9-NEXT: s_mov_b64 s[0:1], 0
-; GFX9-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)
-; GFX9-NEXT: v_cmp_eq_u16_e32 vcc, 0, v0
-; GFX9-NEXT: .LBB8_1: ; %branch
-; GFX9-NEXT: ; =>This Inner Loop Header: Depth=1
-; GFX9-NEXT: s_and_b64 s[2:3], exec, vcc
-; GFX9-NEXT: s_or_b64 s[0:1], s[2:3], s[0:1]
-; GFX9-NEXT: s_andn2_b64 exec, exec, s[0:1]
-; GFX9-NEXT: s_cbranch_execnz .LBB8_1
-; GFX9-NEXT: ; %bb.2: ; %end
-; GFX9-NEXT: s_endpgm
+; GFX900-LABEL: negativeoffsetnullptr:
+; GFX900: ; %bb.0: ; %entry
+; GFX900-NEXT: v_mov_b32_e32 v0, -1
+; GFX900-NEXT: v_mov_b32_e32 v1, -1
+; GFX900-NEXT: flat_load_ubyte v0, v[0:1]
+; GFX900-NEXT: s_mov_b64 s[0:1], 0
+; GFX900-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)
+; GFX900-NEXT: v_cmp_eq_u16_e32 vcc, 0, v0
+; GFX900-NEXT: .LBB8_1: ; %branch
+; GFX900-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX900-NEXT: s_and_b64 s[2:3], exec, vcc
+; GFX900-NEXT: s_or_b64 s[0:1], s[2:3], s[0:1]
+; GFX900-NEXT: s_andn2_b64 exec, exec, s[0:1]
+; GFX900-NEXT: s_cbranch_execnz .LBB8_1
+; GFX900-NEXT: ; %bb.2: ; %end
+; GFX900-NEXT: s_endpgm
;
; GFX10-LABEL: negativeoffsetnullptr:
; GFX10: ; %bb.0: ; %entry
-; GFX10-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX10-NEXT: s_add_u32 s0, 0, -1
-; GFX10-NEXT: s_addc_u32 s1, s1, -1
-; GFX10-NEXT: v_mov_b32_e32 v0, s0
-; GFX10-NEXT: v_mov_b32_e32 v1, s1
+; GFX10-NEXT: v_mov_b32_e32 v0, -1
+; GFX10-NEXT: v_mov_b32_e32 v1, -1
; GFX10-NEXT: s_mov_b32 s0, 0
; GFX10-NEXT: flat_load_ubyte v0, v[0:1]
; GFX10-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)
@@ -2617,12 +2608,26 @@ define amdgpu_kernel void @negativeoffsetnullptr(ptr %buffer) {
; GFX10-NEXT: ; %bb.2: ; %end
; GFX10-NEXT: s_endpgm
;
+; GFX90A-LABEL: negativeoffsetnullptr:
+; GFX90A: ; %bb.0: ; %entry
+; GFX90A-NEXT: v_pk_mov_b32 v[0:1], -1, -1
+; GFX90A-NEXT: flat_load_ubyte v0, v[0:1]
+; GFX90A-NEXT: s_mov_b64 s[0:1], 0
+; GFX90A-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)
+; GFX90A-NEXT: v_cmp_eq_u16_e32 vcc, 0, v0
+; GFX90A-NEXT: .LBB8_1: ; %branch
+; GFX90A-NEXT: ; =>This Inner Loop Header: Depth=1
+; GFX90A-NEXT: s_and_b64 s[2:3], exec, vcc
+; GFX90A-NEXT: s_or_b64 s[0:1], s[2:3], s[0:1]
+; GFX90A-NEXT: s_andn2_b64 exec, exec, s[0:1]
+; GFX90A-NEXT: s_cbranch_execnz .LBB8_1
+; GFX90A-NEXT: ; %bb.2: ; %end
+; GFX90A-NEXT: s_endpgm
+;
; GFX11-TRUE16-LABEL: negativeoffsetnullptr:
; GFX11-TRUE16: ; %bb.0: ; %entry
-; GFX11-TRUE16-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX11-TRUE16-NEXT: v_add_co_u32 v0, s0, -1, 0
-; GFX11-TRUE16-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX11-TRUE16-NEXT: v_add_co_ci_u32_e64 v1, null, -1, s1, s0
+; GFX11-TRUE16-NEXT: v_mov_b32_e32 v0, -1
+; GFX11-TRUE16-NEXT: v_mov_b32_e32 v1, -1
; GFX11-TRUE16-NEXT: s_mov_b32 s0, 0
; GFX11-TRUE16-NEXT: flat_load_d16_u8 v0, v[0:1]
; GFX11-TRUE16-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)
@@ -2639,10 +2644,8 @@ define amdgpu_kernel void @negativeoffsetnullptr(ptr %buffer) {
;
; GFX11-FAKE16-LABEL: negativeoffsetnullptr:
; GFX11-FAKE16: ; %bb.0: ; %entry
-; GFX11-FAKE16-NEXT: s_mov_b64 s[0:1], src_private_base
-; GFX11-FAKE16-NEXT: v_add_co_u32 v0, s0, -1, 0
-; GFX11-FAKE16-NEXT: s_delay_alu instid0(VALU_DEP_1)
-; GFX11-FAKE16-NEXT: v_add_co_ci_u32_e64 v1, null, -1, s1, s0
+; GFX11-FAKE16-NEXT: v_mov_b32_e32 v0, -1
+; GFX11-FAKE16-NEXT: v_mov_b32_e32 v1, -1
; GFX11-FAKE16-NEXT: s_mov_b32 s0, 0
; GFX11-FAKE16-NEXT: flat_load_u8 v0, v[0:1]
; GFX11-FAKE16-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)
diff --git a/llvm/test/CodeGen/AMDGPU/remove-incompatible-extended-image-insts.ll b/llvm/test/CodeGen/AMDGPU/remove-incompatible-extended-image-insts.ll
index c899e353d8052..7868a338e200a 100644
--- a/llvm/test/CodeGen/AMDGPU/remove-incompatible-extended-image-insts.ll
+++ b/llvm/test/CodeGen/AMDGPU/remove-incompatible-extended-image-insts.ll
@@ -22,7 +22,7 @@
]
; EXTIMG: @ConstantExpr = internal global i64 ptrtoint (ptr @needs_extimg to i64)
-; NOEXTIMG: @ConstantExpr = internal global i64 0
+; NOEXTIMG: @ConstantExpr = internal global i64 ptrtoint (ptr null to i64)
@ConstantExpr = internal global i64 ptrtoint (ptr @needs_extimg to i64)
diff --git a/llvm/test/CodeGen/AMDGPU/remove-incompatible-functions.ll b/llvm/test/CodeGen/AMDGPU/remove-incompatible-functions.ll
index a4edcaca00e37..89516106254f9 100644
--- a/llvm/test/CodeGen/AMDGPU/remove-incompatible-functions.ll
+++ b/llvm/test/CodeGen/AMDGPU/remove-incompatible-functions.ll
@@ -119,7 +119,7 @@
ptr @needs_dot8_insts
]
-; GFX7: @ConstantExpr = internal global i64 0
+; GFX7: @ConstantExpr = internal global i64 ptrtoint (ptr null to i64)
@ConstantExpr = internal global i64 ptrtoint (ptr @needs_dpp to i64)
define void @needs_dpp(ptr %out, ptr %in, i64 %a, i64 %b, i64 %c) #0 {
diff --git a/llvm/test/CodeGen/AMDGPU/remove-incompatible-gws.ll b/llvm/test/CodeGen/AMDGPU/remove-incompatible-gws.ll
index 87304e98c03e7..03066acffb38a 100644
--- a/llvm/test/CodeGen/AMDGPU/remove-incompatible-gws.ll
+++ b/llvm/test/CodeGen/AMDGPU/remove-incompatible-gws.ll
@@ -24,7 +24,7 @@
; COMPATIBLE: @ConstantExpr = internal global i64 ptrtoint (ptr @needs_gws to i64)
-; INCOMPATIBLE: @ConstantExpr = internal global i64 0
+; INCOMPATIBLE: @ConstantExpr = internal global i64 ptrtoint (ptr null to i64)
@ConstantExpr = internal global i64 ptrtoint (ptr @needs_gws to i64)
diff --git a/llvm/test/CodeGen/AMDGPU/remove-incompatible-s-time.ll b/llvm/test/CodeGen/AMDGPU/remove-incompatible-s-time.ll
index d182d35cf9d08..27158acee47c2 100644
--- a/llvm/test/CodeGen/AMDGPU/remove-incompatible-s-time.ll
+++ b/llvm/test/CodeGen/AMDGPU/remove-incompatible-s-time.ll
@@ -33,11 +33,11 @@
]
; REALTIME: @ConstantExpr0 = internal global i64 ptrtoint (ptr @needs_s_memrealtime to i64)
-; NOREALTIME: @ConstantExpr0 = internal global i64 0
+; NOREALTIME: @ConstantExpr0 = internal global i64 ptrtoint (ptr null to i64)
@ConstantExpr0 = internal global i64 ptrtoint (ptr @needs_s_memrealtime to i64)
; MEMTIME: @ConstantExpr1 = internal global i64 ptrtoint (ptr @needs_s_memtime to i64)
-; NOMEMTIME: @ConstantExpr1 = internal global i64 0
+; NOMEMTIME: @ConstantExpr1 = internal global i64 ptrtoint (ptr null to i64)
@ConstantExpr1 = internal global i64 ptrtoint (ptr @needs_s_memtime to i64)
; REALTIME: define i64 @needs_s_memrealtime
diff --git a/llvm/test/CodeGen/AMDGPU/sdwa-peephole-instr-combine-sel.ll b/llvm/test/CodeGen/AMDGPU/sdwa-peephole-instr-combine-sel.ll
index 883a6b70a5a6d..d056eee8e101a 100644
--- a/llvm/test/CodeGen/AMDGPU/sdwa-peephole-instr-combine-sel.ll
+++ b/llvm/test/CodeGen/AMDGPU/sdwa-peephole-instr-combine-sel.ll
@@ -21,7 +21,8 @@ define amdgpu_kernel void @widget(ptr addrspace(1) %arg, i1 %arg1, ptr addrspace
; CHECK-NEXT: s_cbranch_vccz .LBB0_2
; CHECK-NEXT: ; %bb.1: ; %bb19
; CHECK-NEXT: v_mov_b32_e32 v1, 0
-; CHECK-NEXT: ds_write_b32 v1, v1
+; CHECK-NEXT: v_mov_b32_e32 v2, -1
+; CHECK-NEXT: ds_write_b32 v2, v1
; CHECK-NEXT: .LBB0_2: ; %bb20
; CHECK-NEXT: s_mov_b32 s0, exec_lo
; CHECK-NEXT: s_waitcnt vmcnt(0)
diff --git a/llvm/test/CodeGen/AMDGPU/setcc-multiple-use.ll b/llvm/test/CodeGen/AMDGPU/setcc-multiple-use.ll
index ace4907670d37..4bf798e661661 100644
--- a/llvm/test/CodeGen/AMDGPU/setcc-multiple-use.ll
+++ b/llvm/test/CodeGen/AMDGPU/setcc-multiple-use.ll
@@ -11,7 +11,7 @@ define i32 @f() {
; CHECK-LABEL: f:
; CHECK: ; %bb.0: ; %bb
; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; CHECK-NEXT: v_mov_b32_e32 v0, 0
+; CHECK-NEXT: v_mov_b32_e32 v0, -1
; CHECK-NEXT: ds_read_b32 v0, v0
; CHECK-NEXT: s_waitcnt lgkmcnt(0)
; CHECK-NEXT: v_cmp_eq_u32_e32 vcc_lo, 0, v0
diff --git a/llvm/test/CodeGen/AMDGPU/stacksave_stackrestore.ll b/llvm/test/CodeGen/AMDGPU/stacksave_stackrestore.ll
index d2394bab82c77..2bccd0c339638 100644
--- a/llvm/test/CodeGen/AMDGPU/stacksave_stackrestore.ll
+++ b/llvm/test/CodeGen/AMDGPU/stacksave_stackrestore.ll
@@ -390,19 +390,19 @@ define void @func_stackrestore_null() {
; WAVE32-OPT-LABEL: func_stackrestore_null:
; WAVE32-OPT: ; %bb.0:
; WAVE32-OPT-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; WAVE32-OPT-NEXT: s_mov_b32 s32, 0
+; WAVE32-OPT-NEXT: s_movk_i32 s32, 0xffe0
; WAVE32-OPT-NEXT: s_setpc_b64 s[30:31]
;
; WAVE64-OPT-LABEL: func_stackrestore_null:
; WAVE64-OPT: ; %bb.0:
; WAVE64-OPT-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; WAVE64-OPT-NEXT: s_mov_b32 s32, 0
+; WAVE64-OPT-NEXT: s_movk_i32 s32, 0xffc0
; WAVE64-OPT-NEXT: s_setpc_b64 s[30:31]
;
; WAVE32-O0-LABEL: func_stackrestore_null:
; WAVE32-O0: ; %bb.0:
; WAVE32-O0-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; WAVE32-O0-NEXT: s_mov_b32 s4, 0
+; WAVE32-O0-NEXT: s_mov_b32 s4, -1
; WAVE32-O0-NEXT: s_lshl_b32 s4, s4, 5
; WAVE32-O0-NEXT: s_mov_b32 s32, s4
; WAVE32-O0-NEXT: s_setpc_b64 s[30:31]
@@ -410,7 +410,7 @@ define void @func_stackrestore_null() {
; WAVE64-O0-LABEL: func_stackrestore_null:
; WAVE64-O0: ; %bb.0:
; WAVE64-O0-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; WAVE64-O0-NEXT: s_mov_b32 s4, 0
+; WAVE64-O0-NEXT: s_mov_b32 s4, -1
; WAVE64-O0-NEXT: s_lshl_b32 s4, s4, 6
; WAVE64-O0-NEXT: s_mov_b32 s32, s4
; WAVE64-O0-NEXT: s_setpc_b64 s[30:31]
@@ -418,7 +418,7 @@ define void @func_stackrestore_null() {
; WAVE32-WWM-PREALLOC-LABEL: func_stackrestore_null:
; WAVE32-WWM-PREALLOC: ; %bb.0:
; WAVE32-WWM-PREALLOC-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; WAVE32-WWM-PREALLOC-NEXT: s_mov_b32 s4, 0
+; WAVE32-WWM-PREALLOC-NEXT: s_mov_b32 s4, -1
; WAVE32-WWM-PREALLOC-NEXT: s_lshl_b32 s4, s4, 5
; WAVE32-WWM-PREALLOC-NEXT: s_mov_b32 s32, s4
; WAVE32-WWM-PREALLOC-NEXT: s_setpc_b64 s[30:31]
diff --git a/llvm/test/CodeGen/AMDGPU/swdev282079.ll b/llvm/test/CodeGen/AMDGPU/swdev282079.ll
index 6ba972ae6f5e4..a350e14b0213c 100644
--- a/llvm/test/CodeGen/AMDGPU/swdev282079.ll
+++ b/llvm/test/CodeGen/AMDGPU/swdev282079.ll
@@ -20,9 +20,7 @@ define protected amdgpu_kernel void @foo(ptr addrspace(1) %arg, ptr addrspace(1)
; CHECK-NEXT: v_mov_b32_e32 v1, 0
; CHECK-NEXT: s_mov_b32 s32, 0
; CHECK-NEXT: s_swappc_b64 s[30:31], s[18:19]
-; CHECK-NEXT: s_mov_b64 s[4:5], src_private_base
-; CHECK-NEXT: v_mov_b32_e32 v2, 0
-; CHECK-NEXT: v_mov_b32_e32 v3, s5
+; CHECK-NEXT: v_pk_mov_b32 v[2:3], 0, 0
; CHECK-NEXT: flat_load_dwordx2 v[2:3], v[2:3]
; CHECK-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)
; CHECK-NEXT: flat_store_dwordx2 v[2:3], v[0:1]
diff --git a/llvm/test/CodeGen/AMDGPU/tail-call-inreg-arguments.convergencetokens.ll b/llvm/test/CodeGen/AMDGPU/tail-call-inreg-arguments.convergencetokens.ll
index 75f3b6ff8917a..27f5dff18027d 100644
--- a/llvm/test/CodeGen/AMDGPU/tail-call-inreg-arguments.convergencetokens.ll
+++ b/llvm/test/CodeGen/AMDGPU/tail-call-inreg-arguments.convergencetokens.ll
@@ -22,7 +22,7 @@ define void @tail_call_i64_inreg_uniform_in_vgpr_convergence_tokens() #0 {
; CHECK-NEXT: [[COPY7:%[0-9]+]]:sgpr_64 = COPY $sgpr6_sgpr7
; CHECK-NEXT: [[COPY8:%[0-9]+]]:sgpr_64 = COPY $sgpr4_sgpr5
; CHECK-NEXT: [[CONVERGENCECTRL_ENTRY:%[0-9]+]]:sreg_64 = CONVERGENCECTRL_ENTRY
- ; CHECK-NEXT: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
+ ; CHECK-NEXT: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 -1, implicit $exec
; CHECK-NEXT: [[DS_READ_B64_gfx9_:%[0-9]+]]:vreg_64 = DS_READ_B64_gfx9 killed [[V_MOV_B32_e32_]], 0, 0, implicit $exec :: (load (s64) from `ptr addrspace(3) null`, addrspace 3)
; CHECK-NEXT: [[COPY9:%[0-9]+]]:vgpr_32 = COPY [[DS_READ_B64_gfx9_]].sub1
; CHECK-NEXT: [[COPY10:%[0-9]+]]:vgpr_32 = COPY [[DS_READ_B64_gfx9_]].sub0
diff --git a/llvm/test/CodeGen/AMDGPU/tail-call-inreg-arguments.ll b/llvm/test/CodeGen/AMDGPU/tail-call-inreg-arguments.ll
index 2b1f6389f542b..d68afc15088cc 100644
--- a/llvm/test/CodeGen/AMDGPU/tail-call-inreg-arguments.ll
+++ b/llvm/test/CodeGen/AMDGPU/tail-call-inreg-arguments.ll
@@ -56,7 +56,7 @@ define void @tail_call_i64_inreg_uniform_in_vgpr() {
; CHECK-LABEL: tail_call_i64_inreg_uniform_in_vgpr:
; CHECK: ; %bb.0:
; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; CHECK-NEXT: v_mov_b32_e32 v0, 0
+; CHECK-NEXT: v_mov_b32_e32 v0, -1
; CHECK-NEXT: ds_read_b64 v[0:1], v0
; CHECK-NEXT: s_getpc_b64 s[16:17]
; CHECK-NEXT: s_add_u32 s16, s16, void_func_i64_inreg at gotpcrel32@lo+4
diff --git a/llvm/test/CodeGen/AMDGPU/tail-call-uniform-target-in-vgprs-issue110930.convergencetokens.ll b/llvm/test/CodeGen/AMDGPU/tail-call-uniform-target-in-vgprs-issue110930.convergencetokens.ll
index 662e751eeca1d..9870fb4905215 100644
--- a/llvm/test/CodeGen/AMDGPU/tail-call-uniform-target-in-vgprs-issue110930.convergencetokens.ll
+++ b/llvm/test/CodeGen/AMDGPU/tail-call-uniform-target-in-vgprs-issue110930.convergencetokens.ll
@@ -21,7 +21,7 @@ define void @tail_call_uniform_vgpr_value_convergence_tokens() #0 {
; CHECK-NEXT: [[COPY7:%[0-9]+]]:sgpr_64 = COPY $sgpr6_sgpr7
; CHECK-NEXT: [[COPY8:%[0-9]+]]:sgpr_64 = COPY $sgpr4_sgpr5
; CHECK-NEXT: [[CONVERGENCECTRL_ENTRY:%[0-9]+]]:sreg_64 = CONVERGENCECTRL_ENTRY
- ; CHECK-NEXT: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 0, implicit $exec
+ ; CHECK-NEXT: [[V_MOV_B32_e32_:%[0-9]+]]:vgpr_32 = V_MOV_B32_e32 -1, implicit $exec
; CHECK-NEXT: [[DS_READ_B64_gfx9_:%[0-9]+]]:vreg_64 = DS_READ_B64_gfx9 killed [[V_MOV_B32_e32_]], 0, 0, implicit $exec :: (load (s64) from `ptr addrspace(3) null`, addrspace 3)
; CHECK-NEXT: [[COPY9:%[0-9]+]]:vgpr_32 = COPY [[DS_READ_B64_gfx9_]].sub1
; CHECK-NEXT: CONVERGENCECTRL_GLUE [[CONVERGENCECTRL_ENTRY]]
diff --git a/llvm/test/CodeGen/AMDGPU/tail-call-uniform-target-in-vgprs-issue110930.ll b/llvm/test/CodeGen/AMDGPU/tail-call-uniform-target-in-vgprs-issue110930.ll
index 4068ea75002b6..22e78b7581b55 100644
--- a/llvm/test/CodeGen/AMDGPU/tail-call-uniform-target-in-vgprs-issue110930.ll
+++ b/llvm/test/CodeGen/AMDGPU/tail-call-uniform-target-in-vgprs-issue110930.ll
@@ -7,7 +7,7 @@ define void @tail_call_uniform_vgpr_value() {
; CHECK-LABEL: tail_call_uniform_vgpr_value:
; CHECK: ; %bb.0:
; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; CHECK-NEXT: v_mov_b32_e32 v0, 0
+; CHECK-NEXT: v_mov_b32_e32 v0, -1
; CHECK-NEXT: ds_read_b64 v[0:1], v0
; CHECK-NEXT: s_waitcnt lgkmcnt(0)
; CHECK-NEXT: v_readfirstlane_b32 s17, v1
diff --git a/llvm/test/CodeGen/AMDGPU/tuple-allocation-failure.ll b/llvm/test/CodeGen/AMDGPU/tuple-allocation-failure.ll
index 8f8e2c0ba52fc..d2d16cbf71ce3 100644
--- a/llvm/test/CodeGen/AMDGPU/tuple-allocation-failure.ll
+++ b/llvm/test/CodeGen/AMDGPU/tuple-allocation-failure.ll
@@ -70,12 +70,12 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS1-NEXT: s_xor_b64 s[4:5], s[4:5], -1
; GLOBALNESS1-NEXT: s_mov_b64 s[38:39], s[8:9]
; GLOBALNESS1-NEXT: v_cmp_ne_u32_e64 s[8:9], 1, v1
-; GLOBALNESS1-NEXT: ; implicit-def: $vgpr57 : SGPR spill to VGPR lane
+; GLOBALNESS1-NEXT: ; implicit-def: $vgpr60 : SGPR spill to VGPR lane
; GLOBALNESS1-NEXT: v_cmp_ne_u32_e64 s[66:67], 1, v0
; GLOBALNESS1-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[4:5]
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s8, 0
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s8, 0
; GLOBALNESS1-NEXT: v_cmp_ne_u32_e64 s[68:69], 1, v0
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s9, 1
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s9, 1
; GLOBALNESS1-NEXT: v_cmp_ne_u32_e64 s[70:71], 1, v3
; GLOBALNESS1-NEXT: v_mov_b32_e32 v46, 0x80
; GLOBALNESS1-NEXT: s_mov_b32 s82, s16
@@ -83,6 +83,7 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS1-NEXT: s_mov_b32 s84, s14
; GLOBALNESS1-NEXT: s_mov_b64 s[34:35], s[10:11]
; GLOBALNESS1-NEXT: v_mov_b32_e32 v47, 0
+; GLOBALNESS1-NEXT: v_mov_b32_e32 v40, -1
; GLOBALNESS1-NEXT: v_mov_b32_e32 v43, v42
; GLOBALNESS1-NEXT: s_mov_b32 s32, 0
; GLOBALNESS1-NEXT: ; implicit-def: $vgpr58_vgpr59
@@ -94,24 +95,24 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS1-NEXT: v_cmp_eq_u32_e32 vcc, 1, v2
; GLOBALNESS1-NEXT: v_cmp_ne_u32_e64 s[4:5], 1, v0
; GLOBALNESS1-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s4, 2
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s4, 2
; GLOBALNESS1-NEXT: v_cmp_eq_u32_e32 vcc, 0, v2
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s5, 3
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s5, 3
; GLOBALNESS1-NEXT: v_cmp_ne_u32_e64 s[4:5], 1, v3
; GLOBALNESS1-NEXT: v_cndmask_b32_e64 v2, 0, 1, vcc
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s4, 4
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s5, 5
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s4, 4
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s5, 5
; GLOBALNESS1-NEXT: v_cmp_ne_u32_e64 s[4:5], 1, v2
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s4, 6
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s5, 7
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s4, 6
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s5, 7
; GLOBALNESS1-NEXT: v_cmp_ne_u32_e64 s[80:81], 1, v1
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s70, 8
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s71, 9
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s70, 8
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s71, 9
; GLOBALNESS1-NEXT: s_branch .LBB1_4
; GLOBALNESS1-NEXT: .LBB1_1: ; %bb70.i
; GLOBALNESS1-NEXT: ; in Loop: Header=BB1_4 Depth=1
-; GLOBALNESS1-NEXT: v_readlane_b32 s6, v57, 6
-; GLOBALNESS1-NEXT: v_readlane_b32 s7, v57, 7
+; GLOBALNESS1-NEXT: v_readlane_b32 s6, v60, 6
+; GLOBALNESS1-NEXT: v_readlane_b32 s7, v60, 7
; GLOBALNESS1-NEXT: s_and_b64 vcc, exec, s[6:7]
; GLOBALNESS1-NEXT: s_cbranch_vccz .LBB1_28
; GLOBALNESS1-NEXT: .LBB1_2: ; %Flow15
@@ -126,10 +127,10 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS1-NEXT: .LBB1_4: ; %bb5
; GLOBALNESS1-NEXT: ; =>This Loop Header: Depth=1
; GLOBALNESS1-NEXT: ; Child Loop BB1_16 Depth 2
-; GLOBALNESS1-NEXT: flat_load_dword v40, v[46:47]
-; GLOBALNESS1-NEXT: s_add_u32 s8, s38, 40
-; GLOBALNESS1-NEXT: buffer_store_dword v42, off, s[0:3], 0
; GLOBALNESS1-NEXT: flat_load_dword v56, v[46:47]
+; GLOBALNESS1-NEXT: s_add_u32 s8, s38, 40
+; GLOBALNESS1-NEXT: buffer_store_dword v42, v40, s[0:3], 0 offen
+; GLOBALNESS1-NEXT: flat_load_dword v57, v[46:47]
; GLOBALNESS1-NEXT: s_addc_u32 s9, s39, 0
; GLOBALNESS1-NEXT: s_getpc_b64 s[4:5]
; GLOBALNESS1-NEXT: s_add_u32 s4, s4, wobble at gotpcrel32@lo+4
@@ -187,10 +188,10 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS1-NEXT: ; %bb.11: ; %bb33.i
; GLOBALNESS1-NEXT: ; in Loop: Header=BB1_4 Depth=1
; GLOBALNESS1-NEXT: global_load_dwordx2 v[0:1], v[44:45], off
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s8, 10
-; GLOBALNESS1-NEXT: v_writelane_b32 v57, s9, 11
-; GLOBALNESS1-NEXT: v_readlane_b32 s4, v57, 2
-; GLOBALNESS1-NEXT: v_readlane_b32 s5, v57, 3
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s8, 10
+; GLOBALNESS1-NEXT: v_writelane_b32 v60, s9, 11
+; GLOBALNESS1-NEXT: v_readlane_b32 s4, v60, 2
+; GLOBALNESS1-NEXT: v_readlane_b32 s5, v60, 3
; GLOBALNESS1-NEXT: s_and_b64 vcc, exec, s[4:5]
; GLOBALNESS1-NEXT: s_cbranch_vccnz .LBB1_13
; GLOBALNESS1-NEXT: ; %bb.12: ; %bb39.i
@@ -198,8 +199,8 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS1-NEXT: global_store_dwordx2 v[44:45], v[42:43], off
; GLOBALNESS1-NEXT: .LBB1_13: ; %bb44.lr.ph.i
; GLOBALNESS1-NEXT: ; in Loop: Header=BB1_4 Depth=1
-; GLOBALNESS1-NEXT: v_cmp_ne_u32_e32 vcc, 0, v56
-; GLOBALNESS1-NEXT: v_cndmask_b32_e32 v2, 0, v40, vcc
+; GLOBALNESS1-NEXT: v_cmp_ne_u32_e32 vcc, 0, v57
+; GLOBALNESS1-NEXT: v_cndmask_b32_e32 v2, 0, v56, vcc
; GLOBALNESS1-NEXT: s_waitcnt vmcnt(0)
; GLOBALNESS1-NEXT: v_cmp_nlt_f64_e32 vcc, 0, v[0:1]
; GLOBALNESS1-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc
@@ -228,8 +229,8 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS1-NEXT: s_cbranch_vccnz .LBB1_21
; GLOBALNESS1-NEXT: ; %bb.19: ; %bb3.i.i
; GLOBALNESS1-NEXT: ; in Loop: Header=BB1_16 Depth=2
-; GLOBALNESS1-NEXT: v_readlane_b32 s4, v57, 0
-; GLOBALNESS1-NEXT: v_readlane_b32 s5, v57, 1
+; GLOBALNESS1-NEXT: v_readlane_b32 s4, v60, 0
+; GLOBALNESS1-NEXT: v_readlane_b32 s5, v60, 1
; GLOBALNESS1-NEXT: s_and_b64 vcc, exec, s[4:5]
; GLOBALNESS1-NEXT: s_cbranch_vccnz .LBB1_21
; GLOBALNESS1-NEXT: ; %bb.20: ; %bb6.i.i
@@ -276,13 +277,13 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS1-NEXT: .LBB1_24: ; %Flow23
; GLOBALNESS1-NEXT: ; in Loop: Header=BB1_4 Depth=1
; GLOBALNESS1-NEXT: s_load_dwordx4 s[4:7], s[38:39], 0x0
-; GLOBALNESS1-NEXT: v_readlane_b32 s70, v57, 8
-; GLOBALNESS1-NEXT: v_readlane_b32 s8, v57, 10
+; GLOBALNESS1-NEXT: v_readlane_b32 s70, v60, 8
+; GLOBALNESS1-NEXT: v_readlane_b32 s8, v60, 10
; GLOBALNESS1-NEXT: v_pk_mov_b32 v[0:1], 0, 0
-; GLOBALNESS1-NEXT: v_readlane_b32 s71, v57, 9
+; GLOBALNESS1-NEXT: v_readlane_b32 s71, v60, 9
; GLOBALNESS1-NEXT: s_waitcnt lgkmcnt(0)
; GLOBALNESS1-NEXT: s_mov_b32 s55, s7
-; GLOBALNESS1-NEXT: v_readlane_b32 s9, v57, 11
+; GLOBALNESS1-NEXT: v_readlane_b32 s9, v60, 11
; GLOBALNESS1-NEXT: .LBB1_25: ; %Flow24
; GLOBALNESS1-NEXT: ; in Loop: Header=BB1_4 Depth=1
; GLOBALNESS1-NEXT: s_or_b64 exec, exec, s[52:53]
@@ -290,8 +291,8 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS1-NEXT: s_cbranch_execz .LBB1_2
; GLOBALNESS1-NEXT: ; %bb.26: ; %bb67.i
; GLOBALNESS1-NEXT: ; in Loop: Header=BB1_4 Depth=1
-; GLOBALNESS1-NEXT: v_readlane_b32 s6, v57, 4
-; GLOBALNESS1-NEXT: v_readlane_b32 s7, v57, 5
+; GLOBALNESS1-NEXT: v_readlane_b32 s6, v60, 4
+; GLOBALNESS1-NEXT: v_readlane_b32 s7, v60, 5
; GLOBALNESS1-NEXT: s_and_b64 vcc, exec, s[6:7]
; GLOBALNESS1-NEXT: s_cbranch_vccnz .LBB1_1
; GLOBALNESS1-NEXT: ; %bb.27: ; %bb69.i
@@ -381,12 +382,12 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS0-NEXT: s_xor_b64 s[4:5], s[4:5], -1
; GLOBALNESS0-NEXT: s_mov_b64 s[38:39], s[8:9]
; GLOBALNESS0-NEXT: v_cmp_ne_u32_e64 s[8:9], 1, v1
-; GLOBALNESS0-NEXT: ; implicit-def: $vgpr57 : SGPR spill to VGPR lane
+; GLOBALNESS0-NEXT: ; implicit-def: $vgpr60 : SGPR spill to VGPR lane
; GLOBALNESS0-NEXT: v_cmp_ne_u32_e64 s[66:67], 1, v0
; GLOBALNESS0-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[4:5]
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s8, 0
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s8, 0
; GLOBALNESS0-NEXT: v_cmp_ne_u32_e64 s[68:69], 1, v0
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s9, 1
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s9, 1
; GLOBALNESS0-NEXT: v_cmp_ne_u32_e64 s[84:85], 1, v3
; GLOBALNESS0-NEXT: v_mov_b32_e32 v46, 0x80
; GLOBALNESS0-NEXT: s_mov_b32 s70, s16
@@ -394,6 +395,7 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS0-NEXT: s_mov_b32 s82, s14
; GLOBALNESS0-NEXT: s_mov_b64 s[34:35], s[10:11]
; GLOBALNESS0-NEXT: v_mov_b32_e32 v47, 0
+; GLOBALNESS0-NEXT: v_mov_b32_e32 v40, -1
; GLOBALNESS0-NEXT: v_mov_b32_e32 v43, v42
; GLOBALNESS0-NEXT: s_mov_b32 s32, 0
; GLOBALNESS0-NEXT: ; implicit-def: $vgpr58_vgpr59
@@ -405,24 +407,24 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS0-NEXT: v_cmp_eq_u32_e32 vcc, 1, v2
; GLOBALNESS0-NEXT: v_cmp_ne_u32_e64 s[4:5], 1, v0
; GLOBALNESS0-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s4, 2
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s4, 2
; GLOBALNESS0-NEXT: v_cmp_eq_u32_e32 vcc, 0, v2
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s5, 3
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s5, 3
; GLOBALNESS0-NEXT: v_cmp_ne_u32_e64 s[4:5], 1, v3
; GLOBALNESS0-NEXT: v_cndmask_b32_e64 v2, 0, 1, vcc
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s4, 4
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s5, 5
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s4, 4
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s5, 5
; GLOBALNESS0-NEXT: v_cmp_ne_u32_e64 s[4:5], 1, v2
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s4, 6
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s5, 7
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s4, 6
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s5, 7
; GLOBALNESS0-NEXT: v_cmp_ne_u32_e64 s[80:81], 1, v1
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s84, 8
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s85, 9
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s84, 8
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s85, 9
; GLOBALNESS0-NEXT: s_branch .LBB1_4
; GLOBALNESS0-NEXT: .LBB1_1: ; %bb70.i
; GLOBALNESS0-NEXT: ; in Loop: Header=BB1_4 Depth=1
-; GLOBALNESS0-NEXT: v_readlane_b32 s6, v57, 6
-; GLOBALNESS0-NEXT: v_readlane_b32 s7, v57, 7
+; GLOBALNESS0-NEXT: v_readlane_b32 s6, v60, 6
+; GLOBALNESS0-NEXT: v_readlane_b32 s7, v60, 7
; GLOBALNESS0-NEXT: s_and_b64 vcc, exec, s[6:7]
; GLOBALNESS0-NEXT: s_cbranch_vccz .LBB1_28
; GLOBALNESS0-NEXT: .LBB1_2: ; %Flow15
@@ -437,10 +439,10 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS0-NEXT: .LBB1_4: ; %bb5
; GLOBALNESS0-NEXT: ; =>This Loop Header: Depth=1
; GLOBALNESS0-NEXT: ; Child Loop BB1_16 Depth 2
-; GLOBALNESS0-NEXT: flat_load_dword v40, v[46:47]
-; GLOBALNESS0-NEXT: s_add_u32 s8, s38, 40
-; GLOBALNESS0-NEXT: buffer_store_dword v42, off, s[0:3], 0
; GLOBALNESS0-NEXT: flat_load_dword v56, v[46:47]
+; GLOBALNESS0-NEXT: s_add_u32 s8, s38, 40
+; GLOBALNESS0-NEXT: buffer_store_dword v42, v40, s[0:3], 0 offen
+; GLOBALNESS0-NEXT: flat_load_dword v57, v[46:47]
; GLOBALNESS0-NEXT: s_addc_u32 s9, s39, 0
; GLOBALNESS0-NEXT: s_getpc_b64 s[4:5]
; GLOBALNESS0-NEXT: s_add_u32 s4, s4, wobble at gotpcrel32@lo+4
@@ -498,10 +500,10 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS0-NEXT: ; %bb.11: ; %bb33.i
; GLOBALNESS0-NEXT: ; in Loop: Header=BB1_4 Depth=1
; GLOBALNESS0-NEXT: global_load_dwordx2 v[0:1], v[44:45], off
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s8, 10
-; GLOBALNESS0-NEXT: v_writelane_b32 v57, s9, 11
-; GLOBALNESS0-NEXT: v_readlane_b32 s4, v57, 2
-; GLOBALNESS0-NEXT: v_readlane_b32 s5, v57, 3
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s8, 10
+; GLOBALNESS0-NEXT: v_writelane_b32 v60, s9, 11
+; GLOBALNESS0-NEXT: v_readlane_b32 s4, v60, 2
+; GLOBALNESS0-NEXT: v_readlane_b32 s5, v60, 3
; GLOBALNESS0-NEXT: s_mov_b32 s83, s55
; GLOBALNESS0-NEXT: s_and_b64 vcc, exec, s[4:5]
; GLOBALNESS0-NEXT: s_cbranch_vccnz .LBB1_13
@@ -510,8 +512,8 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS0-NEXT: global_store_dwordx2 v[44:45], v[42:43], off
; GLOBALNESS0-NEXT: .LBB1_13: ; %bb44.lr.ph.i
; GLOBALNESS0-NEXT: ; in Loop: Header=BB1_4 Depth=1
-; GLOBALNESS0-NEXT: v_cmp_ne_u32_e32 vcc, 0, v56
-; GLOBALNESS0-NEXT: v_cndmask_b32_e32 v2, 0, v40, vcc
+; GLOBALNESS0-NEXT: v_cmp_ne_u32_e32 vcc, 0, v57
+; GLOBALNESS0-NEXT: v_cndmask_b32_e32 v2, 0, v56, vcc
; GLOBALNESS0-NEXT: s_waitcnt vmcnt(0)
; GLOBALNESS0-NEXT: v_cmp_nlt_f64_e32 vcc, 0, v[0:1]
; GLOBALNESS0-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc
@@ -540,8 +542,8 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS0-NEXT: s_cbranch_vccnz .LBB1_21
; GLOBALNESS0-NEXT: ; %bb.19: ; %bb3.i.i
; GLOBALNESS0-NEXT: ; in Loop: Header=BB1_16 Depth=2
-; GLOBALNESS0-NEXT: v_readlane_b32 s4, v57, 0
-; GLOBALNESS0-NEXT: v_readlane_b32 s5, v57, 1
+; GLOBALNESS0-NEXT: v_readlane_b32 s4, v60, 0
+; GLOBALNESS0-NEXT: v_readlane_b32 s5, v60, 1
; GLOBALNESS0-NEXT: s_and_b64 vcc, exec, s[4:5]
; GLOBALNESS0-NEXT: s_cbranch_vccnz .LBB1_21
; GLOBALNESS0-NEXT: ; %bb.20: ; %bb6.i.i
@@ -587,12 +589,12 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS0-NEXT: s_branch .LBB1_14
; GLOBALNESS0-NEXT: .LBB1_24: ; %Flow23
; GLOBALNESS0-NEXT: ; in Loop: Header=BB1_4 Depth=1
-; GLOBALNESS0-NEXT: v_readlane_b32 s84, v57, 8
-; GLOBALNESS0-NEXT: v_readlane_b32 s8, v57, 10
+; GLOBALNESS0-NEXT: v_readlane_b32 s84, v60, 8
+; GLOBALNESS0-NEXT: v_readlane_b32 s8, v60, 10
; GLOBALNESS0-NEXT: v_pk_mov_b32 v[0:1], 0, 0
; GLOBALNESS0-NEXT: s_mov_b32 s55, s83
-; GLOBALNESS0-NEXT: v_readlane_b32 s85, v57, 9
-; GLOBALNESS0-NEXT: v_readlane_b32 s9, v57, 11
+; GLOBALNESS0-NEXT: v_readlane_b32 s85, v60, 9
+; GLOBALNESS0-NEXT: v_readlane_b32 s9, v60, 11
; GLOBALNESS0-NEXT: .LBB1_25: ; %Flow24
; GLOBALNESS0-NEXT: ; in Loop: Header=BB1_4 Depth=1
; GLOBALNESS0-NEXT: s_or_b64 exec, exec, s[52:53]
@@ -600,8 +602,8 @@ define amdgpu_kernel void @kernel(ptr addrspace(1) %arg1.global, i1 %tmp3.i.i, i
; GLOBALNESS0-NEXT: s_cbranch_execz .LBB1_2
; GLOBALNESS0-NEXT: ; %bb.26: ; %bb67.i
; GLOBALNESS0-NEXT: ; in Loop: Header=BB1_4 Depth=1
-; GLOBALNESS0-NEXT: v_readlane_b32 s6, v57, 4
-; GLOBALNESS0-NEXT: v_readlane_b32 s7, v57, 5
+; GLOBALNESS0-NEXT: v_readlane_b32 s6, v60, 4
+; GLOBALNESS0-NEXT: v_readlane_b32 s7, v60, 5
; GLOBALNESS0-NEXT: s_and_b64 vcc, exec, s[6:7]
; GLOBALNESS0-NEXT: s_cbranch_vccnz .LBB1_1
; GLOBALNESS0-NEXT: ; %bb.27: ; %bb69.i
diff --git a/llvm/test/CodeGen/AMDGPU/undef-handling-crash-in-ra.ll b/llvm/test/CodeGen/AMDGPU/undef-handling-crash-in-ra.ll
index fc32bc644ddcd..65694231f9b45 100644
--- a/llvm/test/CodeGen/AMDGPU/undef-handling-crash-in-ra.ll
+++ b/llvm/test/CodeGen/AMDGPU/undef-handling-crash-in-ra.ll
@@ -122,7 +122,8 @@ define amdgpu_kernel void @foo(ptr addrspace(5) %ptr5, ptr %p0, double %v0, <4 x
; CHECK-NEXT: v_cmp_eq_u32_e32 vcc, 0, v0
; CHECK-NEXT: s_and_saveexec_b64 s[4:5], vcc
; CHECK-NEXT: s_or_b64 exec, exec, s[4:5]
-; CHECK-NEXT: buffer_store_dword v42, off, s[0:3], 0
+; CHECK-NEXT: v_mov_b32_e32 v0, -1
+; CHECK-NEXT: buffer_store_dword v42, v0, s[0:3], 0 offen
; CHECK-NEXT: s_endpgm
entry:
%load.null = load i32, ptr null, align 8
diff --git a/llvm/test/CodeGen/AMDGPU/unstructured-cfg-def-use-issue.ll b/llvm/test/CodeGen/AMDGPU/unstructured-cfg-def-use-issue.ll
index 25e8581fb6cdd..6ca1b767d24bd 100644
--- a/llvm/test/CodeGen/AMDGPU/unstructured-cfg-def-use-issue.ll
+++ b/llvm/test/CodeGen/AMDGPU/unstructured-cfg-def-use-issue.ll
@@ -257,41 +257,42 @@ define hidden void @blam() {
; GCN-NEXT: s_mov_b32 s16, s33
; GCN-NEXT: s_mov_b32 s33, s32
; GCN-NEXT: s_or_saveexec_b64 s[18:19], -1
-; GCN-NEXT: buffer_store_dword v45, off, s[0:3], s33 offset:20 ; 4-byte Folded Spill
+; GCN-NEXT: buffer_store_dword v46, off, s[0:3], s33 offset:24 ; 4-byte Folded Spill
; GCN-NEXT: s_mov_b64 exec, s[18:19]
-; GCN-NEXT: v_writelane_b32 v45, s16, 26
+; GCN-NEXT: v_writelane_b32 v46, s16, 26
; GCN-NEXT: s_addk_i32 s32, 0x800
-; GCN-NEXT: buffer_store_dword v40, off, s[0:3], s33 offset:16 ; 4-byte Folded Spill
-; GCN-NEXT: buffer_store_dword v41, off, s[0:3], s33 offset:12 ; 4-byte Folded Spill
-; GCN-NEXT: buffer_store_dword v42, off, s[0:3], s33 offset:8 ; 4-byte Folded Spill
-; GCN-NEXT: buffer_store_dword v43, off, s[0:3], s33 offset:4 ; 4-byte Folded Spill
-; GCN-NEXT: buffer_store_dword v44, off, s[0:3], s33 ; 4-byte Folded Spill
-; GCN-NEXT: v_writelane_b32 v45, s30, 0
-; GCN-NEXT: v_writelane_b32 v45, s31, 1
-; GCN-NEXT: v_writelane_b32 v45, s34, 2
-; GCN-NEXT: v_writelane_b32 v45, s35, 3
-; GCN-NEXT: v_writelane_b32 v45, s36, 4
-; GCN-NEXT: v_writelane_b32 v45, s37, 5
-; GCN-NEXT: v_writelane_b32 v45, s38, 6
-; GCN-NEXT: v_writelane_b32 v45, s39, 7
-; GCN-NEXT: v_writelane_b32 v45, s48, 8
-; GCN-NEXT: v_writelane_b32 v45, s49, 9
-; GCN-NEXT: v_writelane_b32 v45, s50, 10
-; GCN-NEXT: v_writelane_b32 v45, s51, 11
-; GCN-NEXT: v_writelane_b32 v45, s52, 12
-; GCN-NEXT: v_writelane_b32 v45, s53, 13
-; GCN-NEXT: v_writelane_b32 v45, s54, 14
-; GCN-NEXT: v_writelane_b32 v45, s55, 15
-; GCN-NEXT: v_writelane_b32 v45, s64, 16
-; GCN-NEXT: v_writelane_b32 v45, s65, 17
-; GCN-NEXT: v_writelane_b32 v45, s66, 18
-; GCN-NEXT: v_writelane_b32 v45, s67, 19
-; GCN-NEXT: v_writelane_b32 v45, s68, 20
-; GCN-NEXT: v_writelane_b32 v45, s69, 21
-; GCN-NEXT: v_writelane_b32 v45, s70, 22
-; GCN-NEXT: v_writelane_b32 v45, s71, 23
-; GCN-NEXT: v_writelane_b32 v45, s80, 24
-; GCN-NEXT: v_writelane_b32 v45, s81, 25
+; GCN-NEXT: buffer_store_dword v40, off, s[0:3], s33 offset:20 ; 4-byte Folded Spill
+; GCN-NEXT: buffer_store_dword v41, off, s[0:3], s33 offset:16 ; 4-byte Folded Spill
+; GCN-NEXT: buffer_store_dword v42, off, s[0:3], s33 offset:12 ; 4-byte Folded Spill
+; GCN-NEXT: buffer_store_dword v43, off, s[0:3], s33 offset:8 ; 4-byte Folded Spill
+; GCN-NEXT: buffer_store_dword v44, off, s[0:3], s33 offset:4 ; 4-byte Folded Spill
+; GCN-NEXT: buffer_store_dword v45, off, s[0:3], s33 ; 4-byte Folded Spill
+; GCN-NEXT: v_writelane_b32 v46, s30, 0
+; GCN-NEXT: v_writelane_b32 v46, s31, 1
+; GCN-NEXT: v_writelane_b32 v46, s34, 2
+; GCN-NEXT: v_writelane_b32 v46, s35, 3
+; GCN-NEXT: v_writelane_b32 v46, s36, 4
+; GCN-NEXT: v_writelane_b32 v46, s37, 5
+; GCN-NEXT: v_writelane_b32 v46, s38, 6
+; GCN-NEXT: v_writelane_b32 v46, s39, 7
+; GCN-NEXT: v_writelane_b32 v46, s48, 8
+; GCN-NEXT: v_writelane_b32 v46, s49, 9
+; GCN-NEXT: v_writelane_b32 v46, s50, 10
+; GCN-NEXT: v_writelane_b32 v46, s51, 11
+; GCN-NEXT: v_writelane_b32 v46, s52, 12
+; GCN-NEXT: v_writelane_b32 v46, s53, 13
+; GCN-NEXT: v_writelane_b32 v46, s54, 14
+; GCN-NEXT: v_writelane_b32 v46, s55, 15
+; GCN-NEXT: v_writelane_b32 v46, s64, 16
+; GCN-NEXT: v_writelane_b32 v46, s65, 17
+; GCN-NEXT: v_writelane_b32 v46, s66, 18
+; GCN-NEXT: v_writelane_b32 v46, s67, 19
+; GCN-NEXT: v_writelane_b32 v46, s68, 20
+; GCN-NEXT: v_writelane_b32 v46, s69, 21
+; GCN-NEXT: v_writelane_b32 v46, s70, 22
+; GCN-NEXT: v_writelane_b32 v46, s71, 23
+; GCN-NEXT: v_writelane_b32 v46, s80, 24
+; GCN-NEXT: v_writelane_b32 v46, s81, 25
; GCN-NEXT: v_mov_b32_e32 v40, v31
; GCN-NEXT: s_mov_b32 s54, s15
; GCN-NEXT: s_mov_b32 s55, s14
@@ -304,14 +305,15 @@ define hidden void @blam() {
; GCN-NEXT: v_mov_b32_e32 v0, 0
; GCN-NEXT: v_mov_b32_e32 v1, 0
; GCN-NEXT: v_and_b32_e32 v2, 0x3ff, v40
-; GCN-NEXT: flat_load_dword v43, v[0:1]
; GCN-NEXT: v_mov_b32_e32 v42, 0
+; GCN-NEXT: flat_load_dword v43, v[0:1]
; GCN-NEXT: s_mov_b64 s[66:67], 0
+; GCN-NEXT: v_mov_b32_e32 v44, -1
; GCN-NEXT: v_lshlrev_b32_e32 v41, 2, v2
; GCN-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0)
; GCN-NEXT: v_cmp_eq_f32_e64 s[68:69], 0, v43
; GCN-NEXT: v_cmp_neq_f32_e64 s[50:51], 0, v43
-; GCN-NEXT: v_mov_b32_e32 v44, 0x7fc00000
+; GCN-NEXT: v_mov_b32_e32 v45, 0x7fc00000
; GCN-NEXT: s_branch .LBB1_2
; GCN-NEXT: .LBB1_1: ; %Flow7
; GCN-NEXT: ; in Loop: Header=BB1_2 Depth=1
@@ -323,7 +325,7 @@ define hidden void @blam() {
; GCN-NEXT: .LBB1_2: ; %bb2
; GCN-NEXT: ; =>This Inner Loop Header: Depth=1
; GCN-NEXT: flat_load_dword v0, v[41:42]
-; GCN-NEXT: buffer_store_dword v42, off, s[0:3], 0
+; GCN-NEXT: buffer_store_dword v42, v44, s[0:3], 0 offen
; GCN-NEXT: s_mov_b64 s[6:7], 0
; GCN-NEXT: s_waitcnt vmcnt(1)
; GCN-NEXT: v_cmp_lt_i32_e32 vcc, 2, v0
@@ -362,7 +364,7 @@ define hidden void @blam() {
; GCN-NEXT: s_cbranch_execz .LBB1_7
; GCN-NEXT: ; %bb.6: ; %bb16
; GCN-NEXT: ; in Loop: Header=BB1_2 Depth=1
-; GCN-NEXT: buffer_store_dword v44, off, s[0:3], 0
+; GCN-NEXT: buffer_store_dword v45, v44, s[0:3], 0 offen
; GCN-NEXT: s_or_b64 s[8:9], s[68:69], exec
; GCN-NEXT: .LBB1_7: ; %Flow3
; GCN-NEXT: ; in Loop: Header=BB1_2 Depth=1
@@ -374,7 +376,7 @@ define hidden void @blam() {
; GCN-NEXT: ; %bb.8: ; %bb17
; GCN-NEXT: ; in Loop: Header=BB1_2 Depth=1
; GCN-NEXT: s_mov_b64 s[6:7], exec
-; GCN-NEXT: buffer_store_dword v43, off, s[0:3], 0
+; GCN-NEXT: buffer_store_dword v43, v44, s[0:3], 0 offen
; GCN-NEXT: .LBB1_9: ; %Flow4
; GCN-NEXT: ; in Loop: Header=BB1_2 Depth=1
; GCN-NEXT: s_or_b64 exec, exec, s[8:9]
@@ -404,7 +406,7 @@ define hidden void @blam() {
; GCN-NEXT: s_cbranch_execz .LBB1_15
; GCN-NEXT: ; %bb.14: ; %bb10
; GCN-NEXT: ; in Loop: Header=BB1_2 Depth=1
-; GCN-NEXT: buffer_store_dword v44, off, s[0:3], 0
+; GCN-NEXT: buffer_store_dword v45, v44, s[0:3], 0 offen
; GCN-NEXT: s_or_b64 s[10:11], s[6:7], exec
; GCN-NEXT: .LBB1_15: ; %Flow6
; GCN-NEXT: ; in Loop: Header=BB1_2 Depth=1
@@ -422,46 +424,47 @@ define hidden void @blam() {
; GCN-NEXT: s_cbranch_execz .LBB1_1
; GCN-NEXT: ; %bb.17: ; %bb18
; GCN-NEXT: ; in Loop: Header=BB1_2 Depth=1
-; GCN-NEXT: buffer_store_dword v44, off, s[0:3], 0
+; GCN-NEXT: buffer_store_dword v45, v44, s[0:3], 0 offen
; GCN-NEXT: s_andn2_b64 s[4:5], s[4:5], exec
; GCN-NEXT: s_branch .LBB1_1
; GCN-NEXT: .LBB1_18: ; %DummyReturnBlock
; GCN-NEXT: s_or_b64 exec, exec, s[66:67]
-; GCN-NEXT: v_readlane_b32 s81, v45, 25
-; GCN-NEXT: v_readlane_b32 s80, v45, 24
-; GCN-NEXT: v_readlane_b32 s71, v45, 23
-; GCN-NEXT: v_readlane_b32 s70, v45, 22
-; GCN-NEXT: v_readlane_b32 s69, v45, 21
-; GCN-NEXT: v_readlane_b32 s68, v45, 20
-; GCN-NEXT: v_readlane_b32 s67, v45, 19
-; GCN-NEXT: v_readlane_b32 s66, v45, 18
-; GCN-NEXT: v_readlane_b32 s65, v45, 17
-; GCN-NEXT: v_readlane_b32 s64, v45, 16
-; GCN-NEXT: v_readlane_b32 s55, v45, 15
-; GCN-NEXT: v_readlane_b32 s54, v45, 14
-; GCN-NEXT: v_readlane_b32 s53, v45, 13
-; GCN-NEXT: v_readlane_b32 s52, v45, 12
-; GCN-NEXT: v_readlane_b32 s51, v45, 11
-; GCN-NEXT: v_readlane_b32 s50, v45, 10
-; GCN-NEXT: v_readlane_b32 s49, v45, 9
-; GCN-NEXT: v_readlane_b32 s48, v45, 8
-; GCN-NEXT: v_readlane_b32 s39, v45, 7
-; GCN-NEXT: v_readlane_b32 s38, v45, 6
-; GCN-NEXT: v_readlane_b32 s37, v45, 5
-; GCN-NEXT: v_readlane_b32 s36, v45, 4
-; GCN-NEXT: v_readlane_b32 s35, v45, 3
-; GCN-NEXT: v_readlane_b32 s34, v45, 2
-; GCN-NEXT: v_readlane_b32 s31, v45, 1
-; GCN-NEXT: v_readlane_b32 s30, v45, 0
-; GCN-NEXT: buffer_load_dword v44, off, s[0:3], s33 ; 4-byte Folded Reload
-; GCN-NEXT: buffer_load_dword v43, off, s[0:3], s33 offset:4 ; 4-byte Folded Reload
-; GCN-NEXT: buffer_load_dword v42, off, s[0:3], s33 offset:8 ; 4-byte Folded Reload
-; GCN-NEXT: buffer_load_dword v41, off, s[0:3], s33 offset:12 ; 4-byte Folded Reload
-; GCN-NEXT: buffer_load_dword v40, off, s[0:3], s33 offset:16 ; 4-byte Folded Reload
+; GCN-NEXT: v_readlane_b32 s81, v46, 25
+; GCN-NEXT: v_readlane_b32 s80, v46, 24
+; GCN-NEXT: v_readlane_b32 s71, v46, 23
+; GCN-NEXT: v_readlane_b32 s70, v46, 22
+; GCN-NEXT: v_readlane_b32 s69, v46, 21
+; GCN-NEXT: v_readlane_b32 s68, v46, 20
+; GCN-NEXT: v_readlane_b32 s67, v46, 19
+; GCN-NEXT: v_readlane_b32 s66, v46, 18
+; GCN-NEXT: v_readlane_b32 s65, v46, 17
+; GCN-NEXT: v_readlane_b32 s64, v46, 16
+; GCN-NEXT: v_readlane_b32 s55, v46, 15
+; GCN-NEXT: v_readlane_b32 s54, v46, 14
+; GCN-NEXT: v_readlane_b32 s53, v46, 13
+; GCN-NEXT: v_readlane_b32 s52, v46, 12
+; GCN-NEXT: v_readlane_b32 s51, v46, 11
+; GCN-NEXT: v_readlane_b32 s50, v46, 10
+; GCN-NEXT: v_readlane_b32 s49, v46, 9
+; GCN-NEXT: v_readlane_b32 s48, v46, 8
+; GCN-NEXT: v_readlane_b32 s39, v46, 7
+; GCN-NEXT: v_readlane_b32 s38, v46, 6
+; GCN-NEXT: v_readlane_b32 s37, v46, 5
+; GCN-NEXT: v_readlane_b32 s36, v46, 4
+; GCN-NEXT: v_readlane_b32 s35, v46, 3
+; GCN-NEXT: v_readlane_b32 s34, v46, 2
+; GCN-NEXT: v_readlane_b32 s31, v46, 1
+; GCN-NEXT: v_readlane_b32 s30, v46, 0
+; GCN-NEXT: buffer_load_dword v45, off, s[0:3], s33 ; 4-byte Folded Reload
+; GCN-NEXT: buffer_load_dword v44, off, s[0:3], s33 offset:4 ; 4-byte Folded Reload
+; GCN-NEXT: buffer_load_dword v43, off, s[0:3], s33 offset:8 ; 4-byte Folded Reload
+; GCN-NEXT: buffer_load_dword v42, off, s[0:3], s33 offset:12 ; 4-byte Folded Reload
+; GCN-NEXT: buffer_load_dword v41, off, s[0:3], s33 offset:16 ; 4-byte Folded Reload
+; GCN-NEXT: buffer_load_dword v40, off, s[0:3], s33 offset:20 ; 4-byte Folded Reload
; GCN-NEXT: s_mov_b32 s32, s33
-; GCN-NEXT: v_readlane_b32 s4, v45, 26
+; GCN-NEXT: v_readlane_b32 s4, v46, 26
; GCN-NEXT: s_or_saveexec_b64 s[6:7], -1
-; GCN-NEXT: buffer_load_dword v45, off, s[0:3], s33 offset:20 ; 4-byte Folded Reload
+; GCN-NEXT: buffer_load_dword v46, off, s[0:3], s33 offset:24 ; 4-byte Folded Reload
; GCN-NEXT: s_mov_b64 exec, s[6:7]
; GCN-NEXT: s_mov_b32 s33, s4
; GCN-NEXT: s_waitcnt vmcnt(0)
diff --git a/llvm/test/CodeGen/AMDGPU/v_cmp_gfx11.ll b/llvm/test/CodeGen/AMDGPU/v_cmp_gfx11.ll
index a6a406986558e..7d75615007f59 100644
--- a/llvm/test/CodeGen/AMDGPU/v_cmp_gfx11.ll
+++ b/llvm/test/CodeGen/AMDGPU/v_cmp_gfx11.ll
@@ -5,7 +5,7 @@ define amdgpu_kernel void @icmp_test() {
; CHECK-LABEL: icmp_test:
; CHECK: ; %bb.0: ; %entry
; CHECK-NEXT: v_cmp_eq_u16_e64 s[0:1], 0, 0
-; CHECK-NEXT: v_mov_b32_e32 v1, 0
+; CHECK-NEXT: v_mov_b32_e32 v1, -1
; CHECK-NEXT: s_cmp_eq_u64 s[0:1], 0
; CHECK-NEXT: s_cselect_b64 s[0:1], -1, 0
; CHECK-NEXT: s_delay_alu instid0(SALU_CYCLE_1)
@@ -24,7 +24,7 @@ define amdgpu_kernel void @fcmp_test(half %x, half %y) {
; CHECK-LABEL: fcmp_test:
; CHECK: ; %bb.0: ; %entry
; CHECK-NEXT: s_load_b32 s0, s[4:5], 0x0
-; CHECK-NEXT: v_mov_b32_e32 v1, 0
+; CHECK-NEXT: v_mov_b32_e32 v1, -1
; CHECK-NEXT: s_waitcnt lgkmcnt(0)
; CHECK-NEXT: s_lshr_b32 s1, s0, 16
; CHECK-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_2) | instid1(SALU_CYCLE_1)
@@ -46,7 +46,7 @@ define amdgpu_kernel void @ballot_test(half %x, half %y) {
; CHECK-LABEL: ballot_test:
; CHECK: ; %bb.0:
; CHECK-NEXT: s_load_b32 s0, s[4:5], 0x0
-; CHECK-NEXT: v_mov_b32_e32 v2, 0
+; CHECK-NEXT: v_mov_b32_e32 v2, -1
; CHECK-NEXT: s_waitcnt lgkmcnt(0)
; CHECK-NEXT: s_lshr_b32 s1, s0, 16
; CHECK-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1)
diff --git a/llvm/test/CodeGen/AMDGPU/waterfall_kills_scc.ll b/llvm/test/CodeGen/AMDGPU/waterfall_kills_scc.ll
index ddb6afa34ab22..2b92b0a05d1cb 100644
--- a/llvm/test/CodeGen/AMDGPU/waterfall_kills_scc.ll
+++ b/llvm/test/CodeGen/AMDGPU/waterfall_kills_scc.ll
@@ -24,10 +24,11 @@ define amdgpu_kernel void @foo(i1 %cmp1) {
; GFX906-NEXT: s_mov_b32 s15, 0xe00000
; GFX906-NEXT: s_add_u32 s12, s12, s11
; GFX906-NEXT: s_addc_u32 s13, s13, 0
-; GFX906-NEXT: buffer_load_dword v3, off, s[12:15], 0
-; GFX906-NEXT: buffer_load_dword v4, off, s[12:15], 0 offset:4
-; GFX906-NEXT: buffer_load_dword v5, off, s[12:15], 0 offset:8
-; GFX906-NEXT: buffer_load_dword v6, off, s[12:15], 0 offset:12
+; GFX906-NEXT: v_mov_b32_e32 v7, -1
+; GFX906-NEXT: buffer_load_dword v3, v7, s[12:15], 0 offen
+; GFX906-NEXT: buffer_load_dword v4, off, s[12:15], 0 offset:3
+; GFX906-NEXT: buffer_load_dword v5, off, s[12:15], 0 offset:7
+; GFX906-NEXT: buffer_load_dword v6, off, s[12:15], 0 offset:11
; GFX906-NEXT: s_load_dword s2, s[4:5], 0x24
; GFX906-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x1c
; GFX906-NEXT: s_mov_b32 s4, 0
diff --git a/llvm/test/CodeGen/AMDGPU/wqm.ll b/llvm/test/CodeGen/AMDGPU/wqm.ll
index 0fdc1a83dddbd..eebde828a5a2c 100644
--- a/llvm/test/CodeGen/AMDGPU/wqm.ll
+++ b/llvm/test/CodeGen/AMDGPU/wqm.ll
@@ -3409,6 +3409,7 @@ define amdgpu_gs void @wqm_init_exec() {
; GFX9-W64-NEXT: buffer_store_dwordx4 v[0:3], off, s[0:3], 0
; GFX9-W64-NEXT: s_wqm_b64 exec, exec
; GFX9-W64-NEXT: ; kill: def $sgpr0 killed $sgpr0 killed $exec
+; GFX9-W64-NEXT: v_mov_b32_e32 v0, -1
; GFX9-W64-NEXT: v_mov_b32_e32 v1, s0
; GFX9-W64-NEXT: ds_write_b32 v0, v1
; GFX9-W64-NEXT: s_endpgm
@@ -3416,20 +3417,21 @@ define amdgpu_gs void @wqm_init_exec() {
; GFX10-W32-LABEL: wqm_init_exec:
; GFX10-W32: ; %bb.0: ; %bb
; GFX10-W32-NEXT: s_mov_b32 exec_lo, -1
-; GFX10-W32-NEXT: s_mov_b32 s1, exec_lo
-; GFX10-W32-NEXT: v_mov_b32_e32 v0, 0
; GFX10-W32-NEXT: s_mov_b32 s0, 0
+; GFX10-W32-NEXT: s_mov_b32 s2, exec_lo
+; GFX10-W32-NEXT: v_mov_b32_e32 v0, 0
+; GFX10-W32-NEXT: s_mov_b32 s1, s0
; GFX10-W32-NEXT: s_wqm_b32 exec_lo, exec_lo
-; GFX10-W32-NEXT: s_mov_b32 s2, s0
-; GFX10-W32-NEXT: s_and_b32 exec_lo, exec_lo, s1
+; GFX10-W32-NEXT: s_mov_b32 s3, s0
+; GFX10-W32-NEXT: s_and_b32 exec_lo, exec_lo, s2
; GFX10-W32-NEXT: v_mov_b32_e32 v1, v0
; GFX10-W32-NEXT: v_mov_b32_e32 v2, v0
; GFX10-W32-NEXT: v_mov_b32_e32 v3, v0
-; GFX10-W32-NEXT: v_mov_b32_e32 v4, s0
-; GFX10-W32-NEXT: s_mov_b32 s1, s0
-; GFX10-W32-NEXT: s_mov_b32 s3, s0
+; GFX10-W32-NEXT: v_mov_b32_e32 v4, -1
+; GFX10-W32-NEXT: v_mov_b32_e32 v5, s0
+; GFX10-W32-NEXT: s_mov_b32 s2, s0
; GFX10-W32-NEXT: buffer_store_dwordx4 v[0:3], off, s[0:3], 0
-; GFX10-W32-NEXT: ds_write_b32 v0, v4
+; GFX10-W32-NEXT: ds_write_b32 v4, v5
; GFX10-W32-NEXT: s_endpgm
bb:
call void @llvm.amdgcn.init.exec(i64 -1)
diff --git a/llvm/test/Instrumentation/AddressSanitizer/asan-scalable-vector.ll b/llvm/test/Instrumentation/AddressSanitizer/asan-scalable-vector.ll
index 6a841f2d399c0..bf08ca530bb7b 100644
--- a/llvm/test/Instrumentation/AddressSanitizer/asan-scalable-vector.ll
+++ b/llvm/test/Instrumentation/AddressSanitizer/asan-scalable-vector.ll
@@ -7,13 +7,16 @@ define void @test() #1 {
; CHECK-NEXT: [[ENTRY:.*:]]
; CHECK-NEXT: [[CTX_PG:%.*]] = alloca <vscale x 16 x i1>, align 2
; CHECK-NEXT: call void @llvm.lifetime.start.p0(ptr [[CTX_PG]])
-; CHECK-NEXT: [[TMP0:%.*]] = load i8, ptr inttoptr (i64 17592186044416 to ptr), align 1
+; CHECK-NEXT: [[TMP3:%.*]] = lshr i64 ptrtoint (ptr null to i64), 3
+; CHECK-NEXT: [[TMP4:%.*]] = or i64 [[TMP3]], 17592186044416
+; CHECK-NEXT: [[TMP2:%.*]] = inttoptr i64 [[TMP4]] to ptr
+; CHECK-NEXT: [[TMP0:%.*]] = load i8, ptr [[TMP2]], align 1
; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i8 [[TMP0]], 0
-; CHECK-NEXT: br i1 [[TMP1]], label %[[BB2:.*]], label %[[BB3:.*]]
-; CHECK: [[BB2]]:
-; CHECK-NEXT: call void @__asan_report_store8(i64 0) #[[ATTR4:[0-9]+]]
+; CHECK-NEXT: br i1 [[TMP1]], label %[[BB5:.*]], label %[[BB6:.*]]
+; CHECK: [[BB5]]:
+; CHECK-NEXT: call void @__asan_report_store8(i64 ptrtoint (ptr null to i64)) #[[ATTR4:[0-9]+]]
; CHECK-NEXT: unreachable
-; CHECK: [[BB3]]:
+; CHECK: [[BB6]]:
; CHECK-NEXT: store ptr [[CTX_PG]], ptr null, align 8
; CHECK-NEXT: ret void
;
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/RISCV/alloca.ll b/llvm/test/Instrumentation/HWAddressSanitizer/RISCV/alloca.ll
index edbcdbeb8516c..607639239dbdd 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/RISCV/alloca.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/RISCV/alloca.ll
@@ -70,7 +70,7 @@ define void @test_alloca() sanitize_hwaddress !dbg !15 {
; ZERO-BASED-SHADOW-LABEL: define void @test_alloca
; ZERO-BASED-SHADOW-SAME: () #[[ATTR0:[0-9]+]] personality ptr @__hwasan_personality_thunk !dbg [[DBG8:![0-9]+]] {
; ZERO-BASED-SHADOW-NEXT: entry:
-; ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
; ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = ptrtoint ptr [[TMP0]] to i64
; ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = lshr i64 [[TMP1]], 20
@@ -143,7 +143,7 @@ declare void @llvm.dbg.value(metadata, metadata, metadata)
;.
; DYNAMIC-SHADOW: [[META0]] = !{ptr @hwasan.note}
; DYNAMIC-SHADOW: [[META1:![0-9]+]] = distinct !DICompileUnit(language: DW_LANG_C_plus_plus_14, file: [[META2:![0-9]+]], producer: "{{.*}}clang version {{.*}}", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug, enums: [[META3:![0-9]+]], splitDebugInlining: false, nameTableKind: None)
-; DYNAMIC-SHADOW: [[META2]] = !DIFile(filename: "alloca.cpp", directory: {{.*}})
+; DYNAMIC-SHADOW: [[META2]] = !DIFile(filename: "{{.*}}alloca.cpp", directory: {{.*}})
; DYNAMIC-SHADOW: [[META3]] = !{}
; DYNAMIC-SHADOW: [[META4:![0-9]+]] = !{i32 7, !"Dwarf Version", i32 4}
; DYNAMIC-SHADOW: [[META5:![0-9]+]] = !{i32 2, !"Debug Info Version", i32 3}
@@ -160,7 +160,7 @@ declare void @llvm.dbg.value(metadata, metadata, metadata)
;.
; ZERO-BASED-SHADOW: [[META0]] = !{ptr @hwasan.note}
; ZERO-BASED-SHADOW: [[META1:![0-9]+]] = distinct !DICompileUnit(language: DW_LANG_C_plus_plus_14, file: [[META2:![0-9]+]], producer: "{{.*}}clang version {{.*}}", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug, enums: [[META3:![0-9]+]], splitDebugInlining: false, nameTableKind: None)
-; ZERO-BASED-SHADOW: [[META2]] = !DIFile(filename: "alloca.cpp", directory: {{.*}})
+; ZERO-BASED-SHADOW: [[META2]] = !DIFile(filename: "{{.*}}alloca.cpp", directory: {{.*}})
; ZERO-BASED-SHADOW: [[META3]] = !{}
; ZERO-BASED-SHADOW: [[META4:![0-9]+]] = !{i32 7, !"Dwarf Version", i32 4}
; ZERO-BASED-SHADOW: [[META5:![0-9]+]] = !{i32 2, !"Debug Info Version", i32 3}
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/RISCV/basic.ll b/llvm/test/Instrumentation/HWAddressSanitizer/RISCV/basic.ll
index e0eb1115854ab..dfee94203a37a 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/RISCV/basic.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/RISCV/basic.ll
@@ -134,7 +134,7 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i8 @test_load8
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -154,7 +154,7 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i8 @test_load8
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -314,7 +314,7 @@ define i16 @test_load16(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i16 @test_load16
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -334,7 +334,7 @@ define i16 @test_load16(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i16 @test_load16
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -494,7 +494,7 @@ define i32 @test_load32(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i32 @test_load32
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -514,7 +514,7 @@ define i32 @test_load32(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i32 @test_load32
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -674,7 +674,7 @@ define i64 @test_load64(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i64 @test_load64
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -694,7 +694,7 @@ define i64 @test_load64(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i64 @test_load64
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -854,7 +854,7 @@ define i128 @test_load128(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i128 @test_load128
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -874,7 +874,7 @@ define i128 @test_load128(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i128 @test_load128
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -974,7 +974,7 @@ define i40 @test_load40(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i40 @test_load40
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_loadN(i64 [[TMP0]], i64 5)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[B:%.*]] = load i40, ptr [[A]], align 4
@@ -983,7 +983,7 @@ define i40 @test_load40(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i40 @test_load40
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_loadN_noabort(i64 [[TMP0]], i64 5)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[B:%.*]] = load i40, ptr [[A]], align 4
@@ -1115,7 +1115,7 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store8
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1135,7 +1135,7 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store8
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1295,7 +1295,7 @@ define void @test_store16(ptr %a, i16 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store16
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1315,7 +1315,7 @@ define void @test_store16(ptr %a, i16 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store16
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1475,7 +1475,7 @@ define void @test_store32(ptr %a, i32 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store32
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1495,7 +1495,7 @@ define void @test_store32(ptr %a, i32 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store32
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1655,7 +1655,7 @@ define void @test_store64(ptr %a, i64 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store64
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1675,7 +1675,7 @@ define void @test_store64(ptr %a, i64 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store64
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1835,7 +1835,7 @@ define void @test_store128(ptr %a, i128 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store128
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i128 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1855,7 +1855,7 @@ define void @test_store128(ptr %a, i128 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store128
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i128 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1955,7 +1955,7 @@ define void @test_store40(ptr %a, i40 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store40
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i40 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 5)
; ABORT-ZERO-BASED-SHADOW-NEXT: store i40 [[B]], ptr [[A]], align 4
@@ -1964,7 +1964,7 @@ define void @test_store40(ptr %a, i40 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store40
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i40 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_storeN_noabort(i64 [[TMP0]], i64 5)
; RECOVER-ZERO-BASED-SHADOW-NEXT: store i40 [[B]], ptr [[A]], align 4
@@ -2036,7 +2036,7 @@ define void @test_store_unaligned(ptr %a, i64 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store_unaligned
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 8)
; ABORT-ZERO-BASED-SHADOW-NEXT: store i64 [[B]], ptr [[A]], align 4
@@ -2045,7 +2045,7 @@ define void @test_store_unaligned(ptr %a, i64 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store_unaligned
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_storeN_noabort(i64 [[TMP0]], i64 8)
; RECOVER-ZERO-BASED-SHADOW-NEXT: store i64 [[B]], ptr [[A]], align 4
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca-array.ll b/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca-array.ll
index 72a0f6bc8df5b..e008fad5fac92 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca-array.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca-array.ll
@@ -9,7 +9,7 @@ declare void @use(ptr, ptr)
define void @test_alloca() sanitize_hwaddress {
; CHECK-LABEL: define void @test_alloca
; CHECK-SAME: () #[[ATTR0:[0-9]+]] personality ptr @__hwasan_personality_thunk {
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP1:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
; CHECK-NEXT: [[TMP2:%.*]] = ptrtoint ptr [[TMP1]] to i64
; CHECK-NEXT: [[TMP3:%.*]] = lshr i64 [[TMP2]], 20
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca-with-calls.ll b/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca-with-calls.ll
index 2a82d708a0ef1..500f8f1ed5f4a 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca-with-calls.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca-with-calls.ll
@@ -12,7 +12,7 @@ define void @test_alloca() sanitize_hwaddress {
; CHECK-LABEL: define void @test_alloca
; CHECK-SAME: () #[[ATTR0:[0-9]+]] personality ptr @__hwasan_personality_thunk {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
; CHECK-NEXT: [[TMP1:%.*]] = ptrtoint ptr [[TMP0]] to i64
; CHECK-NEXT: [[TMP2:%.*]] = lshr i64 [[TMP1]], 57
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca.ll b/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca.ll
index ef86e63aca0d6..c8caf11d8b807 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/X86/alloca.ll
@@ -13,7 +13,7 @@ define void @test_alloca() sanitize_hwaddress {
; CHECK-LABEL: define void @test_alloca
; CHECK-SAME: () #[[ATTR0:[0-9]+]] personality ptr @__hwasan_personality_thunk {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
; CHECK-NEXT: [[TMP1:%.*]] = ptrtoint ptr [[TMP0]] to i64
; CHECK-NEXT: [[TMP2:%.*]] = lshr i64 [[TMP1]], 20
@@ -94,7 +94,7 @@ define i32 @test_simple(ptr %a) sanitize_hwaddress {
; CHECK-LABEL: define i32 @test_simple
; CHECK-SAME: (ptr [[A:%.*]]) #[[ATTR0]] personality ptr @__hwasan_personality_thunk {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
; CHECK-NEXT: [[TMP1:%.*]] = ptrtoint ptr [[TMP0]] to i64
; CHECK-NEXT: [[TMP2:%.*]] = lshr i64 [[TMP1]], 20
@@ -168,10 +168,10 @@ define i32 @test_simple(ptr %a) sanitize_hwaddress {
; INLINE-NEXT: [[TMP33:%.*]] = getelementptr i8, ptr [[TMP14]], i64 [[TMP32]]
; INLINE-NEXT: [[TMP34:%.*]] = load i8, ptr [[TMP33]], align 1
; INLINE-NEXT: [[TMP35:%.*]] = icmp ne i8 [[TMP30]], [[TMP34]]
-; INLINE-NEXT: br i1 [[TMP35]], label [[TMP36:%.*]], label [[TMP50:%.*]], !prof [[PROF1:![0-9]+]]
+; INLINE-NEXT: br i1 [[TMP35]], label [[TMP36:%.*]], label [[TMP50:%.*]], !prof [[PROF2:![0-9]+]]
; INLINE: 36:
; INLINE-NEXT: [[TMP37:%.*]] = icmp ugt i8 [[TMP34]], 15
-; INLINE-NEXT: br i1 [[TMP37]], label [[TMP38:%.*]], label [[TMP39:%.*]], !prof [[PROF1]]
+; INLINE-NEXT: br i1 [[TMP37]], label [[TMP38:%.*]], label [[TMP39:%.*]], !prof [[PROF2]]
; INLINE: 38:
; INLINE-NEXT: call void asm sideeffect "int3\0Anopl 80([[RAX:%.*]])", "{rdi}"(i64 [[TMP28]])
; INLINE-NEXT: unreachable
@@ -180,13 +180,13 @@ define i32 @test_simple(ptr %a) sanitize_hwaddress {
; INLINE-NEXT: [[TMP41:%.*]] = trunc i64 [[TMP40]] to i8
; INLINE-NEXT: [[TMP42:%.*]] = add i8 [[TMP41]], 0
; INLINE-NEXT: [[TMP43:%.*]] = icmp uge i8 [[TMP42]], [[TMP34]]
-; INLINE-NEXT: br i1 [[TMP43]], label [[TMP38]], label [[TMP44:%.*]], !prof [[PROF1]]
+; INLINE-NEXT: br i1 [[TMP43]], label [[TMP38]], label [[TMP44:%.*]], !prof [[PROF2]]
; INLINE: 44:
; INLINE-NEXT: [[TMP45:%.*]] = or i64 [[TMP31]], 15
; INLINE-NEXT: [[TMP46:%.*]] = inttoptr i64 [[TMP45]] to ptr
; INLINE-NEXT: [[TMP47:%.*]] = load i8, ptr [[TMP46]], align 1
; INLINE-NEXT: [[TMP48:%.*]] = icmp ne i8 [[TMP30]], [[TMP47]]
-; INLINE-NEXT: br i1 [[TMP48]], label [[TMP38]], label [[TMP49:%.*]], !prof [[PROF1]]
+; INLINE-NEXT: br i1 [[TMP48]], label [[TMP38]], label [[TMP49:%.*]], !prof [[PROF2]]
; INLINE: 49:
; INLINE-NEXT: br label [[TMP50]]
; INLINE: 50:
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/X86/atomic.ll b/llvm/test/Instrumentation/HWAddressSanitizer/X86/atomic.ll
index 49b55139fb980..04cc83cbc4fbe 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/X86/atomic.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/X86/atomic.ll
@@ -10,7 +10,7 @@ define void @atomicrmw(ptr %ptr) sanitize_hwaddress {
; CHECK-LABEL: define void @atomicrmw
; CHECK-SAME: (ptr [[PTR:%.*]]) #[[ATTR0:[0-9]+]] {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[PTR]] to i64
; CHECK-NEXT: call void @__hwasan_store8(i64 [[TMP0]])
; CHECK-NEXT: [[TMP1:%.*]] = atomicrmw add ptr [[PTR]], i64 1 seq_cst, align 8
@@ -28,7 +28,7 @@ define void @cmpxchg(ptr %ptr, i64 %compare_to, i64 %new_value) sanitize_hwaddre
; CHECK-LABEL: define void @cmpxchg
; CHECK-SAME: (ptr [[PTR:%.*]], i64 [[COMPARE_TO:%.*]], i64 [[NEW_VALUE:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[PTR]] to i64
; CHECK-NEXT: call void @__hwasan_store8(i64 [[TMP0]])
; CHECK-NEXT: [[TMP1:%.*]] = cmpxchg ptr [[PTR]], i64 [[COMPARE_TO]], i64 [[NEW_VALUE]] seq_cst seq_cst, align 8
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/X86/basic.ll b/llvm/test/Instrumentation/HWAddressSanitizer/X86/basic.ll
index ebe66e0d51baa..9485a97ecadb2 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/X86/basic.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/X86/basic.ll
@@ -18,7 +18,7 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; CHECK-LABEL: define i8 @test_load8
; CHECK-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; CHECK-NEXT: call void @__hwasan_load1(i64 [[TMP0]])
; CHECK-NEXT: [[B:%.*]] = load i8, ptr [[A]], align 4
@@ -27,7 +27,7 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; NOFASTPATH-LABEL: define i8 @test_load8
; NOFASTPATH-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; NOFASTPATH-NEXT: entry:
-; NOFASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; NOFASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; NOFASTPATH-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; NOFASTPATH-NEXT: call void @__hwasan_load1(i64 [[TMP0]])
; NOFASTPATH-NEXT: [[B:%.*]] = load i8, ptr [[A]], align 4
@@ -36,7 +36,7 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; FASTPATH-LABEL: define i8 @test_load8
; FASTPATH-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; FASTPATH-NEXT: entry:
-; FASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; FASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; FASTPATH-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; FASTPATH-NEXT: call void @__hwasan_load1(i64 [[TMP0]])
; FASTPATH-NEXT: [[B:%.*]] = load i8, ptr [[A]], align 4
@@ -45,7 +45,7 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; ABORT-LABEL: define i8 @test_load8
; ABORT-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; ABORT-NEXT: entry:
-; ABORT-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-NEXT: call void @__hwasan_load1(i64 [[TMP0]])
; ABORT-NEXT: [[B:%.*]] = load i8, ptr [[A]], align 4
@@ -54,7 +54,7 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; RECOVER-LABEL: define i8 @test_load8
; RECOVER-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; RECOVER-NEXT: entry:
-; RECOVER-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-NEXT: call void @__hwasan_load1_noabort(i64 [[TMP0]])
; RECOVER-NEXT: [[B:%.*]] = load i8, ptr [[A]], align 4
@@ -76,10 +76,10 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; ABORT-INLINE-NEXT: [[TMP9:%.*]] = getelementptr i8, ptr [[TMP3]], i64 [[TMP8]]
; ABORT-INLINE-NEXT: [[TMP10:%.*]] = load i8, ptr [[TMP9]], align 1
; ABORT-INLINE-NEXT: [[TMP11:%.*]] = icmp ne i8 [[TMP6]], [[TMP10]]
-; ABORT-INLINE-NEXT: br i1 [[TMP11]], label [[TMP12:%.*]], label [[TMP26:%.*]], !prof [[PROF1:![0-9]+]]
+; ABORT-INLINE-NEXT: br i1 [[TMP11]], label [[TMP12:%.*]], label [[TMP26:%.*]], !prof [[PROF2:![0-9]+]]
; ABORT-INLINE: 12:
; ABORT-INLINE-NEXT: [[TMP13:%.*]] = icmp ugt i8 [[TMP10]], 15
-; ABORT-INLINE-NEXT: br i1 [[TMP13]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
+; ABORT-INLINE-NEXT: br i1 [[TMP13]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF2]]
; ABORT-INLINE: 14:
; ABORT-INLINE-NEXT: call void asm sideeffect "int3\0Anopl 64([[RAX:%.*]])", "{rdi}"(i64 [[TMP4]])
; ABORT-INLINE-NEXT: unreachable
@@ -88,13 +88,13 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; ABORT-INLINE-NEXT: [[TMP17:%.*]] = trunc i64 [[TMP16]] to i8
; ABORT-INLINE-NEXT: [[TMP18:%.*]] = add i8 [[TMP17]], 0
; ABORT-INLINE-NEXT: [[TMP19:%.*]] = icmp uge i8 [[TMP18]], [[TMP10]]
-; ABORT-INLINE-NEXT: br i1 [[TMP19]], label [[TMP14]], label [[TMP20:%.*]], !prof [[PROF1]]
+; ABORT-INLINE-NEXT: br i1 [[TMP19]], label [[TMP14]], label [[TMP20:%.*]], !prof [[PROF2]]
; ABORT-INLINE: 20:
; ABORT-INLINE-NEXT: [[TMP21:%.*]] = or i64 [[TMP7]], 15
; ABORT-INLINE-NEXT: [[TMP22:%.*]] = inttoptr i64 [[TMP21]] to ptr
; ABORT-INLINE-NEXT: [[TMP23:%.*]] = load i8, ptr [[TMP22]], align 1
; ABORT-INLINE-NEXT: [[TMP24:%.*]] = icmp ne i8 [[TMP6]], [[TMP23]]
-; ABORT-INLINE-NEXT: br i1 [[TMP24]], label [[TMP14]], label [[TMP25:%.*]], !prof [[PROF1]]
+; ABORT-INLINE-NEXT: br i1 [[TMP24]], label [[TMP14]], label [[TMP25:%.*]], !prof [[PROF2]]
; ABORT-INLINE: 25:
; ABORT-INLINE-NEXT: br label [[TMP26]]
; ABORT-INLINE: 26:
@@ -117,10 +117,10 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; RECOVER-INLINE-NEXT: [[TMP9:%.*]] = getelementptr i8, ptr [[TMP3]], i64 [[TMP8]]
; RECOVER-INLINE-NEXT: [[TMP10:%.*]] = load i8, ptr [[TMP9]], align 1
; RECOVER-INLINE-NEXT: [[TMP11:%.*]] = icmp ne i8 [[TMP6]], [[TMP10]]
-; RECOVER-INLINE-NEXT: br i1 [[TMP11]], label [[TMP12:%.*]], label [[TMP26:%.*]], !prof [[PROF1:![0-9]+]]
+; RECOVER-INLINE-NEXT: br i1 [[TMP11]], label [[TMP12:%.*]], label [[TMP26:%.*]], !prof [[PROF2:![0-9]+]]
; RECOVER-INLINE: 12:
; RECOVER-INLINE-NEXT: [[TMP13:%.*]] = icmp ugt i8 [[TMP10]], 15
-; RECOVER-INLINE-NEXT: br i1 [[TMP13]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
+; RECOVER-INLINE-NEXT: br i1 [[TMP13]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF2]]
; RECOVER-INLINE: 14:
; RECOVER-INLINE-NEXT: call void asm sideeffect "int3\0Anopl 96([[RAX:%.*]])", "{rdi}"(i64 [[TMP4]])
; RECOVER-INLINE-NEXT: br label [[TMP25:%.*]]
@@ -129,13 +129,13 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; RECOVER-INLINE-NEXT: [[TMP17:%.*]] = trunc i64 [[TMP16]] to i8
; RECOVER-INLINE-NEXT: [[TMP18:%.*]] = add i8 [[TMP17]], 0
; RECOVER-INLINE-NEXT: [[TMP19:%.*]] = icmp uge i8 [[TMP18]], [[TMP10]]
-; RECOVER-INLINE-NEXT: br i1 [[TMP19]], label [[TMP14]], label [[TMP20:%.*]], !prof [[PROF1]]
+; RECOVER-INLINE-NEXT: br i1 [[TMP19]], label [[TMP14]], label [[TMP20:%.*]], !prof [[PROF2]]
; RECOVER-INLINE: 20:
; RECOVER-INLINE-NEXT: [[TMP21:%.*]] = or i64 [[TMP7]], 15
; RECOVER-INLINE-NEXT: [[TMP22:%.*]] = inttoptr i64 [[TMP21]] to ptr
; RECOVER-INLINE-NEXT: [[TMP23:%.*]] = load i8, ptr [[TMP22]], align 1
; RECOVER-INLINE-NEXT: [[TMP24:%.*]] = icmp ne i8 [[TMP6]], [[TMP23]]
-; RECOVER-INLINE-NEXT: br i1 [[TMP24]], label [[TMP14]], label [[TMP25]], !prof [[PROF1]]
+; RECOVER-INLINE-NEXT: br i1 [[TMP24]], label [[TMP14]], label [[TMP25]], !prof [[PROF2]]
; RECOVER-INLINE: 25:
; RECOVER-INLINE-NEXT: br label [[TMP26]]
; RECOVER-INLINE: 26:
@@ -151,7 +151,7 @@ define i40 @test_load40(ptr %a) sanitize_hwaddress {
; CHECK-LABEL: define i40 @test_load40
; CHECK-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; CHECK-NEXT: call void @__hwasan_loadN(i64 [[TMP0]], i64 5)
; CHECK-NEXT: [[B:%.*]] = load i40, ptr [[A]], align 4
@@ -160,7 +160,7 @@ define i40 @test_load40(ptr %a) sanitize_hwaddress {
; NOFASTPATH-LABEL: define i40 @test_load40
; NOFASTPATH-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; NOFASTPATH-NEXT: entry:
-; NOFASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; NOFASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; NOFASTPATH-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; NOFASTPATH-NEXT: call void @__hwasan_loadN(i64 [[TMP0]], i64 5)
; NOFASTPATH-NEXT: [[B:%.*]] = load i40, ptr [[A]], align 4
@@ -169,7 +169,7 @@ define i40 @test_load40(ptr %a) sanitize_hwaddress {
; FASTPATH-LABEL: define i40 @test_load40
; FASTPATH-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; FASTPATH-NEXT: entry:
-; FASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; FASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; FASTPATH-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; FASTPATH-NEXT: call void @__hwasan_loadN(i64 [[TMP0]], i64 5)
; FASTPATH-NEXT: [[B:%.*]] = load i40, ptr [[A]], align 4
@@ -178,7 +178,7 @@ define i40 @test_load40(ptr %a) sanitize_hwaddress {
; ABORT-LABEL: define i40 @test_load40
; ABORT-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; ABORT-NEXT: entry:
-; ABORT-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-NEXT: call void @__hwasan_loadN(i64 [[TMP0]], i64 5)
; ABORT-NEXT: [[B:%.*]] = load i40, ptr [[A]], align 4
@@ -187,7 +187,7 @@ define i40 @test_load40(ptr %a) sanitize_hwaddress {
; RECOVER-LABEL: define i40 @test_load40
; RECOVER-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; RECOVER-NEXT: entry:
-; RECOVER-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-NEXT: call void @__hwasan_loadN_noabort(i64 [[TMP0]], i64 5)
; RECOVER-NEXT: [[B:%.*]] = load i40, ptr [[A]], align 4
@@ -228,7 +228,7 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; CHECK-LABEL: define void @test_store8
; CHECK-SAME: (ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; CHECK-NEXT: call void @__hwasan_store1(i64 [[TMP0]])
; CHECK-NEXT: store i8 [[B]], ptr [[A]], align 4
@@ -237,7 +237,7 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; NOFASTPATH-LABEL: define void @test_store8
; NOFASTPATH-SAME: (ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
; NOFASTPATH-NEXT: entry:
-; NOFASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; NOFASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; NOFASTPATH-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; NOFASTPATH-NEXT: call void @__hwasan_store1(i64 [[TMP0]])
; NOFASTPATH-NEXT: store i8 [[B]], ptr [[A]], align 4
@@ -246,7 +246,7 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; FASTPATH-LABEL: define void @test_store8
; FASTPATH-SAME: (ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
; FASTPATH-NEXT: entry:
-; FASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; FASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; FASTPATH-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; FASTPATH-NEXT: call void @__hwasan_store1(i64 [[TMP0]])
; FASTPATH-NEXT: store i8 [[B]], ptr [[A]], align 4
@@ -255,7 +255,7 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; ABORT-LABEL: define void @test_store8
; ABORT-SAME: (ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
; ABORT-NEXT: entry:
-; ABORT-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-NEXT: call void @__hwasan_store1(i64 [[TMP0]])
; ABORT-NEXT: store i8 [[B]], ptr [[A]], align 4
@@ -264,7 +264,7 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; RECOVER-LABEL: define void @test_store8
; RECOVER-SAME: (ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-NEXT: entry:
-; RECOVER-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-NEXT: call void @__hwasan_store1_noabort(i64 [[TMP0]])
; RECOVER-NEXT: store i8 [[B]], ptr [[A]], align 4
@@ -286,10 +286,10 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; ABORT-INLINE-NEXT: [[TMP9:%.*]] = getelementptr i8, ptr [[TMP3]], i64 [[TMP8]]
; ABORT-INLINE-NEXT: [[TMP10:%.*]] = load i8, ptr [[TMP9]], align 1
; ABORT-INLINE-NEXT: [[TMP11:%.*]] = icmp ne i8 [[TMP6]], [[TMP10]]
-; ABORT-INLINE-NEXT: br i1 [[TMP11]], label [[TMP12:%.*]], label [[TMP26:%.*]], !prof [[PROF1]]
+; ABORT-INLINE-NEXT: br i1 [[TMP11]], label [[TMP12:%.*]], label [[TMP26:%.*]], !prof [[PROF2]]
; ABORT-INLINE: 12:
; ABORT-INLINE-NEXT: [[TMP13:%.*]] = icmp ugt i8 [[TMP10]], 15
-; ABORT-INLINE-NEXT: br i1 [[TMP13]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
+; ABORT-INLINE-NEXT: br i1 [[TMP13]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF2]]
; ABORT-INLINE: 14:
; ABORT-INLINE-NEXT: call void asm sideeffect "int3\0Anopl 80([[RAX:%.*]])", "{rdi}"(i64 [[TMP4]])
; ABORT-INLINE-NEXT: unreachable
@@ -298,13 +298,13 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; ABORT-INLINE-NEXT: [[TMP17:%.*]] = trunc i64 [[TMP16]] to i8
; ABORT-INLINE-NEXT: [[TMP18:%.*]] = add i8 [[TMP17]], 0
; ABORT-INLINE-NEXT: [[TMP19:%.*]] = icmp uge i8 [[TMP18]], [[TMP10]]
-; ABORT-INLINE-NEXT: br i1 [[TMP19]], label [[TMP14]], label [[TMP20:%.*]], !prof [[PROF1]]
+; ABORT-INLINE-NEXT: br i1 [[TMP19]], label [[TMP14]], label [[TMP20:%.*]], !prof [[PROF2]]
; ABORT-INLINE: 20:
; ABORT-INLINE-NEXT: [[TMP21:%.*]] = or i64 [[TMP7]], 15
; ABORT-INLINE-NEXT: [[TMP22:%.*]] = inttoptr i64 [[TMP21]] to ptr
; ABORT-INLINE-NEXT: [[TMP23:%.*]] = load i8, ptr [[TMP22]], align 1
; ABORT-INLINE-NEXT: [[TMP24:%.*]] = icmp ne i8 [[TMP6]], [[TMP23]]
-; ABORT-INLINE-NEXT: br i1 [[TMP24]], label [[TMP14]], label [[TMP25:%.*]], !prof [[PROF1]]
+; ABORT-INLINE-NEXT: br i1 [[TMP24]], label [[TMP14]], label [[TMP25:%.*]], !prof [[PROF2]]
; ABORT-INLINE: 25:
; ABORT-INLINE-NEXT: br label [[TMP26]]
; ABORT-INLINE: 26:
@@ -327,10 +327,10 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; RECOVER-INLINE-NEXT: [[TMP9:%.*]] = getelementptr i8, ptr [[TMP3]], i64 [[TMP8]]
; RECOVER-INLINE-NEXT: [[TMP10:%.*]] = load i8, ptr [[TMP9]], align 1
; RECOVER-INLINE-NEXT: [[TMP11:%.*]] = icmp ne i8 [[TMP6]], [[TMP10]]
-; RECOVER-INLINE-NEXT: br i1 [[TMP11]], label [[TMP12:%.*]], label [[TMP26:%.*]], !prof [[PROF1]]
+; RECOVER-INLINE-NEXT: br i1 [[TMP11]], label [[TMP12:%.*]], label [[TMP26:%.*]], !prof [[PROF2]]
; RECOVER-INLINE: 12:
; RECOVER-INLINE-NEXT: [[TMP13:%.*]] = icmp ugt i8 [[TMP10]], 15
-; RECOVER-INLINE-NEXT: br i1 [[TMP13]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF1]]
+; RECOVER-INLINE-NEXT: br i1 [[TMP13]], label [[TMP14:%.*]], label [[TMP15:%.*]], !prof [[PROF2]]
; RECOVER-INLINE: 14:
; RECOVER-INLINE-NEXT: call void asm sideeffect "int3\0Anopl 112([[RAX:%.*]])", "{rdi}"(i64 [[TMP4]])
; RECOVER-INLINE-NEXT: br label [[TMP25:%.*]]
@@ -339,13 +339,13 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; RECOVER-INLINE-NEXT: [[TMP17:%.*]] = trunc i64 [[TMP16]] to i8
; RECOVER-INLINE-NEXT: [[TMP18:%.*]] = add i8 [[TMP17]], 0
; RECOVER-INLINE-NEXT: [[TMP19:%.*]] = icmp uge i8 [[TMP18]], [[TMP10]]
-; RECOVER-INLINE-NEXT: br i1 [[TMP19]], label [[TMP14]], label [[TMP20:%.*]], !prof [[PROF1]]
+; RECOVER-INLINE-NEXT: br i1 [[TMP19]], label [[TMP14]], label [[TMP20:%.*]], !prof [[PROF2]]
; RECOVER-INLINE: 20:
; RECOVER-INLINE-NEXT: [[TMP21:%.*]] = or i64 [[TMP7]], 15
; RECOVER-INLINE-NEXT: [[TMP22:%.*]] = inttoptr i64 [[TMP21]] to ptr
; RECOVER-INLINE-NEXT: [[TMP23:%.*]] = load i8, ptr [[TMP22]], align 1
; RECOVER-INLINE-NEXT: [[TMP24:%.*]] = icmp ne i8 [[TMP6]], [[TMP23]]
-; RECOVER-INLINE-NEXT: br i1 [[TMP24]], label [[TMP14]], label [[TMP25]], !prof [[PROF1]]
+; RECOVER-INLINE-NEXT: br i1 [[TMP24]], label [[TMP14]], label [[TMP25]], !prof [[PROF2]]
; RECOVER-INLINE: 25:
; RECOVER-INLINE-NEXT: br label [[TMP26]]
; RECOVER-INLINE: 26:
@@ -361,7 +361,7 @@ define void @test_store40(ptr %a, i40 %b) sanitize_hwaddress {
; CHECK-LABEL: define void @test_store40
; CHECK-SAME: (ptr [[A:%.*]], i40 [[B:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; CHECK-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 5)
; CHECK-NEXT: store i40 [[B]], ptr [[A]], align 4
@@ -370,7 +370,7 @@ define void @test_store40(ptr %a, i40 %b) sanitize_hwaddress {
; NOFASTPATH-LABEL: define void @test_store40
; NOFASTPATH-SAME: (ptr [[A:%.*]], i40 [[B:%.*]]) #[[ATTR0]] {
; NOFASTPATH-NEXT: entry:
-; NOFASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; NOFASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; NOFASTPATH-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; NOFASTPATH-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 5)
; NOFASTPATH-NEXT: store i40 [[B]], ptr [[A]], align 4
@@ -379,7 +379,7 @@ define void @test_store40(ptr %a, i40 %b) sanitize_hwaddress {
; FASTPATH-LABEL: define void @test_store40
; FASTPATH-SAME: (ptr [[A:%.*]], i40 [[B:%.*]]) #[[ATTR0]] {
; FASTPATH-NEXT: entry:
-; FASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; FASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; FASTPATH-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; FASTPATH-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 5)
; FASTPATH-NEXT: store i40 [[B]], ptr [[A]], align 4
@@ -388,7 +388,7 @@ define void @test_store40(ptr %a, i40 %b) sanitize_hwaddress {
; ABORT-LABEL: define void @test_store40
; ABORT-SAME: (ptr [[A:%.*]], i40 [[B:%.*]]) #[[ATTR0]] {
; ABORT-NEXT: entry:
-; ABORT-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 5)
; ABORT-NEXT: store i40 [[B]], ptr [[A]], align 4
@@ -397,7 +397,7 @@ define void @test_store40(ptr %a, i40 %b) sanitize_hwaddress {
; RECOVER-LABEL: define void @test_store40
; RECOVER-SAME: (ptr [[A:%.*]], i40 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-NEXT: entry:
-; RECOVER-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-NEXT: call void @__hwasan_storeN_noabort(i64 [[TMP0]], i64 5)
; RECOVER-NEXT: store i40 [[B]], ptr [[A]], align 4
@@ -438,7 +438,7 @@ define void @test_store_unaligned(ptr %a, i64 %b) sanitize_hwaddress {
; CHECK-LABEL: define void @test_store_unaligned
; CHECK-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; CHECK-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 8)
; CHECK-NEXT: store i64 [[B]], ptr [[A]], align 4
@@ -447,7 +447,7 @@ define void @test_store_unaligned(ptr %a, i64 %b) sanitize_hwaddress {
; NOFASTPATH-LABEL: define void @test_store_unaligned
; NOFASTPATH-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; NOFASTPATH-NEXT: entry:
-; NOFASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; NOFASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; NOFASTPATH-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; NOFASTPATH-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 8)
; NOFASTPATH-NEXT: store i64 [[B]], ptr [[A]], align 4
@@ -456,7 +456,7 @@ define void @test_store_unaligned(ptr %a, i64 %b) sanitize_hwaddress {
; FASTPATH-LABEL: define void @test_store_unaligned
; FASTPATH-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; FASTPATH-NEXT: entry:
-; FASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; FASTPATH-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; FASTPATH-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; FASTPATH-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 8)
; FASTPATH-NEXT: store i64 [[B]], ptr [[A]], align 4
@@ -465,7 +465,7 @@ define void @test_store_unaligned(ptr %a, i64 %b) sanitize_hwaddress {
; ABORT-LABEL: define void @test_store_unaligned
; ABORT-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; ABORT-NEXT: entry:
-; ABORT-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 8)
; ABORT-NEXT: store i64 [[B]], ptr [[A]], align 4
@@ -474,7 +474,7 @@ define void @test_store_unaligned(ptr %a, i64 %b) sanitize_hwaddress {
; RECOVER-LABEL: define void @test_store_unaligned
; RECOVER-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-NEXT: entry:
-; RECOVER-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-NEXT: call void @__hwasan_storeN_noabort(i64 [[TMP0]], i64 8)
; RECOVER-NEXT: store i64 [[B]], ptr [[A]], align 4
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/alloca.ll b/llvm/test/Instrumentation/HWAddressSanitizer/alloca.ll
index f4f5e66549fe1..018da45634710 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/alloca.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/alloca.ll
@@ -69,7 +69,7 @@ define void @test_alloca() sanitize_hwaddress !dbg !15 {
; ZERO-BASED-SHADOW-LABEL: define void @test_alloca(
; ZERO-BASED-SHADOW-SAME: ) #[[ATTR0:[0-9]+]] personality ptr @__hwasan_personality_thunk !dbg [[DBG8:![0-9]+]] {
; ZERO-BASED-SHADOW-NEXT: entry:
-; ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
; ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = ptrtoint ptr [[TMP0]] to i64
; ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = lshr i64 [[TMP1]], 20
@@ -160,7 +160,7 @@ declare void @llvm.dbg.value(metadata, metadata, metadata)
;.
; DYNAMIC-SHADOW: [[META0]] = !{ptr @hwasan.note}
; DYNAMIC-SHADOW: [[META1:![0-9]+]] = distinct !DICompileUnit(language: DW_LANG_C_plus_plus_14, file: [[META2:![0-9]+]], producer: "{{.*}}clang version {{.*}}", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug, enums: [[META3:![0-9]+]], splitDebugInlining: false, nameTableKind: None)
-; DYNAMIC-SHADOW: [[META2]] = !DIFile(filename: "alloca.cpp", directory: {{.*}})
+; DYNAMIC-SHADOW: [[META2]] = !DIFile(filename: "{{.*}}alloca.cpp", directory: {{.*}})
; DYNAMIC-SHADOW: [[META3]] = !{}
; DYNAMIC-SHADOW: [[META4:![0-9]+]] = !{i32 7, !"Dwarf Version", i32 4}
; DYNAMIC-SHADOW: [[META5:![0-9]+]] = !{i32 2, !"Debug Info Version", i32 3}
@@ -177,7 +177,7 @@ declare void @llvm.dbg.value(metadata, metadata, metadata)
;.
; ZERO-BASED-SHADOW: [[META0]] = !{ptr @hwasan.note}
; ZERO-BASED-SHADOW: [[META1:![0-9]+]] = distinct !DICompileUnit(language: DW_LANG_C_plus_plus_14, file: [[META2:![0-9]+]], producer: "{{.*}}clang version {{.*}}", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug, enums: [[META3:![0-9]+]], splitDebugInlining: false, nameTableKind: None)
-; ZERO-BASED-SHADOW: [[META2]] = !DIFile(filename: "alloca.cpp", directory: {{.*}})
+; ZERO-BASED-SHADOW: [[META2]] = !DIFile(filename: "{{.*}}alloca.cpp", directory: {{.*}})
; ZERO-BASED-SHADOW: [[META3]] = !{}
; ZERO-BASED-SHADOW: [[META4:![0-9]+]] = !{i32 7, !"Dwarf Version", i32 4}
; ZERO-BASED-SHADOW: [[META5:![0-9]+]] = !{i32 2, !"Debug Info Version", i32 3}
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/basic.ll b/llvm/test/Instrumentation/HWAddressSanitizer/basic.ll
index 355e3b94978b3..b8bc6a9e0cd3c 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/basic.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/basic.ll
@@ -98,7 +98,7 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i8 @test_load8
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 0, i64 0)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[B:%.*]] = load i8, ptr [[A]], align 4
; ABORT-ZERO-BASED-SHADOW-NEXT: ret i8 [[B]]
@@ -106,7 +106,7 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i8 @test_load8
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -230,7 +230,7 @@ define i16 @test_load16(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i16 @test_load16
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 1, i64 0)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[B:%.*]] = load i16, ptr [[A]], align 4
; ABORT-ZERO-BASED-SHADOW-NEXT: ret i16 [[B]]
@@ -238,7 +238,7 @@ define i16 @test_load16(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i16 @test_load16
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -362,7 +362,7 @@ define i32 @test_load32(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i32 @test_load32
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 2, i64 0)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[B:%.*]] = load i32, ptr [[A]], align 4
; ABORT-ZERO-BASED-SHADOW-NEXT: ret i32 [[B]]
@@ -370,7 +370,7 @@ define i32 @test_load32(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i32 @test_load32
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -494,7 +494,7 @@ define i64 @test_load64(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i64 @test_load64
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 3, i64 0)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[B:%.*]] = load i64, ptr [[A]], align 8
; ABORT-ZERO-BASED-SHADOW-NEXT: ret i64 [[B]]
@@ -502,7 +502,7 @@ define i64 @test_load64(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i64 @test_load64
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -626,7 +626,7 @@ define i128 @test_load128(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i128 @test_load128
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 4, i64 0)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[B:%.*]] = load i128, ptr [[A]], align 16
; ABORT-ZERO-BASED-SHADOW-NEXT: ret i128 [[B]]
@@ -634,7 +634,7 @@ define i128 @test_load128(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i128 @test_load128
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -722,7 +722,7 @@ define i40 @test_load40(ptr %a) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define i40 @test_load40
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_loadN(i64 [[TMP0]], i64 5)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[B:%.*]] = load i40, ptr [[A]], align 4
@@ -731,7 +731,7 @@ define i40 @test_load40(ptr %a) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define i40 @test_load40
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_loadN_noabort(i64 [[TMP0]], i64 5)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[B:%.*]] = load i40, ptr [[A]], align 4
@@ -827,7 +827,7 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store8
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 16, i64 0)
; ABORT-ZERO-BASED-SHADOW-NEXT: store i8 [[B]], ptr [[A]], align 4
; ABORT-ZERO-BASED-SHADOW-NEXT: ret void
@@ -835,7 +835,7 @@ define void @test_store8(ptr %a, i8 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store8
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i8 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -959,7 +959,7 @@ define void @test_store16(ptr %a, i16 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store16
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 17, i64 0)
; ABORT-ZERO-BASED-SHADOW-NEXT: store i16 [[B]], ptr [[A]], align 4
; ABORT-ZERO-BASED-SHADOW-NEXT: ret void
@@ -967,7 +967,7 @@ define void @test_store16(ptr %a, i16 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store16
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i16 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1091,7 +1091,7 @@ define void @test_store32(ptr %a, i32 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store32
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 18, i64 0)
; ABORT-ZERO-BASED-SHADOW-NEXT: store i32 [[B]], ptr [[A]], align 4
; ABORT-ZERO-BASED-SHADOW-NEXT: ret void
@@ -1099,7 +1099,7 @@ define void @test_store32(ptr %a, i32 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store32
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i32 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1223,7 +1223,7 @@ define void @test_store64(ptr %a, i64 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store64
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 19, i64 0)
; ABORT-ZERO-BASED-SHADOW-NEXT: store i64 [[B]], ptr [[A]], align 8
; ABORT-ZERO-BASED-SHADOW-NEXT: ret void
@@ -1231,7 +1231,7 @@ define void @test_store64(ptr %a, i64 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store64
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1355,7 +1355,7 @@ define void @test_store128(ptr %a, i128 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store128
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i128 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 20, i64 0)
; ABORT-ZERO-BASED-SHADOW-NEXT: store i128 [[B]], ptr [[A]], align 16
; ABORT-ZERO-BASED-SHADOW-NEXT: ret void
@@ -1363,7 +1363,7 @@ define void @test_store128(ptr %a, i128 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store128
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i128 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 56
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP2:%.*]] = trunc i64 [[TMP1]] to i8
@@ -1451,7 +1451,7 @@ define void @test_store40(ptr %a, i40 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store40
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i40 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 5)
; ABORT-ZERO-BASED-SHADOW-NEXT: store i40 [[B]], ptr [[A]], align 4
@@ -1460,7 +1460,7 @@ define void @test_store40(ptr %a, i40 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store40
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i40 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_storeN_noabort(i64 [[TMP0]], i64 5)
; RECOVER-ZERO-BASED-SHADOW-NEXT: store i40 [[B]], ptr [[A]], align 4
@@ -1520,7 +1520,7 @@ define void @test_store_unaligned(ptr %a, i64 %b) sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store_unaligned
; ABORT-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; ABORT-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_storeN(i64 [[TMP0]], i64 8)
; ABORT-ZERO-BASED-SHADOW-NEXT: store i64 [[B]], ptr [[A]], align 4
@@ -1529,7 +1529,7 @@ define void @test_store_unaligned(ptr %a, i64 %b) sanitize_hwaddress {
; RECOVER-ZERO-BASED-SHADOW-LABEL: define void @test_store_unaligned
; RECOVER-ZERO-BASED-SHADOW-SAME: (ptr [[A:%.*]], i64 [[B:%.*]]) #[[ATTR0]] {
; RECOVER-ZERO-BASED-SHADOW-NEXT: entry:
-; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; RECOVER-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; RECOVER-ZERO-BASED-SHADOW-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; RECOVER-ZERO-BASED-SHADOW-NEXT: call void @__hwasan_storeN_noabort(i64 [[TMP0]], i64 8)
; RECOVER-ZERO-BASED-SHADOW-NEXT: store i64 [[B]], ptr [[A]], align 4
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/hwasan-pass-second-run.ll b/llvm/test/Instrumentation/HWAddressSanitizer/hwasan-pass-second-run.ll
index 00614b603fe79..be28e2478016a 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/hwasan-pass-second-run.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/hwasan-pass-second-run.ll
@@ -22,7 +22,7 @@ define i8 @test_load8(ptr %a) sanitize_hwaddress {
; CHECK-LABEL: define i8 @test_load8
; CHECK-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr [[A]] to i64
; CHECK-NEXT: call void @__hwasan_load1(i64 [[TMP0]])
; CHECK-NEXT: [[B:%.*]] = load i8, ptr [[A]], align 4
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/kernel-alloca.ll b/llvm/test/Instrumentation/HWAddressSanitizer/kernel-alloca.ll
index 7652587ce4ec0..b9fcb86fbf127 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/kernel-alloca.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/kernel-alloca.ll
@@ -12,7 +12,7 @@ define void @test_alloca() sanitize_hwaddress {
; CHECK-LABEL: define void @test_alloca
; CHECK-SAME: () #[[ATTR0:[0-9]+]] {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; CHECK-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; CHECK-NEXT: [[TMP0:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
; CHECK-NEXT: [[TMP1:%.*]] = ptrtoint ptr [[TMP0]] to i64
; CHECK-NEXT: [[TMP2:%.*]] = lshr i64 [[TMP1]], 20
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/prologue.ll b/llvm/test/Instrumentation/HWAddressSanitizer/prologue.ll
index 4e7c021bd7f97..9fb0ceb593aec 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/prologue.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/prologue.ll
@@ -61,7 +61,7 @@ define i32 @test_load(ptr %a) sanitize_hwaddress {
; FUCHSIA-LABEL: define i32 @test_load
; FUCHSIA-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; FUCHSIA-NEXT: entry:
-; FUCHSIA-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; FUCHSIA-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; FUCHSIA-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 2, i64 0)
; FUCHSIA-NEXT: [[X:%.*]] = load i32, ptr [[A]], align 4
; FUCHSIA-NEXT: ret i32 [[X]]
@@ -69,7 +69,7 @@ define i32 @test_load(ptr %a) sanitize_hwaddress {
; FUCHSIA-LIBCALL-LABEL: define i32 @test_load
; FUCHSIA-LIBCALL-SAME: (ptr [[A:%.*]]) #[[ATTR0:[0-9]+]] {
; FUCHSIA-LIBCALL-NEXT: entry:
-; FUCHSIA-LIBCALL-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; FUCHSIA-LIBCALL-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; FUCHSIA-LIBCALL-NEXT: call void @llvm.hwasan.check.memaccess.shortgranules.fixedshadow(ptr [[A]], i32 2, i64 0)
; FUCHSIA-LIBCALL-NEXT: [[X:%.*]] = load i32, ptr [[A]], align 4
; FUCHSIA-LIBCALL-NEXT: ret i32 [[X]]
@@ -92,7 +92,7 @@ define void @test_alloca() sanitize_hwaddress {
; CHECK-NEXT: [[TMP1:%.*]] = getelementptr i8, ptr [[TMP0]], i32 48
; CHECK-NEXT: [[TMP2:%.*]] = load i64, ptr [[TMP1]], align 8
; CHECK-NEXT: [[TMP3:%.*]] = ashr i64 [[TMP2]], 3
-; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.read_register.i64(metadata [[META1:![0-9]+]])
+; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.read_register.i64(metadata [[META2:![0-9]+]])
; CHECK-NEXT: [[TMP5:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
; CHECK-NEXT: [[TMP6:%.*]] = ptrtoint ptr [[TMP5]] to i64
; CHECK-NEXT: [[TMP7:%.*]] = shl i64 [[TMP6]], 44
@@ -138,7 +138,7 @@ define void @test_alloca() sanitize_hwaddress {
; NOIFUNC-TLS-HISTORY-NEXT: [[TMP1:%.*]] = getelementptr i8, ptr [[TMP0]], i32 48
; NOIFUNC-TLS-HISTORY-NEXT: [[TMP2:%.*]] = load i64, ptr [[TMP1]], align 8
; NOIFUNC-TLS-HISTORY-NEXT: [[TMP3:%.*]] = ashr i64 [[TMP2]], 3
-; NOIFUNC-TLS-HISTORY-NEXT: [[TMP4:%.*]] = call i64 @llvm.read_register.i64(metadata [[META1:![0-9]+]])
+; NOIFUNC-TLS-HISTORY-NEXT: [[TMP4:%.*]] = call i64 @llvm.read_register.i64(metadata [[META2:![0-9]+]])
; NOIFUNC-TLS-HISTORY-NEXT: [[TMP5:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
; NOIFUNC-TLS-HISTORY-NEXT: [[TMP6:%.*]] = ptrtoint ptr [[TMP5]] to i64
; NOIFUNC-TLS-HISTORY-NEXT: [[TMP7:%.*]] = shl i64 [[TMP6]], 44
@@ -273,10 +273,10 @@ define void @test_alloca() sanitize_hwaddress {
; FUCHSIA-LABEL: define void @test_alloca
; FUCHSIA-SAME: () #[[ATTR0]] personality ptr @__hwasan_personality_thunk {
; FUCHSIA-NEXT: entry:
-; FUCHSIA-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; FUCHSIA-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; FUCHSIA-NEXT: [[TMP0:%.*]] = load i64, ptr @__hwasan_tls, align 8
; FUCHSIA-NEXT: [[TMP1:%.*]] = ashr i64 [[TMP0]], 3
-; FUCHSIA-NEXT: [[TMP2:%.*]] = call i64 @llvm.read_register.i64(metadata [[META1:![0-9]+]])
+; FUCHSIA-NEXT: [[TMP2:%.*]] = call i64 @llvm.read_register.i64(metadata [[META2:![0-9]+]])
; FUCHSIA-NEXT: [[TMP3:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
; FUCHSIA-NEXT: [[TMP4:%.*]] = ptrtoint ptr [[TMP3]] to i64
; FUCHSIA-NEXT: [[TMP5:%.*]] = shl i64 [[TMP4]], 44
@@ -318,8 +318,8 @@ define void @test_alloca() sanitize_hwaddress {
; FUCHSIA-LIBCALL-LABEL: define void @test_alloca
; FUCHSIA-LIBCALL-SAME: () #[[ATTR0]] personality ptr @__hwasan_personality_thunk {
; FUCHSIA-LIBCALL-NEXT: entry:
-; FUCHSIA-LIBCALL-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
-; FUCHSIA-LIBCALL-NEXT: [[TMP0:%.*]] = call i64 @llvm.read_register.i64(metadata [[META1:![0-9]+]])
+; FUCHSIA-LIBCALL-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
+; FUCHSIA-LIBCALL-NEXT: [[TMP0:%.*]] = call i64 @llvm.read_register.i64(metadata [[META2:![0-9]+]])
; FUCHSIA-LIBCALL-NEXT: [[TMP1:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
; FUCHSIA-LIBCALL-NEXT: [[TMP2:%.*]] = ptrtoint ptr [[TMP1]] to i64
; FUCHSIA-LIBCALL-NEXT: [[TMP3:%.*]] = shl i64 [[TMP2]], 44
diff --git a/llvm/test/Instrumentation/HWAddressSanitizer/zero-ptr.ll b/llvm/test/Instrumentation/HWAddressSanitizer/zero-ptr.ll
index 95cf6f1544df0..0a03adc772425 100644
--- a/llvm/test/Instrumentation/HWAddressSanitizer/zero-ptr.ll
+++ b/llvm/test/Instrumentation/HWAddressSanitizer/zero-ptr.ll
@@ -21,7 +21,7 @@ define void @test_store_to_zeroptr() sanitize_hwaddress {
; ABORT-ZERO-BASED-SHADOW-LABEL: define void @test_store_to_zeroptr
; ABORT-ZERO-BASED-SHADOW-SAME: () #[[ATTR0:[0-9]+]] {
; ABORT-ZERO-BASED-SHADOW-NEXT: entry:
-; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr null)
+; ABORT-ZERO-BASED-SHADOW-NEXT: [[DOTHWASAN_SHADOW:%.*]] = call ptr asm "", "=r,0"(ptr zeroinitializer)
; ABORT-ZERO-BASED-SHADOW-NEXT: [[B:%.*]] = inttoptr i64 0 to ptr
; ABORT-ZERO-BASED-SHADOW-NEXT: store i64 42, ptr [[B]], align 8
; ABORT-ZERO-BASED-SHADOW-NEXT: ret void
diff --git a/llvm/test/Instrumentation/MemorySanitizer/msan_basic.ll b/llvm/test/Instrumentation/MemorySanitizer/msan_basic.ll
index 0ad9e4dd32adf..41748ee40c9c5 100644
--- a/llvm/test/Instrumentation/MemorySanitizer/msan_basic.ll
+++ b/llvm/test/Instrumentation/MemorySanitizer/msan_basic.ll
@@ -356,7 +356,7 @@ define void @FuncWithPhi(ptr nocapture %a, ptr %b, ptr nocapture %c) nounwind uw
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr getelementptr (i8, ptr @__msan_param_tls, i64 8), align 8
; CHECK-NEXT: call void @llvm.donothing()
; CHECK-NEXT: [[TMP1:%.*]] = ptrtoint ptr [[B]] to i64
-; CHECK-NEXT: [[TMP2:%.*]] = xor i64 [[TMP1]], 0
+; CHECK-NEXT: [[TMP2:%.*]] = xor i64 [[TMP1]], ptrtoint (ptr null to i64)
; CHECK-NEXT: [[TMP3:%.*]] = or i64 [[TMP0]], 0
; CHECK-NEXT: [[TMP4:%.*]] = icmp ne i64 [[TMP3]], 0
; CHECK-NEXT: [[TMP5:%.*]] = xor i64 [[TMP3]], -1
@@ -401,7 +401,7 @@ define void @FuncWithPhi(ptr nocapture %a, ptr %b, ptr nocapture %c) nounwind uw
; ORIGIN-NEXT: [[TMP1:%.*]] = load i32, ptr getelementptr (i8, ptr @__msan_param_origin_tls, i64 8), align 4
; ORIGIN-NEXT: call void @llvm.donothing()
; ORIGIN-NEXT: [[TMP2:%.*]] = ptrtoint ptr [[B]] to i64
-; ORIGIN-NEXT: [[TMP3:%.*]] = xor i64 [[TMP2]], 0
+; ORIGIN-NEXT: [[TMP3:%.*]] = xor i64 [[TMP2]], ptrtoint (ptr null to i64)
; ORIGIN-NEXT: [[TMP4:%.*]] = or i64 [[TMP0]], 0
; ORIGIN-NEXT: [[TMP5:%.*]] = icmp ne i64 [[TMP4]], 0
; ORIGIN-NEXT: [[TMP6:%.*]] = xor i64 [[TMP4]], -1
@@ -465,7 +465,7 @@ define void @FuncWithPhi(ptr nocapture %a, ptr %b, ptr nocapture %c) nounwind uw
; CALLS-NEXT: [[TMP5:%.*]] = load i32, ptr @__msan_param_origin_tls, align 4
; CALLS-NEXT: call void @llvm.donothing()
; CALLS-NEXT: [[TMP6:%.*]] = ptrtoint ptr [[B]] to i64
-; CALLS-NEXT: [[TMP7:%.*]] = xor i64 [[TMP6]], 0
+; CALLS-NEXT: [[TMP7:%.*]] = xor i64 [[TMP6]], ptrtoint (ptr null to i64)
; CALLS-NEXT: [[TMP8:%.*]] = or i64 [[TMP0]], 0
; CALLS-NEXT: [[TMP9:%.*]] = icmp ne i64 [[TMP8]], 0
; CALLS-NEXT: [[TMP10:%.*]] = xor i64 [[TMP8]], -1
@@ -2090,8 +2090,8 @@ define <2 x i1> @ICmpSLT_vector_Zero(<2 x ptr> %x) nounwind uwtable readnone san
; CHECK-NEXT: [[TMP4:%.*]] = xor <2 x i64> [[TMP1]], splat (i64 -1)
; CHECK-NEXT: [[TMP5:%.*]] = and <2 x i64> [[TMP3]], [[TMP4]]
; CHECK-NEXT: [[TMP6:%.*]] = or <2 x i64> [[TMP3]], [[TMP1]]
-; CHECK-NEXT: [[TMP9:%.*]] = icmp ult <2 x i64> [[TMP5]], splat (i64 -9223372036854775808)
-; CHECK-NEXT: [[TMP16:%.*]] = icmp ult <2 x i64> [[TMP6]], splat (i64 -9223372036854775808)
+; CHECK-NEXT: [[TMP9:%.*]] = icmp ult <2 x i64> [[TMP5]], <i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 0), i64 -9223372036854775808), i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 1), i64 -9223372036854775808)>
+; CHECK-NEXT: [[TMP16:%.*]] = icmp ult <2 x i64> [[TMP6]], <i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 0), i64 -9223372036854775808), i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 1), i64 -9223372036854775808)>
; CHECK-NEXT: [[TMP17:%.*]] = xor <2 x i1> [[TMP9]], [[TMP16]]
; CHECK-NEXT: [[TMP18:%.*]] = icmp slt <2 x ptr> [[X]], zeroinitializer
; CHECK-NEXT: store <2 x i1> [[TMP17]], ptr @__msan_retval_tls, align 8
@@ -2107,8 +2107,8 @@ define <2 x i1> @ICmpSLT_vector_Zero(<2 x ptr> %x) nounwind uwtable readnone san
; ORIGIN-NEXT: [[TMP5:%.*]] = xor <2 x i64> [[TMP1]], splat (i64 -1)
; ORIGIN-NEXT: [[TMP6:%.*]] = and <2 x i64> [[TMP4]], [[TMP5]]
; ORIGIN-NEXT: [[TMP7:%.*]] = or <2 x i64> [[TMP4]], [[TMP1]]
-; ORIGIN-NEXT: [[TMP10:%.*]] = icmp ult <2 x i64> [[TMP6]], splat (i64 -9223372036854775808)
-; ORIGIN-NEXT: [[TMP17:%.*]] = icmp ult <2 x i64> [[TMP7]], splat (i64 -9223372036854775808)
+; ORIGIN-NEXT: [[TMP10:%.*]] = icmp ult <2 x i64> [[TMP6]], <i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 0), i64 -9223372036854775808), i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 1), i64 -9223372036854775808)>
+; ORIGIN-NEXT: [[TMP17:%.*]] = icmp ult <2 x i64> [[TMP7]], <i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 0), i64 -9223372036854775808), i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 1), i64 -9223372036854775808)>
; ORIGIN-NEXT: [[TMP18:%.*]] = xor <2 x i1> [[TMP10]], [[TMP17]]
; ORIGIN-NEXT: [[TMP19:%.*]] = icmp slt <2 x ptr> [[X]], zeroinitializer
; ORIGIN-NEXT: store <2 x i1> [[TMP18]], ptr @__msan_retval_tls, align 8
@@ -2125,8 +2125,8 @@ define <2 x i1> @ICmpSLT_vector_Zero(<2 x ptr> %x) nounwind uwtable readnone san
; CALLS-NEXT: [[TMP5:%.*]] = xor <2 x i64> [[TMP1]], splat (i64 -1)
; CALLS-NEXT: [[TMP6:%.*]] = and <2 x i64> [[TMP4]], [[TMP5]]
; CALLS-NEXT: [[TMP7:%.*]] = or <2 x i64> [[TMP4]], [[TMP1]]
-; CALLS-NEXT: [[TMP10:%.*]] = icmp ult <2 x i64> [[TMP6]], splat (i64 -9223372036854775808)
-; CALLS-NEXT: [[TMP17:%.*]] = icmp ult <2 x i64> [[TMP7]], splat (i64 -9223372036854775808)
+; CALLS-NEXT: [[TMP10:%.*]] = icmp ult <2 x i64> [[TMP6]], <i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 0), i64 -9223372036854775808), i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 1), i64 -9223372036854775808)>
+; CALLS-NEXT: [[TMP17:%.*]] = icmp ult <2 x i64> [[TMP7]], <i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 0), i64 -9223372036854775808), i64 xor (i64 extractelement (<2 x i64> ptrtoint (<2 x ptr> zeroinitializer to <2 x i64>), i32 1), i64 -9223372036854775808)>
; CALLS-NEXT: [[TMP18:%.*]] = xor <2 x i1> [[TMP10]], [[TMP17]]
; CALLS-NEXT: [[TMP19:%.*]] = icmp slt <2 x ptr> [[X]], zeroinitializer
; CALLS-NEXT: store <2 x i1> [[TMP18]], ptr @__msan_retval_tls, align 8
@@ -2746,7 +2746,7 @@ define void @VAStart(i32 %x, ...) sanitize_memory {
; CHECK-NEXT: [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080
; CHECK-NEXT: [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr
; CHECK-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[TMP7]], i8 -1, i64 4, i1 false)
-; CHECK-NEXT: [[VA:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CHECK-NEXT: [[VA:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[VA]] to i64
; CHECK-NEXT: [[TMP9:%.*]] = xor i64 [[TMP8]], 87960930222080
; CHECK-NEXT: [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr
@@ -2799,7 +2799,7 @@ define void @VAStart(i32 %x, ...) sanitize_memory {
; ORIGIN-NEXT: [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
; ORIGIN-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[TMP9]], i8 -1, i64 4, i1 false)
; ORIGIN-NEXT: call void @__msan_set_alloca_origin_with_descr(ptr [[X_ADDR]], i64 4, ptr @[[GLOB4:[0-9]+]], ptr @[[GLOB5:[0-9]+]])
-; ORIGIN-NEXT: [[VA:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; ORIGIN-NEXT: [[VA:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
; ORIGIN-NEXT: [[TMP13:%.*]] = ptrtoint ptr [[VA]] to i64
; ORIGIN-NEXT: [[TMP14:%.*]] = xor i64 [[TMP13]], 87960930222080
; ORIGIN-NEXT: [[TMP15:%.*]] = inttoptr i64 [[TMP14]] to ptr
@@ -2873,7 +2873,7 @@ define void @VAStart(i32 %x, ...) sanitize_memory {
; CALLS-NEXT: [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr
; CALLS-NEXT: call void @llvm.memset.p0.i64(ptr align 4 [[TMP9]], i8 -1, i64 4, i1 false)
; CALLS-NEXT: call void @__msan_set_alloca_origin_with_descr(ptr [[X_ADDR]], i64 4, ptr @[[GLOB4:[0-9]+]], ptr @[[GLOB5:[0-9]+]])
-; CALLS-NEXT: [[VA:%.*]] = alloca [1 x %struct.__va_list_tag], align 16
+; CALLS-NEXT: [[VA:%.*]] = alloca [1 x [[STRUCT___VA_LIST_TAG:%.*]]], align 16
; CALLS-NEXT: [[TMP13:%.*]] = ptrtoint ptr [[VA]] to i64
; CALLS-NEXT: [[TMP14:%.*]] = xor i64 [[TMP13]], 87960930222080
; CALLS-NEXT: [[TMP15:%.*]] = inttoptr i64 [[TMP14]] to ptr
diff --git a/llvm/test/Instrumentation/TypeSanitizer/access-with-offset.ll b/llvm/test/Instrumentation/TypeSanitizer/access-with-offset.ll
index 84e0f7307c7ec..aec080b183bac 100644
--- a/llvm/test/Instrumentation/TypeSanitizer/access-with-offset.ll
+++ b/llvm/test/Instrumentation/TypeSanitizer/access-with-offset.ll
@@ -17,7 +17,7 @@ define ptr @test_load_offset(ptr %argv) {
; CHECK-NEXT: [[ENTRY:.*:]]
; CHECK-NEXT: [[APP_MEM_MASK:%.*]] = load i64, ptr @__tysan_app_memory_mask, align 4
; CHECK-NEXT: [[SHADOW_BASE:%.*]] = load i64, ptr @__tysan_shadow_memory_address, align 4
-; CHECK-NEXT: [[APP_PTR_MASKED:%.*]] = and i64 0, [[APP_MEM_MASK]]
+; CHECK-NEXT: [[APP_PTR_MASKED:%.*]] = and i64 ptrtoint (ptr null to i64), [[APP_MEM_MASK]]
; CHECK-NEXT: [[APP_PTR_SHIFTED:%.*]] = shl i64 [[APP_PTR_MASKED]], 3
; CHECK-NEXT: [[SHADOW_PTR_INT:%.*]] = add i64 [[APP_PTR_SHIFTED]], [[SHADOW_BASE]]
; CHECK-NEXT: [[SHADOW_PTR:%.*]] = inttoptr i64 [[SHADOW_PTR_INT]] to ptr
diff --git a/llvm/test/Transforms/Attributor/AMDGPU/do-not-replace-addrspacecast-with-constantpointernull.ll b/llvm/test/Transforms/Attributor/AMDGPU/do-not-replace-addrspacecast-with-constantpointernull.ll
index fb4153bac808e..735384fc5a10c 100644
--- a/llvm/test/Transforms/Attributor/AMDGPU/do-not-replace-addrspacecast-with-constantpointernull.ll
+++ b/llvm/test/Transforms/Attributor/AMDGPU/do-not-replace-addrspacecast-with-constantpointernull.ll
@@ -4,7 +4,7 @@
define i32 @addrspacecast_ptr(ptr %p0, ptr addrspace(5) %p5) {
; CHECK-LABEL: define i32 @addrspacecast_ptr(
; CHECK-SAME: ptr nofree readonly captures(none) [[P0:%.*]], ptr addrspace(5) nofree readonly [[P5:%.*]]) #[[ATTR0:[0-9]+]] {
-; CHECK-NEXT: [[ICMP:%.*]] = icmp eq ptr addrspace(5) [[P5]], addrspacecast (ptr null to ptr addrspace(5))
+; CHECK-NEXT: [[ICMP:%.*]] = icmp eq ptr addrspace(5) [[P5]], null
; CHECK-NEXT: [[SELECT:%.*]] = select i1 [[ICMP]], ptr [[P0]], ptr null
; CHECK-NEXT: [[LOAD:%.*]] = load i32, ptr [[SELECT]], align 4
; CHECK-NEXT: ret i32 [[LOAD]]
@@ -19,7 +19,7 @@ define i32 @vec_addrspacecast_ptr(ptr %p0, ptr %p1, <2 x ptr addrspace(5)> %ptrv
; CHECK-LABEL: define i32 @vec_addrspacecast_ptr(
; CHECK-SAME: ptr nofree readonly captures(none) [[P0:%.*]], ptr nofree noundef nonnull readonly align 16 captures(none) dereferenceable(8) [[P1:%.*]], <2 x ptr addrspace(5)> [[PTRVEC:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: [[LOADVEC:%.*]] = load <2 x ptr addrspace(5)>, ptr [[P1]], align 16
-; CHECK-NEXT: [[ICMPVEC:%.*]] = icmp eq <2 x ptr addrspace(5)> [[LOADVEC]], <ptr addrspace(5) addrspacecast (ptr null to ptr addrspace(5)), ptr addrspace(5) addrspacecast (ptr null to ptr addrspace(5))>
+; CHECK-NEXT: [[ICMPVEC:%.*]] = icmp eq <2 x ptr addrspace(5)> [[LOADVEC]], zeroinitializer
; CHECK-NEXT: [[ICMP:%.*]] = extractelement <2 x i1> [[ICMPVEC]], i32 1
; CHECK-NEXT: [[SELECT:%.*]] = select i1 [[ICMP]], ptr [[P0]], ptr null
; CHECK-NEXT: [[LOAD:%.*]] = load i32, ptr [[SELECT]], align 4
@@ -37,7 +37,7 @@ define i32 @addrspacecast_vec_as1_ptr(ptr %p0, ptr %p1, <2 x ptr addrspace(5)> %
; CHECK-LABEL: define i32 @addrspacecast_vec_as1_ptr(
; CHECK-SAME: ptr nofree readonly captures(none) [[P0:%.*]], ptr nofree noundef nonnull readonly align 16 captures(none) dereferenceable(8) [[P1:%.*]], <2 x ptr addrspace(5)> [[PTRVEC:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: [[LOADVEC:%.*]] = load <2 x ptr addrspace(5)>, ptr [[P1]], align 16
-; CHECK-NEXT: [[ICMPVEC:%.*]] = icmp eq <2 x ptr addrspace(5)> [[LOADVEC]], <ptr addrspace(5) addrspacecast (ptr addrspace(1) null to ptr addrspace(5)), ptr addrspace(5) addrspacecast (ptr addrspace(1) null to ptr addrspace(5))>
+; CHECK-NEXT: [[ICMPVEC:%.*]] = icmp eq <2 x ptr addrspace(5)> [[LOADVEC]], zeroinitializer
; CHECK-NEXT: [[ICMP:%.*]] = extractelement <2 x i1> [[ICMPVEC]], i32 1
; CHECK-NEXT: [[SELECT:%.*]] = select i1 [[ICMP]], ptr [[P0]], ptr null
; CHECK-NEXT: [[LOAD:%.*]] = load i32, ptr [[SELECT]], align 4
@@ -55,7 +55,7 @@ define i32 @addrspacecast_vec_ptr(ptr %p0, ptr %p1, <2 x ptr addrspace(5)> %ptrv
; CHECK-LABEL: define i32 @addrspacecast_vec_ptr(
; CHECK-SAME: ptr nofree readonly captures(none) [[P0:%.*]], ptr nofree noundef nonnull readonly align 16 captures(none) dereferenceable(8) [[P1:%.*]], <2 x ptr addrspace(5)> [[PTRVEC:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: [[LOADVEC:%.*]] = load <2 x ptr addrspace(5)>, ptr [[P1]], align 16
-; CHECK-NEXT: [[ICMPVEC:%.*]] = icmp eq <2 x ptr addrspace(5)> [[LOADVEC]], <ptr addrspace(5) addrspacecast (ptr null to ptr addrspace(5)), ptr addrspace(5) addrspacecast (ptr null to ptr addrspace(5))>
+; CHECK-NEXT: [[ICMPVEC:%.*]] = icmp eq <2 x ptr addrspace(5)> [[LOADVEC]], zeroinitializer
; CHECK-NEXT: [[ICMP:%.*]] = extractelement <2 x i1> [[ICMPVEC]], i32 1
; CHECK-NEXT: [[SELECT:%.*]] = select i1 [[ICMP]], ptr [[P0]], ptr null
; CHECK-NEXT: [[LOAD:%.*]] = load i32, ptr [[SELECT]], align 4
diff --git a/llvm/test/Transforms/GlobalOpt/issue62384.ll b/llvm/test/Transforms/GlobalOpt/issue62384.ll
index cc2bc8940b891..ae2a495b25ca5 100644
--- a/llvm/test/Transforms/GlobalOpt/issue62384.ll
+++ b/llvm/test/Transforms/GlobalOpt/issue62384.ll
@@ -10,10 +10,10 @@
define internal void @ctor() {
; CHECK-LABEL: define internal void @ctor() {
-; CHECK-NEXT: tail call fastcc void @init0(ptr addrspacecast (ptr addrspace(1) null to ptr))
-; CHECK-NEXT: tail call fastcc void @init1(ptr addrspacecast (ptr addrspace(3) null to ptr))
-; CHECK-NEXT: tail call fastcc void @init2(ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1)))
-; CHECK-NEXT: tail call fastcc void @init3(ptr addrspace(1) addrspacecast (ptr addrspace(2) null to ptr addrspace(1)))
+; CHECK-NEXT: tail call fastcc void @init0(ptr null)
+; CHECK-NEXT: tail call fastcc void @init1(ptr null)
+; CHECK-NEXT: tail call fastcc void @init2(ptr addrspace(1) null)
+; CHECK-NEXT: tail call fastcc void @init3(ptr addrspace(1) null)
; CHECK-NEXT: ret void
;
tail call void @init0(ptr addrspacecast (ptr addrspace(1) null to ptr))
@@ -26,9 +26,7 @@ define internal void @ctor() {
define internal void @init0(ptr %T) {
; CHECK-LABEL: define internal fastcc void @init0
; CHECK-SAME: (ptr [[T:%.*]]) unnamed_addr {
-; CHECK-NEXT: [[LD:%.*]] = load ptr, ptr addrspace(1) @gv0, align 8
-; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr [[LD]], null
-; CHECK-NEXT: store ptr addrspacecast (ptr addrspace(1) null to ptr), ptr addrspace(1) @gv0, align 8
+; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr null, null
; CHECK-NEXT: ret void
;
%ld = load ptr, ptr addrspace(1) @gv0, align 8
@@ -42,7 +40,7 @@ define internal void @init1(ptr %T) {
; CHECK-SAME: (ptr [[T:%.*]]) unnamed_addr {
; CHECK-NEXT: [[LD:%.*]] = load ptr addrspace(3), ptr addrspace(1) @gv1, align 4
; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr addrspace(3) [[LD]], null
-; CHECK-NEXT: store ptr addrspacecast (ptr addrspace(3) null to ptr), ptr addrspace(1) @gv1, align 8
+; CHECK-NEXT: store ptr null, ptr addrspace(1) @gv1, align 8
; CHECK-NEXT: ret void
;
%ld = load ptr addrspace(3), ptr addrspace(1) @gv1, align 4
@@ -54,9 +52,7 @@ define internal void @init1(ptr %T) {
define internal void @init2(ptr addrspace(1) %T) {
; CHECK-LABEL: define internal fastcc void @init2
; CHECK-SAME: (ptr addrspace(1) [[T:%.*]]) unnamed_addr {
-; CHECK-NEXT: [[LD:%.*]] = load ptr addrspace(1), ptr addrspace(1) @gv2, align 4
-; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr addrspace(1) [[LD]], null
-; CHECK-NEXT: store ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1)), ptr addrspace(1) @gv2, align 8
+; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr addrspace(1) null, null
; CHECK-NEXT: ret void
;
%ld = load ptr addrspace(1), ptr addrspace(1) @gv2, align 4
@@ -68,9 +64,7 @@ define internal void @init2(ptr addrspace(1) %T) {
define internal void @init3(ptr addrspace(1) %T) {
; CHECK-LABEL: define internal fastcc void @init3
; CHECK-SAME: (ptr addrspace(1) [[T:%.*]]) unnamed_addr {
-; CHECK-NEXT: [[LD:%.*]] = load ptr addrspace(1), ptr addrspace(1) @gv3, align 4
-; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr addrspace(1) [[LD]], null
-; CHECK-NEXT: store ptr addrspace(1) addrspacecast (ptr addrspace(2) null to ptr addrspace(1)), ptr addrspace(1) @gv3, align 8
+; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr addrspace(1) null, null
; CHECK-NEXT: ret void
;
%ld = load ptr addrspace(1), ptr addrspace(1) @gv3, align 4
diff --git a/llvm/test/Transforms/InferAddressSpaces/AMDGPU/basic.ll b/llvm/test/Transforms/InferAddressSpaces/AMDGPU/basic.ll
index 60bb38f863e8e..5ccea73b10aea 100644
--- a/llvm/test/Transforms/InferAddressSpaces/AMDGPU/basic.ll
+++ b/llvm/test/Transforms/InferAddressSpaces/AMDGPU/basic.ll
@@ -225,7 +225,7 @@ define void @local_nullptr(ptr addrspace(1) nocapture %results, ptr addrspace(3)
; CHECK-LABEL: define void @local_nullptr(
; CHECK-SAME: ptr addrspace(1) captures(none) [[RESULTS:%.*]], ptr addrspace(3) [[A:%.*]]) {
; CHECK-NEXT: [[ENTRY:.*:]]
-; CHECK-NEXT: [[TOBOOL:%.*]] = icmp ne ptr addrspace(3) [[A]], addrspacecast (ptr addrspace(5) null to ptr addrspace(3))
+; CHECK-NEXT: [[TOBOOL:%.*]] = icmp ne ptr addrspace(3) [[A]], null
; CHECK-NEXT: [[CONV:%.*]] = zext i1 [[TOBOOL]] to i32
; CHECK-NEXT: store i32 [[CONV]], ptr addrspace(1) [[RESULTS]], align 4
; CHECK-NEXT: ret void
diff --git a/llvm/test/Transforms/InferAddressSpaces/AMDGPU/issue110433.ll b/llvm/test/Transforms/InferAddressSpaces/AMDGPU/issue110433.ll
index 1928bb98cd2a7..edd5febbbb941 100644
--- a/llvm/test/Transforms/InferAddressSpaces/AMDGPU/issue110433.ll
+++ b/llvm/test/Transforms/InferAddressSpaces/AMDGPU/issue110433.ll
@@ -6,7 +6,7 @@ define <8 x i1> @load_vector_of_flat_ptr_from_constant(ptr addrspace(4) %ptr) {
; CHECK-SAME: ptr addrspace(4) [[PTR:%.*]]) {
; CHECK-NEXT: [[LD:%.*]] = load <8 x ptr>, ptr addrspace(4) [[PTR]], align 128
; CHECK-NEXT: [[TMP1:%.*]] = addrspacecast <8 x ptr> [[LD]] to <8 x ptr addrspace(1)>
-; CHECK-NEXT: [[CMP:%.*]] = icmp eq <8 x ptr addrspace(1)> [[TMP1]], <ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1)), ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1)), ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1)), ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1)), ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1)), ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1)), ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1)), ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1))>
+; CHECK-NEXT: [[CMP:%.*]] = icmp eq <8 x ptr addrspace(1)> [[TMP1]], zeroinitializer
; CHECK-NEXT: ret <8 x i1> [[CMP]]
;
%ld = load <8 x ptr>, ptr addrspace(4) %ptr, align 128
diff --git a/llvm/test/Transforms/InferAddressSpaces/AMDGPU/phi-poison.ll b/llvm/test/Transforms/InferAddressSpaces/AMDGPU/phi-poison.ll
index 0ccf7e3df8af9..e5880afb55135 100644
--- a/llvm/test/Transforms/InferAddressSpaces/AMDGPU/phi-poison.ll
+++ b/llvm/test/Transforms/InferAddressSpaces/AMDGPU/phi-poison.ll
@@ -11,8 +11,8 @@ define void @phi_poison(ptr addrspace(1) %arg, <2 x ptr addrspace(1)> %arg1) {
; CHECK: merge:
; CHECK-NEXT: [[I:%.*]] = phi ptr addrspace(1) [ [[ARG:%.*]], [[LEADER]] ], [ poison, [[ENTRY:%.*]] ]
; CHECK-NEXT: [[I2:%.*]] = phi <2 x ptr addrspace(1)> [ [[ARG1:%.*]], [[LEADER]] ], [ poison, [[ENTRY]] ]
-; CHECK-NEXT: [[J:%.*]] = icmp eq ptr addrspace(1) [[I]], addrspacecast (ptr null to ptr addrspace(1))
-; CHECK-NEXT: [[J1:%.*]] = icmp eq <2 x ptr addrspace(1)> [[I2]], <ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1)), ptr addrspace(1) addrspacecast (ptr null to ptr addrspace(1))>
+; CHECK-NEXT: [[J:%.*]] = icmp eq ptr addrspace(1) [[I]], null
+; CHECK-NEXT: [[J1:%.*]] = icmp eq <2 x ptr addrspace(1)> [[I2]], zeroinitializer
; CHECK-NEXT: ret void
;
entry:
diff --git a/llvm/test/Transforms/InstCombine/addrspacecast.ll b/llvm/test/Transforms/InstCombine/addrspacecast.ll
index 8f3270cd60609..33f3439cd366f 100644
--- a/llvm/test/Transforms/InstCombine/addrspacecast.ll
+++ b/llvm/test/Transforms/InstCombine/addrspacecast.ll
@@ -173,7 +173,7 @@ end:
define void @constant_fold_null() #0 {
; CHECK-LABEL: @constant_fold_null(
-; CHECK-NEXT: store i32 7, ptr addrspace(4) addrspacecast (ptr addrspace(3) null to ptr addrspace(4)), align 4
+; CHECK-NEXT: store i32 7, ptr addrspace(4) null, align 4
; CHECK-NEXT: ret void
;
%cast = addrspacecast ptr addrspace(3) null to ptr addrspace(4)
@@ -191,7 +191,7 @@ define ptr addrspace(4) @constant_fold_undef() #0 {
define <4 x ptr addrspace(4)> @constant_fold_null_vector() #0 {
; CHECK-LABEL: @constant_fold_null_vector(
-; CHECK-NEXT: ret <4 x ptr addrspace(4)> <ptr addrspace(4) addrspacecast (ptr addrspace(3) null to ptr addrspace(4)), ptr addrspace(4) addrspacecast (ptr addrspace(3) null to ptr addrspace(4)), ptr addrspace(4) addrspacecast (ptr addrspace(3) null to ptr addrspace(4)), ptr addrspace(4) addrspacecast (ptr addrspace(3) null to ptr addrspace(4))>
+; CHECK-NEXT: ret <4 x ptr addrspace(4)> zeroinitializer
;
%cast = addrspacecast <4 x ptr addrspace(3)> zeroinitializer to <4 x ptr addrspace(4)>
ret <4 x ptr addrspace(4)> %cast
diff --git a/llvm/test/Transforms/InstCombine/assume.ll b/llvm/test/Transforms/InstCombine/assume.ll
index cc87d6542fa12..f5144429c5e4d 100644
--- a/llvm/test/Transforms/InstCombine/assume.ll
+++ b/llvm/test/Transforms/InstCombine/assume.ll
@@ -409,9 +409,10 @@ define i1 @nonnull5(ptr %a) {
; CHECK-LABEL: @nonnull5(
; CHECK-NEXT: [[LOAD:%.*]] = load ptr, ptr [[A:%.*]], align 8
; CHECK-NEXT: tail call void @escape(ptr [[LOAD]])
-; CHECK-NEXT: [[CMP:%.*]] = icmp slt ptr [[LOAD]], null
+; CHECK-NEXT: [[CMP:%.*]] = icmp slt ptr [[LOAD]], zeroinitializer
; CHECK-NEXT: tail call void @llvm.assume(i1 [[CMP]])
-; CHECK-NEXT: ret i1 false
+; CHECK-NEXT: [[RVAL:%.*]] = icmp eq ptr [[LOAD]], null
+; CHECK-NEXT: ret i1 [[RVAL]]
;
%load = load ptr, ptr %a
;; This call may throw!
diff --git a/llvm/test/Transforms/InstCombine/gep-inbounds-null.ll b/llvm/test/Transforms/InstCombine/gep-inbounds-null.ll
index cd7eac6bcba28..5d9918bac2368 100644
--- a/llvm/test/Transforms/InstCombine/gep-inbounds-null.ll
+++ b/llvm/test/Transforms/InstCombine/gep-inbounds-null.ll
@@ -212,8 +212,7 @@ entry:
define i1 @invalid_bitcast_icmp_addrspacecast_as0_null(ptr addrspace(5) %ptr) {
; CHECK-LABEL: @invalid_bitcast_icmp_addrspacecast_as0_null(
; CHECK-NEXT: bb:
-; CHECK-NEXT: [[TMP2:%.*]] = icmp eq ptr addrspace(5) [[PTR:%.*]], addrspacecast (ptr null to ptr addrspace(5))
-; CHECK-NEXT: ret i1 [[TMP2]]
+; CHECK-NEXT: ret i1 false
;
bb:
%tmp1 = getelementptr inbounds i32, ptr addrspace(5) %ptr, i32 1
@@ -224,7 +223,9 @@ bb:
define i1 @invalid_bitcast_icmp_addrspacecast_as0_null_var(ptr addrspace(5) %ptr, i32 %idx) {
; CHECK-LABEL: @invalid_bitcast_icmp_addrspacecast_as0_null_var(
; CHECK-NEXT: bb:
-; CHECK-NEXT: [[TMP2:%.*]] = icmp eq ptr addrspace(5) [[PTR:%.*]], addrspacecast (ptr null to ptr addrspace(5))
+; CHECK-NEXT: [[TMP0:%.*]] = sext i32 [[IDX:%.*]] to i64
+; CHECK-NEXT: [[TMP1:%.*]] = getelementptr inbounds i32, ptr addrspace(5) [[PTR:%.*]], i64 [[TMP0]]
+; CHECK-NEXT: [[TMP2:%.*]] = icmp eq ptr addrspace(5) [[TMP1]], null
; CHECK-NEXT: ret i1 [[TMP2]]
;
bb:
diff --git a/llvm/test/Transforms/InstCombine/or-select-zero-icmp.ll b/llvm/test/Transforms/InstCombine/or-select-zero-icmp.ll
index a3b21ccc63e94..29e4001948d82 100644
--- a/llvm/test/Transforms/InstCombine/or-select-zero-icmp.ll
+++ b/llvm/test/Transforms/InstCombine/or-select-zero-icmp.ll
@@ -134,7 +134,7 @@ define <2 x i32> @vector_type(<2 x i32> %a, <2 x i32> %b) {
define ptr @pointer_type(ptr %p, ptr %q) {
; CHECK-LABEL: @pointer_type(
; CHECK-NEXT: [[A:%.*]] = ptrtoint ptr [[P:%.*]] to i64
-; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr [[P]], null
+; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr [[P]], zeroinitializer
; CHECK-NEXT: [[SEL:%.*]] = select i1 [[CMP]], ptr [[Q:%.*]], ptr null
; CHECK-NEXT: [[SEL_INT:%.*]] = ptrtoint ptr [[SEL]] to i64
; CHECK-NEXT: [[OR:%.*]] = or i64 [[A]], [[SEL_INT]]
diff --git a/llvm/test/Transforms/InstCombine/ptrtoint-nullgep.ll b/llvm/test/Transforms/InstCombine/ptrtoint-nullgep.ll
index bf978801fec5d..18da8d846fef9 100644
--- a/llvm/test/Transforms/InstCombine/ptrtoint-nullgep.ll
+++ b/llvm/test/Transforms/InstCombine/ptrtoint-nullgep.ll
@@ -16,8 +16,14 @@ declare void @use_i64(i64)
declare void @use_ptr(ptr addrspace(1))
define i64 @constant_fold_ptrtoint_gep_zero() {
-; ALL-LABEL: define {{[^@]+}}@constant_fold_ptrtoint_gep_zero() {
-; ALL-NEXT: ret i64 0
+; LLPARSER-LABEL: define {{[^@]+}}@constant_fold_ptrtoint_gep_zero() {
+; LLPARSER-NEXT: ret i64 ptrtoint (ptr addrspace(1) null to i64)
+;
+; INSTSIMPLIFY-LABEL: define {{[^@]+}}@constant_fold_ptrtoint_gep_zero() {
+; INSTSIMPLIFY-NEXT: ret i64 ptrtoint (ptr addrspace(1) null to i64)
+;
+; INSTCOMBINE-LABEL: define {{[^@]+}}@constant_fold_ptrtoint_gep_zero() {
+; INSTCOMBINE-NEXT: ret i64 0
;
ret i64 ptrtoint (ptr addrspace(1) null to i64)
}
@@ -35,8 +41,14 @@ define i64 @constant_fold_ptrtoint_gep_nonzero() {
}
define i64 @constant_fold_ptrtoint_gep_zero_inbounds() {
-; ALL-LABEL: define {{[^@]+}}@constant_fold_ptrtoint_gep_zero_inbounds() {
-; ALL-NEXT: ret i64 0
+; LLPARSER-LABEL: define {{[^@]+}}@constant_fold_ptrtoint_gep_zero_inbounds() {
+; LLPARSER-NEXT: ret i64 ptrtoint (ptr addrspace(1) null to i64)
+;
+; INSTSIMPLIFY-LABEL: define {{[^@]+}}@constant_fold_ptrtoint_gep_zero_inbounds() {
+; INSTSIMPLIFY-NEXT: ret i64 ptrtoint (ptr addrspace(1) null to i64)
+;
+; INSTCOMBINE-LABEL: define {{[^@]+}}@constant_fold_ptrtoint_gep_zero_inbounds() {
+; INSTCOMBINE-NEXT: ret i64 0
;
ret i64 ptrtoint (ptr addrspace(1) null to i64)
}
@@ -639,3 +651,5 @@ define i64 @fold_ptrtoint_nested_nullgep_array_variable_multiple_uses(i64 %x, i6
%ret = ptrtoint ptr addrspace(1) %ptr to i64
ret i64 %ret
}
+;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line:
+; ALL: {{.*}}
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/inttoptr-gep-nonintegral.ll b/llvm/test/Transforms/InstSimplify/ConstProp/inttoptr-gep-nonintegral.ll
index f66825767bd0b..cf44ff8329142 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/inttoptr-gep-nonintegral.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/inttoptr-gep-nonintegral.ll
@@ -15,7 +15,7 @@ define ptr @test_null_base_normal() {
}
define ptr @test_inttoptr_base_normal() {
; CHECK-LABEL: define ptr @test_inttoptr_base_normal() {
-; CHECK-NEXT: ret ptr null
+; CHECK-NEXT: ret ptr zeroinitializer
;
%base = inttoptr i16 -1 to ptr
%gep = getelementptr i8, ptr %base, i8 1
diff --git a/llvm/test/Transforms/LoopStrengthReduce/AMDGPU/lsr-invalid-ptr-extend.ll b/llvm/test/Transforms/LoopStrengthReduce/AMDGPU/lsr-invalid-ptr-extend.ll
index 61c1fd6fbb198..c202413af1679 100644
--- a/llvm/test/Transforms/LoopStrengthReduce/AMDGPU/lsr-invalid-ptr-extend.ll
+++ b/llvm/test/Transforms/LoopStrengthReduce/AMDGPU/lsr-invalid-ptr-extend.ll
@@ -24,8 +24,8 @@ define amdgpu_kernel void @scaledregtest() local_unnamed_addr {
; CHECK-NEXT: [[SCEVGEP6]] = getelementptr i8, ptr addrspace(5) [[LSR_IV5]], i32 8
; CHECK-NEXT: br label [[FOR_BODY_1]]
; CHECK: for.body:
-; CHECK-NEXT: [[LSR_IV12:%.*]] = phi ptr [ [[SCEVGEP13]], [[FOR_BODY]] ], [ null, [[ENTRY:%.*]] ]
-; CHECK-NEXT: [[LSR_IV10:%.*]] = phi ptr addrspace(5) [ [[SCEVGEP11]], [[FOR_BODY]] ], [ null, [[ENTRY]] ]
+; CHECK-NEXT: [[LSR_IV12:%.*]] = phi ptr [ [[SCEVGEP13]], [[FOR_BODY]] ], [ zeroinitializer, [[ENTRY:%.*]] ]
+; CHECK-NEXT: [[LSR_IV10:%.*]] = phi ptr addrspace(5) [ [[SCEVGEP11]], [[FOR_BODY]] ], [ zeroinitializer, [[ENTRY]] ]
; CHECK-NEXT: [[SCEVGEP11]] = getelementptr i8, ptr addrspace(5) [[LSR_IV10]], i32 64
; CHECK-NEXT: [[SCEVGEP13]] = getelementptr i8, ptr [[LSR_IV12]], i64 64
; CHECK-NEXT: br i1 false, label [[LOOPEXIT]], label [[FOR_BODY]]
@@ -58,7 +58,7 @@ for.body:
define protected amdgpu_kernel void @baseregtest(i32 %n, i32 %lda, i1 %arg) local_unnamed_addr {
; CHECK-LABEL: @baseregtest(
; CHECK-NEXT: entry:
-; CHECK-NEXT: br i1 %arg, label [[EXIT:%.*]], label [[IF_END:%.*]]
+; CHECK-NEXT: br i1 [[ARG:%.*]], label [[EXIT:%.*]], label [[IF_END:%.*]]
; CHECK: if.end:
; CHECK-NEXT: [[TMP0:%.*]] = tail call i32 @foo()
; CHECK-NEXT: [[TMP1:%.*]] = shl i32 [[TMP0]], 3
diff --git a/llvm/test/Transforms/LoopStrengthReduce/funclet.ll b/llvm/test/Transforms/LoopStrengthReduce/funclet.ll
index da5721a72a906..ee9e04813085b 100644
--- a/llvm/test/Transforms/LoopStrengthReduce/funclet.ll
+++ b/llvm/test/Transforms/LoopStrengthReduce/funclet.ll
@@ -16,9 +16,9 @@ define void @f() personality ptr @_except_handler3 {
; CHECK-NEXT: br label [[THROW:%.*]]
; CHECK: throw:
; CHECK-NEXT: invoke void @reserve()
-; CHECK-NEXT: to label [[THROW]] unwind label [[PAD:%.*]]
+; CHECK-NEXT: to label [[THROW]] unwind label [[PAD:%.*]]
; CHECK: pad:
-; CHECK-NEXT: [[CS:%.*]] = catchswitch within none [label %unreachable] unwind label [[BLAH2:%.*]]
+; CHECK-NEXT: [[CS:%.*]] = catchswitch within none [label [[UNREACHABLE:%.*]]] unwind label [[BLAH2:%.*]]
; CHECK: unreachable:
; CHECK-NEXT: [[TMP0:%.*]] = catchpad within [[CS]] []
; CHECK-NEXT: unreachable
@@ -75,9 +75,9 @@ define void @g() personality ptr @_except_handler3 {
; CHECK-NEXT: br label [[THROW:%.*]]
; CHECK: throw:
; CHECK-NEXT: invoke void @reserve()
-; CHECK-NEXT: to label [[THROW]] unwind label [[PAD:%.*]]
+; CHECK-NEXT: to label [[THROW]] unwind label [[PAD:%.*]]
; CHECK: pad:
-; CHECK-NEXT: [[CS:%.*]] = catchswitch within none [label [[UNREACHABLE:%.*]], label %blah] unwind to caller
+; CHECK-NEXT: [[CS:%.*]] = catchswitch within none [label [[UNREACHABLE:%.*]], label [[BLAH:%.*]]] unwind to caller
; CHECK: unreachable:
; CHECK-NEXT: [[TMP0:%.*]] = catchpad within [[CS]] []
; CHECK-NEXT: unreachable
@@ -89,7 +89,7 @@ define void @g() personality ptr @_except_handler3 {
; CHECK: leave:
; CHECK-NEXT: ret void
; CHECK: loop_body:
-; CHECK-NEXT: [[LSR_IV:%.*]] = phi i32 [ [[LSR_IV_NEXT:%.*]], [[ITER:%.*]] ], [ 0, [[BLAH:%.*]] ]
+; CHECK-NEXT: [[LSR_IV:%.*]] = phi i32 [ [[LSR_IV_NEXT:%.*]], [[ITER:%.*]] ], [ 0, [[BLAH]] ]
; CHECK-NEXT: [[LSR_IV_NEXT]] = add nuw nsw i32 [[LSR_IV]], -1
; CHECK-NEXT: [[LSR_IV_NEXT1:%.*]] = inttoptr i32 [[LSR_IV_NEXT]] to ptr
; CHECK-NEXT: [[TMP100:%.*]] = icmp eq ptr [[LSR_IV_NEXT1]], null
@@ -139,9 +139,9 @@ define void @h() personality ptr @_except_handler3 {
; CHECK-NEXT: br label [[THROW:%.*]]
; CHECK: throw:
; CHECK-NEXT: invoke void @reserve()
-; CHECK-NEXT: to label [[THROW]] unwind label [[PAD:%.*]]
+; CHECK-NEXT: to label [[THROW]] unwind label [[PAD:%.*]]
; CHECK: pad:
-; CHECK-NEXT: [[CS:%.*]] = catchswitch within none [label [[UNREACHABLE:%.*]], label %blug] unwind to caller
+; CHECK-NEXT: [[CS:%.*]] = catchswitch within none [label [[UNREACHABLE:%.*]], label [[BLUG:%.*]]] unwind to caller
; CHECK: unreachable:
; CHECK-NEXT: [[TMP0:%.*]] = catchpad within [[CS]] []
; CHECK-NEXT: unreachable
@@ -153,7 +153,7 @@ define void @h() personality ptr @_except_handler3 {
; CHECK: leave:
; CHECK-NEXT: ret void
; CHECK: loop_body:
-; CHECK-NEXT: [[LSR_IV:%.*]] = phi i32 [ [[LSR_IV_NEXT:%.*]], [[ITER:%.*]] ], [ 0, [[BLUG:%.*]] ]
+; CHECK-NEXT: [[LSR_IV:%.*]] = phi i32 [ [[LSR_IV_NEXT:%.*]], [[ITER:%.*]] ], [ 0, [[BLUG]] ]
; CHECK-NEXT: [[LSR_IV_NEXT]] = add nuw nsw i32 [[LSR_IV]], -1
; CHECK-NEXT: [[LSR_IV_NEXT1:%.*]] = inttoptr i32 [[LSR_IV_NEXT]] to ptr
; CHECK-NEXT: [[TMP100:%.*]] = icmp eq ptr [[LSR_IV_NEXT1]], null
@@ -203,9 +203,9 @@ define void @i() personality ptr @_except_handler3 {
; CHECK-NEXT: br label [[THROW:%.*]]
; CHECK: throw:
; CHECK-NEXT: invoke void @reserve()
-; CHECK-NEXT: to label [[THROW]] unwind label [[CATCHPAD:%.*]]
+; CHECK-NEXT: to label [[THROW]] unwind label [[CATCHPAD:%.*]]
; CHECK: catchpad:
-; CHECK-NEXT: [[CS:%.*]] = catchswitch within none [label %cp_body] unwind label [[CLEANUPPAD:%.*]]
+; CHECK-NEXT: [[CS:%.*]] = catchswitch within none [label [[CP_BODY:%.*]]] unwind label [[CLEANUPPAD:%.*]]
; CHECK: cp_body:
; CHECK-NEXT: [[TMP0:%.*]] = catchpad within [[CS]] []
; CHECK-NEXT: br label [[LOOP_HEAD:%.*]]
@@ -268,21 +268,21 @@ define void @test1(ptr %b, ptr %c) personality ptr @__CxxFrameHandler3 {
; CHECK: for.cond:
; CHECK-NEXT: [[D_0:%.*]] = phi ptr [ [[B:%.*]], [[ENTRY:%.*]] ], [ [[INCDEC_PTR:%.*]], [[FOR_INC:%.*]] ]
; CHECK-NEXT: invoke void @external(ptr [[D_0]])
-; CHECK-NEXT: to label [[FOR_INC]] unwind label [[CATCH_DISPATCH:%.*]]
+; CHECK-NEXT: to label [[FOR_INC]] unwind label [[CATCH_DISPATCH:%.*]]
; CHECK: for.inc:
; CHECK-NEXT: [[INCDEC_PTR]] = getelementptr inbounds i32, ptr [[D_0]], i32 1
; CHECK-NEXT: br label [[FOR_COND]]
; CHECK: catch.dispatch:
-; CHECK-NEXT: [[CS:%.*]] = catchswitch within none [label %catch] unwind label [[CATCH_DISPATCH_2:%.*]]
+; CHECK-NEXT: [[CS:%.*]] = catchswitch within none [label [[CATCH:%.*]]] unwind label [[CATCH_DISPATCH_2:%.*]]
; CHECK: catch:
; CHECK-NEXT: [[TMP0:%.*]] = catchpad within [[CS]] [ptr null, i32 64, ptr null]
; CHECK-NEXT: catchret from [[TMP0]] to label [[TRY_CONT:%.*]]
; CHECK: try.cont:
; CHECK-NEXT: invoke void @external(ptr [[C:%.*]])
-; CHECK-NEXT: to label [[TRY_CONT_7:%.*]] unwind label [[CATCH_DISPATCH_2]]
+; CHECK-NEXT: to label [[TRY_CONT_7:%.*]] unwind label [[CATCH_DISPATCH_2]]
; CHECK: catch.dispatch.2:
; CHECK-NEXT: [[E_0:%.*]] = phi ptr [ [[C]], [[TRY_CONT]] ], [ [[B]], [[CATCH_DISPATCH]] ]
-; CHECK-NEXT: [[CS2:%.*]] = catchswitch within none [label %catch.4] unwind to caller
+; CHECK-NEXT: [[CS2:%.*]] = catchswitch within none [label [[CATCH_4:%.*]]] unwind to caller
; CHECK: catch.4:
; CHECK-NEXT: [[TMP1:%.*]] = catchpad within [[CS2]] [ptr null, i32 64, ptr null]
; CHECK-NEXT: unreachable
@@ -331,9 +331,9 @@ define i32 @test2() personality ptr @_except_handler3 {
; CHECK: for.body:
; CHECK-NEXT: [[PHI:%.*]] = phi i32 [ [[INC:%.*]], [[FOR_INC:%.*]] ], [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: invoke void @reserve()
-; CHECK-NEXT: to label [[FOR_INC]] unwind label [[CATCH_DISPATCH:%.*]]
+; CHECK-NEXT: to label [[FOR_INC]] unwind label [[CATCH_DISPATCH:%.*]]
; CHECK: catch.dispatch:
-; CHECK-NEXT: [[TMP18:%.*]] = catchswitch within none [label %catch.handler] unwind to caller
+; CHECK-NEXT: [[TMP18:%.*]] = catchswitch within none [label [[CATCH_HANDLER:%.*]]] unwind to caller
; CHECK: catch.handler:
; CHECK-NEXT: [[PHI_LCSSA:%.*]] = phi i32 [ [[PHI]], [[CATCH_DISPATCH]] ]
; CHECK-NEXT: [[TMP19:%.*]] = catchpad within [[TMP18]] [ptr null]
diff --git a/llvm/test/Transforms/LoopStrengthReduce/pr27056.ll b/llvm/test/Transforms/LoopStrengthReduce/pr27056.ll
index 5f082dae7cf7b..a6060a9a019b1 100644
--- a/llvm/test/Transforms/LoopStrengthReduce/pr27056.ll
+++ b/llvm/test/Transforms/LoopStrengthReduce/pr27056.ll
@@ -19,12 +19,12 @@ define void @b_copy_ctor() personality ptr @__CxxFrameHandler3 {
; CHECK-NEXT: [[LSR_IV:%.*]] = phi i64 [ [[LSR_IV_NEXT:%.*]], [[CALL_I_NOEXC:%.*]] ], [ 0, [[ENTRY:%.*]] ]
; CHECK-NEXT: [[LSR_IV2:%.*]] = inttoptr i64 [[LSR_IV]] to ptr
; CHECK-NEXT: invoke void @a_copy_ctor()
-; CHECK-NEXT: to label [[CALL_I_NOEXC]] unwind label [[CATCH_DISPATCH:%.*]]
+; CHECK-NEXT: to label [[CALL_I_NOEXC]] unwind label [[CATCH_DISPATCH:%.*]]
; CHECK: call.i.noexc:
; CHECK-NEXT: [[LSR_IV_NEXT]] = add i64 [[LSR_IV]], -16
; CHECK-NEXT: br label [[FOR_COND]]
; CHECK: catch.dispatch:
-; CHECK-NEXT: [[TMP2:%.*]] = catchswitch within none [label %catch] unwind to caller
+; CHECK-NEXT: [[TMP2:%.*]] = catchswitch within none [label [[CATCH:%.*]]] unwind to caller
; CHECK: catch:
; CHECK-NEXT: [[TMP3:%.*]] = catchpad within [[TMP2]] [ptr null, i32 64, ptr null]
; CHECK-NEXT: [[CMP16:%.*]] = icmp eq ptr [[LSR_IV2]], null
diff --git a/llvm/test/Transforms/LowerGlobalDestructors/non-literal-type.ll b/llvm/test/Transforms/LowerGlobalDestructors/non-literal-type.ll
index c7fb6266cf96a..4e412b5126abd 100644
--- a/llvm/test/Transforms/LowerGlobalDestructors/non-literal-type.ll
+++ b/llvm/test/Transforms/LowerGlobalDestructors/non-literal-type.ll
@@ -10,19 +10,19 @@ declare void @dtor()
@llvm.global_dtors = appending global [1 x %ty] [%ty {i32 65535, ptr @dtor, ptr zeroinitializer }], align 8
;.
-; CHECK: @[[__DSO_HANDLE:[a-zA-Z0-9_$"\\.-]+]] = extern_weak hidden constant i8
-; CHECK: @[[LLVM_GLOBAL_CTORS:[a-zA-Z0-9_$"\\.-]+]] = appending global [2 x %ty] [[[TY:%.*]] { i32 65535, ptr @ctor, ptr null }, [[TY]] { i32 65535, ptr @register_call_dtors, ptr null }]
+; CHECK: @__dso_handle = extern_weak hidden constant i8
+; CHECK: @llvm.global_ctors = appending global [2 x %ty] [%ty { i32 65535, ptr @ctor, ptr zeroinitializer }, %ty { i32 65535, ptr @register_call_dtors., ptr zeroinitializer }]
;.
-; CHECK-LABEL: define private void @call_dtors
+; CHECK-LABEL: define private void @call_dtors.
; CHECK-SAME: (ptr [[TMP0:%.*]]) {
; CHECK-NEXT: body:
; CHECK-NEXT: call void @dtor()
; CHECK-NEXT: ret void
;
;
-; CHECK-LABEL: define private void @register_call_dtors() {
+; CHECK-LABEL: define private void @register_call_dtors.() {
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[CALL:%.*]] = call i32 @__cxa_atexit(ptr @call_dtors, ptr null, ptr @__dso_handle)
+; CHECK-NEXT: [[CALL:%.*]] = call i32 @__cxa_atexit(ptr @call_dtors., ptr null, ptr @__dso_handle)
; CHECK-NEXT: [[TMP0:%.*]] = icmp ne i32 [[CALL]], 0
; CHECK-NEXT: br i1 [[TMP0]], label [[FAIL:%.*]], label [[RETURN:%.*]]
; CHECK: fail:
diff --git a/llvm/test/Transforms/OpenMP/heap-to-shared-missing-declarations.ll b/llvm/test/Transforms/OpenMP/heap-to-shared-missing-declarations.ll
index d81f34f3c4273..0d081ef9e98e8 100644
--- a/llvm/test/Transforms/OpenMP/heap-to-shared-missing-declarations.ll
+++ b/llvm/test/Transforms/OpenMP/heap-to-shared-missing-declarations.ll
@@ -7,7 +7,6 @@ define internal void @outlined0() {
; CHECK-LABEL: define {{[^@]+}}@outlined0
; CHECK-SAME: () #[[ATTR0:[0-9]+]] {
; CHECK-NEXT: bb:
-; CHECK-NEXT: call void @func() #[[ATTR1:[0-9]+]]
; CHECK-NEXT: [[I:%.*]] = call i32 @__kmpc_get_hardware_num_threads_in_block() #[[ATTR0]]
; CHECK-NEXT: ret void
;
@@ -18,14 +17,6 @@ bb:
}
define internal void @func() {
-; CHECK: Function Attrs: nosync nounwind
-; CHECK-LABEL: define {{[^@]+}}@func
-; CHECK-SAME: () #[[ATTR1]] {
-; CHECK-NEXT: bb:
-; CHECK-NEXT: [[I:%.*]] = load ptr, ptr addrspace(5) null, align 4294967296
-; CHECK-NEXT: store i64 0, ptr [[I]], align 8
-; CHECK-NEXT: ret void
-;
bb:
%i = load ptr, ptr addrspacecast (ptr addrspace(5) null to ptr), align 4294967296
store i64 0, ptr %i, align 8
@@ -35,7 +26,7 @@ bb:
define internal void @outlined1() {
; CHECK: Function Attrs: nosync nounwind
; CHECK-LABEL: define {{[^@]+}}@outlined1
-; CHECK-SAME: () #[[ATTR1]] {
+; CHECK-SAME: () #[[ATTR1:[0-9]+]] {
; CHECK-NEXT: bb:
; CHECK-NEXT: br label [[BB2:%.*]]
; CHECK: common.ret:
diff --git a/llvm/test/Transforms/OpenMP/spmdization_kernel_env_dep.ll b/llvm/test/Transforms/OpenMP/spmdization_kernel_env_dep.ll
index d3e8e98b6f510..b605194b8e4c2 100644
--- a/llvm/test/Transforms/OpenMP/spmdization_kernel_env_dep.ll
+++ b/llvm/test/Transforms/OpenMP/spmdization_kernel_env_dep.ll
@@ -12,7 +12,7 @@ target triple = "amdgcn-amd-amdhsa"
;.
; AMDGPU: @IsSPMDMode = internal addrspace(3) global i32 undef
-; AMDGPU: @__omp_offloading_10302_b20a40e_main_l4_kernel_environment = addrspace(1) constant %struct.KernelEnvironmentTy { %struct.ConfigurationEnvironmentTy.8 { i8 0, i8 0, i8 1, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0 }, ptr addrspacecast (ptr addrspace(1) null to ptr), ptr addrspacecast (ptr addrspace(1) null to ptr) }
+; AMDGPU: @__omp_offloading_10302_b20a40e_main_l4_kernel_environment = addrspace(1) constant %struct.KernelEnvironmentTy { %struct.ConfigurationEnvironmentTy.8 { i8 0, i8 0, i8 1, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0 }, ptr null, ptr null }
;.
define i32 @fputs() {
; AMDGPU-LABEL: define {{[^@]+}}@fputs
diff --git a/llvm/test/Transforms/RewriteStatepointsForGC/pr56493.ll b/llvm/test/Transforms/RewriteStatepointsForGC/pr56493.ll
index e5677cb495fb1..2f9ba194b2257 100644
--- a/llvm/test/Transforms/RewriteStatepointsForGC/pr56493.ll
+++ b/llvm/test/Transforms/RewriteStatepointsForGC/pr56493.ll
@@ -5,7 +5,7 @@
define void @test() gc "statepoint-example" personality ptr @zot {
; CHECK-LABEL: @test(
; CHECK-NEXT: bb:
-; CHECK-NEXT: [[STATEPOINT_TOKEN:%.*]] = call token (i64, i32, ptr, i32, i32, ...) @llvm.experimental.gc.statepoint.p0(i64 2882400000, i32 0, ptr elementtype(void (ptr addrspace(1), i64, ptr addrspace(1), i64, i64)) @__llvm_memcpy_element_unordered_atomic_safepoint_4, i32 5, i32 0, ptr addrspace(1) null, i64 undef, ptr addrspace(1) null, i64 ptrtoint (ptr addrspace(1) getelementptr inbounds (i8, ptr addrspace(1) null, i64 16) to i64), i64 undef, i32 0, i32 0) [ "deopt"() ]
+; CHECK-NEXT: [[STATEPOINT_TOKEN:%.*]] = call token (i64, i32, ptr, i32, i32, ...) @llvm.experimental.gc.statepoint.p0(i64 2882400000, i32 0, ptr elementtype(void (ptr addrspace(1), i64, ptr addrspace(1), i64, i64)) @__llvm_memcpy_element_unordered_atomic_safepoint_4, i32 5, i32 0, ptr addrspace(1) null, i64 undef, ptr addrspace(1) null, i64 sub (i64 ptrtoint (ptr addrspace(1) getelementptr inbounds (i8, ptr addrspace(1) null, i64 16) to i64), i64 ptrtoint (ptr addrspace(1) null to i64)), i64 undef, i32 0, i32 0) [ "deopt"() ]
; CHECK-NEXT: ret void
;
bb:
diff --git a/llvm/test/Transforms/SLPVectorizer/X86/stacksave-dependence.ll b/llvm/test/Transforms/SLPVectorizer/X86/stacksave-dependence.ll
index 977942aa06c51..f309dbf86cb41 100644
--- a/llvm/test/Transforms/SLPVectorizer/X86/stacksave-dependence.ll
+++ b/llvm/test/Transforms/SLPVectorizer/X86/stacksave-dependence.ll
@@ -7,7 +7,7 @@ declare i64 @may_inf_loop_ro() nounwind readonly
define void @basecase(ptr %a, ptr %b, ptr %c) {
; CHECK-LABEL: @basecase(
; CHECK-NEXT: [[TMP1:%.*]] = load <2 x ptr>, ptr [[A:%.*]], align 8
-; CHECK-NEXT: store ptr null, ptr [[A]], align 8
+; CHECK-NEXT: store ptr zeroinitializer, ptr [[A]], align 8
; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, <2 x ptr> [[TMP1]], <2 x i32> splat (i32 1)
; CHECK-NEXT: store <2 x ptr> [[TMP2]], ptr [[B:%.*]], align 8
; CHECK-NEXT: ret void
diff --git a/llvm/test/Transforms/SLPVectorizer/X86/vectorize-widest-phis.ll b/llvm/test/Transforms/SLPVectorizer/X86/vectorize-widest-phis.ll
index 6a479174777b0..0e6c241563d12 100644
--- a/llvm/test/Transforms/SLPVectorizer/X86/vectorize-widest-phis.ll
+++ b/llvm/test/Transforms/SLPVectorizer/X86/vectorize-widest-phis.ll
@@ -13,7 +13,7 @@ define void @foo(i1 %arg) {
; CHECK-NEXT: br label [[BB2:%.*]]
; CHECK: bb2:
; CHECK-NEXT: [[TMP2:%.*]] = phi <4 x float> [ [[TMP1]], [[BB1]] ], [ [[TMP14:%.*]], [[BB3:%.*]] ]
-; CHECK-NEXT: [[TMP3:%.*]] = load double, ptr null, align 8
+; CHECK-NEXT: [[TMP3:%.*]] = load double, ptr zeroinitializer, align 8
; CHECK-NEXT: br i1 [[ARG:%.*]], label [[BB3]], label [[BB4:%.*]]
; CHECK: bb4:
; CHECK-NEXT: [[TMP4:%.*]] = fpext <4 x float> [[TMP2]] to <4 x double>
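The SLP test updates above show the spelling split the patch introduces: `null` now denotes the target's actual null pointer, while `zeroinitializer` denotes a pointer whose bit pattern is all zeros. On most targets (and in address space 0) the two coincide, but on AMDGPU address spaces 3 and 5 they differ. A hypothetical module illustrating the distinction under the new semantics:

```llvm
; addrspace(0): null and zeroinitializer happen to be the same bit pattern.
@a = global ptr null             ; the target's nullptr
@b = global ptr zeroinitializer  ; all-zero-bits pointer

; AMDGPU addrspace(5): nullptr is 0xffffffff, so these are distinct constants.
@c = global ptr addrspace(5) null
@d = global ptr addrspace(5) zeroinitializer
```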
diff --git a/llvm/test/Transforms/SROA/basictest.ll b/llvm/test/Transforms/SROA/basictest.ll
index 15803f7b5a25b..152af00853483 100644
--- a/llvm/test/Transforms/SROA/basictest.ll
+++ b/llvm/test/Transforms/SROA/basictest.ll
@@ -529,8 +529,8 @@ entry:
define ptr @test10() {
; CHECK-LABEL: @test10(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr null to i64
-; CHECK-NEXT: ret ptr null
+; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr zeroinitializer to i64
+; CHECK-NEXT: ret ptr zeroinitializer
;
entry:
%a = alloca [8 x i8]
@@ -1332,10 +1332,10 @@ define void @PR15674(ptr %data, ptr %src, i32 %size) {
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP_SROA_0:%.*]] = alloca i32, align 4
; CHECK-NEXT: switch i32 [[SIZE:%.*]], label [[END:%.*]] [
-; CHECK-NEXT: i32 4, label [[BB4:%.*]]
-; CHECK-NEXT: i32 3, label [[BB3:%.*]]
-; CHECK-NEXT: i32 2, label [[BB2:%.*]]
-; CHECK-NEXT: i32 1, label [[BB1:%.*]]
+; CHECK-NEXT: i32 4, label [[BB4:%.*]]
+; CHECK-NEXT: i32 3, label [[BB3:%.*]]
+; CHECK-NEXT: i32 2, label [[BB2:%.*]]
+; CHECK-NEXT: i32 1, label [[BB1:%.*]]
; CHECK-NEXT: ]
; CHECK: bb4:
; CHECK-NEXT: [[SRC_GEP3:%.*]] = getelementptr inbounds i8, ptr [[SRC:%.*]], i32 3
diff --git a/llvm/tools/llvm-stress/llvm-stress.cpp b/llvm/tools/llvm-stress/llvm-stress.cpp
index 2fe5d6b7e5254..1a037ab84fc99 100644
--- a/llvm/tools/llvm-stress/llvm-stress.cpp
+++ b/llvm/tools/llvm-stress/llvm-stress.cpp
@@ -411,7 +411,7 @@ struct ConstModifier: public Modifier {
return PT->push_back(ConstantVector::getAllOnesValue(Ty));
break;
case 1: if (Ty->isIntOrIntVectorTy())
- return PT->push_back(ConstantVector::getNullValue(Ty));
+ return PT->push_back(Constant::getNullValue(Ty));
}
}
diff --git a/llvm/unittests/IR/InstructionsTest.cpp b/llvm/unittests/IR/InstructionsTest.cpp
index f4693bfb1a4d1..4b53f4c71dfae 100644
--- a/llvm/unittests/IR/InstructionsTest.cpp
+++ b/llvm/unittests/IR/InstructionsTest.cpp
@@ -1330,7 +1330,7 @@ TEST(InstructionsTest, ShuffleMaskIsReplicationMask) {
for (int OpVF : seq_inclusive(VF, 2 * VF + 1)) {
LLVMContext Ctx;
Type *OpVFTy = FixedVectorType::get(IntegerType::getInt1Ty(Ctx), OpVF);
- Value *Op = ConstantVector::getNullValue(OpVFTy);
+ Value *Op = Constant::getNullValue(OpVFTy);
ShuffleVectorInst *SVI = new ShuffleVectorInst(Op, Op, ReplicatedMask);
EXPECT_EQ(SVI->isReplicationMask(GuessedReplicationFactor, GuessedVF),
OpVF == VF);