[llvm] [Intrinsics][AArch64] Add intrinsics for masking off aliasing vector lanes (PR #117007)
Sam Tebbs via llvm-commits
llvm-commits at lists.llvm.org
Tue Aug 19 07:41:56 PDT 2025
https://github.com/SamTebbs33 updated https://github.com/llvm/llvm-project/pull/117007
>From f9e5a7c65d243a079cde6ee1bd2a55511b8ca456 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Fri, 15 Nov 2024 10:24:46 +0000
Subject: [PATCH 01/43] [Intrinsics][AArch64] Add intrinsic to mask off
aliasing vector lanes
It can be unsafe to load a vector from an address and write a vector to
an address if those two addresses have overlapping lanes within a
vectorised loop iteration.
This PR adds an intrinsic designed to create a mask with lanes disabled
if they overlap between the two pointer arguments, so that only safe
lanes are loaded, operated on and stored.
Along with the two pointer parameters, the intrinsic also takes an
immediate that represents the size in bytes of the vector element
types, as well as an immediate i1 that is true if there is a
write-after-read hazard or false if there is a read-after-write hazard.
This will be used by #100579 and replaces the existing lowering for
whilewr, since that lowering is no longer needed now that we have the
intrinsic.
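As a quick illustration of the semantics (a sketch only, reusing the
worked example from the LangRef text added below: %ptrA = 20,
%ptrB = 23, a 1-byte element size and a vector factor of 8):

  ; diff = (23 - 20) / 1 = 3, so only lanes 0-2 are safe this iteration
  %mask = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %ptrA, i64 %ptrB, i64 1, i1 1)
  ; %mask = <i1 1, i1 1, i1 1, i1 0, i1 0, i1 0, i1 0, i1 0>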
---
llvm/docs/LangRef.rst | 84 ++++
llvm/include/llvm/CodeGen/TargetLowering.h | 7 +
llvm/include/llvm/IR/Intrinsics.td | 5 +
.../SelectionDAG/SelectionDAGBuilder.cpp | 50 +++
.../Target/AArch64/AArch64ISelLowering.cpp | 85 +++-
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 3 +
.../lib/Target/AArch64/AArch64SVEInstrInfo.td | 11 +-
llvm/lib/Target/AArch64/SVEInstrFormats.td | 10 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 421 ++++++++++++++++++
.../CodeGen/AArch64/alias_mask_scalable.ll | 195 ++++++++
10 files changed, 861 insertions(+), 10 deletions(-)
create mode 100644 llvm/test/CodeGen/AArch64/alias_mask.ll
create mode 100644 llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 1aebcc4439964..d95c0be3ae02e 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24104,6 +24104,90 @@ Examples:
%active.lane.mask = call <4 x i1> @llvm.get.active.lane.mask.v4i1.i64(i64 %elem0, i64 429)
%wide.masked.load = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %3, i32 4, <4 x i1> %active.lane.mask, <4 x i32> poison)
+.. _int_experimental_get_alias_lane_mask:
+
+'``llvm.experimental.get.alias.lane.mask.*``' Intrinsics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Syntax:
+"""""""
+This is an overloaded intrinsic.
+
+::
+
+ declare <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %ptrA, i64 %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %ptrA, i64 %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+
+
+Overview:
+"""""""""
+
+Create a mask representing lanes that do or do not overlap between two pointers
+across one vector loop iteration.
+
+
+Arguments:
+""""""""""
+
+The first two arguments have the same scalar integer type.
+The final two arguments are immediates, and the result is a vector with the i1 element type.
+
+Semantics:
+""""""""""
+
+The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
+VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
+Otherwise, when ``%writeAfterRead`` is true, the
+'``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically
+equivalent to:
+
+::
+
+ %diff = (%ptrB - %ptrA) / %elementSize
+ %m[i] = (icmp ult i, %diff) || (%diff <= 0)
+
+When the return value is not poison and ``%writeAfterRead`` is false, the
+'``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically
+equivalent to:
+
+::
+
+ %diff = abs(%ptrB - %ptrA) / %elementSize
+ %m[i] = (icmp ult i, %diff) || (%diff == 0)
+
+where ``%m`` is a vector (mask) of active/inactive lanes with its elements
+indexed by ``i``, ``%ptrA`` and ``%ptrB`` are the two integer pointer arguments
+to ``llvm.experimental.get.alias.lane.mask.*``, and ``%elementSize`` is the
+first immediate argument. The ``%writeAfterRead`` argument is expected to be
+true if ``%ptrB`` is stored to after ``%ptrA`` is read from.
+The above is equivalent to:
+
+::
+
+ %m = @llvm.experimental.get.alias.lane.mask(%ptrA, %ptrB, %elementSize, %writeAfterRead)
+
+This can, for example, be emitted by the loop vectorizer, in which case
+``%ptrA`` is a pointer that is read from within the loop, and ``%ptrB`` is a
+pointer that is stored to within the loop.
+If the difference between these pointers is less than the vector factor, then
+they overlap (alias) within a loop iteration.
+For example, if ``%ptrA`` is 20 and ``%ptrB`` is 23 with a vector factor of 8,
+then lanes 3, 4, 5, 6 and 7 of the vector loaded from ``%ptrA``
+share addresses with lanes 0, 1, 2, 3 and 4 of the vector stored to at
+``%ptrB``.
+
+
+Examples:
+"""""""""
+
+.. code-block:: llvm
+
+ %alias.lane.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 4, i1 1)
+ %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %alias.lane.mask, <4 x i32> poison)
+ [...]
+ call void @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %alias.lane.mask)
.. _int_experimental_vp_splice:
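For targets where no custom lowering applies, the SelectionDAGBuilder
change below expands the intrinsic generically. In IR terms, the
write-after-read case computes roughly the following (a sketch for a
4-lane mask; it assumes the llvm.stepvector intrinsic for the lane
indices and uses hypothetical value names):

  %diff.bytes = sub i64 %ptrB, %ptrA
  %diff = sdiv i64 %diff.bytes, %elementSize
  ; diff <= 0 means no lane can alias, so enable every lane
  %none = icmp sle i64 %diff, 0
  %none.ins = insertelement <4 x i1> poison, i1 %none, i64 0
  %none.splat = shufflevector <4 x i1> %none.ins, <4 x i1> poison, <4 x i32> zeroinitializer
  ; otherwise enable only the lanes below the lane distance
  %step = call <4 x i64> @llvm.stepvector.v4i64()
  %diff.ins = insertelement <4 x i64> poison, i64 %diff, i64 0
  %diff.splat = shufflevector <4 x i64> %diff.ins, <4 x i64> poison, <4 x i32> zeroinitializer
  %lane.mask = icmp ult <4 x i64> %step, %diff.splat
  %mask = or <4 x i1> %lane.mask, %none.splat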
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index ed7495694cc70..bc24a4a3947b7 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -482,6 +482,13 @@ class LLVM_ABI TargetLoweringBase {
return true;
}
+ /// Return true if the @llvm.experimental.get.alias.lane.mask intrinsic should
+ /// be expanded using generic code in SelectionDAGBuilder.
+ virtual bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT,
+ unsigned EltSize) const {
+ return true;
+ }
+
virtual bool shouldExpandGetVectorLength(EVT CountVT, unsigned VF,
bool IsScalable) const {
return true;
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index e0ee12391b31d..0c5799a23d7a8 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -2420,6 +2420,11 @@ let IntrProperties = [IntrNoMem, ImmArg<ArgIndex<1>>] in {
llvm_i32_ty]>;
}
+def int_experimental_get_alias_lane_mask:
+ DefaultAttrsIntrinsic<[llvm_anyvector_ty],
+ [llvm_anyint_ty, LLVMMatchType<1>, llvm_anyint_ty, llvm_i1_ty],
+ [IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>, ImmArg<ArgIndex<3>>]>;
+
def int_get_active_lane_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
[llvm_anyint_ty, LLVMMatchType<1>],
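For reference, the definition above is overloaded on the result vector
type and on the two integer operand types, so a concrete instantiation
mangles as, for example (a sketch; the last two operands are the
immediates constrained by ArgIndex<2> and ArgIndex<3>):

  declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64.i64(i64, i64, i64 immarg, i1 immarg)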
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 2eaab02130699..7b5206b891aff 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8314,6 +8314,56 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
visitVectorExtractLastActive(I, Intrinsic);
return;
}
+ case Intrinsic::experimental_get_alias_lane_mask: {
+ SDValue SourceValue = getValue(I.getOperand(0));
+ SDValue SinkValue = getValue(I.getOperand(1));
+ SDValue EltSize = getValue(I.getOperand(2));
+ bool IsWriteAfterRead =
+ cast<ConstantSDNode>(getValue(I.getOperand(3)))->getZExtValue() != 0;
+ auto IntrinsicVT = EVT::getEVT(I.getType());
+ auto PtrVT = SourceValue->getValueType(0);
+
+ if (!TLI.shouldExpandGetAliasLaneMask(
+ IntrinsicVT, PtrVT,
+ cast<ConstantSDNode>(EltSize)->getSExtValue())) {
+ visitTargetIntrinsic(I, Intrinsic);
+ return;
+ }
+
+ SDValue Diff = DAG.getNode(ISD::SUB, sdl, PtrVT, SinkValue, SourceValue);
+ if (!IsWriteAfterRead)
+ Diff = DAG.getNode(ISD::ABS, sdl, PtrVT, Diff);
+
+ Diff = DAG.getNode(ISD::SDIV, sdl, PtrVT, Diff, EltSize);
+ SDValue Zero = DAG.getTargetConstant(0, sdl, PtrVT);
+
+ // If the difference is positive then some elements may alias
+ auto CmpVT =
+ TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), PtrVT);
+ SDValue Cmp = DAG.getSetCC(sdl, CmpVT, Diff, Zero,
+ IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
+
+ // Splat the compare result then OR it with a lane mask
+ SDValue Splat = DAG.getSplat(IntrinsicVT, sdl, Cmp);
+
+ SDValue DiffMask;
+ // Don't emit an active lane mask if the target doesn't support it
+ if (TLI.shouldExpandGetActiveLaneMask(IntrinsicVT, PtrVT)) {
+ EVT VecTy = EVT::getVectorVT(*DAG.getContext(), PtrVT,
+ IntrinsicVT.getVectorElementCount());
+ SDValue DiffSplat = DAG.getSplat(VecTy, sdl, Diff);
+ SDValue VectorStep = DAG.getStepVector(sdl, VecTy);
+ DiffMask = DAG.getSetCC(sdl, IntrinsicVT, VectorStep, DiffSplat,
+ ISD::CondCode::SETULT);
+ } else {
+ DiffMask = DAG.getNode(
+ ISD::INTRINSIC_WO_CHAIN, sdl, IntrinsicVT,
+ DAG.getTargetConstant(Intrinsic::get_active_lane_mask, sdl, MVT::i64),
+ Zero, Diff);
+ }
+ SDValue Or = DAG.getNode(ISD::OR, sdl, IntrinsicVT, DiffMask, Splat);
+ setValue(&I, Or);
+ }
}
}
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 2072e48914ae6..21d6bf1921a3d 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -2160,6 +2160,25 @@ bool AArch64TargetLowering::shouldExpandGetActiveLaneMask(EVT ResVT,
return false;
}
+bool AArch64TargetLowering::shouldExpandGetAliasLaneMask(
+ EVT VT, EVT PtrVT, unsigned EltSize) const {
+ if (!Subtarget->hasSVE2())
+ return true;
+
+ if (PtrVT != MVT::i64)
+ return true;
+
+ if (VT == MVT::v2i1 || VT == MVT::nxv2i1)
+ return EltSize != 8;
+ if (VT == MVT::v4i1 || VT == MVT::nxv4i1)
+ return EltSize != 4;
+ if (VT == MVT::v8i1 || VT == MVT::nxv8i1)
+ return EltSize != 2;
+ if (VT == MVT::v16i1 || VT == MVT::nxv16i1)
+ return EltSize != 1;
+ return true;
+}
+
bool AArch64TargetLowering::shouldExpandPartialReductionIntrinsic(
const IntrinsicInst *I) const {
assert(I->getIntrinsicID() ==
@@ -5987,6 +6006,18 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
EVT PtrVT = getPointerTy(DAG.getDataLayout());
return DAG.getNode(AArch64ISD::THREAD_POINTER, DL, PtrVT);
}
+ case Intrinsic::aarch64_sve_whilewr_b:
+ case Intrinsic::aarch64_sve_whilewr_h:
+ case Intrinsic::aarch64_sve_whilewr_s:
+ case Intrinsic::aarch64_sve_whilewr_d:
+ return DAG.getNode(AArch64ISD::WHILEWR, dl, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2));
+ case Intrinsic::aarch64_sve_whilerw_b:
+ case Intrinsic::aarch64_sve_whilerw_h:
+ case Intrinsic::aarch64_sve_whilerw_s:
+ case Intrinsic::aarch64_sve_whilerw_d:
+ return DAG.getNode(AArch64ISD::WHILERW, dl, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2));
case Intrinsic::aarch64_neon_abs: {
EVT Ty = Op.getValueType();
if (Ty == MVT::i64) {
@@ -6461,6 +6492,52 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
return DAG.getNode(AArch64ISD::USDOT, DL, Op.getValueType(),
Op.getOperand(1), Op.getOperand(2), Op.getOperand(3));
}
+ case Intrinsic::experimental_get_alias_lane_mask: {
+ unsigned IntrinsicID = 0;
+ uint64_t EltSize = Op.getOperand(3)->getAsZExtVal();
+ bool IsWriteAfterRead = Op.getOperand(4)->getAsZExtVal() == 1;
+ switch (EltSize) {
+ case 1:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
+ : Intrinsic::aarch64_sve_whilerw_b;
+ break;
+ case 2:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
+ : Intrinsic::aarch64_sve_whilerw_h;
+ break;
+ case 4:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
+ : Intrinsic::aarch64_sve_whilerw_s;
+ break;
+ case 8:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
+ : Intrinsic::aarch64_sve_whilerw_d;
+ break;
+ default:
+ llvm_unreachable("Unexpected element size for get.alias.lane.mask");
+ break;
+ }
+ SDValue ID = DAG.getTargetConstant(IntrinsicID, dl, MVT::i64);
+
+ EVT VT = Op.getValueType();
+ if (VT.isScalableVector())
+ return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, dl, VT, ID, Op.getOperand(1),
+ Op.getOperand(2));
+
+ // We can use the SVE whilewr/whilerw instruction to lower this
+ // intrinsic by creating the appropriate sequence of scalable vector
+ // operations and then extracting a fixed-width subvector from the scalable
+ // vector.
+
+ EVT ContainerVT = getContainerForFixedLengthVector(DAG, VT);
+ EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
+
+ SDValue Mask = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, dl, WhileVT, ID,
+ Op.getOperand(1), Op.getOperand(2));
+ SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, dl, ContainerVT, Mask);
+ return DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, VT, MaskAsInt,
+ DAG.getVectorIdxConstant(0, dl));
+ }
case Intrinsic::aarch64_neon_saddlv:
case Intrinsic::aarch64_neon_uaddlv: {
EVT OpVT = Op.getOperand(1).getValueType();
@@ -19961,7 +20038,10 @@ static bool isPredicateCCSettingOp(SDValue N) {
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilele ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilelo ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilels ||
- N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilelt)))
+ N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilelt ||
+ // get_alias_lane_mask is lowered to a whilewr/rw instruction.
+ N.getConstantOperandVal(0) ==
+ Intrinsic::experimental_get_alias_lane_mask)))
return true;
return false;
@@ -28232,7 +28312,8 @@ void AArch64TargetLowering::ReplaceNodeResults(
DAG.getNode(ISD::TRUNCATE, DL, MVT::i1, RuntimePStateSM));
return;
}
- case Intrinsic::experimental_vector_match: {
+ case Intrinsic::experimental_vector_match:
+ case Intrinsic::experimental_get_alias_lane_mask: {
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
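With the hooks above in place, a scalable call with a matching element
size selects directly to the SVE2 pointer-conflict instruction instead
of going through the generic expansion, for example (a sketch mirroring
the tests added below):

  %m = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
  ; with +sve2 this lowers to: whilewr p0.b, x0, x1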
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index 78d6a507b80d3..a1c40647555a0 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -513,6 +513,9 @@ class AArch64TargetLowering : public TargetLowering {
bool shouldExpandGetActiveLaneMask(EVT VT, EVT OpVT) const override;
+ bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT,
+ unsigned EltSize) const override;
+
bool
shouldExpandPartialReductionIntrinsic(const IntrinsicInst *I) const override;
diff --git a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
index 9775238027650..c33f298617ba8 100644
--- a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
@@ -167,6 +167,11 @@ def AArch64st1q_scatter : SDNode<"AArch64ISD::SST1Q_PRED", SDT_AArch64_SCATTER_V
// AArch64 SVE/SVE2 - the remaining node definitions
//
+// Alias masks
+def SDT_AArch64Mask : SDTypeProfile<1, 2, [SDTCisVec<0>, SDTCisInt<1>, SDTCisSameAs<2, 1>, SDTCVecEltisVT<0,i1>]>;
+def AArch64whilewr : SDNode<"AArch64ISD::WHILEWR", SDT_AArch64Mask>;
+def AArch64whilerw : SDNode<"AArch64ISD::WHILERW", SDT_AArch64Mask>;
+
// SVE CNT/INC/RDVL
def sve_rdvl_imm : ComplexPattern<i64, 1, "SelectRDVLImm<-32, 31, 16>">;
def sve_cnth_imm : ComplexPattern<i64, 1, "SelectRDVLImm<1, 16, 8>">;
@@ -4125,9 +4130,9 @@ let Predicates = [HasSVE2_or_SME] in {
defm WHILEHI_PXX : sve_int_while8_rr<0b101, "whilehi", int_aarch64_sve_whilehi, get_active_lane_mask>;
// SVE2 pointer conflict compare
- defm WHILEWR_PXX : sve2_int_while_rr<0b0, "whilewr", "int_aarch64_sve_whilewr">;
- defm WHILERW_PXX : sve2_int_while_rr<0b1, "whilerw", "int_aarch64_sve_whilerw">;
-} // End HasSVE2_or_SME
+ defm WHILEWR_PXX : sve2_int_while_rr<0b0, "whilewr", AArch64whilewr>;
+ defm WHILERW_PXX : sve2_int_while_rr<0b1, "whilerw", AArch64whilerw>;
+} // End HasSVE2_or_SME
let Predicates = [HasSVEAES, HasNonStreamingSVE_or_SSVE_AES] in {
// SVE2 crypto destructive binary operations
diff --git a/llvm/lib/Target/AArch64/SVEInstrFormats.td b/llvm/lib/Target/AArch64/SVEInstrFormats.td
index a3a7d0f74e1bc..3c671aed2185d 100644
--- a/llvm/lib/Target/AArch64/SVEInstrFormats.td
+++ b/llvm/lib/Target/AArch64/SVEInstrFormats.td
@@ -5946,16 +5946,16 @@ class sve2_int_while_rr<bits<2> sz8_64, bits<1> rw, string asm,
let isWhile = 1;
}
-multiclass sve2_int_while_rr<bits<1> rw, string asm, string op> {
+multiclass sve2_int_while_rr<bits<1> rw, string asm, SDPatternOperator op> {
def _B : sve2_int_while_rr<0b00, rw, asm, PPR8>;
def _H : sve2_int_while_rr<0b01, rw, asm, PPR16>;
def _S : sve2_int_while_rr<0b10, rw, asm, PPR32>;
def _D : sve2_int_while_rr<0b11, rw, asm, PPR64>;
- def : SVE_2_Op_Pat<nxv16i1, !cast<SDPatternOperator>(op # _b), i64, i64, !cast<Instruction>(NAME # _B)>;
- def : SVE_2_Op_Pat<nxv8i1, !cast<SDPatternOperator>(op # _h), i64, i64, !cast<Instruction>(NAME # _H)>;
- def : SVE_2_Op_Pat<nxv4i1, !cast<SDPatternOperator>(op # _s), i64, i64, !cast<Instruction>(NAME # _S)>;
- def : SVE_2_Op_Pat<nxv2i1, !cast<SDPatternOperator>(op # _d), i64, i64, !cast<Instruction>(NAME # _D)>;
+ def : SVE_2_Op_Pat<nxv16i1, op, i64, i64, !cast<Instruction>(NAME # _B)>;
+ def : SVE_2_Op_Pat<nxv8i1, op, i64, i64, !cast<Instruction>(NAME # _H)>;
+ def : SVE_2_Op_Pat<nxv4i1, op, i64, i64, !cast<Instruction>(NAME # _S)>;
+ def : SVE_2_Op_Pat<nxv2i1, op, i64, i64, !cast<Instruction>(NAME # _D)>;
}
//===----------------------------------------------------------------------===//
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
new file mode 100644
index 0000000000000..84a22822f1702
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -0,0 +1,421 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE
+; RUN: llc -mtriple=aarch64 %s -o - | FileCheck %s --check-prefix=CHECK-NOSVE
+
+define <16 x i1> @whilewr_8(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $q0 killed $q0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_8:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_1
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI0_0]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_2
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI0_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x8, :lo12:.LCPI0_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_4
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_3
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI0_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_5
+; CHECK-NOSVE-NEXT: dup v2.2d, x9
+; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI0_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_6
+; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI0_5]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI0_6]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI0_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ ret <16 x i1> %0
+}
+
+define <8 x i1> @whilewr_16(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_16:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI1_1
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI1_2
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI1_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI1_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI1_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI1_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ ret <8 x i1> %0
+}
+
+define <4 x i1> @whilewr_32(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_32:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI2_0
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI2_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI2_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI2_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ ret <4 x i1> %0
+}
+
+define <2 x i1> @whilewr_64(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_64:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI3_0
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI3_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: dup v1.2s, w8
+; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ ret <2 x i1> %0
+}
+
+define <16 x i1> @whilerw_8(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $q0 killed $q0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_8:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_1
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI4_0]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_2
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI4_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_3
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI4_1]
+; CHECK-NOSVE-NEXT: ldr q4, [x8, :lo12:.LCPI4_3]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_4
+; CHECK-NOSVE-NEXT: dup v3.2d, x9
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI4_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_5
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_6
+; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI4_5]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI4_6]
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI4_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v3.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v3.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v2.4s
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v3.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ ret <16 x i1> %0
+}
+
+define <8 x i1> @whilerw_16(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_16:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI5_1
+; CHECK-NOSVE-NEXT: cneg x8, x8, mi
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI5_2
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI5_0]
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI5_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI5_2]
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI5_3]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 0)
+ ret <8 x i1> %0
+}
+
+define <4 x i1> @whilerw_32(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.h, x0, x1
+; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_32:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI6_0
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI6_0]
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI6_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI6_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ ret <4 x i1> %0
+}
+
+define <2 x i1> @whilerw_64(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.s, x0, x1
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_64:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI7_0
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI7_0]
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: dup v1.2s, w8
+; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ ret <2 x i1> %0
+}
+
+define <16 x i1> @not_whilewr_wrong_eltsize(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: not_whilewr_wrong_eltsize:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
+; CHECK-SVE-NEXT: dup v0.16b, w9
+; CHECK-SVE-NEXT: mov z1.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: orr v0.16b, v1.16b, v0.16b
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: not_whilewr_wrong_eltsize:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_1
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI8_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_2
+; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI8_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_4
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI8_1]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_3
+; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI8_4]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_6
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI8_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_5
+; CHECK-NOSVE-NEXT: dup v4.2d, x8
+; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI8_6]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_7
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI8_5]
+; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI8_7]
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ ret <16 x i1> %0
+}
+
+define <2 x i1> @not_whilerw_ptr32(i32 %a, i32 %b) {
+; CHECK-SVE-LABEL: not_whilerw_ptr32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs w8, w1, w0
+; CHECK-SVE-NEXT: cneg w8, w8, mi
+; CHECK-SVE-NEXT: add w9, w8, #7
+; CHECK-SVE-NEXT: cmp w8, #0
+; CHECK-SVE-NEXT: csel w8, w9, w8, lt
+; CHECK-SVE-NEXT: asr w8, w8, #3
+; CHECK-SVE-NEXT: cmp w8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p0.s, #0, w8
+; CHECK-SVE-NEXT: dup v0.2s, w9
+; CHECK-SVE-NEXT: mov z1.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: orr v0.8b, v1.8b, v0.8b
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: not_whilerw_ptr32:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs w9, w1, w0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI9_0
+; CHECK-NOSVE-NEXT: cneg w9, w9, mi
+; CHECK-NOSVE-NEXT: ldr d1, [x8, :lo12:.LCPI9_0]
+; CHECK-NOSVE-NEXT: add w10, w9, #7
+; CHECK-NOSVE-NEXT: cmp w9, #0
+; CHECK-NOSVE-NEXT: csel w9, w10, w9, lt
+; CHECK-NOSVE-NEXT: asr w9, w9, #3
+; CHECK-NOSVE-NEXT: dup v0.2s, w9
+; CHECK-NOSVE-NEXT: cmp w9, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: dup v2.2s, w8
+; CHECK-NOSVE-NEXT: cmhi v0.2s, v0.2s, v1.2s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v2.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i32.i32(i32 %a, i32 %b, i32 8, i1 0)
+ ret <2 x i1> %0
+}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
new file mode 100644
index 0000000000000..be5ec8b2a82bf
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -0,0 +1,195 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE2
+; RUN: llc -mtriple=aarch64 -mattr=+sve %s -o - | FileCheck %s --check-prefix=CHECK-SVE
+
+define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilewr_8:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ ret <vscale x 16 x i1> %0
+}
+
+define <vscale x 8 x i1> @whilewr_16(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilewr_16:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ ret <vscale x 8 x i1> %0
+}
+
+define <vscale x 4 x i1> @whilewr_32(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilewr_32:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ ret <vscale x 4 x i1> %0
+}
+
+define <vscale x 2 x i1> @whilewr_64(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilewr_64:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ ret <vscale x 2 x i1> %0
+}
+
+define <vscale x 16 x i1> @whilerw_8(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilerw_8:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.b, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ ret <vscale x 16 x i1> %0
+}
+
+define <vscale x 8 x i1> @whilerw_16(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilerw_16:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.h, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 0)
+ ret <vscale x 8 x i1> %0
+}
+
+define <vscale x 4 x i1> @whilerw_32(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilerw_32:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.s, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ ret <vscale x 4 x i1> %0
+}
+
+define <vscale x 2 x i1> @whilerw_64(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilerw_64:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.d, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ ret <vscale x 2 x i1> %0
+}
>From 071728fae6fd8fb5852d347ffd459bbdbf8f57b4 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Fri, 10 Jan 2025 11:37:37 +0000
Subject: [PATCH 02/43] Rework lowering location
---
llvm/include/llvm/CodeGen/ISDOpcodes.h | 5 +
.../SelectionDAG/LegalizeIntegerTypes.cpp | 22 ++
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h | 2 +
.../SelectionDAG/LegalizeVectorOps.cpp | 41 ++++
.../SelectionDAG/SelectionDAGBuilder.cpp | 53 +----
.../SelectionDAG/SelectionDAGDumper.cpp | 2 +
llvm/lib/CodeGen/TargetLoweringBase.cpp | 3 +
.../Target/AArch64/AArch64ISelLowering.cpp | 96 +++++++-
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 1 +
llvm/test/CodeGen/AArch64/alias_mask.ll | 120 ++--------
.../CodeGen/AArch64/alias_mask_scalable.ll | 210 ++++++++++++++----
11 files changed, 358 insertions(+), 197 deletions(-)
diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index 465e4a0a9d0d8..3acedb67fc7c0 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -1558,6 +1558,11 @@ enum NodeType {
// bits conform to getBooleanContents similar to the SETCC operator.
GET_ACTIVE_LANE_MASK,
+ // The `llvm.experimental.get.alias.lane.mask.*` intrinsics
+ // Operands: Load pointer, Store pointer, Element size, Write after read
+ // Output: Mask
+ EXPERIMENTAL_ALIAS_LANE_MASK,
+
// llvm.clear_cache intrinsic
// Operands: Input Chain, Start Addres, End Address
// Outputs: Output Chain
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index a5bd97ace169e..68e8db38fa5ec 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -56,6 +56,9 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
N->dump(&DAG); dbgs() << "\n";
#endif
report_fatal_error("Do not know how to promote this operator!");
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ Res = PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(N);
+ break;
case ISD::MERGE_VALUES:Res = PromoteIntRes_MERGE_VALUES(N, ResNo); break;
case ISD::AssertSext: Res = PromoteIntRes_AssertSext(N); break;
case ISD::AssertZext: Res = PromoteIntRes_AssertZext(N); break;
@@ -374,6 +377,14 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(SDNode *N,
return GetPromotedInteger(Op);
}
+SDValue
+DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
+ EVT VT = N->getValueType(0);
+ EVT NewVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
+ return DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, SDLoc(N), NewVT,
+ N->ops());
+}
+
SDValue DAGTypeLegalizer::PromoteIntRes_AssertSext(SDNode *N) {
// Sign-extend the new bits, and continue the assertion.
SDValue Op = SExtPromotedInteger(N->getOperand(0));
@@ -2104,6 +2115,9 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
case ISD::PARTIAL_REDUCE_SUMLA:
Res = PromoteIntOp_PARTIAL_REDUCE_MLA(N);
break;
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ Res = DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(N, OpNo);
+ break;
}
// If the result is null, the sub-method took care of registering results etc.
@@ -2937,6 +2951,14 @@ SDValue DAGTypeLegalizer::PromoteIntOp_PARTIAL_REDUCE_MLA(SDNode *N) {
return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
}
+SDValue
+DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N,
+ unsigned OpNo) {
+ SmallVector<SDValue, 4> NewOps(N->ops());
+ NewOps[OpNo] = GetPromotedInteger(N->getOperand(OpNo));
+ return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
+}
+
//===----------------------------------------------------------------------===//
// Integer Result Expansion
//===----------------------------------------------------------------------===//
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index 33fa3012618b3..c089da706b1b5 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -382,6 +382,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntRes_VECTOR_FIND_LAST_ACTIVE(SDNode *N);
SDValue PromoteIntRes_GET_ACTIVE_LANE_MASK(SDNode *N);
SDValue PromoteIntRes_PARTIAL_REDUCE_MLA(SDNode *N);
+ SDValue PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N);
// Integer Operand Promotion.
bool PromoteIntegerOperand(SDNode *N, unsigned OpNo);
@@ -436,6 +437,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(SDNode *N, unsigned OpNo);
SDValue PromoteIntOp_GET_ACTIVE_LANE_MASK(SDNode *N);
SDValue PromoteIntOp_PARTIAL_REDUCE_MLA(SDNode *N);
+ SDValue PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N, unsigned OpNo);
void SExtOrZExtPromotedOperands(SDValue &LHS, SDValue &RHS);
void PromoteSetCCOperands(SDValue &LHS,SDValue &RHS, ISD::CondCode Code);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index d2ecc13331e02..c9e5c7f8bbdb0 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -138,6 +138,7 @@ class VectorLegalizer {
SDValue ExpandVP_FNEG(SDNode *Node);
SDValue ExpandVP_FABS(SDNode *Node);
SDValue ExpandVP_FCOPYSIGN(SDNode *Node);
+ SDValue ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N);
SDValue ExpandSELECT(SDNode *Node);
std::pair<SDValue, SDValue> ExpandLoad(SDNode *N);
SDValue ExpandStore(SDNode *N);
@@ -475,6 +476,7 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
case ISD::VECTOR_COMPRESS:
case ISD::SCMP:
case ISD::UCMP:
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
break;
case ISD::SMULFIX:
@@ -1291,6 +1293,9 @@ void VectorLegalizer::Expand(SDNode *Node, SmallVectorImpl<SDValue> &Results) {
case ISD::UCMP:
Results.push_back(TLI.expandCMP(Node, DAG));
return;
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ Results.push_back(ExpandEXPERIMENTAL_ALIAS_LANE_MASK(Node));
+ return;
case ISD::FADD:
case ISD::FMUL:
@@ -1796,6 +1801,42 @@ SDValue VectorLegalizer::ExpandVP_FCOPYSIGN(SDNode *Node) {
return DAG.getNode(ISD::BITCAST, DL, VT, CopiedSign);
}
+SDValue VectorLegalizer::ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
+ SDLoc DL(N);
+ SDValue SourceValue = N->getOperand(0);
+ SDValue SinkValue = N->getOperand(1);
+ SDValue EltSize = N->getOperand(2);
+
+ bool IsWriteAfterRead =
+ cast<ConstantSDNode>(N->getOperand(3))->getZExtValue() != 0;
+ auto VT = N->getValueType(0);
+ auto PtrVT = SourceValue->getValueType(0);
+
+ SDValue Diff = DAG.getNode(ISD::SUB, DL, PtrVT, SinkValue, SourceValue);
+ if (!IsWriteAfterRead)
+ Diff = DAG.getNode(ISD::ABS, DL, PtrVT, Diff);
+
+ Diff = DAG.getNode(ISD::SDIV, DL, PtrVT, Diff, EltSize);
+ SDValue Zero = DAG.getTargetConstant(0, DL, PtrVT);
+
+ // If the difference is positive then some elements may alias
+ auto CmpVT = TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
+ Diff.getValueType());
+ SDValue Cmp = DAG.getSetCC(DL, CmpVT, Diff, Zero,
+ IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
+
+ EVT SplatTY =
+ EVT::getVectorVT(*DAG.getContext(), PtrVT, VT.getVectorElementCount());
+ SDValue DiffSplat = DAG.getSplat(SplatTY, DL, Diff);
+ SDValue VectorStep = DAG.getStepVector(DL, SplatTY);
+ SDValue DiffMask =
+ DAG.getSetCC(DL, VT, VectorStep, DiffSplat, ISD::CondCode::SETULT);
+
+ // Splat the compare result then OR it with a lane mask
+ SDValue Splat = DAG.getSplat(VT, DL, Cmp);
+ return DAG.getNode(ISD::OR, DL, VT, DiffMask, Splat);
+}
+
void VectorLegalizer::ExpandFP_TO_UINT(SDNode *Node,
SmallVectorImpl<SDValue> &Results) {
// Attempt to expand using TargetLowering.
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 7b5206b891aff..031e2b8711c04 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8315,54 +8315,13 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
return;
}
case Intrinsic::experimental_get_alias_lane_mask: {
- SDValue SourceValue = getValue(I.getOperand(0));
- SDValue SinkValue = getValue(I.getOperand(1));
- SDValue EltSize = getValue(I.getOperand(2));
- bool IsWriteAfterRead =
- cast<ConstantSDNode>(getValue(I.getOperand(3)))->getZExtValue() != 0;
auto IntrinsicVT = EVT::getEVT(I.getType());
- auto PtrVT = SourceValue->getValueType(0);
-
- if (!TLI.shouldExpandGetAliasLaneMask(
- IntrinsicVT, PtrVT,
- cast<ConstantSDNode>(EltSize)->getSExtValue())) {
- visitTargetIntrinsic(I, Intrinsic);
- return;
- }
-
- SDValue Diff = DAG.getNode(ISD::SUB, sdl, PtrVT, SinkValue, SourceValue);
- if (!IsWriteAfterRead)
- Diff = DAG.getNode(ISD::ABS, sdl, PtrVT, Diff);
-
- Diff = DAG.getNode(ISD::SDIV, sdl, PtrVT, Diff, EltSize);
- SDValue Zero = DAG.getTargetConstant(0, sdl, PtrVT);
-
- // If the difference is positive then some elements may alias
- auto CmpVT =
- TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), PtrVT);
- SDValue Cmp = DAG.getSetCC(sdl, CmpVT, Diff, Zero,
- IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
-
- // Splat the compare result then OR it with a lane mask
- SDValue Splat = DAG.getSplat(IntrinsicVT, sdl, Cmp);
-
- SDValue DiffMask;
- // Don't emit an active lane mask if the target doesn't support it
- if (TLI.shouldExpandGetActiveLaneMask(IntrinsicVT, PtrVT)) {
- EVT VecTy = EVT::getVectorVT(*DAG.getContext(), PtrVT,
- IntrinsicVT.getVectorElementCount());
- SDValue DiffSplat = DAG.getSplat(VecTy, sdl, Diff);
- SDValue VectorStep = DAG.getStepVector(sdl, VecTy);
- DiffMask = DAG.getSetCC(sdl, IntrinsicVT, VectorStep, DiffSplat,
- ISD::CondCode::SETULT);
- } else {
- DiffMask = DAG.getNode(
- ISD::INTRINSIC_WO_CHAIN, sdl, IntrinsicVT,
- DAG.getTargetConstant(Intrinsic::get_active_lane_mask, sdl, MVT::i64),
- Zero, Diff);
- }
- SDValue Or = DAG.getNode(ISD::OR, sdl, IntrinsicVT, DiffMask, Splat);
- setValue(&I, Or);
+ SmallVector<SDValue, 4> Ops;
+ for (auto &Op : I.operands())
+ Ops.push_back(getValue(Op));
+ SDValue Mask =
+ DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, sdl, IntrinsicVT, Ops);
+ setValue(&I, Mask);
}
}
}
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index 900da7645504f..41aa8c9ebfa8f 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -587,6 +587,8 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
return "partial_reduce_smla";
case ISD::PARTIAL_REDUCE_SUMLA:
return "partial_reduce_sumla";
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ return "alias_mask";
// Vector Predication
#define BEGIN_REGISTER_VP_SDNODE(SDID, LEGALARG, NAME, ...) \
diff --git a/llvm/lib/CodeGen/TargetLoweringBase.cpp b/llvm/lib/CodeGen/TargetLoweringBase.cpp
index 350948a92a3ae..c9e07a9dbaada 100644
--- a/llvm/lib/CodeGen/TargetLoweringBase.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringBase.cpp
@@ -900,6 +900,9 @@ void TargetLoweringBase::initActions() {
// Masked vector extracts default to expand.
setOperationAction(ISD::VECTOR_FIND_LAST_ACTIVE, VT, Expand);
+ // Aliasing lane masks default to expand
+ setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Expand);
+
// FP environment operations default to expand.
setOperationAction(ISD::GET_FPENV, VT, Expand);
setOperationAction(ISD::SET_FPENV, VT, Expand);
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 21d6bf1921a3d..72fd45a3198f3 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1915,6 +1915,15 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
}
}
+ // Handle the non-aliasing lanes mask
+ if (Subtarget->hasSVE2() ||
+ (Subtarget->hasSME() && Subtarget->isStreaming())) {
+ for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8, MVT::nxv2i1,
+ MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
+ setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Custom);
+ }
+ }
+
// Handle operations that are only available in non-streaming SVE mode.
if (Subtarget->isSVEAvailable()) {
for (auto VT : {MVT::nxv16i8, MVT::nxv8i16, MVT::nxv4i32, MVT::nxv2i64,
@@ -5248,6 +5257,65 @@ SDValue AArch64TargetLowering::LowerFSINCOS(SDValue Op,
static MVT getSVEContainerType(EVT ContentTy);
+SDValue AArch64TargetLowering::LowerALIAS_LANE_MASK(SDValue Op,
+ SelectionDAG &DAG) const {
+ SDLoc DL(Op);
+ unsigned IntrinsicID = 0;
+ uint64_t EltSize = Op.getOperand(2)->getAsZExtVal();
+ bool IsWriteAfterRead = Op.getOperand(3)->getAsZExtVal() == 1;
+ EVT VT = Op.getValueType();
+ MVT SimpleVT = VT.getSimpleVT();
+ // Make sure that the promoted mask size and element size match
+ switch (EltSize) {
+ case 1:
+ assert((SimpleVT == MVT::v16i8 || SimpleVT == MVT::nxv16i1) &&
+ "Unexpected mask or element size");
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
+ : Intrinsic::aarch64_sve_whilerw_b;
+ break;
+ case 2:
+ assert((SimpleVT == MVT::v8i8 || SimpleVT == MVT::nxv8i1) &&
+ "Unexpected mask or element size");
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
+ : Intrinsic::aarch64_sve_whilerw_h;
+ break;
+ case 4:
+ assert((SimpleVT == MVT::v4i16 || SimpleVT == MVT::nxv4i1) &&
+ "Unexpected mask or element size");
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
+ : Intrinsic::aarch64_sve_whilerw_s;
+ break;
+ case 8:
+ assert((SimpleVT == MVT::v2i32 || SimpleVT == MVT::nxv2i1) &&
+ "Unexpected mask or element size");
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
+ : Intrinsic::aarch64_sve_whilerw_d;
+ break;
+ default:
+ llvm_unreachable("Unexpected element size for get.alias.lane.mask");
+ break;
+ }
+ SDValue ID = DAG.getTargetConstant(IntrinsicID, DL, MVT::i64);
+
+ if (VT.isScalableVector())
+ return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, VT, ID, Op.getOperand(0),
+ Op.getOperand(1));
+
+ // We can use the SVE whilewr/whilerw instruction to lower this
+ // intrinsic by creating the appropriate sequence of scalable vector
+ // operations and then extracting a fixed-width subvector from the scalable
+ // vector.
+
+ EVT ContainerVT = getContainerForFixedLengthVector(DAG, VT);
+ EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
+
+ SDValue Mask = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, WhileVT, ID,
+ Op.getOperand(0), Op.getOperand(1));
+ SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, DL, ContainerVT, Mask);
+ return DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, MaskAsInt,
+ DAG.getVectorIdxConstant(0, DL));
+}
+
SDValue AArch64TargetLowering::LowerBITCAST(SDValue Op,
SelectionDAG &DAG) const {
EVT OpVT = Op.getValueType();
@@ -7423,6 +7491,8 @@ SDValue AArch64TargetLowering::LowerOperation(SDValue Op,
default:
llvm_unreachable("unimplemented operand");
return SDValue();
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ return LowerALIAS_LANE_MASK(Op, DAG);
case ISD::BITCAST:
return LowerBITCAST(Op, DAG);
case ISD::GlobalAddress:
@@ -20027,9 +20097,10 @@ static SDValue getPTest(SelectionDAG &DAG, EVT VT, SDValue Pg, SDValue Op,
AArch64CC::CondCode Cond);
static bool isPredicateCCSettingOp(SDValue N) {
- if ((N.getOpcode() == ISD::SETCC) ||
+ if ((N.getOpcode() == ISD::SETCC ||
// get_active_lane_mask is lowered to a whilelo instruction.
- (N.getOpcode() == ISD::GET_ACTIVE_LANE_MASK) ||
+ N.getOpcode() == ISD::GET_ACTIVE_LANE_MASK ||
+ N.getOpcode() == ISD::EXPERIMENTAL_ALIAS_LANE_MASK) ||
(N.getOpcode() == ISD::INTRINSIC_WO_CHAIN &&
(N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilege ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilegt ||
@@ -20038,10 +20109,7 @@ static bool isPredicateCCSettingOp(SDValue N) {
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilele ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilelo ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilels ||
- N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilelt ||
- // get_alias_lane_mask is lowered to a whilewr/rw instruction.
- N.getConstantOperandVal(0) ==
- Intrinsic::experimental_get_alias_lane_mask)))
+ N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilelt)))
return true;
return false;
@@ -28256,6 +28324,22 @@ void AArch64TargetLowering::ReplaceNodeResults(
case ISD::GET_ACTIVE_LANE_MASK:
ReplaceGetActiveLaneMaskResults(N, Results, DAG);
return;
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK: {
+ EVT VT = N->getValueType(0);
+ if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
+ return;
+
+ // NOTE: Only trivial type promotion is supported.
+ EVT NewVT = getTypeToTransformTo(*DAG.getContext(), VT);
+ if (NewVT.getVectorNumElements() != VT.getVectorNumElements())
+ return;
+
+ SDLoc DL(N);
+ auto V =
+ DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, DL, NewVT, N->ops());
+ Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, VT, V));
+ return;
+ }
case ISD::INTRINSIC_WO_CHAIN: {
EVT VT = N->getValueType(0);
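The fixed-length path in LowerALIAS_LANE_MASK above produces the mask at
scalable container width and then narrows it: the whilewr/whilerw
predicate is sign-extended into an integer container vector and the low
fixed-width subvector is extracted. A minimal standalone C++ model of
that shape, assuming the predicate has already been computed (the names
here are illustrative, not from the patch):

  #include <cstdint>
  #include <vector>

  // Model: a scalable predicate is a vector of bools at container lane
  // count (e.g. vscale x 16 for nxv16i1). Sign-extending lane i yields
  // 0x00 or 0xFF; the fixed-length result keeps only the low VF lanes.
  std::vector<int8_t> extractFixedMask(const std::vector<bool> &Pred,
                                       unsigned VF) {
    std::vector<int8_t> Container(Pred.size());
    for (size_t I = 0; I != Pred.size(); ++I)
      Container[I] = Pred[I] ? -1 : 0;   // ISD::SIGN_EXTEND of each i1
    // ISD::EXTRACT_SUBVECTOR at index 0: take the low VF lanes.
    return std::vector<int8_t>(Container.begin(), Container.begin() + VF);
  }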
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index a1c40647555a0..c4542c9b295f5 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -731,6 +731,7 @@ class AArch64TargetLowering : public TargetLowering {
SDValue LowerXOR(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerCONCAT_VECTORS(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerFSINCOS(SDValue Op, SelectionDAG &DAG) const;
+ SDValue LowerALIAS_LANE_MASK(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerBITCAST(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerVSCALE(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerTRUNCATE(SDValue Op, SelectionDAG &DAG) const;
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 84a22822f1702..9b344f03da077 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -48,10 +48,12 @@ define <16 x i1> @whilewr_8(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
ret <16 x i1> %0
}
@@ -88,6 +90,8 @@ define <8 x i1> @whilewr_16(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
; CHECK-NOSVE-NEXT: dup v1.8b, w8
; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: shl v0.8b, v0.8b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.8b, v0.8b, #0
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
@@ -125,7 +129,7 @@ define <4 x i1> @whilewr_32(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
ret <4 x i1> %0
}
@@ -155,7 +159,7 @@ define <2 x i1> @whilewr_64(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
ret <2 x i1> %0
}
@@ -206,10 +210,12 @@ define <16 x i1> @whilerw_8(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
ret <16 x i1> %0
}
@@ -247,6 +253,8 @@ define <8 x i1> @whilerw_16(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
; CHECK-NOSVE-NEXT: dup v1.8b, w8
; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: shl v0.8b, v0.8b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.8b, v0.8b, #0
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
@@ -285,7 +293,7 @@ define <4 x i1> @whilerw_32(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
ret <4 x i1> %0
}
@@ -316,106 +324,6 @@ define <2 x i1> @whilerw_64(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
- ret <2 x i1> %0
-}
-
-define <16 x i1> @not_whilewr_wrong_eltsize(i64 %a, i64 %b) {
-; CHECK-SVE-LABEL: not_whilewr_wrong_eltsize:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE-NEXT: asr x8, x8, #1
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
-; CHECK-SVE-NEXT: dup v0.16b, w9
-; CHECK-SVE-NEXT: mov z1.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: orr v0.16b, v1.16b, v0.16b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: not_whilewr_wrong_eltsize:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_1
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI8_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_2
-; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI8_2]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_4
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI8_1]
-; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_3
-; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI8_4]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_6
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI8_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_5
-; CHECK-NOSVE-NEXT: dup v4.2d, x8
-; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI8_6]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_7
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI8_5]
-; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI8_7]
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
-entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
- ret <16 x i1> %0
-}
-
-define <2 x i1> @not_whilerw_ptr32(i32 %a, i32 %b) {
-; CHECK-SVE-LABEL: not_whilerw_ptr32:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: subs w8, w1, w0
-; CHECK-SVE-NEXT: cneg w8, w8, mi
-; CHECK-SVE-NEXT: add w9, w8, #7
-; CHECK-SVE-NEXT: cmp w8, #0
-; CHECK-SVE-NEXT: csel w8, w9, w8, lt
-; CHECK-SVE-NEXT: asr w8, w8, #3
-; CHECK-SVE-NEXT: cmp w8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p0.s, #0, w8
-; CHECK-SVE-NEXT: dup v0.2s, w9
-; CHECK-SVE-NEXT: mov z1.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: orr v0.8b, v1.8b, v0.8b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: not_whilerw_ptr32:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs w9, w1, w0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI9_0
-; CHECK-NOSVE-NEXT: cneg w9, w9, mi
-; CHECK-NOSVE-NEXT: ldr d1, [x8, :lo12:.LCPI9_0]
-; CHECK-NOSVE-NEXT: add w10, w9, #7
-; CHECK-NOSVE-NEXT: cmp w9, #0
-; CHECK-NOSVE-NEXT: csel w9, w10, w9, lt
-; CHECK-NOSVE-NEXT: asr w9, w9, #3
-; CHECK-NOSVE-NEXT: dup v0.2s, w9
-; CHECK-NOSVE-NEXT: cmp w9, #0
-; CHECK-NOSVE-NEXT: cset w8, eq
-; CHECK-NOSVE-NEXT: dup v2.2s, w8
-; CHECK-NOSVE-NEXT: cmhi v0.2s, v0.2s, v1.2s
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v2.8b
-; CHECK-NOSVE-NEXT: ret
-entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i32.i32(i32 %a, i32 %b, i32 8, i1 0)
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
ret <2 x i1> %0
}
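The shl #7 / cmlt #0 pair added to the CHECK-NOSVE output above is the
usual NEON idiom for widening a mask whose truth value lives in bit 0 of
each lane into an all-ones/all-zeros lane: shift the bit into the sign
position, then compare signed-less-than zero. The same per-byte
replication in standalone C++ (a sketch of the idiom, not the patch's
code):

  #include <cstdint>

  // Replicate bit 0 of a byte across the whole byte, as the "shl #7"
  // plus "cmlt #0" pair does per lane: 0x01 -> 0xFF, 0x00 -> 0x00.
  uint8_t replicateBit0(uint8_t Lane) {
    int8_t Shifted = static_cast<int8_t>(Lane << 7); // bit 0 -> sign bit
    return Shifted < 0 ? 0xFF : 0x00;                // cmlt #0
  }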
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index be5ec8b2a82bf..a7c9c5e3cdd33 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -10,16 +10,57 @@ define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilewr_8:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z0.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #4
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p2.s, p5.s, p6.s
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p4.s
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
ret <vscale x 16 x i1> %0
}
@@ -31,13 +72,28 @@ define <vscale x 8 x i1> @whilewr_16(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilewr_16:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
@@ -54,20 +110,27 @@ define <vscale x 4 x i1> @whilewr_32(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilewr_32:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #3
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
+; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
ret <vscale x 4 x i1> %0
}
@@ -80,19 +143,22 @@ define <vscale x 2 x i1> @whilewr_64(i64 %a, i64 %b) {
; CHECK-SVE-LABEL: whilewr_64:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #7
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z1.d, x8
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
ret <vscale x 2 x i1> %0
}
@@ -104,17 +170,60 @@ define <vscale x 16 x i1> @whilerw_8(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilerw_8:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: mov z1.d, x8
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z1.d, z0.d
+; CHECK-SVE-NEXT: incd z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE-NEXT: mov z3.d, z2.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z1.d, z2.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z1.d, z4.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z1.d, z5.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z1.d, z2.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z1.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE-NEXT: mov z0.d, z3.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z1.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
ret <vscale x 16 x i1> %0
}
@@ -126,14 +235,29 @@ define <vscale x 8 x i1> @whilerw_16(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilerw_16:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: mov z3.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
@@ -150,21 +274,28 @@ define <vscale x 4 x i1> @whilerw_32(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilerw_32:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
; CHECK-SVE-NEXT: add x9, x8, #3
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
+; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
ret <vscale x 4 x i1> %0
}
@@ -177,19 +308,22 @@ define <vscale x 2 x i1> @whilerw_64(i64 %a, i64 %b) {
; CHECK-SVE-LABEL: whilerw_64:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
; CHECK-SVE-NEXT: add x9, x8, #7
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z1.d, x8
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
ret <vscale x 2 x i1> %0
}
From 80a72caecd15ad89aa36acaddb79d533d15d7e9c Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 15 Jan 2025 16:16:31 +0000
Subject: [PATCH 03/43] Fix ISD node name string and remove shouldExpand
function
---
.../SelectionDAG/SelectionDAGDumper.cpp | 2 +-
.../Target/AArch64/AArch64ISelLowering.cpp | 19 -------------------
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 3 ---
3 files changed, 1 insertion(+), 23 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index 41aa8c9ebfa8f..2693e86c36ae8 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -588,7 +588,7 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
case ISD::PARTIAL_REDUCE_SUMLA:
return "partial_reduce_sumla";
case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- return "alias_mask";
+ return "alias_lane_mask";
// Vector Predication
#define BEGIN_REGISTER_VP_SDNODE(SDID, LEGALARG, NAME, ...) \
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 72fd45a3198f3..8253cc751d907 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -2169,25 +2169,6 @@ bool AArch64TargetLowering::shouldExpandGetActiveLaneMask(EVT ResVT,
return false;
}
-bool AArch64TargetLowering::shouldExpandGetAliasLaneMask(
- EVT VT, EVT PtrVT, unsigned EltSize) const {
- if (!Subtarget->hasSVE2())
- return true;
-
- if (PtrVT != MVT::i64)
- return true;
-
- if (VT == MVT::v2i1 || VT == MVT::nxv2i1)
- return EltSize != 8;
- if (VT == MVT::v4i1 || VT == MVT::nxv4i1)
- return EltSize != 4;
- if (VT == MVT::v8i1 || VT == MVT::nxv8i1)
- return EltSize != 2;
- if (VT == MVT::v16i1 || VT == MVT::nxv16i1)
- return EltSize != 1;
- return true;
-}
-
bool AArch64TargetLowering::shouldExpandPartialReductionIntrinsic(
const IntrinsicInst *I) const {
assert(I->getIntrinsicID() ==
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index c4542c9b295f5..381816bd3defd 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -513,9 +513,6 @@ class AArch64TargetLowering : public TargetLowering {
bool shouldExpandGetActiveLaneMask(EVT VT, EVT OpVT) const override;
- bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT,
- unsigned EltSize) const override;
-
bool
shouldExpandPartialReductionIntrinsic(const IntrinsicInst *I) const override;
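For reference, the hook removed above encoded one invariant: a native
whilewr/whilerw form exists only when the mask's lane count multiplied
by the element size covers exactly the 16-byte minimum SVE register
(16 lanes of 1 byte, 8 of 2, 4 of 4, 2 of 8). A standalone sketch
mirroring the deleted logic (the helper name is hypothetical):

  // True when AArch64 whilewr/whilerw directly produces a mask with
  // this lane count for this element size; every supported pairing
  // multiplies out to 16 bytes, the minimum SVE register width.
  bool hasNativeAliasMask(unsigned NumLanes, unsigned EltSizeBytes) {
    switch (NumLanes) {
    case 2:  return EltSizeBytes == 8;
    case 4:  return EltSizeBytes == 4;
    case 8:  return EltSizeBytes == 2;
    case 16: return EltSizeBytes == 1;
    default: return false;
    }
  }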
From daa2ac41a9368cf1a383eff5e5c596655feb1363 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 16 Jan 2025 10:24:59 +0000
Subject: [PATCH 04/43] Format
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 8253cc751d907..a0f3da8c274ec 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5249,24 +5249,32 @@ SDValue AArch64TargetLowering::LowerALIAS_LANE_MASK(SDValue Op,
// Make sure that the promoted mask size and element size match
switch (EltSize) {
case 1:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
+ : Intrinsic::aarch64_sve_whilerw_b;
assert((SimpleVT == MVT::v16i8 || SimpleVT == MVT::nxv16i1) &&
"Unexpected mask or element size");
IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
: Intrinsic::aarch64_sve_whilerw_b;
break;
case 2:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
+ : Intrinsic::aarch64_sve_whilerw_h;
assert((SimpleVT == MVT::v8i8 || SimpleVT == MVT::nxv8i1) &&
"Unexpected mask or element size");
IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
: Intrinsic::aarch64_sve_whilerw_h;
break;
case 4:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
+ : Intrinsic::aarch64_sve_whilerw_s;
assert((SimpleVT == MVT::v4i16 || SimpleVT == MVT::nxv4i1) &&
"Unexpected mask or element size");
IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
: Intrinsic::aarch64_sve_whilerw_s;
break;
case 8:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
+ : Intrinsic::aarch64_sve_whilerw_d;
assert((SimpleVT == MVT::v2i32 || SimpleVT == MVT::nxv2i1) &&
"Unexpected mask or element size");
IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
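Since the element sizes accepted by the switch above are exactly the
powers of two 1, 2, 4 and 8, the same selection could also be written
table-driven. This is only an illustrative alternative under that
assumption, not what the patch does:

  #include <bit>
  #include <cassert>
  #include <cstdint>

  enum class WhileOp { WrB, WrH, WrS, WrD, RwB, RwH, RwS, RwD };

  // Pick the whilewr/whilerw flavour from the element size in bytes.
  WhileOp pickWhileOp(uint64_t EltSize, bool WriteAfterRead) {
    assert(EltSize >= 1 && EltSize <= 8 && std::has_single_bit(EltSize));
    unsigned Idx = std::countr_zero(EltSize); // 1->0, 2->1, 4->2, 8->3
    static constexpr WhileOp Wr[] = {WhileOp::WrB, WhileOp::WrH,
                                     WhileOp::WrS, WhileOp::WrD};
    static constexpr WhileOp Rw[] = {WhileOp::RwB, WhileOp::RwH,
                                     WhileOp::RwS, WhileOp::RwD};
    return WriteAfterRead ? Wr[Idx] : Rw[Idx];
  }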
From 3fcb9e842452e13cdf18f8b7d3bb59928139e552 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 27 Jan 2025 14:17:16 +0000
Subject: [PATCH 05/43] Move promote case
---
llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index 68e8db38fa5ec..ac5ec3787824a 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -56,9 +56,6 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
N->dump(&DAG); dbgs() << "\n";
#endif
report_fatal_error("Do not know how to promote this operator!");
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- Res = PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(N);
- break;
case ISD::MERGE_VALUES:Res = PromoteIntRes_MERGE_VALUES(N, ResNo); break;
case ISD::AssertSext: Res = PromoteIntRes_AssertSext(N); break;
case ISD::AssertZext: Res = PromoteIntRes_AssertZext(N); break;
@@ -327,6 +324,10 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
Res = PromoteIntRes_VP_REDUCE(N);
break;
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ Res = PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(N);
+ break;
+
case ISD::FREEZE:
Res = PromoteIntRes_FREEZE(N);
break;
From 6628a98abd1c60212f0b633504eae13461acac09 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 27 Jan 2025 14:17:30 +0000
Subject: [PATCH 06/43] Fix tablegen comment
---
llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
index c33f298617ba8..c9b8297dc04bd 100644
--- a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
@@ -4132,7 +4132,7 @@ let Predicates = [HasSVE2_or_SME] in {
// SVE2 pointer conflict compare
defm WHILEWR_PXX : sve2_int_while_rr<0b0, "whilewr", AArch64whilewr>;
defm WHILERW_PXX : sve2_int_while_rr<0b1, "whilerw", AArch64whilerw>;
-} // End HasSVE2orSME
+} // End HasSVE2_or_SME
let Predicates = [HasSVEAES, HasNonStreamingSVE_or_SSVE_AES] in {
// SVE2 crypto destructive binary operations
From 064454204eef854749b4fa41aa2bd0a070d2f98b Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 27 Jan 2025 14:20:31 +0000
Subject: [PATCH 07/43] Remove DAGTypeLegalizer::
---
llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index ac5ec3787824a..12983f53f988f 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -2117,7 +2117,7 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
Res = PromoteIntOp_PARTIAL_REDUCE_MLA(N);
break;
case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- Res = DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(N, OpNo);
+ Res = PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(N, OpNo);
break;
}
From 75af361285de960f4aa3baa022b759890e60537d Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 27 Jan 2025 14:20:39 +0000
Subject: [PATCH 08/43] Use getConstantOperandVal
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index a0f3da8c274ec..feb67b5907827 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5242,8 +5242,8 @@ SDValue AArch64TargetLowering::LowerALIAS_LANE_MASK(SDValue Op,
SelectionDAG &DAG) const {
SDLoc DL(Op);
unsigned IntrinsicID = 0;
- uint64_t EltSize = Op.getOperand(2)->getAsZExtVal();
- bool IsWriteAfterRead = Op.getOperand(3)->getAsZExtVal() == 1;
+ uint64_t EltSize = Op.getConstantOperandVal(2);
+ bool IsWriteAfterRead = Op.getConstantOperandVal(3) == 1;
EVT VT = Op.getValueType();
MVT SimpleVT = VT.getSimpleVT();
// Make sure that the promoted mask size and element size match
From 5f563d947b9201f093560e3c35c356709ef52d2e Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 29 Jan 2025 11:40:32 +0000
Subject: [PATCH 09/43] Remove isPredicateCCSettingOp case
---
.../Target/AArch64/AArch64ISelLowering.cpp | 51 +++++++++----------
1 file changed, 25 insertions(+), 26 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index feb67b5907827..7b61d938c0908 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -6551,29 +6551,29 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
}
case Intrinsic::experimental_get_alias_lane_mask: {
unsigned IntrinsicID = 0;
- uint64_t EltSize = Op.getOperand(3)->getAsZExtVal();
- bool IsWriteAfterRead = Op.getOperand(4)->getAsZExtVal() == 1;
- switch (EltSize) {
- case 1:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
- : Intrinsic::aarch64_sve_whilerw_b;
- break;
- case 2:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
- : Intrinsic::aarch64_sve_whilerw_h;
- break;
- case 4:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
- : Intrinsic::aarch64_sve_whilerw_s;
- break;
- case 8:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
- : Intrinsic::aarch64_sve_whilerw_d;
- break;
- default:
- llvm_unreachable("Unexpected element size for get.alias.lane.mask");
- break;
- }
+ uint64_t EltSize = Op.getOperand(3)->getAsZExtVal();
+ bool IsWriteAfterRead = Op.getOperand(4)->getAsZExtVal() == 1;
+ switch (EltSize) {
+ case 1:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
+ : Intrinsic::aarch64_sve_whilerw_b;
+ break;
+ case 2:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
+ : Intrinsic::aarch64_sve_whilerw_h;
+ break;
+ case 4:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
+ : Intrinsic::aarch64_sve_whilerw_s;
+ break;
+ case 8:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
+ : Intrinsic::aarch64_sve_whilerw_d;
+ break;
+ default:
+ llvm_unreachable("Unexpected element size for get.alias.lane.mask");
+ break;
+ }
SDValue ID = DAG.getTargetConstant(IntrinsicID, dl, MVT::i64);
EVT VT = Op.getValueType();
@@ -20087,9 +20087,8 @@ static SDValue getPTest(SelectionDAG &DAG, EVT VT, SDValue Pg, SDValue Op,
static bool isPredicateCCSettingOp(SDValue N) {
if ((N.getOpcode() == ISD::SETCC ||
- // get_active_lane_mask is lowered to a whilelo instruction.
- N.getOpcode() == ISD::GET_ACTIVE_LANE_MASK ||
- N.getOpcode() == ISD::EXPERIMENTAL_ALIAS_LANE_MASK) ||
+ // get_active_lane_mask is lowered to a whilelo instruction.
+ N.getOpcode() == ISD::GET_ACTIVE_LANE_MASK) ||
(N.getOpcode() == ISD::INTRINSIC_WO_CHAIN &&
(N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilege ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilegt ||
From 24df6bf67ed4c6934cd6e762e18a9f55c674b14e Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 14:40:12 +0000
Subject: [PATCH 10/43] Remove overloads for pointer and element size
parameters
---
llvm/docs/LangRef.rst | 12 +++----
llvm/include/llvm/IR/Intrinsics.td | 2 +-
.../SelectionDAG/LegalizeVectorOps.cpp | 11 ++++---
llvm/test/CodeGen/AArch64/alias_mask.ll | 32 +++++++++----------
.../CodeGen/AArch64/alias_mask_scalable.ll | 32 +++++++++----------
5 files changed, 46 insertions(+), 43 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index d95c0be3ae02e..95bea94f6dce7 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24115,10 +24115,10 @@ This is an overloaded intrinsic.
::
- declare <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %ptrA, i64 %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %ptrA, i64 %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
Overview:
@@ -24158,7 +24158,7 @@ equivalent to:
%m[i] = (icmp ult i, %diff) || (%diff == 0)
where ``%m`` is a vector (mask) of active/inactive lanes with its elements
-indexed by ``i``, and ``%ptrA``, ``%ptrB`` are the two i64 arguments to
+indexed by ``i``, and ``%ptrA``, ``%ptrB`` are the two ptr arguments to
``llvm.experimental.get.alias.lane.mask.*`` and ``%elementSize`` is the first
immediate argument. The ``%writeAfterRead`` argument is expected to be true if
``%ptrB`` is stored to after ``%ptrA`` is read from.
@@ -24184,7 +24184,7 @@ Examples:
.. code-block:: llvm
- %alias.lane.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 4, i1 1)
+ %alias.lane.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)
%vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %alias.lane.mask, <4 x i32> poison)
[...]
call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %alias.lane.mask)
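The LangRef example above corresponds to a loop copying between two
possibly overlapping buffers. One plausible way a vectorised loop
consumes such a mask is to execute only the safe (leading) lanes and
advance by their count; that consumption strategy is an assumption here,
not something this patch specifies. A scalar C++ model for the
writeAfterRead = 1 case, with both pointers assumed to lie within one
buffer so their difference is well defined:

  #include <cstddef>
  #include <cstdint>

  // Copy n i32 elements from a to b. Per step, lane k is safe when
  // b - a <= 0 or k < b - a (in elements), matching the documented
  // writeAfterRead mask; whilewr masks are a leading run, so we can
  // stop at the first unsafe lane and advance by the active count.
  void maskedCopy(int32_t *b, const int32_t *a, size_t n) {
    const size_t VF = 4;
    const ptrdiff_t Diff = b - a; // in element-size units
    for (size_t i = 0; i < n;) {
      size_t Active = 0;
      for (size_t k = 0; k < VF && i + k < n; ++k) {
        if (!(Diff <= 0 || (ptrdiff_t)k < Diff))
          break;              // first inactive lane ends the run
        b[i + k] = a[i + k];
        ++Active;
      }
      i += Active ? Active : 1; // Active >= 1 here; guard is defensive
    }
  }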
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index 0c5799a23d7a8..39fbcb018d468 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -2422,7 +2422,7 @@ let IntrProperties = [IntrNoMem, ImmArg<ArgIndex<1>>] in {
def int_experimental_get_alias_lane_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
- [llvm_anyint_ty, LLVMMatchType<1>, llvm_anyint_ty, llvm_i1_ty],
+ [llvm_anyptr_ty, LLVMMatchType<1>, llvm_i64_ty, llvm_i1_ty],
[IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>, ImmArg<ArgIndex<3>>]>;
def int_get_active_lane_mask:
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index c9e5c7f8bbdb0..3df412db95e26 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -1807,8 +1807,7 @@ SDValue VectorLegalizer::ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
SDValue SinkValue = N->getOperand(1);
SDValue EltSize = N->getOperand(2);
- bool IsWriteAfterRead =
- cast<ConstantSDNode>(N->getOperand(3))->getZExtValue() != 0;
+ bool IsWriteAfterRead = N->getConstantOperandVal(3) != 0;
auto VT = N->getValueType(0);
auto PtrVT = SourceValue->getValueType(0);
@@ -1817,14 +1816,15 @@ SDValue VectorLegalizer::ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
Diff = DAG.getNode(ISD::ABS, DL, PtrVT, Diff);
Diff = DAG.getNode(ISD::SDIV, DL, PtrVT, Diff, EltSize);
- SDValue Zero = DAG.getTargetConstant(0, DL, PtrVT);
// If the difference is positive then some elements may alias
auto CmpVT = TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
Diff.getValueType());
+ SDValue Zero = DAG.getTargetConstant(0, DL, PtrVT);
SDValue Cmp = DAG.getSetCC(DL, CmpVT, Diff, Zero,
IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
+ // Create the lane mask
EVT SplatTY =
EVT::getVectorVT(*DAG.getContext(), PtrVT, VT.getVectorElementCount());
SDValue DiffSplat = DAG.getSplat(SplatTY, DL, Diff);
@@ -1832,7 +1832,10 @@ SDValue VectorLegalizer::ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
SDValue DiffMask =
DAG.getSetCC(DL, VT, VectorStep, DiffSplat, ISD::CondCode::SETULT);
- // Splat the compare result then OR it with a lane mask
+ // Splat the compare result then OR it with the lane mask
+ auto VTElementTy = VT.getVectorElementType();
+ if (CmpVT.getScalarSizeInBits() < VTElementTy.getScalarSizeInBits())
+ Cmp = DAG.getNode(ISD::ZERO_EXTEND, DL, VTElementTy, Cmp);
SDValue Splat = DAG.getSplat(VT, DL, Cmp);
return DAG.getNode(ISD::OR, DL, VT, DiffMask, Splat);
}
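Step for step, the expansion above derives the mask from the pointer
difference. A standalone scalar model of those exact steps (the
subtract, the optional ISD::ABS, the ISD::SDIV by the element size, the
SETLE/SETEQ compare, and the step-vector SETULT OR'd with its splat):

  #include <cstdint>
  #include <cstdlib>
  #include <vector>

  // Lane I is active when I <u diff, or when the compare forces the
  // whole mask on (diff <= 0 for write-after-read, diff == 0 for
  // read-after-write). The unsigned cast mirrors icmp ult: a negative
  // diff turns every lane on.
  std::vector<bool> expandAliasLaneMask(int64_t PtrA, int64_t PtrB,
                                        int64_t EltSize,
                                        bool WriteAfterRead, unsigned VF) {
    int64_t Diff = PtrB - PtrA;
    if (!WriteAfterRead)
      Diff = std::abs(Diff); // ISD::ABS
    Diff /= EltSize;         // ISD::SDIV
    const bool AllOn = WriteAfterRead ? Diff <= 0 : Diff == 0;
    std::vector<bool> Mask(VF);
    for (uint64_t I = 0; I < VF; ++I) // step vector vs. splat, then OR
      Mask[I] = AllOn || I < static_cast<uint64_t>(Diff);
    return Mask;
  }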
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 9b344f03da077..f88baeece0356 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -2,7 +2,7 @@
; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE
; RUN: llc -mtriple=aarch64 %s -o - | FileCheck %s --check-prefix=CHECK-NOSVE
-define <16 x i1> @whilewr_8(i64 %a, i64 %b) {
+define <16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_8:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
@@ -53,11 +53,11 @@ define <16 x i1> @whilewr_8(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
ret <16 x i1> %0
}
-define <8 x i1> @whilewr_16(i64 %a, i64 %b) {
+define <8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_16:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
@@ -95,11 +95,11 @@ define <8 x i1> @whilewr_16(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
ret <8 x i1> %0
}
-define <4 x i1> @whilewr_32(i64 %a, i64 %b) {
+define <4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
@@ -129,11 +129,11 @@ define <4 x i1> @whilewr_32(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
ret <4 x i1> %0
}
-define <2 x i1> @whilewr_64(i64 %a, i64 %b) {
+define <2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
@@ -159,11 +159,11 @@ define <2 x i1> @whilewr_64(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
ret <2 x i1> %0
}
-define <16 x i1> @whilerw_8(i64 %a, i64 %b) {
+define <16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilerw_8:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
@@ -215,11 +215,11 @@ define <16 x i1> @whilerw_8(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
ret <16 x i1> %0
}
-define <8 x i1> @whilerw_16(i64 %a, i64 %b) {
+define <8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilerw_16:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
@@ -258,11 +258,11 @@ define <8 x i1> @whilerw_16(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 0)
+ %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
ret <8 x i1> %0
}
-define <4 x i1> @whilerw_32(i64 %a, i64 %b) {
+define <4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilerw_32:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilerw p0.h, x0, x1
@@ -293,11 +293,11 @@ define <4 x i1> @whilerw_32(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
ret <4 x i1> %0
}
-define <2 x i1> @whilerw_64(i64 %a, i64 %b) {
+define <2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilerw_64:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilerw p0.s, x0, x1
@@ -324,6 +324,6 @@ define <2 x i1> @whilerw_64(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
ret <2 x i1> %0
}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index a7c9c5e3cdd33..3d0f293b4687a 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -2,7 +2,7 @@
; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE2
; RUN: llc -mtriple=aarch64 -mattr=+sve %s -o - | FileCheck %s --check-prefix=CHECK-SVE
-define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
+define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_8:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
@@ -60,11 +60,11 @@ define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
ret <vscale x 16 x i1> %0
}
-define <vscale x 8 x i1> @whilewr_16(i64 %a, i64 %b) {
+define <vscale x 8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_16:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
@@ -98,11 +98,11 @@ define <vscale x 8 x i1> @whilewr_16(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
ret <vscale x 8 x i1> %0
}
-define <vscale x 4 x i1> @whilewr_32(i64 %a, i64 %b) {
+define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_32:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
@@ -130,11 +130,11 @@ define <vscale x 4 x i1> @whilewr_32(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
ret <vscale x 4 x i1> %0
}
-define <vscale x 2 x i1> @whilewr_64(i64 %a, i64 %b) {
+define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_64:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
@@ -158,11 +158,11 @@ define <vscale x 2 x i1> @whilewr_64(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
ret <vscale x 2 x i1> %0
}
-define <vscale x 16 x i1> @whilerw_8(i64 %a, i64 %b) {
+define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilerw_8:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilerw p0.b, x0, x1
@@ -223,11 +223,11 @@ define <vscale x 16 x i1> @whilerw_8(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
ret <vscale x 16 x i1> %0
}
-define <vscale x 8 x i1> @whilerw_16(i64 %a, i64 %b) {
+define <vscale x 8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilerw_16:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilerw p0.h, x0, x1
@@ -262,11 +262,11 @@ define <vscale x 8 x i1> @whilerw_16(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 0)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
ret <vscale x 8 x i1> %0
}
-define <vscale x 4 x i1> @whilerw_32(i64 %a, i64 %b) {
+define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilerw_32:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilerw p0.s, x0, x1
@@ -295,11 +295,11 @@ define <vscale x 4 x i1> @whilerw_32(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
ret <vscale x 4 x i1> %0
}
-define <vscale x 2 x i1> @whilerw_64(i64 %a, i64 %b) {
+define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilerw_64:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilerw p0.d, x0, x1
@@ -324,6 +324,6 @@ define <vscale x 2 x i1> @whilerw_64(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
ret <vscale x 2 x i1> %0
}
From ec37dfa5136c26486066b8fda02c2db4008d9c02 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 15:16:57 +0000
Subject: [PATCH 11/43] Clarify elementSize and writeAfterRead = 0
---
llvm/docs/LangRef.rst | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 95bea94f6dce7..5a4438b83caf6 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24137,6 +24137,7 @@ The final two are immediates and the result is a vector with the i1 element type
Semantics:
""""""""""
+``%elementSize`` is the size of the accessed elements in bytes.
The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
In other cases when ``%writeAfterRead`` is true, the
@@ -24161,7 +24162,8 @@ where ``%m`` is a vector (mask) of active/inactive lanes with its elements
indexed by ``i``, and ``%ptrA``, ``%ptrB`` are the two ptr arguments to
``llvm.experimental.get.alias.lane.mask.*`` and ``%elementSize`` is the first
immediate argument. The ``%writeAfterRead`` argument is expected to be true if
-``%ptrB`` is stored to after ``%ptrA`` is read from.
+``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise it is false for
+a read after write.
The above is equivalent to:
::
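A concrete instance of the clarified semantics, using the same
arithmetic as the expansion model earlier: pointers 8 bytes apart with a
4-byte element size give a difference of 2 elements, so for
writeAfterRead = 1 and VF = 4 only the first two lanes are safe. The
wrap condition that yields poison can be checked with the same numbers
(the addresses are illustrative):

  #include <cassert>
  #include <cstdint>

  int main() {
    // ptrB - ptrA = 8 bytes, elementSize = 4 => diff = 2 elements,
    // so the mask for VF = 4 is {1, 1, 0, 0}.
    const uint64_t PtrA = 0x1000, PtrB = 0x1008, EltSize = 4, VF = 4;
    const int64_t Diff =
        static_cast<int64_t>(PtrB - PtrA) / static_cast<int64_t>(EltSize);
    for (uint64_t I = 0; I < VF; ++I)
      assert((I < static_cast<uint64_t>(Diff)) == (I < 2));

    // Poison requires both conditions: the pointers lie within
    // VF * elementSize of each other and ptrA + VF * elementSize wraps.
    const bool Close = PtrB - PtrA < VF * EltSize;
    const bool Wraps = PtrA > UINT64_MAX - VF * EltSize;
    assert(Close && !Wraps); // close, but no wrap: result is well defined
    return 0;
  }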
From 8d81955612bae30c516577fcfe57cadc9aa40d13 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 15:23:46 +0000
Subject: [PATCH 12/43] Add i=0 to VF-1
---
llvm/docs/LangRef.rst | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 5a4438b83caf6..ac3e4d8adbc67 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24159,11 +24159,11 @@ equivalent to:
%m[i] = (icmp ult i, %diff) || (%diff == 0)
where ``%m`` is a vector (mask) of active/inactive lanes with its elements
-indexed by ``i``, and ``%ptrA``, ``%ptrB`` are the two ptr arguments to
-``llvm.experimental.get.alias.lane.mask.*`` and ``%elementSize`` is the first
-immediate argument. The ``%writeAfterRead`` argument is expected to be true if
-``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise it is false for
-a read after write.
+indexed by ``i`` (i = 0 to VF - 1), and ``%ptrA``, ``%ptrB`` are the two ptr
+arguments to ``llvm.experimental.get.alias.lane.mask.*`` and ``%elementSize``
+is the first immediate argument. The ``%writeAfterRead`` argument is expected
+to be true if ``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise
+it is false for a read after write.
The above is equivalent to:
::
From 8a09412f0e0c805e4d5a12dedb36022a123a27df Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 16:08:47 +0000
Subject: [PATCH 13/43] Rename to get.nonalias.lane.mask
---
llvm/docs/LangRef.rst | 28 +++++++++----------
llvm/include/llvm/CodeGen/ISDOpcodes.h | 2 +-
llvm/include/llvm/IR/Intrinsics.td | 4 +--
.../SelectionDAG/LegalizeIntegerTypes.cpp | 16 +++++------
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h | 5 ++--
.../SelectionDAG/LegalizeVectorOps.cpp | 10 +++----
.../SelectionDAG/SelectionDAGBuilder.cpp | 6 ++--
.../SelectionDAG/SelectionDAGDumper.cpp | 2 +-
llvm/lib/CodeGen/TargetLoweringBase.cpp | 4 +--
.../Target/AArch64/AArch64ISelLowering.cpp | 19 +++++++------
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 2 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 16 +++++------
.../CodeGen/AArch64/alias_mask_scalable.ll | 16 +++++------
13 files changed, 66 insertions(+), 64 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index ac3e4d8adbc67..0b0c186c4f60f 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24104,9 +24104,9 @@ Examples:
%active.lane.mask = call <4 x i1> @llvm.get.active.lane.mask.v4i1.i64(i64 %elem0, i64 429)
%wide.masked.load = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %3, i32 4, <4 x i1> %active.lane.mask, <4 x i32> poison)
-.. _int_experimental_get_alias_lane_mask:
+.. _int_experimental_get_nonalias_lane_mask:
-'``llvm.experimental.get.alias.lane.mask.*``' Intrinsics
+'``llvm.experimental.get.nonalias.lane.mask.*``' Intrinsics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Syntax:
@@ -24115,16 +24115,16 @@ This is an overloaded intrinsic.
::
- declare <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <vscale x 16 x i1> @llvm.experimental.get.nonalias.lane.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
Overview:
"""""""""
-Create a mask representing lanes that do or not overlap between two pointers
+Create a mask enabling lanes that do not overlap between two pointers
across one vector loop iteration.
@@ -24141,7 +24141,7 @@ Semantics:
The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
In other cases when ``%writeAfterRead`` is true, the
-'``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically
+'``llvm.experimental.get.nonalias.lane.mask.*``' intrinsics are semantically
equivalent to:
::
@@ -24150,7 +24150,7 @@ equivalent to:
%m[i] = (icmp ult i, %diff) || (%diff <= 0)
When the return value is not poison and ``%writeAfterRead`` is false, the
-'``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically
+'``llvm.experimental.get.nonalias.lane.mask.*``' intrinsics are semantically
equivalent to:
::
@@ -24160,7 +24160,7 @@ equivalent to:
where ``%m`` is a vector (mask) of active/inactive lanes with its elements
indexed by ``i`` (i = 0 to VF - 1), and ``%ptrA``, ``%ptrB`` are the two ptr
-arguments to ``llvm.experimental.get.alias.lane.mask.*`` and ``%elementSize``
+arguments to ``llvm.experimental.get.nonalias.lane.mask.*`` and ``%elementSize``
is the first immediate argument. The ``%writeAfterRead`` argument is expected
to be true if ``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise
it is false for a read after write.
@@ -24168,7 +24168,7 @@ The above is equivalent to:
::
- %m = @llvm.experimental.get.alias.lane.mask(%ptrA, %ptrB, %elementSize, %writeAfterRead)
+ %m = @llvm.experimental.get.nonalias.lane.mask(%ptrA, %ptrB, %elementSize, %writeAfterRead)
This can, for example, be emitted by the loop vectorizer in which case
``%ptrA`` is a pointer that is read from within the loop, and ``%ptrB`` is a
@@ -24186,10 +24186,10 @@ Examples:
.. code-block:: llvm
- %alias.lane.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)
- %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %alias.lane.mask, <4 x i32> poison)
+ %nonalias.lane.mask = call <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)
+ %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %nonalias.lane.mask, <4 x i32> poison)
[...]
- call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %alias.lane.mask)
+ call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %nonalias.lane.mask)
.. _int_experimental_vp_splice:
diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index 3acedb67fc7c0..e1e903d3063af 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -1561,7 +1561,7 @@ enum NodeType {
// The `llvm.experimental.get.alias.lane.mask.*` intrinsics
// Operands: Load pointer, Store pointer, Element size, Write after read
// Output: Mask
- EXPERIMENTAL_ALIAS_LANE_MASK,
+ EXPERIMENTAL_NONALIAS_LANE_MASK,
// llvm.clear_cache intrinsic
// Operands: Input Chain, Start Address, End Address
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index 39fbcb018d468..dd9ed94f19cd5 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -2420,9 +2420,9 @@ let IntrProperties = [IntrNoMem, ImmArg<ArgIndex<1>>] in {
llvm_i32_ty]>;
}
-def int_experimental_get_alias_lane_mask:
+def int_experimental_get_nonalias_lane_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
- [llvm_anyptr_ty, LLVMMatchType<1>, llvm_i64_ty, llvm_i1_ty],
+ [llvm_ptr_ty, llvm_ptr_ty, llvm_i64_ty, llvm_i1_ty],
[IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>, ImmArg<ArgIndex<3>>]>;
def int_get_active_lane_mask:
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index 12983f53f988f..d57f60fbe4830 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -324,8 +324,8 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
Res = PromoteIntRes_VP_REDUCE(N);
break;
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- Res = PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(N);
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
+ Res = PromoteIntRes_EXPERIMENTAL_NONALIAS_LANE_MASK(N);
break;
case ISD::FREEZE:
@@ -379,10 +379,10 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(SDNode *N,
}
SDValue
-DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
+DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N) {
EVT VT = N->getValueType(0);
EVT NewVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
- return DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, SDLoc(N), NewVT,
+ return DAG.getNode(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, SDLoc(N), NewVT,
N->ops());
}
@@ -2116,8 +2116,8 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
case ISD::PARTIAL_REDUCE_SUMLA:
Res = PromoteIntOp_PARTIAL_REDUCE_MLA(N);
break;
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- Res = PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(N, OpNo);
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
+ Res = PromoteIntOp_EXPERIMENTAL_NONALIAS_LANE_MASK(N, OpNo);
break;
}
@@ -2953,8 +2953,8 @@ SDValue DAGTypeLegalizer::PromoteIntOp_PARTIAL_REDUCE_MLA(SDNode *N) {
}
SDValue
-DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N,
- unsigned OpNo) {
+DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N,
+ unsigned OpNo) {
SmallVector<SDValue, 4> NewOps(N->ops());
NewOps[OpNo] = GetPromotedInteger(N->getOperand(OpNo));
return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index c089da706b1b5..1d56b5efdb9d2 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -382,7 +382,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntRes_VECTOR_FIND_LAST_ACTIVE(SDNode *N);
SDValue PromoteIntRes_GET_ACTIVE_LANE_MASK(SDNode *N);
SDValue PromoteIntRes_PARTIAL_REDUCE_MLA(SDNode *N);
- SDValue PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N);
+ SDValue PromoteIntRes_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N);
// Integer Operand Promotion.
bool PromoteIntegerOperand(SDNode *N, unsigned OpNo);
@@ -437,7 +437,8 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(SDNode *N, unsigned OpNo);
SDValue PromoteIntOp_GET_ACTIVE_LANE_MASK(SDNode *N);
SDValue PromoteIntOp_PARTIAL_REDUCE_MLA(SDNode *N);
- SDValue PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N, unsigned OpNo);
+ SDValue PromoteIntOp_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N,
+ unsigned OpNo);
void SExtOrZExtPromotedOperands(SDValue &LHS, SDValue &RHS);
void PromoteSetCCOperands(SDValue &LHS,SDValue &RHS, ISD::CondCode Code);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index 3df412db95e26..59539fa658c58 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -138,7 +138,7 @@ class VectorLegalizer {
SDValue ExpandVP_FNEG(SDNode *Node);
SDValue ExpandVP_FABS(SDNode *Node);
SDValue ExpandVP_FCOPYSIGN(SDNode *Node);
- SDValue ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N);
+ SDValue ExpandEXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N);
SDValue ExpandSELECT(SDNode *Node);
std::pair<SDValue, SDValue> ExpandLoad(SDNode *N);
SDValue ExpandStore(SDNode *N);
@@ -476,7 +476,7 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
case ISD::VECTOR_COMPRESS:
case ISD::SCMP:
case ISD::UCMP:
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
break;
case ISD::SMULFIX:
@@ -1293,8 +1293,8 @@ void VectorLegalizer::Expand(SDNode *Node, SmallVectorImpl<SDValue> &Results) {
case ISD::UCMP:
Results.push_back(TLI.expandCMP(Node, DAG));
return;
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- Results.push_back(ExpandEXPERIMENTAL_ALIAS_LANE_MASK(Node));
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
+ Results.push_back(ExpandEXPERIMENTAL_NONALIAS_LANE_MASK(Node));
return;
case ISD::FADD:
@@ -1801,7 +1801,7 @@ SDValue VectorLegalizer::ExpandVP_FCOPYSIGN(SDNode *Node) {
return DAG.getNode(ISD::BITCAST, DL, VT, CopiedSign);
}
-SDValue VectorLegalizer::ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
+SDValue VectorLegalizer::ExpandEXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N) {
SDLoc DL(N);
SDValue SourceValue = N->getOperand(0);
SDValue SinkValue = N->getOperand(1);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 031e2b8711c04..78fc429fdd254 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8314,13 +8314,13 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
visitVectorExtractLastActive(I, Intrinsic);
return;
}
- case Intrinsic::experimental_get_alias_lane_mask: {
+ case Intrinsic::experimental_get_nonalias_lane_mask: {
auto IntrinsicVT = EVT::getEVT(I.getType());
SmallVector<SDValue, 4> Ops;
for (auto &Op : I.operands())
Ops.push_back(getValue(Op));
- SDValue Mask =
- DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, sdl, IntrinsicVT, Ops);
+ SDValue Mask = DAG.getNode(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, sdl,
+ IntrinsicVT, Ops);
setValue(&I, Mask);
}
}
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index 2693e86c36ae8..4bfdae4a80e99 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -587,7 +587,7 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
return "partial_reduce_smla";
case ISD::PARTIAL_REDUCE_SUMLA:
return "partial_reduce_sumla";
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
return "alias_lane_mask";
// Vector Predication
diff --git a/llvm/lib/CodeGen/TargetLoweringBase.cpp b/llvm/lib/CodeGen/TargetLoweringBase.cpp
index c9e07a9dbaada..f8364f8656175 100644
--- a/llvm/lib/CodeGen/TargetLoweringBase.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringBase.cpp
@@ -900,8 +900,8 @@ void TargetLoweringBase::initActions() {
// Masked vector extracts default to expand.
setOperationAction(ISD::VECTOR_FIND_LAST_ACTIVE, VT, Expand);
- // Aliasing lanes mask default to expand
- setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Expand);
+ // Non-aliasing lanes mask default to expand
+ setOperationAction(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, VT, Expand);
// FP environment operations default to expand.
setOperationAction(ISD::GET_FPENV, VT, Expand);
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 7b61d938c0908..5bbbfd814b169 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1920,7 +1920,7 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
(Subtarget->hasSME() && Subtarget->isStreaming())) {
for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8, MVT::nxv2i1,
MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
- setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Custom);
+ setOperationAction(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, VT, Custom);
}
}
@@ -5238,8 +5238,9 @@ SDValue AArch64TargetLowering::LowerFSINCOS(SDValue Op,
static MVT getSVEContainerType(EVT ContentTy);
-SDValue AArch64TargetLowering::LowerALIAS_LANE_MASK(SDValue Op,
- SelectionDAG &DAG) const {
+SDValue
+AArch64TargetLowering::LowerNONALIAS_LANE_MASK(SDValue Op,
+ SelectionDAG &DAG) const {
SDLoc DL(Op);
unsigned IntrinsicID = 0;
uint64_t EltSize = Op.getConstantOperandVal(2);
@@ -6549,7 +6550,7 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
return DAG.getNode(AArch64ISD::USDOT, DL, Op.getValueType(),
Op.getOperand(1), Op.getOperand(2), Op.getOperand(3));
}
- case Intrinsic::experimental_get_alias_lane_mask: {
+ case Intrinsic::experimental_get_nonalias_lane_mask: {
unsigned IntrinsicID = 0;
uint64_t EltSize = Op.getOperand(3)->getAsZExtVal();
bool IsWriteAfterRead = Op.getOperand(4)->getAsZExtVal() == 1;
@@ -7480,8 +7481,8 @@ SDValue AArch64TargetLowering::LowerOperation(SDValue Op,
default:
llvm_unreachable("unimplemented operand");
return SDValue();
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- return LowerALIAS_LANE_MASK(Op, DAG);
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
+ return LowerNONALIAS_LANE_MASK(Op, DAG);
case ISD::BITCAST:
return LowerBITCAST(Op, DAG);
case ISD::GlobalAddress:
@@ -28312,7 +28313,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
case ISD::GET_ACTIVE_LANE_MASK:
ReplaceGetActiveLaneMaskResults(N, Results, DAG);
return;
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK: {
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK: {
EVT VT = N->getValueType(0);
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
@@ -28324,7 +28325,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
SDLoc DL(N);
auto V =
- DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, DL, NewVT, N->ops());
+ DAG.getNode(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, DL, NewVT, N->ops());
Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, VT, V));
return;
}
@@ -28385,7 +28386,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
return;
}
case Intrinsic::experimental_vector_match:
- case Intrinsic::experimental_get_alias_lane_mask: {
+ case Intrinsic::experimental_get_nonalias_lane_mask: {
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index 381816bd3defd..536e259d3f2c9 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -728,7 +728,7 @@ class AArch64TargetLowering : public TargetLowering {
SDValue LowerXOR(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerCONCAT_VECTORS(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerFSINCOS(SDValue Op, SelectionDAG &DAG) const;
- SDValue LowerALIAS_LANE_MASK(SDValue Op, SelectionDAG &DAG) const;
+ SDValue LowerNONALIAS_LANE_MASK(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerBITCAST(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerVSCALE(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerTRUNCATE(SDValue Op, SelectionDAG &DAG) const;
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index f88baeece0356..5ef6b588fe767 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -53,7 +53,7 @@ define <16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
+ %0 = call <16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
ret <16 x i1> %0
}
@@ -95,7 +95,7 @@ define <8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
+ %0 = call <8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
ret <8 x i1> %0
}
@@ -129,7 +129,7 @@ define <4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
+ %0 = call <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
ret <4 x i1> %0
}
@@ -159,7 +159,7 @@ define <2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
+ %0 = call <2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
ret <2 x i1> %0
}
@@ -215,7 +215,7 @@ define <16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
+ %0 = call <16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
ret <16 x i1> %0
}
@@ -258,7 +258,7 @@ define <8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
+ %0 = call <8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
ret <8 x i1> %0
}
@@ -293,7 +293,7 @@ define <4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
+ %0 = call <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
ret <4 x i1> %0
}
@@ -324,6 +324,6 @@ define <2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
+ %0 = call <2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
ret <2 x i1> %0
}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index 3d0f293b4687a..6884f14d685b5 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -60,7 +60,7 @@ define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
ret <vscale x 16 x i1> %0
}
@@ -98,7 +98,7 @@ define <vscale x 8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
ret <vscale x 8 x i1> %0
}
@@ -130,7 +130,7 @@ define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
ret <vscale x 4 x i1> %0
}
@@ -158,7 +158,7 @@ define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
ret <vscale x 2 x i1> %0
}
@@ -223,7 +223,7 @@ define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
ret <vscale x 16 x i1> %0
}
@@ -262,7 +262,7 @@ define <vscale x 8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
ret <vscale x 8 x i1> %0
}
@@ -295,7 +295,7 @@ define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
ret <vscale x 4 x i1> %0
}
@@ -324,6 +324,6 @@ define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
ret <vscale x 2 x i1> %0
}
>From 45cbaff190bd99b830292a066e1bcc8112ed2dca Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 16:15:26 +0000
Subject: [PATCH 14/43] Fix pointer types in example
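With opaque pointers, the masked load/store operands in the LangRef example
should be plain `ptr` rather than typed pointers. The corrected lines read:

  %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrA, i32 4, <4 x i1> %nonalias.lane.mask, <4 x i32> poison)
  call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrB, i32 4, <4 x i1> %nonalias.lane.mask)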
---
llvm/docs/LangRef.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 0b0c186c4f60f..ad2457f645cf0 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24187,9 +24187,9 @@ Examples:
.. code-block:: llvm
%nonalias.lane.mask = call <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)
- %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %nonalias.lane.mask, <4 x i32> poison)
+ %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrA, i32 4, <4 x i1> %nonalias.lane.mask, <4 x i32> poison)
[...]
- call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %nonalias.lane.mask)
+ call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrB, i32 4, <4 x i1> %nonalias.lane.mask)
.. _int_experimental_vp_splice:
>From 1b7b0daff9f76e29c544f02d30c2a230813f3508 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 16:15:35 +0000
Subject: [PATCH 15/43] Remove shouldExpandGetAliasLaneMask
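This hook is no longer needed now that lowering goes through the dedicated
ISD node, whose operation action already defaults to Expand in
TargetLoweringBase::initActions().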
---
llvm/include/llvm/CodeGen/TargetLowering.h | 7 -------
1 file changed, 7 deletions(-)
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index bc24a4a3947b7..ed7495694cc70 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -482,13 +482,6 @@ class LLVM_ABI TargetLoweringBase {
return true;
}
- /// Return true if the @llvm.experimental.get.alias.lane.mask intrinsic should
- /// be expanded using generic code in SelectionDAGBuilder.
- virtual bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT,
- unsigned EltSize) const {
- return true;
- }
-
virtual bool shouldExpandGetVectorLength(EVT CountVT, unsigned VF,
bool IsScalable) const {
return true;
>From 0a0de880fd5c757aef4a07122a31d7e68524e0d7 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 16:25:35 +0000
Subject: [PATCH 16/43] Lower to ISD node rather than intrinsic
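Instead of round-tripping through ISD::INTRINSIC_WO_CHAIN with an SVE
intrinsic ID, the lowering now builds the AArch64ISD::WHILEWR /
AArch64ISD::WHILERW nodes directly, on both the scalable and the
fixed-length paths.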
---
.../Target/AArch64/AArch64ISelLowering.cpp | 19 +++++--------------
1 file changed, 5 insertions(+), 14 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 5bbbfd814b169..c027f67030551 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5242,40 +5242,33 @@ SDValue
AArch64TargetLowering::LowerNONALIAS_LANE_MASK(SDValue Op,
SelectionDAG &DAG) const {
SDLoc DL(Op);
- unsigned IntrinsicID = 0;
uint64_t EltSize = Op.getConstantOperandVal(2);
bool IsWriteAfterRead = Op.getConstantOperandVal(3) == 1;
+ unsigned Opcode =
+ IsWriteAfterRead ? AArch64ISD::WHILEWR : AArch64ISD::WHILERW;
EVT VT = Op.getValueType();
MVT SimpleVT = VT.getSimpleVT();
// Make sure that the promoted mask size and element size match
switch (EltSize) {
case 1:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
- : Intrinsic::aarch64_sve_whilerw_b;
assert((SimpleVT == MVT::v16i8 || SimpleVT == MVT::nxv16i1) &&
"Unexpected mask or element size");
IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
: Intrinsic::aarch64_sve_whilerw_b;
break;
case 2:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
- : Intrinsic::aarch64_sve_whilerw_h;
assert((SimpleVT == MVT::v8i8 || SimpleVT == MVT::nxv8i1) &&
"Unexpected mask or element size");
IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
: Intrinsic::aarch64_sve_whilerw_h;
break;
case 4:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
- : Intrinsic::aarch64_sve_whilerw_s;
assert((SimpleVT == MVT::v4i16 || SimpleVT == MVT::nxv4i1) &&
"Unexpected mask or element size");
IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
: Intrinsic::aarch64_sve_whilerw_s;
break;
case 8:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
- : Intrinsic::aarch64_sve_whilerw_d;
assert((SimpleVT == MVT::v2i32 || SimpleVT == MVT::nxv2i1) &&
"Unexpected mask or element size");
IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
@@ -5285,11 +5278,9 @@ AArch64TargetLowering::LowerNONALIAS_LANE_MASK(SDValue Op,
llvm_unreachable("Unexpected element size for get.alias.lane.mask");
break;
}
- SDValue ID = DAG.getTargetConstant(IntrinsicID, DL, MVT::i64);
if (VT.isScalableVector())
- return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, VT, ID, Op.getOperand(0),
- Op.getOperand(1));
+ return DAG.getNode(Opcode, DL, VT, Op.getOperand(0), Op.getOperand(1));
// We can use the SVE whilewr/whilerw instruction to lower this
// intrinsic by creating the appropriate sequence of scalable vector
@@ -5299,8 +5290,8 @@ AArch64TargetLowering::LowerNONALIAS_LANE_MASK(SDValue Op,
EVT ContainerVT = getContainerForFixedLengthVector(DAG, VT);
EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
- SDValue Mask = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, WhileVT, ID,
- Op.getOperand(0), Op.getOperand(1));
+ SDValue Mask =
+ DAG.getNode(Opcode, DL, WhileVT, Op.getOperand(0), Op.getOperand(1));
SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, DL, ContainerVT, Mask);
return DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, MaskAsInt,
DAG.getVectorIdxConstant(0, DL));
>From 54d32ad31a62099eb3edcd383b330a271405c4c1 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Fri, 31 Jan 2025 14:24:11 +0000
Subject: [PATCH 17/43] Rename to noalias
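"noalias" matches the spelling LLVM already uses for the parameter
attribute of the same name. A call after this rename, with illustrative
operands:

  %noalias.lane.mask = call <4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)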
---
llvm/docs/LangRef.rst | 28 +++++++++----------
llvm/include/llvm/CodeGen/ISDOpcodes.h | 2 +-
llvm/include/llvm/IR/Intrinsics.td | 2 +-
.../SelectionDAG/LegalizeIntegerTypes.cpp | 16 +++++------
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h | 5 ++--
.../SelectionDAG/LegalizeVectorOps.cpp | 10 +++----
.../SelectionDAG/SelectionDAGBuilder.cpp | 6 ++--
.../SelectionDAG/SelectionDAGDumper.cpp | 2 +-
llvm/lib/CodeGen/TargetLoweringBase.cpp | 2 +-
.../Target/AArch64/AArch64ISelLowering.cpp | 25 ++++++-----------
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 2 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 16 +++++------
.../CodeGen/AArch64/alias_mask_scalable.ll | 16 +++++------
13 files changed, 61 insertions(+), 71 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index ad2457f645cf0..e45bdc1c607d6 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24104,10 +24104,10 @@ Examples:
%active.lane.mask = call <4 x i1> @llvm.get.active.lane.mask.v4i1.i64(i64 %elem0, i64 429)
%wide.masked.load = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %3, i32 4, <4 x i1> %active.lane.mask, <4 x i32> poison)
-.. _int_experimental_get_nonalias_lane_mask:
+.. _int_experimental_get_noalias_lane_mask:
-'``llvm.experimental.get.nonalias.lane.mask.*``' Intrinsics
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+'``llvm.experimental.get.noalias.lane.mask.*``' Intrinsics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Syntax:
"""""""
@@ -24115,10 +24115,10 @@ This is an overloaded intrinsic.
::
- declare <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <vscale x 16 x i1> @llvm.experimental.get.nonalias.lane.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <8 x i1> @llvm.experimental.get.noalias.lane.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <16 x i1> @llvm.experimental.get.noalias.lane.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <vscale x 16 x i1> @llvm.experimental.get.noalias.lane.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
Overview:
@@ -24141,7 +24141,7 @@ Semantics:
The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
In other cases when ``%writeAfterRead`` is true, the
-'``llvm.experimental.get.nonalias.lane.mask.*``' intrinsics are semantically
+'``llvm.experimental.get.noalias.lane.mask.*``' intrinsics are semantically
equivalent to:
::
@@ -24150,7 +24150,7 @@ equivalent to:
%m[i] = (icmp ult i, %diff) || (%diff <= 0)
When the return value is not poison and ``%writeAfterRead`` is false, the
-'``llvm.experimental.get.nonalias.lane.mask.*``' intrinsics are semantically
+'``llvm.experimental.get.noalias.lane.mask.*``' intrinsics are semantically
equivalent to:
::
@@ -24160,7 +24160,7 @@ equivalent to:
where ``%m`` is a vector (mask) of active/inactive lanes with its elements
indexed by ``i`` (i = 0 to VF - 1), and ``%ptrA``, ``%ptrB`` are the two ptr
-arguments to ``llvm.experimental.get.nonalias.lane.mask.*`` and ``%elementSize``
+arguments to ``llvm.experimental.get.noalias.lane.mask.*`` and ``%elementSize``
is the first immediate argument. The ``%writeAfterRead`` argument is expected
to be true if ``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise
it is false for a read after write.
@@ -24168,7 +24168,7 @@ The above is equivalent to:
::
- %m = @llvm.experimental.get.nonalias.lane.mask(%ptrA, %ptrB, %elementSize, %writeAfterRead)
+ %m = @llvm.experimental.get.noalias.lane.mask(%ptrA, %ptrB, %elementSize, %writeAfterRead)
This can, for example, be emitted by the loop vectorizer in which case
``%ptrA`` is a pointer that is read from within the loop, and ``%ptrB`` is a
@@ -24186,10 +24186,10 @@ Examples:
.. code-block:: llvm
- %nonalias.lane.mask = call <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)
- %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrA, i32 4, <4 x i1> %nonalias.lane.mask, <4 x i32> poison)
+ %noalias.lane.mask = call <4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)
+ %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrA, i32 4, <4 x i1> %noalias.lane.mask, <4 x i32> poison)
[...]
- call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrB, i32 4, <4 x i1> %nonalias.lane.mask)
+ call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrB, i32 4, <4 x i1> %noalias.lane.mask)
.. _int_experimental_vp_splice:
diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index e1e903d3063af..c4f8e13814c81 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -1561,7 +1561,7 @@ enum NodeType {
// The `llvm.experimental.get.alias.lane.mask.*` intrinsics
// Operands: Load pointer, Store pointer, Element size, Write after read
// Output: Mask
- EXPERIMENTAL_NONALIAS_LANE_MASK,
+ EXPERIMENTAL_NOALIAS_LANE_MASK,
// llvm.clear_cache intrinsic
// Operands: Input Chain, Start Address, End Address
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index dd9ed94f19cd5..73291c1eb2bfa 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -2420,7 +2420,7 @@ let IntrProperties = [IntrNoMem, ImmArg<ArgIndex<1>>] in {
llvm_i32_ty]>;
}
-def int_experimental_get_nonalias_lane_mask:
+def int_experimental_get_noalias_lane_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
[llvm_ptr_ty, llvm_ptr_ty, llvm_i64_ty, llvm_i1_ty],
[IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>, ImmArg<ArgIndex<3>>]>;
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index d57f60fbe4830..e4580a17eef1a 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -324,8 +324,8 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
Res = PromoteIntRes_VP_REDUCE(N);
break;
- case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
- Res = PromoteIntRes_EXPERIMENTAL_NONALIAS_LANE_MASK(N);
+ case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
+ Res = PromoteIntRes_EXPERIMENTAL_NOALIAS_LANE_MASK(N);
break;
case ISD::FREEZE:
@@ -379,10 +379,10 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(SDNode *N,
}
SDValue
-DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N) {
+DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N) {
EVT VT = N->getValueType(0);
EVT NewVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
- return DAG.getNode(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, SDLoc(N), NewVT,
+ return DAG.getNode(ISD::EXPERIMENTAL_NOALIAS_LANE_MASK, SDLoc(N), NewVT,
N->ops());
}
@@ -2116,8 +2116,8 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
case ISD::PARTIAL_REDUCE_SUMLA:
Res = PromoteIntOp_PARTIAL_REDUCE_MLA(N);
break;
- case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
- Res = PromoteIntOp_EXPERIMENTAL_NONALIAS_LANE_MASK(N, OpNo);
+ case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
+ Res = PromoteIntOp_EXPERIMENTAL_NOALIAS_LANE_MASK(N, OpNo);
break;
}
@@ -2953,8 +2953,8 @@ SDValue DAGTypeLegalizer::PromoteIntOp_PARTIAL_REDUCE_MLA(SDNode *N) {
}
SDValue
-DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N,
- unsigned OpNo) {
+DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N,
+ unsigned OpNo) {
SmallVector<SDValue, 4> NewOps(N->ops());
NewOps[OpNo] = GetPromotedInteger(N->getOperand(OpNo));
return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index 1d56b5efdb9d2..516d09ab65223 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -382,7 +382,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntRes_VECTOR_FIND_LAST_ACTIVE(SDNode *N);
SDValue PromoteIntRes_GET_ACTIVE_LANE_MASK(SDNode *N);
SDValue PromoteIntRes_PARTIAL_REDUCE_MLA(SDNode *N);
- SDValue PromoteIntRes_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N);
+ SDValue PromoteIntRes_EXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N);
// Integer Operand Promotion.
bool PromoteIntegerOperand(SDNode *N, unsigned OpNo);
@@ -437,8 +437,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(SDNode *N, unsigned OpNo);
SDValue PromoteIntOp_GET_ACTIVE_LANE_MASK(SDNode *N);
SDValue PromoteIntOp_PARTIAL_REDUCE_MLA(SDNode *N);
- SDValue PromoteIntOp_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N,
- unsigned OpNo);
+ SDValue PromoteIntOp_EXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N, unsigned OpNo);
void SExtOrZExtPromotedOperands(SDValue &LHS, SDValue &RHS);
void PromoteSetCCOperands(SDValue &LHS,SDValue &RHS, ISD::CondCode Code);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index 59539fa658c58..7f45bfdfe6758 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -138,7 +138,7 @@ class VectorLegalizer {
SDValue ExpandVP_FNEG(SDNode *Node);
SDValue ExpandVP_FABS(SDNode *Node);
SDValue ExpandVP_FCOPYSIGN(SDNode *Node);
- SDValue ExpandEXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N);
+ SDValue ExpandEXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N);
SDValue ExpandSELECT(SDNode *Node);
std::pair<SDValue, SDValue> ExpandLoad(SDNode *N);
SDValue ExpandStore(SDNode *N);
@@ -476,7 +476,7 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
case ISD::VECTOR_COMPRESS:
case ISD::SCMP:
case ISD::UCMP:
- case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
+ case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
break;
case ISD::SMULFIX:
@@ -1293,8 +1293,8 @@ void VectorLegalizer::Expand(SDNode *Node, SmallVectorImpl<SDValue> &Results) {
case ISD::UCMP:
Results.push_back(TLI.expandCMP(Node, DAG));
return;
- case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
- Results.push_back(ExpandEXPERIMENTAL_NONALIAS_LANE_MASK(Node));
+ case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
+ Results.push_back(ExpandEXPERIMENTAL_NOALIAS_LANE_MASK(Node));
return;
case ISD::FADD:
@@ -1801,7 +1801,7 @@ SDValue VectorLegalizer::ExpandVP_FCOPYSIGN(SDNode *Node) {
return DAG.getNode(ISD::BITCAST, DL, VT, CopiedSign);
}
-SDValue VectorLegalizer::ExpandEXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N) {
+SDValue VectorLegalizer::ExpandEXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N) {
SDLoc DL(N);
SDValue SourceValue = N->getOperand(0);
SDValue SinkValue = N->getOperand(1);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 78fc429fdd254..ba2d125e7e067 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8314,13 +8314,13 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
visitVectorExtractLastActive(I, Intrinsic);
return;
}
- case Intrinsic::experimental_get_nonalias_lane_mask: {
+ case Intrinsic::experimental_get_noalias_lane_mask: {
auto IntrinsicVT = EVT::getEVT(I.getType());
SmallVector<SDValue, 4> Ops;
for (auto &Op : I.operands())
Ops.push_back(getValue(Op));
- SDValue Mask = DAG.getNode(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, sdl,
- IntrinsicVT, Ops);
+ SDValue Mask =
+ DAG.getNode(ISD::EXPERIMENTAL_NOALIAS_LANE_MASK, sdl, IntrinsicVT, Ops);
setValue(&I, Mask);
}
}
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index 4bfdae4a80e99..7a751fc09faaa 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -587,7 +587,7 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
return "partial_reduce_smla";
case ISD::PARTIAL_REDUCE_SUMLA:
return "partial_reduce_sumla";
- case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
+ case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
return "alias_lane_mask";
// Vector Predication
diff --git a/llvm/lib/CodeGen/TargetLoweringBase.cpp b/llvm/lib/CodeGen/TargetLoweringBase.cpp
index f8364f8656175..5d9668ef048b7 100644
--- a/llvm/lib/CodeGen/TargetLoweringBase.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringBase.cpp
@@ -901,7 +901,7 @@ void TargetLoweringBase::initActions() {
setOperationAction(ISD::VECTOR_FIND_LAST_ACTIVE, VT, Expand);
// Non-aliasing lanes mask default to expand
- setOperationAction(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, VT, Expand);
+ setOperationAction(ISD::EXPERIMENTAL_NOALIAS_LANE_MASK, VT, Expand);
// FP environment operations default to expand.
setOperationAction(ISD::GET_FPENV, VT, Expand);
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index c027f67030551..6bed16e845707 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1920,7 +1920,7 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
(Subtarget->hasSME() && Subtarget->isStreaming())) {
for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8, MVT::nxv2i1,
MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
- setOperationAction(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, VT, Custom);
+ setOperationAction(ISD::EXPERIMENTAL_NOALIAS_LANE_MASK, VT, Custom);
}
}
@@ -5238,9 +5238,8 @@ SDValue AArch64TargetLowering::LowerFSINCOS(SDValue Op,
static MVT getSVEContainerType(EVT ContentTy);
-SDValue
-AArch64TargetLowering::LowerNONALIAS_LANE_MASK(SDValue Op,
- SelectionDAG &DAG) const {
+SDValue AArch64TargetLowering::LowerNOALIAS_LANE_MASK(SDValue Op,
+ SelectionDAG &DAG) const {
SDLoc DL(Op);
uint64_t EltSize = Op.getConstantOperandVal(2);
bool IsWriteAfterRead = Op.getConstantOperandVal(3) == 1;
@@ -5253,26 +5252,18 @@ AArch64TargetLowering::LowerNONALIAS_LANE_MASK(SDValue Op,
case 1:
assert((SimpleVT == MVT::v16i8 || SimpleVT == MVT::nxv16i1) &&
"Unexpected mask or element size");
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
- : Intrinsic::aarch64_sve_whilerw_b;
break;
case 2:
assert((SimpleVT == MVT::v8i8 || SimpleVT == MVT::nxv8i1) &&
"Unexpected mask or element size");
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
- : Intrinsic::aarch64_sve_whilerw_h;
break;
case 4:
assert((SimpleVT == MVT::v4i16 || SimpleVT == MVT::nxv4i1) &&
"Unexpected mask or element size");
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
- : Intrinsic::aarch64_sve_whilerw_s;
break;
case 8:
assert((SimpleVT == MVT::v2i32 || SimpleVT == MVT::nxv2i1) &&
"Unexpected mask or element size");
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
- : Intrinsic::aarch64_sve_whilerw_d;
break;
default:
llvm_unreachable("Unexpected element size for get.alias.lane.mask");
@@ -7472,8 +7463,8 @@ SDValue AArch64TargetLowering::LowerOperation(SDValue Op,
default:
llvm_unreachable("unimplemented operand");
return SDValue();
- case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
- return LowerNONALIAS_LANE_MASK(Op, DAG);
+ case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
+ return LowerNOALIAS_LANE_MASK(Op, DAG);
case ISD::BITCAST:
return LowerBITCAST(Op, DAG);
case ISD::GlobalAddress:
@@ -28304,7 +28295,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
case ISD::GET_ACTIVE_LANE_MASK:
ReplaceGetActiveLaneMaskResults(N, Results, DAG);
return;
- case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK: {
+ case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK: {
EVT VT = N->getValueType(0);
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
@@ -28316,7 +28307,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
SDLoc DL(N);
auto V =
- DAG.getNode(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, DL, NewVT, N->ops());
+ DAG.getNode(ISD::EXPERIMENTAL_NOALIAS_LANE_MASK, DL, NewVT, N->ops());
Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, VT, V));
return;
}
@@ -28377,7 +28368,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
return;
}
case Intrinsic::experimental_vector_match:
- case Intrinsic::experimental_get_nonalias_lane_mask: {
+ case Intrinsic::experimental_get_noalias_lane_mask: {
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index 536e259d3f2c9..34b18723eaabf 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -728,7 +728,7 @@ class AArch64TargetLowering : public TargetLowering {
SDValue LowerXOR(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerCONCAT_VECTORS(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerFSINCOS(SDValue Op, SelectionDAG &DAG) const;
- SDValue LowerNONALIAS_LANE_MASK(SDValue Op, SelectionDAG &DAG) const;
+ SDValue LowerNOALIAS_LANE_MASK(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerBITCAST(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerVSCALE(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerTRUNCATE(SDValue Op, SelectionDAG &DAG) const;
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 5ef6b588fe767..21eff3b11c001 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -53,7 +53,7 @@ define <16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
+ %0 = call <16 x i1> @llvm.experimental.get.noalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
ret <16 x i1> %0
}
@@ -95,7 +95,7 @@ define <8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
+ %0 = call <8 x i1> @llvm.experimental.get.noalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
ret <8 x i1> %0
}
@@ -129,7 +129,7 @@ define <4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
+ %0 = call <4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
ret <4 x i1> %0
}
@@ -159,7 +159,7 @@ define <2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
+ %0 = call <2 x i1> @llvm.experimental.get.noalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
ret <2 x i1> %0
}
@@ -215,7 +215,7 @@ define <16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
+ %0 = call <16 x i1> @llvm.experimental.get.noalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
ret <16 x i1> %0
}
@@ -258,7 +258,7 @@ define <8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
+ %0 = call <8 x i1> @llvm.experimental.get.noalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
ret <8 x i1> %0
}
@@ -293,7 +293,7 @@ define <4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
+ %0 = call <4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
ret <4 x i1> %0
}
@@ -324,6 +324,6 @@ define <2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
+ %0 = call <2 x i1> @llvm.experimental.get.noalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
ret <2 x i1> %0
}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index 6884f14d685b5..b29619c7f397d 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -60,7 +60,7 @@ define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.noalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
ret <vscale x 16 x i1> %0
}
@@ -98,7 +98,7 @@ define <vscale x 8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.noalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
ret <vscale x 8 x i1> %0
}
@@ -130,7 +130,7 @@ define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
ret <vscale x 4 x i1> %0
}
@@ -158,7 +158,7 @@ define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.noalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
ret <vscale x 2 x i1> %0
}
@@ -223,7 +223,7 @@ define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.noalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
ret <vscale x 16 x i1> %0
}
@@ -262,7 +262,7 @@ define <vscale x 8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.noalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
ret <vscale x 8 x i1> %0
}
@@ -295,7 +295,7 @@ define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
ret <vscale x 4 x i1> %0
}
@@ -324,6 +324,6 @@ define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.noalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
ret <vscale x 2 x i1> %0
}
>From 2066929205e6401566652c3b22cea29804eab513 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 26 Feb 2025 23:39:53 +0000
Subject: [PATCH 18/43] Rename to loop.dependence.raw/war.mask
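The i1 %writeAfterRead immediate is dropped in favour of two separate
intrinsics, one per hazard direction, so the direction is encoded in the
intrinsic name. Sketches of the two forms (operand values illustrative,
type mangling assumed to follow the earlier examples):

  %war.mask = call <4 x i1> @llvm.experimental.loop.dependence.war.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4)
  %raw.mask = call <4 x i1> @llvm.experimental.loop.dependence.raw.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4)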
---
llvm/include/llvm/CodeGen/ISDOpcodes.h | 7 ++---
llvm/include/llvm/IR/Intrinsics.td | 11 +++++---
.../SelectionDAG/LegalizeIntegerTypes.cpp | 20 +++++++-------
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h | 5 ++--
.../SelectionDAG/LegalizeVectorOps.cpp | 15 ++++++-----
.../SelectionDAG/SelectionDAGBuilder.cpp | 9 ++++---
.../SelectionDAG/SelectionDAGDumper.cpp | 6 +++--
llvm/lib/CodeGen/TargetLoweringBase.cpp | 5 ++--
.../Target/AArch64/AArch64ISelLowering.cpp | 27 ++++++++++++-------
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 2 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 16 +++++------
.../CodeGen/AArch64/alias_mask_scalable.ll | 16 +++++------
12 files changed, 81 insertions(+), 58 deletions(-)
diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index c4f8e13814c81..57c3873e1451d 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -1558,10 +1558,11 @@ enum NodeType {
// bits conform to getBooleanContents similar to the SETCC operator.
GET_ACTIVE_LANE_MASK,
- // The `llvm.experimental.get.alias.lane.mask.*` intrinsics
- // Operands: Load pointer, Store pointer, Element size, Write after read
+ // The `llvm.experimental.loop.dependence.{war, raw}.mask` intrinsics
+ // Operands: Load pointer, Store pointer, Element size
// Output: Mask
- EXPERIMENTAL_NOALIAS_LANE_MASK,
+ EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK,
+ EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK,
// llvm.clear_cache intrinsic
// Operands: Input Chain, Start Address, End Address
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index 73291c1eb2bfa..d190efd2f65f2 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -2420,10 +2420,15 @@ let IntrProperties = [IntrNoMem, ImmArg<ArgIndex<1>>] in {
llvm_i32_ty]>;
}
-def int_experimental_get_noalias_lane_mask:
+def int_experimental_loop_dependence_raw_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
- [llvm_ptr_ty, llvm_ptr_ty, llvm_i64_ty, llvm_i1_ty],
- [IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>, ImmArg<ArgIndex<3>>]>;
+ [llvm_ptr_ty, llvm_ptr_ty, llvm_i64_ty],
+ [IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>]>;
+
+def int_experimental_loop_dependence_war_mask:
+ DefaultAttrsIntrinsic<[llvm_anyvector_ty],
+ [llvm_ptr_ty, llvm_ptr_ty, llvm_i64_ty],
+ [IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>]>;
def int_get_active_lane_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index e4580a17eef1a..3f8caeb1d4f42 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -324,8 +324,9 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
Res = PromoteIntRes_VP_REDUCE(N);
break;
- case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
- Res = PromoteIntRes_EXPERIMENTAL_NOALIAS_LANE_MASK(N);
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
+ Res = PromoteIntRes_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(N);
break;
case ISD::FREEZE:
@@ -379,11 +380,10 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(SDNode *N,
}
SDValue
-DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N) {
+DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(SDNode *N) {
EVT VT = N->getValueType(0);
EVT NewVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
- return DAG.getNode(ISD::EXPERIMENTAL_NOALIAS_LANE_MASK, SDLoc(N), NewVT,
- N->ops());
+ return DAG.getNode(N->getOpcode(), SDLoc(N), NewVT, N->ops());
}
SDValue DAGTypeLegalizer::PromoteIntRes_AssertSext(SDNode *N) {
@@ -2116,8 +2116,9 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
case ISD::PARTIAL_REDUCE_SUMLA:
Res = PromoteIntOp_PARTIAL_REDUCE_MLA(N);
break;
- case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
- Res = PromoteIntOp_EXPERIMENTAL_NOALIAS_LANE_MASK(N, OpNo);
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
+ Res = PromoteIntOp_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(N, OpNo);
break;
}
@@ -2952,9 +2953,8 @@ SDValue DAGTypeLegalizer::PromoteIntOp_PARTIAL_REDUCE_MLA(SDNode *N) {
return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
}
-SDValue
-DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N,
- unsigned OpNo) {
+SDValue DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(
+ SDNode *N, unsigned OpNo) {
SmallVector<SDValue, 4> NewOps(N->ops());
NewOps[OpNo] = GetPromotedInteger(N->getOperand(OpNo));
return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index 516d09ab65223..fd508bead27c9 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -382,7 +382,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntRes_VECTOR_FIND_LAST_ACTIVE(SDNode *N);
SDValue PromoteIntRes_GET_ACTIVE_LANE_MASK(SDNode *N);
SDValue PromoteIntRes_PARTIAL_REDUCE_MLA(SDNode *N);
- SDValue PromoteIntRes_EXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N);
+ SDValue PromoteIntRes_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(SDNode *N);
// Integer Operand Promotion.
bool PromoteIntegerOperand(SDNode *N, unsigned OpNo);
@@ -437,7 +437,8 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(SDNode *N, unsigned OpNo);
SDValue PromoteIntOp_GET_ACTIVE_LANE_MASK(SDNode *N);
SDValue PromoteIntOp_PARTIAL_REDUCE_MLA(SDNode *N);
- SDValue PromoteIntOp_EXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N, unsigned OpNo);
+ SDValue PromoteIntOp_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(SDNode *N,
+ unsigned OpNo);
void SExtOrZExtPromotedOperands(SDValue &LHS, SDValue &RHS);
void PromoteSetCCOperands(SDValue &LHS,SDValue &RHS, ISD::CondCode Code);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index 7f45bfdfe6758..86720471e9584 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -138,7 +138,7 @@ class VectorLegalizer {
SDValue ExpandVP_FNEG(SDNode *Node);
SDValue ExpandVP_FABS(SDNode *Node);
SDValue ExpandVP_FCOPYSIGN(SDNode *Node);
- SDValue ExpandEXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N);
+ SDValue ExpandLOOP_DEPENDENCE_MASK(SDNode *N);
SDValue ExpandSELECT(SDNode *Node);
std::pair<SDValue, SDValue> ExpandLoad(SDNode *N);
SDValue ExpandStore(SDNode *N);
@@ -476,7 +476,8 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
case ISD::VECTOR_COMPRESS:
case ISD::SCMP:
case ISD::UCMP:
- case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
break;
case ISD::SMULFIX:
@@ -1293,8 +1294,9 @@ void VectorLegalizer::Expand(SDNode *Node, SmallVectorImpl<SDValue> &Results) {
case ISD::UCMP:
Results.push_back(TLI.expandCMP(Node, DAG));
return;
- case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
- Results.push_back(ExpandEXPERIMENTAL_NOALIAS_LANE_MASK(Node));
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
+ Results.push_back(ExpandLOOP_DEPENDENCE_MASK(Node));
return;
case ISD::FADD:
@@ -1801,13 +1803,14 @@ SDValue VectorLegalizer::ExpandVP_FCOPYSIGN(SDNode *Node) {
return DAG.getNode(ISD::BITCAST, DL, VT, CopiedSign);
}
-SDValue VectorLegalizer::ExpandEXPERIMENTAL_NOALIAS_LANE_MASK(SDNode *N) {
+SDValue VectorLegalizer::ExpandLOOP_DEPENDENCE_MASK(SDNode *N) {
SDLoc DL(N);
SDValue SourceValue = N->getOperand(0);
SDValue SinkValue = N->getOperand(1);
SDValue EltSize = N->getOperand(2);
- bool IsWriteAfterRead = N->getConstantOperandVal(3) != 0;
+ bool IsWriteAfterRead =
+ N->getOpcode() == ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK;
auto VT = N->getValueType(0);
auto PtrVT = SourceValue->getValueType(0);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index ba2d125e7e067..77fe8b623762a 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8314,13 +8314,16 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
visitVectorExtractLastActive(I, Intrinsic);
return;
}
- case Intrinsic::experimental_get_noalias_lane_mask: {
+ case Intrinsic::experimental_loop_dependence_war_mask:
+ case Intrinsic::experimental_loop_dependence_raw_mask: {
auto IntrinsicVT = EVT::getEVT(I.getType());
SmallVector<SDValue, 4> Ops;
for (auto &Op : I.operands())
Ops.push_back(getValue(Op));
- SDValue Mask =
- DAG.getNode(ISD::EXPERIMENTAL_NOALIAS_LANE_MASK, sdl, IntrinsicVT, Ops);
+ unsigned ID = Intrinsic == Intrinsic::experimental_loop_dependence_war_mask
+ ? ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK
+ : ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK;
+ SDValue Mask = DAG.getNode(ID, sdl, IntrinsicVT, Ops);
setValue(&I, Mask);
}
}
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index 7a751fc09faaa..d5c8431370243 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -587,8 +587,10 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
return "partial_reduce_smla";
case ISD::PARTIAL_REDUCE_SUMLA:
return "partial_reduce_sumla";
- case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
- return "alias_lane_mask";
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
+ return "loop_dep_war";
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
+ return "loop_dep_raw";
// Vector Predication
#define BEGIN_REGISTER_VP_SDNODE(SDID, LEGALARG, NAME, ...) \
diff --git a/llvm/lib/CodeGen/TargetLoweringBase.cpp b/llvm/lib/CodeGen/TargetLoweringBase.cpp
index 5d9668ef048b7..00b195fa38c2f 100644
--- a/llvm/lib/CodeGen/TargetLoweringBase.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringBase.cpp
@@ -900,8 +900,9 @@ void TargetLoweringBase::initActions() {
// Masked vector extracts default to expand.
setOperationAction(ISD::VECTOR_FIND_LAST_ACTIVE, VT, Expand);
- // Non-aliasing lanes mask default to expand
- setOperationAction(ISD::EXPERIMENTAL_NOALIAS_LANE_MASK, VT, Expand);
+ // Lane masks with non-aliasing lanes enabled default to expand
+ setOperationAction(ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK, VT, Expand);
+ setOperationAction(ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK, VT, Expand);
// FP environment operations default to expand.
setOperationAction(ISD::GET_FPENV, VT, Expand);
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 6bed16e845707..b39196ce38b53 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1920,7 +1920,10 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
(Subtarget->hasSME() && Subtarget->isStreaming())) {
for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8, MVT::nxv2i1,
MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
- setOperationAction(ISD::EXPERIMENTAL_NOALIAS_LANE_MASK, VT, Custom);
+ setOperationAction(ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK, VT,
+ Custom);
+ setOperationAction(ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK, VT,
+ Custom);
}
}
@@ -5238,11 +5241,13 @@ SDValue AArch64TargetLowering::LowerFSINCOS(SDValue Op,
static MVT getSVEContainerType(EVT ContentTy);
-SDValue AArch64TargetLowering::LowerNOALIAS_LANE_MASK(SDValue Op,
- SelectionDAG &DAG) const {
+SDValue
+AArch64TargetLowering::LowerLOOP_DEPENDENCE_MASK(SDValue Op,
+ SelectionDAG &DAG) const {
SDLoc DL(Op);
uint64_t EltSize = Op.getConstantOperandVal(2);
- bool IsWriteAfterRead = Op.getConstantOperandVal(3) == 1;
+ bool IsWriteAfterRead =
+ Op.getOpcode() == ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK;
unsigned Opcode =
IsWriteAfterRead ? AArch64ISD::WHILEWR : AArch64ISD::WHILERW;
EVT VT = Op.getValueType();
@@ -7463,8 +7468,9 @@ SDValue AArch64TargetLowering::LowerOperation(SDValue Op,
default:
llvm_unreachable("unimplemented operand");
return SDValue();
- case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK:
- return LowerNOALIAS_LANE_MASK(Op, DAG);
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
+ return LowerLOOP_DEPENDENCE_MASK(Op, DAG);
case ISD::BITCAST:
return LowerBITCAST(Op, DAG);
case ISD::GlobalAddress:
@@ -28295,7 +28301,8 @@ void AArch64TargetLowering::ReplaceNodeResults(
case ISD::GET_ACTIVE_LANE_MASK:
ReplaceGetActiveLaneMaskResults(N, Results, DAG);
return;
- case ISD::EXPERIMENTAL_NOALIAS_LANE_MASK: {
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
+ case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK: {
EVT VT = N->getValueType(0);
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
@@ -28306,8 +28313,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
return;
SDLoc DL(N);
- auto V =
- DAG.getNode(ISD::EXPERIMENTAL_NOALIAS_LANE_MASK, DL, NewVT, N->ops());
+ auto V = DAG.getNode(N->getOpcode(), DL, NewVT, N->ops());
Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, VT, V));
return;
}
@@ -28368,7 +28374,8 @@ void AArch64TargetLowering::ReplaceNodeResults(
return;
}
case Intrinsic::experimental_vector_match:
- case Intrinsic::experimental_get_noalias_lane_mask: {
+ case Intrinsic::experimental_loop_dependence_raw_mask:
+ case Intrinsic::experimental_loop_dependence_war_mask: {
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index 34b18723eaabf..8869ecf423a61 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -728,7 +728,7 @@ class AArch64TargetLowering : public TargetLowering {
SDValue LowerXOR(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerCONCAT_VECTORS(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerFSINCOS(SDValue Op, SelectionDAG &DAG) const;
- SDValue LowerNOALIAS_LANE_MASK(SDValue Op, SelectionDAG &DAG) const;
+ SDValue LowerLOOP_DEPENDENCE_MASK(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerBITCAST(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerVSCALE(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerTRUNCATE(SDValue Op, SelectionDAG &DAG) const;
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 21eff3b11c001..3248cb2de2644 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -53,7 +53,7 @@ define <16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.noalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
+ %0 = call <16 x i1> @llvm.experimental.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
ret <16 x i1> %0
}
@@ -95,7 +95,7 @@ define <8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.noalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
+ %0 = call <8 x i1> @llvm.experimental.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 2)
ret <8 x i1> %0
}
@@ -129,7 +129,7 @@ define <4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
+ %0 = call <4 x i1> @llvm.experimental.loop.dependence.war.mask.v4i1(ptr %a, ptr %b, i64 4)
ret <4 x i1> %0
}
@@ -159,7 +159,7 @@ define <2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.noalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
+ %0 = call <2 x i1> @llvm.experimental.loop.dependence.war.mask.v2i1(ptr %a, ptr %b, i64 8)
ret <2 x i1> %0
}
@@ -215,7 +215,7 @@ define <16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.noalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
+ %0 = call <16 x i1> @llvm.experimental.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 1)
ret <16 x i1> %0
}
@@ -258,7 +258,7 @@ define <8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.noalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
+ %0 = call <8 x i1> @llvm.experimental.loop.dependence.raw.mask.v8i1(ptr %a, ptr %b, i64 2)
ret <8 x i1> %0
}
@@ -293,7 +293,7 @@ define <4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
+ %0 = call <4 x i1> @llvm.experimental.loop.dependence.raw.mask.v4i1(ptr %a, ptr %b, i64 4)
ret <4 x i1> %0
}
@@ -324,6 +324,6 @@ define <2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.noalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
+ %0 = call <2 x i1> @llvm.experimental.loop.dependence.raw.mask.v2i1(ptr %a, ptr %b, i64 8)
ret <2 x i1> %0
}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index b29619c7f397d..5a7c3180e2807 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -60,7 +60,7 @@ define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.noalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
}
@@ -98,7 +98,7 @@ define <vscale x 8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.noalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
}
@@ -130,7 +130,7 @@ define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.loop.dependence.war.mask.v4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
}
@@ -158,7 +158,7 @@ define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.noalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.loop.dependence.war.mask.v2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
}
@@ -223,7 +223,7 @@ define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.noalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
}
@@ -262,7 +262,7 @@ define <vscale x 8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.noalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.loop.dependence.raw.mask.v8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
}
@@ -295,7 +295,7 @@ define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.loop.dependence.raw.mask.v4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
}
@@ -324,6 +324,6 @@ define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.noalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.loop.dependence.raw.mask.v2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
}
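
The patches above split the old flag-carrying node into separate WAR and RAW
opcodes and derive the hazard direction from the opcode alone. As a reference
for what the generic expansion computes, here is a minimal scalar model of the
per-lane mask (plain C++, not LLVM API; VF and the function name are
illustrative):

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Mirrors the LangRef formulas: for the WAR variant a mask lane is
    //   (lane < diff) || (diff <= 0)
    // and for the RAW variant diff is taken as an absolute value and the
    // escape condition is diff == 0 (the pointers coincide exactly).
    std::vector<bool> loopDependenceMask(intptr_t PtrA, intptr_t PtrB,
                                         intptr_t EltSize, unsigned VF,
                                         bool IsWriteAfterRead) {
      intptr_t Diff = (PtrB - PtrA) / EltSize;
      if (!IsWriteAfterRead)
        Diff = std::abs(Diff); // RAW uses the absolute element distance.
      std::vector<bool> Mask(VF);
      for (unsigned Lane = 0; Lane < VF; ++Lane)
        Mask[Lane] = static_cast<intptr_t>(Lane) < Diff ||
                     (IsWriteAfterRead ? Diff <= 0 : Diff == 0);
      return Mask;
    }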
>From 9b3a71a13db15b45df1cdb157ca13cf84c678e73 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Mon, 10 Mar 2025 13:29:48 +0000
Subject: [PATCH 19/43] Rename in langref
---
llvm/docs/LangRef.rst | 47 +++++++++++++++++++------------------------
1 file changed, 21 insertions(+), 26 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index e45bdc1c607d6..25efef8fb713b 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24104,10 +24104,12 @@ Examples:
%active.lane.mask = call <4 x i1> @llvm.get.active.lane.mask.v4i1.i64(i64 %elem0, i64 429)
%wide.masked.load = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %3, i32 4, <4 x i1> %active.lane.mask, <4 x i32> poison)
-.. _int_experimental_get_noalias_lane_mask:
-'``llvm.experimental.get.noalias.lane.mask.*``' Intrinsics
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. _int_experimental_loop_dependence_war_mask:
+.. _int_experimental_loop_dependence_raw_mask:
+
+'``llvm.experimental.loop.dependence.raw.mask.*``' and '``llvm.experimental.loop.dependence.war.mask.*``' Intrinsics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Syntax:
"""""""
@@ -24115,10 +24117,10 @@ This is an overloaded intrinsic.
::
- declare <4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <8 x i1> @llvm.experimental.get.noalias.lane.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <16 x i1> @llvm.experimental.get.noalias.lane.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <vscale x 16 x i1> @llvm.experimental.get.noalias.lane.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <4 x i1> @llvm.experimental.loop.dependence.raw.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <8 x i1> @llvm.experimental.loop.dependence.war.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <16 x i1> @llvm.experimental.loop.dependence.raw.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <vscale x 16 x i1> @llvm.experimental.loop.dependence.war.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
Overview:
@@ -24131,8 +24133,8 @@ across one vector loop iteration.
Arguments:
""""""""""
-The first two arguments have the same scalar integer type.
-The final two are immediates and the result is a vector with the i1 element type.
+The first two arguments have the same pointer type.
+The final one is an immediate and the result is a vector with the i1 element type.
Semantics:
""""""""""
@@ -24140,8 +24142,7 @@ Semantics:
``%elementSize`` is the size of the accessed elements in bytes.
The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
-In other cases when ``%writeAfterRead`` is true, the
-'``llvm.experimental.get.noalias.lane.mask.*``' intrinsics are semantically
+The '``llvm.experimental.loop.dependence.war.mask*``' intrinsics are semantically
equivalent to:
::
@@ -24149,9 +24150,8 @@ equivalent to:
%diff = (%ptrB - %ptrA) / %elementSize
%m[i] = (icmp ult i, %diff) || (%diff <= 0)
-When the return value is not poison and ``%writeAfterRead`` is false, the
-'``llvm.experimental.get.noalias.lane.mask.*``' intrinsics are semantically
-equivalent to:
+When the return value is not poison the '``llvm.experimental.loop.dependence.raw.mask.*``'
+intrinsics are semantically equivalent to:
::
@@ -24160,15 +24160,10 @@ equivalent to:
where ``%m`` is a vector (mask) of active/inactive lanes with its elements
indexed by ``i`` (i = 0 to VF - 1), and ``%ptrA``, ``%ptrB`` are the two ptr
-arguments to ``llvm.experimental.get.noalias.lane.mask.*`` and ``%elementSize``
-is the first immediate argument. The ``%writeAfterRead`` argument is expected
-to be true if ``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise
-it is false for a read after write.
-The above is equivalent to:
-
-::
-
- %m = @llvm.experimental.get.noalias.lane.mask(%ptrA, %ptrB, %elementSize, %writeAfterRead)
+arguments to ``llvm.experimental.loop.dependence.{raw,war}.mask.*`` and ``%elementSize``
+is the first immediate argument. The ``war`` variant is expected to be used when
+``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise the ``raw`` variant is
+expected to be used.
This can, for example, be emitted by the loop vectorizer in which case
``%ptrA`` is a pointer that is read from within the loop, and ``%ptrB`` is a
@@ -24186,10 +24181,10 @@ Examples:
.. code-block:: llvm
- %noalias.lane.mask = call <4 x i1> @llvm.experimental.get.noalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)
- %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrA, i32 4, <4 x i1> %noalias.lane.mask, <4 x i32> poison)
+ %loop.dependence.mask = call <4 x i1> @llvm.experimental.loop.dependence.war.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4)
+ %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrA, i32 4, <4 x i1> %loop.dependence.mask, <4 x i32> poison)
[...]
- call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrB, i32 4, <4 x i1> %noalias.lane.mask)
+ call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrB, i32 4, <4 x i1> %loop.dependence.mask)
.. _int_experimental_vp_splice:
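
To make the renamed semantics concrete, a worked example of the byte-element
case the previous wording described (self-contained C++; the addresses 20 and
23 and the vector factor of 8 are illustrative):

    #include <cstdint>
    #include <cstdio>

    int main() {
      // ptrA = 20, ptrB = 23, element size 1 byte, VF = 8:
      // diff = (23 - 20) / 1 = 3, so the war variant enables lanes 0..2
      // and masks off lanes 3..7, whose loads from ptrA would share
      // addresses with the first stores to ptrB.
      intptr_t Diff = (23 - 20) / 1;
      for (unsigned Lane = 0; Lane < 8; ++Lane)
        std::printf("lane %u: %s\n", Lane,
                    (static_cast<intptr_t>(Lane) < Diff || Diff <= 0)
                        ? "active" : "masked");
      return 0;
    }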
>From 215d2e748d9d0db4cd8cc47cc7061c120b9eae57 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Fri, 21 Mar 2025 17:47:00 +0000
Subject: [PATCH 20/43] Reword argument description
---
llvm/docs/LangRef.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 25efef8fb713b..944c4286fbfd0 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24133,8 +24133,8 @@ across one vector loop iteration.
Arguments:
""""""""""
-The first two arguments have the same pointer type.
-The final one is an immediate and the result is a vector with the i1 element type.
+The first two arguments are pointers and the last argument is an immediate.
+The result is a vector with the i1 element type.
Semantics:
""""""""""
>From ec2bfeddccd0b32dceb09988c95552affcdcfdcb Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Tue, 20 May 2025 15:47:45 +0100
Subject: [PATCH 21/43] Fixup langref
---
llvm/docs/LangRef.rst | 112 ++++++++++++++++++++++++++++--------------
1 file changed, 76 insertions(+), 36 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 944c4286fbfd0..b69ed81c70635 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24106,10 +24106,9 @@ Examples:
.. _int_experimental_loop_dependence_war_mask:
-.. _int_experimental_loop_dependence_raw_mask:
-'``llvm.experimental.loop.dependence.raw.mask.*``' and '``llvm.experimental.loop.dependence.war.mask.*``' Intrinsics
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+'``llvm.experimental.loop.dependence.war.mask.*``' Intrinsics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Syntax:
"""""""
@@ -24117,18 +24116,22 @@ This is an overloaded intrinsic.
::
- declare <4 x i1> @llvm.experimental.loop.dependence.raw.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <4 x i1> @llvm.experimental.loop.dependence.war.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
declare <8 x i1> @llvm.experimental.loop.dependence.war.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
- declare <16 x i1> @llvm.experimental.loop.dependence.raw.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <16 x i1> @llvm.experimental.loop.dependence.war.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
declare <vscale x 16 x i1> @llvm.experimental.loop.dependence.war.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
Overview:
"""""""""
-Create a mask enabling lanes that do not overlap between two pointers
-across one vector loop iteration.
+Given a scalar load from %ptrA, followed by a scalar store to %ptrB, this
+intrinsic generates a mask where an active lane indicates that there is no
+write-after-read hazard for that lane.
+A write-after-read hazard occurs when a write-after-read sequence for a given
+lane in a vector ends up being executed as a read-after-write sequence due to
+the aliasing of pointers.
Arguments:
""""""""""
@@ -24140,51 +24143,88 @@ Semantics:
""""""""""
``%elementSize`` is the size of the accessed elements in bytes.
-The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
-VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
-The '``llvm.experimental.loop.dependence.war.mask*``' intrinsics are semantically
-equivalent to:
+The intrinsic returns ``poison`` if the distance between ``%ptrA`` and ``%ptrB``
+is smaller than ``VF * %elementSize`` and either ``%ptrA + VF * %elementSize``
+or ``%ptrB + VF * %elementSize`` wraps.
+An element of the result mask is active when no write-after-read hazard occurs
+for its lane, meaning that:
-::
+* (ptrB - ptrA) <= 0 (guarantees that all lanes are loaded before any stores are
+ committed), or
+* (ptrB - ptrA) >= elementSize * lane (guarantees that this lane is loaded
+ before the store to the same address is committed)
- %diff = (%ptrB - %ptrA) / %elementSize
- %m[i] = (icmp ult i, %diff) || (%diff <= 0)
+Examples:
+"""""""""
-When the return value is not poison the '``llvm.experimental.loop.dependence.raw.mask.*``'
-intrinsics are semantically equivalent to:
+.. code-block:: llvm
+
+ %loop.dependence.mask = call <4 x i1> @llvm.experimental.loop.dependence.war.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4)
+ %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrA, i32 4, <4 x i1> %loop.dependence.mask, <4 x i32> poison)
+ [...]
+ call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrB, i32 4, <4 x i1> %loop.dependence.mask)
+
+.. _int_experimental_loop_dependence_raw_mask:
+
+'``llvm.experimental.loop.dependence.raw.mask.*``' Intrinsics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Syntax:
+"""""""
+This is an overloaded intrinsic.
::
- %diff = abs(%ptrB - %ptrA) / %elementSize
- %m[i] = (icmp ult i, %diff) || (%diff == 0)
+ declare <4 x i1> @llvm.experimental.loop.dependence.raw.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <8 x i1> @llvm.experimental.loop.dependence.raw.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <16 x i1> @llvm.experimental.loop.dependence.raw.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <vscale x 16 x i1> @llvm.experimental.loop.dependence.raw.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+
-where ``%m`` is a vector (mask) of active/inactive lanes with its elements
-indexed by ``i`` (i = 0 to VF - 1), and ``%ptrA``, ``%ptrB`` are the two ptr
-arguments to ``llvm.experimental.loop.dependence.{raw,war}.mask.*`` and ``%elementSize``
-is the first immediate argument. The ``war`` variant is expected to be used when
-``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise the ``raw`` variant is
-expected to be used.
+Overview:
+"""""""""
-This can, for example, be emitted by the loop vectorizer in which case
-``%ptrA`` is a pointer that is read from within the loop, and ``%ptrB`` is a
-pointer that is stored to within the loop.
-If the difference between these pointers is less than the vector factor, then
-they overlap (alias) within a loop iteration.
-An example is if ``%ptrA`` is 20 and ``%ptrB`` is 23 with a vector factor of 8,
-then lanes 3, 4, 5, 6 and 7 of the vector loaded from ``%ptrA``
-share addresses with lanes 0, 1, 2, 3, 4 and 5 from the vector stored to at
-``%ptrB``.
+Given a scalar store to %ptrA, followed by a scalar load from %ptrB, this
+intrinsic generates a mask where an active lane indicates that there is no
+read-after-write hazard for that lane and that the lane does not introduce any
+new store-to-load forwarding hazard.
+A read-after-write hazard occurs when a read-after-write sequence for a given
+lane in a vector ends up being executed as a write-after-read sequence due to
+the aliasing of pointers.
+
+
+Arguments:
+""""""""""
+
+The first two arguments are pointers and the last argument is an immediate.
+The result is a vector with the i1 element type.
+
+Semantics:
+""""""""""
+
+``%elementSize`` is the size of the accessed elements in bytes.
+The intrinsic returns ``poison`` if the distance between ``%ptrA`` and ``%ptrB``
+is smaller than ``VF * %elementSize`` and either ``%ptrA + VF * %elementSize``
+or ``%ptrB + VF * %elementSize`` wraps.
+An element of the result mask is active when no read-after-write hazard occurs
+for its lane, meaning that:
+
+ abs(ptrB - ptrA) >= elementSize * lane (guarantees that the store of this lane
+ is committed before loading from this address)
+
+Note that the case where (ptrB - ptrA) < 0 does not result in any
+read-after-write hazards, but may introduce new store-to-load-forwarding stalls
+where both the store and load partially access the same addresses.
Examples:
"""""""""
.. code-block:: llvm
- %loop.dependence.mask = call <4 x i1> @llvm.experimental.loop.dependence.war.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4)
- %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrA, i32 4, <4 x i1> %loop.dependence.mask, <4 x i32> poison)
+ %loop.dependence.mask = call <4 x i1> @llvm.experimental.loop.dependence.raw.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4)
+ call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrA, i32 4, <4 x i1> %loop.dependence.mask)
[...]
- call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrB, i32 4, <4 x i1> %loop.dependence.mask)
+ %vecB = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrB, i32 4, <4 x i1> %loop.dependence.mask, <4 x i32> poison)
.. _int_experimental_vp_splice:
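
The poison rule in the reworked semantics can be stated as plain unsigned
overflow arithmetic. A sketch of the condition (assuming 64-bit pointers;
__builtin_add_overflow is a GCC/Clang builtin used purely for illustration):

    #include <cstdint>

    // True when the intrinsic's result would be poison: the pointers are
    // closer than one vector's worth of elements and advancing either of
    // them by VF * EltSize wraps the address space.
    bool wouldBePoison(uint64_t PtrA, uint64_t PtrB, uint64_t EltSize,
                       uint64_t VF) {
      uint64_t Span = VF * EltSize;
      uint64_t Distance = PtrA > PtrB ? PtrA - PtrB : PtrB - PtrA;
      uint64_t EndA, EndB;
      bool WrapA = __builtin_add_overflow(PtrA, Span, &EndA);
      bool WrapB = __builtin_add_overflow(PtrB, Span, &EndB);
      return Distance < Span && (WrapA || WrapB);
    }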
>From 9f5f91a2e1a9207d9e62fe284720aff7a663e234 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Tue, 20 May 2025 15:50:40 +0100
Subject: [PATCH 22/43] IsWriteAfterRead -> IsReadAfterWrite and avoid using
ops vector
---
llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp | 10 +++++-----
llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp | 7 +++----
2 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index 86720471e9584..e8f0fedd78c97 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -1809,13 +1809,13 @@ SDValue VectorLegalizer::ExpandLOOP_DEPENDENCE_MASK(SDNode *N) {
SDValue SinkValue = N->getOperand(1);
SDValue EltSize = N->getOperand(2);
- bool IsWriteAfterRead =
- N->getOpcode() == ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK;
+ bool IsReadAfterWrite =
+ N->getOpcode() == ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK;
auto VT = N->getValueType(0);
auto PtrVT = SourceValue->getValueType(0);
SDValue Diff = DAG.getNode(ISD::SUB, DL, PtrVT, SinkValue, SourceValue);
- if (!IsWriteAfterRead)
+ if (IsReadAfterWrite)
Diff = DAG.getNode(ISD::ABS, DL, PtrVT, Diff);
Diff = DAG.getNode(ISD::SDIV, DL, PtrVT, Diff, EltSize);
@@ -1825,7 +1825,7 @@ SDValue VectorLegalizer::ExpandLOOP_DEPENDENCE_MASK(SDNode *N) {
Diff.getValueType());
SDValue Zero = DAG.getTargetConstant(0, DL, PtrVT);
SDValue Cmp = DAG.getSetCC(DL, CmpVT, Diff, Zero,
- IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
+ IsReadAfterWrite ? ISD::SETEQ : ISD::SETLE);
// Create the lane mask
EVT SplatTY =
@@ -1836,7 +1836,7 @@ SDValue VectorLegalizer::ExpandLOOP_DEPENDENCE_MASK(SDNode *N) {
DAG.getSetCC(DL, VT, VectorStep, DiffSplat, ISD::CondCode::SETULT);
// Splat the compare result then OR it with the lane mask
- auto VTElementTy = VT.getVectorElementType();
+ EVT VTElementTy = VT.getVectorElementType();
if (CmpVT.getScalarSizeInBits() < VTElementTy.getScalarSizeInBits())
Cmp = DAG.getNode(ISD::ZERO_EXTEND, DL, VTElementTy, Cmp);
SDValue Splat = DAG.getSplat(VT, DL, Cmp);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 77fe8b623762a..15e8b5a3bd033 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8318,13 +8318,12 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
case Intrinsic::experimental_loop_dependence_raw_mask: {
auto IntrinsicVT = EVT::getEVT(I.getType());
SmallVector<SDValue, 4> Ops;
- for (auto &Op : I.operands())
- Ops.push_back(getValue(Op));
unsigned ID = Intrinsic == Intrinsic::experimental_loop_dependence_war_mask
? ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK
: ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK;
- SDValue Mask = DAG.getNode(ID, sdl, IntrinsicVT, Ops);
- setValue(&I, Mask);
+ setValue(&I,
+ DAG.getNode(ID, sdl, IntrinsicVT, getValue(I.getOperand(0)),
+ getValue(I.getOperand(1)), getValue(I.getOperand(2))));
}
}
}
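
Note that flipping the flag from IsWriteAfterRead to IsReadAfterWrite only
swaps the select arms; each opcode still gets the same condition code. A
standalone check of that equivalence (the enum names mirror the ISD opcodes,
but this is plain C++, not SelectionDAG):

    #include <cassert>

    enum Opcode { LOOP_DEPENDENCE_WAR_MASK, LOOP_DEPENDENCE_RAW_MASK };
    enum CondCode { SETLE, SETEQ };

    CondCode oldSelect(Opcode Op) { // keyed on write-after-read
      return Op == LOOP_DEPENDENCE_WAR_MASK ? SETLE : SETEQ;
    }
    CondCode newSelect(Opcode Op) { // keyed on read-after-write
      return Op == LOOP_DEPENDENCE_RAW_MASK ? SETEQ : SETLE;
    }

    int main() {
      assert(oldSelect(LOOP_DEPENDENCE_WAR_MASK) ==
             newSelect(LOOP_DEPENDENCE_WAR_MASK));
      assert(oldSelect(LOOP_DEPENDENCE_RAW_MASK) ==
             newSelect(LOOP_DEPENDENCE_RAW_MASK));
      return 0;
    }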
>From eb8d5af40508fd74cfe8b84be6c72f6dfb9c9e10 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Tue, 20 May 2025 16:41:07 +0100
Subject: [PATCH 23/43] Extend vXi1 setcc to account for intrinsic VT promotion
---
.../CodeGen/SelectionDAG/LegalizeVectorOps.cpp | 15 ++++++++++++---
llvm/test/CodeGen/AArch64/alias_mask.ll | 8 --------
2 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index e8f0fedd78c97..d72827759f549 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -1829,14 +1829,23 @@ SDValue VectorLegalizer::ExpandLOOP_DEPENDENCE_MASK(SDNode *N) {
// Create the lane mask
EVT SplatTY =
- EVT::getVectorVT(*DAG.getContext(), PtrVT, VT.getVectorElementCount());
+ EVT::getVectorVT(*DAG.getContext(), PtrVT, VT.getVectorMinNumElements(),
+ VT.isScalableVector());
SDValue DiffSplat = DAG.getSplat(SplatTY, DL, Diff);
SDValue VectorStep = DAG.getStepVector(DL, SplatTY);
+ EVT MaskVT =
+ EVT::getVectorVT(*DAG.getContext(), MVT::i1, VT.getVectorMinNumElements(),
+ VT.isScalableVector());
SDValue DiffMask =
- DAG.getSetCC(DL, VT, VectorStep, DiffSplat, ISD::CondCode::SETULT);
+ DAG.getSetCC(DL, MaskVT, VectorStep, DiffSplat, ISD::CondCode::SETULT);
- // Splat the compare result then OR it with the lane mask
EVT VTElementTy = VT.getVectorElementType();
+ // Extend the diff setcc in case the intrinsic has been promoted to a vector
+ // type with elements larger than i1
+ if (VTElementTy.getScalarSizeInBits() > MaskVT.getScalarSizeInBits())
+ DiffMask = DAG.getNode(ISD::ANY_EXTEND, DL, VT, DiffMask);
+
+ // Splat the compare result then OR it with the lane mask
if (CmpVT.getScalarSizeInBits() < VTElementTy.getScalarSizeInBits())
Cmp = DAG.getNode(ISD::ZERO_EXTEND, DL, VTElementTy, Cmp);
SDValue Splat = DAG.getSplat(VT, DL, Cmp);
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 3248cb2de2644..75ab6f62095b6 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -48,8 +48,6 @@ define <16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
@@ -90,8 +88,6 @@ define <8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
; CHECK-NOSVE-NEXT: dup v1.8b, w8
; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: shl v0.8b, v0.8b, #7
-; CHECK-NOSVE-NEXT: cmlt v0.8b, v0.8b, #0
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
@@ -210,8 +206,6 @@ define <16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
@@ -253,8 +247,6 @@ define <8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
; CHECK-NOSVE-NEXT: dup v1.8b, w8
; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: shl v0.8b, v0.8b, #7
-; CHECK-NOSVE-NEXT: cmlt v0.8b, v0.8b, #0
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
>From c3d6fc8976a263a239af7b788207cc2fd8f2fd22 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Wed, 21 May 2025 14:35:29 +0100
Subject: [PATCH 24/43] Remove experimental from intrinsic name
---
llvm/docs/LangRef.rst | 28 +++++++++----------
llvm/include/llvm/CodeGen/ISDOpcodes.h | 4 +--
llvm/include/llvm/IR/Intrinsics.td | 4 +--
.../SelectionDAG/LegalizeIntegerTypes.cpp | 19 ++++++-------
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h | 5 ++--
.../SelectionDAG/LegalizeVectorOps.cpp | 11 ++++----
.../SelectionDAG/SelectionDAGBuilder.cpp | 10 +++----
.../SelectionDAG/SelectionDAGDumper.cpp | 4 +--
llvm/lib/CodeGen/TargetLoweringBase.cpp | 4 +--
.../Target/AArch64/AArch64ISelLowering.cpp | 21 ++++++--------
llvm/test/CodeGen/AArch64/alias_mask.ll | 16 +++++------
.../CodeGen/AArch64/alias_mask_scalable.ll | 16 +++++------
12 files changed, 68 insertions(+), 74 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index b69ed81c70635..59bc654209b1c 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24105,9 +24105,9 @@ Examples:
%wide.masked.load = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %3, i32 4, <4 x i1> %active.lane.mask, <4 x i32> poison)
-.. _int_experimental_loop_dependence_war_mask:
+.. _int_loop_dependence_war_mask:
-'``llvm.experimental.loop.dependence.war.mask.*``' Intrinsics
+'``llvm.loop.dependence.war.mask.*``' Intrinsics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Syntax:
@@ -24116,10 +24116,10 @@ This is an overloaded intrinsic.
::
- declare <4 x i1> @llvm.experimental.loop.dependence.war.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
- declare <8 x i1> @llvm.experimental.loop.dependence.war.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
- declare <16 x i1> @llvm.experimental.loop.dependence.war.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
- declare <vscale x 16 x i1> @llvm.experimental.loop.dependence.war.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <4 x i1> @llvm.loop.dependence.war.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
Overview:
@@ -24159,14 +24159,14 @@ Examples:
.. code-block:: llvm
- %loop.dependence.mask = call <4 x i1> @llvm.experimental.loop.dependence.war.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4)
+ %loop.dependence.mask = call <4 x i1> @llvm.loop.dependence.war.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4)
%vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrA, i32 4, <4 x i1> %loop.dependence.mask, <4 x i32> poison)
[...]
call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrB, i32 4, <4 x i1> %loop.dependence.mask)
-.. _int_experimental_loop_dependence_raw_mask:
+.. _int_loop_dependence_raw_mask:
-'``llvm.experimental.loop.dependence.raw.mask.*``' Intrinsics
+'``llvm.loop.dependence.raw.mask.*``' Intrinsics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Syntax:
@@ -24175,10 +24175,10 @@ This is an overloaded intrinsic.
::
- declare <4 x i1> @llvm.experimental.loop.dependence.raw.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
- declare <8 x i1> @llvm.experimental.loop.dependence.raw.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
- declare <16 x i1> @llvm.experimental.loop.dependence.raw.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
- declare <vscale x 16 x i1> @llvm.experimental.loop.dependence.raw.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <4 x i1> @llvm.loop.dependence.raw.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <8 x i1> @llvm.loop.dependence.raw.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <16 x i1> @llvm.loop.dependence.raw.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
+ declare <vscale x 16 x i1> @llvm.loop.dependence.raw.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize)
Overview:
@@ -24221,7 +24221,7 @@ Examples:
.. code-block:: llvm
- %loop.dependence.mask = call <4 x i1> @llvm.experimental.loop.dependence.raw.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4)
+ %loop.dependence.mask = call <4 x i1> @llvm.loop.dependence.raw.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4)
call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrA, i32 4, <4 x i1> %loop.dependence.mask)
[...]
%vecB = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrB, i32 4, <4 x i1> %loop.dependence.mask, <4 x i32> poison)
diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index 57c3873e1451d..492f83ae34f61 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -1561,8 +1561,8 @@ enum NodeType {
// The `llvm.experimental.loop.dependence.{war, raw}.mask` intrinsics
// Operands: Load pointer, Store pointer, Element size
// Output: Mask
- EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK,
- EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK,
+ LOOP_DEPENDENCE_WAR_MASK,
+ LOOP_DEPENDENCE_RAW_MASK,
// llvm.clear_cache intrinsic
 // Operands: Input Chain, Start Address, End Address
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index d190efd2f65f2..8e2e0604cb3af 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -2420,12 +2420,12 @@ let IntrProperties = [IntrNoMem, ImmArg<ArgIndex<1>>] in {
llvm_i32_ty]>;
}
-def int_experimental_loop_dependence_raw_mask:
+def int_loop_dependence_raw_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
[llvm_ptr_ty, llvm_ptr_ty, llvm_i64_ty],
[IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>]>;
-def int_experimental_loop_dependence_war_mask:
+def int_loop_dependence_war_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
[llvm_ptr_ty, llvm_ptr_ty, llvm_i64_ty],
[IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>]>;
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index 3f8caeb1d4f42..f3c593ae02dd8 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -324,9 +324,9 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
Res = PromoteIntRes_VP_REDUCE(N);
break;
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
- Res = PromoteIntRes_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(N);
+ case ISD::LOOP_DEPENDENCE_WAR_MASK:
+ case ISD::LOOP_DEPENDENCE_RAW_MASK:
+ Res = PromoteIntRes_LOOP_DEPENDENCE_MASK(N);
break;
case ISD::FREEZE:
@@ -379,8 +379,7 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(SDNode *N,
return GetPromotedInteger(Op);
}
-SDValue
-DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(SDNode *N) {
+SDValue DAGTypeLegalizer::PromoteIntRes_LOOP_DEPENDENCE_MASK(SDNode *N) {
EVT VT = N->getValueType(0);
EVT NewVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
return DAG.getNode(N->getOpcode(), SDLoc(N), NewVT, N->ops());
@@ -2116,9 +2115,9 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
case ISD::PARTIAL_REDUCE_SUMLA:
Res = PromoteIntOp_PARTIAL_REDUCE_MLA(N);
break;
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
- Res = PromoteIntOp_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(N, OpNo);
+ case ISD::LOOP_DEPENDENCE_RAW_MASK:
+ case ISD::LOOP_DEPENDENCE_WAR_MASK:
+ Res = PromoteIntOp_LOOP_DEPENDENCE_MASK(N, OpNo);
break;
}
@@ -2953,8 +2952,8 @@ SDValue DAGTypeLegalizer::PromoteIntOp_PARTIAL_REDUCE_MLA(SDNode *N) {
return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
}
-SDValue DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(
- SDNode *N, unsigned OpNo) {
+SDValue DAGTypeLegalizer::PromoteIntOp_LOOP_DEPENDENCE_MASK(SDNode *N,
+ unsigned OpNo) {
SmallVector<SDValue, 4> NewOps(N->ops());
NewOps[OpNo] = GetPromotedInteger(N->getOperand(OpNo));
return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index fd508bead27c9..7af8caec27b15 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -382,7 +382,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntRes_VECTOR_FIND_LAST_ACTIVE(SDNode *N);
SDValue PromoteIntRes_GET_ACTIVE_LANE_MASK(SDNode *N);
SDValue PromoteIntRes_PARTIAL_REDUCE_MLA(SDNode *N);
- SDValue PromoteIntRes_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(SDNode *N);
+ SDValue PromoteIntRes_LOOP_DEPENDENCE_MASK(SDNode *N);
// Integer Operand Promotion.
bool PromoteIntegerOperand(SDNode *N, unsigned OpNo);
@@ -437,8 +437,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(SDNode *N, unsigned OpNo);
SDValue PromoteIntOp_GET_ACTIVE_LANE_MASK(SDNode *N);
SDValue PromoteIntOp_PARTIAL_REDUCE_MLA(SDNode *N);
- SDValue PromoteIntOp_EXPERIMENTAL_LOOP_DEPENDENCE_MASK(SDNode *N,
- unsigned OpNo);
+ SDValue PromoteIntOp_LOOP_DEPENDENCE_MASK(SDNode *N, unsigned OpNo);
void SExtOrZExtPromotedOperands(SDValue &LHS, SDValue &RHS);
void PromoteSetCCOperands(SDValue &LHS,SDValue &RHS, ISD::CondCode Code);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index d72827759f549..74074d57c0291 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -476,8 +476,8 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
case ISD::VECTOR_COMPRESS:
case ISD::SCMP:
case ISD::UCMP:
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
+ case ISD::LOOP_DEPENDENCE_WAR_MASK:
+ case ISD::LOOP_DEPENDENCE_RAW_MASK:
Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
break;
case ISD::SMULFIX:
@@ -1294,8 +1294,8 @@ void VectorLegalizer::Expand(SDNode *Node, SmallVectorImpl<SDValue> &Results) {
case ISD::UCMP:
Results.push_back(TLI.expandCMP(Node, DAG));
return;
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
+ case ISD::LOOP_DEPENDENCE_WAR_MASK:
+ case ISD::LOOP_DEPENDENCE_RAW_MASK:
Results.push_back(ExpandLOOP_DEPENDENCE_MASK(Node));
return;
@@ -1809,8 +1809,7 @@ SDValue VectorLegalizer::ExpandLOOP_DEPENDENCE_MASK(SDNode *N) {
SDValue SinkValue = N->getOperand(1);
SDValue EltSize = N->getOperand(2);
- bool IsReadAfterWrite =
- N->getOpcode() == ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK;
+ bool IsReadAfterWrite = N->getOpcode() == ISD::LOOP_DEPENDENCE_RAW_MASK;
auto VT = N->getValueType(0);
auto PtrVT = SourceValue->getValueType(0);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 15e8b5a3bd033..4689f9577fc41 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8314,13 +8314,13 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
visitVectorExtractLastActive(I, Intrinsic);
return;
}
- case Intrinsic::experimental_loop_dependence_war_mask:
- case Intrinsic::experimental_loop_dependence_raw_mask: {
+ case Intrinsic::loop_dependence_war_mask:
+ case Intrinsic::loop_dependence_raw_mask: {
auto IntrinsicVT = EVT::getEVT(I.getType());
SmallVector<SDValue, 4> Ops;
- unsigned ID = Intrinsic == Intrinsic::experimental_loop_dependence_war_mask
- ? ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK
- : ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK;
+ unsigned ID = Intrinsic == Intrinsic::loop_dependence_war_mask
+ ? ISD::LOOP_DEPENDENCE_WAR_MASK
+ : ISD::LOOP_DEPENDENCE_RAW_MASK;
setValue(&I,
DAG.getNode(ID, sdl, IntrinsicVT, getValue(I.getOperand(0)),
getValue(I.getOperand(1)), getValue(I.getOperand(2))));
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index d5c8431370243..4b2a00c2e2cfa 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -587,9 +587,9 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
return "partial_reduce_smla";
case ISD::PARTIAL_REDUCE_SUMLA:
return "partial_reduce_sumla";
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
+ case ISD::LOOP_DEPENDENCE_WAR_MASK:
return "loop_dep_war";
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
+ case ISD::LOOP_DEPENDENCE_RAW_MASK:
return "loop_dep_raw";
// Vector Predication
diff --git a/llvm/lib/CodeGen/TargetLoweringBase.cpp b/llvm/lib/CodeGen/TargetLoweringBase.cpp
index 00b195fa38c2f..d4faa5e6fee71 100644
--- a/llvm/lib/CodeGen/TargetLoweringBase.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringBase.cpp
@@ -901,8 +901,8 @@ void TargetLoweringBase::initActions() {
setOperationAction(ISD::VECTOR_FIND_LAST_ACTIVE, VT, Expand);
 // Lane masks with non-aliasing lanes enabled default to expand
- setOperationAction(ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK, VT, Expand);
- setOperationAction(ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK, VT, Expand);
+ setOperationAction(ISD::LOOP_DEPENDENCE_RAW_MASK, VT, Expand);
+ setOperationAction(ISD::LOOP_DEPENDENCE_WAR_MASK, VT, Expand);
// FP environment operations default to expand.
setOperationAction(ISD::GET_FPENV, VT, Expand);
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index b39196ce38b53..f05f86047cae7 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1920,10 +1920,8 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
(Subtarget->hasSME() && Subtarget->isStreaming())) {
for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8, MVT::nxv2i1,
MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
- setOperationAction(ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK, VT,
- Custom);
- setOperationAction(ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK, VT,
- Custom);
+ setOperationAction(ISD::LOOP_DEPENDENCE_RAW_MASK, VT, Custom);
+ setOperationAction(ISD::LOOP_DEPENDENCE_WAR_MASK, VT, Custom);
}
}
@@ -5246,8 +5244,7 @@ AArch64TargetLowering::LowerLOOP_DEPENDENCE_MASK(SDValue Op,
SelectionDAG &DAG) const {
SDLoc DL(Op);
uint64_t EltSize = Op.getConstantOperandVal(2);
- bool IsWriteAfterRead =
- Op.getOpcode() == ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK;
+ bool IsWriteAfterRead = Op.getOpcode() == ISD::LOOP_DEPENDENCE_WAR_MASK;
unsigned Opcode =
IsWriteAfterRead ? AArch64ISD::WHILEWR : AArch64ISD::WHILERW;
EVT VT = Op.getValueType();
@@ -7468,8 +7465,8 @@ SDValue AArch64TargetLowering::LowerOperation(SDValue Op,
default:
llvm_unreachable("unimplemented operand");
return SDValue();
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK:
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
+ case ISD::LOOP_DEPENDENCE_RAW_MASK:
+ case ISD::LOOP_DEPENDENCE_WAR_MASK:
return LowerLOOP_DEPENDENCE_MASK(Op, DAG);
case ISD::BITCAST:
return LowerBITCAST(Op, DAG);
@@ -28301,8 +28298,8 @@ void AArch64TargetLowering::ReplaceNodeResults(
case ISD::GET_ACTIVE_LANE_MASK:
ReplaceGetActiveLaneMaskResults(N, Results, DAG);
return;
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_WAR_MASK:
- case ISD::EXPERIMENTAL_LOOP_DEPENDENCE_RAW_MASK: {
+ case ISD::LOOP_DEPENDENCE_WAR_MASK:
+ case ISD::LOOP_DEPENDENCE_RAW_MASK: {
EVT VT = N->getValueType(0);
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
@@ -28374,8 +28371,8 @@ void AArch64TargetLowering::ReplaceNodeResults(
return;
}
case Intrinsic::experimental_vector_match:
- case Intrinsic::experimental_loop_dependence_raw_mask:
- case Intrinsic::experimental_loop_dependence_war_mask: {
+ case Intrinsic::loop_dependence_raw_mask:
+ case Intrinsic::loop_dependence_war_mask: {
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 75ab6f62095b6..8b74b7101a740 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -51,7 +51,7 @@ define <16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
+ %0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
ret <16 x i1> %0
}
@@ -91,7 +91,7 @@ define <8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 2)
+ %0 = call <8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 2)
ret <8 x i1> %0
}
@@ -125,7 +125,7 @@ define <4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.loop.dependence.war.mask.v4i1(ptr %a, ptr %b, i64 4)
+ %0 = call <4 x i1> @llvm.loop.dependence.war.mask.v4i1(ptr %a, ptr %b, i64 4)
ret <4 x i1> %0
}
@@ -155,7 +155,7 @@ define <2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.loop.dependence.war.mask.v2i1(ptr %a, ptr %b, i64 8)
+ %0 = call <2 x i1> @llvm.loop.dependence.war.mask.v2i1(ptr %a, ptr %b, i64 8)
ret <2 x i1> %0
}
@@ -209,7 +209,7 @@ define <16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 1)
+ %0 = call <16 x i1> @llvm.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 1)
ret <16 x i1> %0
}
@@ -250,7 +250,7 @@ define <8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.loop.dependence.raw.mask.v8i1(ptr %a, ptr %b, i64 2)
+ %0 = call <8 x i1> @llvm.loop.dependence.raw.mask.v8i1(ptr %a, ptr %b, i64 2)
ret <8 x i1> %0
}
@@ -285,7 +285,7 @@ define <4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.loop.dependence.raw.mask.v4i1(ptr %a, ptr %b, i64 4)
+ %0 = call <4 x i1> @llvm.loop.dependence.raw.mask.v4i1(ptr %a, ptr %b, i64 4)
ret <4 x i1> %0
}
@@ -316,6 +316,6 @@ define <2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.loop.dependence.raw.mask.v2i1(ptr %a, ptr %b, i64 8)
+ %0 = call <2 x i1> @llvm.loop.dependence.raw.mask.v2i1(ptr %a, ptr %b, i64 8)
ret <2 x i1> %0
}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index 5a7c3180e2807..47b8e31d8b5be 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -60,7 +60,7 @@ define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
+ %0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
}
@@ -98,7 +98,7 @@ define <vscale x 8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 2)
+ %0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
}
@@ -130,7 +130,7 @@ define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.loop.dependence.war.mask.v4i1(ptr %a, ptr %b, i64 4)
+ %0 = call <vscale x 4 x i1> @llvm.loop.dependence.war.mask.v4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
}
@@ -158,7 +158,7 @@ define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.loop.dependence.war.mask.v2i1(ptr %a, ptr %b, i64 8)
+ %0 = call <vscale x 2 x i1> @llvm.loop.dependence.war.mask.v2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
}
@@ -223,7 +223,7 @@ define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 1)
+ %0 = call <vscale x 16 x i1> @llvm.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
}
@@ -262,7 +262,7 @@ define <vscale x 8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.loop.dependence.raw.mask.v8i1(ptr %a, ptr %b, i64 2)
+ %0 = call <vscale x 8 x i1> @llvm.loop.dependence.raw.mask.v8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
}
@@ -295,7 +295,7 @@ define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.loop.dependence.raw.mask.v4i1(ptr %a, ptr %b, i64 4)
+ %0 = call <vscale x 4 x i1> @llvm.loop.dependence.raw.mask.v4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
}
@@ -324,6 +324,6 @@ define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.loop.dependence.raw.mask.v2i1(ptr %a, ptr %b, i64 8)
+ %0 = call <vscale x 2 x i1> @llvm.loop.dependence.raw.mask.v2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
}
>From 9c5631d393c1d38acdde3502090f831c6b97cf9a Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Wed, 21 May 2025 15:16:06 +0100
Subject: [PATCH 25/43] Clean up vector type creation
---
llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp | 12 ++++--------
.../lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp | 1 -
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 9 +++++----
3 files changed, 9 insertions(+), 13 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index 74074d57c0291..c7e63e813371f 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -1820,21 +1820,17 @@ SDValue VectorLegalizer::ExpandLOOP_DEPENDENCE_MASK(SDNode *N) {
Diff = DAG.getNode(ISD::SDIV, DL, PtrVT, Diff, EltSize);
// If the difference is positive then some elements may alias
- auto CmpVT = TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
- Diff.getValueType());
+ EVT CmpVT = TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
+ Diff.getValueType());
SDValue Zero = DAG.getTargetConstant(0, DL, PtrVT);
SDValue Cmp = DAG.getSetCC(DL, CmpVT, Diff, Zero,
IsReadAfterWrite ? ISD::SETEQ : ISD::SETLE);
// Create the lane mask
- EVT SplatTY =
- EVT::getVectorVT(*DAG.getContext(), PtrVT, VT.getVectorMinNumElements(),
- VT.isScalableVector());
+ EVT SplatTY = VT.changeElementType(PtrVT);
SDValue DiffSplat = DAG.getSplat(SplatTY, DL, Diff);
SDValue VectorStep = DAG.getStepVector(DL, SplatTY);
- EVT MaskVT =
- EVT::getVectorVT(*DAG.getContext(), MVT::i1, VT.getVectorMinNumElements(),
- VT.isScalableVector());
+ EVT MaskVT = VT.changeElementType(MVT::i1);
SDValue DiffMask =
DAG.getSetCC(DL, MaskVT, VectorStep, DiffSplat, ISD::CondCode::SETULT);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 4689f9577fc41..6f8210c6dc166 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8317,7 +8317,6 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
case Intrinsic::loop_dependence_war_mask:
case Intrinsic::loop_dependence_raw_mask: {
auto IntrinsicVT = EVT::getEVT(I.getType());
- SmallVector<SDValue, 4> Ops;
unsigned ID = Intrinsic == Intrinsic::loop_dependence_war_mask
? ISD::LOOP_DEPENDENCE_WAR_MASK
: ISD::LOOP_DEPENDENCE_RAW_MASK;
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index f05f86047cae7..7f52a7e58d493 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -6534,7 +6534,8 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
return DAG.getNode(AArch64ISD::USDOT, DL, Op.getValueType(),
Op.getOperand(1), Op.getOperand(2), Op.getOperand(3));
}
- case Intrinsic::experimental_get_nonalias_lane_mask: {
+ case Intrinsic::loop_dependence_war_mask:
+ case Intrinsic::loop_dependence_raw_mask: {
unsigned IntrinsicID = 0;
uint64_t EltSize = Op.getOperand(3)->getAsZExtVal();
bool IsWriteAfterRead = Op.getOperand(4)->getAsZExtVal() == 1;
@@ -20072,9 +20073,9 @@ static SDValue getPTest(SelectionDAG &DAG, EVT VT, SDValue Pg, SDValue Op,
AArch64CC::CondCode Cond);
static bool isPredicateCCSettingOp(SDValue N) {
- if ((N.getOpcode() == ISD::SETCC ||
- // get_active_lane_mask is lowered to a whilelo instruction.
- N.getOpcode() == ISD::GET_ACTIVE_LANE_MASK) ||
+ if ((N.getOpcode() == ISD::SETCC) ||
+ // get_active_lane_mask is lowered to a whilelo instruction.
+ (N.getOpcode() == ISD::GET_ACTIVE_LANE_MASK) ||
(N.getOpcode() == ISD::INTRINSIC_WO_CHAIN &&
(N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilege ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilegt ||
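As a cross-check of the cleanup above: the mask that ExpandLOOP_DEPENDENCE_MASK builds for the write-after-read case is equivalent to the IR below. This is a minimal sketch for a fixed <4 x i1> mask with element size 4; the function and value names are illustrative, not part of the patch.

  define <4 x i1> @war_mask_expansion_sketch(i64 %ptrA, i64 %ptrB) {
    ; Distance between the pointers in bytes, then in elements.
    %diff.bytes = sub i64 %ptrB, %ptrA
    %diff = sdiv i64 %diff.bytes, 4
    ; A non-positive distance means no write-after-read hazard, so all lanes are safe.
    %none = icmp sle i64 %diff, 0
    %d.ins = insertelement <4 x i64> poison, i64 %diff, i64 0
    %d.splat = shufflevector <4 x i64> %d.ins, <4 x i64> poison, <4 x i32> zeroinitializer
    ; Lane k is safe when k < diff: the step vector compared against the splat.
    %lt = icmp ult <4 x i64> <i64 0, i64 1, i64 2, i64 3>, %d.splat
    %n.ins = insertelement <4 x i1> poison, i1 %none, i64 0
    %n.splat = shufflevector <4 x i1> %n.ins, <4 x i1> poison, <4 x i32> zeroinitializer
    %mask = or <4 x i1> %lt, %n.splat
    ret <4 x i1> %mask
  }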
>From 52fca129405f5dc345946cf0cde31053e2da0549 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Tue, 5 Aug 2025 13:48:14 +0100
Subject: [PATCH 26/43] Address review
---
llvm/docs/LangRef.rst | 7 +-
.../include/llvm/Target/TargetSelectionDAG.td | 8 ++
.../SelectionDAG/LegalizeIntegerTypes.cpp | 11 --
.../SelectionDAG/LegalizeVectorOps.cpp | 18 +--
.../SelectionDAG/SelectionDAGBuilder.cpp | 16 +--
llvm/lib/CodeGen/TargetLoweringBase.cpp | 1 -
.../Target/AArch64/AArch64ISelLowering.cpp | 109 +++++++-----------
.../lib/Target/AArch64/AArch64SVEInstrInfo.td | 9 +-
llvm/lib/Target/AArch64/SVEInstrFormats.td | 12 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 36 +++---
.../CodeGen/AArch64/alias_mask_scalable.ll | 16 +--
11 files changed, 105 insertions(+), 138 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 59bc654209b1c..c5f8269907a03 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24193,6 +24193,9 @@ A read-after-write hazard occurs when a read-after-write sequence for a given
lane in a vector ends up being executed as a write-after-read sequence due to
the aliasing of pointers.
+Note that the case where (ptrB - ptrA) < 0 does not result in any
+read-after-write hazards, but may introduce new store-to-load-forwarding stalls
+where both the store and load partially access the same addresses.
Arguments:
""""""""""
@@ -24212,10 +24215,6 @@ The element of the result mask is active when no read-after-write hazard occurs,
abs(ptrB - ptrA) >= elementSize * lane (guarantees that the store of this lane
is committed before loading from this address)
-Note that the case where (ptrB - ptrA) < 0 does not result in any
-read-after-write hazards, but may introduce new store-to-load-forwarding stalls
-where both the store and load partially access the same addresses.
-
Examples:
"""""""""
diff --git a/llvm/include/llvm/Target/TargetSelectionDAG.td b/llvm/include/llvm/Target/TargetSelectionDAG.td
index a4ed62bb5715c..b4dd88a809d4e 100644
--- a/llvm/include/llvm/Target/TargetSelectionDAG.td
+++ b/llvm/include/llvm/Target/TargetSelectionDAG.td
@@ -833,6 +833,14 @@ def step_vector : SDNode<"ISD::STEP_VECTOR", SDTypeProfile<1, 1,
def scalar_to_vector : SDNode<"ISD::SCALAR_TO_VECTOR", SDTypeProfile<1, 1, []>,
[]>;
+def SDTLoopDepMask : SDTypeProfile<1, 3, [SDTCisVec<0>, SDTCisInt<1>,
+ SDTCisSameAs<2, 1>, SDTCisInt<3>,
+ SDTCVecEltisVT<0,i1>]>;
+def loop_dependence_war_mask : SDNode<"ISD::LOOP_DEPENDENCE_WAR_MASK",
+ SDTLoopDepMask, []>;
+def loop_dependence_raw_mask : SDNode<"ISD::LOOP_DEPENDENCE_RAW_MASK",
+ SDTLoopDepMask, []>;
+
// vector_extract/vector_insert are similar to extractelt/insertelt but allow
// types that require promotion (a 16i8 extract where i8 is not a legal type so
// uses i32 for example). extractelt/insertelt are preferred where the element
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index f3c593ae02dd8..922a1e1064f6c 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -2115,10 +2115,6 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
case ISD::PARTIAL_REDUCE_SUMLA:
Res = PromoteIntOp_PARTIAL_REDUCE_MLA(N);
break;
- case ISD::LOOP_DEPENDENCE_RAW_MASK:
- case ISD::LOOP_DEPENDENCE_WAR_MASK:
- Res = PromoteIntOp_LOOP_DEPENDENCE_MASK(N, OpNo);
- break;
}
// If the result is null, the sub-method took care of registering results etc.
@@ -2952,13 +2948,6 @@ SDValue DAGTypeLegalizer::PromoteIntOp_PARTIAL_REDUCE_MLA(SDNode *N) {
return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
}
-SDValue DAGTypeLegalizer::PromoteIntOp_LOOP_DEPENDENCE_MASK(SDNode *N,
- unsigned OpNo) {
- SmallVector<SDValue, 4> NewOps(N->ops());
- NewOps[OpNo] = GetPromotedInteger(N->getOperand(OpNo));
- return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
-}
-
//===----------------------------------------------------------------------===//
// Integer Result Expansion
//===----------------------------------------------------------------------===//
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index c7e63e813371f..5420de97bd82d 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -1810,8 +1810,8 @@ SDValue VectorLegalizer::ExpandLOOP_DEPENDENCE_MASK(SDNode *N) {
SDValue EltSize = N->getOperand(2);
bool IsReadAfterWrite = N->getOpcode() == ISD::LOOP_DEPENDENCE_RAW_MASK;
- auto VT = N->getValueType(0);
- auto PtrVT = SourceValue->getValueType(0);
+ EVT VT = N->getValueType(0);
+ EVT PtrVT = SourceValue->getValueType(0);
SDValue Diff = DAG.getNode(ISD::SUB, DL, PtrVT, SinkValue, SourceValue);
if (IsReadAfterWrite)
@@ -1827,22 +1827,22 @@ SDValue VectorLegalizer::ExpandLOOP_DEPENDENCE_MASK(SDNode *N) {
IsReadAfterWrite ? ISD::SETEQ : ISD::SETLE);
// Create the lane mask
- EVT SplatTY = VT.changeElementType(PtrVT);
- SDValue DiffSplat = DAG.getSplat(SplatTY, DL, Diff);
- SDValue VectorStep = DAG.getStepVector(DL, SplatTY);
+ EVT SplatVT = VT.changeElementType(PtrVT);
+ SDValue DiffSplat = DAG.getSplat(SplatVT, DL, Diff);
+ SDValue VectorStep = DAG.getStepVector(DL, SplatVT);
EVT MaskVT = VT.changeElementType(MVT::i1);
SDValue DiffMask =
DAG.getSetCC(DL, MaskVT, VectorStep, DiffSplat, ISD::CondCode::SETULT);
- EVT VTElementTy = VT.getVectorElementType();
+ EVT EltVT = VT.getVectorElementType();
// Extend the diff setcc in case the intrinsic has been promoted to a vector
// type with elements larger than i1
- if (VTElementTy.getScalarSizeInBits() > MaskVT.getScalarSizeInBits())
+ if (EltVT.getScalarSizeInBits() > MaskVT.getScalarSizeInBits())
DiffMask = DAG.getNode(ISD::ANY_EXTEND, DL, VT, DiffMask);
// Splat the compare result then OR it with the lane mask
- if (CmpVT.getScalarSizeInBits() < VTElementTy.getScalarSizeInBits())
- Cmp = DAG.getNode(ISD::ZERO_EXTEND, DL, VTElementTy, Cmp);
+ if (CmpVT.getScalarSizeInBits() < EltVT.getScalarSizeInBits())
+ Cmp = DAG.getNode(ISD::ZERO_EXTEND, DL, EltVT, Cmp);
SDValue Splat = DAG.getSplat(VT, DL, Cmp);
return DAG.getNode(ISD::OR, DL, VT, DiffMask, Splat);
}
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 6f8210c6dc166..f235ce532a09d 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8315,15 +8315,17 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
return;
}
case Intrinsic::loop_dependence_war_mask:
- case Intrinsic::loop_dependence_raw_mask: {
- auto IntrinsicVT = EVT::getEVT(I.getType());
- unsigned ID = Intrinsic == Intrinsic::loop_dependence_war_mask
- ? ISD::LOOP_DEPENDENCE_WAR_MASK
- : ISD::LOOP_DEPENDENCE_RAW_MASK;
setValue(&I,
- DAG.getNode(ID, sdl, IntrinsicVT, getValue(I.getOperand(0)),
+ DAG.getNode(ISD::LOOP_DEPENDENCE_WAR_MASK, sdl,
+ EVT::getEVT(I.getType()), getValue(I.getOperand(0)),
getValue(I.getOperand(1)), getValue(I.getOperand(2))));
- }
+ return;
+ case Intrinsic::loop_dependence_raw_mask:
+ setValue(&I,
+ DAG.getNode(ISD::LOOP_DEPENDENCE_RAW_MASK, sdl,
+ EVT::getEVT(I.getType()), getValue(I.getOperand(0)),
+ getValue(I.getOperand(1)), getValue(I.getOperand(2))));
+ return;
}
}
diff --git a/llvm/lib/CodeGen/TargetLoweringBase.cpp b/llvm/lib/CodeGen/TargetLoweringBase.cpp
index d4faa5e6fee71..ea57bef1c0701 100644
--- a/llvm/lib/CodeGen/TargetLoweringBase.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringBase.cpp
@@ -900,7 +900,6 @@ void TargetLoweringBase::initActions() {
// Masked vector extracts default to expand.
setOperationAction(ISD::VECTOR_FIND_LAST_ACTIVE, VT, Expand);
- // Lane mask with non-aliasing lanes enabled default to expand
setOperationAction(ISD::LOOP_DEPENDENCE_RAW_MASK, VT, Expand);
setOperationAction(ISD::LOOP_DEPENDENCE_WAR_MASK, VT, Expand);
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 7f52a7e58d493..e01e60f623e75 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1918,11 +1918,15 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
// Handle non-aliasing elements mask
if (Subtarget->hasSVE2() ||
(Subtarget->hasSME() && Subtarget->isStreaming())) {
- for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8, MVT::nxv2i1,
- MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
+ // FIXME: Support wider fixed-length types when msve-vector-bits is used.
+ for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8}) {
setOperationAction(ISD::LOOP_DEPENDENCE_RAW_MASK, VT, Custom);
setOperationAction(ISD::LOOP_DEPENDENCE_WAR_MASK, VT, Custom);
}
+ for (auto VT : {MVT::nxv2i1, MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
+ setOperationAction(ISD::LOOP_DEPENDENCE_RAW_MASK, VT, Legal);
+ setOperationAction(ISD::LOOP_DEPENDENCE_WAR_MASK, VT, Legal);
+ }
}
// Handle operations that are only available in non-streaming SVE mode.
@@ -5244,27 +5248,23 @@ AArch64TargetLowering::LowerLOOP_DEPENDENCE_MASK(SDValue Op,
SelectionDAG &DAG) const {
SDLoc DL(Op);
uint64_t EltSize = Op.getConstantOperandVal(2);
- bool IsWriteAfterRead = Op.getOpcode() == ISD::LOOP_DEPENDENCE_WAR_MASK;
- unsigned Opcode =
- IsWriteAfterRead ? AArch64ISD::WHILEWR : AArch64ISD::WHILERW;
EVT VT = Op.getValueType();
- MVT SimpleVT = VT.getSimpleVT();
// Make sure that the promoted mask size and element size match
switch (EltSize) {
case 1:
- assert((SimpleVT == MVT::v16i8 || SimpleVT == MVT::nxv16i1) &&
+ assert((VT == MVT::v16i8 || VT == MVT::nxv16i1) &&
"Unexpected mask or element size");
break;
case 2:
- assert((SimpleVT == MVT::v8i8 || SimpleVT == MVT::nxv8i1) &&
+ assert((VT == MVT::v8i8 || VT == MVT::nxv8i1) &&
"Unexpected mask or element size");
break;
case 4:
- assert((SimpleVT == MVT::v4i16 || SimpleVT == MVT::nxv4i1) &&
+ assert((VT == MVT::v4i16 || VT == MVT::nxv4i1) &&
"Unexpected mask or element size");
break;
case 8:
- assert((SimpleVT == MVT::v2i32 || SimpleVT == MVT::nxv2i1) &&
+ assert((VT == MVT::v2i32 || VT == MVT::nxv2i1) &&
"Unexpected mask or element size");
break;
default:
@@ -5272,19 +5272,17 @@ AArch64TargetLowering::LowerLOOP_DEPENDENCE_MASK(SDValue Op,
break;
}
- if (VT.isScalableVector())
- return DAG.getNode(Opcode, DL, VT, Op.getOperand(0), Op.getOperand(1));
-
// We can use the SVE whilewr/whilerw instruction to lower this
// intrinsic by creating the appropriate sequence of scalable vector
// operations and then extracting a fixed-width subvector from the scalable
- // vector.
-
- EVT ContainerVT = getContainerForFixedLengthVector(DAG, VT);
+ // vector. Scalable vector variants are already legal.
+ EVT ContainerVT =
+ EVT::getVectorVT(*DAG.getContext(), VT.getVectorElementType(),
+ VT.getVectorNumElements(), true);
EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
- SDValue Mask =
- DAG.getNode(Opcode, DL, WhileVT, Op.getOperand(0), Op.getOperand(1));
+ SDValue Mask = DAG.getNode(Op.getOpcode(), DL, WhileVT, Op.getOperand(0),
+ Op.getOperand(1), Op.getOperand(2));
SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, DL, ContainerVT, Mask);
return DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, MaskAsInt,
DAG.getVectorIdxConstant(0, DL));
@@ -6049,17 +6047,37 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
return DAG.getNode(AArch64ISD::THREAD_POINTER, DL, PtrVT);
}
case Intrinsic::aarch64_sve_whilewr_b:
+ return DAG.getNode(ISD::LOOP_DEPENDENCE_WAR_MASK, DL, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2),
+ DAG.getConstant(1, DL, MVT::i64));
case Intrinsic::aarch64_sve_whilewr_h:
+ return DAG.getNode(ISD::LOOP_DEPENDENCE_WAR_MASK, DL, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2),
+ DAG.getConstant(2, DL, MVT::i64));
case Intrinsic::aarch64_sve_whilewr_s:
+ return DAG.getNode(ISD::LOOP_DEPENDENCE_WAR_MASK, DL, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2),
+ DAG.getConstant(4, DL, MVT::i64));
case Intrinsic::aarch64_sve_whilewr_d:
- return DAG.getNode(AArch64ISD::WHILEWR, dl, Op.getValueType(),
- Op.getOperand(1), Op.getOperand(2));
+ return DAG.getNode(ISD::LOOP_DEPENDENCE_WAR_MASK, DL, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2),
+ DAG.getConstant(8, DL, MVT::i64));
case Intrinsic::aarch64_sve_whilerw_b:
+ return DAG.getNode(ISD::LOOP_DEPENDENCE_RAW_MASK, DL, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2),
+ DAG.getConstant(1, DL, MVT::i64));
case Intrinsic::aarch64_sve_whilerw_h:
+ return DAG.getNode(ISD::LOOP_DEPENDENCE_RAW_MASK, DL, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2),
+ DAG.getConstant(2, DL, MVT::i64));
case Intrinsic::aarch64_sve_whilerw_s:
+ return DAG.getNode(ISD::LOOP_DEPENDENCE_RAW_MASK, DL, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2),
+ DAG.getConstant(4, DL, MVT::i64));
case Intrinsic::aarch64_sve_whilerw_d:
- return DAG.getNode(AArch64ISD::WHILERW, dl, Op.getValueType(),
- Op.getOperand(1), Op.getOperand(2));
+ return DAG.getNode(ISD::LOOP_DEPENDENCE_RAW_MASK, DL, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2),
+ DAG.getConstant(8, DL, MVT::i64));
case Intrinsic::aarch64_neon_abs: {
EVT Ty = Op.getValueType();
if (Ty == MVT::i64) {
@@ -6534,53 +6552,6 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
return DAG.getNode(AArch64ISD::USDOT, DL, Op.getValueType(),
Op.getOperand(1), Op.getOperand(2), Op.getOperand(3));
}
- case Intrinsic::loop_dependence_war_mask:
- case Intrinsic::loop_dependence_raw_mask: {
- unsigned IntrinsicID = 0;
- uint64_t EltSize = Op.getOperand(3)->getAsZExtVal();
- bool IsWriteAfterRead = Op.getOperand(4)->getAsZExtVal() == 1;
- switch (EltSize) {
- case 1:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
- : Intrinsic::aarch64_sve_whilerw_b;
- break;
- case 2:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
- : Intrinsic::aarch64_sve_whilerw_h;
- break;
- case 4:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
- : Intrinsic::aarch64_sve_whilerw_s;
- break;
- case 8:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
- : Intrinsic::aarch64_sve_whilerw_d;
- break;
- default:
- llvm_unreachable("Unexpected element size for get.alias.lane.mask");
- break;
- }
- SDValue ID = DAG.getTargetConstant(IntrinsicID, dl, MVT::i64);
-
- EVT VT = Op.getValueType();
- if (VT.isScalableVector())
- return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, dl, VT, ID, Op.getOperand(1),
- Op.getOperand(2));
-
- // We can use the SVE whilewr/whilerw instruction to lower this
- // intrinsic by creating the appropriate sequence of scalable vector
- // operations and then extracting a fixed-width subvector from the scalable
- // vector.
-
- EVT ContainerVT = getContainerForFixedLengthVector(DAG, VT);
- EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
-
- SDValue Mask = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, dl, WhileVT, ID,
- Op.getOperand(1), Op.getOperand(2));
- SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, dl, ContainerVT, Mask);
- return DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, VT, MaskAsInt,
- DAG.getVectorIdxConstant(0, dl));
- }
case Intrinsic::aarch64_neon_saddlv:
case Intrinsic::aarch64_neon_uaddlv: {
EVT OpVT = Op.getOperand(1).getValueType();
diff --git a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
index c9b8297dc04bd..2c0a0bc91b8b1 100644
--- a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
@@ -167,11 +167,6 @@ def AArch64st1q_scatter : SDNode<"AArch64ISD::SST1Q_PRED", SDT_AArch64_SCATTER_V
// AArch64 SVE/SVE2 - the remaining node definitions
//
-// Alias masks
-def SDT_AArch64Mask : SDTypeProfile<1, 2, [SDTCisVec<0>, SDTCisInt<1>, SDTCisSameAs<2, 1>, SDTCVecEltisVT<0,i1>]>;
-def AArch64whilewr : SDNode<"AArch64ISD::WHILEWR", SDT_AArch64Mask>;
-def AArch64whilerw : SDNode<"AArch64ISD::WHILERW", SDT_AArch64Mask>;
-
// SVE CNT/INC/RDVL
def sve_rdvl_imm : ComplexPattern<i64, 1, "SelectRDVLImm<-32, 31, 16>">;
def sve_cnth_imm : ComplexPattern<i64, 1, "SelectRDVLImm<1, 16, 8>">;
@@ -4130,8 +4125,8 @@ let Predicates = [HasSVE2_or_SME] in {
defm WHILEHI_PXX : sve_int_while8_rr<0b101, "whilehi", int_aarch64_sve_whilehi, get_active_lane_mask>;
// SVE2 pointer conflict compare
- defm WHILEWR_PXX : sve2_int_while_rr<0b0, "whilewr", AArch64whilewr>;
- defm WHILERW_PXX : sve2_int_while_rr<0b1, "whilerw", AArch64whilerw>;
+ defm WHILEWR_PXX : sve2_int_while_rr<0b0, "whilewr", loop_dependence_war_mask>;
+ defm WHILERW_PXX : sve2_int_while_rr<0b1, "whilerw", loop_dependence_raw_mask>;
} // End HasSVE2_or_SME
let Predicates = [HasSVEAES, HasNonStreamingSVE_or_SSVE_AES] in {
diff --git a/llvm/lib/Target/AArch64/SVEInstrFormats.td b/llvm/lib/Target/AArch64/SVEInstrFormats.td
index 3c671aed2185d..fa22ba8aa113b 100644
--- a/llvm/lib/Target/AArch64/SVEInstrFormats.td
+++ b/llvm/lib/Target/AArch64/SVEInstrFormats.td
@@ -5952,10 +5952,14 @@ multiclass sve2_int_while_rr<bits<1> rw, string asm, SDPatternOperator op> {
def _S : sve2_int_while_rr<0b10, rw, asm, PPR32>;
def _D : sve2_int_while_rr<0b11, rw, asm, PPR64>;
- def : SVE_2_Op_Pat<nxv16i1, op, i64, i64, !cast<Instruction>(NAME # _B)>;
- def : SVE_2_Op_Pat<nxv8i1, op, i64, i64, !cast<Instruction>(NAME # _H)>;
- def : SVE_2_Op_Pat<nxv4i1, op, i64, i64, !cast<Instruction>(NAME # _S)>;
- def : SVE_2_Op_Pat<nxv2i1, op, i64, i64, !cast<Instruction>(NAME # _D)>;
+ def : Pat<(nxv16i1 (op i64:$Op1, i64:$Op2, (i64 1))),
+ (!cast<Instruction>(NAME # _B) $Op1, $Op2)>;
+ def : Pat<(nxv8i1 (op i64:$Op1, i64:$Op2, (i64 2))),
+ (!cast<Instruction>(NAME # _H) $Op1, $Op2)>;
+ def : Pat<(nxv4i1 (op i64:$Op1, i64:$Op2, (i64 4))),
+ (!cast<Instruction>(NAME # _S) $Op1, $Op2)>;
+ def : Pat<(nxv2i1 (op i64:$Op1, i64:$Op2, (i64 8))),
+ (!cast<Instruction>(NAME # _D) $Op1, $Op2)>;
}
//===----------------------------------------------------------------------===//
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 8b74b7101a740..94fc39054200f 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -58,9 +58,9 @@ entry:
define <8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_16:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: xtn v0.8b, v0.8h
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_16:
@@ -98,9 +98,9 @@ entry:
define <4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
-; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: xtn v0.4h, v0.4s
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_32:
@@ -132,9 +132,9 @@ entry:
define <2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
-; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: whilewr p0.d, x0, x1
+; CHECK-SVE-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: xtn v0.2s, v0.2d
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_64:
@@ -216,9 +216,9 @@ entry:
define <8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilerw_16:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
-; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: whilerw p0.h, x0, x1
+; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: xtn v0.8b, v0.8h
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilerw_16:
@@ -257,9 +257,9 @@ entry:
define <4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilerw_32:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilerw p0.h, x0, x1
-; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: whilerw p0.s, x0, x1
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: xtn v0.4h, v0.4s
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilerw_32:
@@ -292,9 +292,9 @@ entry:
define <2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilerw_64:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilerw p0.s, x0, x1
-; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: whilerw p0.d, x0, x1
+; CHECK-SVE-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: xtn v0.2s, v0.2d
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilerw_64:
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index 47b8e31d8b5be..484e35fbbb0e4 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -60,7 +60,7 @@ define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
+ %0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
}
@@ -98,7 +98,7 @@ define <vscale x 8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 2)
+ %0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
}
@@ -130,7 +130,7 @@ define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.loop.dependence.war.mask.v4i1(ptr %a, ptr %b, i64 4)
+ %0 = call <vscale x 4 x i1> @llvm.loop.dependence.war.mask.nxv4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
}
@@ -158,7 +158,7 @@ define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.loop.dependence.war.mask.v2i1(ptr %a, ptr %b, i64 8)
+ %0 = call <vscale x 2 x i1> @llvm.loop.dependence.war.mask.nxv2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
}
@@ -223,7 +223,7 @@ define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 1)
+ %0 = call <vscale x 16 x i1> @llvm.loop.dependence.raw.mask.nxv16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
}
@@ -262,7 +262,7 @@ define <vscale x 8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.loop.dependence.raw.mask.v8i1(ptr %a, ptr %b, i64 2)
+ %0 = call <vscale x 8 x i1> @llvm.loop.dependence.raw.mask.nxv8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
}
@@ -295,7 +295,7 @@ define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.loop.dependence.raw.mask.v4i1(ptr %a, ptr %b, i64 4)
+ %0 = call <vscale x 4 x i1> @llvm.loop.dependence.raw.mask.nxv4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
}
@@ -324,6 +324,6 @@ define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.loop.dependence.raw.mask.v2i1(ptr %a, ptr %b, i64 8)
+ %0 = call <vscale x 2 x i1> @llvm.loop.dependence.raw.mask.nxv2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
}
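One observable effect of routing the aarch64_sve_whilewr/whilerw intrinsics through the generic ISD nodes above is that the target-specific and generic forms now meet at the same node, so both calls below should select the same whilewr p0.b, x0, x1 instruction (a sketch, assuming the usual pointer-typed signature of the SVE intrinsic):

  %m1 = call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.b(ptr %a, ptr %b)
  %m2 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 1)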
>From 9a985ab6095e554c214a1e8bff8adf95efbf863f Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 7 Aug 2025 15:51:45 +0100
Subject: [PATCH 27/43] Remove experimental from comment
---
llvm/docs/LangRef.rst | 3 ++-
llvm/include/llvm/CodeGen/ISDOpcodes.h | 2 +-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index c5f8269907a03..8107e4d2fa1dc 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24210,7 +24210,8 @@ Semantics:
The intrinsic returns ``poison`` if the distance between ``%ptrA`` and ``%ptrB``
is smaller than ``VF * %elementSize`` and either ``%ptrA + VF * %elementSize``
or ``%ptrB + VF * %elementSize`` wrap.
-The element of the result mask is active when no read-after-write hazard occurs, meaning that:
+The element of the result mask is active when no read-after-write hazard occurs,
+meaning that:
abs(ptrB - ptrA) >= elementSize * lane (guarantees that the store of this lane
is committed before loading from this address)
diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index 492f83ae34f61..c76c83d84b3c7 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -1558,7 +1558,7 @@ enum NodeType {
// bits conform to getBooleanContents similar to the SETCC operator.
GET_ACTIVE_LANE_MASK,
- // The `llvm.experimental.loop.dependence.{war, raw}.mask` intrinsics
+ // The `llvm.loop.dependence.{war, raw}.mask` intrinsics
// Operands: Load pointer, Store pointer, Element size
// Output: Mask
LOOP_DEPENDENCE_WAR_MASK,
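For reference, the element-size immediate of these intrinsics is what selects the instruction's element width under the sve2_int_while_rr patterns above (1 selects .b, 2 selects .h, 4 selects .s, 8 selects .d), for example:

  ; Selects the halfword form: whilerw p0.h, x0, x1
  %m = call <vscale x 8 x i1> @llvm.loop.dependence.raw.mask.nxv8i1(ptr %a, ptr %b, i64 2)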
>From b09d35408810f3bbabc3ff093d3f1c9a685d6953 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 7 Aug 2025 15:52:03 +0100
Subject: [PATCH 28/43] Add splitting
---
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h | 1 +
.../SelectionDAG/LegalizeVectorTypes.cpp | 17 +++++++++++++++++
2 files changed, 18 insertions(+)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index 7af8caec27b15..ec7557110469c 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -965,6 +965,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
void SplitVecRes_FIX(SDNode *N, SDValue &Lo, SDValue &Hi);
void SplitVecRes_BITCAST(SDNode *N, SDValue &Lo, SDValue &Hi);
+ void SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo, SDValue &Hi);
void SplitVecRes_BUILD_VECTOR(SDNode *N, SDValue &Lo, SDValue &Hi);
void SplitVecRes_CONCAT_VECTORS(SDNode *N, SDValue &Lo, SDValue &Hi);
void SplitVecRes_EXTRACT_SUBVECTOR(SDNode *N, SDValue &Lo, SDValue &Hi);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index bc2dbfb4cbaae..94fae6be779d0 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -1118,6 +1118,8 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
report_fatal_error("Do not know how to split the result of this "
"operator!\n");
+ case ISD::LOOP_DEPENDENCE_RAW_MASK:
+ case ISD::LOOP_DEPENDENCE_WAR_MASK: SplitVecRes_LOOP_DEPENDENCE_MASK(N, Lo, Hi); break;
case ISD::MERGE_VALUES: SplitRes_MERGE_VALUES(N, ResNo, Lo, Hi); break;
case ISD::AssertZext: SplitVecRes_AssertZext(N, Lo, Hi); break;
case ISD::VSELECT:
@@ -1611,6 +1613,21 @@ void DAGTypeLegalizer::SplitVecRes_BITCAST(SDNode *N, SDValue &Lo,
Hi = DAG.getNode(ISD::BITCAST, dl, HiVT, Hi);
}
+void DAGTypeLegalizer::SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo, SDValue &Hi) {
+ SDLoc DL(N);
+ EVT LoVT, HiVT;
+ std::tie(LoVT, HiVT) = DAG.GetSplitDestVTs(N->getValueType(0));
+ SDValue PtrA = N->getOperand(0);
+ SDValue PtrB = N->getOperand(1);
+ Lo = DAG.getNode(N->getOpcode(), DL, LoVT, PtrA, PtrB, N->getOperand(2));
+ Lo.dumpr();
+
+ PtrA = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrA, DAG.getConstant(HiVT.getStoreSizeInBits(), DL, MVT::i64));
+ PtrB = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrB, DAG.getConstant(HiVT.getStoreSizeInBits(), DL, MVT::i64));
+ Hi = DAG.getNode(N->getOpcode(), DL, HiVT, PtrA, PtrB, N->getOperand(2));
+ Hi.dumpr();
+}
+
void DAGTypeLegalizer::SplitVecRes_BUILD_VECTOR(SDNode *N, SDValue &Lo,
SDValue &Hi) {
EVT LoVT, HiVT;
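Conceptually, the new splitting computes an illegal wide mask as two half-width queries, with both pointers advanced past the low half's footprint before the second query. For a <32 x i1> write-after-read mask with element size 1 that is roughly equivalent to:

  %lo   = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
  %a.hi = getelementptr i8, ptr %a, i64 16   ; 16 one-byte elements per half
  %b.hi = getelementptr i8, ptr %b, i64 16
  %hi   = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a.hi, ptr %b.hi, i64 1)
  ; result = concat(%lo, %hi)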
>From 56f9a6bc3404e3c29ec474599b4d2da2152b6c50 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 7 Aug 2025 15:52:30 +0100
Subject: [PATCH 29/43] Add widening
---
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h | 1 +
.../SelectionDAG/LegalizeVectorTypes.cpp | 62 +-
.../Target/AArch64/AArch64ISelLowering.cpp | 101 ++-
llvm/test/CodeGen/AArch64/alias_mask.ll | 781 ++++++++++++++++++
4 files changed, 910 insertions(+), 35 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index ec7557110469c..2ebc9607a391c 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -1072,6 +1072,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue WidenVecRes_ADDRSPACECAST(SDNode *N);
SDValue WidenVecRes_AssertZext(SDNode* N);
SDValue WidenVecRes_BITCAST(SDNode* N);
+ SDValue WidenVecRes_LOOP_DEPENDENCE_MASK(SDNode *N);
SDValue WidenVecRes_BUILD_VECTOR(SDNode* N);
SDValue WidenVecRes_CONCAT_VECTORS(SDNode* N);
SDValue WidenVecRes_EXTEND_VECTOR_INREG(SDNode* N);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index 94fae6be779d0..3c9ac499b1d28 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -1119,7 +1119,9 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
"operator!\n");
case ISD::LOOP_DEPENDENCE_RAW_MASK:
- case ISD::LOOP_DEPENDENCE_WAR_MASK: SplitVecRes_LOOP_DEPENDENCE_MASK(N, Lo, Hi); break;
+ case ISD::LOOP_DEPENDENCE_WAR_MASK:
+ SplitVecRes_LOOP_DEPENDENCE_MASK(N, Lo, Hi);
+ break;
case ISD::MERGE_VALUES: SplitRes_MERGE_VALUES(N, ResNo, Lo, Hi); break;
case ISD::AssertZext: SplitVecRes_AssertZext(N, Lo, Hi); break;
case ISD::VSELECT:
@@ -1613,19 +1615,40 @@ void DAGTypeLegalizer::SplitVecRes_BITCAST(SDNode *N, SDValue &Lo,
Hi = DAG.getNode(ISD::BITCAST, dl, HiVT, Hi);
}
-void DAGTypeLegalizer::SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo, SDValue &Hi) {
- SDLoc DL(N);
- EVT LoVT, HiVT;
- std::tie(LoVT, HiVT) = DAG.GetSplitDestVTs(N->getValueType(0));
- SDValue PtrA = N->getOperand(0);
- SDValue PtrB = N->getOperand(1);
- Lo = DAG.getNode(N->getOpcode(), DL, LoVT, PtrA, PtrB, N->getOperand(2));
- Lo.dumpr();
-
- PtrA = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrA, DAG.getConstant(HiVT.getStoreSizeInBits(), DL, MVT::i64));
- PtrB = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrB, DAG.getConstant(HiVT.getStoreSizeInBits(), DL, MVT::i64));
- Hi = DAG.getNode(N->getOpcode(), DL, HiVT, PtrA, PtrB, N->getOperand(2));
- Hi.dumpr();
+void DAGTypeLegalizer::SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo,
+ SDValue &Hi) {
+ EVT EltVT;
+ switch (N->getConstantOperandVal(2)) {
+ case 1:
+ EltVT = MVT::i8;
+ break;
+ case 2:
+ EltVT = MVT::i16;
+ break;
+ case 4:
+ EltVT = MVT::i32;
+ break;
+ case 8:
+ EltVT = MVT::i64;
+ break;
+  default:
+    llvm_unreachable("Unexpected element size");
+  }
+ SDLoc DL(N);
+ EVT LoVT, HiVT;
+ std::tie(LoVT, HiVT) = DAG.GetSplitDestVTs(N->getValueType(0));
+ SDValue PtrA = N->getOperand(0);
+ SDValue PtrB = N->getOperand(1);
+ Lo = DAG.getNode(N->getOpcode(), DL, LoVT, PtrA, PtrB, N->getOperand(2));
+
+ EVT StoreVT =
+ EVT::getVectorVT(*DAG.getContext(), EltVT, HiVT.getVectorMinNumElements(),
+ HiVT.isScalableVT());
+ PtrA = DAG.getNode(
+ ISD::ADD, DL, MVT::i64, PtrA,
+ DAG.getConstant(StoreVT.getStoreSizeInBits() / 8, DL, MVT::i64));
+ PtrB = DAG.getNode(
+ ISD::ADD, DL, MVT::i64, PtrB,
+ DAG.getConstant(StoreVT.getStoreSizeInBits() / 8, DL, MVT::i64));
+ Hi = DAG.getNode(N->getOpcode(), DL, HiVT, PtrA, PtrB, N->getOperand(2));
}
void DAGTypeLegalizer::SplitVecRes_BUILD_VECTOR(SDNode *N, SDValue &Lo,
@@ -4728,6 +4751,10 @@ void DAGTypeLegalizer::WidenVectorResult(SDNode *N, unsigned ResNo) {
#endif
report_fatal_error("Do not know how to widen the result of this operator!");
+ case ISD::LOOP_DEPENDENCE_RAW_MASK:
+ case ISD::LOOP_DEPENDENCE_WAR_MASK:
+ Res = WidenVecRes_LOOP_DEPENDENCE_MASK(N);
+ break;
case ISD::MERGE_VALUES: Res = WidenVecRes_MERGE_VALUES(N, ResNo); break;
case ISD::ADDRSPACECAST:
Res = WidenVecRes_ADDRSPACECAST(N);
@@ -5930,6 +5957,13 @@ SDValue DAGTypeLegalizer::WidenVecRes_BITCAST(SDNode *N) {
return CreateStackStoreLoad(InOp, WidenVT);
}
+SDValue DAGTypeLegalizer::WidenVecRes_LOOP_DEPENDENCE_MASK(SDNode *N) {
+ return DAG.getNode(
+ N->getOpcode(), SDLoc(N),
+ TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0)),
+ N->getOperand(0), N->getOperand(1), N->getOperand(2));
+}
+
SDValue DAGTypeLegalizer::WidenVecRes_BUILD_VECTOR(SDNode *N) {
SDLoc dl(N);
// Build a vector with undefined for the new nodes.
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index e01e60f623e75..01457c8fc4450 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5248,44 +5248,103 @@ AArch64TargetLowering::LowerLOOP_DEPENDENCE_MASK(SDValue Op,
SelectionDAG &DAG) const {
SDLoc DL(Op);
uint64_t EltSize = Op.getConstantOperandVal(2);
- EVT VT = Op.getValueType();
+ EVT FullVT = Op.getValueType();
+ unsigned NumElements = FullVT.getVectorMinNumElements();
+ unsigned NumSplits = 0;
+ EVT EltVT;
// Make sure that the promoted mask size and element size match
switch (EltSize) {
case 1:
- assert((VT == MVT::v16i8 || VT == MVT::nxv16i1) &&
+ assert((FullVT == MVT::v16i8 || FullVT == MVT::nxv16i1) &&
"Unexpected mask or element size");
+ EltVT = MVT::i8;
break;
case 2:
- assert((VT == MVT::v8i8 || VT == MVT::nxv8i1) &&
- "Unexpected mask or element size");
+ if (FullVT == MVT::v16i8)
+ NumSplits = 1;
+ else
+ assert((FullVT == MVT::v8i8 || FullVT == MVT::nxv8i1) &&
+ "Unexpected mask or element size");
+ EltVT = MVT::i16;
break;
case 4:
- assert((VT == MVT::v4i16 || VT == MVT::nxv4i1) &&
- "Unexpected mask or element size");
+ if (NumElements >= 8)
+ NumSplits = NumElements / 8;
+ else
+ assert((FullVT == MVT::v4i16 || FullVT == MVT::nxv4i1) &&
+ "Unexpected mask or element size");
+ EltVT = MVT::i32;
break;
case 8:
- assert((VT == MVT::v2i32 || VT == MVT::nxv2i1) &&
- "Unexpected mask or element size");
+ if (NumElements >= 4)
+ NumSplits = NumElements / 4;
+ else
+ assert((FullVT == MVT::v2i32 || FullVT == MVT::nxv2i1) &&
+ "Unexpected mask or element size");
+ EltVT = MVT::i64;
break;
default:
llvm_unreachable("Unexpected element size for get.alias.lane.mask");
break;
}
- // We can use the SVE whilewr/whilerw instruction to lower this
- // intrinsic by creating the appropriate sequence of scalable vector
- // operations and then extracting a fixed-width subvector from the scalable
- // vector. Scalable vector variants are already legal.
- EVT ContainerVT =
- EVT::getVectorVT(*DAG.getContext(), VT.getVectorElementType(),
- VT.getVectorNumElements(), true);
- EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
+ auto LowerToWhile = [&](EVT VT, unsigned AddrScale) {
+ SDValue PtrA = Op.getOperand(0);
+ SDValue PtrB = Op.getOperand(1);
+
+ EVT StoreVT =
+ EVT::getVectorVT(*DAG.getContext(), EltVT, VT.getVectorMinNumElements(),
+ VT.isScalableVT());
+ PtrA = DAG.getNode(
+ ISD::ADD, DL, MVT::i64, PtrA,
+ DAG.getConstant(StoreVT.getStoreSizeInBits() / 8 * AddrScale, DL,
+ MVT::i64));
+ PtrB = DAG.getNode(
+ ISD::ADD, DL, MVT::i64, PtrB,
+ DAG.getConstant(StoreVT.getStoreSizeInBits() / 8 * AddrScale, DL,
+ MVT::i64));
+
+ if (VT.isScalableVT())
+ return DAG.getNode(Op.getOpcode(), DL, VT, PtrA, PtrB, Op.getOperand(2));
+
+ // We can use the SVE whilewr/whilerw instruction to lower this
+ // intrinsic by creating the appropriate sequence of scalable vector
+ // operations and then extracting a fixed-width subvector from the scalable
+ // vector. Scalable vector variants are already legal.
+ EVT ContainerVT =
+ EVT::getVectorVT(*DAG.getContext(), VT.getVectorElementType(),
+ VT.getVectorNumElements(), true);
+ EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
+
+ SDValue Mask =
+ DAG.getNode(Op.getOpcode(), DL, WhileVT, PtrA, PtrB, Op.getOperand(2));
+ SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, DL, ContainerVT, Mask);
+ return DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, MaskAsInt,
+ DAG.getVectorIdxConstant(0, DL));
+ };
- SDValue Mask = DAG.getNode(Op.getOpcode(), DL, WhileVT, Op.getOperand(0),
- Op.getOperand(1), Op.getOperand(2));
- SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, DL, ContainerVT, Mask);
- return DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, MaskAsInt,
- DAG.getVectorIdxConstant(0, DL));
+ if (NumSplits == 0)
+    return LowerToWhile(FullVT, 0);
+
+ SDValue FullVec = DAG.getUNDEF(FullVT);
+
+ unsigned NumElementsPerSplit = NumElements / (2 * NumSplits);
+ for (unsigned Split = 0, InsertIdx = 0; Split < NumSplits;
+ Split++, InsertIdx += 2) {
+ EVT PartVT =
+ EVT::getVectorVT(*DAG.getContext(), FullVT.getVectorElementType(),
+ NumElementsPerSplit, FullVT.isScalableVT());
+ SDValue Low = LowerToWhile(PartVT, InsertIdx);
+ SDValue High = LowerToWhile(PartVT, InsertIdx + 1);
+ unsigned InsertIdxLow = InsertIdx * NumElementsPerSplit;
+ unsigned InsertIdxHigh = (InsertIdx + 1) * NumElementsPerSplit;
+ SDValue Insert =
+ DAG.getNode(ISD::INSERT_SUBVECTOR, DL, FullVT, FullVec, Low,
+ DAG.getVectorIdxConstant(InsertIdxLow, DL));
+ FullVec = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, FullVT, Insert, High,
+ DAG.getVectorIdxConstant(InsertIdxHigh, DL));
+ }
+ return FullVec;
}
SDValue AArch64TargetLowering::LowerBITCAST(SDValue Op,
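Tying the split loop above to the tests that follow: for a <16 x i1> mask with element size 2, the promoted type is v16i8, so NumSplits = 1 and NumElementsPerSplit = 8, and the two LowerToWhile parts behave like the rough IR below (the +16 offset is the 8 x i16 store footprint of each part):

  %lo   = call <8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 2)       ; AddrScale 0
  %a.hi = getelementptr i8, ptr %a, i64 16
  %b.hi = getelementptr i8, ptr %b, i64 16
  %hi   = call <8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a.hi, ptr %b.hi, i64 2) ; AddrScale 1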
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 94fc39054200f..0ee2f6f3a9b10 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -319,3 +319,784 @@ entry:
%0 = call <2 x i1> @llvm.loop.dependence.raw.mask.v2i1(ptr %a, ptr %b, i64 8)
ret <2 x i1> %0
}
+
+define <32 x i1> @whilewr_8_split(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_8_split:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: add x9, x1, #16
+; CHECK-SVE-NEXT: add x10, x0, #16
+; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE-NEXT: whilewr p1.b, x10, x9
+; CHECK-SVE-NEXT: adrp x9, .LCPI8_0
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI8_0]
+; CHECK-SVE-NEXT: mov z1.b, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
+; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: ext v2.16b, v0.16b, v0.16b, #8
+; CHECK-SVE-NEXT: ext v3.16b, v1.16b, v1.16b, #8
+; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v2.16b
+; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v3.16b
+; CHECK-SVE-NEXT: addv h0, v0.8h
+; CHECK-SVE-NEXT: addv h1, v1.8h
+; CHECK-SVE-NEXT: str h0, [x8]
+; CHECK-SVE-NEXT: str h1, [x8, #2]
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_8_split:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_1
+; CHECK-NOSVE-NEXT: sub x11, x1, x0
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI8_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_2
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI8_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x9, :lo12:.LCPI8_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_4
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_3
+; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI8_4]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_5
+; CHECK-NOSVE-NEXT: dup v2.2d, x11
+; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI8_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_6
+; CHECK-NOSVE-NEXT: ldr q6, [x9, :lo12:.LCPI8_5]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI8_6]
+; CHECK-NOSVE-NEXT: cmp x11, #1
+; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI8_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI8_8]
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: addv h0, v0.8h
+; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 1)
+ ret <32 x i1> %0
+}
+
+define <64 x i1> @whilewr_8_split2(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_8_split2:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: add x9, x1, #48
+; CHECK-SVE-NEXT: add x10, x0, #48
+; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE-NEXT: whilewr p1.b, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #16
+; CHECK-SVE-NEXT: add x10, x1, #32
+; CHECK-SVE-NEXT: add x11, x0, #32
+; CHECK-SVE-NEXT: add x12, x0, #16
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p0.b, x11, x10
+; CHECK-SVE-NEXT: mov z1.b, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p1.b, x12, x9
+; CHECK-SVE-NEXT: adrp x9, .LCPI9_0
+; CHECK-SVE-NEXT: mov z2.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-SVE-NEXT: ldr q4, [x9, :lo12:.LCPI9_0]
+; CHECK-SVE-NEXT: mov z3.b, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
+; CHECK-SVE-NEXT: shl v2.16b, v2.16b, #7
+; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-SVE-NEXT: shl v3.16b, v3.16b, #7
+; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-SVE-NEXT: cmlt v2.16b, v2.16b, #0
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v4.16b
+; CHECK-SVE-NEXT: cmlt v3.16b, v3.16b, #0
+; CHECK-SVE-NEXT: and v1.16b, v1.16b, v4.16b
+; CHECK-SVE-NEXT: and v2.16b, v2.16b, v4.16b
+; CHECK-SVE-NEXT: and v3.16b, v3.16b, v4.16b
+; CHECK-SVE-NEXT: ext v4.16b, v0.16b, v0.16b, #8
+; CHECK-SVE-NEXT: ext v5.16b, v1.16b, v1.16b, #8
+; CHECK-SVE-NEXT: ext v6.16b, v2.16b, v2.16b, #8
+; CHECK-SVE-NEXT: ext v7.16b, v3.16b, v3.16b, #8
+; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v4.16b
+; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v5.16b
+; CHECK-SVE-NEXT: zip1 v2.16b, v2.16b, v6.16b
+; CHECK-SVE-NEXT: zip1 v3.16b, v3.16b, v7.16b
+; CHECK-SVE-NEXT: addv h0, v0.8h
+; CHECK-SVE-NEXT: addv h1, v1.8h
+; CHECK-SVE-NEXT: addv h2, v2.8h
+; CHECK-SVE-NEXT: addv h3, v3.8h
+; CHECK-SVE-NEXT: str h0, [x8]
+; CHECK-SVE-NEXT: str h1, [x8, #6]
+; CHECK-SVE-NEXT: str h2, [x8, #4]
+; CHECK-SVE-NEXT: str h3, [x8, #2]
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_8_split2:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_1
+; CHECK-NOSVE-NEXT: sub x11, x1, x0
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI9_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_2
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI9_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x9, :lo12:.LCPI9_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_4
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_3
+; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI9_4]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_5
+; CHECK-NOSVE-NEXT: dup v2.2d, x11
+; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI9_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_6
+; CHECK-NOSVE-NEXT: ldr q6, [x9, :lo12:.LCPI9_5]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI9_6]
+; CHECK-NOSVE-NEXT: cmp x11, #1
+; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI9_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI9_8]
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: addv h0, v0.8h
+; CHECK-NOSVE-NEXT: str h0, [x8, #6]
+; CHECK-NOSVE-NEXT: str h0, [x8, #4]
+; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <64 x i1> @llvm.loop.dependence.war.mask.v64i1(ptr %a, ptr %b, i64 1)
+ ret <64 x i1> %0
+}
+
+define <16 x i1> @whilewr_16_split(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_16_split:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: add x8, x1, #16
+; CHECK-SVE-NEXT: add x9, x0, #16
+; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE-NEXT: whilewr p1.h, x9, x8
+; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z1.h, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_16_split:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_1
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI10_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_2
+; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI10_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_4
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI10_1]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_3
+; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI10_4]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_6
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI10_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_5
+; CHECK-NOSVE-NEXT: dup v4.2d, x8
+; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI10_6]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_7
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI10_5]
+; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI10_7]
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 2)
+ ret <16 x i1> %0
+}
+
+define <32 x i1> @whilewr_16_split2(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_16_split2:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: add x9, x1, #48
+; CHECK-SVE-NEXT: add x10, x0, #48
+; CHECK-SVE-NEXT: add x11, x1, #16
+; CHECK-SVE-NEXT: whilewr p1.h, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #32
+; CHECK-SVE-NEXT: add x10, x0, #32
+; CHECK-SVE-NEXT: add x12, x0, #16
+; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE-NEXT: whilewr p2.h, x10, x9
+; CHECK-SVE-NEXT: mov z0.h, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: adrp x9, .LCPI11_0
+; CHECK-SVE-NEXT: whilewr p3.h, x12, x11
+; CHECK-SVE-NEXT: mov z2.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z1.h, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z3.h, p3/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-SVE-NEXT: uzp1 v1.16b, v2.16b, v3.16b
+; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI11_0]
+; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
+; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: ext v2.16b, v0.16b, v0.16b, #8
+; CHECK-SVE-NEXT: ext v3.16b, v1.16b, v1.16b, #8
+; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v2.16b
+; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v3.16b
+; CHECK-SVE-NEXT: addv h0, v0.8h
+; CHECK-SVE-NEXT: addv h1, v1.8h
+; CHECK-SVE-NEXT: str h0, [x8, #2]
+; CHECK-SVE-NEXT: str h1, [x8]
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_16_split2:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_0
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_1
+; CHECK-NOSVE-NEXT: add x9, x9, x9, lsr #63
+; CHECK-NOSVE-NEXT: ldr q0, [x10, :lo12:.LCPI11_0]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI11_2]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_4
+; CHECK-NOSVE-NEXT: ldr q1, [x11, :lo12:.LCPI11_1]
+; CHECK-NOSVE-NEXT: asr x9, x9, #1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_3
+; CHECK-NOSVE-NEXT: ldr q5, [x10, :lo12:.LCPI11_4]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_6
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI11_3]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_5
+; CHECK-NOSVE-NEXT: dup v4.2d, x9
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI11_6]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_7
+; CHECK-NOSVE-NEXT: ldr q6, [x11, :lo12:.LCPI11_5]
+; CHECK-NOSVE-NEXT: ldr q16, [x10, :lo12:.LCPI11_7]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI11_8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI11_8]
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: addv h0, v0.8h
+; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 2)
+ ret <32 x i1> %0
+}
+
+define <8 x i1> @whilewr_32_split(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_32_split:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE-NEXT: add x10, x0, #16
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov w8, v0.s[1]
+; CHECK-SVE-NEXT: mov v1.16b, v0.16b
+; CHECK-SVE-NEXT: mov w9, v0.s[2]
+; CHECK-SVE-NEXT: mov v1.h[1], w8
+; CHECK-SVE-NEXT: mov w8, v0.s[3]
+; CHECK-SVE-NEXT: mov v1.h[2], w9
+; CHECK-SVE-NEXT: add x9, x1, #16
+; CHECK-SVE-NEXT: whilewr p0.s, x10, x9
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v1.h[3], w8
+; CHECK-SVE-NEXT: fmov w8, s0
+; CHECK-SVE-NEXT: mov w9, v0.s[1]
+; CHECK-SVE-NEXT: mov v1.h[4], w8
+; CHECK-SVE-NEXT: mov w8, v0.s[2]
+; CHECK-SVE-NEXT: mov v1.h[5], w9
+; CHECK-SVE-NEXT: mov w9, v0.s[3]
+; CHECK-SVE-NEXT: mov v1.h[6], w8
+; CHECK-SVE-NEXT: mov v1.h[7], w9
+; CHECK-SVE-NEXT: xtn v0.8b, v1.8h
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_32_split:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI12_1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI12_2
+; CHECK-NOSVE-NEXT: add x9, x8, #3
+; CHECK-NOSVE-NEXT: cmp x8, #0
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI12_1]
+; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI12_0
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI12_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #2
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI12_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI12_3
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI12_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 4)
+ ret <8 x i1> %0
+}
+
+define <16 x i1> @whilewr_32_split2(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_32_split2:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: add x8, x1, #32
+; CHECK-SVE-NEXT: add x9, x0, #32
+; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE-NEXT: whilewr p1.s, x9, x8
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z1.s, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov w8, v0.s[1]
+; CHECK-SVE-NEXT: mov v2.16b, v0.16b
+; CHECK-SVE-NEXT: mov w10, v0.s[2]
+; CHECK-SVE-NEXT: mov w9, v1.s[1]
+; CHECK-SVE-NEXT: mov v3.16b, v1.16b
+; CHECK-SVE-NEXT: mov w11, v1.s[3]
+; CHECK-SVE-NEXT: mov v2.h[1], w8
+; CHECK-SVE-NEXT: mov w8, v1.s[2]
+; CHECK-SVE-NEXT: mov v3.h[1], w9
+; CHECK-SVE-NEXT: mov w9, v0.s[3]
+; CHECK-SVE-NEXT: mov v2.h[2], w10
+; CHECK-SVE-NEXT: add x10, x1, #16
+; CHECK-SVE-NEXT: mov v3.h[2], w8
+; CHECK-SVE-NEXT: add x8, x0, #16
+; CHECK-SVE-NEXT: whilewr p0.s, x8, x10
+; CHECK-SVE-NEXT: add x8, x1, #48
+; CHECK-SVE-NEXT: add x10, x0, #48
+; CHECK-SVE-NEXT: whilewr p1.s, x10, x8
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v2.h[3], w9
+; CHECK-SVE-NEXT: mov z1.s, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v3.h[3], w11
+; CHECK-SVE-NEXT: fmov w9, s0
+; CHECK-SVE-NEXT: mov w8, v0.s[1]
+; CHECK-SVE-NEXT: fmov w10, s1
+; CHECK-SVE-NEXT: mov w11, v1.s[1]
+; CHECK-SVE-NEXT: mov v2.h[4], w9
+; CHECK-SVE-NEXT: mov w9, v0.s[2]
+; CHECK-SVE-NEXT: mov v3.h[4], w10
+; CHECK-SVE-NEXT: mov w10, v1.s[2]
+; CHECK-SVE-NEXT: mov v2.h[5], w8
+; CHECK-SVE-NEXT: mov w8, v0.s[3]
+; CHECK-SVE-NEXT: mov v3.h[5], w11
+; CHECK-SVE-NEXT: mov w11, v1.s[3]
+; CHECK-SVE-NEXT: mov v2.h[6], w9
+; CHECK-SVE-NEXT: mov v3.h[6], w10
+; CHECK-SVE-NEXT: mov v2.h[7], w8
+; CHECK-SVE-NEXT: mov v3.h[7], w11
+; CHECK-SVE-NEXT: uzp1 v0.16b, v2.16b, v3.16b
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_32_split2:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_0
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI13_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_2
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI13_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_4
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI13_1]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_3
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI13_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_6
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI13_3]
+; CHECK-NOSVE-NEXT: dup v4.2d, x9
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_5
+; CHECK-NOSVE-NEXT: ldr q7, [x8, :lo12:.LCPI13_6]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_7
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI13_5]
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI13_7]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 4)
+ ret <16 x i1> %0
+}
+
+define <4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_64_split:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.d, x0, x1
+; CHECK-SVE-NEXT: add x8, x1, #16
+; CHECK-SVE-NEXT: add x9, x0, #16
+; CHECK-SVE-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p0.d, x9, x8
+; CHECK-SVE-NEXT: mov z1.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v0.s[1], v0.s[2]
+; CHECK-SVE-NEXT: mov v0.s[2], v1.s[0]
+; CHECK-SVE-NEXT: mov v0.s[3], v1.s[2]
+; CHECK-SVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_64_split:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI14_0
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI14_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI14_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <4 x i1> @llvm.loop.dependence.war.mask.v4i1(ptr %a, ptr %b, i64 8)
+ ret <4 x i1> %0
+}
+
+define <8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_64_split2:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: add x8, x1, #32
+; CHECK-SVE-NEXT: add x9, x0, #32
+; CHECK-SVE-NEXT: whilewr p0.d, x0, x1
+; CHECK-SVE-NEXT: whilewr p1.d, x9, x8
+; CHECK-SVE-NEXT: add x8, x1, #16
+; CHECK-SVE-NEXT: add x9, x0, #16
+; CHECK-SVE-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p0.d, x9, x8
+; CHECK-SVE-NEXT: add x8, x1, #48
+; CHECK-SVE-NEXT: mov z1.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: add x9, x0, #48
+; CHECK-SVE-NEXT: whilewr p1.d, x9, x8
+; CHECK-SVE-NEXT: mov z2.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v0.s[1], v0.s[2]
+; CHECK-SVE-NEXT: mov v1.s[1], v1.s[2]
+; CHECK-SVE-NEXT: mov z3.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v0.s[2], v2.s[0]
+; CHECK-SVE-NEXT: mov v1.s[2], v3.s[0]
+; CHECK-SVE-NEXT: mov v0.s[3], v2.s[2]
+; CHECK-SVE-NEXT: mov v1.s[3], v3.s[2]
+; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-SVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_64_split2:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI15_1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI15_2
+; CHECK-NOSVE-NEXT: add x9, x8, #7
+; CHECK-NOSVE-NEXT: cmp x8, #0
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI15_1]
+; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_0
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI15_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #3
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI15_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_3
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI15_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 8)
+ ret <8 x i1> %0
+}
+
+define <9 x i1> @whilewr_8_widen(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_8_widen:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: umov w9, v0.b[0]
+; CHECK-SVE-NEXT: umov w10, v0.b[1]
+; CHECK-SVE-NEXT: umov w11, v0.b[2]
+; CHECK-SVE-NEXT: umov w12, v0.b[7]
+; CHECK-SVE-NEXT: and w9, w9, #0x1
+; CHECK-SVE-NEXT: bfi w9, w10, #1, #1
+; CHECK-SVE-NEXT: umov w10, v0.b[3]
+; CHECK-SVE-NEXT: bfi w9, w11, #2, #1
+; CHECK-SVE-NEXT: umov w11, v0.b[4]
+; CHECK-SVE-NEXT: bfi w9, w10, #3, #1
+; CHECK-SVE-NEXT: umov w10, v0.b[5]
+; CHECK-SVE-NEXT: bfi w9, w11, #4, #1
+; CHECK-SVE-NEXT: umov w11, v0.b[6]
+; CHECK-SVE-NEXT: bfi w9, w10, #5, #1
+; CHECK-SVE-NEXT: umov w10, v0.b[8]
+; CHECK-SVE-NEXT: bfi w9, w11, #6, #1
+; CHECK-SVE-NEXT: ubfiz w11, w12, #7, #1
+; CHECK-SVE-NEXT: orr w9, w9, w11
+; CHECK-SVE-NEXT: orr w9, w9, w10, lsl #8
+; CHECK-SVE-NEXT: and w9, w9, #0x1ff
+; CHECK-SVE-NEXT: strh w9, [x8]
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_8_widen:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_0
+; CHECK-NOSVE-NEXT: sub x11, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI16_1
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI16_2
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI16_0]
+; CHECK-NOSVE-NEXT: dup v1.2d, x11
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI16_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x12, :lo12:.LCPI16_2]
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI16_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI16_4
+; CHECK-NOSVE-NEXT: cmp x11, #1
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v1.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v1.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v1.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v1.2d, v4.2d
+; CHECK-NOSVE-NEXT: ldr q5, [x10, :lo12:.LCPI16_4]
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v2.4s, v0.4s
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v1.2d, v5.2d
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v4.4s, v3.4s
+; CHECK-NOSVE-NEXT: xtn v1.2s, v1.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v2.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.4h, v1.4h, v0.4h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: umov w9, v0.b[0]
+; CHECK-NOSVE-NEXT: umov w10, v0.b[1]
+; CHECK-NOSVE-NEXT: umov w11, v0.b[2]
+; CHECK-NOSVE-NEXT: umov w12, v0.b[7]
+; CHECK-NOSVE-NEXT: and w9, w9, #0x1
+; CHECK-NOSVE-NEXT: bfi w9, w10, #1, #1
+; CHECK-NOSVE-NEXT: umov w10, v0.b[3]
+; CHECK-NOSVE-NEXT: bfi w9, w11, #2, #1
+; CHECK-NOSVE-NEXT: umov w11, v0.b[4]
+; CHECK-NOSVE-NEXT: bfi w9, w10, #3, #1
+; CHECK-NOSVE-NEXT: umov w10, v0.b[5]
+; CHECK-NOSVE-NEXT: bfi w9, w11, #4, #1
+; CHECK-NOSVE-NEXT: umov w11, v0.b[6]
+; CHECK-NOSVE-NEXT: bfi w9, w10, #5, #1
+; CHECK-NOSVE-NEXT: umov w10, v0.b[8]
+; CHECK-NOSVE-NEXT: bfi w9, w11, #6, #1
+; CHECK-NOSVE-NEXT: ubfiz w11, w12, #7, #1
+; CHECK-NOSVE-NEXT: orr w9, w9, w11
+; CHECK-NOSVE-NEXT: orr w9, w9, w10, lsl #8
+; CHECK-NOSVE-NEXT: and w9, w9, #0x1ff
+; CHECK-NOSVE-NEXT: strh w9, [x8]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <9 x i1> @llvm.loop.dependence.war.mask.v9i1(ptr %a, ptr %b, i64 1)
+ ret <9 x i1> %0
+}
+
+define <7 x i1> @whilewr_16_widen(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_16_widen:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-SVE-NEXT: umov w0, v0.b[0]
+; CHECK-SVE-NEXT: umov w1, v0.b[1]
+; CHECK-SVE-NEXT: umov w2, v0.b[2]
+; CHECK-SVE-NEXT: umov w3, v0.b[3]
+; CHECK-SVE-NEXT: umov w4, v0.b[4]
+; CHECK-SVE-NEXT: umov w5, v0.b[5]
+; CHECK-SVE-NEXT: umov w6, v0.b[6]
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_16_widen:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI17_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_1
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI17_2
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI17_3
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI17_0]
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI17_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI17_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: ldr q4, [x12, :lo12:.LCPI17_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: umov w0, v0.b[0]
+; CHECK-NOSVE-NEXT: umov w1, v0.b[1]
+; CHECK-NOSVE-NEXT: umov w2, v0.b[2]
+; CHECK-NOSVE-NEXT: umov w3, v0.b[3]
+; CHECK-NOSVE-NEXT: umov w4, v0.b[4]
+; CHECK-NOSVE-NEXT: umov w5, v0.b[5]
+; CHECK-NOSVE-NEXT: umov w6, v0.b[6]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <7 x i1> @llvm.loop.dependence.war.mask.v7i1(ptr %a, ptr %b, i64 2)
+ ret <7 x i1> %0
+}
+
+define <3 x i1> @whilewr_32_widen(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_32_widen:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-SVE-NEXT: umov w0, v0.h[0]
+; CHECK-SVE-NEXT: umov w1, v0.h[1]
+; CHECK-SVE-NEXT: umov w2, v0.h[2]
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_32_widen:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI18_0
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI18_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI18_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: umov w0, v0.h[0]
+; CHECK-NOSVE-NEXT: umov w1, v0.h[1]
+; CHECK-NOSVE-NEXT: umov w2, v0.h[2]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <3 x i1> @llvm.loop.dependence.war.mask.v3i1(ptr %a, ptr %b, i64 4)
+ ret <3 x i1> %0
+}
>From 26bf362436759c4fe8c2e7d39fdd2596e3781405 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 11 Aug 2025 10:50:30 +0100
Subject: [PATCH 30/43] Remove assertions and expand invalid immediates
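
The assertions in LowerLOOP_DEPENDENCE_MASK rejected mask and element
size combinations that whilewr/whilerw cannot encode. Instead of
asserting, lowering now returns SDValue() for unsupported element sizes
so the node falls back to the generic expansion, and the pointer step
used when splitting or lowering a scalable mask is computed as a
multiple of vscale rather than as a fixed constant.

For example, a 3-byte element size has no whilewr encoding, so the call
below (covered by the new whilewr_badimm test) is now expanded:

  %0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 3)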
---
.../SelectionDAG/LegalizeVectorTypes.cpp | 22 +-
.../Target/AArch64/AArch64ISelLowering.cpp | 46 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 605 ++++-
.../CodeGen/AArch64/alias_mask_scalable.ll | 2179 +++++++++++++++++
4 files changed, 2781 insertions(+), 71 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index 3c9ac499b1d28..b45df45b12d2a 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -1639,15 +1639,19 @@ void DAGTypeLegalizer::SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo,
SDValue PtrB = N->getOperand(1);
Lo = DAG.getNode(N->getOpcode(), DL, LoVT, PtrA, PtrB, N->getOperand(2));
- EVT StoreVT =
- EVT::getVectorVT(*DAG.getContext(), EltVT, HiVT.getVectorMinNumElements(),
- HiVT.isScalableVT());
- PtrA = DAG.getNode(
- ISD::ADD, DL, MVT::i64, PtrA,
- DAG.getConstant(StoreVT.getStoreSizeInBits() / 8, DL, MVT::i64));
- PtrB = DAG.getNode(
- ISD::ADD, DL, MVT::i64, PtrB,
- DAG.getConstant(StoreVT.getStoreSizeInBits() / 8, DL, MVT::i64));
+ EVT StoreVT = EVT::getVectorVT(*DAG.getContext(), EltVT,
+ HiVT.getVectorMinNumElements(), false);
+ unsigned Offset = StoreVT.getStoreSizeInBits() / 8;
+ SDValue Addend;
+
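+ // For a scalable mask the low half spans Offset * vscale bytes at
+ // runtime, so the pointer step must itself be scaled by vscale;
+ // fixed-width halves can use a plain constant.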
+ if (HiVT.isScalableVT())
+ Addend = DAG.getVScale(DL, MVT::i64, APInt(64, Offset));
+ else
+ Addend = DAG.getConstant(Offset, DL, MVT::i64);
+
+ PtrA = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrA, Addend);
+
+ PtrB = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrB, Addend);
Hi = DAG.getNode(N->getOpcode(), DL, HiVT, PtrA, PtrB, N->getOperand(2));
}
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 01457c8fc4450..eac2f5c5e753f 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1924,8 +1924,8 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
setOperationAction(ISD::LOOP_DEPENDENCE_WAR_MASK, VT, Custom);
}
for (auto VT : {MVT::nxv2i1, MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
- setOperationAction(ISD::LOOP_DEPENDENCE_RAW_MASK, VT, Legal);
- setOperationAction(ISD::LOOP_DEPENDENCE_WAR_MASK, VT, Legal);
+ setOperationAction(ISD::LOOP_DEPENDENCE_RAW_MASK, VT, Custom);
+ setOperationAction(ISD::LOOP_DEPENDENCE_WAR_MASK, VT, Custom);
}
}
@@ -5255,54 +5255,44 @@ AArch64TargetLowering::LowerLOOP_DEPENDENCE_MASK(SDValue Op,
// Make sure that the promoted mask size and element size match
switch (EltSize) {
case 1:
- assert((FullVT == MVT::v16i8 || FullVT == MVT::nxv16i1) &&
- "Unexpected mask or element size");
EltVT = MVT::i8;
break;
case 2:
- if (FullVT == MVT::v16i8)
- NumSplits = 1;
- else
- assert((FullVT == MVT::v8i8 || FullVT == MVT::nxv8i1) &&
- "Unexpected mask or element size");
+ if (NumElements >= 16)
+ NumSplits = NumElements / 16;
EltVT = MVT::i16;
break;
case 4:
if (NumElements >= 8)
NumSplits = NumElements / 8;
- else
- assert((FullVT == MVT::v4i16 || FullVT == MVT::nxv4i1) &&
- "Unexpected mask or element size");
EltVT = MVT::i32;
break;
case 8:
if (NumElements >= 4)
NumSplits = NumElements / 4;
- else
- assert((FullVT == MVT::v2i32 || FullVT == MVT::nxv2i1) &&
- "Unexpected mask or element size");
EltVT = MVT::i64;
break;
default:
- llvm_unreachable("Unexpected element size for get.alias.lane.mask");
- break;
+ // Other element sizes have no whilewr/whilerw encoding, so expand instead.
+ return SDValue();
}
auto LowerToWhile = [&](EVT VT, unsigned AddrScale) {
SDValue PtrA = Op.getOperand(0);
SDValue PtrB = Op.getOperand(1);
- EVT StoreVT =
- EVT::getVectorVT(*DAG.getContext(), EltVT, VT.getVectorMinNumElements(),
- VT.isScalableVT());
- PtrA = DAG.getNode(
- ISD::ADD, DL, MVT::i64, PtrA,
- DAG.getConstant(StoreVT.getStoreSizeInBits() / 8 * AddrScale, DL,
- MVT::i64));
- PtrB = DAG.getNode(
- ISD::ADD, DL, MVT::i64, PtrB,
- DAG.getConstant(StoreVT.getStoreSizeInBits() / 8 * AddrScale, DL,
- MVT::i64));
+ EVT StoreVT = EVT::getVectorVT(*DAG.getContext(), EltVT,
+ VT.getVectorMinNumElements(), false);
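+ // AddrScale is the index of the part being lowered when a wide mask is
+ // split into several whilewr/whilerw operations; step both pointers
+ // past the parts that earlier calls already covered.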
+ unsigned Offset = StoreVT.getStoreSizeInBits() / 8 * AddrScale;
+ SDValue Addend;
+
+ if (VT.isScalableVT())
+ Addend = DAG.getVScale(DL, MVT::i64, APInt(64, Offset));
+ else
+ Addend = DAG.getConstant(Offset, DL, MVT::i64);
+
+ PtrA = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrA, Addend);
+ PtrB = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrB, Addend);
if (VT.isScalableVT())
return DAG.getNode(Op.getOpcode(), DL, VT, PtrA, PtrB, Op.getOperand(2));
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 0ee2f6f3a9b10..ec00f9de39c22 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -821,6 +821,178 @@ entry:
ret <16 x i1> %0
}
+define <32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_32_split3:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE-NEXT: add x9, x1, #96
+; CHECK-SVE-NEXT: add x10, x0, #96
+; CHECK-SVE-NEXT: add x11, x1, #64
+; CHECK-SVE-NEXT: add x12, x0, #64
+; CHECK-SVE-NEXT: whilewr p1.s, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #32
+; CHECK-SVE-NEXT: add x10, x0, #32
+; CHECK-SVE-NEXT: whilewr p2.s, x12, x11
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p0.s, x10, x9
+; CHECK-SVE-NEXT: mov z4.s, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z5.s, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z1.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov w9, v0.s[1]
+; CHECK-SVE-NEXT: mov w12, v4.s[1]
+; CHECK-SVE-NEXT: mov w10, v0.s[2]
+; CHECK-SVE-NEXT: mov w13, v5.s[1]
+; CHECK-SVE-NEXT: mov w11, v0.s[3]
+; CHECK-SVE-NEXT: // kill: def $q0 killed $q0 killed $z0
+; CHECK-SVE-NEXT: mov v2.16b, v4.16b
+; CHECK-SVE-NEXT: mov w14, v1.s[1]
+; CHECK-SVE-NEXT: mov v3.16b, v5.16b
+; CHECK-SVE-NEXT: mov w15, v1.s[2]
+; CHECK-SVE-NEXT: mov w16, v1.s[3]
+; CHECK-SVE-NEXT: // kill: def $q1 killed $q1 killed $z1
+; CHECK-SVE-NEXT: mov w17, v4.s[2]
+; CHECK-SVE-NEXT: mov w18, v5.s[2]
+; CHECK-SVE-NEXT: mov v0.h[1], w9
+; CHECK-SVE-NEXT: mov v2.h[1], w12
+; CHECK-SVE-NEXT: add x9, x1, #16
+; CHECK-SVE-NEXT: mov v3.h[1], w13
+; CHECK-SVE-NEXT: add x12, x0, #16
+; CHECK-SVE-NEXT: add x13, x1, #112
+; CHECK-SVE-NEXT: mov v1.h[1], w14
+; CHECK-SVE-NEXT: add x14, x0, #112
+; CHECK-SVE-NEXT: whilewr p0.s, x12, x9
+; CHECK-SVE-NEXT: whilewr p1.s, x14, x13
+; CHECK-SVE-NEXT: add x13, x0, #80
+; CHECK-SVE-NEXT: mov w9, v4.s[3]
+; CHECK-SVE-NEXT: mov v0.h[2], w10
+; CHECK-SVE-NEXT: add x10, x1, #80
+; CHECK-SVE-NEXT: mov w12, v5.s[3]
+; CHECK-SVE-NEXT: whilewr p2.s, x13, x10
+; CHECK-SVE-NEXT: add x10, x1, #48
+; CHECK-SVE-NEXT: add x13, x0, #48
+; CHECK-SVE-NEXT: mov v2.h[2], w17
+; CHECK-SVE-NEXT: mov v3.h[2], w18
+; CHECK-SVE-NEXT: mov v1.h[2], w15
+; CHECK-SVE-NEXT: mov z4.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p0.s, x13, x10
+; CHECK-SVE-NEXT: mov z5.s, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z6.s, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v0.h[3], w11
+; CHECK-SVE-NEXT: mov z7.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v2.h[3], w9
+; CHECK-SVE-NEXT: mov v3.h[3], w12
+; CHECK-SVE-NEXT: mov v1.h[3], w16
+; CHECK-SVE-NEXT: fmov w10, s4
+; CHECK-SVE-NEXT: fmov w12, s5
+; CHECK-SVE-NEXT: fmov w14, s6
+; CHECK-SVE-NEXT: fmov w15, s7
+; CHECK-SVE-NEXT: mov w9, v4.s[1]
+; CHECK-SVE-NEXT: mov w11, v5.s[1]
+; CHECK-SVE-NEXT: mov w13, v6.s[1]
+; CHECK-SVE-NEXT: mov v2.h[4], w12
+; CHECK-SVE-NEXT: mov v3.h[4], w14
+; CHECK-SVE-NEXT: mov w12, v7.s[1]
+; CHECK-SVE-NEXT: mov v0.h[4], w10
+; CHECK-SVE-NEXT: mov v1.h[4], w15
+; CHECK-SVE-NEXT: mov w10, v4.s[2]
+; CHECK-SVE-NEXT: mov w14, v5.s[2]
+; CHECK-SVE-NEXT: mov w15, v6.s[2]
+; CHECK-SVE-NEXT: mov v2.h[5], w11
+; CHECK-SVE-NEXT: mov v3.h[5], w13
+; CHECK-SVE-NEXT: mov w11, v7.s[2]
+; CHECK-SVE-NEXT: mov v0.h[5], w9
+; CHECK-SVE-NEXT: mov v1.h[5], w12
+; CHECK-SVE-NEXT: mov w9, v4.s[3]
+; CHECK-SVE-NEXT: mov w12, v5.s[3]
+; CHECK-SVE-NEXT: mov w13, v6.s[3]
+; CHECK-SVE-NEXT: mov v2.h[6], w14
+; CHECK-SVE-NEXT: mov v3.h[6], w15
+; CHECK-SVE-NEXT: mov w14, v7.s[3]
+; CHECK-SVE-NEXT: mov v0.h[6], w10
+; CHECK-SVE-NEXT: mov v1.h[6], w11
+; CHECK-SVE-NEXT: mov v2.h[7], w12
+; CHECK-SVE-NEXT: mov v3.h[7], w13
+; CHECK-SVE-NEXT: mov v0.h[7], w9
+; CHECK-SVE-NEXT: mov v1.h[7], w14
+; CHECK-SVE-NEXT: adrp x9, .LCPI14_0
+; CHECK-SVE-NEXT: uzp1 v2.16b, v3.16b, v2.16b
+; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: shl v1.16b, v2.16b, #7
+; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI14_0]
+; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-SVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
+; CHECK-SVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
+; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
+; CHECK-SVE-NEXT: addv h1, v1.8h
+; CHECK-SVE-NEXT: addv h0, v0.8h
+; CHECK-SVE-NEXT: str h1, [x8, #2]
+; CHECK-SVE-NEXT: str h0, [x8]
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_32_split3:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_0
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q0, [x11, :lo12:.LCPI14_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_2
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI14_1]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_3
+; CHECK-NOSVE-NEXT: ldr q2, [x11, :lo12:.LCPI14_2]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_4
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI14_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_5
+; CHECK-NOSVE-NEXT: dup v4.2d, x9
+; CHECK-NOSVE-NEXT: ldr q5, [x11, :lo12:.LCPI14_4]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_6
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI14_5]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_7
+; CHECK-NOSVE-NEXT: ldr q7, [x11, :lo12:.LCPI14_6]
+; CHECK-NOSVE-NEXT: ldr q16, [x10, :lo12:.LCPI14_7]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI14_8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI14_8]
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: addv h0, v0.8h
+; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 4)
+ ret <32 x i1> %0
+}
+
define <4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_split:
; CHECK-SVE: // %bb.0: // %entry
@@ -839,14 +1011,14 @@ define <4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
; CHECK-NOSVE-LABEL: whilewr_64_split:
; CHECK-NOSVE: // %bb.0: // %entry
; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI14_0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI15_0
; CHECK-NOSVE-NEXT: add x10, x9, #7
; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI14_0]
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI15_0]
; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_1
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI15_1
; CHECK-NOSVE-NEXT: asr x9, x9, #3
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI14_1]
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI15_1]
; CHECK-NOSVE-NEXT: dup v0.2d, x9
; CHECK-NOSVE-NEXT: cmp x9, #1
; CHECK-NOSVE-NEXT: cset w8, lt
@@ -892,18 +1064,18 @@ define <8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
; CHECK-NOSVE-LABEL: whilewr_64_split2:
; CHECK-NOSVE: // %bb.0: // %entry
; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI15_1
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI15_2
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI16_1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI16_2
; CHECK-NOSVE-NEXT: add x9, x8, #7
; CHECK-NOSVE-NEXT: cmp x8, #0
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI15_1]
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI16_1]
; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_0
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI15_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_0
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI16_2]
; CHECK-NOSVE-NEXT: asr x8, x8, #3
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI15_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_3
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI15_3]
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI16_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_3
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI16_3]
; CHECK-NOSVE-NEXT: dup v0.2d, x8
; CHECK-NOSVE-NEXT: cmp x8, #1
; CHECK-NOSVE-NEXT: cset w8, lt
@@ -923,6 +1095,277 @@ entry:
ret <8 x i1> %0
}
+define <16 x i1> @whilewr_64_split3(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_64_split3:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: add x8, x1, #96
+; CHECK-SVE-NEXT: add x9, x0, #96
+; CHECK-SVE-NEXT: whilewr p2.d, x0, x1
+; CHECK-SVE-NEXT: whilewr p1.d, x9, x8
+; CHECK-SVE-NEXT: add x8, x1, #112
+; CHECK-SVE-NEXT: add x9, x0, #112
+; CHECK-SVE-NEXT: whilewr p0.d, x9, x8
+; CHECK-SVE-NEXT: add x8, x1, #64
+; CHECK-SVE-NEXT: add x9, x0, #64
+; CHECK-SVE-NEXT: whilewr p3.d, x9, x8
+; CHECK-SVE-NEXT: add x8, x1, #32
+; CHECK-SVE-NEXT: add x9, x0, #32
+; CHECK-SVE-NEXT: mov z0.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p1.d, x9, x8
+; CHECK-SVE-NEXT: add x8, x1, #80
+; CHECK-SVE-NEXT: add x9, x0, #80
+; CHECK-SVE-NEXT: mov z1.d, p3/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z2.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p2.d, x9, x8
+; CHECK-SVE-NEXT: add x8, x1, #16
+; CHECK-SVE-NEXT: add x9, x0, #16
+; CHECK-SVE-NEXT: mov z3.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p1.d, x9, x8
+; CHECK-SVE-NEXT: add x8, x1, #48
+; CHECK-SVE-NEXT: add x9, x0, #48
+; CHECK-SVE-NEXT: mov z4.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v0.s[1], v0.s[2]
+; CHECK-SVE-NEXT: whilewr p0.d, x9, x8
+; CHECK-SVE-NEXT: mov v1.s[1], v1.s[2]
+; CHECK-SVE-NEXT: mov v2.s[1], v2.s[2]
+; CHECK-SVE-NEXT: mov v3.s[1], v3.s[2]
+; CHECK-SVE-NEXT: mov z5.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z6.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z7.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v0.s[2], v4.s[0]
+; CHECK-SVE-NEXT: mov v2.s[2], v6.s[0]
+; CHECK-SVE-NEXT: mov v1.s[2], v5.s[0]
+; CHECK-SVE-NEXT: mov v3.s[2], v7.s[0]
+; CHECK-SVE-NEXT: mov v0.s[3], v4.s[2]
+; CHECK-SVE-NEXT: mov v1.s[3], v5.s[2]
+; CHECK-SVE-NEXT: mov v2.s[3], v6.s[2]
+; CHECK-SVE-NEXT: mov v3.s[3], v7.s[2]
+; CHECK-SVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-SVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_64_split3:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_0
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI17_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_2
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI17_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_4
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI17_1]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_3
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI17_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_6
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI17_3]
+; CHECK-NOSVE-NEXT: dup v4.2d, x9
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_5
+; CHECK-NOSVE-NEXT: ldr q7, [x8, :lo12:.LCPI17_6]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_7
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI17_5]
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI17_7]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 8)
+ ret <16 x i1> %0
+}
+
+define <32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_64_split4:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: add x9, x1, #96
+; CHECK-SVE-NEXT: add x10, x0, #96
+; CHECK-SVE-NEXT: whilewr p3.d, x0, x1
+; CHECK-SVE-NEXT: whilewr p2.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #112
+; CHECK-SVE-NEXT: add x10, x0, #112
+; CHECK-SVE-NEXT: whilewr p0.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #64
+; CHECK-SVE-NEXT: add x10, x0, #64
+; CHECK-SVE-NEXT: whilewr p1.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #32
+; CHECK-SVE-NEXT: add x10, x0, #32
+; CHECK-SVE-NEXT: mov z0.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p2.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #224
+; CHECK-SVE-NEXT: add x10, x0, #224
+; CHECK-SVE-NEXT: mov z5.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z1.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p4.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #240
+; CHECK-SVE-NEXT: add x10, x0, #240
+; CHECK-SVE-NEXT: whilewr p0.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #192
+; CHECK-SVE-NEXT: add x10, x0, #192
+; CHECK-SVE-NEXT: mov z3.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z2.d, p3/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z4.d, p4/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z6.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p0.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #208
+; CHECK-SVE-NEXT: add x10, x0, #208
+; CHECK-SVE-NEXT: mov v0.s[1], v0.s[2]
+; CHECK-SVE-NEXT: mov v1.s[1], v1.s[2]
+; CHECK-SVE-NEXT: whilewr p1.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #160
+; CHECK-SVE-NEXT: add x10, x0, #160
+; CHECK-SVE-NEXT: whilewr p2.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #80
+; CHECK-SVE-NEXT: add x10, x0, #80
+; CHECK-SVE-NEXT: mov z7.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p0.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #176
+; CHECK-SVE-NEXT: add x10, x0, #176
+; CHECK-SVE-NEXT: mov z17.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z16.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p1.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #128
+; CHECK-SVE-NEXT: add x10, x0, #128
+; CHECK-SVE-NEXT: whilewr p2.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #144
+; CHECK-SVE-NEXT: add x10, x0, #144
+; CHECK-SVE-NEXT: mov z18.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p1.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #16
+; CHECK-SVE-NEXT: add x10, x0, #16
+; CHECK-SVE-NEXT: mov z19.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v4.s[1], v4.s[2]
+; CHECK-SVE-NEXT: whilewr p2.d, x10, x9
+; CHECK-SVE-NEXT: add x9, x1, #48
+; CHECK-SVE-NEXT: add x10, x0, #48
+; CHECK-SVE-NEXT: mov z20.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p1.d, x10, x9
+; CHECK-SVE-NEXT: mov v7.s[1], v7.s[2]
+; CHECK-SVE-NEXT: mov v16.s[1], v16.s[2]
+; CHECK-SVE-NEXT: mov v19.s[1], v19.s[2]
+; CHECK-SVE-NEXT: mov v2.s[1], v2.s[2]
+; CHECK-SVE-NEXT: mov v3.s[1], v3.s[2]
+; CHECK-SVE-NEXT: mov z21.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z22.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov z23.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: mov v0.s[2], v5.s[0]
+; CHECK-SVE-NEXT: mov v4.s[2], v6.s[0]
+; CHECK-SVE-NEXT: mov v7.s[2], v17.s[0]
+; CHECK-SVE-NEXT: adrp x9, .LCPI18_0
+; CHECK-SVE-NEXT: mov v16.s[2], v18.s[0]
+; CHECK-SVE-NEXT: mov v19.s[2], v20.s[0]
+; CHECK-SVE-NEXT: mov v1.s[2], v21.s[0]
+; CHECK-SVE-NEXT: mov v2.s[2], v22.s[0]
+; CHECK-SVE-NEXT: mov v3.s[2], v23.s[0]
+; CHECK-SVE-NEXT: mov v0.s[3], v5.s[2]
+; CHECK-SVE-NEXT: mov v4.s[3], v6.s[2]
+; CHECK-SVE-NEXT: mov v7.s[3], v17.s[2]
+; CHECK-SVE-NEXT: mov v16.s[3], v18.s[2]
+; CHECK-SVE-NEXT: mov v19.s[3], v20.s[2]
+; CHECK-SVE-NEXT: mov v1.s[3], v21.s[2]
+; CHECK-SVE-NEXT: mov v2.s[3], v22.s[2]
+; CHECK-SVE-NEXT: mov v3.s[3], v23.s[2]
+; CHECK-SVE-NEXT: uzp1 v4.8h, v7.8h, v4.8h
+; CHECK-SVE-NEXT: uzp1 v5.8h, v19.8h, v16.8h
+; CHECK-SVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-SVE-NEXT: uzp1 v2.16b, v5.16b, v4.16b
+; CHECK-SVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-SVE-NEXT: shl v1.16b, v2.16b, #7
+; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI18_0]
+; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-SVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
+; CHECK-SVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
+; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
+; CHECK-SVE-NEXT: addv h1, v1.8h
+; CHECK-SVE-NEXT: addv h0, v0.8h
+; CHECK-SVE-NEXT: str h1, [x8, #2]
+; CHECK-SVE-NEXT: str h0, [x8]
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_64_split4:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_0
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q0, [x11, :lo12:.LCPI18_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_2
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI18_1]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_3
+; CHECK-NOSVE-NEXT: ldr q2, [x11, :lo12:.LCPI18_2]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_4
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI18_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_5
+; CHECK-NOSVE-NEXT: dup v4.2d, x9
+; CHECK-NOSVE-NEXT: ldr q5, [x11, :lo12:.LCPI18_4]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_6
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI18_5]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_7
+; CHECK-NOSVE-NEXT: ldr q7, [x11, :lo12:.LCPI18_6]
+; CHECK-NOSVE-NEXT: ldr q16, [x10, :lo12:.LCPI18_7]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI18_8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI18_8]
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: addv h0, v0.8h
+; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 8)
+ ret <32 x i1> %0
+}
+
define <9 x i1> @whilewr_8_widen(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_8_widen:
; CHECK-SVE: // %bb.0: // %entry
@@ -953,23 +1396,23 @@ define <9 x i1> @whilewr_8_widen(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_8_widen:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI19_0
; CHECK-NOSVE-NEXT: sub x11, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI16_1
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI16_2
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI16_0]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI19_1
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI19_2
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI19_0]
; CHECK-NOSVE-NEXT: dup v1.2d, x11
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_3
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI16_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x12, :lo12:.LCPI16_2]
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI16_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI16_4
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI19_3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI19_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x12, :lo12:.LCPI19_2]
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI19_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI19_4
; CHECK-NOSVE-NEXT: cmp x11, #1
; CHECK-NOSVE-NEXT: cmhi v0.2d, v1.2d, v0.2d
; CHECK-NOSVE-NEXT: cmhi v2.2d, v1.2d, v2.2d
; CHECK-NOSVE-NEXT: cmhi v3.2d, v1.2d, v3.2d
; CHECK-NOSVE-NEXT: cmhi v4.2d, v1.2d, v4.2d
-; CHECK-NOSVE-NEXT: ldr q5, [x10, :lo12:.LCPI16_4]
+; CHECK-NOSVE-NEXT: ldr q5, [x10, :lo12:.LCPI19_4]
; CHECK-NOSVE-NEXT: cset w9, lt
; CHECK-NOSVE-NEXT: uzp1 v0.4s, v2.4s, v0.4s
; CHECK-NOSVE-NEXT: cmhi v1.2d, v1.2d, v5.2d
@@ -1025,16 +1468,16 @@ define <7 x i1> @whilewr_16_widen(ptr %a, ptr %b) {
; CHECK-NOSVE-LABEL: whilewr_16_widen:
; CHECK-NOSVE: // %bb.0: // %entry
; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI17_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_1
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI20_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI20_1
; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI17_2
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI17_3
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI17_0]
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI17_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI17_2]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI20_2
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI20_3
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI20_0]
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI20_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI20_2]
; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: ldr q4, [x12, :lo12:.LCPI17_3]
+; CHECK-NOSVE-NEXT: ldr q4, [x12, :lo12:.LCPI20_3]
; CHECK-NOSVE-NEXT: dup v0.2d, x8
; CHECK-NOSVE-NEXT: cmp x8, #1
; CHECK-NOSVE-NEXT: cset w8, lt
@@ -1075,14 +1518,14 @@ define <3 x i1> @whilewr_32_widen(ptr %a, ptr %b) {
; CHECK-NOSVE-LABEL: whilewr_32_widen:
; CHECK-NOSVE: // %bb.0: // %entry
; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI18_0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI21_0
; CHECK-NOSVE-NEXT: add x10, x9, #3
; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI18_0]
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI21_0]
; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_1
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI21_1
; CHECK-NOSVE-NEXT: asr x9, x9, #2
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI18_1]
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI21_1]
; CHECK-NOSVE-NEXT: dup v0.2d, x9
; CHECK-NOSVE-NEXT: cmp x9, #1
; CHECK-NOSVE-NEXT: cset w8, lt
@@ -1100,3 +1543,97 @@ entry:
%0 = call <3 x i1> @llvm.loop.dependence.war.mask.v3i1(ptr %a, ptr %b, i64 4)
ret <3 x i1> %0
}
+
+define <16 x i1> @whilewr_badimm(ptr %a, ptr %b) {
+; CHECK-SVE-LABEL: whilewr_badimm:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
+; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: movk x8, #21846
+; CHECK-SVE-NEXT: smulh x8, x9, x8
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: mov z6.d, z0.d
+; CHECK-SVE-NEXT: mov z7.d, z0.d
+; CHECK-SVE-NEXT: mov z16.d, z0.d
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-SVE-NEXT: add z4.d, z4.d, #8 // =0x8
+; CHECK-SVE-NEXT: add z5.d, z5.d, #6 // =0x6
+; CHECK-SVE-NEXT: add z6.d, z6.d, #4 // =0x4
+; CHECK-SVE-NEXT: dup v3.2d, x8
+; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
+; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-SVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v16.2d, v3.2d, v16.2d
+; CHECK-SVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-SVE-NEXT: cmhi v3.2d, v3.2d, v7.2d
+; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v16.4s
+; CHECK-SVE-NEXT: uzp1 v2.4s, v4.4s, v2.4s
+; CHECK-SVE-NEXT: uzp1 v4.4s, v6.4s, v5.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
+; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v4.8h
+; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: dup v1.16b, w8
+; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_badimm:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_1
+; CHECK-NOSVE-NEXT: movk x8, #21846
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI22_1]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_3
+; CHECK-NOSVE-NEXT: smulh x8, x9, x8
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_0
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI22_3]
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI22_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_2
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_5
+; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI22_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_4
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI22_5]
+; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI22_4]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_6
+; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI22_6]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_7
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI22_7]
+; CHECK-NOSVE-NEXT: dup v4.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 3)
+ ret <16 x i1> %0
+}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index 484e35fbbb0e4..e0198f9461486 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -59,6 +59,46 @@ define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_8:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_1
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI0_0]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_2
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI0_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x8, :lo12:.LCPI0_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_4
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_3
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI0_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_5
+; CHECK-NOSVE-NEXT: dup v2.2d, x9
+; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI0_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_6
+; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI0_5]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI0_6]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI0_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
@@ -97,6 +137,33 @@ define <vscale x 8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_16:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI1_1
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI1_2
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI1_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI1_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI1_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI1_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
@@ -129,6 +196,27 @@ define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_32:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI2_0
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI2_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI2_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI2_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 4 x i1> @llvm.loop.dependence.war.mask.nxv4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
@@ -157,6 +245,23 @@ define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_64:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI3_0
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI3_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: dup v1.2s, w8
+; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 2 x i1> @llvm.loop.dependence.war.mask.nxv2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
@@ -222,6 +327,47 @@ define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilerw_8:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_1
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI4_0]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_2
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI4_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_3
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI4_1]
+; CHECK-NOSVE-NEXT: ldr q4, [x8, :lo12:.LCPI4_3]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_4
+; CHECK-NOSVE-NEXT: dup v3.2d, x9
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI4_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_5
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_6
+; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI4_5]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI4_6]
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI4_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v3.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v3.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v2.4s
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v3.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.raw.mask.nxv16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
@@ -261,6 +407,34 @@ define <vscale x 8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilerw_16:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI5_1
+; CHECK-NOSVE-NEXT: cneg x8, x8, mi
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI5_2
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI5_0]
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI5_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI5_2]
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI5_3]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.raw.mask.nxv8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
@@ -294,6 +468,28 @@ define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilerw_32:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI6_0
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI6_0]
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI6_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI6_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 4 x i1> @llvm.loop.dependence.raw.mask.nxv4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
@@ -323,7 +519,1990 @@ define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilerw_64:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI7_0
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI7_0]
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: dup v1.2s, w8
+; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 2 x i1> @llvm.loop.dependence.raw.mask.nxv2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
}
+
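+; An nxv32i1 result is wider than the largest legal SVE predicate type
+; (nxv16i1), so the mask is split; with SVE2 this lowers to two whilewr.b
+; instructions over adjacent VL-sized chunks, returned in p0 and p1.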
+define <vscale x 32 x i1> @whilewr_8_split(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_8_split:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE2-NEXT: incb x1
+; CHECK-SVE2-NEXT: incb x0
+; CHECK-SVE2-NEXT: whilewr p1.b, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_8_split:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: rdvl x8, #1
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x9, x0, x8
+; CHECK-SVE-NEXT: add x8, x1, x8
+; CHECK-SVE-NEXT: sub x8, x8, x9
+; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: mov z3.d, x8
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: mov z6.d, z1.d
+; CHECK-SVE-NEXT: mov z7.d, z2.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z5.d
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z3.d, z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: incd z6.d, all, mul #4
+; CHECK-SVE-NEXT: incd z7.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: mov z24.d, z4.d
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z3.d, z6.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z3.d, z4.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z3.d, z7.d
+; CHECK-SVE-NEXT: incd z24.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p2.s, p3.s, p4.s
+; CHECK-SVE-NEXT: uzp1 p3.s, p5.s, p6.s
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z3.d, z24.d
+; CHECK-SVE-NEXT: mov z3.d, x9
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z3.d, z4.d
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z3.d, z2.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: uzp1 p7.s, p7.s, p8.s
+; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z6.d
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z3.d, z5.d
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p7.h
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z3.d, z7.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z24.d
+; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
+; CHECK-SVE-NEXT: uzp1 p5.s, p9.s, p6.s
+; CHECK-SVE-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: whilelo p6.b, xzr, x8
+; CHECK-SVE-NEXT: uzp1 p3.s, p8.s, p3.s
+; CHECK-SVE-NEXT: cmp x9, #1
+; CHECK-SVE-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p4.h, p5.h, p4.h
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p3.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p1.b, p1.b, p2.b
+; CHECK-SVE-NEXT: uzp1 p0.b, p4.b, p0.b
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: whilelo p2.b, xzr, x8
+; CHECK-SVE-NEXT: sel p1.b, p1, p1.b, p6.b
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p2.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_8_split:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_1
+; CHECK-NOSVE-NEXT: sub x11, x1, x0
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI8_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_2
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI8_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x9, :lo12:.LCPI8_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_4
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_3
+; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI8_4]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_5
+; CHECK-NOSVE-NEXT: dup v2.2d, x11
+; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI8_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_6
+; CHECK-NOSVE-NEXT: ldr q6, [x9, :lo12:.LCPI8_5]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI8_6]
+; CHECK-NOSVE-NEXT: cmp x11, #1
+; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI8_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI8_8]
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: addv h0, v0.8h
+; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 1)
+ ret <vscale x 32 x i1> %0
+}
+
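+; An nxv64i1 result splits into four nxv16i1 parts; with SVE2 this lowers to
+; four whilewr.b instructions over consecutive VL-sized chunks.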
+define <vscale x 64 x i1> @whilewr_8_split2(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_8_split2:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: mov x8, x1
+; CHECK-SVE2-NEXT: mov x9, x0
+; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE2-NEXT: addvl x10, x1, #3
+; CHECK-SVE2-NEXT: incb x1, all, mul #2
+; CHECK-SVE2-NEXT: addvl x11, x0, #3
+; CHECK-SVE2-NEXT: incb x0, all, mul #2
+; CHECK-SVE2-NEXT: incb x8
+; CHECK-SVE2-NEXT: incb x9
+; CHECK-SVE2-NEXT: whilewr p3.b, x11, x10
+; CHECK-SVE2-NEXT: whilewr p2.b, x0, x1
+; CHECK-SVE2-NEXT: whilewr p1.b, x9, x8
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_8_split2:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-2
+; CHECK-SVE-NEXT: str p12, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p11, [sp, #8, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p10, [sp, #9, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p9, [sp, #10, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p8, [sp, #11, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p7, [sp, #12, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #13, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #14, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #15, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: rdvl x8, #1
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x9, x0, x8
+; CHECK-SVE-NEXT: add x8, x1, x8
+; CHECK-SVE-NEXT: rdvl x10, #2
+; CHECK-SVE-NEXT: sub x8, x8, x9
+; CHECK-SVE-NEXT: add x9, x0, x10
+; CHECK-SVE-NEXT: add x10, x1, x10
+; CHECK-SVE-NEXT: mov z24.d, x8
+; CHECK-SVE-NEXT: sub x9, x10, x9
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z25.d, x9
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z24.d, z0.d
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z5.d, all, mul #2
+; CHECK-SVE-NEXT: incd z2.d, all, mul #4
+; CHECK-SVE-NEXT: mov z7.d, z1.d
+; CHECK-SVE-NEXT: mov z3.d, z1.d
+; CHECK-SVE-NEXT: mov z4.d, z5.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z24.d, z1.d
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z24.d, z5.d
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z24.d, z2.d
+; CHECK-SVE-NEXT: incd z7.d, all, mul #2
+; CHECK-SVE-NEXT: incd z3.d, all, mul #4
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p8.s, p1.s, p2.s
+; CHECK-SVE-NEXT: mov z6.d, z7.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z24.d, z7.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z24.d, z3.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z24.d, z4.d
+; CHECK-SVE-NEXT: incd z6.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p5.s, p5.s, p6.s
+; CHECK-SVE-NEXT: uzp1 p1.s, p4.s, p7.s
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z24.d, z6.d
+; CHECK-SVE-NEXT: uzp1 p4.h, p8.h, p5.h
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: rdvl x8, #3
+; CHECK-SVE-NEXT: cset w10, lt
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z25.d, z6.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z25.d, z4.d
+; CHECK-SVE-NEXT: sbfx x10, x10, #0, #1
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z25.d, z3.d
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z25.d, z2.d
+; CHECK-SVE-NEXT: uzp1 p2.s, p2.s, p3.s
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z25.d, z7.d
+; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z25.d, z5.d
+; CHECK-SVE-NEXT: whilelo p10.b, xzr, x10
+; CHECK-SVE-NEXT: add x10, x0, x8
+; CHECK-SVE-NEXT: add x8, x1, x8
+; CHECK-SVE-NEXT: cmphi p11.d, p0/z, z25.d, z1.d
+; CHECK-SVE-NEXT: cmphi p12.d, p0/z, z25.d, z0.d
+; CHECK-SVE-NEXT: sub x8, x8, x10
+; CHECK-SVE-NEXT: mov z24.d, x8
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p2.h
+; CHECK-SVE-NEXT: cmp x9, #1
+; CHECK-SVE-NEXT: uzp1 p2.s, p6.s, p5.s
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: sub x10, x1, x0
+; CHECK-SVE-NEXT: uzp1 p5.s, p8.s, p7.s
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: mov z25.d, x10
+; CHECK-SVE-NEXT: uzp1 p3.s, p9.s, p3.s
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z24.d, z3.d
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z24.d, z7.d
+; CHECK-SVE-NEXT: uzp1 p6.s, p12.s, p11.s
+; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z24.d, z5.d
+; CHECK-SVE-NEXT: uzp1 p1.b, p4.b, p1.b
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z24.d, z6.d
+; CHECK-SVE-NEXT: uzp1 p2.h, p5.h, p2.h
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z24.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p3.h, p6.h, p3.h
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z24.d, z2.d
+; CHECK-SVE-NEXT: sel p1.b, p1, p1.b, p10.b
+; CHECK-SVE-NEXT: cmphi p10.d, p0/z, z24.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p2.b, p3.b, p2.b
+; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z24.d, z1.d
+; CHECK-SVE-NEXT: whilelo p3.b, xzr, x9
+; CHECK-SVE-NEXT: uzp1 p6.s, p6.s, p7.s
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: uzp1 p7.s, p9.s, p8.s
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z25.d, z7.d
+; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z25.d, z5.d
+; CHECK-SVE-NEXT: sel p2.b, p2, p2.b, p3.b
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z25.d, z1.d
+; CHECK-SVE-NEXT: cmphi p11.d, p0/z, z25.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p4.h, p6.h, p4.h
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z25.d, z3.d
+; CHECK-SVE-NEXT: cmphi p12.d, p0/z, z25.d, z2.d
+; CHECK-SVE-NEXT: uzp1 p5.s, p10.s, p5.s
+; CHECK-SVE-NEXT: cmphi p10.d, p0/z, z25.d, z4.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z25.d, z6.d
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p8.s, p9.s, p8.s
+; CHECK-SVE-NEXT: whilelo p9.b, xzr, x8
+; CHECK-SVE-NEXT: uzp1 p3.s, p11.s, p3.s
+; CHECK-SVE-NEXT: cmp x10, #1
+; CHECK-SVE-NEXT: ldr p11, [sp, #8, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p6.s, p12.s, p6.s
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: ldr p12, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.s, p10.s, p0.s
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: ldr p10, [sp, #9, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p3.h, p3.h, p8.h
+; CHECK-SVE-NEXT: ldr p8, [sp, #11, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p6.h, p0.h
+; CHECK-SVE-NEXT: ldr p6, [sp, #13, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p5.h, p5.h, p7.h
+; CHECK-SVE-NEXT: ldr p7, [sp, #12, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.b, p3.b, p0.b
+; CHECK-SVE-NEXT: whilelo p3.b, xzr, x8
+; CHECK-SVE-NEXT: uzp1 p4.b, p5.b, p4.b
+; CHECK-SVE-NEXT: ldr p5, [sp, #14, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p3.b
+; CHECK-SVE-NEXT: sel p3.b, p4, p4.b, p9.b
+; CHECK-SVE-NEXT: ldr p9, [sp, #10, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p4, [sp, #15, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: addvl sp, sp, #2
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_8_split2:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_1
+; CHECK-NOSVE-NEXT: sub x11, x1, x0
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI9_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_2
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI9_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x9, :lo12:.LCPI9_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_4
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_3
+; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI9_4]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_5
+; CHECK-NOSVE-NEXT: dup v2.2d, x11
+; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI9_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_6
+; CHECK-NOSVE-NEXT: ldr q6, [x9, :lo12:.LCPI9_5]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI9_6]
+; CHECK-NOSVE-NEXT: cmp x11, #1
+; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI9_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI9_8]
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: addv h0, v0.8h
+; CHECK-NOSVE-NEXT: str h0, [x8, #6]
+; CHECK-NOSVE-NEXT: str h0, [x8, #4]
+; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 64 x i1> @llvm.loop.dependence.war.mask.nxv64i1(ptr %a, ptr %b, i64 1)
+ ret <vscale x 64 x i1> %0
+}
+
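+; With i16 elements an nxv16i1 mask covers two vectors' worth of lanes, so
+; SVE2 emits two whilewr.h and concatenates the halves with uzp1.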
+define <vscale x 16 x i1> @whilewr_16_split(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_16_split:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE2-NEXT: incb x1
+; CHECK-SVE2-NEXT: incb x0
+; CHECK-SVE2-NEXT: whilewr p1.h, x0, x1
+; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_16_split:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE-NEXT: mov z0.d, z3.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_16_split:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_1
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI10_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_2
+; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI10_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_4
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI10_1]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_3
+; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI10_4]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_6
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI10_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_5
+; CHECK-NOSVE-NEXT: dup v4.2d, x8
+; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI10_6]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_7
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI10_5]
+; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI10_7]
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 2)
+ ret <vscale x 16 x i1> %0
+}
+
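+; An nxv32i1 mask with i16 elements splits into four whilewr.h operations,
+; concatenated pairwise with uzp1 into the two result predicates.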
+define <vscale x 32 x i1> @whilewr_16_split2(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_16_split2:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: mov x8, x1
+; CHECK-SVE2-NEXT: mov x9, x0
+; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE2-NEXT: addvl x10, x1, #3
+; CHECK-SVE2-NEXT: incb x8
+; CHECK-SVE2-NEXT: incb x9
+; CHECK-SVE2-NEXT: addvl x11, x0, #3
+; CHECK-SVE2-NEXT: incb x1, all, mul #2
+; CHECK-SVE2-NEXT: incb x0, all, mul #2
+; CHECK-SVE2-NEXT: whilewr p1.h, x11, x10
+; CHECK-SVE2-NEXT: whilewr p2.h, x9, x8
+; CHECK-SVE2-NEXT: whilewr p3.h, x0, x1
+; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p2.b
+; CHECK-SVE2-NEXT: uzp1 p1.b, p3.b, p1.b
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_16_split2:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: rdvl x8, #2
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x9, x0, x8
+; CHECK-SVE-NEXT: add x8, x1, x8
+; CHECK-SVE-NEXT: sub x8, x8, x9
+; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: add x9, x9, x9, lsr #63
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z0.d
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: asr x9, x9, #1
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: mov z5.d, x8
+; CHECK-SVE-NEXT: incd z3.d, all, mul #4
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: mov z6.d, z1.d
+; CHECK-SVE-NEXT: mov z7.d, z2.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z5.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z5.d, z0.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z5.d, z3.d
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: incd z6.d, all, mul #4
+; CHECK-SVE-NEXT: incd z7.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE-NEXT: mov z24.d, z4.d
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z6.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z4.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
+; CHECK-SVE-NEXT: incd z24.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p2.s, p3.s, p4.s
+; CHECK-SVE-NEXT: uzp1 p3.s, p5.s, p6.s
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z24.d
+; CHECK-SVE-NEXT: mov z5.d, x9
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z24.d
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z7.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
+; CHECK-SVE-NEXT: uzp1 p7.s, p7.s, p8.s
+; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z5.d, z3.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z5.d, z4.d
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z2.d
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p7.h
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z1.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
+; CHECK-SVE-NEXT: uzp1 p5.s, p9.s, p6.s
+; CHECK-SVE-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: whilelo p6.b, xzr, x8
+; CHECK-SVE-NEXT: uzp1 p3.s, p8.s, p3.s
+; CHECK-SVE-NEXT: cmp x9, #1
+; CHECK-SVE-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.s, p0.s, p7.s
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p4.h, p5.h, p4.h
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p3.h
+; CHECK-SVE-NEXT: uzp1 p1.b, p1.b, p2.b
+; CHECK-SVE-NEXT: uzp1 p0.b, p0.b, p4.b
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: whilelo p2.b, xzr, x8
+; CHECK-SVE-NEXT: sel p1.b, p1, p1.b, p6.b
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p2.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_16_split2:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_0
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_1
+; CHECK-NOSVE-NEXT: add x9, x9, x9, lsr #63
+; CHECK-NOSVE-NEXT: ldr q0, [x10, :lo12:.LCPI11_0]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI11_2]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_4
+; CHECK-NOSVE-NEXT: ldr q1, [x11, :lo12:.LCPI11_1]
+; CHECK-NOSVE-NEXT: asr x9, x9, #1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_3
+; CHECK-NOSVE-NEXT: ldr q5, [x10, :lo12:.LCPI11_4]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_6
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI11_3]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_5
+; CHECK-NOSVE-NEXT: dup v4.2d, x9
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI11_6]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_7
+; CHECK-NOSVE-NEXT: ldr q6, [x11, :lo12:.LCPI11_5]
+; CHECK-NOSVE-NEXT: ldr q16, [x10, :lo12:.LCPI11_7]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI11_8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI11_8]
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: addv h0, v0.8h
+; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 2)
+ ret <vscale x 32 x i1> %0
+}
+
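+; An nxv8i1 mask with i32 elements spans two vectors, so SVE2 emits two
+; whilewr.s and joins them with uzp1.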
+define <vscale x 8 x i1> @whilewr_32_split(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_32_split:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE2-NEXT: incb x1
+; CHECK-SVE2-NEXT: incb x0
+; CHECK-SVE2-NEXT: whilewr p1.s, x0, x1
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_32_split:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
+; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_32_split:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI12_1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI12_2
+; CHECK-NOSVE-NEXT: add x9, x8, #3
+; CHECK-NOSVE-NEXT: cmp x8, #0
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI12_1]
+; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI12_0
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI12_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #2
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI12_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI12_3
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI12_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 4)
+ ret <vscale x 8 x i1> %0
+}
+
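+; An nxv16i1 mask with i32 elements splits into four whilewr.s operations.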
+define <vscale x 16 x i1> @whilewr_32_split2(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_32_split2:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE2-NEXT: mov x10, x1
+; CHECK-SVE2-NEXT: mov x11, x0
+; CHECK-SVE2-NEXT: addvl x8, x1, #3
+; CHECK-SVE2-NEXT: addvl x9, x0, #3
+; CHECK-SVE2-NEXT: incb x10, all, mul #2
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p0.h
+; CHECK-SVE2-NEXT: incb x11, all, mul #2
+; CHECK-SVE2-NEXT: incb x1
+; CHECK-SVE2-NEXT: incb x0
+; CHECK-SVE2-NEXT: whilewr p1.s, x9, x8
+; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p0.b
+; CHECK-SVE2-NEXT: whilewr p2.s, x11, x10
+; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
+; CHECK-SVE2-NEXT: whilewr p3.s, x0, x1
+; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
+; CHECK-SVE2-NEXT: uzp1 p1.h, p2.h, p1.h
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p3.h
+; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_32_split2:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE-NEXT: mov z0.d, z3.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_32_split2:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_0
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI13_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_2
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI13_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_4
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI13_1]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_3
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI13_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_6
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI13_3]
+; CHECK-NOSVE-NEXT: dup v4.2d, x9
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_5
+; CHECK-NOSVE-NEXT: ldr q7, [x8, :lo12:.LCPI13_6]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_7
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI13_5]
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI13_7]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 4)
+ ret <vscale x 16 x i1> %0
+}
+
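+; An nxv32i1 mask with i32 elements splits into eight whilewr.s operations,
+; combined into the two returned predicates.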
+define <vscale x 32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_32_split3:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE2-NEXT: addvl sp, sp, #-1
+; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_offset w29, -16
+; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE2-NEXT: mov x10, x1
+; CHECK-SVE2-NEXT: mov x11, x0
+; CHECK-SVE2-NEXT: mov x12, x1
+; CHECK-SVE2-NEXT: mov x13, x0
+; CHECK-SVE2-NEXT: incb x10, all, mul #2
+; CHECK-SVE2-NEXT: incb x11, all, mul #2
+; CHECK-SVE2-NEXT: incb x12
+; CHECK-SVE2-NEXT: incb x13
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p0.h
+; CHECK-SVE2-NEXT: addvl x8, x1, #3
+; CHECK-SVE2-NEXT: addvl x9, x0, #3
+; CHECK-SVE2-NEXT: whilewr p1.s, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #7
+; CHECK-SVE2-NEXT: addvl x9, x0, #7
+; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p0.b
+; CHECK-SVE2-NEXT: whilewr p2.s, x11, x10
+; CHECK-SVE2-NEXT: addvl x10, x1, #6
+; CHECK-SVE2-NEXT: addvl x11, x0, #6
+; CHECK-SVE2-NEXT: whilewr p3.s, x13, x12
+; CHECK-SVE2-NEXT: addvl x12, x1, #5
+; CHECK-SVE2-NEXT: addvl x13, x0, #5
+; CHECK-SVE2-NEXT: incb x1, all, mul #4
+; CHECK-SVE2-NEXT: incb x0, all, mul #4
+; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
+; CHECK-SVE2-NEXT: uzp1 p1.h, p2.h, p1.h
+; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
+; CHECK-SVE2-NEXT: whilewr p5.s, x0, x1
+; CHECK-SVE2-NEXT: whilewr p4.s, x9, x8
+; CHECK-SVE2-NEXT: uzp1 p2.h, p5.h, p0.h
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p3.h
+; CHECK-SVE2-NEXT: whilewr p3.s, x11, x10
+; CHECK-SVE2-NEXT: uzp1 p2.b, p2.b, p0.b
+; CHECK-SVE2-NEXT: whilewr p5.s, x13, x12
+; CHECK-SVE2-NEXT: punpklo p2.h, p2.b
+; CHECK-SVE2-NEXT: uzp1 p3.h, p3.h, p4.h
+; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: punpklo p2.h, p2.b
+; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p5.h
+; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p1.b, p2.b, p3.b
+; CHECK-SVE2-NEXT: addvl sp, sp, #1
+; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_32_split3:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: rdvl x8, #4
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x9, x0, x8
+; CHECK-SVE-NEXT: add x8, x1, x8
+; CHECK-SVE-NEXT: sub x8, x8, x9
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: mov z5.d, x8
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: mov z3.d, z1.d
+; CHECK-SVE-NEXT: mov z6.d, z2.d
+; CHECK-SVE-NEXT: mov z7.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: incd z6.d, all, mul #4
+; CHECK-SVE-NEXT: incd z7.d, all, mul #4
+; CHECK-SVE-NEXT: mov z24.d, z3.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
+; CHECK-SVE-NEXT: incd z24.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p2.s, p2.s, p7.s
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p8.s
+; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z5.d, z24.d
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: uzp1 p3.h, p4.h, p3.h
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p6.s, p6.s, p9.s
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: add x8, x9, #3
+; CHECK-SVE-NEXT: cmp x9, #0
+; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p6.h
+; CHECK-SVE-NEXT: csel x8, x8, x9, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: uzp1 p2.b, p3.b, p2.b
+; CHECK-SVE-NEXT: mov z5.d, x8
+; CHECK-SVE-NEXT: mov p1.b, p2/m, p2.b
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
+; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z5.d, z4.d
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z3.d
+; CHECK-SVE-NEXT: cmphi p10.d, p0/z, z5.d, z2.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z1.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: uzp1 p5.s, p7.s, p5.s
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: uzp1 p7.s, p9.s, p8.s
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p4.s, p10.s, p4.s
+; CHECK-SVE-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.s, p0.s, p6.s
+; CHECK-SVE-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p5.h, p7.h, p5.h
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p4.h
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: whilelo p3.b, xzr, x8
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.b, p0.b, p5.b
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p3.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_32_split3:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_0
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI13_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_2
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI13_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_4
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI13_1]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_3
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI13_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_6
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI13_3]
+; CHECK-NOSVE-NEXT: dup v4.2d, x9
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_5
+; CHECK-NOSVE-NEXT: ldr q7, [x8, :lo12:.LCPI13_6]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_7
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI13_5]
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI13_7]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 4)
+ ret <vscale x 32 x i1> %0
+}
+
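+; An nxv4i1 mask with i64 elements spans two vectors, so SVE2 emits two
+; whilewr.d and joins them with uzp1.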
+define <vscale x 4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_64_split:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
+; CHECK-SVE2-NEXT: incb x1
+; CHECK-SVE2-NEXT: incb x0
+; CHECK-SVE2-NEXT: whilewr p1.d, x0, x1
+; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p1.s
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_64_split:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
+; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_64_split:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI14_0
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI14_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI14_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 4 x i1> @llvm.loop.dependence.war.mask.nxv4i1(ptr %a, ptr %b, i64 8)
+ ret <vscale x 4 x i1> %0
+}
+
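+; An nxv8i1 mask with i64 elements splits into four whilewr.d operations.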
+define <vscale x 8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_64_split2:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
+; CHECK-SVE2-NEXT: mov x10, x1
+; CHECK-SVE2-NEXT: mov x11, x0
+; CHECK-SVE2-NEXT: addvl x8, x1, #3
+; CHECK-SVE2-NEXT: addvl x9, x0, #3
+; CHECK-SVE2-NEXT: incb x10, all, mul #2
+; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p0.s
+; CHECK-SVE2-NEXT: incb x11, all, mul #2
+; CHECK-SVE2-NEXT: incb x1
+; CHECK-SVE2-NEXT: incb x0
+; CHECK-SVE2-NEXT: whilewr p1.d, x9, x8
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p0.h
+; CHECK-SVE2-NEXT: whilewr p2.d, x11, x10
+; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
+; CHECK-SVE2-NEXT: whilewr p3.d, x0, x1
+; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
+; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p3.s
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_64_split2:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
+; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_64_split2:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI15_1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI15_2
+; CHECK-NOSVE-NEXT: add x9, x8, #7
+; CHECK-NOSVE-NEXT: cmp x8, #0
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI15_1]
+; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_0
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI15_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #3
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI15_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_3
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI15_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 8)
+ ret <vscale x 8 x i1> %0
+}
+
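+; An nxv16i1 mask with i64 elements splits into eight whilewr.d operations.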
+define <vscale x 16 x i1> @whilewr_64_split3(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_64_split3:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE2-NEXT: addvl sp, sp, #-1
+; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_offset w29, -16
+; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
+; CHECK-SVE2-NEXT: mov x10, x1
+; CHECK-SVE2-NEXT: mov x11, x0
+; CHECK-SVE2-NEXT: mov x12, x1
+; CHECK-SVE2-NEXT: mov x13, x0
+; CHECK-SVE2-NEXT: incb x10, all, mul #2
+; CHECK-SVE2-NEXT: incb x11, all, mul #2
+; CHECK-SVE2-NEXT: incb x12
+; CHECK-SVE2-NEXT: incb x13
+; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p0.s
+; CHECK-SVE2-NEXT: addvl x8, x1, #3
+; CHECK-SVE2-NEXT: addvl x9, x0, #3
+; CHECK-SVE2-NEXT: whilewr p1.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #7
+; CHECK-SVE2-NEXT: addvl x9, x0, #7
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p0.h
+; CHECK-SVE2-NEXT: whilewr p2.d, x11, x10
+; CHECK-SVE2-NEXT: addvl x10, x1, #6
+; CHECK-SVE2-NEXT: addvl x11, x0, #6
+; CHECK-SVE2-NEXT: whilewr p3.d, x13, x12
+; CHECK-SVE2-NEXT: addvl x12, x1, #5
+; CHECK-SVE2-NEXT: addvl x13, x0, #5
+; CHECK-SVE2-NEXT: incb x1, all, mul #4
+; CHECK-SVE2-NEXT: incb x0, all, mul #4
+; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
+; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
+; CHECK-SVE2-NEXT: whilewr p5.d, x0, x1
+; CHECK-SVE2-NEXT: whilewr p4.d, x9, x8
+; CHECK-SVE2-NEXT: uzp1 p2.s, p5.s, p0.s
+; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p3.s
+; CHECK-SVE2-NEXT: whilewr p3.d, x11, x10
+; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p0.h
+; CHECK-SVE2-NEXT: whilewr p5.d, x13, x12
+; CHECK-SVE2-NEXT: punpklo p2.h, p2.b
+; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p4.s
+; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: punpklo p2.h, p2.b
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-SVE2-NEXT: uzp1 p2.s, p2.s, p5.s
+; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p1.h, p2.h, p3.h
+; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-SVE2-NEXT: addvl sp, sp, #1
+; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_64_split3:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE-NEXT: mov z0.d, z3.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_64_split3:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI15_1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI15_2
+; CHECK-NOSVE-NEXT: add x9, x8, #7
+; CHECK-NOSVE-NEXT: cmp x8, #0
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI15_1]
+; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_0
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI15_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #3
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI15_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_3
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI15_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 8)
+ ret <vscale x 16 x i1> %0
+}
+
+define <vscale x 32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_64_split4:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE2-NEXT: addvl sp, sp, #-1
+; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_offset w29, -16
+; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
+; CHECK-SVE2-NEXT: mov x11, x1
+; CHECK-SVE2-NEXT: mov x12, x0
+; CHECK-SVE2-NEXT: incb x11, all, mul #2
+; CHECK-SVE2-NEXT: incb x12, all, mul #2
+; CHECK-SVE2-NEXT: mov x10, x1
+; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p0.s
+; CHECK-SVE2-NEXT: mov x13, x0
+; CHECK-SVE2-NEXT: incb x10
+; CHECK-SVE2-NEXT: incb x13
+; CHECK-SVE2-NEXT: addvl x8, x1, #3
+; CHECK-SVE2-NEXT: addvl x9, x0, #3
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p0.h
+; CHECK-SVE2-NEXT: whilewr p2.d, x12, x11
+; CHECK-SVE2-NEXT: mov x11, x1
+; CHECK-SVE2-NEXT: mov x12, x0
+; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
+; CHECK-SVE2-NEXT: incb x11, all, mul #4
+; CHECK-SVE2-NEXT: incb x12, all, mul #4
+; CHECK-SVE2-NEXT: whilewr p1.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #7
+; CHECK-SVE2-NEXT: addvl x9, x0, #7
+; CHECK-SVE2-NEXT: whilewr p3.d, x13, x10
+; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
+; CHECK-SVE2-NEXT: whilewr p5.d, x12, x11
+; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p3.s
+; CHECK-SVE2-NEXT: whilewr p4.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #6
+; CHECK-SVE2-NEXT: addvl x9, x0, #6
+; CHECK-SVE2-NEXT: uzp1 p2.s, p5.s, p0.s
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-SVE2-NEXT: uzp1 p1.h, p2.h, p0.h
+; CHECK-SVE2-NEXT: whilewr p2.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #5
+; CHECK-SVE2-NEXT: addvl x9, x0, #5
+; CHECK-SVE2-NEXT: punpklo p1.h, p1.b
+; CHECK-SVE2-NEXT: whilewr p3.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #12
+; CHECK-SVE2-NEXT: addvl x9, x0, #12
+; CHECK-SVE2-NEXT: punpklo p1.h, p1.b
+; CHECK-SVE2-NEXT: uzp1 p2.s, p2.s, p4.s
+; CHECK-SVE2-NEXT: uzp1 p1.s, p1.s, p3.s
+; CHECK-SVE2-NEXT: whilewr p3.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #15
+; CHECK-SVE2-NEXT: addvl x9, x0, #15
+; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p2.h
+; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p0.s
+; CHECK-SVE2-NEXT: whilewr p2.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #14
+; CHECK-SVE2-NEXT: addvl x9, x0, #14
+; CHECK-SVE2-NEXT: uzp1 p3.h, p3.h, p0.h
+; CHECK-SVE2-NEXT: whilewr p4.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #13
+; CHECK-SVE2-NEXT: addvl x9, x0, #13
+; CHECK-SVE2-NEXT: punpklo p3.h, p3.b
+; CHECK-SVE2-NEXT: uzp1 p2.s, p4.s, p2.s
+; CHECK-SVE2-NEXT: whilewr p4.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #8
+; CHECK-SVE2-NEXT: addvl x9, x0, #8
+; CHECK-SVE2-NEXT: punpklo p3.h, p3.b
+; CHECK-SVE2-NEXT: whilewr p5.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #11
+; CHECK-SVE2-NEXT: addvl x9, x0, #11
+; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p4.s
+; CHECK-SVE2-NEXT: uzp1 p4.s, p5.s, p0.s
+; CHECK-SVE2-NEXT: whilewr p5.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #10
+; CHECK-SVE2-NEXT: addvl x9, x0, #10
+; CHECK-SVE2-NEXT: uzp1 p4.h, p4.h, p0.h
+; CHECK-SVE2-NEXT: whilewr p6.d, x9, x8
+; CHECK-SVE2-NEXT: addvl x8, x1, #9
+; CHECK-SVE2-NEXT: addvl x9, x0, #9
+; CHECK-SVE2-NEXT: punpklo p4.h, p4.b
+; CHECK-SVE2-NEXT: whilewr p7.d, x9, x8
+; CHECK-SVE2-NEXT: punpklo p4.h, p4.b
+; CHECK-SVE2-NEXT: uzp1 p5.s, p6.s, p5.s
+; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p4.s, p4.s, p7.s
+; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p2.h, p3.h, p2.h
+; CHECK-SVE2-NEXT: uzp1 p3.h, p4.h, p5.h
+; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-SVE2-NEXT: uzp1 p1.b, p3.b, p2.b
+; CHECK-SVE2-NEXT: addvl sp, sp, #1
+; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_64_split4:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: rdvl x8, #8
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x9, x0, x8
+; CHECK-SVE-NEXT: add x8, x1, x8
+; CHECK-SVE-NEXT: sub x8, x8, x9
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: mov z5.d, x8
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: mov z3.d, z1.d
+; CHECK-SVE-NEXT: mov z6.d, z2.d
+; CHECK-SVE-NEXT: mov z7.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: incd z6.d, all, mul #4
+; CHECK-SVE-NEXT: incd z7.d, all, mul #4
+; CHECK-SVE-NEXT: mov z24.d, z3.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
+; CHECK-SVE-NEXT: incd z24.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p2.s, p2.s, p7.s
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p8.s
+; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z5.d, z24.d
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: uzp1 p3.h, p4.h, p3.h
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p6.s, p6.s, p9.s
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: add x8, x9, #7
+; CHECK-SVE-NEXT: cmp x9, #0
+; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p6.h
+; CHECK-SVE-NEXT: csel x8, x8, x9, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: uzp1 p2.b, p3.b, p2.b
+; CHECK-SVE-NEXT: mov z5.d, x8
+; CHECK-SVE-NEXT: mov p1.b, p2/m, p2.b
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
+; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z5.d, z4.d
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z3.d
+; CHECK-SVE-NEXT: cmphi p10.d, p0/z, z5.d, z2.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z1.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: uzp1 p5.s, p7.s, p5.s
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: uzp1 p7.s, p9.s, p8.s
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p4.s, p10.s, p4.s
+; CHECK-SVE-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.s, p0.s, p6.s
+; CHECK-SVE-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p5.h, p7.h, p5.h
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p4.h
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: whilelo p3.b, xzr, x8
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.b, p0.b, p5.b
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p3.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_64_split4:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI15_1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI15_2
+; CHECK-NOSVE-NEXT: add x9, x8, #7
+; CHECK-NOSVE-NEXT: cmp x8, #0
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI15_1]
+; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_0
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI15_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #3
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI15_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_3
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI15_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 8)
+ ret <vscale x 32 x i1> %0
+}
+
+define <vscale x 9 x i1> @whilewr_8_widen(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_8_widen:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_8_widen:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z0.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #4
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p2.s, p5.s, p6.s
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p4.s
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_8_widen:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_0
+; CHECK-NOSVE-NEXT: sub x11, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI16_1
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI16_2
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI16_0]
+; CHECK-NOSVE-NEXT: dup v1.2d, x11
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI16_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x12, :lo12:.LCPI16_2]
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI16_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI16_4
+; CHECK-NOSVE-NEXT: cmp x11, #1
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v1.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v1.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v1.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v1.2d, v4.2d
+; CHECK-NOSVE-NEXT: ldr q5, [x10, :lo12:.LCPI16_4]
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v2.4s, v0.4s
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v1.2d, v5.2d
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v4.4s, v3.4s
+; CHECK-NOSVE-NEXT: xtn v1.2s, v1.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v2.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.4h, v1.4h, v0.4h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: umov w9, v0.b[0]
+; CHECK-NOSVE-NEXT: umov w10, v0.b[1]
+; CHECK-NOSVE-NEXT: umov w11, v0.b[2]
+; CHECK-NOSVE-NEXT: umov w12, v0.b[7]
+; CHECK-NOSVE-NEXT: and w9, w9, #0x1
+; CHECK-NOSVE-NEXT: bfi w9, w10, #1, #1
+; CHECK-NOSVE-NEXT: umov w10, v0.b[3]
+; CHECK-NOSVE-NEXT: bfi w9, w11, #2, #1
+; CHECK-NOSVE-NEXT: umov w11, v0.b[4]
+; CHECK-NOSVE-NEXT: bfi w9, w10, #3, #1
+; CHECK-NOSVE-NEXT: umov w10, v0.b[5]
+; CHECK-NOSVE-NEXT: bfi w9, w11, #4, #1
+; CHECK-NOSVE-NEXT: umov w11, v0.b[6]
+; CHECK-NOSVE-NEXT: bfi w9, w10, #5, #1
+; CHECK-NOSVE-NEXT: umov w10, v0.b[8]
+; CHECK-NOSVE-NEXT: bfi w9, w11, #6, #1
+; CHECK-NOSVE-NEXT: ubfiz w11, w12, #7, #1
+; CHECK-NOSVE-NEXT: orr w9, w9, w11
+; CHECK-NOSVE-NEXT: orr w9, w9, w10, lsl #8
+; CHECK-NOSVE-NEXT: and w9, w9, #0x1ff
+; CHECK-NOSVE-NEXT: strh w9, [x8]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 9 x i1> @llvm.loop.dependence.war.mask.nxv9i1(ptr %a, ptr %b, i64 1)
+ ret <vscale x 9 x i1> %0
+}
+
+define <vscale x 7 x i1> @whilewr_16_widen(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_16_widen:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_16_widen:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
+; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_16_widen:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI17_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_1
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI17_2
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI17_3
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI17_0]
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI17_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI17_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: ldr q4, [x12, :lo12:.LCPI17_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: umov w0, v0.b[0]
+; CHECK-NOSVE-NEXT: umov w1, v0.b[1]
+; CHECK-NOSVE-NEXT: umov w2, v0.b[2]
+; CHECK-NOSVE-NEXT: umov w3, v0.b[3]
+; CHECK-NOSVE-NEXT: umov w4, v0.b[4]
+; CHECK-NOSVE-NEXT: umov w5, v0.b[5]
+; CHECK-NOSVE-NEXT: umov w6, v0.b[6]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 7 x i1> @llvm.loop.dependence.war.mask.nxv7i1(ptr %a, ptr %b, i64 2)
+ ret <vscale x 7 x i1> %0
+}
+
+define <vscale x 3 x i1> @whilewr_32_widen(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_32_widen:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_32_widen:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
+; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_32_widen:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI18_0
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI18_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI18_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: umov w0, v0.h[0]
+; CHECK-NOSVE-NEXT: umov w1, v0.h[1]
+; CHECK-NOSVE-NEXT: umov w2, v0.h[2]
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 3 x i1> @llvm.loop.dependence.war.mask.nxv3i1(ptr %a, ptr %b, i64 4)
+ ret <vscale x 3 x i1> %0
+}
+
+define <vscale x 16 x i1> @whilewr_badimm(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_badimm:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE2-NEXT: addvl sp, sp, #-1
+; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_offset w29, -16
+; CHECK-SVE2-NEXT: index z0.d, #0, #1
+; CHECK-SVE2-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
+; CHECK-SVE2-NEXT: sub x9, x1, x0
+; CHECK-SVE2-NEXT: movk x8, #21846
+; CHECK-SVE2-NEXT: ptrue p0.d
+; CHECK-SVE2-NEXT: smulh x8, x9, x8
+; CHECK-SVE2-NEXT: mov z1.d, z0.d
+; CHECK-SVE2-NEXT: mov z4.d, z0.d
+; CHECK-SVE2-NEXT: mov z5.d, z0.d
+; CHECK-SVE2-NEXT: incd z1.d
+; CHECK-SVE2-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE2-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE2-NEXT: mov z2.d, x8
+; CHECK-SVE2-NEXT: mov z3.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-SVE2-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-SVE2-NEXT: mov z0.d, z3.d
+; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
+; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE2-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE2-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE2-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE2-NEXT: addvl sp, sp, #1
+; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_badimm:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
+; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: movk x8, #21846
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: smulh x8, x9, x8
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: mov z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: mov z0.d, z3.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-SVE-NEXT: ret
+; CHECK-NOSVE-LABEL: whilewr_badimm:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_1
+; CHECK-NOSVE-NEXT: movk x8, #21846
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI22_1]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_3
+; CHECK-NOSVE-NEXT: smulh x8, x9, x8
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_0
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI22_3]
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI22_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_2
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_5
+; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI22_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_4
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI22_5]
+; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI22_4]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_6
+; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI22_6]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_7
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI22_7]
+; CHECK-NOSVE-NEXT: dup v4.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 3)
+ ret <vscale x 16 x i1> %0
+}
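The whilewr_badimm checks above show the expansion for an element size of 3, which has no whilewr encoding: the pointer difference is divided by 3 with a magic multiply (mov/movk builds 0x5555555555555556, smulh takes the high 64 bits, and the add with lsr #63 folds in the sign correction). A standalone C++ check of that arithmetic, written against plain integers rather than any LLVM API (sdiv3_magic is a hypothetical name; __int128 is a GCC/Clang extension):

#include <cassert>
#include <cstdint>

// Truncating signed division by 3 via multiply-high, mirroring the
// mov/movk + smulh + "add x8, x8, x8, lsr #63" sequence in the checks above.
int64_t sdiv3_magic(int64_t n) {
  const int64_t magic = 0x5555555555555556; // (2^64 + 2) / 3
  int64_t hi = (int64_t)(((__int128)n * magic) >> 64); // smulh x8, x9, x8
  return hi + (int64_t)((uint64_t)hi >> 63);           // add the sign bit
}

int main() {
  for (int64_t n : {-100, -7, -3, -1, 0, 1, 3, 100})
    assert(sdiv3_magic(n) == n / 3);
  return 0;
}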
From a84e5e2420dc81f76ff1a86c2d91ef37d3032171 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 11 Aug 2025 10:53:49 +0100
Subject: [PATCH 31/43] Remove comment about mismatched type and immediate
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 1 -
1 file changed, 1 deletion(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index eac2f5c5e753f..389cfdc4c9dd9 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5252,7 +5252,6 @@ AArch64TargetLowering::LowerLOOP_DEPENDENCE_MASK(SDValue Op,
unsigned NumElements = FullVT.getVectorMinNumElements();
unsigned NumSplits = 0;
EVT EltVT;
- // Make sure that the promoted mask size and element size match
switch (EltSize) {
case 1:
EltVT = MVT::i8;
From 054f85957d58f96204ff18721b04fc8ab4dc2bb1 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Tue, 12 Aug 2025 11:23:02 +0100
Subject: [PATCH 32/43] Improve lowering and splitting code a bit
---
.../SelectionDAG/LegalizeVectorTypes.cpp | 20 +-
.../Target/AArch64/AArch64ISelLowering.cpp | 26 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 696 ++++----
.../CodeGen/AArch64/alias_mask_scalable.ll | 1476 +++++------------
4 files changed, 791 insertions(+), 1427 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index b45df45b12d2a..d8a61beef737a 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -1617,21 +1617,9 @@ void DAGTypeLegalizer::SplitVecRes_BITCAST(SDNode *N, SDValue &Lo,
void DAGTypeLegalizer::SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo,
SDValue &Hi) {
- EVT EltVT;
- switch (N->getConstantOperandVal(2)) {
- case 1:
- EltVT = MVT::i8;
- break;
- case 2:
- EltVT = MVT::i16;
- break;
- case 4:
- EltVT = MVT::i32;
- break;
- case 8:
- EltVT = MVT::i64;
- break;
- }
+ unsigned EltSize = N->getConstantOperandVal(2);
+ EVT EltVT = EVT::getIntegerVT(*DAG.getContext(), EltSize * 8);
+
SDLoc DL(N);
EVT LoVT, HiVT;
std::tie(LoVT, HiVT) = DAG.GetSplitDestVTs(N->getValueType(0));
@@ -1643,14 +1631,12 @@ void DAGTypeLegalizer::SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo,
HiVT.getVectorMinNumElements(), false);
unsigned Offset = StoreVT.getStoreSizeInBits() / 8;
SDValue Addend;
-
if (HiVT.isScalableVT())
Addend = DAG.getVScale(DL, MVT::i64, APInt(64, Offset));
else
Addend = DAG.getConstant(Offset, DL, MVT::i64);
PtrA = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrA, Addend);
-
PtrB = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrB, Addend);
Hi = DAG.getNode(N->getOpcode(), DL, HiVT, PtrA, PtrB, N->getOperand(2));
}
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 389cfdc4c9dd9..516f1da786701 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5282,16 +5282,18 @@ AArch64TargetLowering::LowerLOOP_DEPENDENCE_MASK(SDValue Op,
EVT StoreVT = EVT::getVectorVT(*DAG.getContext(), EltVT,
VT.getVectorMinNumElements(), false);
- unsigned Offset = StoreVT.getStoreSizeInBits() / 8 * AddrScale;
- SDValue Addend;
+ if (AddrScale > 0) {
+ unsigned Offset = StoreVT.getStoreSizeInBits() / 8 * AddrScale;
+ SDValue Addend;
- if (VT.isScalableVT())
- Addend = DAG.getVScale(DL, MVT::i64, APInt(64, Offset));
- else
- Addend = DAG.getConstant(Offset, DL, MVT::i64);
+ if (VT.isScalableVT())
+ Addend = DAG.getVScale(DL, MVT::i64, APInt(64, Offset));
+ else
+ Addend = DAG.getConstant(Offset, DL, MVT::i64);
- PtrA = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrA, Addend);
- PtrB = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrB, Addend);
+ PtrA = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrA, Addend);
+ PtrB = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrB, Addend);
+ }
if (VT.isScalableVT())
return DAG.getNode(Op.getOpcode(), DL, VT, PtrA, PtrB, Op.getOperand(2));
@@ -5313,16 +5315,16 @@ AArch64TargetLowering::LowerLOOP_DEPENDENCE_MASK(SDValue Op,
};
if (NumSplits == 0)
- return LowerToWhile(FullVT, false);
+ return LowerToWhile(FullVT, 0);
SDValue FullVec = DAG.getUNDEF(FullVT);
unsigned NumElementsPerSplit = NumElements / (2 * NumSplits);
+ EVT PartVT =
+ EVT::getVectorVT(*DAG.getContext(), FullVT.getVectorElementType(),
+ NumElementsPerSplit, FullVT.isScalableVT());
for (unsigned Split = 0, InsertIdx = 0; Split < NumSplits;
Split++, InsertIdx += 2) {
- EVT PartVT =
- EVT::getVectorVT(*DAG.getContext(), FullVT.getVectorElementType(),
- NumElementsPerSplit, FullVT.isScalableVT());
SDValue Low = LowerToWhile(PartVT, InsertIdx);
SDValue High = LowerToWhile(PartVT, InsertIdx + 1);
unsigned InsertIdxLow = InsertIdx * NumElementsPerSplit;
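The same pointer arithmetic in the lowering path, again as a plain C++ sketch with hypothetical names: parts after the first advance both pointers by a whole number of part-sized chunks, and part 0 now skips the ADD nodes entirely thanks to the AddrScale > 0 guard.

#include <cstdint>

// Mirror of the guarded pointer adjustment: AddrScale counts how many
// part-sized chunks to step over; part 0 leaves the pointers untouched.
void advancePtrs(uint64_t &ptrA, uint64_t &ptrB, unsigned partStoreBytes,
                 unsigned addrScale) {
  if (addrScale == 0)
    return; // no ADD nodes emitted for the first part
  uint64_t offset = uint64_t(partStoreBytes) * addrScale;
  ptrA += offset;
  ptrB += offset;
}

int main() {
  uint64_t a = 0, b = 0;
  advancePtrs(a, b, 16, 3); // hypothetical 16-byte parts, fourth part
  return (a == 48 && b == 48) ? 0 : 1;
}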
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index ec00f9de39c22..b1491c41135fa 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -511,13 +511,44 @@ entry:
define <16 x i1> @whilewr_16_split(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_16_split:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: add x8, x1, #16
-; CHECK-SVE-NEXT: add x9, x0, #16
-; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
-; CHECK-SVE-NEXT: whilewr p1.h, x9, x8
-; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z1.h, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: mov z6.d, z0.d
+; CHECK-SVE-NEXT: mov z7.d, z0.d
+; CHECK-SVE-NEXT: mov z16.d, z0.d
+; CHECK-SVE-NEXT: dup v3.2d, x8
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-SVE-NEXT: add z4.d, z4.d, #8 // =0x8
+; CHECK-SVE-NEXT: add z5.d, z5.d, #6 // =0x6
+; CHECK-SVE-NEXT: add z6.d, z6.d, #4 // =0x4
+; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
+; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
+; CHECK-SVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-SVE-NEXT: cmhi v16.2d, v3.2d, v16.2d
+; CHECK-SVE-NEXT: cmhi v3.2d, v3.2d, v7.2d
+; CHECK-SVE-NEXT: uzp1 v2.4s, v4.4s, v2.4s
+; CHECK-SVE-NEXT: uzp1 v4.4s, v6.4s, v5.4s
+; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v16.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
+; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v4.8h
; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: dup v1.16b, w8
+; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_16_split:
@@ -570,38 +601,54 @@ entry:
define <32 x i1> @whilewr_16_split2(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_16_split2:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: add x9, x1, #48
-; CHECK-SVE-NEXT: add x10, x0, #48
-; CHECK-SVE-NEXT: add x11, x1, #16
-; CHECK-SVE-NEXT: whilewr p1.h, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #32
-; CHECK-SVE-NEXT: add x10, x0, #32
-; CHECK-SVE-NEXT: add x12, x0, #16
-; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
-; CHECK-SVE-NEXT: whilewr p2.h, x10, x9
-; CHECK-SVE-NEXT: mov z0.h, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: add x9, x9, x9, lsr #63
+; CHECK-SVE-NEXT: asr x9, x9, #1
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: mov z7.d, z0.d
+; CHECK-SVE-NEXT: mov z16.d, z0.d
+; CHECK-SVE-NEXT: dup v6.2d, x9
+; CHECK-SVE-NEXT: cmp x9, #1
+; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-SVE-NEXT: add z3.d, z3.d, #8 // =0x8
+; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
+; CHECK-SVE-NEXT: add z5.d, z5.d, #4 // =0x4
+; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
+; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
+; CHECK-SVE-NEXT: cmhi v0.2d, v6.2d, v0.2d
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: cmhi v1.2d, v6.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v6.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v3.2d, v6.2d, v3.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v6.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v5.2d, v6.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v7.2d, v6.2d, v7.2d
+; CHECK-SVE-NEXT: cmhi v6.2d, v6.2d, v16.2d
+; CHECK-SVE-NEXT: uzp1 v2.4s, v3.4s, v2.4s
+; CHECK-SVE-NEXT: uzp1 v3.4s, v5.4s, v4.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v7.4s
+; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v6.4s
+; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v3.8h
+; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
+; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: dup v1.16b, w9
; CHECK-SVE-NEXT: adrp x9, .LCPI11_0
-; CHECK-SVE-NEXT: whilewr p3.h, x12, x11
-; CHECK-SVE-NEXT: mov z2.h, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z1.h, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z3.h, p3/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-SVE-NEXT: uzp1 v1.16b, v2.16b, v3.16b
-; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI11_0]
+; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: ldr q1, [x9, :lo12:.LCPI11_0]
; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
-; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: ext v2.16b, v0.16b, v0.16b, #8
-; CHECK-SVE-NEXT: ext v3.16b, v1.16b, v1.16b, #8
-; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v2.16b
-; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v3.16b
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
; CHECK-SVE-NEXT: addv h0, v0.8h
-; CHECK-SVE-NEXT: addv h1, v1.8h
; CHECK-SVE-NEXT: str h0, [x8, #2]
-; CHECK-SVE-NEXT: str h1, [x8]
+; CHECK-SVE-NEXT: str h0, [x8]
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_16_split2:
@@ -664,28 +711,31 @@ entry:
define <8 x i1> @whilewr_32_split(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32_split:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
-; CHECK-SVE-NEXT: add x10, x0, #16
-; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov w8, v0.s[1]
-; CHECK-SVE-NEXT: mov v1.16b, v0.16b
-; CHECK-SVE-NEXT: mov w9, v0.s[2]
-; CHECK-SVE-NEXT: mov v1.h[1], w8
-; CHECK-SVE-NEXT: mov w8, v0.s[3]
-; CHECK-SVE-NEXT: mov v1.h[2], w9
-; CHECK-SVE-NEXT: add x9, x1, #16
-; CHECK-SVE-NEXT: whilewr p0.s, x10, x9
-; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v1.h[3], w8
-; CHECK-SVE-NEXT: fmov w8, s0
-; CHECK-SVE-NEXT: mov w9, v0.s[1]
-; CHECK-SVE-NEXT: mov v1.h[4], w8
-; CHECK-SVE-NEXT: mov w8, v0.s[2]
-; CHECK-SVE-NEXT: mov v1.h[5], w9
-; CHECK-SVE-NEXT: mov w9, v0.s[3]
-; CHECK-SVE-NEXT: mov v1.h[6], w8
-; CHECK-SVE-NEXT: mov v1.h[7], w9
-; CHECK-SVE-NEXT: xtn v0.8b, v1.8h
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: dup v1.2d, x8
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
+; CHECK-SVE-NEXT: add z2.d, z2.d, #4 // =0x4
+; CHECK-SVE-NEXT: add z3.d, z3.d, #2 // =0x2
+; CHECK-SVE-NEXT: cmhi v0.2d, v1.2d, v0.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v1.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v1.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v1.2d, v1.2d, v3.2d
+; CHECK-SVE-NEXT: uzp1 v2.4s, v2.4s, v4.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-SVE-NEXT: dup v1.8b, w8
+; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v2.8h
+; CHECK-SVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-SVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_32_split:
@@ -725,51 +775,46 @@ entry:
define <16 x i1> @whilewr_32_split2(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32_split2:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: add x8, x1, #32
-; CHECK-SVE-NEXT: add x9, x0, #32
-; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
-; CHECK-SVE-NEXT: whilewr p1.s, x9, x8
-; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z1.s, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov w8, v0.s[1]
-; CHECK-SVE-NEXT: mov v2.16b, v0.16b
-; CHECK-SVE-NEXT: mov w10, v0.s[2]
-; CHECK-SVE-NEXT: mov w9, v1.s[1]
-; CHECK-SVE-NEXT: mov v3.16b, v1.16b
-; CHECK-SVE-NEXT: mov w11, v1.s[3]
-; CHECK-SVE-NEXT: mov v2.h[1], w8
-; CHECK-SVE-NEXT: mov w8, v1.s[2]
-; CHECK-SVE-NEXT: mov v3.h[1], w9
-; CHECK-SVE-NEXT: mov w9, v0.s[3]
-; CHECK-SVE-NEXT: mov v2.h[2], w10
-; CHECK-SVE-NEXT: add x10, x1, #16
-; CHECK-SVE-NEXT: mov v3.h[2], w8
-; CHECK-SVE-NEXT: add x8, x0, #16
-; CHECK-SVE-NEXT: whilewr p0.s, x8, x10
-; CHECK-SVE-NEXT: add x8, x1, #48
-; CHECK-SVE-NEXT: add x10, x0, #48
-; CHECK-SVE-NEXT: whilewr p1.s, x10, x8
-; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v2.h[3], w9
-; CHECK-SVE-NEXT: mov z1.s, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v3.h[3], w11
-; CHECK-SVE-NEXT: fmov w9, s0
-; CHECK-SVE-NEXT: mov w8, v0.s[1]
-; CHECK-SVE-NEXT: fmov w10, s1
-; CHECK-SVE-NEXT: mov w11, v1.s[1]
-; CHECK-SVE-NEXT: mov v2.h[4], w9
-; CHECK-SVE-NEXT: mov w9, v0.s[2]
-; CHECK-SVE-NEXT: mov v3.h[4], w10
-; CHECK-SVE-NEXT: mov w10, v1.s[2]
-; CHECK-SVE-NEXT: mov v2.h[5], w8
-; CHECK-SVE-NEXT: mov w8, v0.s[3]
-; CHECK-SVE-NEXT: mov v3.h[5], w11
-; CHECK-SVE-NEXT: mov w11, v1.s[3]
-; CHECK-SVE-NEXT: mov v2.h[6], w9
-; CHECK-SVE-NEXT: mov v3.h[6], w10
-; CHECK-SVE-NEXT: mov v2.h[7], w8
-; CHECK-SVE-NEXT: mov v3.h[7], w11
-; CHECK-SVE-NEXT: uzp1 v0.16b, v2.16b, v3.16b
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: mov z6.d, z0.d
+; CHECK-SVE-NEXT: mov z7.d, z0.d
+; CHECK-SVE-NEXT: mov z16.d, z0.d
+; CHECK-SVE-NEXT: dup v3.2d, x8
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-SVE-NEXT: add z4.d, z4.d, #8 // =0x8
+; CHECK-SVE-NEXT: add z5.d, z5.d, #6 // =0x6
+; CHECK-SVE-NEXT: add z6.d, z6.d, #4 // =0x4
+; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
+; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
+; CHECK-SVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-SVE-NEXT: cmhi v16.2d, v3.2d, v16.2d
+; CHECK-SVE-NEXT: cmhi v3.2d, v3.2d, v7.2d
+; CHECK-SVE-NEXT: uzp1 v2.4s, v4.4s, v2.4s
+; CHECK-SVE-NEXT: uzp1 v4.4s, v6.4s, v5.4s
+; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v16.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
+; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v4.8h
+; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: dup v1.16b, w8
+; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_32_split2:
@@ -824,113 +869,55 @@ entry:
define <32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32_split3:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
-; CHECK-SVE-NEXT: add x9, x1, #96
-; CHECK-SVE-NEXT: add x10, x0, #96
-; CHECK-SVE-NEXT: add x11, x1, #64
-; CHECK-SVE-NEXT: add x12, x0, #64
-; CHECK-SVE-NEXT: whilewr p1.s, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #32
-; CHECK-SVE-NEXT: add x10, x0, #32
-; CHECK-SVE-NEXT: whilewr p2.s, x12, x11
-; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p0.s, x10, x9
-; CHECK-SVE-NEXT: mov z4.s, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z5.s, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z1.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov w9, v0.s[1]
-; CHECK-SVE-NEXT: mov w12, v4.s[1]
-; CHECK-SVE-NEXT: mov w10, v0.s[2]
-; CHECK-SVE-NEXT: mov w13, v5.s[1]
-; CHECK-SVE-NEXT: mov w11, v0.s[3]
-; CHECK-SVE-NEXT: // kill: def $q0 killed $q0 killed $z0
-; CHECK-SVE-NEXT: mov v2.16b, v4.16b
-; CHECK-SVE-NEXT: mov w14, v1.s[1]
-; CHECK-SVE-NEXT: mov v3.16b, v5.16b
-; CHECK-SVE-NEXT: mov w15, v1.s[2]
-; CHECK-SVE-NEXT: mov w16, v1.s[3]
-; CHECK-SVE-NEXT: // kill: def $q1 killed $q1 killed $z1
-; CHECK-SVE-NEXT: mov w17, v4.s[2]
-; CHECK-SVE-NEXT: mov w18, v5.s[2]
-; CHECK-SVE-NEXT: mov v0.h[1], w9
-; CHECK-SVE-NEXT: mov v2.h[1], w12
-; CHECK-SVE-NEXT: add x9, x1, #16
-; CHECK-SVE-NEXT: mov v3.h[1], w13
-; CHECK-SVE-NEXT: add x12, x0, #16
-; CHECK-SVE-NEXT: add x13, x1, #112
-; CHECK-SVE-NEXT: mov v1.h[1], w14
-; CHECK-SVE-NEXT: add x14, x0, #112
-; CHECK-SVE-NEXT: whilewr p0.s, x12, x9
-; CHECK-SVE-NEXT: whilewr p1.s, x14, x13
-; CHECK-SVE-NEXT: add x13, x0, #80
-; CHECK-SVE-NEXT: mov w9, v4.s[3]
-; CHECK-SVE-NEXT: mov v0.h[2], w10
-; CHECK-SVE-NEXT: add x10, x1, #80
-; CHECK-SVE-NEXT: mov w12, v5.s[3]
-; CHECK-SVE-NEXT: whilewr p2.s, x13, x10
-; CHECK-SVE-NEXT: add x10, x1, #48
-; CHECK-SVE-NEXT: add x13, x0, #48
-; CHECK-SVE-NEXT: mov v2.h[2], w17
-; CHECK-SVE-NEXT: mov v3.h[2], w18
-; CHECK-SVE-NEXT: mov v1.h[2], w15
-; CHECK-SVE-NEXT: mov z4.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p0.s, x13, x10
-; CHECK-SVE-NEXT: mov z5.s, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z6.s, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v0.h[3], w11
-; CHECK-SVE-NEXT: mov z7.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v2.h[3], w9
-; CHECK-SVE-NEXT: mov v3.h[3], w12
-; CHECK-SVE-NEXT: mov v1.h[3], w16
-; CHECK-SVE-NEXT: fmov w10, s4
-; CHECK-SVE-NEXT: fmov w12, s5
-; CHECK-SVE-NEXT: fmov w14, s6
-; CHECK-SVE-NEXT: fmov w15, s7
-; CHECK-SVE-NEXT: mov w9, v4.s[1]
-; CHECK-SVE-NEXT: mov w11, v5.s[1]
-; CHECK-SVE-NEXT: mov w13, v6.s[1]
-; CHECK-SVE-NEXT: mov v2.h[4], w12
-; CHECK-SVE-NEXT: mov v3.h[4], w14
-; CHECK-SVE-NEXT: mov w12, v7.s[1]
-; CHECK-SVE-NEXT: mov v0.h[4], w10
-; CHECK-SVE-NEXT: mov v1.h[4], w15
-; CHECK-SVE-NEXT: mov w10, v4.s[2]
-; CHECK-SVE-NEXT: mov w14, v5.s[2]
-; CHECK-SVE-NEXT: mov w15, v6.s[2]
-; CHECK-SVE-NEXT: mov v2.h[5], w11
-; CHECK-SVE-NEXT: mov v3.h[5], w13
-; CHECK-SVE-NEXT: mov w11, v7.s[2]
-; CHECK-SVE-NEXT: mov v0.h[5], w9
-; CHECK-SVE-NEXT: mov v1.h[5], w12
-; CHECK-SVE-NEXT: mov w9, v4.s[3]
-; CHECK-SVE-NEXT: mov w12, v5.s[3]
-; CHECK-SVE-NEXT: mov w13, v6.s[3]
-; CHECK-SVE-NEXT: mov v2.h[6], w14
-; CHECK-SVE-NEXT: mov v3.h[6], w15
-; CHECK-SVE-NEXT: mov w14, v7.s[3]
-; CHECK-SVE-NEXT: mov v0.h[6], w10
-; CHECK-SVE-NEXT: mov v1.h[6], w11
-; CHECK-SVE-NEXT: mov v2.h[7], w12
-; CHECK-SVE-NEXT: mov v3.h[7], w13
-; CHECK-SVE-NEXT: mov v0.h[7], w9
-; CHECK-SVE-NEXT: mov v1.h[7], w14
-; CHECK-SVE-NEXT: adrp x9, .LCPI14_0
-; CHECK-SVE-NEXT: uzp1 v2.16b, v3.16b, v2.16b
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: add x10, x9, #3
+; CHECK-SVE-NEXT: cmp x9, #0
+; CHECK-SVE-NEXT: csel x9, x10, x9, lt
+; CHECK-SVE-NEXT: asr x9, x9, #2
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: mov z7.d, z0.d
+; CHECK-SVE-NEXT: mov z16.d, z0.d
+; CHECK-SVE-NEXT: dup v6.2d, x9
+; CHECK-SVE-NEXT: cmp x9, #1
+; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-SVE-NEXT: add z3.d, z3.d, #8 // =0x8
+; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
+; CHECK-SVE-NEXT: add z5.d, z5.d, #4 // =0x4
+; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
+; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
+; CHECK-SVE-NEXT: cmhi v0.2d, v6.2d, v0.2d
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: cmhi v1.2d, v6.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v6.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v3.2d, v6.2d, v3.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v6.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v5.2d, v6.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v7.2d, v6.2d, v7.2d
+; CHECK-SVE-NEXT: cmhi v6.2d, v6.2d, v16.2d
+; CHECK-SVE-NEXT: uzp1 v2.4s, v3.4s, v2.4s
+; CHECK-SVE-NEXT: uzp1 v3.4s, v5.4s, v4.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v7.4s
+; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v6.4s
+; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v3.8h
+; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: shl v1.16b, v2.16b, #7
-; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI14_0]
+; CHECK-SVE-NEXT: dup v1.16b, w9
+; CHECK-SVE-NEXT: adrp x9, .LCPI14_0
+; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: ldr q1, [x9, :lo12:.LCPI14_0]
; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
-; CHECK-SVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
-; CHECK-SVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
-; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
-; CHECK-SVE-NEXT: addv h1, v1.8h
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
; CHECK-SVE-NEXT: addv h0, v0.8h
-; CHECK-SVE-NEXT: str h1, [x8, #2]
+; CHECK-SVE-NEXT: str h0, [x8, #2]
; CHECK-SVE-NEXT: str h0, [x8]
; CHECK-SVE-NEXT: ret
;
@@ -996,16 +983,23 @@ entry:
define <4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_split:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.d, x0, x1
-; CHECK-SVE-NEXT: add x8, x1, #16
-; CHECK-SVE-NEXT: add x9, x0, #16
-; CHECK-SVE-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p0.d, x9, x8
-; CHECK-SVE-NEXT: mov z1.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v0.s[1], v0.s[2]
-; CHECK-SVE-NEXT: mov v0.s[2], v1.s[0]
-; CHECK-SVE-NEXT: mov v0.s[3], v1.s[2]
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: dup v2.2d, x8
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: add z1.d, z1.d, #2 // =0x2
+; CHECK-SVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
+; CHECK-SVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
+; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-SVE-NEXT: dup v1.4h, w8
; CHECK-SVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-SVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_64_split:
@@ -1037,28 +1031,31 @@ entry:
define <8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_split2:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: add x8, x1, #32
-; CHECK-SVE-NEXT: add x9, x0, #32
-; CHECK-SVE-NEXT: whilewr p0.d, x0, x1
-; CHECK-SVE-NEXT: whilewr p1.d, x9, x8
-; CHECK-SVE-NEXT: add x8, x1, #16
-; CHECK-SVE-NEXT: add x9, x0, #16
-; CHECK-SVE-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p0.d, x9, x8
-; CHECK-SVE-NEXT: add x8, x1, #48
-; CHECK-SVE-NEXT: mov z1.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: add x9, x0, #48
-; CHECK-SVE-NEXT: whilewr p1.d, x9, x8
-; CHECK-SVE-NEXT: mov z2.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v0.s[1], v0.s[2]
-; CHECK-SVE-NEXT: mov v1.s[1], v1.s[2]
-; CHECK-SVE-NEXT: mov z3.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v0.s[2], v2.s[0]
-; CHECK-SVE-NEXT: mov v1.s[2], v3.s[0]
-; CHECK-SVE-NEXT: mov v0.s[3], v2.s[2]
-; CHECK-SVE-NEXT: mov v1.s[3], v3.s[2]
-; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: dup v1.2d, x8
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
+; CHECK-SVE-NEXT: add z2.d, z2.d, #4 // =0x4
+; CHECK-SVE-NEXT: add z3.d, z3.d, #2 // =0x2
+; CHECK-SVE-NEXT: cmhi v0.2d, v1.2d, v0.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v1.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v1.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v1.2d, v1.2d, v3.2d
+; CHECK-SVE-NEXT: uzp1 v2.4s, v2.4s, v4.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-SVE-NEXT: dup v1.8b, w8
+; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v2.8h
; CHECK-SVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-SVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_64_split2:
@@ -1098,51 +1095,46 @@ entry:
define <16 x i1> @whilewr_64_split3(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_split3:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: add x8, x1, #96
-; CHECK-SVE-NEXT: add x9, x0, #96
-; CHECK-SVE-NEXT: whilewr p2.d, x0, x1
-; CHECK-SVE-NEXT: whilewr p1.d, x9, x8
-; CHECK-SVE-NEXT: add x8, x1, #112
-; CHECK-SVE-NEXT: add x9, x0, #112
-; CHECK-SVE-NEXT: whilewr p0.d, x9, x8
-; CHECK-SVE-NEXT: add x8, x1, #64
-; CHECK-SVE-NEXT: add x9, x0, #64
-; CHECK-SVE-NEXT: whilewr p3.d, x9, x8
-; CHECK-SVE-NEXT: add x8, x1, #32
-; CHECK-SVE-NEXT: add x9, x0, #32
-; CHECK-SVE-NEXT: mov z0.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p1.d, x9, x8
-; CHECK-SVE-NEXT: add x8, x1, #80
-; CHECK-SVE-NEXT: add x9, x0, #80
-; CHECK-SVE-NEXT: mov z1.d, p3/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z2.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p2.d, x9, x8
-; CHECK-SVE-NEXT: add x8, x1, #16
-; CHECK-SVE-NEXT: add x9, x0, #16
-; CHECK-SVE-NEXT: mov z3.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p1.d, x9, x8
-; CHECK-SVE-NEXT: add x8, x1, #48
-; CHECK-SVE-NEXT: add x9, x0, #48
-; CHECK-SVE-NEXT: mov z4.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v0.s[1], v0.s[2]
-; CHECK-SVE-NEXT: whilewr p0.d, x9, x8
-; CHECK-SVE-NEXT: mov v1.s[1], v1.s[2]
-; CHECK-SVE-NEXT: mov v2.s[1], v2.s[2]
-; CHECK-SVE-NEXT: mov v3.s[1], v3.s[2]
-; CHECK-SVE-NEXT: mov z5.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z6.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z7.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v0.s[2], v4.s[0]
-; CHECK-SVE-NEXT: mov v2.s[2], v6.s[0]
-; CHECK-SVE-NEXT: mov v1.s[2], v5.s[0]
-; CHECK-SVE-NEXT: mov v3.s[2], v7.s[0]
-; CHECK-SVE-NEXT: mov v0.s[3], v4.s[2]
-; CHECK-SVE-NEXT: mov v1.s[3], v5.s[2]
-; CHECK-SVE-NEXT: mov v2.s[3], v6.s[2]
-; CHECK-SVE-NEXT: mov v3.s[3], v7.s[2]
-; CHECK-SVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
-; CHECK-SVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: mov z6.d, z0.d
+; CHECK-SVE-NEXT: mov z7.d, z0.d
+; CHECK-SVE-NEXT: mov z16.d, z0.d
+; CHECK-SVE-NEXT: dup v3.2d, x8
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-SVE-NEXT: add z4.d, z4.d, #8 // =0x8
+; CHECK-SVE-NEXT: add z5.d, z5.d, #6 // =0x6
+; CHECK-SVE-NEXT: add z6.d, z6.d, #4 // =0x4
+; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
+; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
+; CHECK-SVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-SVE-NEXT: cmhi v16.2d, v3.2d, v16.2d
+; CHECK-SVE-NEXT: cmhi v3.2d, v3.2d, v7.2d
+; CHECK-SVE-NEXT: uzp1 v2.4s, v4.4s, v2.4s
+; CHECK-SVE-NEXT: uzp1 v4.4s, v6.4s, v5.4s
+; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v16.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
+; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v4.8h
+; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: dup v1.16b, w8
+; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_64_split3:
@@ -1197,113 +1189,55 @@ entry:
define <32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_split4:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: add x9, x1, #96
-; CHECK-SVE-NEXT: add x10, x0, #96
-; CHECK-SVE-NEXT: whilewr p3.d, x0, x1
-; CHECK-SVE-NEXT: whilewr p2.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #112
-; CHECK-SVE-NEXT: add x10, x0, #112
-; CHECK-SVE-NEXT: whilewr p0.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #64
-; CHECK-SVE-NEXT: add x10, x0, #64
-; CHECK-SVE-NEXT: whilewr p1.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #32
-; CHECK-SVE-NEXT: add x10, x0, #32
-; CHECK-SVE-NEXT: mov z0.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p2.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #224
-; CHECK-SVE-NEXT: add x10, x0, #224
-; CHECK-SVE-NEXT: mov z5.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z1.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p4.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #240
-; CHECK-SVE-NEXT: add x10, x0, #240
-; CHECK-SVE-NEXT: whilewr p0.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #192
-; CHECK-SVE-NEXT: add x10, x0, #192
-; CHECK-SVE-NEXT: mov z3.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z2.d, p3/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z4.d, p4/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z6.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p0.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #208
-; CHECK-SVE-NEXT: add x10, x0, #208
-; CHECK-SVE-NEXT: mov v0.s[1], v0.s[2]
-; CHECK-SVE-NEXT: mov v1.s[1], v1.s[2]
-; CHECK-SVE-NEXT: whilewr p1.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #160
-; CHECK-SVE-NEXT: add x10, x0, #160
-; CHECK-SVE-NEXT: whilewr p2.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #80
-; CHECK-SVE-NEXT: add x10, x0, #80
-; CHECK-SVE-NEXT: mov z7.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p0.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #176
-; CHECK-SVE-NEXT: add x10, x0, #176
-; CHECK-SVE-NEXT: mov z17.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z16.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p1.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #128
-; CHECK-SVE-NEXT: add x10, x0, #128
-; CHECK-SVE-NEXT: whilewr p2.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #144
-; CHECK-SVE-NEXT: add x10, x0, #144
-; CHECK-SVE-NEXT: mov z18.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p1.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #16
-; CHECK-SVE-NEXT: add x10, x0, #16
-; CHECK-SVE-NEXT: mov z19.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v4.s[1], v4.s[2]
-; CHECK-SVE-NEXT: whilewr p2.d, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #48
-; CHECK-SVE-NEXT: add x10, x0, #48
-; CHECK-SVE-NEXT: mov z20.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p1.d, x10, x9
-; CHECK-SVE-NEXT: mov v7.s[1], v7.s[2]
-; CHECK-SVE-NEXT: mov v16.s[1], v16.s[2]
-; CHECK-SVE-NEXT: mov v19.s[1], v19.s[2]
-; CHECK-SVE-NEXT: mov v2.s[1], v2.s[2]
-; CHECK-SVE-NEXT: mov v3.s[1], v3.s[2]
-; CHECK-SVE-NEXT: mov z21.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z22.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z23.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov v0.s[2], v5.s[0]
-; CHECK-SVE-NEXT: mov v4.s[2], v6.s[0]
-; CHECK-SVE-NEXT: mov v7.s[2], v17.s[0]
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: add x10, x9, #7
+; CHECK-SVE-NEXT: cmp x9, #0
+; CHECK-SVE-NEXT: csel x9, x10, x9, lt
+; CHECK-SVE-NEXT: asr x9, x9, #3
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: mov z7.d, z0.d
+; CHECK-SVE-NEXT: mov z16.d, z0.d
+; CHECK-SVE-NEXT: dup v6.2d, x9
+; CHECK-SVE-NEXT: cmp x9, #1
+; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-SVE-NEXT: add z3.d, z3.d, #8 // =0x8
+; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
+; CHECK-SVE-NEXT: add z5.d, z5.d, #4 // =0x4
+; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
+; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
+; CHECK-SVE-NEXT: cmhi v0.2d, v6.2d, v0.2d
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: cmhi v1.2d, v6.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v6.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v3.2d, v6.2d, v3.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v6.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v5.2d, v6.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v7.2d, v6.2d, v7.2d
+; CHECK-SVE-NEXT: cmhi v6.2d, v6.2d, v16.2d
+; CHECK-SVE-NEXT: uzp1 v2.4s, v3.4s, v2.4s
+; CHECK-SVE-NEXT: uzp1 v3.4s, v5.4s, v4.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v7.4s
+; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v6.4s
+; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v3.8h
+; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
+; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: dup v1.16b, w9
; CHECK-SVE-NEXT: adrp x9, .LCPI18_0
-; CHECK-SVE-NEXT: mov v16.s[2], v18.s[0]
-; CHECK-SVE-NEXT: mov v19.s[2], v20.s[0]
-; CHECK-SVE-NEXT: mov v1.s[2], v21.s[0]
-; CHECK-SVE-NEXT: mov v2.s[2], v22.s[0]
-; CHECK-SVE-NEXT: mov v3.s[2], v23.s[0]
-; CHECK-SVE-NEXT: mov v0.s[3], v5.s[2]
-; CHECK-SVE-NEXT: mov v4.s[3], v6.s[2]
-; CHECK-SVE-NEXT: mov v7.s[3], v17.s[2]
-; CHECK-SVE-NEXT: mov v16.s[3], v18.s[2]
-; CHECK-SVE-NEXT: mov v19.s[3], v20.s[2]
-; CHECK-SVE-NEXT: mov v1.s[3], v21.s[2]
-; CHECK-SVE-NEXT: mov v2.s[3], v22.s[2]
-; CHECK-SVE-NEXT: mov v3.s[3], v23.s[2]
-; CHECK-SVE-NEXT: uzp1 v4.8h, v7.8h, v4.8h
-; CHECK-SVE-NEXT: uzp1 v5.8h, v19.8h, v16.8h
-; CHECK-SVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
-; CHECK-SVE-NEXT: uzp1 v2.16b, v5.16b, v4.16b
-; CHECK-SVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-SVE-NEXT: shl v1.16b, v2.16b, #7
-; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI18_0]
+; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: ldr q1, [x9, :lo12:.LCPI18_0]
; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
-; CHECK-SVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
-; CHECK-SVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
-; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
-; CHECK-SVE-NEXT: addv h1, v1.8h
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
; CHECK-SVE-NEXT: addv h0, v0.8h
-; CHECK-SVE-NEXT: str h1, [x8, #2]
+; CHECK-SVE-NEXT: str h0, [x8, #2]
; CHECK-SVE-NEXT: str h0, [x8]
; CHECK-SVE-NEXT: ret
;
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index e0198f9461486..0de9db10657b8 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -59,46 +59,6 @@ define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_8:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_1
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI0_0]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_2
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI0_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x8, :lo12:.LCPI0_2]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_4
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_3
-; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI0_4]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_5
-; CHECK-NOSVE-NEXT: dup v2.2d, x9
-; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI0_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_6
-; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI0_5]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_7
-; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI0_6]
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI0_7]
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
@@ -137,33 +97,6 @@ define <vscale x 8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_16:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI1_1
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI1_2
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI1_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_3
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI1_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI1_2]
-; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI1_3]
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
@@ -196,27 +129,6 @@ define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_32:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI2_0
-; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI2_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI2_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI2_1]
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
-; CHECK-NOSVE-NEXT: dup v1.4h, w8
-; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 4 x i1> @llvm.loop.dependence.war.mask.nxv4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
@@ -245,23 +157,6 @@ define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_64:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI3_0
-; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI3_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: asr x9, x9, #3
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: dup v1.2s, w8
-; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 2 x i1> @llvm.loop.dependence.war.mask.nxv2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
@@ -327,47 +222,6 @@ define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilerw_8:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_0
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_1
-; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI4_0]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_2
-; CHECK-NOSVE-NEXT: cneg x9, x9, mi
-; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI4_2]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_3
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI4_1]
-; CHECK-NOSVE-NEXT: ldr q4, [x8, :lo12:.LCPI4_3]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_4
-; CHECK-NOSVE-NEXT: dup v3.2d, x9
-; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI4_4]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_5
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_6
-; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI4_5]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_7
-; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI4_6]
-; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI4_7]
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v3.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v3.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v2.4s
-; CHECK-NOSVE-NEXT: cset w8, eq
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v3.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.raw.mask.nxv16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
@@ -407,34 +261,6 @@ define <vscale x 8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilerw_16:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI5_1
-; CHECK-NOSVE-NEXT: cneg x8, x8, mi
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI5_2
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI5_0]
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_3
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI5_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI5_2]
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI5_3]
-; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #0
-; CHECK-NOSVE-NEXT: cset w8, eq
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.raw.mask.nxv8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
@@ -468,28 +294,6 @@ define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilerw_32:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI6_0
-; CHECK-NOSVE-NEXT: cneg x9, x9, mi
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI6_0]
-; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI6_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI6_1]
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: cset w8, eq
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
-; CHECK-NOSVE-NEXT: dup v1.4h, w8
-; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 4 x i1> @llvm.loop.dependence.raw.mask.nxv4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
@@ -519,24 +323,6 @@ define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilerw_64:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI7_0
-; CHECK-NOSVE-NEXT: cneg x9, x9, mi
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI7_0]
-; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: asr x9, x9, #3
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: cset w8, eq
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: dup v1.2s, w8
-; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 2 x i1> @llvm.loop.dependence.raw.mask.nxv2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
@@ -635,56 +421,6 @@ define <vscale x 32 x i1> @whilewr_8_split(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_8_split:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_1
-; CHECK-NOSVE-NEXT: sub x11, x1, x0
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI8_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_2
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI8_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x9, :lo12:.LCPI8_2]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_4
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_3
-; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI8_4]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_5
-; CHECK-NOSVE-NEXT: dup v2.2d, x11
-; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI8_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_6
-; CHECK-NOSVE-NEXT: ldr q6, [x9, :lo12:.LCPI8_5]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_7
-; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI8_6]
-; CHECK-NOSVE-NEXT: cmp x11, #1
-; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI8_7]
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w9
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI8_8]
-; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: str h0, [x8, #2]
-; CHECK-NOSVE-NEXT: str h0, [x8]
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 1)
ret <vscale x 32 x i1> %0
@@ -847,58 +583,6 @@ define <vscale x 64 x i1> @whilewr_8_split2(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #2
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_8_split2:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_1
-; CHECK-NOSVE-NEXT: sub x11, x1, x0
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI9_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_2
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI9_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x9, :lo12:.LCPI9_2]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_4
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_3
-; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI9_4]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_5
-; CHECK-NOSVE-NEXT: dup v2.2d, x11
-; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI9_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_6
-; CHECK-NOSVE-NEXT: ldr q6, [x9, :lo12:.LCPI9_5]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_7
-; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI9_6]
-; CHECK-NOSVE-NEXT: cmp x11, #1
-; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI9_7]
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w9
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI9_8]
-; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: str h0, [x8, #6]
-; CHECK-NOSVE-NEXT: str h0, [x8, #4]
-; CHECK-NOSVE-NEXT: str h0, [x8, #2]
-; CHECK-NOSVE-NEXT: str h0, [x8]
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 64 x i1> @llvm.loop.dependence.war.mask.nxv64i1(ptr %a, ptr %b, i64 1)
ret <vscale x 64 x i1> %0
@@ -907,11 +591,58 @@ entry:
define <vscale x 16 x i1> @whilewr_16_split(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_16_split:
; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
-; CHECK-SVE2-NEXT: incb x1
-; CHECK-SVE2-NEXT: incb x0
-; CHECK-SVE2-NEXT: whilewr p1.h, x0, x1
-; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE2-NEXT: addvl sp, sp, #-1
+; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_offset w29, -16
+; CHECK-SVE2-NEXT: index z0.d, #0, #1
+; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: ptrue p0.d
+; CHECK-SVE2-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE2-NEXT: asr x8, x8, #1
+; CHECK-SVE2-NEXT: mov z1.d, z0.d
+; CHECK-SVE2-NEXT: mov z4.d, z0.d
+; CHECK-SVE2-NEXT: mov z5.d, z0.d
+; CHECK-SVE2-NEXT: mov z2.d, x8
+; CHECK-SVE2-NEXT: incd z1.d
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE2-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-SVE2-NEXT: mov z3.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-SVE2-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
+; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE2-NEXT: mov z0.d, z3.d
+; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-SVE2-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE2-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE2-NEXT: addvl sp, sp, #1
+; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
;
; CHECK-SVE-LABEL: whilewr_16_split:
@@ -969,48 +700,6 @@ define <vscale x 16 x i1> @whilewr_16_split(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_16_split:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_1
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI10_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_2
-; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI10_2]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_4
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI10_1]
-; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_3
-; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI10_4]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_6
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI10_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_5
-; CHECK-NOSVE-NEXT: dup v4.2d, x8
-; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI10_6]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_7
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI10_5]
-; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI10_7]
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 2)
ret <vscale x 16 x i1> %0
@@ -1019,20 +708,90 @@ entry:
define <vscale x 32 x i1> @whilewr_16_split2(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_16_split2:
; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: mov x8, x1
-; CHECK-SVE2-NEXT: mov x9, x0
-; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
-; CHECK-SVE2-NEXT: addvl x10, x1, #3
-; CHECK-SVE2-NEXT: incb x8
-; CHECK-SVE2-NEXT: incb x9
-; CHECK-SVE2-NEXT: addvl x11, x0, #3
-; CHECK-SVE2-NEXT: incb x1, all, mul #2
+; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE2-NEXT: addvl sp, sp, #-1
+; CHECK-SVE2-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_offset w29, -16
+; CHECK-SVE2-NEXT: index z0.d, #0, #1
+; CHECK-SVE2-NEXT: sub x8, x1, x0
; CHECK-SVE2-NEXT: incb x0, all, mul #2
-; CHECK-SVE2-NEXT: whilewr p1.h, x11, x10
-; CHECK-SVE2-NEXT: whilewr p2.h, x9, x8
-; CHECK-SVE2-NEXT: whilewr p3.h, x0, x1
-; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p2.b
-; CHECK-SVE2-NEXT: uzp1 p1.b, p3.b, p1.b
+; CHECK-SVE2-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE2-NEXT: incb x1, all, mul #2
+; CHECK-SVE2-NEXT: ptrue p0.d
+; CHECK-SVE2-NEXT: asr x8, x8, #1
+; CHECK-SVE2-NEXT: mov z1.d, z0.d
+; CHECK-SVE2-NEXT: mov z2.d, z0.d
+; CHECK-SVE2-NEXT: mov z3.d, z0.d
+; CHECK-SVE2-NEXT: sub x9, x1, x0
+; CHECK-SVE2-NEXT: mov z5.d, x8
+; CHECK-SVE2-NEXT: add x9, x9, x9, lsr #63
+; CHECK-SVE2-NEXT: incd z1.d
+; CHECK-SVE2-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE2-NEXT: incd z3.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z5.d, z0.d
+; CHECK-SVE2-NEXT: asr x9, x9, #1
+; CHECK-SVE2-NEXT: mov z4.d, z1.d
+; CHECK-SVE2-NEXT: mov z6.d, z1.d
+; CHECK-SVE2-NEXT: mov z7.d, z2.d
+; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z5.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z5.d, z3.d
+; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z2.d
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE2-NEXT: incd z6.d, all, mul #4
+; CHECK-SVE2-NEXT: incd z7.d, all, mul #4
+; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE2-NEXT: mov z24.d, z4.d
+; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z6.d
+; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z4.d
+; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
+; CHECK-SVE2-NEXT: incd z24.d, all, mul #4
+; CHECK-SVE2-NEXT: uzp1 p2.s, p3.s, p4.s
+; CHECK-SVE2-NEXT: uzp1 p3.s, p5.s, p6.s
+; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z24.d
+; CHECK-SVE2-NEXT: mov z5.d, x9
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z24.d
+; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z7.d
+; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
+; CHECK-SVE2-NEXT: uzp1 p7.s, p7.s, p8.s
+; CHECK-SVE2-NEXT: cmphi p9.d, p0/z, z5.d, z3.d
+; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z5.d, z4.d
+; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z2.d
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p7.h
+; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
+; CHECK-SVE2-NEXT: uzp1 p4.s, p5.s, p4.s
+; CHECK-SVE2-NEXT: uzp1 p5.s, p9.s, p6.s
+; CHECK-SVE2-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: whilelo p6.b, xzr, x8
+; CHECK-SVE2-NEXT: uzp1 p3.s, p8.s, p3.s
+; CHECK-SVE2-NEXT: cmp x9, #1
+; CHECK-SVE2-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p7.s
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p4.h, p5.h, p4.h
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p3.h
+; CHECK-SVE2-NEXT: uzp1 p1.b, p1.b, p2.b
+; CHECK-SVE2-NEXT: uzp1 p2.b, p0.b, p4.b
+; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: whilelo p3.b, xzr, x8
+; CHECK-SVE2-NEXT: sel p0.b, p1, p1.b, p6.b
+; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: sel p1.b, p2, p2.b, p3.b
+; CHECK-SVE2-NEXT: addvl sp, sp, #1
+; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
;
; CHECK-SVE-LABEL: whilewr_16_split2:
@@ -1123,58 +882,6 @@ define <vscale x 32 x i1> @whilewr_16_split2(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_16_split2:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_0
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_1
-; CHECK-NOSVE-NEXT: add x9, x9, x9, lsr #63
-; CHECK-NOSVE-NEXT: ldr q0, [x10, :lo12:.LCPI11_0]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_2
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI11_2]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_4
-; CHECK-NOSVE-NEXT: ldr q1, [x11, :lo12:.LCPI11_1]
-; CHECK-NOSVE-NEXT: asr x9, x9, #1
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_3
-; CHECK-NOSVE-NEXT: ldr q5, [x10, :lo12:.LCPI11_4]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_6
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI11_3]
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_5
-; CHECK-NOSVE-NEXT: dup v4.2d, x9
-; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI11_6]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_7
-; CHECK-NOSVE-NEXT: ldr q6, [x11, :lo12:.LCPI11_5]
-; CHECK-NOSVE-NEXT: ldr q16, [x10, :lo12:.LCPI11_7]
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w9
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI11_8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI11_8]
-; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: str h0, [x8, #2]
-; CHECK-NOSVE-NEXT: str h0, [x8]
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 2)
ret <vscale x 32 x i1> %0
@@ -1183,11 +890,32 @@ entry:
define <vscale x 8 x i1> @whilewr_32_split(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_32_split:
; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
-; CHECK-SVE2-NEXT: incb x1
-; CHECK-SVE2-NEXT: incb x0
-; CHECK-SVE2-NEXT: whilewr p1.s, x0, x1
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-SVE2-NEXT: index z0.d, #0, #1
+; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: ptrue p0.d
+; CHECK-SVE2-NEXT: add x9, x8, #3
+; CHECK-SVE2-NEXT: cmp x8, #0
+; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: asr x8, x8, #2
+; CHECK-SVE2-NEXT: mov z1.d, z0.d
+; CHECK-SVE2-NEXT: mov z2.d, z0.d
+; CHECK-SVE2-NEXT: mov z3.d, x8
+; CHECK-SVE2-NEXT: incd z1.d
+; CHECK-SVE2-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE2-NEXT: mov z4.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE2-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-SVE2-NEXT: uzp1 p0.h, p1.h, p0.h
+; CHECK-SVE2-NEXT: whilelo p1.h, xzr, x8
+; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE2-NEXT: ret
;
; CHECK-SVE-LABEL: whilewr_32_split:
@@ -1219,35 +947,6 @@ define <vscale x 8 x i1> @whilewr_32_split(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_32_split:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI12_1
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI12_2
-; CHECK-NOSVE-NEXT: add x9, x8, #3
-; CHECK-NOSVE-NEXT: cmp x8, #0
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI12_1]
-; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI12_0
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI12_2]
-; CHECK-NOSVE-NEXT: asr x8, x8, #2
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI12_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI12_3
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI12_3]
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 4)
ret <vscale x 8 x i1> %0
@@ -1256,25 +955,60 @@ entry:
define <vscale x 16 x i1> @whilewr_32_split2(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_32_split2:
; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
-; CHECK-SVE2-NEXT: mov x10, x1
-; CHECK-SVE2-NEXT: mov x11, x0
-; CHECK-SVE2-NEXT: addvl x8, x1, #3
-; CHECK-SVE2-NEXT: addvl x9, x0, #3
-; CHECK-SVE2-NEXT: incb x10, all, mul #2
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p0.h
-; CHECK-SVE2-NEXT: incb x11, all, mul #2
-; CHECK-SVE2-NEXT: incb x1
-; CHECK-SVE2-NEXT: incb x0
-; CHECK-SVE2-NEXT: whilewr p1.s, x9, x8
-; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p0.b
-; CHECK-SVE2-NEXT: whilewr p2.s, x11, x10
-; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
-; CHECK-SVE2-NEXT: whilewr p3.s, x0, x1
-; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
-; CHECK-SVE2-NEXT: uzp1 p1.h, p2.h, p1.h
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p3.h
-; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE2-NEXT: addvl sp, sp, #-1
+; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_offset w29, -16
+; CHECK-SVE2-NEXT: index z0.d, #0, #1
+; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: ptrue p0.d
+; CHECK-SVE2-NEXT: add x9, x8, #3
+; CHECK-SVE2-NEXT: cmp x8, #0
+; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: asr x8, x8, #2
+; CHECK-SVE2-NEXT: mov z1.d, z0.d
+; CHECK-SVE2-NEXT: mov z4.d, z0.d
+; CHECK-SVE2-NEXT: mov z5.d, z0.d
+; CHECK-SVE2-NEXT: mov z2.d, x8
+; CHECK-SVE2-NEXT: incd z1.d
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE2-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-SVE2-NEXT: mov z3.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-SVE2-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
+; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE2-NEXT: mov z0.d, z3.d
+; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-SVE2-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE2-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE2-NEXT: addvl sp, sp, #1
+; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
;
; CHECK-SVE-LABEL: whilewr_32_split2:
@@ -1334,50 +1068,6 @@ define <vscale x 16 x i1> @whilewr_32_split2(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_32_split2:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_0
-; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI13_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_2
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
-; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI13_2]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_4
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI13_1]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_3
-; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI13_4]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_6
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI13_3]
-; CHECK-NOSVE-NEXT: dup v4.2d, x9
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_5
-; CHECK-NOSVE-NEXT: ldr q7, [x8, :lo12:.LCPI13_6]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_7
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI13_5]
-; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI13_7]
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 4)
ret <vscale x 16 x i1> %0
@@ -1388,52 +1078,92 @@ define <vscale x 32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE2-NEXT: addvl sp, sp, #-1
+; CHECK-SVE2-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE2-NEXT: .cfi_offset w29, -16
-; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
-; CHECK-SVE2-NEXT: mov x10, x1
-; CHECK-SVE2-NEXT: mov x11, x0
-; CHECK-SVE2-NEXT: mov x12, x1
-; CHECK-SVE2-NEXT: mov x13, x0
-; CHECK-SVE2-NEXT: incb x10, all, mul #2
-; CHECK-SVE2-NEXT: incb x11, all, mul #2
-; CHECK-SVE2-NEXT: incb x12
-; CHECK-SVE2-NEXT: incb x13
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p0.h
-; CHECK-SVE2-NEXT: addvl x8, x1, #3
-; CHECK-SVE2-NEXT: addvl x9, x0, #3
-; CHECK-SVE2-NEXT: whilewr p1.s, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #7
-; CHECK-SVE2-NEXT: addvl x9, x0, #7
-; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p0.b
-; CHECK-SVE2-NEXT: whilewr p2.s, x11, x10
-; CHECK-SVE2-NEXT: addvl x10, x1, #6
-; CHECK-SVE2-NEXT: addvl x11, x0, #6
-; CHECK-SVE2-NEXT: whilewr p3.s, x13, x12
-; CHECK-SVE2-NEXT: addvl x12, x1, #5
-; CHECK-SVE2-NEXT: addvl x13, x0, #5
-; CHECK-SVE2-NEXT: incb x1, all, mul #4
+; CHECK-SVE2-NEXT: index z0.d, #0, #1
+; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: ptrue p0.d
+; CHECK-SVE2-NEXT: add x9, x8, #3
+; CHECK-SVE2-NEXT: cmp x8, #0
; CHECK-SVE2-NEXT: incb x0, all, mul #4
-; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
-; CHECK-SVE2-NEXT: uzp1 p1.h, p2.h, p1.h
-; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
-; CHECK-SVE2-NEXT: whilewr p5.s, x0, x1
-; CHECK-SVE2-NEXT: whilewr p4.s, x9, x8
-; CHECK-SVE2-NEXT: uzp1 p2.h, p5.h, p0.h
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p3.h
-; CHECK-SVE2-NEXT: whilewr p3.s, x11, x10
-; CHECK-SVE2-NEXT: uzp1 p2.b, p2.b, p0.b
-; CHECK-SVE2-NEXT: whilewr p5.s, x13, x12
-; CHECK-SVE2-NEXT: punpklo p2.h, p2.b
-; CHECK-SVE2-NEXT: uzp1 p3.h, p3.h, p4.h
-; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: punpklo p2.h, p2.b
-; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p1.b
-; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p5.h
+; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: incb x1, all, mul #4
+; CHECK-SVE2-NEXT: asr x8, x8, #2
+; CHECK-SVE2-NEXT: mov z1.d, z0.d
+; CHECK-SVE2-NEXT: mov z2.d, z0.d
+; CHECK-SVE2-NEXT: mov z4.d, z0.d
+; CHECK-SVE2-NEXT: mov z5.d, x8
+; CHECK-SVE2-NEXT: sub x9, x1, x0
+; CHECK-SVE2-NEXT: incd z1.d
+; CHECK-SVE2-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
+; CHECK-SVE2-NEXT: mov z3.d, z1.d
+; CHECK-SVE2-NEXT: mov z6.d, z2.d
+; CHECK-SVE2-NEXT: mov z7.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
+; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
+; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
+; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE2-NEXT: incd z6.d, all, mul #4
+; CHECK-SVE2-NEXT: incd z7.d, all, mul #4
+; CHECK-SVE2-NEXT: uzp1 p4.s, p5.s, p4.s
+; CHECK-SVE2-NEXT: mov z24.d, z3.d
+; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
+; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
+; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
+; CHECK-SVE2-NEXT: incd z24.d, all, mul #4
+; CHECK-SVE2-NEXT: uzp1 p2.s, p2.s, p7.s
+; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p8.s
+; CHECK-SVE2-NEXT: cmphi p9.d, p0/z, z5.d, z24.d
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: uzp1 p3.h, p4.h, p3.h
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: uzp1 p6.s, p6.s, p9.s
+; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE2-NEXT: add x8, x9, #3
+; CHECK-SVE2-NEXT: cmp x9, #0
+; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p6.h
+; CHECK-SVE2-NEXT: csel x8, x8, x9, lt
+; CHECK-SVE2-NEXT: asr x8, x8, #2
+; CHECK-SVE2-NEXT: uzp1 p2.b, p3.b, p2.b
+; CHECK-SVE2-NEXT: mov z5.d, x8
+; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
+; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
+; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
+; CHECK-SVE2-NEXT: cmphi p9.d, p0/z, z5.d, z4.d
+; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z3.d
+; CHECK-SVE2-NEXT: cmphi p10.d, p0/z, z5.d, z2.d
+; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: uzp1 p5.s, p7.s, p5.s
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: uzp1 p7.s, p9.s, p8.s
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p4.s, p10.s, p4.s
+; CHECK-SVE2-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p6.s
+; CHECK-SVE2-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p5.h, p7.h, p5.h
+; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p4.h
+; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: whilelo p4.b, xzr, x8
+; CHECK-SVE2-NEXT: uzp1 p3.b, p0.b, p5.b
; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p1.b, p2.b, p3.b
+; CHECK-SVE2-NEXT: sel p0.b, p2, p2.b, p1.b
+; CHECK-SVE2-NEXT: sel p1.b, p3, p3.b, p4.b
+; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
; CHECK-SVE2-NEXT: addvl sp, sp, #1
; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
@@ -1532,50 +1262,6 @@ define <vscale x 32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_32_split2:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_0
-; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI13_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_2
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
-; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI13_2]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_4
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI13_1]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_3
-; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI13_4]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_6
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI13_3]
-; CHECK-NOSVE-NEXT: dup v4.2d, x9
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_5
-; CHECK-NOSVE-NEXT: ldr q7, [x8, :lo12:.LCPI13_6]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_7
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI13_5]
-; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI13_7]
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 4)
ret <vscale x 32 x i1> %0
@@ -1584,11 +1270,24 @@ entry:
define <vscale x 4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_64_split:
; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
-; CHECK-SVE2-NEXT: incb x1
-; CHECK-SVE2-NEXT: incb x0
-; CHECK-SVE2-NEXT: whilewr p1.d, x0, x1
-; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p1.s
+; CHECK-SVE2-NEXT: index z0.d, #0, #1
+; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: ptrue p0.d
+; CHECK-SVE2-NEXT: add x9, x8, #7
+; CHECK-SVE2-NEXT: cmp x8, #0
+; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: asr x8, x8, #3
+; CHECK-SVE2-NEXT: mov z1.d, z0.d
+; CHECK-SVE2-NEXT: mov z2.d, x8
+; CHECK-SVE2-NEXT: incd z1.d
+; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: uzp1 p0.s, p1.s, p0.s
+; CHECK-SVE2-NEXT: whilelo p1.s, xzr, x8
+; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE2-NEXT: ret
;
; CHECK-SVE-LABEL: whilewr_64_split:
@@ -1612,27 +1311,6 @@ define <vscale x 4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_64_split:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI14_0
-; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI14_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #3
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI14_1]
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
-; CHECK-NOSVE-NEXT: dup v1.4h, w8
-; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 4 x i1> @llvm.loop.dependence.war.mask.nxv4i1(ptr %a, ptr %b, i64 8)
ret <vscale x 4 x i1> %0
@@ -1641,25 +1319,32 @@ entry:
define <vscale x 8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_64_split2:
; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
-; CHECK-SVE2-NEXT: mov x10, x1
-; CHECK-SVE2-NEXT: mov x11, x0
-; CHECK-SVE2-NEXT: addvl x8, x1, #3
-; CHECK-SVE2-NEXT: addvl x9, x0, #3
-; CHECK-SVE2-NEXT: incb x10, all, mul #2
-; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p0.s
-; CHECK-SVE2-NEXT: incb x11, all, mul #2
-; CHECK-SVE2-NEXT: incb x1
-; CHECK-SVE2-NEXT: incb x0
-; CHECK-SVE2-NEXT: whilewr p1.d, x9, x8
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p0.h
-; CHECK-SVE2-NEXT: whilewr p2.d, x11, x10
-; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
-; CHECK-SVE2-NEXT: whilewr p3.d, x0, x1
-; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
-; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p3.s
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-SVE2-NEXT: index z0.d, #0, #1
+; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: ptrue p0.d
+; CHECK-SVE2-NEXT: add x9, x8, #7
+; CHECK-SVE2-NEXT: cmp x8, #0
+; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: asr x8, x8, #3
+; CHECK-SVE2-NEXT: mov z1.d, z0.d
+; CHECK-SVE2-NEXT: mov z2.d, z0.d
+; CHECK-SVE2-NEXT: mov z3.d, x8
+; CHECK-SVE2-NEXT: incd z1.d
+; CHECK-SVE2-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE2-NEXT: mov z4.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE2-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-SVE2-NEXT: uzp1 p0.h, p1.h, p0.h
+; CHECK-SVE2-NEXT: whilelo p1.h, xzr, x8
+; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE2-NEXT: ret
;
; CHECK-SVE-LABEL: whilewr_64_split2:
@@ -1691,35 +1376,6 @@ define <vscale x 8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_64_split2:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI15_1
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI15_2
-; CHECK-NOSVE-NEXT: add x9, x8, #7
-; CHECK-NOSVE-NEXT: cmp x8, #0
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI15_1]
-; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_0
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI15_2]
-; CHECK-NOSVE-NEXT: asr x8, x8, #3
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI15_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_3
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI15_3]
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 8)
ret <vscale x 8 x i1> %0
@@ -1730,53 +1386,56 @@ define <vscale x 16 x i1> @whilewr_64_split3(ptr %a, ptr %b) {
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE2-NEXT: addvl sp, sp, #-1
+; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE2-NEXT: .cfi_offset w29, -16
-; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
-; CHECK-SVE2-NEXT: mov x10, x1
-; CHECK-SVE2-NEXT: mov x11, x0
-; CHECK-SVE2-NEXT: mov x12, x1
-; CHECK-SVE2-NEXT: mov x13, x0
-; CHECK-SVE2-NEXT: incb x10, all, mul #2
-; CHECK-SVE2-NEXT: incb x11, all, mul #2
-; CHECK-SVE2-NEXT: incb x12
-; CHECK-SVE2-NEXT: incb x13
-; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p0.s
-; CHECK-SVE2-NEXT: addvl x8, x1, #3
-; CHECK-SVE2-NEXT: addvl x9, x0, #3
-; CHECK-SVE2-NEXT: whilewr p1.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #7
-; CHECK-SVE2-NEXT: addvl x9, x0, #7
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p0.h
-; CHECK-SVE2-NEXT: whilewr p2.d, x11, x10
-; CHECK-SVE2-NEXT: addvl x10, x1, #6
-; CHECK-SVE2-NEXT: addvl x11, x0, #6
-; CHECK-SVE2-NEXT: whilewr p3.d, x13, x12
-; CHECK-SVE2-NEXT: addvl x12, x1, #5
-; CHECK-SVE2-NEXT: addvl x13, x0, #5
-; CHECK-SVE2-NEXT: incb x1, all, mul #4
-; CHECK-SVE2-NEXT: incb x0, all, mul #4
-; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
+; CHECK-SVE2-NEXT: index z0.d, #0, #1
+; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: ptrue p0.d
+; CHECK-SVE2-NEXT: add x9, x8, #7
+; CHECK-SVE2-NEXT: cmp x8, #0
+; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: asr x8, x8, #3
+; CHECK-SVE2-NEXT: mov z1.d, z0.d
+; CHECK-SVE2-NEXT: mov z4.d, z0.d
+; CHECK-SVE2-NEXT: mov z5.d, z0.d
+; CHECK-SVE2-NEXT: mov z2.d, x8
+; CHECK-SVE2-NEXT: incd z1.d
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE2-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-SVE2-NEXT: mov z3.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-SVE2-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
-; CHECK-SVE2-NEXT: whilewr p5.d, x0, x1
-; CHECK-SVE2-NEXT: whilewr p4.d, x9, x8
-; CHECK-SVE2-NEXT: uzp1 p2.s, p5.s, p0.s
-; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p3.s
-; CHECK-SVE2-NEXT: whilewr p3.d, x11, x10
-; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p0.h
-; CHECK-SVE2-NEXT: whilewr p5.d, x13, x12
-; CHECK-SVE2-NEXT: punpklo p2.h, p2.b
-; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p4.s
-; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: punpklo p2.h, p2.b
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p1.h
-; CHECK-SVE2-NEXT: uzp1 p2.s, p2.s, p5.s
+; CHECK-SVE2-NEXT: mov z0.d, z3.d
+; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-SVE2-NEXT: uzp1 p2.s, p4.s, p5.s
; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p1.h, p2.h, p3.h
-; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE2-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE2-NEXT: addvl sp, sp, #1
; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
@@ -1838,35 +1497,6 @@ define <vscale x 16 x i1> @whilewr_64_split3(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_64_split2:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI15_1
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI15_2
-; CHECK-NOSVE-NEXT: add x9, x8, #7
-; CHECK-NOSVE-NEXT: cmp x8, #0
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI15_1]
-; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_0
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI15_2]
-; CHECK-NOSVE-NEXT: asr x8, x8, #3
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI15_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_3
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI15_3]
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 8)
ret <vscale x 16 x i1> %0
@@ -1877,98 +1507,92 @@ define <vscale x 32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE2-NEXT: addvl sp, sp, #-1
+; CHECK-SVE2-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
+; CHECK-SVE2-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE2-NEXT: .cfi_offset w29, -16
-; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
-; CHECK-SVE2-NEXT: mov x11, x1
-; CHECK-SVE2-NEXT: mov x12, x0
-; CHECK-SVE2-NEXT: incb x11, all, mul #2
-; CHECK-SVE2-NEXT: incb x12, all, mul #2
-; CHECK-SVE2-NEXT: mov x10, x1
-; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p0.s
-; CHECK-SVE2-NEXT: mov x13, x0
-; CHECK-SVE2-NEXT: incb x10
-; CHECK-SVE2-NEXT: incb x13
-; CHECK-SVE2-NEXT: addvl x8, x1, #3
-; CHECK-SVE2-NEXT: addvl x9, x0, #3
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p0.h
-; CHECK-SVE2-NEXT: whilewr p2.d, x12, x11
-; CHECK-SVE2-NEXT: mov x11, x1
-; CHECK-SVE2-NEXT: mov x12, x0
-; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
-; CHECK-SVE2-NEXT: incb x11, all, mul #4
-; CHECK-SVE2-NEXT: incb x12, all, mul #4
-; CHECK-SVE2-NEXT: whilewr p1.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #7
-; CHECK-SVE2-NEXT: addvl x9, x0, #7
-; CHECK-SVE2-NEXT: whilewr p3.d, x13, x10
-; CHECK-SVE2-NEXT: punpklo p0.h, p0.b
-; CHECK-SVE2-NEXT: whilewr p5.d, x12, x11
-; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p3.s
-; CHECK-SVE2-NEXT: whilewr p4.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #6
-; CHECK-SVE2-NEXT: addvl x9, x0, #6
-; CHECK-SVE2-NEXT: uzp1 p2.s, p5.s, p0.s
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p1.h
-; CHECK-SVE2-NEXT: uzp1 p1.h, p2.h, p0.h
-; CHECK-SVE2-NEXT: whilewr p2.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #5
-; CHECK-SVE2-NEXT: addvl x9, x0, #5
-; CHECK-SVE2-NEXT: punpklo p1.h, p1.b
-; CHECK-SVE2-NEXT: whilewr p3.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #12
-; CHECK-SVE2-NEXT: addvl x9, x0, #12
-; CHECK-SVE2-NEXT: punpklo p1.h, p1.b
-; CHECK-SVE2-NEXT: uzp1 p2.s, p2.s, p4.s
-; CHECK-SVE2-NEXT: uzp1 p1.s, p1.s, p3.s
-; CHECK-SVE2-NEXT: whilewr p3.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #15
-; CHECK-SVE2-NEXT: addvl x9, x0, #15
-; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p2.h
-; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p0.s
-; CHECK-SVE2-NEXT: whilewr p2.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #14
-; CHECK-SVE2-NEXT: addvl x9, x0, #14
-; CHECK-SVE2-NEXT: uzp1 p3.h, p3.h, p0.h
-; CHECK-SVE2-NEXT: whilewr p4.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #13
-; CHECK-SVE2-NEXT: addvl x9, x0, #13
-; CHECK-SVE2-NEXT: punpklo p3.h, p3.b
-; CHECK-SVE2-NEXT: uzp1 p2.s, p4.s, p2.s
-; CHECK-SVE2-NEXT: whilewr p4.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #8
+; CHECK-SVE2-NEXT: index z0.d, #0, #1
+; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: ptrue p0.d
+; CHECK-SVE2-NEXT: add x9, x8, #7
+; CHECK-SVE2-NEXT: cmp x8, #0
+; CHECK-SVE2-NEXT: addvl x10, x1, #8
+; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
; CHECK-SVE2-NEXT: addvl x9, x0, #8
-; CHECK-SVE2-NEXT: punpklo p3.h, p3.b
-; CHECK-SVE2-NEXT: whilewr p5.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #11
-; CHECK-SVE2-NEXT: addvl x9, x0, #11
-; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p4.s
-; CHECK-SVE2-NEXT: uzp1 p4.s, p5.s, p0.s
-; CHECK-SVE2-NEXT: whilewr p5.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #10
-; CHECK-SVE2-NEXT: addvl x9, x0, #10
-; CHECK-SVE2-NEXT: uzp1 p4.h, p4.h, p0.h
-; CHECK-SVE2-NEXT: whilewr p6.d, x9, x8
-; CHECK-SVE2-NEXT: addvl x8, x1, #9
-; CHECK-SVE2-NEXT: addvl x9, x0, #9
-; CHECK-SVE2-NEXT: punpklo p4.h, p4.b
-; CHECK-SVE2-NEXT: whilewr p7.d, x9, x8
-; CHECK-SVE2-NEXT: punpklo p4.h, p4.b
-; CHECK-SVE2-NEXT: uzp1 p5.s, p6.s, p5.s
-; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p4.s, p4.s, p7.s
+; CHECK-SVE2-NEXT: asr x8, x8, #3
+; CHECK-SVE2-NEXT: sub x9, x10, x9
+; CHECK-SVE2-NEXT: mov z1.d, z0.d
+; CHECK-SVE2-NEXT: mov z2.d, z0.d
+; CHECK-SVE2-NEXT: mov z4.d, z0.d
+; CHECK-SVE2-NEXT: mov z5.d, x8
+; CHECK-SVE2-NEXT: incd z1.d
+; CHECK-SVE2-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
+; CHECK-SVE2-NEXT: mov z3.d, z1.d
+; CHECK-SVE2-NEXT: mov z6.d, z2.d
+; CHECK-SVE2-NEXT: mov z7.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
+; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
+; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
+; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE2-NEXT: incd z6.d, all, mul #4
+; CHECK-SVE2-NEXT: incd z7.d, all, mul #4
+; CHECK-SVE2-NEXT: uzp1 p4.s, p5.s, p4.s
+; CHECK-SVE2-NEXT: mov z24.d, z3.d
+; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
+; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
+; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
+; CHECK-SVE2-NEXT: incd z24.d, all, mul #4
+; CHECK-SVE2-NEXT: uzp1 p2.s, p2.s, p7.s
+; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p8.s
+; CHECK-SVE2-NEXT: cmphi p9.d, p0/z, z5.d, z24.d
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: uzp1 p3.h, p4.h, p3.h
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: uzp1 p6.s, p6.s, p9.s
+; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE2-NEXT: add x8, x9, #7
+; CHECK-SVE2-NEXT: cmp x9, #0
+; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p6.h
+; CHECK-SVE2-NEXT: csel x8, x8, x9, lt
+; CHECK-SVE2-NEXT: asr x8, x8, #3
+; CHECK-SVE2-NEXT: uzp1 p2.b, p3.b, p2.b
+; CHECK-SVE2-NEXT: mov z5.d, x8
+; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
+; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
+; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
+; CHECK-SVE2-NEXT: cmphi p9.d, p0/z, z5.d, z4.d
+; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z3.d
+; CHECK-SVE2-NEXT: cmphi p10.d, p0/z, z5.d, z2.d
+; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z1.d
+; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
+; CHECK-SVE2-NEXT: cmp x8, #1
+; CHECK-SVE2-NEXT: uzp1 p5.s, p7.s, p5.s
+; CHECK-SVE2-NEXT: cset w8, lt
+; CHECK-SVE2-NEXT: uzp1 p7.s, p9.s, p8.s
+; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE2-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p4.s, p10.s, p4.s
+; CHECK-SVE2-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p6.s
+; CHECK-SVE2-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: uzp1 p5.h, p7.h, p5.h
; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p2.h, p3.h, p2.h
-; CHECK-SVE2-NEXT: uzp1 p3.h, p4.h, p5.h
+; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p4.h
+; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: whilelo p4.b, xzr, x8
+; CHECK-SVE2-NEXT: uzp1 p3.b, p0.b, p5.b
; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE2-NEXT: sel p0.b, p2, p2.b, p1.b
+; CHECK-SVE2-NEXT: sel p1.b, p3, p3.b, p4.b
; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p0.b, p0.b, p1.b
-; CHECK-SVE2-NEXT: uzp1 p1.b, p3.b, p2.b
; CHECK-SVE2-NEXT: addvl sp, sp, #1
; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
@@ -2067,35 +1691,6 @@ define <vscale x 32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_64_split2:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI15_1
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI15_2
-; CHECK-NOSVE-NEXT: add x9, x8, #7
-; CHECK-NOSVE-NEXT: cmp x8, #0
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI15_1]
-; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_0
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI15_2]
-; CHECK-NOSVE-NEXT: asr x8, x8, #3
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI15_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI15_3
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI15_3]
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 8)
ret <vscale x 32 x i1> %0
@@ -2158,57 +1753,6 @@ define <vscale x 9 x i1> @whilewr_8_widen(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_8_widen:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_0
-; CHECK-NOSVE-NEXT: sub x11, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI16_1
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI16_2
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI16_0]
-; CHECK-NOSVE-NEXT: dup v1.2d, x11
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_3
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI16_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x12, :lo12:.LCPI16_2]
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI16_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI16_4
-; CHECK-NOSVE-NEXT: cmp x11, #1
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v1.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v1.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v1.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v1.2d, v4.2d
-; CHECK-NOSVE-NEXT: ldr q5, [x10, :lo12:.LCPI16_4]
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v2.4s, v0.4s
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v1.2d, v5.2d
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v4.4s, v3.4s
-; CHECK-NOSVE-NEXT: xtn v1.2s, v1.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v2.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.4h, v1.4h, v0.4h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w9
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: umov w9, v0.b[0]
-; CHECK-NOSVE-NEXT: umov w10, v0.b[1]
-; CHECK-NOSVE-NEXT: umov w11, v0.b[2]
-; CHECK-NOSVE-NEXT: umov w12, v0.b[7]
-; CHECK-NOSVE-NEXT: and w9, w9, #0x1
-; CHECK-NOSVE-NEXT: bfi w9, w10, #1, #1
-; CHECK-NOSVE-NEXT: umov w10, v0.b[3]
-; CHECK-NOSVE-NEXT: bfi w9, w11, #2, #1
-; CHECK-NOSVE-NEXT: umov w11, v0.b[4]
-; CHECK-NOSVE-NEXT: bfi w9, w10, #3, #1
-; CHECK-NOSVE-NEXT: umov w10, v0.b[5]
-; CHECK-NOSVE-NEXT: bfi w9, w11, #4, #1
-; CHECK-NOSVE-NEXT: umov w11, v0.b[6]
-; CHECK-NOSVE-NEXT: bfi w9, w10, #5, #1
-; CHECK-NOSVE-NEXT: umov w10, v0.b[8]
-; CHECK-NOSVE-NEXT: bfi w9, w11, #6, #1
-; CHECK-NOSVE-NEXT: ubfiz w11, w12, #7, #1
-; CHECK-NOSVE-NEXT: orr w9, w9, w11
-; CHECK-NOSVE-NEXT: orr w9, w9, w10, lsl #8
-; CHECK-NOSVE-NEXT: and w9, w9, #0x1ff
-; CHECK-NOSVE-NEXT: strh w9, [x8]
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 9 x i1> @llvm.loop.dependence.war.mask.nxv9i1(ptr %a, ptr %b, i64 1)
ret <vscale x 9 x i1> %0
@@ -2247,40 +1791,6 @@ define <vscale x 7 x i1> @whilewr_16_widen(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_16_widen:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI17_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_1
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI17_2
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI17_3
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI17_0]
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI17_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI17_2]
-; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: ldr q4, [x12, :lo12:.LCPI17_3]
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: umov w0, v0.b[0]
-; CHECK-NOSVE-NEXT: umov w1, v0.b[1]
-; CHECK-NOSVE-NEXT: umov w2, v0.b[2]
-; CHECK-NOSVE-NEXT: umov w3, v0.b[3]
-; CHECK-NOSVE-NEXT: umov w4, v0.b[4]
-; CHECK-NOSVE-NEXT: umov w5, v0.b[5]
-; CHECK-NOSVE-NEXT: umov w6, v0.b[6]
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 7 x i1> @llvm.loop.dependence.war.mask.nxv7i1(ptr %a, ptr %b, i64 2)
ret <vscale x 7 x i1> %0
@@ -2313,30 +1823,6 @@ define <vscale x 3 x i1> @whilewr_32_widen(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_32_widen:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI18_0
-; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI18_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI18_1]
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
-; CHECK-NOSVE-NEXT: dup v1.4h, w8
-; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: umov w0, v0.h[0]
-; CHECK-NOSVE-NEXT: umov w1, v0.h[1]
-; CHECK-NOSVE-NEXT: umov w2, v0.h[2]
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 3 x i1> @llvm.loop.dependence.war.mask.nxv3i1(ptr %a, ptr %b, i64 4)
ret <vscale x 3 x i1> %0
@@ -2458,50 +1944,6 @@ define <vscale x 16 x i1> @whilewr_badimm(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
-; CHECK-NOSVE-LABEL: whilewr_badimm:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_1
-; CHECK-NOSVE-NEXT: movk x8, #21846
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI22_1]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_3
-; CHECK-NOSVE-NEXT: smulh x8, x9, x8
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_0
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI22_3]
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI22_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_2
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_5
-; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI22_2]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_4
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI22_5]
-; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI22_4]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_6
-; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI22_6]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_7
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI22_7]
-; CHECK-NOSVE-NEXT: dup v4.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 3)
ret <vscale x 16 x i1> %0
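(Editorial note on the whilewr_badimm test above: whilewr/whilerw only encode element sizes of 1, 2, 4 and 8 bytes, so the i64 3 immediate forces the generic expansion shown in the CHECK lines. Per lane, that expansion is equivalent to the following scalar IR sketch; the value names here are illustrative, not part of the patch:

  %diff  = sub i64 %b.int, %a.int     ; ptrB - ptrA as integers
  %quot  = sdiv i64 %diff, 3          ; hazard distance in elements
  %none  = icmp slt i64 %quot, 1      ; diff <= 0: no WAR hazard at all
  %below = icmp ult i64 %lane, %quot  ; lane index below hazard distance
  %bit   = or i1 %none, %below        ; lane %lane of the result mask

This matches the CHECK-NOSVE sequence: the smulh magic-constant code is the signed division by 3, cmhi is the per-lane unsigned compare, and the cset/dup/orr tail handles the all-true case.)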
>From 970e7f9d233ddab6975a769a51573f3f89afb3b1 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Tue, 12 Aug 2025 13:21:09 +0100
Subject: [PATCH 33/43] Remove splitting from lowering
---
.../Target/AArch64/AArch64ISelLowering.cpp | 97 +++++--------------
.../CodeGen/AArch64/alias_mask_scalable.ll | 54 +++++------
2 files changed, 53 insertions(+), 98 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 516f1da786701..ebbec633d2835 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5248,94 +5248,49 @@ AArch64TargetLowering::LowerLOOP_DEPENDENCE_MASK(SDValue Op,
SelectionDAG &DAG) const {
SDLoc DL(Op);
uint64_t EltSize = Op.getConstantOperandVal(2);
- EVT FullVT = Op.getValueType();
- unsigned NumElements = FullVT.getVectorMinNumElements();
- unsigned NumSplits = 0;
- EVT EltVT;
+ EVT VT = Op.getValueType();
switch (EltSize) {
case 1:
- EltVT = MVT::i8;
+ if (VT != MVT::v16i8 && VT != MVT::nxv16i1)
+ return SDValue();
break;
case 2:
- if (NumElements >= 16)
- NumSplits = NumElements / 16;
- EltVT = MVT::i16;
+ if (VT != MVT::v8i8 && VT != MVT::nxv8i1)
+ return SDValue();
break;
case 4:
- if (NumElements >= 8)
- NumSplits = NumElements / 8;
- EltVT = MVT::i32;
+ if (VT != MVT::v4i16 && VT != MVT::nxv4i1)
+ return SDValue();
break;
case 8:
- if (NumElements >= 4)
- NumSplits = NumElements / 4;
- EltVT = MVT::i64;
+ if (VT != MVT::v2i32 && VT != MVT::nxv2i1)
+ return SDValue();
break;
default:
// Other element sizes are incompatible with whilewr/rw, so expand instead
return SDValue();
}
- auto LowerToWhile = [&](EVT VT, unsigned AddrScale) {
- SDValue PtrA = Op.getOperand(0);
- SDValue PtrB = Op.getOperand(1);
+ SDValue PtrA = Op.getOperand(0);
+ SDValue PtrB = Op.getOperand(1);
- EVT StoreVT = EVT::getVectorVT(*DAG.getContext(), EltVT,
- VT.getVectorMinNumElements(), false);
- if (AddrScale > 0) {
- unsigned Offset = StoreVT.getStoreSizeInBits() / 8 * AddrScale;
- SDValue Addend;
-
- if (VT.isScalableVT())
- Addend = DAG.getVScale(DL, MVT::i64, APInt(64, Offset));
- else
- Addend = DAG.getConstant(Offset, DL, MVT::i64);
-
- PtrA = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrA, Addend);
- PtrB = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrB, Addend);
- }
+ if (VT.isScalableVT())
+ return DAG.getNode(Op.getOpcode(), DL, VT, PtrA, PtrB, Op.getOperand(2));
- if (VT.isScalableVT())
- return DAG.getNode(Op.getOpcode(), DL, VT, PtrA, PtrB, Op.getOperand(2));
-
- // We can use the SVE whilewr/whilerw instruction to lower this
- // intrinsic by creating the appropriate sequence of scalable vector
- // operations and then extracting a fixed-width subvector from the scalable
- // vector. Scalable vector variants are already legal.
- EVT ContainerVT =
- EVT::getVectorVT(*DAG.getContext(), VT.getVectorElementType(),
- VT.getVectorNumElements(), true);
- EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
-
- SDValue Mask =
- DAG.getNode(Op.getOpcode(), DL, WhileVT, PtrA, PtrB, Op.getOperand(2));
- SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, DL, ContainerVT, Mask);
- return DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, MaskAsInt,
- DAG.getVectorIdxConstant(0, DL));
- };
+ // We can use the SVE whilewr/whilerw instruction to lower this
+ // intrinsic by creating the appropriate sequence of scalable vector
+ // operations and then extracting a fixed-width subvector from the scalable
+ // vector. Scalable vector variants are already legal.
+ EVT ContainerVT =
+ EVT::getVectorVT(*DAG.getContext(), VT.getVectorElementType(),
+ VT.getVectorNumElements(), true);
+ EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
- if (NumSplits == 0)
- return LowerToWhile(FullVT, 0);
-
- SDValue FullVec = DAG.getUNDEF(FullVT);
-
- unsigned NumElementsPerSplit = NumElements / (2 * NumSplits);
- EVT PartVT =
- EVT::getVectorVT(*DAG.getContext(), FullVT.getVectorElementType(),
- NumElementsPerSplit, FullVT.isScalableVT());
- for (unsigned Split = 0, InsertIdx = 0; Split < NumSplits;
- Split++, InsertIdx += 2) {
- SDValue Low = LowerToWhile(PartVT, InsertIdx);
- SDValue High = LowerToWhile(PartVT, InsertIdx + 1);
- unsigned InsertIdxLow = InsertIdx * NumElementsPerSplit;
- unsigned InsertIdxHigh = (InsertIdx + 1) * NumElementsPerSplit;
- SDValue Insert =
- DAG.getNode(ISD::INSERT_SUBVECTOR, DL, FullVT, FullVec, Low,
- DAG.getVectorIdxConstant(InsertIdxLow, DL));
- FullVec = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, FullVT, Insert, High,
- DAG.getVectorIdxConstant(InsertIdxHigh, DL));
- }
- return FullVec;
+ SDValue Mask =
+ DAG.getNode(Op.getOpcode(), DL, WhileVT, PtrA, PtrB, Op.getOperand(2));
+ SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, DL, ContainerVT, Mask);
+ return DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, MaskAsInt,
+ DAG.getVectorIdxConstant(0, DL));
}
SDValue AArch64TargetLowering::LowerBITCAST(SDValue Op,
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index 0de9db10657b8..a615a669fe117 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -588,8 +588,8 @@ entry:
ret <vscale x 64 x i1> %0
}
-define <vscale x 16 x i1> @whilewr_16_split(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_16_split:
+define <vscale x 16 x i1> @whilewr_16_expand(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_16_expand:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE2-NEXT: addvl sp, sp, #-1
@@ -645,7 +645,7 @@ define <vscale x 16 x i1> @whilewr_16_split(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
;
-; CHECK-SVE-LABEL: whilewr_16_split:
+; CHECK-SVE-LABEL: whilewr_16_expand:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE-NEXT: addvl sp, sp, #-1
@@ -705,8 +705,8 @@ entry:
ret <vscale x 16 x i1> %0
}
-define <vscale x 32 x i1> @whilewr_16_split2(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_16_split2:
+define <vscale x 32 x i1> @whilewr_16_expand2(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_16_expand2:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE2-NEXT: addvl sp, sp, #-1
@@ -794,7 +794,7 @@ define <vscale x 32 x i1> @whilewr_16_split2(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
;
-; CHECK-SVE-LABEL: whilewr_16_split2:
+; CHECK-SVE-LABEL: whilewr_16_expand2:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE-NEXT: addvl sp, sp, #-1
@@ -887,8 +887,8 @@ entry:
ret <vscale x 32 x i1> %0
}
-define <vscale x 8 x i1> @whilewr_32_split(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_32_split:
+define <vscale x 8 x i1> @whilewr_32_expand(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_32_expand:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: index z0.d, #0, #1
; CHECK-SVE2-NEXT: sub x8, x1, x0
@@ -918,7 +918,7 @@ define <vscale x 8 x i1> @whilewr_32_split(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE2-NEXT: ret
;
-; CHECK-SVE-LABEL: whilewr_32_split:
+; CHECK-SVE-LABEL: whilewr_32_expand:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
@@ -952,8 +952,8 @@ entry:
ret <vscale x 8 x i1> %0
}
-define <vscale x 16 x i1> @whilewr_32_split2(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_32_split2:
+define <vscale x 16 x i1> @whilewr_32_expand2(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_32_expand2:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE2-NEXT: addvl sp, sp, #-1
@@ -1011,7 +1011,7 @@ define <vscale x 16 x i1> @whilewr_32_split2(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
;
-; CHECK-SVE-LABEL: whilewr_32_split2:
+; CHECK-SVE-LABEL: whilewr_32_expand2:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE-NEXT: addvl sp, sp, #-1
@@ -1073,8 +1073,8 @@ entry:
ret <vscale x 16 x i1> %0
}
-define <vscale x 32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_32_split3:
+define <vscale x 32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_32_expand3:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE2-NEXT: addvl sp, sp, #-1
@@ -1168,7 +1168,7 @@ define <vscale x 32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
;
-; CHECK-SVE-LABEL: whilewr_32_split3:
+; CHECK-SVE-LABEL: whilewr_32_expand3:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE-NEXT: addvl sp, sp, #-1
@@ -1267,8 +1267,8 @@ entry:
ret <vscale x 32 x i1> %0
}
-define <vscale x 4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_64_split:
+define <vscale x 4 x i1> @whilewr_64_expand(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_64_expand:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: index z0.d, #0, #1
; CHECK-SVE2-NEXT: sub x8, x1, x0
@@ -1290,7 +1290,7 @@ define <vscale x 4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE2-NEXT: ret
;
-; CHECK-SVE-LABEL: whilewr_64_split:
+; CHECK-SVE-LABEL: whilewr_64_expand:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
@@ -1316,8 +1316,8 @@ entry:
ret <vscale x 4 x i1> %0
}
-define <vscale x 8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_64_split2:
+define <vscale x 8 x i1> @whilewr_64_expand2(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_64_expand2:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: index z0.d, #0, #1
; CHECK-SVE2-NEXT: sub x8, x1, x0
@@ -1347,7 +1347,7 @@ define <vscale x 8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE2-NEXT: ret
;
-; CHECK-SVE-LABEL: whilewr_64_split2:
+; CHECK-SVE-LABEL: whilewr_64_expand2:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
@@ -1381,8 +1381,8 @@ entry:
ret <vscale x 8 x i1> %0
}
-define <vscale x 16 x i1> @whilewr_64_split3(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_64_split3:
+define <vscale x 16 x i1> @whilewr_64_expand3(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_64_expand3:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE2-NEXT: addvl sp, sp, #-1
@@ -1440,7 +1440,7 @@ define <vscale x 16 x i1> @whilewr_64_split3(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
;
-; CHECK-SVE-LABEL: whilewr_64_split3:
+; CHECK-SVE-LABEL: whilewr_64_expand3:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE-NEXT: addvl sp, sp, #-1
@@ -1502,8 +1502,8 @@ entry:
ret <vscale x 16 x i1> %0
}
-define <vscale x 32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_64_split4:
+define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
+; CHECK-SVE2-LABEL: whilewr_64_expand4:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE2-NEXT: addvl sp, sp, #-1
@@ -1597,7 +1597,7 @@ define <vscale x 32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE2-NEXT: ret
;
-; CHECK-SVE-LABEL: whilewr_64_split4:
+; CHECK-SVE-LABEL: whilewr_64_expand4:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-SVE-NEXT: addvl sp, sp, #-1
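(Editorial note on PATCH 33: after this change LowerLOOP_DEPENDENCE_MASK only handles the single legal type per element size and returns SDValue() for everything else, so generic type legalization performs the expansion instead of the hand-rolled splitting loop. For the fixed-width types it does keep, the retained tail of the function is conceptually the following IR; this is an illustrative rendering of the SIGN_EXTEND plus EXTRACT_SUBVECTOR sequence, not code from the patch:

  %m    = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 1)
  ; widen the scalable predicate to an integer vector, then take the
  ; fixed-width low subvector as the final result
  %wide = sext <vscale x 16 x i1> %m to <vscale x 16 x i8>
  %res  = call <16 x i8> @llvm.vector.extract.v16i8.nxv16i8(<vscale x 16 x i8> %wide, i64 0)

)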
>From fddda1493a928c22db6dc37b9cb450cb13d01c01 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Tue, 12 Aug 2025 14:12:56 +0100
Subject: [PATCH 34/43] Improve wording in lang ref
---
llvm/docs/LangRef.rst | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 8107e4d2fa1dc..06c06cc7e5333 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24125,9 +24125,9 @@ This is an overloaded intrinsic.
Overview:
"""""""""
-Given a scalar load from %ptrA, followed by a scalar store to %ptrB, this
-instruction generates a mask where an active lane indicates that there is no
-write-after-read hazard for this lane.
+Given a vector load from %ptrA, followed by a vector store to %ptrB, this
+intrinsic generates a mask where a true lane indicates that the accesses don't
+overlap for that lane.
A write-after-read hazard occurs when a write-after-read sequence for a given
lane in a vector ends up being executed as a read-after-write sequence due to
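(Editorial note: to make the reworded overview concrete, here is a minimal usage sketch in IR using the standard masked load/store intrinsics; the element type and value names are assumed for illustration and are not part of the patch:

  %mask = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 1)
  ; vector load from %ptrA first...
  %v = call <vscale x 16 x i8> @llvm.masked.load.nxv16i8.p0(ptr %ptrA, i32 1, <vscale x 16 x i1> %mask, <vscale x 16 x i8> poison)
  ; ...then vector store to %ptrB: true lanes are exactly those whose
  ; load and store bytes cannot overlap within this iteration
  call void @llvm.masked.store.nxv16i8.p0(<vscale x 16 x i8> %v, ptr %ptrB, i32 1, <vscale x 16 x i1> %mask)

)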
>From 36be558ad2a56549a04481b45287a9462892a014 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Tue, 12 Aug 2025 18:30:53 +0100
Subject: [PATCH 35/43] Rebase
---
llvm/test/CodeGen/AArch64/alias_mask.ll | 105 +++++------
.../CodeGen/AArch64/alias_mask_scalable.ll | 167 ++++++++----------
2 files changed, 117 insertions(+), 155 deletions(-)
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index b1491c41135fa..06cc90904ef44 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -105,12 +105,11 @@ define <4 x i1> @whilewr_32(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_32:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
; CHECK-NOSVE-NEXT: adrp x8, .LCPI2_0
; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: cmp x9, #0
; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI2_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
; CHECK-NOSVE-NEXT: adrp x10, .LCPI2_1
; CHECK-NOSVE-NEXT: asr x9, x9, #2
; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI2_1]
@@ -139,12 +138,11 @@ define <2 x i1> @whilewr_64(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_64:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
; CHECK-NOSVE-NEXT: adrp x8, .LCPI3_0
; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: cmp x9, #0
; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI3_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
; CHECK-NOSVE-NEXT: asr x9, x9, #3
; CHECK-NOSVE-NEXT: dup v0.2d, x9
; CHECK-NOSVE-NEXT: cmp x9, #1
@@ -270,7 +268,7 @@ define <4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI6_0]
; CHECK-NOSVE-NEXT: add x10, x9, #3
; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
; CHECK-NOSVE-NEXT: adrp x10, .LCPI6_1
; CHECK-NOSVE-NEXT: asr x9, x9, #2
; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI6_1]
@@ -305,7 +303,7 @@ define <2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI7_0]
; CHECK-NOSVE-NEXT: add x10, x9, #7
; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
; CHECK-NOSVE-NEXT: asr x9, x9, #3
; CHECK-NOSVE-NEXT: dup v0.2d, x9
; CHECK-NOSVE-NEXT: cmp x9, #0
@@ -712,10 +710,9 @@ define <8 x i1> @whilewr_32_split(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32_split:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #2
; CHECK-SVE-NEXT: mov z2.d, z0.d
; CHECK-SVE-NEXT: mov z3.d, z0.d
@@ -740,15 +737,14 @@ define <8 x i1> @whilewr_32_split(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_32_split:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: subs x8, x1, x0
; CHECK-NOSVE-NEXT: adrp x10, .LCPI12_1
; CHECK-NOSVE-NEXT: adrp x11, .LCPI12_2
; CHECK-NOSVE-NEXT: add x9, x8, #3
-; CHECK-NOSVE-NEXT: cmp x8, #0
; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI12_1]
-; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI12_0
; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI12_2]
+; CHECK-NOSVE-NEXT: csel x8, x9, x8, mi
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI12_0
; CHECK-NOSVE-NEXT: asr x8, x8, #2
; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI12_0]
; CHECK-NOSVE-NEXT: adrp x9, .LCPI12_3
@@ -776,10 +772,9 @@ define <16 x i1> @whilewr_32_split2(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32_split2:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #2
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
@@ -819,16 +814,15 @@ define <16 x i1> @whilewr_32_split2(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_32_split2:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_0
; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: cmp x9, #0
; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI13_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_2
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI13_2]
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_4
; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI13_1]
; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_3
@@ -870,10 +864,9 @@ define <32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32_split3:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: subs x9, x1, x0
; CHECK-SVE-NEXT: add x10, x9, #3
-; CHECK-SVE-NEXT: cmp x9, #0
-; CHECK-SVE-NEXT: csel x9, x10, x9, lt
+; CHECK-SVE-NEXT: csel x9, x10, x9, mi
; CHECK-SVE-NEXT: asr x9, x9, #2
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
@@ -923,18 +916,17 @@ define <32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_32_split3:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_0
; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: cmp x9, #0
; CHECK-NOSVE-NEXT: ldr q0, [x11, :lo12:.LCPI14_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_1
; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_2
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_1
+; CHECK-NOSVE-NEXT: ldr q2, [x11, :lo12:.LCPI14_2]
; CHECK-NOSVE-NEXT: asr x9, x9, #2
; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI14_1]
; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_3
-; CHECK-NOSVE-NEXT: ldr q2, [x11, :lo12:.LCPI14_2]
; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_4
; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI14_3]
; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_5
@@ -984,10 +976,9 @@ define <4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_split:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #3
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: dup v2.2d, x8
@@ -1004,12 +995,11 @@ define <4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_64_split:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
; CHECK-NOSVE-NEXT: adrp x8, .LCPI15_0
; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: cmp x9, #0
; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI15_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
; CHECK-NOSVE-NEXT: adrp x10, .LCPI15_1
; CHECK-NOSVE-NEXT: asr x9, x9, #3
; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI15_1]
@@ -1032,10 +1022,9 @@ define <8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_split2:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #3
; CHECK-SVE-NEXT: mov z2.d, z0.d
; CHECK-SVE-NEXT: mov z3.d, z0.d
@@ -1060,15 +1049,14 @@ define <8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_64_split2:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: subs x8, x1, x0
; CHECK-NOSVE-NEXT: adrp x10, .LCPI16_1
; CHECK-NOSVE-NEXT: adrp x11, .LCPI16_2
; CHECK-NOSVE-NEXT: add x9, x8, #7
-; CHECK-NOSVE-NEXT: cmp x8, #0
; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI16_1]
-; CHECK-NOSVE-NEXT: csel x8, x9, x8, lt
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_0
; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI16_2]
+; CHECK-NOSVE-NEXT: csel x8, x9, x8, mi
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_0
; CHECK-NOSVE-NEXT: asr x8, x8, #3
; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI16_0]
; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_3
@@ -1096,10 +1084,9 @@ define <16 x i1> @whilewr_64_split3(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_split3:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #3
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
@@ -1139,16 +1126,15 @@ define <16 x i1> @whilewr_64_split3(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_64_split3:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_0
; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: cmp x9, #0
; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI17_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_2
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #3
; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI17_2]
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_4
; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI17_1]
; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_3
@@ -1190,10 +1176,9 @@ define <32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_split4:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: subs x9, x1, x0
; CHECK-SVE-NEXT: add x10, x9, #7
-; CHECK-SVE-NEXT: cmp x9, #0
-; CHECK-SVE-NEXT: csel x9, x10, x9, lt
+; CHECK-SVE-NEXT: csel x9, x10, x9, mi
; CHECK-SVE-NEXT: asr x9, x9, #3
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
@@ -1243,18 +1228,17 @@ define <32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_64_split4:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_0
; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: cmp x9, #0
; CHECK-NOSVE-NEXT: ldr q0, [x11, :lo12:.LCPI18_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_1
; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_2
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_1
+; CHECK-NOSVE-NEXT: ldr q2, [x11, :lo12:.LCPI18_2]
; CHECK-NOSVE-NEXT: asr x9, x9, #3
; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI18_1]
; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_3
-; CHECK-NOSVE-NEXT: ldr q2, [x11, :lo12:.LCPI18_2]
; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_4
; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI18_3]
; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_5
@@ -1451,12 +1435,11 @@ define <3 x i1> @whilewr_32_widen(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_32_widen:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
; CHECK-NOSVE-NEXT: adrp x8, .LCPI21_0
; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: cmp x9, #0
; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI21_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
; CHECK-NOSVE-NEXT: adrp x10, .LCPI21_1
; CHECK-NOSVE-NEXT: asr x9, x9, #2
; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI21_1]
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index a615a669fe117..de48739251f4c 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -16,7 +16,7 @@ define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
@@ -111,11 +111,10 @@ define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #2
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, x8
@@ -142,12 +141,11 @@ define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
;
; CHECK-SVE-LABEL: whilewr_64:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #3
; CHECK-SVE-NEXT: mov z1.d, x8
; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
@@ -176,7 +174,7 @@ define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x8, x1, x0
@@ -280,7 +278,7 @@ define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: cneg x8, x8, mi
; CHECK-SVE-NEXT: add x9, x8, #3
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: asr x8, x8, #2
; CHECK-SVE-NEXT: mov z2.d, x8
@@ -313,7 +311,7 @@ define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: cneg x8, x8, mi
; CHECK-SVE-NEXT: add x9, x8, #7
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #3
; CHECK-SVE-NEXT: mov z1.d, x8
; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
@@ -347,7 +345,7 @@ define <vscale x 32 x i1> @whilewr_8_split(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: rdvl x8, #1
@@ -456,7 +454,7 @@ define <vscale x 64 x i1> @whilewr_8_split2(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #13, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #14, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #15, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: rdvl x8, #1
@@ -597,7 +595,7 @@ define <vscale x 16 x i1> @whilewr_16_expand(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE2-NEXT: .cfi_offset w29, -16
; CHECK-SVE2-NEXT: index z0.d, #0, #1
; CHECK-SVE2-NEXT: sub x8, x1, x0
@@ -653,7 +651,7 @@ define <vscale x 16 x i1> @whilewr_16_expand(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
@@ -716,7 +714,7 @@ define <vscale x 32 x i1> @whilewr_16_expand2(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE2-NEXT: .cfi_offset w29, -16
; CHECK-SVE2-NEXT: index z0.d, #0, #1
; CHECK-SVE2-NEXT: sub x8, x1, x0
@@ -804,7 +802,7 @@ define <vscale x 32 x i1> @whilewr_16_expand2(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: rdvl x8, #2
@@ -891,11 +889,10 @@ define <vscale x 8 x i1> @whilewr_32_expand(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_32_expand:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: subs x8, x1, x0
; CHECK-SVE2-NEXT: ptrue p0.d
; CHECK-SVE2-NEXT: add x9, x8, #3
-; CHECK-SVE2-NEXT: cmp x8, #0
-; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
; CHECK-SVE2-NEXT: asr x8, x8, #2
; CHECK-SVE2-NEXT: mov z1.d, z0.d
; CHECK-SVE2-NEXT: mov z2.d, z0.d
@@ -921,11 +918,10 @@ define <vscale x 8 x i1> @whilewr_32_expand(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32_expand:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #2
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
@@ -961,14 +957,13 @@ define <vscale x 16 x i1> @whilewr_32_expand2(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE2-NEXT: .cfi_offset w29, -16
; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: subs x8, x1, x0
; CHECK-SVE2-NEXT: ptrue p0.d
; CHECK-SVE2-NEXT: add x9, x8, #3
-; CHECK-SVE2-NEXT: cmp x8, #0
-; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
; CHECK-SVE2-NEXT: asr x8, x8, #2
; CHECK-SVE2-NEXT: mov z1.d, z0.d
; CHECK-SVE2-NEXT: mov z4.d, z0.d
@@ -1019,14 +1014,13 @@ define <vscale x 16 x i1> @whilewr_32_expand2(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #2
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z4.d, z0.d
@@ -1085,22 +1079,20 @@ define <vscale x 32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE2-NEXT: .cfi_offset w29, -16
; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: subs x8, x1, x0
; CHECK-SVE2-NEXT: ptrue p0.d
; CHECK-SVE2-NEXT: add x9, x8, #3
-; CHECK-SVE2-NEXT: cmp x8, #0
; CHECK-SVE2-NEXT: incb x0, all, mul #4
-; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
; CHECK-SVE2-NEXT: incb x1, all, mul #4
+; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
; CHECK-SVE2-NEXT: asr x8, x8, #2
; CHECK-SVE2-NEXT: mov z1.d, z0.d
; CHECK-SVE2-NEXT: mov z2.d, z0.d
; CHECK-SVE2-NEXT: mov z4.d, z0.d
; CHECK-SVE2-NEXT: mov z5.d, x8
-; CHECK-SVE2-NEXT: sub x9, x1, x0
; CHECK-SVE2-NEXT: incd z1.d
; CHECK-SVE2-NEXT: incd z2.d, all, mul #2
; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
@@ -1129,12 +1121,12 @@ define <vscale x 32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
; CHECK-SVE2-NEXT: uzp1 p6.s, p6.s, p9.s
; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE2-NEXT: add x8, x9, #3
-; CHECK-SVE2-NEXT: cmp x9, #0
+; CHECK-SVE2-NEXT: subs x8, x1, x0
; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p6.h
-; CHECK-SVE2-NEXT: csel x8, x8, x9, lt
-; CHECK-SVE2-NEXT: asr x8, x8, #2
+; CHECK-SVE2-NEXT: add x9, x8, #3
+; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
; CHECK-SVE2-NEXT: uzp1 p2.b, p3.b, p2.b
+; CHECK-SVE2-NEXT: asr x8, x8, #2
; CHECK-SVE2-NEXT: mov z5.d, x8
; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
@@ -1179,22 +1171,20 @@ define <vscale x 32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: rdvl x8, #4
; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x0, x8
; CHECK-SVE-NEXT: add x8, x1, x8
-; CHECK-SVE-NEXT: sub x8, x8, x9
+; CHECK-SVE-NEXT: subs x8, x8, x9
; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: mov z4.d, z0.d
; CHECK-SVE-NEXT: asr x8, x8, #2
-; CHECK-SVE-NEXT: sub x9, x1, x0
; CHECK-SVE-NEXT: incd z1.d
; CHECK-SVE-NEXT: incd z2.d, all, mul #2
; CHECK-SVE-NEXT: mov z5.d, x8
@@ -1224,14 +1214,14 @@ define <vscale x 32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
; CHECK-SVE-NEXT: uzp1 p6.s, p6.s, p9.s
; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: add x8, x9, #3
-; CHECK-SVE-NEXT: cmp x9, #0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p6.h
-; CHECK-SVE-NEXT: csel x8, x8, x9, lt
-; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: uzp1 p2.b, p3.b, p2.b
-; CHECK-SVE-NEXT: mov z5.d, x8
+; CHECK-SVE-NEXT: asr x8, x8, #2
; CHECK-SVE-NEXT: mov p1.b, p2/m, p2.b
+; CHECK-SVE-NEXT: mov z5.d, x8
; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
@@ -1271,11 +1261,10 @@ define <vscale x 4 x i1> @whilewr_64_expand(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_64_expand:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: subs x8, x1, x0
; CHECK-SVE2-NEXT: ptrue p0.d
; CHECK-SVE2-NEXT: add x9, x8, #7
-; CHECK-SVE2-NEXT: cmp x8, #0
-; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
; CHECK-SVE2-NEXT: asr x8, x8, #3
; CHECK-SVE2-NEXT: mov z1.d, z0.d
; CHECK-SVE2-NEXT: mov z2.d, x8
@@ -1293,11 +1282,10 @@ define <vscale x 4 x i1> @whilewr_64_expand(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_expand:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #3
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, x8
@@ -1320,11 +1308,10 @@ define <vscale x 8 x i1> @whilewr_64_expand2(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_64_expand2:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: subs x8, x1, x0
; CHECK-SVE2-NEXT: ptrue p0.d
; CHECK-SVE2-NEXT: add x9, x8, #7
-; CHECK-SVE2-NEXT: cmp x8, #0
-; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
; CHECK-SVE2-NEXT: asr x8, x8, #3
; CHECK-SVE2-NEXT: mov z1.d, z0.d
; CHECK-SVE2-NEXT: mov z2.d, z0.d
@@ -1350,11 +1337,10 @@ define <vscale x 8 x i1> @whilewr_64_expand2(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_expand2:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #3
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
@@ -1390,14 +1376,13 @@ define <vscale x 16 x i1> @whilewr_64_expand3(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE2-NEXT: .cfi_offset w29, -16
; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: subs x8, x1, x0
; CHECK-SVE2-NEXT: ptrue p0.d
; CHECK-SVE2-NEXT: add x9, x8, #7
-; CHECK-SVE2-NEXT: cmp x8, #0
-; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
; CHECK-SVE2-NEXT: asr x8, x8, #3
; CHECK-SVE2-NEXT: mov z1.d, z0.d
; CHECK-SVE2-NEXT: mov z4.d, z0.d
@@ -1448,14 +1433,13 @@ define <vscale x 16 x i1> @whilewr_64_expand3(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #3
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z4.d, z0.d
@@ -1514,18 +1498,16 @@ define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE2-NEXT: .cfi_offset w29, -16
; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: sub x8, x1, x0
+; CHECK-SVE2-NEXT: subs x8, x1, x0
; CHECK-SVE2-NEXT: ptrue p0.d
; CHECK-SVE2-NEXT: add x9, x8, #7
-; CHECK-SVE2-NEXT: cmp x8, #0
; CHECK-SVE2-NEXT: addvl x10, x1, #8
-; CHECK-SVE2-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
; CHECK-SVE2-NEXT: addvl x9, x0, #8
; CHECK-SVE2-NEXT: asr x8, x8, #3
-; CHECK-SVE2-NEXT: sub x9, x10, x9
; CHECK-SVE2-NEXT: mov z1.d, z0.d
; CHECK-SVE2-NEXT: mov z2.d, z0.d
; CHECK-SVE2-NEXT: mov z4.d, z0.d
@@ -1558,12 +1540,12 @@ define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
; CHECK-SVE2-NEXT: uzp1 p6.s, p6.s, p9.s
; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE2-NEXT: add x8, x9, #7
-; CHECK-SVE2-NEXT: cmp x9, #0
+; CHECK-SVE2-NEXT: subs x8, x10, x9
; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p6.h
-; CHECK-SVE2-NEXT: csel x8, x8, x9, lt
-; CHECK-SVE2-NEXT: asr x8, x8, #3
+; CHECK-SVE2-NEXT: add x9, x8, #7
+; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
; CHECK-SVE2-NEXT: uzp1 p2.b, p3.b, p2.b
+; CHECK-SVE2-NEXT: asr x8, x8, #3
; CHECK-SVE2-NEXT: mov z5.d, x8
; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
@@ -1608,22 +1590,20 @@ define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: rdvl x8, #8
; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x0, x8
; CHECK-SVE-NEXT: add x8, x1, x8
-; CHECK-SVE-NEXT: sub x8, x8, x9
+; CHECK-SVE-NEXT: subs x8, x8, x9
; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: mov z4.d, z0.d
; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: sub x9, x1, x0
; CHECK-SVE-NEXT: incd z1.d
; CHECK-SVE-NEXT: incd z2.d, all, mul #2
; CHECK-SVE-NEXT: mov z5.d, x8
@@ -1653,14 +1633,14 @@ define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
; CHECK-SVE-NEXT: uzp1 p6.s, p6.s, p9.s
; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: add x8, x9, #7
-; CHECK-SVE-NEXT: cmp x9, #0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p6.h
-; CHECK-SVE-NEXT: csel x8, x8, x9, lt
-; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: uzp1 p2.b, p3.b, p2.b
-; CHECK-SVE-NEXT: mov z5.d, x8
+; CHECK-SVE-NEXT: asr x8, x8, #3
; CHECK-SVE-NEXT: mov p1.b, p2/m, p2.b
+; CHECK-SVE-NEXT: mov z5.d, x8
; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
@@ -1710,7 +1690,7 @@ define <vscale x 9 x i1> @whilewr_8_widen(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
@@ -1805,11 +1785,10 @@ define <vscale x 3 x i1> @whilewr_32_widen(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32_widen:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: asr x8, x8, #2
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, x8
@@ -1837,7 +1816,7 @@ define <vscale x 16 x i1> @whilewr_badimm(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE2-NEXT: .cfi_offset w29, -16
; CHECK-SVE2-NEXT: index z0.d, #0, #1
; CHECK-SVE2-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
@@ -1895,7 +1874,7 @@ define <vscale x 16 x i1> @whilewr_badimm(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
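A note on the recurring pattern in the checks above: "sub" + "cmp x8, #0" + "csel ..., lt" becomes "subs" + "csel ..., mi". Since subs already sets NZCV from the subtraction, the separate compare against zero is redundant, and mi (N flag set, i.e. result negative) matches lt on a compare with zero, where the overflow flag is always clear. A minimal scalar model of what both sequences compute, assuming 4-byte elements as in the whilewr_32 cases:

    #include <cassert>
    #include <cstdint>
    // Both instruction sequences compute (b - a) / 4 with signed
    // division rounding toward zero.
    int64_t lanes(int64_t a, int64_t b) {
      int64_t d = b - a;      // sub/subs x8, x1, x0
      int64_t biased = d + 3; // add  x9, x8, #3
      if (d < 0)              // old: cmp x8, #0 + csel ..., lt
        d = biased;           // new: csel ..., mi reuses the N flag from subs
      return d >> 2;          // asr x8, x8, #2 (arithmetic shift)
    }
    int main() {
      assert(lanes(0, 9) == 2);  // positive distance: plain floor(9/4)
      assert(lanes(9, 0) == -2); // negative: (-9 + 3) >> 2 rounds toward zero
    }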
>From c3d2acf8636a1cf0fe3131e45fb1b474c23b8ba5 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 13 Aug 2025 10:14:20 +0100
Subject: [PATCH 36/43] Remove backend promotion
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 16 ----------------
1 file changed, 16 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index ebbec633d2835..a8aa1f67e342d 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -28275,22 +28275,6 @@ void AArch64TargetLowering::ReplaceNodeResults(
case ISD::GET_ACTIVE_LANE_MASK:
ReplaceGetActiveLaneMaskResults(N, Results, DAG);
return;
- case ISD::LOOP_DEPENDENCE_WAR_MASK:
- case ISD::LOOP_DEPENDENCE_RAW_MASK: {
- EVT VT = N->getValueType(0);
- if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
- return;
-
- // NOTE: Only trivial type promotion is supported.
- EVT NewVT = getTypeToTransformTo(*DAG.getContext(), VT);
- if (NewVT.getVectorNumElements() != VT.getVectorNumElements())
- return;
-
- SDLoc DL(N);
- auto V = DAG.getNode(N->getOpcode(), DL, NewVT, N->ops());
- Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, VT, V));
- return;
- }
case ISD::INTRINSIC_WO_CHAIN: {
EVT VT = N->getValueType(0);
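The deleted block handled only the case where type promotion keeps the lane count and merely widens the i1 elements, re-emitting the mask node in the promoted type and truncating back. A standalone sketch of that trivial promotion, assuming a fixed-length v4i1 result that the target promotes to v4i16 (illustrative values, not actual legalizer code):

    #include <cstdint>
    #include <cstdio>
    int main() {
      // Mask as computed in the promoted type, lanes sign-extended.
      int16_t promoted[4] = {-1, 0, -1, -1};
      bool mask[4];
      for (int i = 0; i < 4; ++i)
        mask[i] = promoted[i] & 1; // the ISD::TRUNCATE step: keep the low bit
      for (bool b : mask)
        std::printf("%d", b);      // prints 1011
      std::printf("\n");
    }

With the hook removed, these fixed-length i1 results are left to the generic legalizer instead of being special-cased in the AArch64 backend.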
>From 8af501988d12be06e14584a8d7b887a05473edcf Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 13 Aug 2025 10:16:45 +0100
Subject: [PATCH 37/43] Don't create StoreVT
---
llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index d8a61beef737a..086633dc7593e 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -1618,7 +1618,6 @@ void DAGTypeLegalizer::SplitVecRes_BITCAST(SDNode *N, SDValue &Lo,
void DAGTypeLegalizer::SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo,
SDValue &Hi) {
unsigned EltSize = N->getConstantOperandVal(2);
- EVT EltVT = EVT::getIntegerVT(*DAG.getContext(), EltSize * 8);
SDLoc DL(N);
EVT LoVT, HiVT;
@@ -1627,9 +1626,7 @@ void DAGTypeLegalizer::SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo,
SDValue PtrB = N->getOperand(1);
Lo = DAG.getNode(N->getOpcode(), DL, LoVT, PtrA, PtrB, N->getOperand(2));
- EVT StoreVT = EVT::getVectorVT(*DAG.getContext(), EltVT,
- HiVT.getVectorMinNumElements(), false);
- unsigned Offset = StoreVT.getStoreSizeInBits() / 8;
+ unsigned Offset = EltSize * HiVT.getVectorMinNumElements();
SDValue Addend;
if (HiVT.isScalableVT())
Addend = DAG.getVScale(DL, MVT::i64, APInt(64, Offset));
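The old and new offset computations agree; the intermediate vector type existed only so its store size could be queried. A worked example with assumed numbers (splitting a mask over 4-byte elements where the high half covers 4 lanes):

    #include <cassert>
    int main() {
      unsigned EltSize = 4;   // N->getConstantOperandVal(2), in bytes
      unsigned HiMinElts = 4; // HiVT.getVectorMinNumElements()
      // Old path: build a vector type just to take its store size.
      unsigned EltBits = EltSize * 8;           // 32
      unsigned StoreBits = EltBits * HiMinElts; // 128
      unsigned OffsetOld = StoreBits / 8;       // 16 bytes
      // New path: the same value, computed directly.
      unsigned OffsetNew = EltSize * HiMinElts; // 16 bytes
      assert(OffsetOld == OffsetNew);
    }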
>From 558bc3e5757ca6498f3732c935486f4d19f9458f Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 13 Aug 2025 10:19:55 +0100
Subject: [PATCH 38/43] Use ternary for Addend
---
llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index 086633dc7593e..4854e96b56574 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -1627,11 +1627,9 @@ void DAGTypeLegalizer::SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo,
Lo = DAG.getNode(N->getOpcode(), DL, LoVT, PtrA, PtrB, N->getOperand(2));
unsigned Offset = EltSize * HiVT.getVectorMinNumElements();
- SDValue Addend;
- if (HiVT.isScalableVT())
- Addend = DAG.getVScale(DL, MVT::i64, APInt(64, Offset));
- else
- Addend = DAG.getConstant(Offset, DL, MVT::i64);
+ SDValue Addend = HiVT.isScalableVT()
+ ? DAG.getVScale(DL, MVT::i64, APInt(64, Offset))
+ : DAG.getConstant(Offset, DL, MVT::i64);
PtrA = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrA, Addend);
PtrB = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrB, Addend);
>From 32e01923ec83c452a70b5c6dac4c0c910c06816b Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 13 Aug 2025 10:21:21 +0100
Subject: [PATCH 39/43] Stop adding to PtrB
---
.../SelectionDAG/LegalizeVectorTypes.cpp | 4 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 840 ++++++++++++------
.../CodeGen/AArch64/alias_mask_scalable.ll | 395 ++++----
3 files changed, 753 insertions(+), 486 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index 4854e96b56574..7b2d7051731bb 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -1617,8 +1617,6 @@ void DAGTypeLegalizer::SplitVecRes_BITCAST(SDNode *N, SDValue &Lo,
void DAGTypeLegalizer::SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo,
SDValue &Hi) {
- unsigned EltSize = N->getConstantOperandVal(2);
-
SDLoc DL(N);
EVT LoVT, HiVT;
std::tie(LoVT, HiVT) = DAG.GetSplitDestVTs(N->getValueType(0));
@@ -1626,13 +1624,13 @@ void DAGTypeLegalizer::SplitVecRes_LOOP_DEPENDENCE_MASK(SDNode *N, SDValue &Lo,
SDValue PtrB = N->getOperand(1);
Lo = DAG.getNode(N->getOpcode(), DL, LoVT, PtrA, PtrB, N->getOperand(2));
+ unsigned EltSize = N->getConstantOperandVal(2);
unsigned Offset = EltSize * HiVT.getVectorMinNumElements();
SDValue Addend = HiVT.isScalableVT()
? DAG.getVScale(DL, MVT::i64, APInt(64, Offset))
: DAG.getConstant(Offset, DL, MVT::i64);
PtrA = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrA, Addend);
- PtrB = DAG.getNode(ISD::ADD, DL, MVT::i64, PtrB, Addend);
Hi = DAG.getNode(N->getOpcode(), DL, HiVT, PtrA, PtrB, N->getOperand(2));
}
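The reasoning, as far as the test changes show it: the per-lane mask computation depends on the distance between the two pointers, so advancing both PtrA and PtrB by the same offset left that distance unchanged and the high half recomputed the low half's mask (visible in the old CHECK-NOSVE output below, which stored one addv result to every output half). Advancing PtrA alone re-bases the high lanes. A small model of the pointer arithmetic, with made-up addresses:

    #include <cassert>
    #include <cstdint>
    int main() {
      int64_t a = 0x1000, b = 0x1020, off = 16; // off = Lo half's byte span
      // Old lowering: both pointers advanced, distance unchanged, so
      // the Hi half duplicated the Lo half.
      assert((b + off) - (a + off) == b - a);
      // New lowering: only PtrA advances, shrinking the distance seen
      // by the Hi lanes by the Lo half's span.
      assert(b - (a + off) == (b - a) - off);
    }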
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 06cc90904ef44..1eb54e5b50627 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -321,10 +321,9 @@ entry:
define <32 x i1> @whilewr_8_split(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_8_split:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: add x9, x1, #16
-; CHECK-SVE-NEXT: add x10, x0, #16
+; CHECK-SVE-NEXT: add x9, x0, #16
; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE-NEXT: whilewr p1.b, x10, x9
+; CHECK-SVE-NEXT: whilewr p1.b, x9, x1
; CHECK-SVE-NEXT: adrp x9, .LCPI8_0
; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI8_0]
@@ -347,53 +346,80 @@ define <32 x i1> @whilewr_8_split(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_8_split:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_1
-; CHECK-NOSVE-NEXT: sub x11, x1, x0
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI8_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_2
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI8_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x9, :lo12:.LCPI8_2]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_4
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_3
-; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI8_4]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_5
-; CHECK-NOSVE-NEXT: dup v2.2d, x11
-; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI8_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_6
-; CHECK-NOSVE-NEXT: ldr q6, [x9, :lo12:.LCPI8_5]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_7
-; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI8_6]
-; CHECK-NOSVE-NEXT: cmp x11, #1
-; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI8_7]
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_1
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_0
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: ldr q1, [x11, :lo12:.LCPI8_1]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_3
+; CHECK-NOSVE-NEXT: ldr q0, [x10, :lo12:.LCPI8_0]
+; CHECK-NOSVE-NEXT: ldr q4, [x11, :lo12:.LCPI8_3]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_4
+; CHECK-NOSVE-NEXT: sub x10, x9, #16
+; CHECK-NOSVE-NEXT: ldr q5, [x11, :lo12:.LCPI8_4]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_5
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI8_2
+; CHECK-NOSVE-NEXT: ldr q19, [x11, :lo12:.LCPI8_5]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_6
+; CHECK-NOSVE-NEXT: dup v2.2d, x10
+; CHECK-NOSVE-NEXT: dup v6.2d, x9
+; CHECK-NOSVE-NEXT: ldr q21, [x11, :lo12:.LCPI8_6]
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_7
+; CHECK-NOSVE-NEXT: ldr q3, [x12, :lo12:.LCPI8_2]
+; CHECK-NOSVE-NEXT: ldr q22, [x11, :lo12:.LCPI8_7]
+; CHECK-NOSVE-NEXT: cmp x10, #1
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v16.2d, v2.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v18.2d, v2.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v17.2d, v2.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v20.2d, v2.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v6.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v6.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v6.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v6.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v6.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v23.2d, v6.2d, v19.2d
+; CHECK-NOSVE-NEXT: cmhi v24.2d, v6.2d, v21.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v6.2d, v22.2d
+; CHECK-NOSVE-NEXT: cmhi v19.2d, v2.2d, v19.2d
+; CHECK-NOSVE-NEXT: cmhi v21.2d, v2.2d, v21.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v22.2d
+; CHECK-NOSVE-NEXT: uzp1 v7.4s, v16.4s, v7.4s
; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: cset w9, lt
; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v23.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v5.4s, v18.4s, v17.4s
+; CHECK-NOSVE-NEXT: uzp1 v4.4s, v6.4s, v24.4s
+; CHECK-NOSVE-NEXT: uzp1 v6.4s, v19.4s, v20.4s
+; CHECK-NOSVE-NEXT: cset w10, lt
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v21.4s
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w9, lt
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v4.8h, v3.8h
+; CHECK-NOSVE-NEXT: uzp1 v3.8h, v5.8h, v7.8h
+; CHECK-NOSVE-NEXT: uzp1 v2.8h, v2.8h, v6.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: uzp1 v1.16b, v2.16b, v3.16b
+; CHECK-NOSVE-NEXT: dup v2.16b, w9
+; CHECK-NOSVE-NEXT: dup v3.16b, w10
; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI8_8]
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v2.16b
+; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI8_8]
+; CHECK-NOSVE-NEXT: orr v1.16b, v1.16b, v3.16b
; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: shl v1.16b, v1.16b, #7
; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-NOSVE-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-NOSVE-NEXT: ext v2.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: ext v3.16b, v1.16b, v1.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v2.16b
+; CHECK-NOSVE-NEXT: zip1 v1.16b, v1.16b, v3.16b
; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: addv h1, v1.8h
; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: str h1, [x8, #2]
; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 1)
@@ -403,31 +429,28 @@ entry:
define <64 x i1> @whilewr_8_split2(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_8_split2:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: add x9, x1, #48
-; CHECK-SVE-NEXT: add x10, x0, #48
+; CHECK-SVE-NEXT: add x9, x0, #48
; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE-NEXT: whilewr p1.b, x10, x9
-; CHECK-SVE-NEXT: add x9, x1, #16
-; CHECK-SVE-NEXT: add x10, x1, #32
-; CHECK-SVE-NEXT: add x11, x0, #32
-; CHECK-SVE-NEXT: add x12, x0, #16
+; CHECK-SVE-NEXT: add x10, x0, #16
+; CHECK-SVE-NEXT: whilewr p1.b, x9, x1
+; CHECK-SVE-NEXT: add x9, x0, #32
; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p0.b, x11, x10
-; CHECK-SVE-NEXT: mov z1.b, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p1.b, x12, x9
+; CHECK-SVE-NEXT: whilewr p0.b, x9, x1
; CHECK-SVE-NEXT: adrp x9, .LCPI9_0
-; CHECK-SVE-NEXT: mov z2.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-SVE-NEXT: mov z1.b, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: whilewr p1.b, x10, x1
; CHECK-SVE-NEXT: ldr q4, [x9, :lo12:.LCPI9_0]
+; CHECK-SVE-NEXT: mov z2.b, p0/z, #-1 // =0xffffffffffffffff
; CHECK-SVE-NEXT: mov z3.b, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
; CHECK-SVE-NEXT: shl v2.16b, v2.16b, #7
-; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-SVE-NEXT: shl v3.16b, v3.16b, #7
+; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
; CHECK-SVE-NEXT: cmlt v2.16b, v2.16b, #0
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v4.16b
; CHECK-SVE-NEXT: cmlt v3.16b, v3.16b, #0
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v4.16b
; CHECK-SVE-NEXT: and v1.16b, v1.16b, v4.16b
; CHECK-SVE-NEXT: and v2.16b, v2.16b, v4.16b
; CHECK-SVE-NEXT: and v3.16b, v3.16b, v4.16b
@@ -451,55 +474,145 @@ define <64 x i1> @whilewr_8_split2(ptr %a, ptr %b) {
;
; CHECK-NOSVE-LABEL: whilewr_8_split2:
; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_1
-; CHECK-NOSVE-NEXT: sub x11, x1, x0
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI9_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_2
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI9_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x9, :lo12:.LCPI9_2]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_4
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_3
-; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI9_4]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_5
-; CHECK-NOSVE-NEXT: dup v2.2d, x11
-; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI9_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI9_6
-; CHECK-NOSVE-NEXT: ldr q6, [x9, :lo12:.LCPI9_5]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_7
-; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI9_6]
-; CHECK-NOSVE-NEXT: cmp x11, #1
-; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI9_7]
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
+; CHECK-NOSVE-NEXT: stp d11, d10, [sp, #-32]! // 16-byte Folded Spill
+; CHECK-NOSVE-NEXT: stp d9, d8, [sp, #16] // 16-byte Folded Spill
+; CHECK-NOSVE-NEXT: .cfi_def_cfa_offset 32
+; CHECK-NOSVE-NEXT: .cfi_offset b8, -8
+; CHECK-NOSVE-NEXT: .cfi_offset b9, -16
+; CHECK-NOSVE-NEXT: .cfi_offset b10, -24
+; CHECK-NOSVE-NEXT: .cfi_offset b11, -32
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI9_0
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI9_1
+; CHECK-NOSVE-NEXT: sub x10, x9, #16
+; CHECK-NOSVE-NEXT: adrp x13, .LCPI9_2
+; CHECK-NOSVE-NEXT: ldr q0, [x11, :lo12:.LCPI9_0]
+; CHECK-NOSVE-NEXT: dup v5.2d, x10
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI9_3
+; CHECK-NOSVE-NEXT: ldr q1, [x12, :lo12:.LCPI9_1]
+; CHECK-NOSVE-NEXT: ldr q2, [x13, :lo12:.LCPI9_2]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI9_3]
+; CHECK-NOSVE-NEXT: sub x11, x9, #32
+; CHECK-NOSVE-NEXT: dup v4.2d, x11
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI9_4
+; CHECK-NOSVE-NEXT: adrp x13, .LCPI9_5
+; CHECK-NOSVE-NEXT: cmhi v18.2d, v5.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v19.2d, v5.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v20.2d, v5.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v21.2d, v5.2d, v3.2d
+; CHECK-NOSVE-NEXT: ldr q17, [x12, :lo12:.LCPI9_4]
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI9_6
+; CHECK-NOSVE-NEXT: ldr q7, [x12, :lo12:.LCPI9_6]
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI9_7
+; CHECK-NOSVE-NEXT: ldr q6, [x13, :lo12:.LCPI9_5]
+; CHECK-NOSVE-NEXT: uzp1 v18.4s, v19.4s, v18.4s
+; CHECK-NOSVE-NEXT: ldr q16, [x12, :lo12:.LCPI9_7]
+; CHECK-NOSVE-NEXT: sub x12, x9, #48
+; CHECK-NOSVE-NEXT: uzp1 v19.4s, v21.4s, v20.4s
+; CHECK-NOSVE-NEXT: cmhi v20.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v21.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: dup v22.2d, x12
+; CHECK-NOSVE-NEXT: cmhi v23.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v24.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v25.2d, v5.2d, v17.2d
+; CHECK-NOSVE-NEXT: cmhi v26.2d, v5.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v27.2d, v5.2d, v7.2d
+; CHECK-NOSVE-NEXT: uzp1 v20.4s, v21.4s, v20.4s
+; CHECK-NOSVE-NEXT: dup v21.2d, x9
+; CHECK-NOSVE-NEXT: cmhi v28.2d, v5.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v5.8h, v19.8h, v18.8h
+; CHECK-NOSVE-NEXT: cmhi v19.2d, v4.2d, v17.2d
+; CHECK-NOSVE-NEXT: cmhi v29.2d, v22.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v30.2d, v22.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v31.2d, v22.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v8.2d, v22.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v9.2d, v22.2d, v17.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v21.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v21.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v21.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v21.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v17.2d, v21.2d, v17.2d
+; CHECK-NOSVE-NEXT: cmhi v10.2d, v21.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v11.2d, v21.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v21.2d, v21.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v18.4s, v24.4s, v23.4s
+; CHECK-NOSVE-NEXT: cmhi v23.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v24.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v22.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v22.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v22.2d, v22.2d, v16.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
+; CHECK-NOSVE-NEXT: cmp x10, #1
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v10.4s, v17.4s
+; CHECK-NOSVE-NEXT: cset w10, lt
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v21.4s, v11.4s
+; CHECK-NOSVE-NEXT: uzp1 v16.4s, v30.4s, v29.4s
+; CHECK-NOSVE-NEXT: cmp x11, #1
+; CHECK-NOSVE-NEXT: uzp1 v17.4s, v8.4s, v31.4s
+; CHECK-NOSVE-NEXT: uzp1 v6.4s, v6.4s, v9.4s
+; CHECK-NOSVE-NEXT: cset w11, lt
+; CHECK-NOSVE-NEXT: uzp1 v7.4s, v22.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v21.4s, v26.4s, v25.4s
+; CHECK-NOSVE-NEXT: cmp x12, #1
+; CHECK-NOSVE-NEXT: uzp1 v19.4s, v23.4s, v19.4s
+; CHECK-NOSVE-NEXT: uzp1 v4.4s, v4.4s, v24.4s
+; CHECK-NOSVE-NEXT: cset w12, lt
+; CHECK-NOSVE-NEXT: uzp1 v22.4s, v28.4s, v27.4s
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v18.8h, v18.8h, v20.8h
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: uzp1 v2.8h, v17.8h, v16.8h
+; CHECK-NOSVE-NEXT: uzp1 v3.8h, v7.8h, v6.8h
+; CHECK-NOSVE-NEXT: dup v7.16b, w10
+; CHECK-NOSVE-NEXT: uzp1 v4.8h, v4.8h, v19.8h
+; CHECK-NOSVE-NEXT: uzp1 v6.8h, v22.8h, v21.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
; CHECK-NOSVE-NEXT: dup v1.16b, w9
; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_8
+; CHECK-NOSVE-NEXT: uzp1 v2.16b, v3.16b, v2.16b
+; CHECK-NOSVE-NEXT: uzp1 v3.16b, v4.16b, v18.16b
+; CHECK-NOSVE-NEXT: dup v4.16b, w12
+; CHECK-NOSVE-NEXT: uzp1 v5.16b, v6.16b, v5.16b
+; CHECK-NOSVE-NEXT: dup v6.16b, w11
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI9_8]
+; CHECK-NOSVE-NEXT: orr v1.16b, v2.16b, v4.16b
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI9_8]
+; CHECK-NOSVE-NEXT: orr v2.16b, v3.16b, v6.16b
+; CHECK-NOSVE-NEXT: orr v3.16b, v5.16b, v7.16b
; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: shl v1.16b, v1.16b, #7
+; CHECK-NOSVE-NEXT: shl v2.16b, v2.16b, #7
; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: shl v3.16b, v3.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-NOSVE-NEXT: cmlt v2.16b, v2.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v4.16b
+; CHECK-NOSVE-NEXT: cmlt v3.16b, v3.16b, #0
+; CHECK-NOSVE-NEXT: and v1.16b, v1.16b, v4.16b
+; CHECK-NOSVE-NEXT: and v2.16b, v2.16b, v4.16b
+; CHECK-NOSVE-NEXT: ext v5.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: and v3.16b, v3.16b, v4.16b
+; CHECK-NOSVE-NEXT: ext v4.16b, v1.16b, v1.16b, #8
+; CHECK-NOSVE-NEXT: ext v6.16b, v2.16b, v2.16b, #8
+; CHECK-NOSVE-NEXT: ext v7.16b, v3.16b, v3.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v5.16b
+; CHECK-NOSVE-NEXT: zip1 v1.16b, v1.16b, v4.16b
+; CHECK-NOSVE-NEXT: zip1 v2.16b, v2.16b, v6.16b
+; CHECK-NOSVE-NEXT: zip1 v3.16b, v3.16b, v7.16b
; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: str h0, [x8, #6]
-; CHECK-NOSVE-NEXT: str h0, [x8, #4]
-; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: addv h1, v1.8h
+; CHECK-NOSVE-NEXT: addv h2, v2.8h
; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: addv h0, v3.8h
+; CHECK-NOSVE-NEXT: str h1, [x8, #6]
+; CHECK-NOSVE-NEXT: str h2, [x8, #4]
+; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: ldp d9, d8, [sp, #16] // 16-byte Folded Reload
+; CHECK-NOSVE-NEXT: ldp d11, d10, [sp], #32 // 16-byte Folded Reload
; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <64 x i1> @llvm.loop.dependence.war.mask.v64i1(ptr %a, ptr %b, i64 1)
@@ -599,107 +712,164 @@ entry:
define <32 x i1> @whilewr_16_split2(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_16_split2:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: sub x10, x9, #32
; CHECK-SVE-NEXT: add x9, x9, x9, lsr #63
+; CHECK-SVE-NEXT: add x10, x10, x10, lsr #63
; CHECK-SVE-NEXT: asr x9, x9, #1
+; CHECK-SVE-NEXT: asr x10, x10, #1
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
; CHECK-SVE-NEXT: mov z3.d, z0.d
; CHECK-SVE-NEXT: mov z4.d, z0.d
; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z7.d, z0.d
-; CHECK-SVE-NEXT: mov z16.d, z0.d
-; CHECK-SVE-NEXT: dup v6.2d, x9
-; CHECK-SVE-NEXT: cmp x9, #1
+; CHECK-SVE-NEXT: mov z6.d, z0.d
+; CHECK-SVE-NEXT: dup v7.2d, x9
+; CHECK-SVE-NEXT: dup v16.2d, x10
; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-SVE-NEXT: cmp x10, #1
; CHECK-SVE-NEXT: add z3.d, z3.d, #8 // =0x8
; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
; CHECK-SVE-NEXT: add z5.d, z5.d, #4 // =0x4
-; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
-; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
-; CHECK-SVE-NEXT: cmhi v0.2d, v6.2d, v0.2d
+; CHECK-SVE-NEXT: add z6.d, z6.d, #2 // =0x2
+; CHECK-SVE-NEXT: cmhi v17.2d, v7.2d, v0.2d
+; CHECK-SVE-NEXT: cmhi v18.2d, v16.2d, v0.2d
+; CHECK-SVE-NEXT: add z0.d, z0.d, #14 // =0xe
+; CHECK-SVE-NEXT: cmhi v19.2d, v7.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v20.2d, v7.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v21.2d, v7.2d, v3.2d
+; CHECK-SVE-NEXT: cmhi v22.2d, v7.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v23.2d, v7.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v24.2d, v7.2d, v6.2d
+; CHECK-SVE-NEXT: cmhi v1.2d, v16.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v16.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v3.2d, v16.2d, v3.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v16.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v7.2d, v7.2d, v0.2d
+; CHECK-SVE-NEXT: cmhi v5.2d, v16.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v6.2d, v16.2d, v6.2d
+; CHECK-SVE-NEXT: cset w10, lt
+; CHECK-SVE-NEXT: cmhi v0.2d, v16.2d, v0.2d
+; CHECK-SVE-NEXT: uzp1 v16.4s, v21.4s, v20.4s
+; CHECK-SVE-NEXT: cmp x9, #1
+; CHECK-SVE-NEXT: uzp1 v20.4s, v23.4s, v22.4s
+; CHECK-SVE-NEXT: uzp1 v17.4s, v17.4s, v24.4s
; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: cmhi v1.2d, v6.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v6.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v3.2d, v6.2d, v3.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v6.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v5.2d, v6.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v7.2d, v6.2d, v7.2d
-; CHECK-SVE-NEXT: cmhi v6.2d, v6.2d, v16.2d
; CHECK-SVE-NEXT: uzp1 v2.4s, v3.4s, v2.4s
-; CHECK-SVE-NEXT: uzp1 v3.4s, v5.4s, v4.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v7.4s
-; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v6.4s
-; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v3.8h
-; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
-; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: dup v1.16b, w9
+; CHECK-SVE-NEXT: uzp1 v3.4s, v19.4s, v7.4s
+; CHECK-SVE-NEXT: uzp1 v4.4s, v5.4s, v4.4s
+; CHECK-SVE-NEXT: uzp1 v5.4s, v18.4s, v6.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-SVE-NEXT: uzp1 v1.8h, v17.8h, v20.8h
+; CHECK-SVE-NEXT: uzp1 v3.8h, v16.8h, v3.8h
+; CHECK-SVE-NEXT: uzp1 v4.8h, v5.8h, v4.8h
+; CHECK-SVE-NEXT: uzp1 v0.8h, v2.8h, v0.8h
+; CHECK-SVE-NEXT: dup v2.16b, w9
; CHECK-SVE-NEXT: adrp x9, .LCPI11_0
-; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: ldr q1, [x9, :lo12:.LCPI11_0]
+; CHECK-SVE-NEXT: uzp1 v1.16b, v1.16b, v3.16b
+; CHECK-SVE-NEXT: dup v3.16b, w10
+; CHECK-SVE-NEXT: uzp1 v0.16b, v4.16b, v0.16b
+; CHECK-SVE-NEXT: orr v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI11_0]
+; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v3.16b
+; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
-; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-SVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
+; CHECK-SVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
+; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
+; CHECK-SVE-NEXT: addv h1, v1.8h
; CHECK-SVE-NEXT: addv h0, v0.8h
+; CHECK-SVE-NEXT: str h1, [x8]
; CHECK-SVE-NEXT: str h0, [x8, #2]
-; CHECK-SVE-NEXT: str h0, [x8]
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_16_split2:
; CHECK-NOSVE: // %bb.0: // %entry
; CHECK-NOSVE-NEXT: sub x9, x1, x0
; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_0
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_1
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI11_1
+; CHECK-NOSVE-NEXT: sub x11, x9, #32
; CHECK-NOSVE-NEXT: add x9, x9, x9, lsr #63
; CHECK-NOSVE-NEXT: ldr q0, [x10, :lo12:.LCPI11_0]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_2
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI11_2]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_4
-; CHECK-NOSVE-NEXT: ldr q1, [x11, :lo12:.LCPI11_1]
+; CHECK-NOSVE-NEXT: add x10, x11, x11, lsr #63
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_2
+; CHECK-NOSVE-NEXT: ldr q1, [x12, :lo12:.LCPI11_1]
; CHECK-NOSVE-NEXT: asr x9, x9, #1
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_3
-; CHECK-NOSVE-NEXT: ldr q5, [x10, :lo12:.LCPI11_4]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_6
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI11_3]
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_5
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI11_3
+; CHECK-NOSVE-NEXT: ldr q2, [x11, :lo12:.LCPI11_2]
+; CHECK-NOSVE-NEXT: asr x10, x10, #1
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_4
+; CHECK-NOSVE-NEXT: ldr q3, [x12, :lo12:.LCPI11_3]
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI11_5
; CHECK-NOSVE-NEXT: dup v4.2d, x9
-; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI11_6]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_7
-; CHECK-NOSVE-NEXT: ldr q6, [x11, :lo12:.LCPI11_5]
-; CHECK-NOSVE-NEXT: ldr q16, [x10, :lo12:.LCPI11_7]
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: ldr q5, [x11, :lo12:.LCPI11_4]
+; CHECK-NOSVE-NEXT: dup v6.2d, x10
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_6
+; CHECK-NOSVE-NEXT: ldr q7, [x12, :lo12:.LCPI11_5]
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI11_7
+; CHECK-NOSVE-NEXT: ldr q16, [x11, :lo12:.LCPI11_6]
+; CHECK-NOSVE-NEXT: cmp x10, #1
+; CHECK-NOSVE-NEXT: ldr q17, [x12, :lo12:.LCPI11_7]
+; CHECK-NOSVE-NEXT: cmhi v18.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v23.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v6.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v19.2d, v6.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v20.2d, v6.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v21.2d, v6.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v22.2d, v6.2d, v5.2d
; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: cmhi v24.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v17.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v6.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v16.2d, v6.2d, v16.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v6.2d, v17.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v19.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v1.4s, v18.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v23.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v4.4s, v4.4s, v24.4s
+; CHECK-NOSVE-NEXT: uzp1 v5.4s, v21.4s, v20.4s
+; CHECK-NOSVE-NEXT: uzp1 v7.4s, v7.4s, v22.4s
+; CHECK-NOSVE-NEXT: uzp1 v6.4s, v6.4s, v16.4s
+; CHECK-NOSVE-NEXT: cset w10, lt
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
+; CHECK-NOSVE-NEXT: uzp1 v2.8h, v4.8h, v3.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v5.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v3.8h, v6.8h, v7.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.16b, v2.16b, v1.16b
+; CHECK-NOSVE-NEXT: dup v2.16b, w9
; CHECK-NOSVE-NEXT: adrp x9, .LCPI11_8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI11_8]
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v3.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v3.16b, w10
+; CHECK-NOSVE-NEXT: orr v1.16b, v1.16b, v2.16b
+; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI11_8]
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v3.16b
+; CHECK-NOSVE-NEXT: shl v1.16b, v1.16b, #7
; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v1.16b, v1.16b, #0
; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-NOSVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
+; CHECK-NOSVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
+; CHECK-NOSVE-NEXT: addv h1, v1.8h
; CHECK-NOSVE-NEXT: addv h0, v0.8h
+; CHECK-NOSVE-NEXT: str h1, [x8]
; CHECK-NOSVE-NEXT: str h0, [x8, #2]
-; CHECK-NOSVE-NEXT: str h0, [x8]
; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 2)
@@ -863,109 +1033,168 @@ entry:
define <32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32_split3:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x9, x1, x0
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: add x10, x9, #3
-; CHECK-SVE-NEXT: csel x9, x10, x9, mi
+; CHECK-SVE-NEXT: sub x11, x9, #61
+; CHECK-SVE-NEXT: csel x10, x10, x9, mi
+; CHECK-SVE-NEXT: subs x9, x9, #64
+; CHECK-SVE-NEXT: csel x9, x11, x9, mi
+; CHECK-SVE-NEXT: asr x10, x10, #2
; CHECK-SVE-NEXT: asr x9, x9, #2
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
; CHECK-SVE-NEXT: mov z3.d, z0.d
; CHECK-SVE-NEXT: mov z4.d, z0.d
; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z7.d, z0.d
-; CHECK-SVE-NEXT: mov z16.d, z0.d
-; CHECK-SVE-NEXT: dup v6.2d, x9
-; CHECK-SVE-NEXT: cmp x9, #1
+; CHECK-SVE-NEXT: mov z6.d, z0.d
+; CHECK-SVE-NEXT: dup v7.2d, x10
+; CHECK-SVE-NEXT: dup v16.2d, x9
; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-SVE-NEXT: cmp x9, #1
; CHECK-SVE-NEXT: add z3.d, z3.d, #8 // =0x8
; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
; CHECK-SVE-NEXT: add z5.d, z5.d, #4 // =0x4
-; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
-; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
-; CHECK-SVE-NEXT: cmhi v0.2d, v6.2d, v0.2d
+; CHECK-SVE-NEXT: add z6.d, z6.d, #2 // =0x2
+; CHECK-SVE-NEXT: cmhi v17.2d, v7.2d, v0.2d
+; CHECK-SVE-NEXT: cmhi v18.2d, v16.2d, v0.2d
+; CHECK-SVE-NEXT: add z0.d, z0.d, #14 // =0xe
+; CHECK-SVE-NEXT: cmhi v19.2d, v7.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v20.2d, v7.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v21.2d, v7.2d, v3.2d
+; CHECK-SVE-NEXT: cmhi v22.2d, v7.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v23.2d, v7.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v24.2d, v7.2d, v6.2d
+; CHECK-SVE-NEXT: cmhi v1.2d, v16.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v16.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v3.2d, v16.2d, v3.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v16.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v7.2d, v7.2d, v0.2d
+; CHECK-SVE-NEXT: cmhi v5.2d, v16.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v6.2d, v16.2d, v6.2d
; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: cmhi v1.2d, v6.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v6.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v3.2d, v6.2d, v3.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v6.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v5.2d, v6.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v7.2d, v6.2d, v7.2d
-; CHECK-SVE-NEXT: cmhi v6.2d, v6.2d, v16.2d
+; CHECK-SVE-NEXT: cmhi v0.2d, v16.2d, v0.2d
+; CHECK-SVE-NEXT: uzp1 v16.4s, v21.4s, v20.4s
+; CHECK-SVE-NEXT: cmp x10, #1
+; CHECK-SVE-NEXT: uzp1 v20.4s, v23.4s, v22.4s
+; CHECK-SVE-NEXT: uzp1 v17.4s, v17.4s, v24.4s
+; CHECK-SVE-NEXT: cset w10, lt
; CHECK-SVE-NEXT: uzp1 v2.4s, v3.4s, v2.4s
-; CHECK-SVE-NEXT: uzp1 v3.4s, v5.4s, v4.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v7.4s
-; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v6.4s
-; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v3.8h
-; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
-; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: dup v1.16b, w9
+; CHECK-SVE-NEXT: uzp1 v3.4s, v19.4s, v7.4s
+; CHECK-SVE-NEXT: uzp1 v4.4s, v5.4s, v4.4s
+; CHECK-SVE-NEXT: uzp1 v5.4s, v18.4s, v6.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-SVE-NEXT: uzp1 v1.8h, v17.8h, v20.8h
+; CHECK-SVE-NEXT: uzp1 v3.8h, v16.8h, v3.8h
+; CHECK-SVE-NEXT: uzp1 v4.8h, v5.8h, v4.8h
+; CHECK-SVE-NEXT: uzp1 v0.8h, v2.8h, v0.8h
+; CHECK-SVE-NEXT: dup v2.16b, w10
+; CHECK-SVE-NEXT: uzp1 v1.16b, v1.16b, v3.16b
+; CHECK-SVE-NEXT: dup v3.16b, w9
; CHECK-SVE-NEXT: adrp x9, .LCPI14_0
-; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: ldr q1, [x9, :lo12:.LCPI14_0]
+; CHECK-SVE-NEXT: uzp1 v0.16b, v4.16b, v0.16b
+; CHECK-SVE-NEXT: orr v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI14_0]
+; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v3.16b
+; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
-; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-SVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
+; CHECK-SVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
+; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
+; CHECK-SVE-NEXT: addv h1, v1.8h
; CHECK-SVE-NEXT: addv h0, v0.8h
+; CHECK-SVE-NEXT: str h1, [x8]
; CHECK-SVE-NEXT: str h0, [x8, #2]
-; CHECK-SVE-NEXT: str h0, [x8]
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_32_split3:
; CHECK-NOSVE: // %bb.0: // %entry
; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_0
-; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: ldr q0, [x11, :lo12:.LCPI14_0]
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_2
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_0
+; CHECK-NOSVE-NEXT: add x11, x9, #3
+; CHECK-NOSVE-NEXT: sub x12, x9, #61
+; CHECK-NOSVE-NEXT: ldr q0, [x10, :lo12:.LCPI14_0]
+; CHECK-NOSVE-NEXT: csel x11, x11, x9, mi
+; CHECK-NOSVE-NEXT: subs x9, x9, #64
; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_1
-; CHECK-NOSVE-NEXT: ldr q2, [x11, :lo12:.LCPI14_2]
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: csel x9, x12, x9, mi
; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI14_1]
; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_3
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_4
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI14_2
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI14_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_5
+; CHECK-NOSVE-NEXT: asr x10, x11, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x12, :lo12:.LCPI14_2]
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI14_4
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_5
; CHECK-NOSVE-NEXT: dup v4.2d, x9
-; CHECK-NOSVE-NEXT: ldr q5, [x11, :lo12:.LCPI14_4]
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_6
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI14_5]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_7
-; CHECK-NOSVE-NEXT: ldr q7, [x11, :lo12:.LCPI14_6]
-; CHECK-NOSVE-NEXT: ldr q16, [x10, :lo12:.LCPI14_7]
+; CHECK-NOSVE-NEXT: ldr q5, [x12, :lo12:.LCPI14_4]
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI14_6
+; CHECK-NOSVE-NEXT: ldr q6, [x11, :lo12:.LCPI14_5]
+; CHECK-NOSVE-NEXT: dup v16.2d, x10
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_7
+; CHECK-NOSVE-NEXT: ldr q7, [x12, :lo12:.LCPI14_6]
; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: ldr q22, [x11, :lo12:.LCPI14_7]
+; CHECK-NOSVE-NEXT: cmhi v17.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v18.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v19.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v20.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v21.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v16.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v16.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v16.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v16.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v16.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v23.2d, v16.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v24.2d, v16.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v16.2d, v16.2d, v22.2d
; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v22.2d
+; CHECK-NOSVE-NEXT: uzp1 v17.4s, v18.4s, v17.4s
; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v23.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v16.4s, v24.4s
+; CHECK-NOSVE-NEXT: uzp1 v5.4s, v20.4s, v19.4s
+; CHECK-NOSVE-NEXT: uzp1 v6.4s, v6.4s, v21.4s
+; CHECK-NOSVE-NEXT: uzp1 v4.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: cmp x10, #1
+; CHECK-NOSVE-NEXT: cset w10, lt
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v2.8h, v5.8h, v17.8h
+; CHECK-NOSVE-NEXT: uzp1 v3.8h, v4.8h, v6.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: uzp1 v1.16b, v3.16b, v2.16b
+; CHECK-NOSVE-NEXT: dup v2.16b, w10
+; CHECK-NOSVE-NEXT: dup v3.16b, w9
; CHECK-NOSVE-NEXT: adrp x9, .LCPI14_8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI14_8]
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v2.16b
+; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI14_8]
+; CHECK-NOSVE-NEXT: orr v1.16b, v1.16b, v3.16b
; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: shl v1.16b, v1.16b, #7
; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-NOSVE-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-NOSVE-NEXT: ext v2.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: ext v3.16b, v1.16b, v1.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v2.16b
+; CHECK-NOSVE-NEXT: zip1 v1.16b, v1.16b, v3.16b
; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: addv h1, v1.8h
; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: str h1, [x8, #2]
; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 4)
@@ -1175,109 +1404,168 @@ entry:
define <32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64_split4:
; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x9, x1, x0
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: add x10, x9, #7
-; CHECK-SVE-NEXT: csel x9, x10, x9, mi
+; CHECK-SVE-NEXT: sub x11, x9, #121
+; CHECK-SVE-NEXT: csel x10, x10, x9, mi
+; CHECK-SVE-NEXT: subs x9, x9, #128
+; CHECK-SVE-NEXT: csel x9, x11, x9, mi
+; CHECK-SVE-NEXT: asr x10, x10, #3
; CHECK-SVE-NEXT: asr x9, x9, #3
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
; CHECK-SVE-NEXT: mov z3.d, z0.d
; CHECK-SVE-NEXT: mov z4.d, z0.d
; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z7.d, z0.d
-; CHECK-SVE-NEXT: mov z16.d, z0.d
-; CHECK-SVE-NEXT: dup v6.2d, x9
-; CHECK-SVE-NEXT: cmp x9, #1
+; CHECK-SVE-NEXT: mov z6.d, z0.d
+; CHECK-SVE-NEXT: dup v7.2d, x10
+; CHECK-SVE-NEXT: dup v16.2d, x9
; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-SVE-NEXT: cmp x9, #1
; CHECK-SVE-NEXT: add z3.d, z3.d, #8 // =0x8
; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
; CHECK-SVE-NEXT: add z5.d, z5.d, #4 // =0x4
-; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
-; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
-; CHECK-SVE-NEXT: cmhi v0.2d, v6.2d, v0.2d
+; CHECK-SVE-NEXT: add z6.d, z6.d, #2 // =0x2
+; CHECK-SVE-NEXT: cmhi v17.2d, v7.2d, v0.2d
+; CHECK-SVE-NEXT: cmhi v18.2d, v16.2d, v0.2d
+; CHECK-SVE-NEXT: add z0.d, z0.d, #14 // =0xe
+; CHECK-SVE-NEXT: cmhi v19.2d, v7.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v20.2d, v7.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v21.2d, v7.2d, v3.2d
+; CHECK-SVE-NEXT: cmhi v22.2d, v7.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v23.2d, v7.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v24.2d, v7.2d, v6.2d
+; CHECK-SVE-NEXT: cmhi v1.2d, v16.2d, v1.2d
+; CHECK-SVE-NEXT: cmhi v2.2d, v16.2d, v2.2d
+; CHECK-SVE-NEXT: cmhi v3.2d, v16.2d, v3.2d
+; CHECK-SVE-NEXT: cmhi v4.2d, v16.2d, v4.2d
+; CHECK-SVE-NEXT: cmhi v7.2d, v7.2d, v0.2d
+; CHECK-SVE-NEXT: cmhi v5.2d, v16.2d, v5.2d
+; CHECK-SVE-NEXT: cmhi v6.2d, v16.2d, v6.2d
; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: cmhi v1.2d, v6.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v6.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v3.2d, v6.2d, v3.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v6.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v5.2d, v6.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v7.2d, v6.2d, v7.2d
-; CHECK-SVE-NEXT: cmhi v6.2d, v6.2d, v16.2d
+; CHECK-SVE-NEXT: cmhi v0.2d, v16.2d, v0.2d
+; CHECK-SVE-NEXT: uzp1 v16.4s, v21.4s, v20.4s
+; CHECK-SVE-NEXT: cmp x10, #1
+; CHECK-SVE-NEXT: uzp1 v20.4s, v23.4s, v22.4s
+; CHECK-SVE-NEXT: uzp1 v17.4s, v17.4s, v24.4s
+; CHECK-SVE-NEXT: cset w10, lt
; CHECK-SVE-NEXT: uzp1 v2.4s, v3.4s, v2.4s
-; CHECK-SVE-NEXT: uzp1 v3.4s, v5.4s, v4.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v7.4s
-; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v6.4s
-; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v3.8h
-; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
-; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: dup v1.16b, w9
+; CHECK-SVE-NEXT: uzp1 v3.4s, v19.4s, v7.4s
+; CHECK-SVE-NEXT: uzp1 v4.4s, v5.4s, v4.4s
+; CHECK-SVE-NEXT: uzp1 v5.4s, v18.4s, v6.4s
+; CHECK-SVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-SVE-NEXT: uzp1 v1.8h, v17.8h, v20.8h
+; CHECK-SVE-NEXT: uzp1 v3.8h, v16.8h, v3.8h
+; CHECK-SVE-NEXT: uzp1 v4.8h, v5.8h, v4.8h
+; CHECK-SVE-NEXT: uzp1 v0.8h, v2.8h, v0.8h
+; CHECK-SVE-NEXT: dup v2.16b, w10
+; CHECK-SVE-NEXT: uzp1 v1.16b, v1.16b, v3.16b
+; CHECK-SVE-NEXT: dup v3.16b, w9
; CHECK-SVE-NEXT: adrp x9, .LCPI18_0
-; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: ldr q1, [x9, :lo12:.LCPI18_0]
+; CHECK-SVE-NEXT: uzp1 v0.16b, v4.16b, v0.16b
+; CHECK-SVE-NEXT: orr v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI18_0]
+; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v3.16b
+; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
-; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-SVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
+; CHECK-SVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
+; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
+; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
+; CHECK-SVE-NEXT: addv h1, v1.8h
; CHECK-SVE-NEXT: addv h0, v0.8h
+; CHECK-SVE-NEXT: str h1, [x8]
; CHECK-SVE-NEXT: str h0, [x8, #2]
-; CHECK-SVE-NEXT: str h0, [x8]
; CHECK-SVE-NEXT: ret
;
; CHECK-NOSVE-LABEL: whilewr_64_split4:
; CHECK-NOSVE: // %bb.0: // %entry
; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_0
-; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: ldr q0, [x11, :lo12:.LCPI18_0]
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_2
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_0
+; CHECK-NOSVE-NEXT: add x11, x9, #7
+; CHECK-NOSVE-NEXT: sub x12, x9, #121
+; CHECK-NOSVE-NEXT: ldr q0, [x10, :lo12:.LCPI18_0]
+; CHECK-NOSVE-NEXT: csel x11, x11, x9, mi
+; CHECK-NOSVE-NEXT: subs x9, x9, #128
; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_1
-; CHECK-NOSVE-NEXT: ldr q2, [x11, :lo12:.LCPI18_2]
-; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: csel x9, x12, x9, mi
; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI18_1]
; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_3
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_4
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI18_2
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI18_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_5
+; CHECK-NOSVE-NEXT: asr x10, x11, #3
+; CHECK-NOSVE-NEXT: ldr q2, [x12, :lo12:.LCPI18_2]
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI18_4
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_5
; CHECK-NOSVE-NEXT: dup v4.2d, x9
-; CHECK-NOSVE-NEXT: ldr q5, [x11, :lo12:.LCPI18_4]
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_6
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI18_5]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_7
-; CHECK-NOSVE-NEXT: ldr q7, [x11, :lo12:.LCPI18_6]
-; CHECK-NOSVE-NEXT: ldr q16, [x10, :lo12:.LCPI18_7]
+; CHECK-NOSVE-NEXT: ldr q5, [x12, :lo12:.LCPI18_4]
+; CHECK-NOSVE-NEXT: adrp x12, .LCPI18_6
+; CHECK-NOSVE-NEXT: ldr q6, [x11, :lo12:.LCPI18_5]
+; CHECK-NOSVE-NEXT: dup v16.2d, x10
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_7
+; CHECK-NOSVE-NEXT: ldr q7, [x12, :lo12:.LCPI18_6]
; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: ldr q22, [x11, :lo12:.LCPI18_7]
+; CHECK-NOSVE-NEXT: cmhi v17.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v18.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v19.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v20.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v21.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v16.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v16.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v16.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v16.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v16.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v23.2d, v16.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v24.2d, v16.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v16.2d, v16.2d, v22.2d
; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v22.2d
+; CHECK-NOSVE-NEXT: uzp1 v17.4s, v18.4s, v17.4s
; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v23.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v16.4s, v24.4s
+; CHECK-NOSVE-NEXT: uzp1 v5.4s, v20.4s, v19.4s
+; CHECK-NOSVE-NEXT: uzp1 v6.4s, v6.4s, v21.4s
+; CHECK-NOSVE-NEXT: uzp1 v4.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: cset w9, lt
+; CHECK-NOSVE-NEXT: cmp x10, #1
+; CHECK-NOSVE-NEXT: cset w10, lt
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v2.8h, v5.8h, v17.8h
+; CHECK-NOSVE-NEXT: uzp1 v3.8h, v4.8h, v6.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w9
+; CHECK-NOSVE-NEXT: uzp1 v1.16b, v3.16b, v2.16b
+; CHECK-NOSVE-NEXT: dup v2.16b, w10
+; CHECK-NOSVE-NEXT: dup v3.16b, w9
; CHECK-NOSVE-NEXT: adrp x9, .LCPI18_8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI18_8]
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v2.16b
+; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI18_8]
+; CHECK-NOSVE-NEXT: orr v1.16b, v1.16b, v3.16b
; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: shl v1.16b, v1.16b, #7
; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ext v1.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-NOSVE-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-NOSVE-NEXT: ext v2.16b, v0.16b, v0.16b, #8
+; CHECK-NOSVE-NEXT: ext v3.16b, v1.16b, v1.16b, #8
+; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v2.16b
+; CHECK-NOSVE-NEXT: zip1 v1.16b, v1.16b, v3.16b
; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: str h0, [x8, #2]
+; CHECK-NOSVE-NEXT: addv h1, v1.8h
; CHECK-NOSVE-NEXT: str h0, [x8]
+; CHECK-NOSVE-NEXT: str h1, [x8, #2]
; CHECK-NOSVE-NEXT: ret
entry:
%0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 8)
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index de48739251f4c..576ec81292014 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -330,7 +330,6 @@ define <vscale x 32 x i1> @whilewr_8_split(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_8_split:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE2-NEXT: incb x1
; CHECK-SVE2-NEXT: incb x0
; CHECK-SVE2-NEXT: whilewr p1.b, x0, x1
; CHECK-SVE2-NEXT: ret
@@ -348,74 +347,73 @@ define <vscale x 32 x i1> @whilewr_8_split(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: rdvl x8, #1
+; CHECK-SVE-NEXT: sub x8, x1, x0
; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x0, x8
-; CHECK-SVE-NEXT: add x8, x1, x8
-; CHECK-SVE-NEXT: sub x8, x8, x9
+; CHECK-SVE-NEXT: mov z4.d, x8
; CHECK-SVE-NEXT: sub x9, x1, x0
-; CHECK-SVE-NEXT: mov z3.d, x8
+; CHECK-SVE-NEXT: rdvl x10, #1
+; CHECK-SVE-NEXT: sub x9, x9, x10
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z4.d, z0.d
; CHECK-SVE-NEXT: incd z1.d
; CHECK-SVE-NEXT: incd z2.d, all, mul #2
; CHECK-SVE-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: mov z3.d, z1.d
; CHECK-SVE-NEXT: mov z6.d, z1.d
; CHECK-SVE-NEXT: mov z7.d, z2.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z5.d
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z3.d, z2.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z4.d, z2.d
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z4.d, z5.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
; CHECK-SVE-NEXT: incd z6.d, all, mul #4
; CHECK-SVE-NEXT: incd z7.d, all, mul #4
; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
-; CHECK-SVE-NEXT: mov z24.d, z4.d
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z3.d, z6.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z3.d, z4.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z3.d, z7.d
+; CHECK-SVE-NEXT: mov z24.d, z3.d
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z4.d, z3.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z4.d, z6.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z4.d, z7.d
; CHECK-SVE-NEXT: incd z24.d, all, mul #4
; CHECK-SVE-NEXT: uzp1 p2.s, p3.s, p4.s
; CHECK-SVE-NEXT: uzp1 p3.s, p5.s, p6.s
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z3.d, z24.d
-; CHECK-SVE-NEXT: mov z3.d, x9
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z4.d, z24.d
+; CHECK-SVE-NEXT: mov z4.d, x9
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p2.h
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z3.d, z4.d
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z3.d, z2.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z4.d, z24.d
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z4.d, z7.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z4.d, z6.d
; CHECK-SVE-NEXT: uzp1 p7.s, p7.s, p8.s
-; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z3.d, z0.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z6.d
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z3.d, z5.d
+; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z4.d, z5.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z4.d, z3.d
+; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z4.d, z2.d
; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p7.h
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z3.d, z7.d
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z24.d
+; CHECK-SVE-NEXT: uzp1 p3.h, p3.h, p7.h
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z4.d, z0.d
; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
; CHECK-SVE-NEXT: uzp1 p5.s, p9.s, p6.s
; CHECK-SVE-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
; CHECK-SVE-NEXT: whilelo p6.b, xzr, x8
-; CHECK-SVE-NEXT: uzp1 p3.s, p8.s, p3.s
+; CHECK-SVE-NEXT: uzp1 p2.s, p8.s, p2.s
; CHECK-SVE-NEXT: cmp x9, #1
; CHECK-SVE-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: uzp1 p0.s, p0.s, p7.s
; CHECK-SVE-NEXT: cset w8, lt
; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
; CHECK-SVE-NEXT: uzp1 p4.h, p5.h, p4.h
; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p3.h, p0.h
-; CHECK-SVE-NEXT: uzp1 p1.b, p1.b, p2.b
-; CHECK-SVE-NEXT: uzp1 p0.b, p4.b, p0.b
+; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p2.h
+; CHECK-SVE-NEXT: uzp1 p1.b, p1.b, p3.b
+; CHECK-SVE-NEXT: uzp1 p2.b, p0.b, p4.b
; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: whilelo p2.b, xzr, x8
-; CHECK-SVE-NEXT: sel p1.b, p1, p1.b, p6.b
+; CHECK-SVE-NEXT: whilelo p3.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p1, p1.b, p6.b
; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p2.b
+; CHECK-SVE-NEXT: sel p1.b, p2, p2.b, p3.b
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
@@ -427,158 +425,147 @@ entry:
define <vscale x 64 x i1> @whilewr_8_split2(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_8_split2:
; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: mov x8, x1
-; CHECK-SVE2-NEXT: mov x9, x0
+; CHECK-SVE2-NEXT: mov x8, x0
; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE2-NEXT: addvl x10, x1, #3
-; CHECK-SVE2-NEXT: incb x1, all, mul #2
-; CHECK-SVE2-NEXT: addvl x11, x0, #3
+; CHECK-SVE2-NEXT: addvl x9, x0, #3
; CHECK-SVE2-NEXT: incb x0, all, mul #2
; CHECK-SVE2-NEXT: incb x8
-; CHECK-SVE2-NEXT: incb x9
-; CHECK-SVE2-NEXT: whilewr p3.b, x11, x10
+; CHECK-SVE2-NEXT: whilewr p3.b, x9, x1
; CHECK-SVE2-NEXT: whilewr p2.b, x0, x1
-; CHECK-SVE2-NEXT: whilewr p1.b, x9, x8
+; CHECK-SVE2-NEXT: whilewr p1.b, x8, x1
; CHECK-SVE2-NEXT: ret
;
; CHECK-SVE-LABEL: whilewr_8_split2:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-2
-; CHECK-SVE-NEXT: str p12, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p11, [sp, #8, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p10, [sp, #9, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p9, [sp, #10, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p8, [sp, #11, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p7, [sp, #12, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #13, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #14, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #15, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x40, 0x1e, 0x22 // sp + 16 + 16 * VG
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p11, [sp] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: rdvl x8, #1
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x0, x8
-; CHECK-SVE-NEXT: add x8, x1, x8
-; CHECK-SVE-NEXT: rdvl x10, #2
-; CHECK-SVE-NEXT: sub x8, x8, x9
-; CHECK-SVE-NEXT: add x9, x0, x10
-; CHECK-SVE-NEXT: add x10, x1, x10
-; CHECK-SVE-NEXT: mov z24.d, x8
-; CHECK-SVE-NEXT: sub x9, x10, x9
+; CHECK-SVE-NEXT: sub x9, x1, x0
+; CHECK-SVE-NEXT: ptrue p2.d
+; CHECK-SVE-NEXT: mov z24.d, x9
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: rdvl x10, #1
+; CHECK-SVE-NEXT: sub x10, x8, x10
; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z25.d, x9
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z24.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: cmphi p0.d, p2/z, z24.d, z0.d
; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z5.d, all, mul #2
-; CHECK-SVE-NEXT: incd z2.d, all, mul #4
-; CHECK-SVE-NEXT: mov z7.d, z1.d
-; CHECK-SVE-NEXT: mov z3.d, z1.d
-; CHECK-SVE-NEXT: mov z4.d, z5.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z24.d, z1.d
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z24.d, z5.d
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z24.d, z2.d
-; CHECK-SVE-NEXT: incd z7.d, all, mul #2
-; CHECK-SVE-NEXT: incd z3.d, all, mul #4
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
; CHECK-SVE-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p8.s, p1.s, p2.s
-; CHECK-SVE-NEXT: mov z6.d, z7.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z24.d, z7.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z24.d, z3.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z24.d, z4.d
+; CHECK-SVE-NEXT: mov z3.d, z1.d
+; CHECK-SVE-NEXT: mov z5.d, z1.d
+; CHECK-SVE-NEXT: mov z6.d, z2.d
+; CHECK-SVE-NEXT: cmphi p3.d, p2/z, z24.d, z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p2/z, z24.d, z2.d
+; CHECK-SVE-NEXT: cmphi p5.d, p2/z, z24.d, z4.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: incd z5.d, all, mul #4
; CHECK-SVE-NEXT: incd z6.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p5.s, p5.s, p6.s
-; CHECK-SVE-NEXT: uzp1 p1.s, p4.s, p7.s
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z24.d, z6.d
-; CHECK-SVE-NEXT: uzp1 p4.h, p8.h, p5.h
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: rdvl x8, #3
+; CHECK-SVE-NEXT: uzp1 p0.s, p0.s, p3.s
+; CHECK-SVE-NEXT: mov z7.d, z3.d
+; CHECK-SVE-NEXT: cmphi p4.d, p2/z, z24.d, z3.d
+; CHECK-SVE-NEXT: cmphi p6.d, p2/z, z24.d, z5.d
+; CHECK-SVE-NEXT: cmphi p3.d, p2/z, z24.d, z6.d
+; CHECK-SVE-NEXT: incd z7.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p4.s
+; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p6.s
+; CHECK-SVE-NEXT: cmphi p7.d, p2/z, z24.d, z7.d
+; CHECK-SVE-NEXT: mov z24.d, x10
+; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-SVE-NEXT: cmp x9, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: cmphi p5.d, p2/z, z24.d, z7.d
+; CHECK-SVE-NEXT: cmphi p6.d, p2/z, z24.d, z6.d
+; CHECK-SVE-NEXT: cmphi p8.d, p2/z, z24.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p3.s, p7.s
+; CHECK-SVE-NEXT: cmphi p7.d, p2/z, z24.d, z5.d
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: cmphi p9.d, p2/z, z24.d, z3.d
+; CHECK-SVE-NEXT: cmphi p10.d, p2/z, z24.d, z2.d
+; CHECK-SVE-NEXT: cmphi p11.d, p2/z, z24.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p3.h, p4.h, p1.h
+; CHECK-SVE-NEXT: cmphi p4.d, p2/z, z24.d, z1.d
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x9
+; CHECK-SVE-NEXT: rdvl x9, #2
+; CHECK-SVE-NEXT: sub x9, x8, x9
+; CHECK-SVE-NEXT: uzp1 p5.s, p6.s, p5.s
+; CHECK-SVE-NEXT: cmp x10, #1
+; CHECK-SVE-NEXT: uzp1 p6.s, p8.s, p7.s
+; CHECK-SVE-NEXT: mov z24.d, x9
; CHECK-SVE-NEXT: cset w10, lt
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z25.d, z6.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z25.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p7.s, p10.s, p9.s
; CHECK-SVE-NEXT: sbfx x10, x10, #0, #1
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z25.d, z3.d
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z25.d, z2.d
-; CHECK-SVE-NEXT: uzp1 p2.s, p2.s, p3.s
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z25.d, z7.d
-; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z25.d, z5.d
-; CHECK-SVE-NEXT: whilelo p10.b, xzr, x10
-; CHECK-SVE-NEXT: add x10, x0, x8
-; CHECK-SVE-NEXT: add x8, x1, x8
-; CHECK-SVE-NEXT: cmphi p11.d, p0/z, z25.d, z1.d
-; CHECK-SVE-NEXT: cmphi p12.d, p0/z, z25.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p8.s, p11.s, p4.s
+; CHECK-SVE-NEXT: uzp1 p0.b, p0.b, p3.b
+; CHECK-SVE-NEXT: cmphi p3.d, p2/z, z24.d, z7.d
+; CHECK-SVE-NEXT: cmphi p9.d, p2/z, z24.d, z6.d
+; CHECK-SVE-NEXT: uzp1 p5.h, p6.h, p5.h
+; CHECK-SVE-NEXT: cmphi p6.d, p2/z, z24.d, z5.d
+; CHECK-SVE-NEXT: cmphi p10.d, p2/z, z24.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p7.h, p8.h, p7.h
+; CHECK-SVE-NEXT: cmphi p8.d, p2/z, z24.d, z3.d
+; CHECK-SVE-NEXT: cmphi p11.d, p2/z, z24.d, z2.d
+; CHECK-SVE-NEXT: whilelo p4.b, xzr, x10
+; CHECK-SVE-NEXT: rdvl x10, #3
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: cmphi p1.d, p2/z, z24.d, z1.d
; CHECK-SVE-NEXT: sub x8, x8, x10
+; CHECK-SVE-NEXT: uzp1 p5.b, p7.b, p5.b
+; CHECK-SVE-NEXT: cmphi p7.d, p2/z, z24.d, z0.d
; CHECK-SVE-NEXT: mov z24.d, x8
-; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p2.h
+; CHECK-SVE-NEXT: uzp1 p3.s, p9.s, p3.s
; CHECK-SVE-NEXT: cmp x9, #1
-; CHECK-SVE-NEXT: uzp1 p2.s, p6.s, p5.s
+; CHECK-SVE-NEXT: uzp1 p6.s, p10.s, p6.s
; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: sub x10, x1, x0
-; CHECK-SVE-NEXT: uzp1 p5.s, p8.s, p7.s
+; CHECK-SVE-NEXT: uzp1 p8.s, p11.s, p8.s
+; CHECK-SVE-NEXT: cmphi p11.d, p2/z, z24.d, z2.d
+; CHECK-SVE-NEXT: cmphi p9.d, p2/z, z24.d, z7.d
+; CHECK-SVE-NEXT: uzp1 p7.s, p7.s, p1.s
+; CHECK-SVE-NEXT: cmphi p10.d, p2/z, z24.d, z6.d
; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: mov z25.d, x10
-; CHECK-SVE-NEXT: uzp1 p3.s, p9.s, p3.s
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z24.d, z3.d
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z24.d, z7.d
-; CHECK-SVE-NEXT: uzp1 p6.s, p12.s, p11.s
-; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z24.d, z5.d
-; CHECK-SVE-NEXT: uzp1 p1.b, p4.b, p1.b
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z24.d, z6.d
-; CHECK-SVE-NEXT: uzp1 p2.h, p5.h, p2.h
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z24.d, z4.d
+; CHECK-SVE-NEXT: sel p1.b, p5, p5.b, p4.b
+; CHECK-SVE-NEXT: cmphi p4.d, p2/z, z24.d, z5.d
+; CHECK-SVE-NEXT: cmphi p5.d, p2/z, z24.d, z4.d
; CHECK-SVE-NEXT: uzp1 p3.h, p6.h, p3.h
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z24.d, z2.d
-; CHECK-SVE-NEXT: sel p1.b, p1, p1.b, p10.b
-; CHECK-SVE-NEXT: cmphi p10.d, p0/z, z24.d, z0.d
-; CHECK-SVE-NEXT: uzp1 p2.b, p3.b, p2.b
+; CHECK-SVE-NEXT: cmphi p6.d, p2/z, z24.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p7.h, p7.h, p8.h
+; CHECK-SVE-NEXT: cmphi p8.d, p2/z, z24.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p2/z, z24.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p9.s, p10.s, p9.s
+; CHECK-SVE-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z24.d, z1.d
-; CHECK-SVE-NEXT: whilelo p3.b, xzr, x9
-; CHECK-SVE-NEXT: uzp1 p6.s, p6.s, p7.s
+; CHECK-SVE-NEXT: uzp1 p5.s, p11.s, p6.s
+; CHECK-SVE-NEXT: ldr p11, [sp] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: whilelo p6.b, xzr, x9
+; CHECK-SVE-NEXT: uzp1 p2.s, p2.s, p8.s
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: uzp1 p7.s, p9.s, p8.s
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z25.d, z7.d
-; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z25.d, z5.d
-; CHECK-SVE-NEXT: sel p2.b, p2, p2.b, p3.b
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z25.d, z1.d
-; CHECK-SVE-NEXT: cmphi p11.d, p0/z, z25.d, z0.d
-; CHECK-SVE-NEXT: uzp1 p4.h, p6.h, p4.h
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z25.d, z3.d
-; CHECK-SVE-NEXT: cmphi p12.d, p0/z, z25.d, z2.d
-; CHECK-SVE-NEXT: uzp1 p5.s, p10.s, p5.s
-; CHECK-SVE-NEXT: cmphi p10.d, p0/z, z25.d, z4.d
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z25.d, z6.d
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p8.s, p9.s, p8.s
-; CHECK-SVE-NEXT: whilelo p9.b, xzr, x8
-; CHECK-SVE-NEXT: uzp1 p3.s, p11.s, p3.s
-; CHECK-SVE-NEXT: cmp x10, #1
-; CHECK-SVE-NEXT: ldr p11, [sp, #8, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p6.s, p12.s, p6.s
+; CHECK-SVE-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: ldr p12, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.s, p10.s, p0.s
+; CHECK-SVE-NEXT: uzp1 p4.h, p4.h, p9.h
+; CHECK-SVE-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p5.h
; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: ldr p10, [sp, #9, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p3.h, p3.h, p8.h
-; CHECK-SVE-NEXT: ldr p8, [sp, #11, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p6.h, p0.h
-; CHECK-SVE-NEXT: ldr p6, [sp, #13, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p5.h, p5.h, p7.h
-; CHECK-SVE-NEXT: ldr p7, [sp, #12, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.b, p3.b, p0.b
-; CHECK-SVE-NEXT: whilelo p3.b, xzr, x8
-; CHECK-SVE-NEXT: uzp1 p4.b, p5.b, p4.b
-; CHECK-SVE-NEXT: ldr p5, [sp, #14, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p3.b
-; CHECK-SVE-NEXT: sel p3.b, p4, p4.b, p9.b
-; CHECK-SVE-NEXT: ldr p9, [sp, #10, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: ldr p4, [sp, #15, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: addvl sp, sp, #2
+; CHECK-SVE-NEXT: uzp1 p3.b, p7.b, p3.b
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p4.b, p2.b, p4.b
+; CHECK-SVE-NEXT: whilelo p5.b, xzr, x8
+; CHECK-SVE-NEXT: sel p2.b, p3, p3.b, p6.b
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: sel p3.b, p4, p4.b, p5.b
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
@@ -720,13 +707,12 @@ define <vscale x 32 x i1> @whilewr_16_expand2(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: sub x8, x1, x0
; CHECK-SVE2-NEXT: incb x0, all, mul #2
; CHECK-SVE2-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE2-NEXT: incb x1, all, mul #2
; CHECK-SVE2-NEXT: ptrue p0.d
; CHECK-SVE2-NEXT: asr x8, x8, #1
+; CHECK-SVE2-NEXT: sub x9, x1, x0
; CHECK-SVE2-NEXT: mov z1.d, z0.d
; CHECK-SVE2-NEXT: mov z2.d, z0.d
; CHECK-SVE2-NEXT: mov z3.d, z0.d
-; CHECK-SVE2-NEXT: sub x9, x1, x0
; CHECK-SVE2-NEXT: mov z5.d, x8
; CHECK-SVE2-NEXT: add x9, x9, x9, lsr #63
; CHECK-SVE2-NEXT: incd z1.d
@@ -805,28 +791,27 @@ define <vscale x 32 x i1> @whilewr_16_expand2(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: rdvl x8, #2
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x0, x8
-; CHECK-SVE-NEXT: add x8, x1, x8
-; CHECK-SVE-NEXT: sub x8, x8, x9
+; CHECK-SVE-NEXT: sub x8, x1, x0
; CHECK-SVE-NEXT: sub x9, x1, x0
; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: rdvl x10, #2
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: sub x9, x9, x10
+; CHECK-SVE-NEXT: asr x8, x8, #1
; CHECK-SVE-NEXT: add x9, x9, x9, lsr #63
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
; CHECK-SVE-NEXT: mov z3.d, z0.d
-; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: mov z5.d, x8
; CHECK-SVE-NEXT: asr x9, x9, #1
; CHECK-SVE-NEXT: incd z1.d
; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: mov z5.d, x8
; CHECK-SVE-NEXT: incd z3.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z5.d, z0.d
; CHECK-SVE-NEXT: mov z4.d, z1.d
; CHECK-SVE-NEXT: mov z6.d, z1.d
; CHECK-SVE-NEXT: mov z7.d, z2.d
; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z5.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z5.d, z0.d
; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z5.d, z3.d
; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z2.d
; CHECK-SVE-NEXT: incd z4.d, all, mul #2
@@ -871,12 +856,12 @@ define <vscale x 32 x i1> @whilewr_16_expand2(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p3.h
; CHECK-SVE-NEXT: uzp1 p1.b, p1.b, p2.b
-; CHECK-SVE-NEXT: uzp1 p0.b, p0.b, p4.b
+; CHECK-SVE-NEXT: uzp1 p2.b, p0.b, p4.b
; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: whilelo p2.b, xzr, x8
-; CHECK-SVE-NEXT: sel p1.b, p1, p1.b, p6.b
+; CHECK-SVE-NEXT: whilelo p3.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p1, p1.b, p6.b
; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p2.b
+; CHECK-SVE-NEXT: sel p1.b, p2, p2.b, p3.b
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
@@ -1086,7 +1071,6 @@ define <vscale x 32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: ptrue p0.d
; CHECK-SVE2-NEXT: add x9, x8, #3
; CHECK-SVE2-NEXT: incb x0, all, mul #4
-; CHECK-SVE2-NEXT: incb x1, all, mul #4
; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
; CHECK-SVE2-NEXT: asr x8, x8, #2
; CHECK-SVE2-NEXT: mov z1.d, z0.d
@@ -1174,36 +1158,35 @@ define <vscale x 32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: rdvl x8, #4
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x0, x8
-; CHECK-SVE-NEXT: add x8, x1, x8
-; CHECK-SVE-NEXT: subs x8, x8, x9
; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
+; CHECK-SVE-NEXT: rdvl x9, #4
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: add x9, x0, x9
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z5.d, x8
; CHECK-SVE-NEXT: incd z1.d
; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: mov z5.d, x8
; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
; CHECK-SVE-NEXT: mov z3.d, z1.d
; CHECK-SVE-NEXT: mov z6.d, z2.d
; CHECK-SVE-NEXT: mov z7.d, z1.d
; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
; CHECK-SVE-NEXT: incd z3.d, all, mul #2
; CHECK-SVE-NEXT: incd z6.d, all, mul #4
; CHECK-SVE-NEXT: incd z7.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
; CHECK-SVE-NEXT: mov z24.d, z3.d
; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
-; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
; CHECK-SVE-NEXT: incd z24.d, all, mul #4
; CHECK-SVE-NEXT: uzp1 p2.s, p2.s, p7.s
; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p8.s
@@ -1214,13 +1197,12 @@ define <vscale x 32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
; CHECK-SVE-NEXT: uzp1 p6.s, p6.s, p9.s
; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x9
; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p6.h
; CHECK-SVE-NEXT: add x9, x8, #3
; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: uzp1 p2.b, p3.b, p2.b
; CHECK-SVE-NEXT: asr x8, x8, #2
-; CHECK-SVE-NEXT: mov p1.b, p2/m, p2.b
; CHECK-SVE-NEXT: mov z5.d, x8
; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
@@ -1244,11 +1226,12 @@ define <vscale x 32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p4.h
; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: whilelo p3.b, xzr, x8
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.b, p0.b, p5.b
+; CHECK-SVE-NEXT: whilelo p4.b, xzr, x8
+; CHECK-SVE-NEXT: uzp1 p3.b, p0.b, p5.b
; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p3.b
+; CHECK-SVE-NEXT: sel p0.b, p2, p2.b, p1.b
+; CHECK-SVE-NEXT: sel p1.b, p3, p3.b, p4.b
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
@@ -1504,7 +1487,6 @@ define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: subs x8, x1, x0
; CHECK-SVE2-NEXT: ptrue p0.d
; CHECK-SVE2-NEXT: add x9, x8, #7
-; CHECK-SVE2-NEXT: addvl x10, x1, #8
; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
; CHECK-SVE2-NEXT: addvl x9, x0, #8
; CHECK-SVE2-NEXT: asr x8, x8, #3
@@ -1540,7 +1522,7 @@ define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
; CHECK-SVE2-NEXT: uzp1 p6.s, p6.s, p9.s
; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE2-NEXT: subs x8, x10, x9
+; CHECK-SVE2-NEXT: subs x8, x1, x9
; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p6.h
; CHECK-SVE2-NEXT: add x9, x8, #7
; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
@@ -1593,36 +1575,35 @@ define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-SVE-NEXT: .cfi_offset w29, -16
; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: rdvl x8, #8
+; CHECK-SVE-NEXT: subs x8, x1, x0
; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x0, x8
-; CHECK-SVE-NEXT: add x8, x1, x8
-; CHECK-SVE-NEXT: subs x8, x8, x9
; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: csel x8, x9, x8, mi
+; CHECK-SVE-NEXT: rdvl x9, #8
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: add x9, x0, x9
; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z5.d, x8
; CHECK-SVE-NEXT: incd z1.d
; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: mov z5.d, x8
; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
; CHECK-SVE-NEXT: mov z3.d, z1.d
; CHECK-SVE-NEXT: mov z6.d, z2.d
; CHECK-SVE-NEXT: mov z7.d, z1.d
; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
; CHECK-SVE-NEXT: incd z3.d, all, mul #2
; CHECK-SVE-NEXT: incd z6.d, all, mul #4
; CHECK-SVE-NEXT: incd z7.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
; CHECK-SVE-NEXT: mov z24.d, z3.d
; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
-; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
; CHECK-SVE-NEXT: incd z24.d, all, mul #4
; CHECK-SVE-NEXT: uzp1 p2.s, p2.s, p7.s
; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p8.s
@@ -1633,13 +1614,12 @@ define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
; CHECK-SVE-NEXT: uzp1 p6.s, p6.s, p9.s
; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: subs x8, x1, x9
; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p6.h
; CHECK-SVE-NEXT: add x9, x8, #7
; CHECK-SVE-NEXT: csel x8, x9, x8, mi
; CHECK-SVE-NEXT: uzp1 p2.b, p3.b, p2.b
; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: mov p1.b, p2/m, p2.b
; CHECK-SVE-NEXT: mov z5.d, x8
; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
@@ -1663,11 +1643,12 @@ define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p4.h
; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: whilelo p3.b, xzr, x8
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.b, p0.b, p5.b
+; CHECK-SVE-NEXT: whilelo p4.b, xzr, x8
+; CHECK-SVE-NEXT: uzp1 p3.b, p0.b, p5.b
; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p3.b
+; CHECK-SVE-NEXT: sel p0.b, p2, p2.b, p1.b
+; CHECK-SVE-NEXT: sel p1.b, p3, p3.b, p4.b
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
; CHECK-SVE-NEXT: addvl sp, sp, #1
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
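
For reference, every split/expand test in the diffs above bottoms out in a direct
call to one of the loop-dependence mask intrinsics this series adds. A minimal
standalone example, using the same intrinsic name and signature that appear in
the test bodies (the function name @war_mask_example is illustrative only):

    ; Write-after-read alias mask for 32 lanes of 4-byte elements, as
    ; exercised by whilewr_32_split3 above. Lanes whose accesses through
    ; %a and %b would overlap within one vector iteration come back false.
    define <32 x i1> @war_mask_example(ptr %a, ptr %b) {
    entry:
      %mask = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 4)
      ret <32 x i1> %mask
    }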
From 3d7c2da0935d23331cf65996a35fe72b659d46e6 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 13 Aug 2025 11:26:13 +0100
Subject: [PATCH 40/43] Move nosve/nosve2 tests to separate files
---
llvm/test/CodeGen/AArch64/alias_mask.ll | 2298 +++++------------
llvm/test/CodeGen/AArch64/alias_mask_nosve.ll | 48 +
.../CodeGen/AArch64/alias_mask_scalable.ll | 2130 +++------------
.../AArch64/alias_mask_scalable_nosve2.ll | 59 +
4 files changed, 1059 insertions(+), 3476 deletions(-)
create mode 100644 llvm/test/CodeGen/AArch64/alias_mask_nosve.ll
create mode 100644 llvm/test/CodeGen/AArch64/alias_mask_scalable_nosve2.ll
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 1eb54e5b50627..750d67c1b8ce7 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -1,1843 +1,731 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
-; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE
-; RUN: llc -mtriple=aarch64 %s -o - | FileCheck %s --check-prefix=CHECK-NOSVE
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s
define <16 x i1> @whilewr_8(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_8:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: // kill: def $q0 killed $q0 killed $z0
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_8:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_1
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI0_0]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_2
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI0_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x8, :lo12:.LCPI0_2]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_4
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_3
-; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI0_4]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_5
-; CHECK-NOSVE-NEXT: dup v2.2d, x9
-; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI0_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_6
-; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI0_5]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_7
-; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI0_6]
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI0_7]
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_8:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.b, x0, x1
+; CHECK-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: // kill: def $q0 killed $q0 killed $z0
+; CHECK-NEXT: ret
entry:
%0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
ret <16 x i1> %0
}
define <8 x i1> @whilewr_16(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_16:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
-; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_16:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI1_1
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI1_2
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI1_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_3
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI1_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI1_2]
-; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI1_3]
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_16:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.h, x0, x1
+; CHECK-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: xtn v0.8b, v0.8h
+; CHECK-NEXT: ret
entry:
%0 = call <8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 2)
ret <8 x i1> %0
}
define <4 x i1> @whilewr_32(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_32:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
-; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_32:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI2_0
-; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI2_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI2_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI2_1]
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
-; CHECK-NOSVE-NEXT: dup v1.4h, w8
-; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_32:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.s, x0, x1
+; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: xtn v0.4h, v0.4s
+; CHECK-NEXT: ret
entry:
%0 = call <4 x i1> @llvm.loop.dependence.war.mask.v4i1(ptr %a, ptr %b, i64 4)
ret <4 x i1> %0
}
define <2 x i1> @whilewr_64(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_64:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.d, x0, x1
-; CHECK-SVE-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: xtn v0.2s, v0.2d
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_64:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI3_0
-; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI3_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
-; CHECK-NOSVE-NEXT: asr x9, x9, #3
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: dup v1.2s, w8
-; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_64:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.d, x0, x1
+; CHECK-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: xtn v0.2s, v0.2d
+; CHECK-NEXT: ret
entry:
%0 = call <2 x i1> @llvm.loop.dependence.war.mask.v2i1(ptr %a, ptr %b, i64 8)
ret <2 x i1> %0
}
define <16 x i1> @whilerw_8(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilerw_8:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
-; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: // kill: def $q0 killed $q0 killed $z0
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilerw_8:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_0
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_1
-; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI4_0]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_2
-; CHECK-NOSVE-NEXT: cneg x9, x9, mi
-; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI4_2]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_3
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI4_1]
-; CHECK-NOSVE-NEXT: ldr q4, [x8, :lo12:.LCPI4_3]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_4
-; CHECK-NOSVE-NEXT: dup v3.2d, x9
-; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI4_4]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_5
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_6
-; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI4_5]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_7
-; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI4_6]
-; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI4_7]
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v3.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v3.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v2.4s
-; CHECK-NOSVE-NEXT: cset w8, eq
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v3.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilerw_8:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.b, x0, x1
+; CHECK-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: // kill: def $q0 killed $q0 killed $z0
+; CHECK-NEXT: ret
entry:
%0 = call <16 x i1> @llvm.loop.dependence.raw.mask.v16i1(ptr %a, ptr %b, i64 1)
ret <16 x i1> %0
}
define <8 x i1> @whilerw_16(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilerw_16:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilerw p0.h, x0, x1
-; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilerw_16:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI5_1
-; CHECK-NOSVE-NEXT: cneg x8, x8, mi
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI5_2
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI5_0]
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_3
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI5_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI5_2]
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI5_3]
-; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #0
-; CHECK-NOSVE-NEXT: cset w8, eq
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilerw_16:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.h, x0, x1
+; CHECK-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: xtn v0.8b, v0.8h
+; CHECK-NEXT: ret
entry:
%0 = call <8 x i1> @llvm.loop.dependence.raw.mask.v8i1(ptr %a, ptr %b, i64 2)
ret <8 x i1> %0
}
define <4 x i1> @whilerw_32(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilerw_32:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilerw p0.s, x0, x1
-; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilerw_32:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI6_0
-; CHECK-NOSVE-NEXT: cneg x9, x9, mi
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI6_0]
-; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI6_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI6_1]
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: cset w8, eq
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
-; CHECK-NOSVE-NEXT: dup v1.4h, w8
-; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilerw_32:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.s, x0, x1
+; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: xtn v0.4h, v0.4s
+; CHECK-NEXT: ret
entry:
%0 = call <4 x i1> @llvm.loop.dependence.raw.mask.v4i1(ptr %a, ptr %b, i64 4)
ret <4 x i1> %0
}
define <2 x i1> @whilerw_64(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilerw_64:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilerw p0.d, x0, x1
-; CHECK-SVE-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: xtn v0.2s, v0.2d
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilerw_64:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI7_0
-; CHECK-NOSVE-NEXT: cneg x9, x9, mi
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI7_0]
-; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
-; CHECK-NOSVE-NEXT: asr x9, x9, #3
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #0
-; CHECK-NOSVE-NEXT: cset w8, eq
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: dup v1.2s, w8
-; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilerw_64:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.d, x0, x1
+; CHECK-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: xtn v0.2s, v0.2d
+; CHECK-NEXT: ret
entry:
%0 = call <2 x i1> @llvm.loop.dependence.raw.mask.v2i1(ptr %a, ptr %b, i64 8)
ret <2 x i1> %0
}
define <32 x i1> @whilewr_8_split(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_8_split:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: add x9, x0, #16
-; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE-NEXT: whilewr p1.b, x9, x1
-; CHECK-SVE-NEXT: adrp x9, .LCPI8_0
-; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI8_0]
-; CHECK-SVE-NEXT: mov z1.b, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
-; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: ext v2.16b, v0.16b, v0.16b, #8
-; CHECK-SVE-NEXT: ext v3.16b, v1.16b, v1.16b, #8
-; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v2.16b
-; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v3.16b
-; CHECK-SVE-NEXT: addv h0, v0.8h
-; CHECK-SVE-NEXT: addv h1, v1.8h
-; CHECK-SVE-NEXT: str h0, [x8]
-; CHECK-SVE-NEXT: str h1, [x8, #2]
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_8_split:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_1
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_0
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: ldr q1, [x11, :lo12:.LCPI8_1]
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_3
-; CHECK-NOSVE-NEXT: ldr q0, [x10, :lo12:.LCPI8_0]
-; CHECK-NOSVE-NEXT: ldr q4, [x11, :lo12:.LCPI8_3]
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_4
-; CHECK-NOSVE-NEXT: sub x10, x9, #16
-; CHECK-NOSVE-NEXT: ldr q5, [x11, :lo12:.LCPI8_4]
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_5
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI8_2
-; CHECK-NOSVE-NEXT: ldr q19, [x11, :lo12:.LCPI8_5]
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_6
-; CHECK-NOSVE-NEXT: dup v2.2d, x10
-; CHECK-NOSVE-NEXT: dup v6.2d, x9
-; CHECK-NOSVE-NEXT: ldr q21, [x11, :lo12:.LCPI8_6]
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI8_7
-; CHECK-NOSVE-NEXT: ldr q3, [x12, :lo12:.LCPI8_2]
-; CHECK-NOSVE-NEXT: ldr q22, [x11, :lo12:.LCPI8_7]
-; CHECK-NOSVE-NEXT: cmp x10, #1
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v16.2d, v2.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v18.2d, v2.2d, v4.2d
-; CHECK-NOSVE-NEXT: cmhi v17.2d, v2.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v20.2d, v2.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v6.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v6.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v6.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v6.2d, v4.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v6.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v23.2d, v6.2d, v19.2d
-; CHECK-NOSVE-NEXT: cmhi v24.2d, v6.2d, v21.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v6.2d, v22.2d
-; CHECK-NOSVE-NEXT: cmhi v19.2d, v2.2d, v19.2d
-; CHECK-NOSVE-NEXT: cmhi v21.2d, v2.2d, v21.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v22.2d
-; CHECK-NOSVE-NEXT: uzp1 v7.4s, v16.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v23.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v5.4s, v18.4s, v17.4s
-; CHECK-NOSVE-NEXT: uzp1 v4.4s, v6.4s, v24.4s
-; CHECK-NOSVE-NEXT: uzp1 v6.4s, v19.4s, v20.4s
-; CHECK-NOSVE-NEXT: cset w10, lt
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v21.4s
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v4.8h, v3.8h
-; CHECK-NOSVE-NEXT: uzp1 v3.8h, v5.8h, v7.8h
-; CHECK-NOSVE-NEXT: uzp1 v2.8h, v2.8h, v6.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: uzp1 v1.16b, v2.16b, v3.16b
-; CHECK-NOSVE-NEXT: dup v2.16b, w9
-; CHECK-NOSVE-NEXT: dup v3.16b, w10
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v2.16b
-; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI8_8]
-; CHECK-NOSVE-NEXT: orr v1.16b, v1.16b, v3.16b
-; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-NOSVE-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v2.16b
-; CHECK-NOSVE-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-NOSVE-NEXT: ext v2.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: ext v3.16b, v1.16b, v1.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v2.16b
-; CHECK-NOSVE-NEXT: zip1 v1.16b, v1.16b, v3.16b
-; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: addv h1, v1.8h
-; CHECK-NOSVE-NEXT: str h0, [x8]
-; CHECK-NOSVE-NEXT: str h1, [x8, #2]
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_8_split:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: add x9, x0, #16
+; CHECK-NEXT: whilewr p0.b, x0, x1
+; CHECK-NEXT: whilewr p1.b, x9, x1
+; CHECK-NEXT: adrp x9, .LCPI8_0
+; CHECK-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: ldr q2, [x9, :lo12:.LCPI8_0]
+; CHECK-NEXT: mov z1.b, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NEXT: shl v1.16b, v1.16b, #7
+; CHECK-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-NEXT: ext v2.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: ext v3.16b, v1.16b, v1.16b, #8
+; CHECK-NEXT: zip1 v0.16b, v0.16b, v2.16b
+; CHECK-NEXT: zip1 v1.16b, v1.16b, v3.16b
+; CHECK-NEXT: addv h0, v0.8h
+; CHECK-NEXT: addv h1, v1.8h
+; CHECK-NEXT: str h0, [x8]
+; CHECK-NEXT: str h1, [x8, #2]
+; CHECK-NEXT: ret
entry:
%0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 1)
ret <32 x i1> %0
}
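; <64 x i1> splits into four whilewr.b calls at 16-byte offsets, each packed
; into a 16-bit mask and stored to consecutive halfwords of the result.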
define <64 x i1> @whilewr_8_split2(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_8_split2:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: add x9, x0, #48
-; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE-NEXT: add x10, x0, #16
-; CHECK-SVE-NEXT: whilewr p1.b, x9, x1
-; CHECK-SVE-NEXT: add x9, x0, #32
-; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p0.b, x9, x1
-; CHECK-SVE-NEXT: adrp x9, .LCPI9_0
-; CHECK-SVE-NEXT: mov z1.b, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: whilewr p1.b, x10, x1
-; CHECK-SVE-NEXT: ldr q4, [x9, :lo12:.LCPI9_0]
-; CHECK-SVE-NEXT: mov z2.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: mov z3.b, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-SVE-NEXT: shl v2.16b, v2.16b, #7
-; CHECK-SVE-NEXT: shl v3.16b, v3.16b, #7
-; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-SVE-NEXT: cmlt v2.16b, v2.16b, #0
-; CHECK-SVE-NEXT: cmlt v3.16b, v3.16b, #0
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v4.16b
-; CHECK-SVE-NEXT: and v1.16b, v1.16b, v4.16b
-; CHECK-SVE-NEXT: and v2.16b, v2.16b, v4.16b
-; CHECK-SVE-NEXT: and v3.16b, v3.16b, v4.16b
-; CHECK-SVE-NEXT: ext v4.16b, v0.16b, v0.16b, #8
-; CHECK-SVE-NEXT: ext v5.16b, v1.16b, v1.16b, #8
-; CHECK-SVE-NEXT: ext v6.16b, v2.16b, v2.16b, #8
-; CHECK-SVE-NEXT: ext v7.16b, v3.16b, v3.16b, #8
-; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v4.16b
-; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v5.16b
-; CHECK-SVE-NEXT: zip1 v2.16b, v2.16b, v6.16b
-; CHECK-SVE-NEXT: zip1 v3.16b, v3.16b, v7.16b
-; CHECK-SVE-NEXT: addv h0, v0.8h
-; CHECK-SVE-NEXT: addv h1, v1.8h
-; CHECK-SVE-NEXT: addv h2, v2.8h
-; CHECK-SVE-NEXT: addv h3, v3.8h
-; CHECK-SVE-NEXT: str h0, [x8]
-; CHECK-SVE-NEXT: str h1, [x8, #6]
-; CHECK-SVE-NEXT: str h2, [x8, #4]
-; CHECK-SVE-NEXT: str h3, [x8, #2]
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_8_split2:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: stp d11, d10, [sp, #-32]! // 16-byte Folded Spill
-; CHECK-NOSVE-NEXT: stp d9, d8, [sp, #16] // 16-byte Folded Spill
-; CHECK-NOSVE-NEXT: .cfi_def_cfa_offset 32
-; CHECK-NOSVE-NEXT: .cfi_offset b8, -8
-; CHECK-NOSVE-NEXT: .cfi_offset b9, -16
-; CHECK-NOSVE-NEXT: .cfi_offset b10, -24
-; CHECK-NOSVE-NEXT: .cfi_offset b11, -32
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI9_0
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI9_1
-; CHECK-NOSVE-NEXT: sub x10, x9, #16
-; CHECK-NOSVE-NEXT: adrp x13, .LCPI9_2
-; CHECK-NOSVE-NEXT: ldr q0, [x11, :lo12:.LCPI9_0]
-; CHECK-NOSVE-NEXT: dup v5.2d, x10
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI9_3
-; CHECK-NOSVE-NEXT: ldr q1, [x12, :lo12:.LCPI9_1]
-; CHECK-NOSVE-NEXT: ldr q2, [x13, :lo12:.LCPI9_2]
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI9_3]
-; CHECK-NOSVE-NEXT: sub x11, x9, #32
-; CHECK-NOSVE-NEXT: dup v4.2d, x11
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI9_4
-; CHECK-NOSVE-NEXT: adrp x13, .LCPI9_5
-; CHECK-NOSVE-NEXT: cmhi v18.2d, v5.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v19.2d, v5.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v20.2d, v5.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v21.2d, v5.2d, v3.2d
-; CHECK-NOSVE-NEXT: ldr q17, [x12, :lo12:.LCPI9_4]
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI9_6
-; CHECK-NOSVE-NEXT: ldr q7, [x12, :lo12:.LCPI9_6]
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI9_7
-; CHECK-NOSVE-NEXT: ldr q6, [x13, :lo12:.LCPI9_5]
-; CHECK-NOSVE-NEXT: uzp1 v18.4s, v19.4s, v18.4s
-; CHECK-NOSVE-NEXT: ldr q16, [x12, :lo12:.LCPI9_7]
-; CHECK-NOSVE-NEXT: sub x12, x9, #48
-; CHECK-NOSVE-NEXT: uzp1 v19.4s, v21.4s, v20.4s
-; CHECK-NOSVE-NEXT: cmhi v20.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v21.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: dup v22.2d, x12
-; CHECK-NOSVE-NEXT: cmhi v23.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v24.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v25.2d, v5.2d, v17.2d
-; CHECK-NOSVE-NEXT: cmhi v26.2d, v5.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v27.2d, v5.2d, v7.2d
-; CHECK-NOSVE-NEXT: uzp1 v20.4s, v21.4s, v20.4s
-; CHECK-NOSVE-NEXT: dup v21.2d, x9
-; CHECK-NOSVE-NEXT: cmhi v28.2d, v5.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v5.8h, v19.8h, v18.8h
-; CHECK-NOSVE-NEXT: cmhi v19.2d, v4.2d, v17.2d
-; CHECK-NOSVE-NEXT: cmhi v29.2d, v22.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v30.2d, v22.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v31.2d, v22.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v8.2d, v22.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v9.2d, v22.2d, v17.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v21.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v21.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v21.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v21.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v17.2d, v21.2d, v17.2d
-; CHECK-NOSVE-NEXT: cmhi v10.2d, v21.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v11.2d, v21.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v21.2d, v21.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v18.4s, v24.4s, v23.4s
-; CHECK-NOSVE-NEXT: cmhi v23.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v24.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v22.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v22.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v22.2d, v22.2d, v16.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: cmp x10, #1
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v10.4s, v17.4s
-; CHECK-NOSVE-NEXT: cset w10, lt
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v21.4s, v11.4s
-; CHECK-NOSVE-NEXT: uzp1 v16.4s, v30.4s, v29.4s
-; CHECK-NOSVE-NEXT: cmp x11, #1
-; CHECK-NOSVE-NEXT: uzp1 v17.4s, v8.4s, v31.4s
-; CHECK-NOSVE-NEXT: uzp1 v6.4s, v6.4s, v9.4s
-; CHECK-NOSVE-NEXT: cset w11, lt
-; CHECK-NOSVE-NEXT: uzp1 v7.4s, v22.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v21.4s, v26.4s, v25.4s
-; CHECK-NOSVE-NEXT: cmp x12, #1
-; CHECK-NOSVE-NEXT: uzp1 v19.4s, v23.4s, v19.4s
-; CHECK-NOSVE-NEXT: uzp1 v4.4s, v4.4s, v24.4s
-; CHECK-NOSVE-NEXT: cset w12, lt
-; CHECK-NOSVE-NEXT: uzp1 v22.4s, v28.4s, v27.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v18.8h, v18.8h, v20.8h
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: uzp1 v2.8h, v17.8h, v16.8h
-; CHECK-NOSVE-NEXT: uzp1 v3.8h, v7.8h, v6.8h
-; CHECK-NOSVE-NEXT: dup v7.16b, w10
-; CHECK-NOSVE-NEXT: uzp1 v4.8h, v4.8h, v19.8h
-; CHECK-NOSVE-NEXT: uzp1 v6.8h, v22.8h, v21.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w9
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI9_8
-; CHECK-NOSVE-NEXT: uzp1 v2.16b, v3.16b, v2.16b
-; CHECK-NOSVE-NEXT: uzp1 v3.16b, v4.16b, v18.16b
-; CHECK-NOSVE-NEXT: dup v4.16b, w12
-; CHECK-NOSVE-NEXT: uzp1 v5.16b, v6.16b, v5.16b
-; CHECK-NOSVE-NEXT: dup v6.16b, w11
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: orr v1.16b, v2.16b, v4.16b
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI9_8]
-; CHECK-NOSVE-NEXT: orr v2.16b, v3.16b, v6.16b
-; CHECK-NOSVE-NEXT: orr v3.16b, v5.16b, v7.16b
-; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-NOSVE-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-NOSVE-NEXT: shl v2.16b, v2.16b, #7
-; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: shl v3.16b, v3.16b, #7
-; CHECK-NOSVE-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-NOSVE-NEXT: cmlt v2.16b, v2.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v4.16b
-; CHECK-NOSVE-NEXT: cmlt v3.16b, v3.16b, #0
-; CHECK-NOSVE-NEXT: and v1.16b, v1.16b, v4.16b
-; CHECK-NOSVE-NEXT: and v2.16b, v2.16b, v4.16b
-; CHECK-NOSVE-NEXT: ext v5.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: and v3.16b, v3.16b, v4.16b
-; CHECK-NOSVE-NEXT: ext v4.16b, v1.16b, v1.16b, #8
-; CHECK-NOSVE-NEXT: ext v6.16b, v2.16b, v2.16b, #8
-; CHECK-NOSVE-NEXT: ext v7.16b, v3.16b, v3.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v5.16b
-; CHECK-NOSVE-NEXT: zip1 v1.16b, v1.16b, v4.16b
-; CHECK-NOSVE-NEXT: zip1 v2.16b, v2.16b, v6.16b
-; CHECK-NOSVE-NEXT: zip1 v3.16b, v3.16b, v7.16b
-; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: addv h1, v1.8h
-; CHECK-NOSVE-NEXT: addv h2, v2.8h
-; CHECK-NOSVE-NEXT: str h0, [x8]
-; CHECK-NOSVE-NEXT: addv h0, v3.8h
-; CHECK-NOSVE-NEXT: str h1, [x8, #6]
-; CHECK-NOSVE-NEXT: str h2, [x8, #4]
-; CHECK-NOSVE-NEXT: str h0, [x8, #2]
-; CHECK-NOSVE-NEXT: ldp d9, d8, [sp, #16] // 16-byte Folded Reload
-; CHECK-NOSVE-NEXT: ldp d11, d10, [sp], #32 // 16-byte Folded Reload
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_8_split2:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: add x9, x0, #48
+; CHECK-NEXT: whilewr p0.b, x0, x1
+; CHECK-NEXT: add x10, x0, #16
+; CHECK-NEXT: whilewr p1.b, x9, x1
+; CHECK-NEXT: add x9, x0, #32
+; CHECK-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p0.b, x9, x1
+; CHECK-NEXT: adrp x9, .LCPI9_0
+; CHECK-NEXT: mov z1.b, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p1.b, x10, x1
+; CHECK-NEXT: ldr q4, [x9, :lo12:.LCPI9_0]
+; CHECK-NEXT: mov z2.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z3.b, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NEXT: shl v1.16b, v1.16b, #7
+; CHECK-NEXT: shl v2.16b, v2.16b, #7
+; CHECK-NEXT: shl v3.16b, v3.16b, #7
+; CHECK-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-NEXT: cmlt v2.16b, v2.16b, #0
+; CHECK-NEXT: cmlt v3.16b, v3.16b, #0
+; CHECK-NEXT: and v0.16b, v0.16b, v4.16b
+; CHECK-NEXT: and v1.16b, v1.16b, v4.16b
+; CHECK-NEXT: and v2.16b, v2.16b, v4.16b
+; CHECK-NEXT: and v3.16b, v3.16b, v4.16b
+; CHECK-NEXT: ext v4.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: ext v5.16b, v1.16b, v1.16b, #8
+; CHECK-NEXT: ext v6.16b, v2.16b, v2.16b, #8
+; CHECK-NEXT: ext v7.16b, v3.16b, v3.16b, #8
+; CHECK-NEXT: zip1 v0.16b, v0.16b, v4.16b
+; CHECK-NEXT: zip1 v1.16b, v1.16b, v5.16b
+; CHECK-NEXT: zip1 v2.16b, v2.16b, v6.16b
+; CHECK-NEXT: zip1 v3.16b, v3.16b, v7.16b
+; CHECK-NEXT: addv h0, v0.8h
+; CHECK-NEXT: addv h1, v1.8h
+; CHECK-NEXT: addv h2, v2.8h
+; CHECK-NEXT: addv h3, v3.8h
+; CHECK-NEXT: str h0, [x8]
+; CHECK-NEXT: str h1, [x8, #6]
+; CHECK-NEXT: str h2, [x8, #4]
+; CHECK-NEXT: str h3, [x8, #2]
+; CHECK-NEXT: ret
entry:
%0 = call <64 x i1> @llvm.loop.dependence.war.mask.v64i1(ptr %a, ptr %b, i64 1)
ret <64 x i1> %0
}
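; 2-byte elements: <16 x i1> needs two whilewr.h calls (x0 and x0+16), which
; uzp1 concatenates into a single byte-per-lane mask.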
define <16 x i1> @whilewr_16_split(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_16_split:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE-NEXT: asr x8, x8, #1
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z6.d, z0.d
-; CHECK-SVE-NEXT: mov z7.d, z0.d
-; CHECK-SVE-NEXT: mov z16.d, z0.d
-; CHECK-SVE-NEXT: dup v3.2d, x8
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
-; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
-; CHECK-SVE-NEXT: add z4.d, z4.d, #8 // =0x8
-; CHECK-SVE-NEXT: add z5.d, z5.d, #6 // =0x6
-; CHECK-SVE-NEXT: add z6.d, z6.d, #4 // =0x4
-; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
-; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
-; CHECK-SVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
-; CHECK-SVE-NEXT: cmhi v16.2d, v3.2d, v16.2d
-; CHECK-SVE-NEXT: cmhi v3.2d, v3.2d, v7.2d
-; CHECK-SVE-NEXT: uzp1 v2.4s, v4.4s, v2.4s
-; CHECK-SVE-NEXT: uzp1 v4.4s, v6.4s, v5.4s
-; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v16.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
-; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v4.8h
-; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: dup v1.16b, w8
-; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_16_split:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_1
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI10_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_2
-; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI10_2]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_4
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI10_1]
-; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_3
-; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI10_4]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_6
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI10_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI10_5
-; CHECK-NOSVE-NEXT: dup v4.2d, x8
-; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI10_6]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI10_7
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI10_5]
-; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI10_7]
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_16_split:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: add x8, x0, #16
+; CHECK-NEXT: whilewr p0.h, x0, x1
+; CHECK-NEXT: whilewr p1.h, x8, x1
+; CHECK-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z1.h, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-NEXT: ret
entry:
%0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 2)
ret <16 x i1> %0
}
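; 2-byte elements, <32 x i1>: four whilewr.h calls at 16-byte offsets, packed
; to bits and stored through x8.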
define <32 x i1> @whilewr_16_split2(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_16_split2:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: sub x9, x1, x0
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x10, x9, #32
-; CHECK-SVE-NEXT: add x9, x9, x9, lsr #63
-; CHECK-SVE-NEXT: add x10, x10, x10, lsr #63
-; CHECK-SVE-NEXT: asr x9, x9, #1
-; CHECK-SVE-NEXT: asr x10, x10, #1
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z6.d, z0.d
-; CHECK-SVE-NEXT: dup v7.2d, x9
-; CHECK-SVE-NEXT: dup v16.2d, x10
-; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
-; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
-; CHECK-SVE-NEXT: cmp x10, #1
-; CHECK-SVE-NEXT: add z3.d, z3.d, #8 // =0x8
-; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
-; CHECK-SVE-NEXT: add z5.d, z5.d, #4 // =0x4
-; CHECK-SVE-NEXT: add z6.d, z6.d, #2 // =0x2
-; CHECK-SVE-NEXT: cmhi v17.2d, v7.2d, v0.2d
-; CHECK-SVE-NEXT: cmhi v18.2d, v16.2d, v0.2d
-; CHECK-SVE-NEXT: add z0.d, z0.d, #14 // =0xe
-; CHECK-SVE-NEXT: cmhi v19.2d, v7.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v20.2d, v7.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v21.2d, v7.2d, v3.2d
-; CHECK-SVE-NEXT: cmhi v22.2d, v7.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v23.2d, v7.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v24.2d, v7.2d, v6.2d
-; CHECK-SVE-NEXT: cmhi v1.2d, v16.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v16.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v3.2d, v16.2d, v3.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v16.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v7.2d, v7.2d, v0.2d
-; CHECK-SVE-NEXT: cmhi v5.2d, v16.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v6.2d, v16.2d, v6.2d
-; CHECK-SVE-NEXT: cset w10, lt
-; CHECK-SVE-NEXT: cmhi v0.2d, v16.2d, v0.2d
-; CHECK-SVE-NEXT: uzp1 v16.4s, v21.4s, v20.4s
-; CHECK-SVE-NEXT: cmp x9, #1
-; CHECK-SVE-NEXT: uzp1 v20.4s, v23.4s, v22.4s
-; CHECK-SVE-NEXT: uzp1 v17.4s, v17.4s, v24.4s
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: uzp1 v2.4s, v3.4s, v2.4s
-; CHECK-SVE-NEXT: uzp1 v3.4s, v19.4s, v7.4s
-; CHECK-SVE-NEXT: uzp1 v4.4s, v5.4s, v4.4s
-; CHECK-SVE-NEXT: uzp1 v5.4s, v18.4s, v6.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-SVE-NEXT: uzp1 v1.8h, v17.8h, v20.8h
-; CHECK-SVE-NEXT: uzp1 v3.8h, v16.8h, v3.8h
-; CHECK-SVE-NEXT: uzp1 v4.8h, v5.8h, v4.8h
-; CHECK-SVE-NEXT: uzp1 v0.8h, v2.8h, v0.8h
-; CHECK-SVE-NEXT: dup v2.16b, w9
-; CHECK-SVE-NEXT: adrp x9, .LCPI11_0
-; CHECK-SVE-NEXT: uzp1 v1.16b, v1.16b, v3.16b
-; CHECK-SVE-NEXT: dup v3.16b, w10
-; CHECK-SVE-NEXT: uzp1 v0.16b, v4.16b, v0.16b
-; CHECK-SVE-NEXT: orr v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI11_0]
-; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v3.16b
-; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
-; CHECK-SVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
-; CHECK-SVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
-; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
-; CHECK-SVE-NEXT: addv h1, v1.8h
-; CHECK-SVE-NEXT: addv h0, v0.8h
-; CHECK-SVE-NEXT: str h1, [x8]
-; CHECK-SVE-NEXT: str h0, [x8, #2]
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_16_split2:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI11_0
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI11_1
-; CHECK-NOSVE-NEXT: sub x11, x9, #32
-; CHECK-NOSVE-NEXT: add x9, x9, x9, lsr #63
-; CHECK-NOSVE-NEXT: ldr q0, [x10, :lo12:.LCPI11_0]
-; CHECK-NOSVE-NEXT: add x10, x11, x11, lsr #63
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_2
-; CHECK-NOSVE-NEXT: ldr q1, [x12, :lo12:.LCPI11_1]
-; CHECK-NOSVE-NEXT: asr x9, x9, #1
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI11_3
-; CHECK-NOSVE-NEXT: ldr q2, [x11, :lo12:.LCPI11_2]
-; CHECK-NOSVE-NEXT: asr x10, x10, #1
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_4
-; CHECK-NOSVE-NEXT: ldr q3, [x12, :lo12:.LCPI11_3]
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI11_5
-; CHECK-NOSVE-NEXT: dup v4.2d, x9
-; CHECK-NOSVE-NEXT: ldr q5, [x11, :lo12:.LCPI11_4]
-; CHECK-NOSVE-NEXT: dup v6.2d, x10
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI11_6
-; CHECK-NOSVE-NEXT: ldr q7, [x12, :lo12:.LCPI11_5]
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI11_7
-; CHECK-NOSVE-NEXT: ldr q16, [x11, :lo12:.LCPI11_6]
-; CHECK-NOSVE-NEXT: cmp x10, #1
-; CHECK-NOSVE-NEXT: ldr q17, [x12, :lo12:.LCPI11_7]
-; CHECK-NOSVE-NEXT: cmhi v18.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v23.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v6.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v19.2d, v6.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v20.2d, v6.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v21.2d, v6.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v22.2d, v6.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v24.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v17.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v6.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v16.2d, v6.2d, v16.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v6.2d, v17.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v19.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v1.4s, v18.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v23.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v4.4s, v4.4s, v24.4s
-; CHECK-NOSVE-NEXT: uzp1 v5.4s, v21.4s, v20.4s
-; CHECK-NOSVE-NEXT: uzp1 v7.4s, v7.4s, v22.4s
-; CHECK-NOSVE-NEXT: uzp1 v6.4s, v6.4s, v16.4s
-; CHECK-NOSVE-NEXT: cset w10, lt
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
-; CHECK-NOSVE-NEXT: uzp1 v2.8h, v4.8h, v3.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v5.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v3.8h, v6.8h, v7.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.16b, v2.16b, v1.16b
-; CHECK-NOSVE-NEXT: dup v2.16b, w9
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI11_8
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v3.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v3.16b, w10
-; CHECK-NOSVE-NEXT: orr v1.16b, v1.16b, v2.16b
-; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI11_8]
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v3.16b
-; CHECK-NOSVE-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-NOSVE-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v2.16b
-; CHECK-NOSVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
-; CHECK-NOSVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
-; CHECK-NOSVE-NEXT: addv h1, v1.8h
-; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: str h1, [x8]
-; CHECK-NOSVE-NEXT: str h0, [x8, #2]
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_16_split2:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: add x9, x0, #48
+; CHECK-NEXT: add x10, x0, #16
+; CHECK-NEXT: whilewr p0.h, x0, x1
+; CHECK-NEXT: whilewr p1.h, x9, x1
+; CHECK-NEXT: add x9, x0, #32
+; CHECK-NEXT: whilewr p2.h, x9, x1
+; CHECK-NEXT: mov z2.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: adrp x9, .LCPI11_0
+; CHECK-NEXT: whilewr p3.h, x10, x1
+; CHECK-NEXT: mov z0.h, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z1.h, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z3.h, p3/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NEXT: uzp1 v1.16b, v2.16b, v3.16b
+; CHECK-NEXT: ldr q2, [x9, :lo12:.LCPI11_0]
+; CHECK-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NEXT: shl v1.16b, v1.16b, #7
+; CHECK-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-NEXT: ext v2.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: ext v3.16b, v1.16b, v1.16b, #8
+; CHECK-NEXT: zip1 v0.16b, v0.16b, v2.16b
+; CHECK-NEXT: zip1 v1.16b, v1.16b, v3.16b
+; CHECK-NEXT: addv h0, v0.8h
+; CHECK-NEXT: addv h1, v1.8h
+; CHECK-NEXT: str h0, [x8, #2]
+; CHECK-NEXT: str h1, [x8]
+; CHECK-NEXT: ret
entry:
%0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 2)
ret <32 x i1> %0
}
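; 4-byte elements: two whilewr.s calls, with the .s lanes inserted one by one
; into a .h vector before narrowing to <8 x i1>.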
define <8 x i1> @whilewr_32_split(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_32_split:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #2
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: dup v1.2d, x8
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
-; CHECK-SVE-NEXT: add z2.d, z2.d, #4 // =0x4
-; CHECK-SVE-NEXT: add z3.d, z3.d, #2 // =0x2
-; CHECK-SVE-NEXT: cmhi v0.2d, v1.2d, v0.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v1.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v1.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v1.2d, v1.2d, v3.2d
-; CHECK-SVE-NEXT: uzp1 v2.4s, v2.4s, v4.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
-; CHECK-SVE-NEXT: dup v1.8b, w8
-; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v2.8h
-; CHECK-SVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-SVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_32_split:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI12_1
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI12_2
-; CHECK-NOSVE-NEXT: add x9, x8, #3
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI12_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI12_2]
-; CHECK-NOSVE-NEXT: csel x8, x9, x8, mi
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI12_0
-; CHECK-NOSVE-NEXT: asr x8, x8, #2
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI12_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI12_3
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI12_3]
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_32_split:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.s, x0, x1
+; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov w8, v0.s[1]
+; CHECK-NEXT: mov v1.16b, v0.16b
+; CHECK-NEXT: mov w9, v0.s[2]
+; CHECK-NEXT: mov v1.h[1], w8
+; CHECK-NEXT: mov w8, v0.s[3]
+; CHECK-NEXT: mov v1.h[2], w9
+; CHECK-NEXT: add x9, x0, #16
+; CHECK-NEXT: whilewr p0.s, x9, x1
+; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov v1.h[3], w8
+; CHECK-NEXT: fmov w8, s0
+; CHECK-NEXT: mov w9, v0.s[1]
+; CHECK-NEXT: mov v1.h[4], w8
+; CHECK-NEXT: mov w8, v0.s[2]
+; CHECK-NEXT: mov v1.h[5], w9
+; CHECK-NEXT: mov w9, v0.s[3]
+; CHECK-NEXT: mov v1.h[6], w8
+; CHECK-NEXT: mov v1.h[7], w9
+; CHECK-NEXT: xtn v0.8b, v1.8h
+; CHECK-NEXT: ret
entry:
%0 = call <8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 4)
ret <8 x i1> %0
}
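; 4-byte elements, <16 x i1>: four whilewr.s calls at 16-byte offsets,
; combined with uzp1 into one byte-per-lane mask.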
define <16 x i1> @whilewr_32_split2(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_32_split2:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #2
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z6.d, z0.d
-; CHECK-SVE-NEXT: mov z7.d, z0.d
-; CHECK-SVE-NEXT: mov z16.d, z0.d
-; CHECK-SVE-NEXT: dup v3.2d, x8
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
-; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
-; CHECK-SVE-NEXT: add z4.d, z4.d, #8 // =0x8
-; CHECK-SVE-NEXT: add z5.d, z5.d, #6 // =0x6
-; CHECK-SVE-NEXT: add z6.d, z6.d, #4 // =0x4
-; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
-; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
-; CHECK-SVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
-; CHECK-SVE-NEXT: cmhi v16.2d, v3.2d, v16.2d
-; CHECK-SVE-NEXT: cmhi v3.2d, v3.2d, v7.2d
-; CHECK-SVE-NEXT: uzp1 v2.4s, v4.4s, v2.4s
-; CHECK-SVE-NEXT: uzp1 v4.4s, v6.4s, v5.4s
-; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v16.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
-; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v4.8h
-; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: dup v1.16b, w8
-; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_32_split2:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_0
-; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI13_0]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_2
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_1
-; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI13_2]
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_4
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI13_1]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_3
-; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI13_4]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_6
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI13_3]
-; CHECK-NOSVE-NEXT: dup v4.2d, x9
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI13_5
-; CHECK-NOSVE-NEXT: ldr q7, [x8, :lo12:.LCPI13_6]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI13_7
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI13_5]
-; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI13_7]
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_32_split2:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.s, x0, x1
+; CHECK-NEXT: add x8, x0, #32
+; CHECK-NEXT: whilewr p1.s, x8, x1
+; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z1.s, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov w8, v0.s[1]
+; CHECK-NEXT: mov v2.16b, v0.16b
+; CHECK-NEXT: mov w10, v0.s[2]
+; CHECK-NEXT: mov w9, v1.s[1]
+; CHECK-NEXT: mov v3.16b, v1.16b
+; CHECK-NEXT: mov w11, v1.s[2]
+; CHECK-NEXT: mov v2.h[1], w8
+; CHECK-NEXT: mov w8, v0.s[3]
+; CHECK-NEXT: mov v3.h[1], w9
+; CHECK-NEXT: mov w9, v1.s[3]
+; CHECK-NEXT: mov v2.h[2], w10
+; CHECK-NEXT: add x10, x0, #16
+; CHECK-NEXT: whilewr p0.s, x10, x1
+; CHECK-NEXT: add x10, x0, #48
+; CHECK-NEXT: mov v3.h[2], w11
+; CHECK-NEXT: whilewr p1.s, x10, x1
+; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z1.s, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov v2.h[3], w8
+; CHECK-NEXT: mov v3.h[3], w9
+; CHECK-NEXT: fmov w9, s0
+; CHECK-NEXT: mov w8, v0.s[1]
+; CHECK-NEXT: fmov w10, s1
+; CHECK-NEXT: mov w11, v1.s[1]
+; CHECK-NEXT: mov v2.h[4], w9
+; CHECK-NEXT: mov w9, v0.s[2]
+; CHECK-NEXT: mov v3.h[4], w10
+; CHECK-NEXT: mov w10, v1.s[2]
+; CHECK-NEXT: mov v2.h[5], w8
+; CHECK-NEXT: mov w8, v0.s[3]
+; CHECK-NEXT: mov v3.h[5], w11
+; CHECK-NEXT: mov w11, v1.s[3]
+; CHECK-NEXT: mov v2.h[6], w9
+; CHECK-NEXT: mov v3.h[6], w10
+; CHECK-NEXT: mov v2.h[7], w8
+; CHECK-NEXT: mov v3.h[7], w11
+; CHECK-NEXT: uzp1 v0.16b, v2.16b, v3.16b
+; CHECK-NEXT: ret
entry:
%0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 4)
ret <16 x i1> %0
}
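; 4-byte elements, <32 x i1>: eight whilewr.s calls covering offsets #0
; through #112, bit-packed and stored through x8.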
define <32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_32_split3:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: subs x9, x1, x0
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: add x10, x9, #3
-; CHECK-SVE-NEXT: sub x11, x9, #61
-; CHECK-SVE-NEXT: csel x10, x10, x9, mi
-; CHECK-SVE-NEXT: subs x9, x9, #64
-; CHECK-SVE-NEXT: csel x9, x11, x9, mi
-; CHECK-SVE-NEXT: asr x10, x10, #2
-; CHECK-SVE-NEXT: asr x9, x9, #2
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z6.d, z0.d
-; CHECK-SVE-NEXT: dup v7.2d, x10
-; CHECK-SVE-NEXT: dup v16.2d, x9
-; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
-; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
-; CHECK-SVE-NEXT: cmp x9, #1
-; CHECK-SVE-NEXT: add z3.d, z3.d, #8 // =0x8
-; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
-; CHECK-SVE-NEXT: add z5.d, z5.d, #4 // =0x4
-; CHECK-SVE-NEXT: add z6.d, z6.d, #2 // =0x2
-; CHECK-SVE-NEXT: cmhi v17.2d, v7.2d, v0.2d
-; CHECK-SVE-NEXT: cmhi v18.2d, v16.2d, v0.2d
-; CHECK-SVE-NEXT: add z0.d, z0.d, #14 // =0xe
-; CHECK-SVE-NEXT: cmhi v19.2d, v7.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v20.2d, v7.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v21.2d, v7.2d, v3.2d
-; CHECK-SVE-NEXT: cmhi v22.2d, v7.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v23.2d, v7.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v24.2d, v7.2d, v6.2d
-; CHECK-SVE-NEXT: cmhi v1.2d, v16.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v16.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v3.2d, v16.2d, v3.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v16.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v7.2d, v7.2d, v0.2d
-; CHECK-SVE-NEXT: cmhi v5.2d, v16.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v6.2d, v16.2d, v6.2d
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: cmhi v0.2d, v16.2d, v0.2d
-; CHECK-SVE-NEXT: uzp1 v16.4s, v21.4s, v20.4s
-; CHECK-SVE-NEXT: cmp x10, #1
-; CHECK-SVE-NEXT: uzp1 v20.4s, v23.4s, v22.4s
-; CHECK-SVE-NEXT: uzp1 v17.4s, v17.4s, v24.4s
-; CHECK-SVE-NEXT: cset w10, lt
-; CHECK-SVE-NEXT: uzp1 v2.4s, v3.4s, v2.4s
-; CHECK-SVE-NEXT: uzp1 v3.4s, v19.4s, v7.4s
-; CHECK-SVE-NEXT: uzp1 v4.4s, v5.4s, v4.4s
-; CHECK-SVE-NEXT: uzp1 v5.4s, v18.4s, v6.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-SVE-NEXT: uzp1 v1.8h, v17.8h, v20.8h
-; CHECK-SVE-NEXT: uzp1 v3.8h, v16.8h, v3.8h
-; CHECK-SVE-NEXT: uzp1 v4.8h, v5.8h, v4.8h
-; CHECK-SVE-NEXT: uzp1 v0.8h, v2.8h, v0.8h
-; CHECK-SVE-NEXT: dup v2.16b, w10
-; CHECK-SVE-NEXT: uzp1 v1.16b, v1.16b, v3.16b
-; CHECK-SVE-NEXT: dup v3.16b, w9
-; CHECK-SVE-NEXT: adrp x9, .LCPI14_0
-; CHECK-SVE-NEXT: uzp1 v0.16b, v4.16b, v0.16b
-; CHECK-SVE-NEXT: orr v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI14_0]
-; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v3.16b
-; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
-; CHECK-SVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
-; CHECK-SVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
-; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
-; CHECK-SVE-NEXT: addv h1, v1.8h
-; CHECK-SVE-NEXT: addv h0, v0.8h
-; CHECK-SVE-NEXT: str h1, [x8]
-; CHECK-SVE-NEXT: str h0, [x8, #2]
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_32_split3:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_0
-; CHECK-NOSVE-NEXT: add x11, x9, #3
-; CHECK-NOSVE-NEXT: sub x12, x9, #61
-; CHECK-NOSVE-NEXT: ldr q0, [x10, :lo12:.LCPI14_0]
-; CHECK-NOSVE-NEXT: csel x11, x11, x9, mi
-; CHECK-NOSVE-NEXT: subs x9, x9, #64
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_1
-; CHECK-NOSVE-NEXT: csel x9, x12, x9, mi
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI14_1]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI14_3
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI14_2
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI14_3]
-; CHECK-NOSVE-NEXT: asr x10, x11, #2
-; CHECK-NOSVE-NEXT: ldr q2, [x12, :lo12:.LCPI14_2]
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI14_4
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_5
-; CHECK-NOSVE-NEXT: dup v4.2d, x9
-; CHECK-NOSVE-NEXT: ldr q5, [x12, :lo12:.LCPI14_4]
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI14_6
-; CHECK-NOSVE-NEXT: ldr q6, [x11, :lo12:.LCPI14_5]
-; CHECK-NOSVE-NEXT: dup v16.2d, x10
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI14_7
-; CHECK-NOSVE-NEXT: ldr q7, [x12, :lo12:.LCPI14_6]
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: ldr q22, [x11, :lo12:.LCPI14_7]
-; CHECK-NOSVE-NEXT: cmhi v17.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v18.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v19.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v20.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v21.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v16.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v16.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v16.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v16.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v16.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v23.2d, v16.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v24.2d, v16.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v16.2d, v16.2d, v22.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v22.2d
-; CHECK-NOSVE-NEXT: uzp1 v17.4s, v18.4s, v17.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v23.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v16.4s, v24.4s
-; CHECK-NOSVE-NEXT: uzp1 v5.4s, v20.4s, v19.4s
-; CHECK-NOSVE-NEXT: uzp1 v6.4s, v6.4s, v21.4s
-; CHECK-NOSVE-NEXT: uzp1 v4.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: cmp x10, #1
-; CHECK-NOSVE-NEXT: cset w10, lt
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v2.8h, v5.8h, v17.8h
-; CHECK-NOSVE-NEXT: uzp1 v3.8h, v4.8h, v6.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: uzp1 v1.16b, v3.16b, v2.16b
-; CHECK-NOSVE-NEXT: dup v2.16b, w10
-; CHECK-NOSVE-NEXT: dup v3.16b, w9
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI14_8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v2.16b
-; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI14_8]
-; CHECK-NOSVE-NEXT: orr v1.16b, v1.16b, v3.16b
-; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-NOSVE-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v2.16b
-; CHECK-NOSVE-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-NOSVE-NEXT: ext v2.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: ext v3.16b, v1.16b, v1.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v2.16b
-; CHECK-NOSVE-NEXT: zip1 v1.16b, v1.16b, v3.16b
-; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: addv h1, v1.8h
-; CHECK-NOSVE-NEXT: str h0, [x8]
-; CHECK-NOSVE-NEXT: str h1, [x8, #2]
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_32_split3:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: add x9, x0, #96
+; CHECK-NEXT: whilewr p0.s, x0, x1
+; CHECK-NEXT: add x10, x0, #64
+; CHECK-NEXT: whilewr p1.s, x9, x1
+; CHECK-NEXT: add x9, x0, #32
+; CHECK-NEXT: whilewr p3.s, x9, x1
+; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p2.s, x10, x1
+; CHECK-NEXT: mov z4.s, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z6.s, p3/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z5.s, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov w11, v0.s[1]
+; CHECK-NEXT: mov w12, v0.s[2]
+; CHECK-NEXT: mov w9, v4.s[1]
+; CHECK-NEXT: mov v1.16b, v4.16b
+; CHECK-NEXT: mov w14, v0.s[3]
+; CHECK-NEXT: mov w13, v6.s[1]
+; CHECK-NEXT: mov v2.16b, v6.16b
+; CHECK-NEXT: // kill: def $q0 killed $q0 killed $z0
+; CHECK-NEXT: mov w15, v4.s[2]
+; CHECK-NEXT: mov w10, v5.s[1]
+; CHECK-NEXT: mov v3.16b, v5.16b
+; CHECK-NEXT: mov w16, v5.s[2]
+; CHECK-NEXT: mov v0.h[1], w11
+; CHECK-NEXT: mov w11, v4.s[3]
+; CHECK-NEXT: mov w17, v5.s[3]
+; CHECK-NEXT: mov v1.h[1], w9
+; CHECK-NEXT: mov w9, v6.s[2]
+; CHECK-NEXT: mov v2.h[1], w13
+; CHECK-NEXT: add x13, x0, #16
+; CHECK-NEXT: mov v3.h[1], w10
+; CHECK-NEXT: whilewr p0.s, x13, x1
+; CHECK-NEXT: add x13, x0, #112
+; CHECK-NEXT: mov v0.h[2], w12
+; CHECK-NEXT: add x12, x0, #48
+; CHECK-NEXT: mov w10, v6.s[3]
+; CHECK-NEXT: whilewr p1.s, x13, x1
+; CHECK-NEXT: mov v1.h[2], w15
+; CHECK-NEXT: mov z4.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov v2.h[2], w9
+; CHECK-NEXT: add x9, x0, #80
+; CHECK-NEXT: whilewr p0.s, x12, x1
+; CHECK-NEXT: mov v3.h[2], w16
+; CHECK-NEXT: whilewr p2.s, x9, x1
+; CHECK-NEXT: mov z5.s, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z7.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov v0.h[3], w14
+; CHECK-NEXT: mov w9, v4.s[1]
+; CHECK-NEXT: mov z6.s, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov v1.h[3], w11
+; CHECK-NEXT: mov v2.h[3], w10
+; CHECK-NEXT: fmov w10, s4
+; CHECK-NEXT: fmov w12, s5
+; CHECK-NEXT: mov v3.h[3], w17
+; CHECK-NEXT: fmov w15, s7
+; CHECK-NEXT: mov w11, v5.s[1]
+; CHECK-NEXT: fmov w14, s6
+; CHECK-NEXT: mov w13, v6.s[1]
+; CHECK-NEXT: mov v1.h[4], w12
+; CHECK-NEXT: mov w12, v7.s[1]
+; CHECK-NEXT: mov v0.h[4], w10
+; CHECK-NEXT: mov v2.h[4], w15
+; CHECK-NEXT: mov w10, v4.s[2]
+; CHECK-NEXT: mov w15, v6.s[2]
+; CHECK-NEXT: mov v3.h[4], w14
+; CHECK-NEXT: mov w14, v5.s[2]
+; CHECK-NEXT: mov v1.h[5], w11
+; CHECK-NEXT: mov w11, v7.s[2]
+; CHECK-NEXT: mov v0.h[5], w9
+; CHECK-NEXT: mov v2.h[5], w12
+; CHECK-NEXT: mov w9, v4.s[3]
+; CHECK-NEXT: mov w12, v5.s[3]
+; CHECK-NEXT: mov v3.h[5], w13
+; CHECK-NEXT: mov w13, v6.s[3]
+; CHECK-NEXT: mov v1.h[6], w14
+; CHECK-NEXT: mov w14, v7.s[3]
+; CHECK-NEXT: mov v0.h[6], w10
+; CHECK-NEXT: mov v2.h[6], w11
+; CHECK-NEXT: mov v3.h[6], w15
+; CHECK-NEXT: mov v1.h[7], w12
+; CHECK-NEXT: mov v0.h[7], w9
+; CHECK-NEXT: adrp x9, .LCPI14_0
+; CHECK-NEXT: mov v2.h[7], w14
+; CHECK-NEXT: mov v3.h[7], w13
+; CHECK-NEXT: uzp1 v0.16b, v0.16b, v2.16b
+; CHECK-NEXT: ldr q2, [x9, :lo12:.LCPI14_0]
+; CHECK-NEXT: uzp1 v1.16b, v3.16b, v1.16b
+; CHECK-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NEXT: shl v1.16b, v1.16b, #7
+; CHECK-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-NEXT: ext v3.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: ext v2.16b, v1.16b, v1.16b, #8
+; CHECK-NEXT: zip1 v0.16b, v0.16b, v3.16b
+; CHECK-NEXT: zip1 v1.16b, v1.16b, v2.16b
+; CHECK-NEXT: addv h0, v0.8h
+; CHECK-NEXT: addv h1, v1.8h
+; CHECK-NEXT: str h0, [x8]
+; CHECK-NEXT: str h1, [x8, #2]
+; CHECK-NEXT: ret
entry:
%0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 4)
ret <32 x i1> %0
}
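; 8-byte elements: two whilewr.d calls, with the even .s lanes of each result
; gathered by lane moves and narrowed with xtn to <4 x i1>.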
define <4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_64_split:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: dup v2.2d, x8
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: add z1.d, z1.d, #2 // =0x2
-; CHECK-SVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
-; CHECK-SVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
-; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
-; CHECK-SVE-NEXT: dup v1.4h, w8
-; CHECK-SVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-SVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_64_split:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI15_0
-; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI15_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI15_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #3
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI15_1]
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
-; CHECK-NOSVE-NEXT: dup v1.4h, w8
-; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_64_split:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.d, x0, x1
+; CHECK-NEXT: add x8, x0, #16
+; CHECK-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p0.d, x8, x1
+; CHECK-NEXT: mov z1.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov v0.s[1], v0.s[2]
+; CHECK-NEXT: mov v0.s[2], v1.s[0]
+; CHECK-NEXT: mov v0.s[3], v1.s[2]
+; CHECK-NEXT: xtn v0.4h, v0.4s
+; CHECK-NEXT: ret
entry:
%0 = call <4 x i1> @llvm.loop.dependence.war.mask.v4i1(ptr %a, ptr %b, i64 8)
ret <4 x i1> %0
}
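; 8-byte elements, <8 x i1>: four whilewr.d calls at 16-byte offsets, packed
; via lane moves, uzp1 and xtn.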
define <8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_64_split2:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: dup v1.2d, x8
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
-; CHECK-SVE-NEXT: add z2.d, z2.d, #4 // =0x4
-; CHECK-SVE-NEXT: add z3.d, z3.d, #2 // =0x2
-; CHECK-SVE-NEXT: cmhi v0.2d, v1.2d, v0.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v1.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v1.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v1.2d, v1.2d, v3.2d
-; CHECK-SVE-NEXT: uzp1 v2.4s, v2.4s, v4.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
-; CHECK-SVE-NEXT: dup v1.8b, w8
-; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v2.8h
-; CHECK-SVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-SVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_64_split2:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI16_1
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI16_2
-; CHECK-NOSVE-NEXT: add x9, x8, #7
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI16_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI16_2]
-; CHECK-NOSVE-NEXT: csel x8, x9, x8, mi
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_0
-; CHECK-NOSVE-NEXT: asr x8, x8, #3
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI16_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI16_3
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI16_3]
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_64_split2:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: add x8, x0, #32
+; CHECK-NEXT: whilewr p0.d, x0, x1
+; CHECK-NEXT: whilewr p1.d, x8, x1
+; CHECK-NEXT: add x8, x0, #16
+; CHECK-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p0.d, x8, x1
+; CHECK-NEXT: add x8, x0, #48
+; CHECK-NEXT: mov z1.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p1.d, x8, x1
+; CHECK-NEXT: mov z2.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov v0.s[1], v0.s[2]
+; CHECK-NEXT: mov z3.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov v1.s[1], v1.s[2]
+; CHECK-NEXT: mov v0.s[2], v2.s[0]
+; CHECK-NEXT: mov v1.s[2], v3.s[0]
+; CHECK-NEXT: mov v0.s[3], v2.s[2]
+; CHECK-NEXT: mov v1.s[3], v3.s[2]
+; CHECK-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NEXT: xtn v0.8b, v0.8h
+; CHECK-NEXT: ret
entry:
%0 = call <8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 8)
ret <8 x i1> %0
}
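; 8-byte elements, <16 x i1>: eight whilewr.d calls at offsets up to #112,
; combined into one byte-per-lane mask.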
define <16 x i1> @whilewr_64_split3(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_64_split3:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z6.d, z0.d
-; CHECK-SVE-NEXT: mov z7.d, z0.d
-; CHECK-SVE-NEXT: mov z16.d, z0.d
-; CHECK-SVE-NEXT: dup v3.2d, x8
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
-; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
-; CHECK-SVE-NEXT: add z4.d, z4.d, #8 // =0x8
-; CHECK-SVE-NEXT: add z5.d, z5.d, #6 // =0x6
-; CHECK-SVE-NEXT: add z6.d, z6.d, #4 // =0x4
-; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
-; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
-; CHECK-SVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
-; CHECK-SVE-NEXT: cmhi v16.2d, v3.2d, v16.2d
-; CHECK-SVE-NEXT: cmhi v3.2d, v3.2d, v7.2d
-; CHECK-SVE-NEXT: uzp1 v2.4s, v4.4s, v2.4s
-; CHECK-SVE-NEXT: uzp1 v4.4s, v6.4s, v5.4s
-; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v16.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
-; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v4.8h
-; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: dup v1.16b, w8
-; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_64_split3:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_0
-; CHECK-NOSVE-NEXT: add x10, x9, #7
-; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI17_0]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_2
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_1
-; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI17_2]
-; CHECK-NOSVE-NEXT: asr x9, x9, #3
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_4
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI17_1]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_3
-; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI17_4]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_6
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI17_3]
-; CHECK-NOSVE-NEXT: dup v4.2d, x9
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI17_5
-; CHECK-NOSVE-NEXT: ldr q7, [x8, :lo12:.LCPI17_6]
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI17_7
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI17_5]
-; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI17_7]
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_64_split3:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: add x8, x0, #96
+; CHECK-NEXT: whilewr p0.d, x0, x1
+; CHECK-NEXT: add x9, x0, #48
+; CHECK-NEXT: whilewr p1.d, x8, x1
+; CHECK-NEXT: add x8, x0, #64
+; CHECK-NEXT: whilewr p2.d, x8, x1
+; CHECK-NEXT: add x8, x0, #32
+; CHECK-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p0.d, x8, x1
+; CHECK-NEXT: add x8, x0, #112
+; CHECK-NEXT: mov z1.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p1.d, x8, x1
+; CHECK-NEXT: add x8, x0, #80
+; CHECK-NEXT: mov z2.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z3.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p2.d, x8, x1
+; CHECK-NEXT: add x8, x0, #16
+; CHECK-NEXT: whilewr p0.d, x8, x1
+; CHECK-NEXT: mov z4.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov v1.s[1], v1.s[2]
+; CHECK-NEXT: whilewr p1.d, x9, x1
+; CHECK-NEXT: mov v0.s[1], v0.s[2]
+; CHECK-NEXT: mov v2.s[1], v2.s[2]
+; CHECK-NEXT: mov v3.s[1], v3.s[2]
+; CHECK-NEXT: mov z5.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z6.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z7.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov v1.s[2], v4.s[0]
+; CHECK-NEXT: mov v0.s[2], v6.s[0]
+; CHECK-NEXT: mov v2.s[2], v5.s[0]
+; CHECK-NEXT: mov v3.s[2], v7.s[0]
+; CHECK-NEXT: mov v1.s[3], v4.s[2]
+; CHECK-NEXT: mov v2.s[3], v5.s[2]
+; CHECK-NEXT: mov v0.s[3], v6.s[2]
+; CHECK-NEXT: mov v3.s[3], v7.s[2]
+; CHECK-NEXT: uzp1 v1.8h, v2.8h, v1.8h
+; CHECK-NEXT: uzp1 v0.8h, v0.8h, v3.8h
+; CHECK-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-NEXT: ret
entry:
%0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 8)
ret <16 x i1> %0
}
define <32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_64_split4:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: subs x9, x1, x0
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: add x10, x9, #7
-; CHECK-SVE-NEXT: sub x11, x9, #121
-; CHECK-SVE-NEXT: csel x10, x10, x9, mi
-; CHECK-SVE-NEXT: subs x9, x9, #128
-; CHECK-SVE-NEXT: csel x9, x11, x9, mi
-; CHECK-SVE-NEXT: asr x10, x10, #3
-; CHECK-SVE-NEXT: asr x9, x9, #3
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z6.d, z0.d
-; CHECK-SVE-NEXT: dup v7.2d, x10
-; CHECK-SVE-NEXT: dup v16.2d, x9
-; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
-; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
-; CHECK-SVE-NEXT: cmp x9, #1
-; CHECK-SVE-NEXT: add z3.d, z3.d, #8 // =0x8
-; CHECK-SVE-NEXT: add z4.d, z4.d, #6 // =0x6
-; CHECK-SVE-NEXT: add z5.d, z5.d, #4 // =0x4
-; CHECK-SVE-NEXT: add z6.d, z6.d, #2 // =0x2
-; CHECK-SVE-NEXT: cmhi v17.2d, v7.2d, v0.2d
-; CHECK-SVE-NEXT: cmhi v18.2d, v16.2d, v0.2d
-; CHECK-SVE-NEXT: add z0.d, z0.d, #14 // =0xe
-; CHECK-SVE-NEXT: cmhi v19.2d, v7.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v20.2d, v7.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v21.2d, v7.2d, v3.2d
-; CHECK-SVE-NEXT: cmhi v22.2d, v7.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v23.2d, v7.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v24.2d, v7.2d, v6.2d
-; CHECK-SVE-NEXT: cmhi v1.2d, v16.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v16.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v3.2d, v16.2d, v3.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v16.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v7.2d, v7.2d, v0.2d
-; CHECK-SVE-NEXT: cmhi v5.2d, v16.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v6.2d, v16.2d, v6.2d
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: cmhi v0.2d, v16.2d, v0.2d
-; CHECK-SVE-NEXT: uzp1 v16.4s, v21.4s, v20.4s
-; CHECK-SVE-NEXT: cmp x10, #1
-; CHECK-SVE-NEXT: uzp1 v20.4s, v23.4s, v22.4s
-; CHECK-SVE-NEXT: uzp1 v17.4s, v17.4s, v24.4s
-; CHECK-SVE-NEXT: cset w10, lt
-; CHECK-SVE-NEXT: uzp1 v2.4s, v3.4s, v2.4s
-; CHECK-SVE-NEXT: uzp1 v3.4s, v19.4s, v7.4s
-; CHECK-SVE-NEXT: uzp1 v4.4s, v5.4s, v4.4s
-; CHECK-SVE-NEXT: uzp1 v5.4s, v18.4s, v6.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-SVE-NEXT: uzp1 v1.8h, v17.8h, v20.8h
-; CHECK-SVE-NEXT: uzp1 v3.8h, v16.8h, v3.8h
-; CHECK-SVE-NEXT: uzp1 v4.8h, v5.8h, v4.8h
-; CHECK-SVE-NEXT: uzp1 v0.8h, v2.8h, v0.8h
-; CHECK-SVE-NEXT: dup v2.16b, w10
-; CHECK-SVE-NEXT: uzp1 v1.16b, v1.16b, v3.16b
-; CHECK-SVE-NEXT: dup v3.16b, w9
-; CHECK-SVE-NEXT: adrp x9, .LCPI18_0
-; CHECK-SVE-NEXT: uzp1 v0.16b, v4.16b, v0.16b
-; CHECK-SVE-NEXT: orr v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: ldr q2, [x9, :lo12:.LCPI18_0]
-; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v3.16b
-; CHECK-SVE-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-SVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-SVE-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-SVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-SVE-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: and v0.16b, v0.16b, v2.16b
-; CHECK-SVE-NEXT: ext v2.16b, v1.16b, v1.16b, #8
-; CHECK-SVE-NEXT: ext v3.16b, v0.16b, v0.16b, #8
-; CHECK-SVE-NEXT: zip1 v1.16b, v1.16b, v2.16b
-; CHECK-SVE-NEXT: zip1 v0.16b, v0.16b, v3.16b
-; CHECK-SVE-NEXT: addv h1, v1.8h
-; CHECK-SVE-NEXT: addv h0, v0.8h
-; CHECK-SVE-NEXT: str h1, [x8]
-; CHECK-SVE-NEXT: str h0, [x8, #2]
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_64_split4:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_0
-; CHECK-NOSVE-NEXT: add x11, x9, #7
-; CHECK-NOSVE-NEXT: sub x12, x9, #121
-; CHECK-NOSVE-NEXT: ldr q0, [x10, :lo12:.LCPI18_0]
-; CHECK-NOSVE-NEXT: csel x11, x11, x9, mi
-; CHECK-NOSVE-NEXT: subs x9, x9, #128
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_1
-; CHECK-NOSVE-NEXT: csel x9, x12, x9, mi
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI18_1]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI18_3
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI18_2
-; CHECK-NOSVE-NEXT: asr x9, x9, #3
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI18_3]
-; CHECK-NOSVE-NEXT: asr x10, x11, #3
-; CHECK-NOSVE-NEXT: ldr q2, [x12, :lo12:.LCPI18_2]
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI18_4
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_5
-; CHECK-NOSVE-NEXT: dup v4.2d, x9
-; CHECK-NOSVE-NEXT: ldr q5, [x12, :lo12:.LCPI18_4]
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI18_6
-; CHECK-NOSVE-NEXT: ldr q6, [x11, :lo12:.LCPI18_5]
-; CHECK-NOSVE-NEXT: dup v16.2d, x10
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI18_7
-; CHECK-NOSVE-NEXT: ldr q7, [x12, :lo12:.LCPI18_6]
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: ldr q22, [x11, :lo12:.LCPI18_7]
-; CHECK-NOSVE-NEXT: cmhi v17.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v18.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v19.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v20.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v21.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v16.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v16.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v16.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v16.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v16.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v23.2d, v16.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v24.2d, v16.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v16.2d, v16.2d, v22.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v22.2d
-; CHECK-NOSVE-NEXT: uzp1 v17.4s, v18.4s, v17.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v23.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v16.4s, v24.4s
-; CHECK-NOSVE-NEXT: uzp1 v5.4s, v20.4s, v19.4s
-; CHECK-NOSVE-NEXT: uzp1 v6.4s, v6.4s, v21.4s
-; CHECK-NOSVE-NEXT: uzp1 v4.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: cmp x10, #1
-; CHECK-NOSVE-NEXT: cset w10, lt
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v2.8h, v5.8h, v17.8h
-; CHECK-NOSVE-NEXT: uzp1 v3.8h, v4.8h, v6.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: uzp1 v1.16b, v3.16b, v2.16b
-; CHECK-NOSVE-NEXT: dup v2.16b, w10
-; CHECK-NOSVE-NEXT: dup v3.16b, w9
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI18_8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v2.16b
-; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI18_8]
-; CHECK-NOSVE-NEXT: orr v1.16b, v1.16b, v3.16b
-; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
-; CHECK-NOSVE-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
-; CHECK-NOSVE-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-NOSVE-NEXT: and v0.16b, v0.16b, v2.16b
-; CHECK-NOSVE-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-NOSVE-NEXT: ext v2.16b, v0.16b, v0.16b, #8
-; CHECK-NOSVE-NEXT: ext v3.16b, v1.16b, v1.16b, #8
-; CHECK-NOSVE-NEXT: zip1 v0.16b, v0.16b, v2.16b
-; CHECK-NOSVE-NEXT: zip1 v1.16b, v1.16b, v3.16b
-; CHECK-NOSVE-NEXT: addv h0, v0.8h
-; CHECK-NOSVE-NEXT: addv h1, v1.8h
-; CHECK-NOSVE-NEXT: str h0, [x8]
-; CHECK-NOSVE-NEXT: str h1, [x8, #2]
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_64_split4:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: add x9, x0, #96
+; CHECK-NEXT: whilewr p2.d, x0, x1
+; CHECK-NEXT: whilewr p1.d, x9, x1
+; CHECK-NEXT: add x9, x0, #112
+; CHECK-NEXT: whilewr p0.d, x9, x1
+; CHECK-NEXT: add x9, x0, #64
+; CHECK-NEXT: mov z1.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z0.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p1.d, x9, x1
+; CHECK-NEXT: add x9, x0, #32
+; CHECK-NEXT: whilewr p3.d, x9, x1
+; CHECK-NEXT: add x9, x0, #80
+; CHECK-NEXT: mov z3.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p0.d, x9, x1
+; CHECK-NEXT: add x9, x0, #224
+; CHECK-NEXT: mov z2.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p1.d, x9, x1
+; CHECK-NEXT: add x9, x0, #240
+; CHECK-NEXT: mov z4.d, p3/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p2.d, x9, x1
+; CHECK-NEXT: add x9, x0, #192
+; CHECK-NEXT: mov v0.s[1], v0.s[2]
+; CHECK-NEXT: mov z5.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p1.d, x9, x1
+; CHECK-NEXT: add x9, x0, #208
+; CHECK-NEXT: mov z6.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p2.d, x9, x1
+; CHECK-NEXT: add x9, x0, #160
+; CHECK-NEXT: mov z7.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p1.d, x9, x1
+; CHECK-NEXT: add x9, x0, #128
+; CHECK-NEXT: mov z16.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p2.d, x9, x1
+; CHECK-NEXT: add x9, x0, #176
+; CHECK-NEXT: mov z17.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p1.d, x9, x1
+; CHECK-NEXT: add x9, x0, #144
+; CHECK-NEXT: mov z18.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p2.d, x9, x1
+; CHECK-NEXT: add x9, x0, #16
+; CHECK-NEXT: mov z19.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p1.d, x9, x1
+; CHECK-NEXT: add x9, x0, #48
+; CHECK-NEXT: mov z20.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: whilewr p2.d, x9, x1
+; CHECK-NEXT: mov v5.s[1], v5.s[2]
+; CHECK-NEXT: mov v7.s[1], v7.s[2]
+; CHECK-NEXT: mov v17.s[1], v17.s[2]
+; CHECK-NEXT: mov v18.s[1], v18.s[2]
+; CHECK-NEXT: mov v2.s[1], v2.s[2]
+; CHECK-NEXT: mov v1.s[1], v1.s[2]
+; CHECK-NEXT: adrp x9, .LCPI18_0
+; CHECK-NEXT: mov v4.s[1], v4.s[2]
+; CHECK-NEXT: mov z21.d, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z22.d, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov z23.d, p2/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: mov v0.s[2], v3.s[0]
+; CHECK-NEXT: mov v5.s[2], v6.s[0]
+; CHECK-NEXT: mov v7.s[2], v16.s[0]
+; CHECK-NEXT: mov v17.s[2], v19.s[0]
+; CHECK-NEXT: mov v18.s[2], v20.s[0]
+; CHECK-NEXT: mov v2.s[2], v21.s[0]
+; CHECK-NEXT: mov v1.s[2], v22.s[0]
+; CHECK-NEXT: mov v4.s[2], v23.s[0]
+; CHECK-NEXT: mov v0.s[3], v3.s[2]
+; CHECK-NEXT: mov v5.s[3], v6.s[2]
+; CHECK-NEXT: mov v7.s[3], v16.s[2]
+; CHECK-NEXT: mov v17.s[3], v19.s[2]
+; CHECK-NEXT: mov v18.s[3], v20.s[2]
+; CHECK-NEXT: mov v2.s[3], v21.s[2]
+; CHECK-NEXT: mov v1.s[3], v22.s[2]
+; CHECK-NEXT: mov v4.s[3], v23.s[2]
+; CHECK-NEXT: uzp1 v3.8h, v7.8h, v5.8h
+; CHECK-NEXT: uzp1 v5.8h, v18.8h, v17.8h
+; CHECK-NEXT: uzp1 v0.8h, v2.8h, v0.8h
+; CHECK-NEXT: uzp1 v1.8h, v1.8h, v4.8h
+; CHECK-NEXT: uzp1 v2.16b, v5.16b, v3.16b
+; CHECK-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NEXT: shl v1.16b, v2.16b, #7
+; CHECK-NEXT: ldr q2, [x9, :lo12:.LCPI18_0]
+; CHECK-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NEXT: cmlt v1.16b, v1.16b, #0
+; CHECK-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NEXT: and v1.16b, v1.16b, v2.16b
+; CHECK-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-NEXT: ext v2.16b, v1.16b, v1.16b, #8
+; CHECK-NEXT: ext v3.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: zip1 v1.16b, v1.16b, v2.16b
+; CHECK-NEXT: zip1 v0.16b, v0.16b, v3.16b
+; CHECK-NEXT: addv h1, v1.8h
+; CHECK-NEXT: addv h0, v0.8h
+; CHECK-NEXT: str h1, [x8, #2]
+; CHECK-NEXT: str h0, [x8]
+; CHECK-NEXT: ret
entry:
%0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 8)
ret <32 x i1> %0
}
define <9 x i1> @whilewr_8_widen(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_8_widen:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: umov w9, v0.b[0]
-; CHECK-SVE-NEXT: umov w10, v0.b[1]
-; CHECK-SVE-NEXT: umov w11, v0.b[2]
-; CHECK-SVE-NEXT: umov w12, v0.b[7]
-; CHECK-SVE-NEXT: and w9, w9, #0x1
-; CHECK-SVE-NEXT: bfi w9, w10, #1, #1
-; CHECK-SVE-NEXT: umov w10, v0.b[3]
-; CHECK-SVE-NEXT: bfi w9, w11, #2, #1
-; CHECK-SVE-NEXT: umov w11, v0.b[4]
-; CHECK-SVE-NEXT: bfi w9, w10, #3, #1
-; CHECK-SVE-NEXT: umov w10, v0.b[5]
-; CHECK-SVE-NEXT: bfi w9, w11, #4, #1
-; CHECK-SVE-NEXT: umov w11, v0.b[6]
-; CHECK-SVE-NEXT: bfi w9, w10, #5, #1
-; CHECK-SVE-NEXT: umov w10, v0.b[8]
-; CHECK-SVE-NEXT: bfi w9, w11, #6, #1
-; CHECK-SVE-NEXT: ubfiz w11, w12, #7, #1
-; CHECK-SVE-NEXT: orr w9, w9, w11
-; CHECK-SVE-NEXT: orr w9, w9, w10, lsl #8
-; CHECK-SVE-NEXT: and w9, w9, #0x1ff
-; CHECK-SVE-NEXT: strh w9, [x8]
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_8_widen:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI19_0
-; CHECK-NOSVE-NEXT: sub x11, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI19_1
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI19_2
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI19_0]
-; CHECK-NOSVE-NEXT: dup v1.2d, x11
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI19_3
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI19_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x12, :lo12:.LCPI19_2]
-; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI19_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI19_4
-; CHECK-NOSVE-NEXT: cmp x11, #1
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v1.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v1.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v1.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v1.2d, v4.2d
-; CHECK-NOSVE-NEXT: ldr q5, [x10, :lo12:.LCPI19_4]
-; CHECK-NOSVE-NEXT: cset w9, lt
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v2.4s, v0.4s
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v1.2d, v5.2d
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v4.4s, v3.4s
-; CHECK-NOSVE-NEXT: xtn v1.2s, v1.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v2.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.4h, v1.4h, v0.4h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w9
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: umov w9, v0.b[0]
-; CHECK-NOSVE-NEXT: umov w10, v0.b[1]
-; CHECK-NOSVE-NEXT: umov w11, v0.b[2]
-; CHECK-NOSVE-NEXT: umov w12, v0.b[7]
-; CHECK-NOSVE-NEXT: and w9, w9, #0x1
-; CHECK-NOSVE-NEXT: bfi w9, w10, #1, #1
-; CHECK-NOSVE-NEXT: umov w10, v0.b[3]
-; CHECK-NOSVE-NEXT: bfi w9, w11, #2, #1
-; CHECK-NOSVE-NEXT: umov w11, v0.b[4]
-; CHECK-NOSVE-NEXT: bfi w9, w10, #3, #1
-; CHECK-NOSVE-NEXT: umov w10, v0.b[5]
-; CHECK-NOSVE-NEXT: bfi w9, w11, #4, #1
-; CHECK-NOSVE-NEXT: umov w11, v0.b[6]
-; CHECK-NOSVE-NEXT: bfi w9, w10, #5, #1
-; CHECK-NOSVE-NEXT: umov w10, v0.b[8]
-; CHECK-NOSVE-NEXT: bfi w9, w11, #6, #1
-; CHECK-NOSVE-NEXT: ubfiz w11, w12, #7, #1
-; CHECK-NOSVE-NEXT: orr w9, w9, w11
-; CHECK-NOSVE-NEXT: orr w9, w9, w10, lsl #8
-; CHECK-NOSVE-NEXT: and w9, w9, #0x1ff
-; CHECK-NOSVE-NEXT: strh w9, [x8]
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_8_widen:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.b, x0, x1
+; CHECK-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: umov w9, v0.b[0]
+; CHECK-NEXT: umov w10, v0.b[1]
+; CHECK-NEXT: umov w11, v0.b[2]
+; CHECK-NEXT: umov w12, v0.b[7]
+; CHECK-NEXT: and w9, w9, #0x1
+; CHECK-NEXT: bfi w9, w10, #1, #1
+; CHECK-NEXT: umov w10, v0.b[3]
+; CHECK-NEXT: bfi w9, w11, #2, #1
+; CHECK-NEXT: umov w11, v0.b[4]
+; CHECK-NEXT: bfi w9, w10, #3, #1
+; CHECK-NEXT: umov w10, v0.b[5]
+; CHECK-NEXT: bfi w9, w11, #4, #1
+; CHECK-NEXT: umov w11, v0.b[6]
+; CHECK-NEXT: bfi w9, w10, #5, #1
+; CHECK-NEXT: umov w10, v0.b[8]
+; CHECK-NEXT: bfi w9, w11, #6, #1
+; CHECK-NEXT: ubfiz w11, w12, #7, #1
+; CHECK-NEXT: orr w9, w9, w11
+; CHECK-NEXT: orr w9, w9, w10, lsl #8
+; CHECK-NEXT: and w9, w9, #0x1ff
+; CHECK-NEXT: strh w9, [x8]
+; CHECK-NEXT: ret
entry:
%0 = call <9 x i1> @llvm.loop.dependence.war.mask.v9i1(ptr %a, ptr %b, i64 1)
ret <9 x i1> %0
}
define <7 x i1> @whilewr_16_widen(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_16_widen:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
-; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-SVE-NEXT: umov w0, v0.b[0]
-; CHECK-SVE-NEXT: umov w1, v0.b[1]
-; CHECK-SVE-NEXT: umov w2, v0.b[2]
-; CHECK-SVE-NEXT: umov w3, v0.b[3]
-; CHECK-SVE-NEXT: umov w4, v0.b[4]
-; CHECK-SVE-NEXT: umov w5, v0.b[5]
-; CHECK-SVE-NEXT: umov w6, v0.b[6]
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_16_widen:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI20_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI20_1
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: adrp x11, .LCPI20_2
-; CHECK-NOSVE-NEXT: adrp x12, .LCPI20_3
-; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI20_0]
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI20_1]
-; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI20_2]
-; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: ldr q4, [x12, :lo12:.LCPI20_3]
-; CHECK-NOSVE-NEXT: dup v0.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
-; CHECK-NOSVE-NEXT: dup v1.8b, w8
-; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: umov w0, v0.b[0]
-; CHECK-NOSVE-NEXT: umov w1, v0.b[1]
-; CHECK-NOSVE-NEXT: umov w2, v0.b[2]
-; CHECK-NOSVE-NEXT: umov w3, v0.b[3]
-; CHECK-NOSVE-NEXT: umov w4, v0.b[4]
-; CHECK-NOSVE-NEXT: umov w5, v0.b[5]
-; CHECK-NOSVE-NEXT: umov w6, v0.b[6]
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_16_widen:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.h, x0, x1
+; CHECK-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: xtn v0.8b, v0.8h
+; CHECK-NEXT: umov w0, v0.b[0]
+; CHECK-NEXT: umov w1, v0.b[1]
+; CHECK-NEXT: umov w2, v0.b[2]
+; CHECK-NEXT: umov w3, v0.b[3]
+; CHECK-NEXT: umov w4, v0.b[4]
+; CHECK-NEXT: umov w5, v0.b[5]
+; CHECK-NEXT: umov w6, v0.b[6]
+; CHECK-NEXT: ret
entry:
%0 = call <7 x i1> @llvm.loop.dependence.war.mask.v7i1(ptr %a, ptr %b, i64 2)
ret <7 x i1> %0
}
define <3 x i1> @whilewr_32_widen(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_32_widen:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
-; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-SVE-NEXT: umov w0, v0.h[0]
-; CHECK-SVE-NEXT: umov w1, v0.h[1]
-; CHECK-SVE-NEXT: umov w2, v0.h[2]
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_32_widen:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI21_0
-; CHECK-NOSVE-NEXT: add x10, x9, #3
-; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI21_0]
-; CHECK-NOSVE-NEXT: csel x9, x10, x9, mi
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI21_1
-; CHECK-NOSVE-NEXT: asr x9, x9, #2
-; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI21_1]
-; CHECK-NOSVE-NEXT: dup v0.2d, x9
-; CHECK-NOSVE-NEXT: cmp x9, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
-; CHECK-NOSVE-NEXT: dup v1.4h, w8
-; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
-; CHECK-NOSVE-NEXT: umov w0, v0.h[0]
-; CHECK-NOSVE-NEXT: umov w1, v0.h[1]
-; CHECK-NOSVE-NEXT: umov w2, v0.h[2]
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_32_widen:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.s, x0, x1
+; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: xtn v0.4h, v0.4s
+; CHECK-NEXT: umov w0, v0.h[0]
+; CHECK-NEXT: umov w1, v0.h[1]
+; CHECK-NEXT: umov w2, v0.h[2]
+; CHECK-NEXT: ret
entry:
%0 = call <3 x i1> @llvm.loop.dependence.war.mask.v3i1(ptr %a, ptr %b, i64 4)
ret <3 x i1> %0
}
define <16 x i1> @whilewr_badimm(ptr %a, ptr %b) {
-; CHECK-SVE-LABEL: whilewr_badimm:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
-; CHECK-SVE-NEXT: sub x9, x1, x0
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: movk x8, #21846
-; CHECK-SVE-NEXT: smulh x8, x9, x8
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z6.d, z0.d
-; CHECK-SVE-NEXT: mov z7.d, z0.d
-; CHECK-SVE-NEXT: mov z16.d, z0.d
-; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE-NEXT: add z1.d, z1.d, #12 // =0xc
-; CHECK-SVE-NEXT: add z2.d, z2.d, #10 // =0xa
-; CHECK-SVE-NEXT: add z4.d, z4.d, #8 // =0x8
-; CHECK-SVE-NEXT: add z5.d, z5.d, #6 // =0x6
-; CHECK-SVE-NEXT: add z6.d, z6.d, #4 // =0x4
-; CHECK-SVE-NEXT: dup v3.2d, x8
-; CHECK-SVE-NEXT: add z16.d, z16.d, #14 // =0xe
-; CHECK-SVE-NEXT: add z7.d, z7.d, #2 // =0x2
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
-; CHECK-SVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
-; CHECK-SVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
-; CHECK-SVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
-; CHECK-SVE-NEXT: cmhi v16.2d, v3.2d, v16.2d
-; CHECK-SVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
-; CHECK-SVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
-; CHECK-SVE-NEXT: cmhi v3.2d, v3.2d, v7.2d
-; CHECK-SVE-NEXT: uzp1 v1.4s, v1.4s, v16.4s
-; CHECK-SVE-NEXT: uzp1 v2.4s, v4.4s, v2.4s
-; CHECK-SVE-NEXT: uzp1 v4.4s, v6.4s, v5.4s
-; CHECK-SVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
-; CHECK-SVE-NEXT: uzp1 v1.8h, v2.8h, v1.8h
-; CHECK-SVE-NEXT: uzp1 v0.8h, v0.8h, v4.8h
-; CHECK-SVE-NEXT: uzp1 v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: dup v1.16b, w8
-; CHECK-SVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: whilewr_badimm:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
-; CHECK-NOSVE-NEXT: sub x9, x1, x0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_1
-; CHECK-NOSVE-NEXT: movk x8, #21846
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI22_1]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_3
-; CHECK-NOSVE-NEXT: smulh x8, x9, x8
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_0
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI22_3]
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI22_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_2
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI22_5
-; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI22_2]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_4
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI22_5]
-; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI22_4]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_6
-; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI22_6]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI22_7
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI22_7]
-; CHECK-NOSVE-NEXT: dup v4.2d, x8
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
+; CHECK-LABEL: whilewr_badimm:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
+; CHECK-NEXT: sub x9, x1, x0
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: movk x8, #21846
+; CHECK-NEXT: smulh x8, x9, x8
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, z0.d
+; CHECK-NEXT: mov z6.d, z0.d
+; CHECK-NEXT: mov z7.d, z0.d
+; CHECK-NEXT: mov z16.d, z0.d
+; CHECK-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-NEXT: add z4.d, z4.d, #8 // =0x8
+; CHECK-NEXT: add z5.d, z5.d, #6 // =0x6
+; CHECK-NEXT: add z6.d, z6.d, #4 // =0x4
+; CHECK-NEXT: dup v3.2d, x8
+; CHECK-NEXT: add z16.d, z16.d, #14 // =0xe
+; CHECK-NEXT: add z7.d, z7.d, #2 // =0x2
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-NEXT: cmhi v16.2d, v3.2d, v16.2d
+; CHECK-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-NEXT: cmhi v3.2d, v3.2d, v7.2d
+; CHECK-NEXT: uzp1 v1.4s, v1.4s, v16.4s
+; CHECK-NEXT: uzp1 v2.4s, v4.4s, v2.4s
+; CHECK-NEXT: uzp1 v4.4s, v6.4s, v5.4s
+; CHECK-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NEXT: uzp1 v1.8h, v2.8h, v1.8h
+; CHECK-NEXT: uzp1 v0.8h, v0.8h, v4.8h
+; CHECK-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-NEXT: dup v1.16b, w8
+; CHECK-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NEXT: ret
entry:
%0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 3)
ret <16 x i1> %0
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_nosve.ll b/llvm/test/CodeGen/AArch64/alias_mask_nosve.ll
new file mode 100644
index 0000000000000..922b37c2f2a08
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/alias_mask_nosve.ll
@@ -0,0 +1,48 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=aarch64 %s -o - | FileCheck %s
+
+define <16 x i1> @whilewr_8(ptr %a, ptr %b) {
+; CHECK-LABEL: whilewr_8:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: adrp x8, .LCPI0_0
+; CHECK-NEXT: adrp x10, .LCPI0_1
+; CHECK-NEXT: sub x9, x1, x0
+; CHECK-NEXT: ldr q0, [x8, :lo12:.LCPI0_0]
+; CHECK-NEXT: adrp x8, .LCPI0_2
+; CHECK-NEXT: ldr q1, [x10, :lo12:.LCPI0_1]
+; CHECK-NEXT: ldr q3, [x8, :lo12:.LCPI0_2]
+; CHECK-NEXT: adrp x8, .LCPI0_4
+; CHECK-NEXT: adrp x10, .LCPI0_3
+; CHECK-NEXT: ldr q5, [x8, :lo12:.LCPI0_4]
+; CHECK-NEXT: adrp x8, .LCPI0_5
+; CHECK-NEXT: dup v2.2d, x9
+; CHECK-NEXT: ldr q4, [x10, :lo12:.LCPI0_3]
+; CHECK-NEXT: adrp x10, .LCPI0_6
+; CHECK-NEXT: ldr q6, [x8, :lo12:.LCPI0_5]
+; CHECK-NEXT: adrp x8, .LCPI0_7
+; CHECK-NEXT: ldr q7, [x10, :lo12:.LCPI0_6]
+; CHECK-NEXT: cmp x9, #1
+; CHECK-NEXT: ldr q16, [x8, :lo12:.LCPI0_7]
+; CHECK-NEXT: cmhi v0.2d, v2.2d, v0.2d
+; CHECK-NEXT: cmhi v1.2d, v2.2d, v1.2d
+; CHECK-NEXT: cmhi v3.2d, v2.2d, v3.2d
+; CHECK-NEXT: cmhi v4.2d, v2.2d, v4.2d
+; CHECK-NEXT: cmhi v5.2d, v2.2d, v5.2d
+; CHECK-NEXT: cmhi v6.2d, v2.2d, v6.2d
+; CHECK-NEXT: cmhi v7.2d, v2.2d, v7.2d
+; CHECK-NEXT: cmhi v2.2d, v2.2d, v16.2d
+; CHECK-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: uzp1 v1.4s, v4.4s, v3.4s
+; CHECK-NEXT: uzp1 v3.4s, v6.4s, v5.4s
+; CHECK-NEXT: uzp1 v2.4s, v2.4s, v7.4s
+; CHECK-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NEXT: dup v1.16b, w8
+; CHECK-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
+ ret <16 x i1> %0
+}
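
The tests above call the intrinsic directly and only check its lowering. A minimal sketch of how the resulting predicate could guard a loop body's memory accesses, assuming the standard llvm.masked.load/llvm.masked.store intrinsics (the function name and the i8 element type are illustrative, not taken from this patch):

  define void @copy_no_war(ptr %a, ptr %b) {
  entry:
    ; Lanes with a write-after-read hazard between %a and %b are cleared.
    %mask = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 1)
    ; Only the hazard-free lanes are read from %a and written to %b.
    %v = call <16 x i8> @llvm.masked.load.v16i8.p0(ptr %a, i32 1, <16 x i1> %mask, <16 x i8> poison)
    call void @llvm.masked.store.v16i8.p0(<16 x i8> %v, ptr %b, i32 1, <16 x i1> %mask)
    ret void
  }

On SVE2 targets the mask itself lowers to a single whilewr, as checked in alias_mask_scalable.ll below; without SVE it is expanded to the cmhi/uzp1 sequence checked in alias_mask_nosve.ll above.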
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index 576ec81292014..ef2f729da1a62 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -1,1909 +1,497 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
-; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE2
-; RUN: llc -mtriple=aarch64 -mattr=+sve %s -o - | FileCheck %s --check-prefix=CHECK-SVE
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s
define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_8:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_8:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: mov z2.d, x8
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z0.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: incd z0.d, all, mul #4
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: incd z1.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z3.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #4
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z3.d
-; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p2.s, p5.s, p6.s
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p4.s
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
-; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
-; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_8:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.b, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
}
define <vscale x 8 x i1> @whilewr_16(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_16:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_16:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE-NEXT: asr x8, x8, #1
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
-; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
-; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_16:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.h, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
}
define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_32:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_32:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #2
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
-; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_32:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.s, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 4 x i1> @llvm.loop.dependence.war.mask.nxv4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
}
define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_64:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_64:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: mov z1.d, x8
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_64:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.d, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 2 x i1> @llvm.loop.dependence.war.mask.nxv2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
}
define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilerw_8:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilerw p0.b, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilerw_8:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: cneg x8, x8, mi
-; CHECK-SVE-NEXT: mov z1.d, x8
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z1.d, z0.d
-; CHECK-SVE-NEXT: incd z2.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE-NEXT: mov z3.d, z2.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z1.d, z2.d
-; CHECK-SVE-NEXT: incd z2.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z1.d, z4.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z1.d, z5.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z1.d, z2.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z1.d, z4.d
-; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE-NEXT: mov z0.d, z3.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z1.d, z3.d
-; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: incd z0.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
-; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w8, eq
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
-; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
-; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilerw_8:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.b, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.raw.mask.nxv16i1(ptr %a, ptr %b, i64 1)
ret <vscale x 16 x i1> %0
}
define <vscale x 8 x i1> @whilerw_16(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilerw_16:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilerw p0.h, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilerw_16:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: cneg x8, x8, mi
-; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: asr x8, x8, #1
-; CHECK-SVE-NEXT: mov z3.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w8, eq
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
-; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
-; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilerw_16:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.h, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.raw.mask.nxv8i1(ptr %a, ptr %b, i64 2)
ret <vscale x 8 x i1> %0
}
define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilerw_32:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilerw p0.s, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilerw_32:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: cneg x8, x8, mi
-; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: asr x8, x8, #2
-; CHECK-SVE-NEXT: mov z2.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w8, eq
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
-; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilerw_32:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.s, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 4 x i1> @llvm.loop.dependence.raw.mask.nxv4i1(ptr %a, ptr %b, i64 4)
ret <vscale x 4 x i1> %0
}
define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilerw_64:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilerw p0.d, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilerw_64:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: cneg x8, x8, mi
-; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: mov z1.d, x8
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
-; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w8, eq
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilerw_64:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.d, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 2 x i1> @llvm.loop.dependence.raw.mask.nxv2i1(ptr %a, ptr %b, i64 8)
ret <vscale x 2 x i1> %0
}
define <vscale x 32 x i1> @whilewr_8_split(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_8_split:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE2-NEXT: incb x0
-; CHECK-SVE2-NEXT: whilewr p1.b, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_8_split:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: mov z4.d, x8
-; CHECK-SVE-NEXT: sub x9, x1, x0
-; CHECK-SVE-NEXT: rdvl x10, #1
-; CHECK-SVE-NEXT: sub x9, x9, x10
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z4.d, z0.d
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE-NEXT: mov z3.d, z1.d
-; CHECK-SVE-NEXT: mov z6.d, z1.d
-; CHECK-SVE-NEXT: mov z7.d, z2.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z4.d, z1.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z4.d, z2.d
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z4.d, z5.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE-NEXT: incd z6.d, all, mul #4
-; CHECK-SVE-NEXT: incd z7.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
-; CHECK-SVE-NEXT: mov z24.d, z3.d
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z4.d, z3.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z4.d, z6.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z4.d, z7.d
-; CHECK-SVE-NEXT: incd z24.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p2.s, p3.s, p4.s
-; CHECK-SVE-NEXT: uzp1 p3.s, p5.s, p6.s
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z4.d, z24.d
-; CHECK-SVE-NEXT: mov z4.d, x9
-; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p2.h
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z4.d, z24.d
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z4.d, z7.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z4.d, z6.d
-; CHECK-SVE-NEXT: uzp1 p7.s, p7.s, p8.s
-; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z4.d, z5.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z4.d, z3.d
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z4.d, z2.d
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p3.h, p3.h, p7.h
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z4.d, z1.d
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z4.d, z0.d
-; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
-; CHECK-SVE-NEXT: uzp1 p5.s, p9.s, p6.s
-; CHECK-SVE-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: whilelo p6.b, xzr, x8
-; CHECK-SVE-NEXT: uzp1 p2.s, p8.s, p2.s
-; CHECK-SVE-NEXT: cmp x9, #1
-; CHECK-SVE-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.s, p0.s, p7.s
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p4.h, p5.h, p4.h
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p2.h
-; CHECK-SVE-NEXT: uzp1 p1.b, p1.b, p3.b
-; CHECK-SVE-NEXT: uzp1 p2.b, p0.b, p4.b
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: whilelo p3.b, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p1, p1.b, p6.b
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: sel p1.b, p2, p2.b, p3.b
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_8_split:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.b, x0, x1
+; CHECK-NEXT: incb x0
+; CHECK-NEXT: whilewr p1.b, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 1)
ret <vscale x 32 x i1> %0
}
define <vscale x 64 x i1> @whilewr_8_split2(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_8_split2:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: mov x8, x0
-; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE2-NEXT: addvl x9, x0, #3
-; CHECK-SVE2-NEXT: incb x0, all, mul #2
-; CHECK-SVE2-NEXT: incb x8
-; CHECK-SVE2-NEXT: whilewr p3.b, x9, x1
-; CHECK-SVE2-NEXT: whilewr p2.b, x0, x1
-; CHECK-SVE2-NEXT: whilewr p1.b, x8, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_8_split2:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p11, [sp] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x9, x1, x0
-; CHECK-SVE-NEXT: ptrue p2.d
-; CHECK-SVE-NEXT: mov z24.d, x9
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: rdvl x10, #1
-; CHECK-SVE-NEXT: sub x10, x8, x10
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: cmphi p0.d, p2/z, z24.d, z0.d
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE-NEXT: mov z3.d, z1.d
-; CHECK-SVE-NEXT: mov z5.d, z1.d
-; CHECK-SVE-NEXT: mov z6.d, z2.d
-; CHECK-SVE-NEXT: cmphi p3.d, p2/z, z24.d, z1.d
-; CHECK-SVE-NEXT: cmphi p1.d, p2/z, z24.d, z2.d
-; CHECK-SVE-NEXT: cmphi p5.d, p2/z, z24.d, z4.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE-NEXT: incd z6.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p0.s, p0.s, p3.s
-; CHECK-SVE-NEXT: mov z7.d, z3.d
-; CHECK-SVE-NEXT: cmphi p4.d, p2/z, z24.d, z3.d
-; CHECK-SVE-NEXT: cmphi p6.d, p2/z, z24.d, z5.d
-; CHECK-SVE-NEXT: cmphi p3.d, p2/z, z24.d, z6.d
-; CHECK-SVE-NEXT: incd z7.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p4.s
-; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p6.s
-; CHECK-SVE-NEXT: cmphi p7.d, p2/z, z24.d, z7.d
-; CHECK-SVE-NEXT: mov z24.d, x10
-; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p1.h
-; CHECK-SVE-NEXT: cmp x9, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: cmphi p5.d, p2/z, z24.d, z7.d
-; CHECK-SVE-NEXT: cmphi p6.d, p2/z, z24.d, z6.d
-; CHECK-SVE-NEXT: cmphi p8.d, p2/z, z24.d, z4.d
-; CHECK-SVE-NEXT: uzp1 p1.s, p3.s, p7.s
-; CHECK-SVE-NEXT: cmphi p7.d, p2/z, z24.d, z5.d
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: cmphi p9.d, p2/z, z24.d, z3.d
-; CHECK-SVE-NEXT: cmphi p10.d, p2/z, z24.d, z2.d
-; CHECK-SVE-NEXT: cmphi p11.d, p2/z, z24.d, z0.d
-; CHECK-SVE-NEXT: uzp1 p3.h, p4.h, p1.h
-; CHECK-SVE-NEXT: cmphi p4.d, p2/z, z24.d, z1.d
-; CHECK-SVE-NEXT: whilelo p1.b, xzr, x9
-; CHECK-SVE-NEXT: rdvl x9, #2
-; CHECK-SVE-NEXT: sub x9, x8, x9
-; CHECK-SVE-NEXT: uzp1 p5.s, p6.s, p5.s
-; CHECK-SVE-NEXT: cmp x10, #1
-; CHECK-SVE-NEXT: uzp1 p6.s, p8.s, p7.s
-; CHECK-SVE-NEXT: mov z24.d, x9
-; CHECK-SVE-NEXT: cset w10, lt
-; CHECK-SVE-NEXT: uzp1 p7.s, p10.s, p9.s
-; CHECK-SVE-NEXT: sbfx x10, x10, #0, #1
-; CHECK-SVE-NEXT: uzp1 p8.s, p11.s, p4.s
-; CHECK-SVE-NEXT: uzp1 p0.b, p0.b, p3.b
-; CHECK-SVE-NEXT: cmphi p3.d, p2/z, z24.d, z7.d
-; CHECK-SVE-NEXT: cmphi p9.d, p2/z, z24.d, z6.d
-; CHECK-SVE-NEXT: uzp1 p5.h, p6.h, p5.h
-; CHECK-SVE-NEXT: cmphi p6.d, p2/z, z24.d, z5.d
-; CHECK-SVE-NEXT: cmphi p10.d, p2/z, z24.d, z4.d
-; CHECK-SVE-NEXT: uzp1 p7.h, p8.h, p7.h
-; CHECK-SVE-NEXT: cmphi p8.d, p2/z, z24.d, z3.d
-; CHECK-SVE-NEXT: cmphi p11.d, p2/z, z24.d, z2.d
-; CHECK-SVE-NEXT: whilelo p4.b, xzr, x10
-; CHECK-SVE-NEXT: rdvl x10, #3
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: cmphi p1.d, p2/z, z24.d, z1.d
-; CHECK-SVE-NEXT: sub x8, x8, x10
-; CHECK-SVE-NEXT: uzp1 p5.b, p7.b, p5.b
-; CHECK-SVE-NEXT: cmphi p7.d, p2/z, z24.d, z0.d
-; CHECK-SVE-NEXT: mov z24.d, x8
-; CHECK-SVE-NEXT: uzp1 p3.s, p9.s, p3.s
-; CHECK-SVE-NEXT: cmp x9, #1
-; CHECK-SVE-NEXT: uzp1 p6.s, p10.s, p6.s
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: uzp1 p8.s, p11.s, p8.s
-; CHECK-SVE-NEXT: cmphi p11.d, p2/z, z24.d, z2.d
-; CHECK-SVE-NEXT: cmphi p9.d, p2/z, z24.d, z7.d
-; CHECK-SVE-NEXT: uzp1 p7.s, p7.s, p1.s
-; CHECK-SVE-NEXT: cmphi p10.d, p2/z, z24.d, z6.d
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: sel p1.b, p5, p5.b, p4.b
-; CHECK-SVE-NEXT: cmphi p4.d, p2/z, z24.d, z5.d
-; CHECK-SVE-NEXT: cmphi p5.d, p2/z, z24.d, z4.d
-; CHECK-SVE-NEXT: uzp1 p3.h, p6.h, p3.h
-; CHECK-SVE-NEXT: cmphi p6.d, p2/z, z24.d, z3.d
-; CHECK-SVE-NEXT: uzp1 p7.h, p7.h, p8.h
-; CHECK-SVE-NEXT: cmphi p8.d, p2/z, z24.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p2/z, z24.d, z0.d
-; CHECK-SVE-NEXT: uzp1 p9.s, p10.s, p9.s
-; CHECK-SVE-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
-; CHECK-SVE-NEXT: uzp1 p5.s, p11.s, p6.s
-; CHECK-SVE-NEXT: ldr p11, [sp] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: whilelo p6.b, xzr, x9
-; CHECK-SVE-NEXT: uzp1 p2.s, p2.s, p8.s
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: uzp1 p4.h, p4.h, p9.h
-; CHECK-SVE-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p5.h
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p3.b, p7.b, p3.b
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p4.b, p2.b, p4.b
-; CHECK-SVE-NEXT: whilelo p5.b, xzr, x8
-; CHECK-SVE-NEXT: sel p2.b, p3, p3.b, p6.b
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: sel p3.b, p4, p4.b, p5.b
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_8_split2:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: mov x8, x0
+; CHECK-NEXT: whilewr p0.b, x0, x1
+; CHECK-NEXT: addvl x9, x0, #3
+; CHECK-NEXT: incb x0, all, mul #2
+; CHECK-NEXT: incb x8
+; CHECK-NEXT: whilewr p3.b, x9, x1
+; CHECK-NEXT: whilewr p2.b, x0, x1
+; CHECK-NEXT: whilewr p1.b, x8, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 64 x i1> @llvm.loop.dependence.war.mask.nxv64i1(ptr %a, ptr %b, i64 1)
ret <vscale x 64 x i1> %0
}
define <vscale x 16 x i1> @whilewr_16_expand(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_16_expand:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE2-NEXT: addvl sp, sp, #-1
-; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE2-NEXT: .cfi_offset w29, -16
-; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: sub x8, x1, x0
-; CHECK-SVE2-NEXT: ptrue p0.d
-; CHECK-SVE2-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE2-NEXT: asr x8, x8, #1
-; CHECK-SVE2-NEXT: mov z1.d, z0.d
-; CHECK-SVE2-NEXT: mov z4.d, z0.d
-; CHECK-SVE2-NEXT: mov z5.d, z0.d
-; CHECK-SVE2-NEXT: mov z2.d, x8
-; CHECK-SVE2-NEXT: incd z1.d
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE2-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
-; CHECK-SVE2-NEXT: mov z3.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
-; CHECK-SVE2-NEXT: incd z1.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
-; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
-; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE2-NEXT: mov z0.d, z3.d
-; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
-; CHECK-SVE2-NEXT: uzp1 p2.s, p4.s, p5.s
-; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: incd z0.d, all, mul #4
-; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p6.s
-; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
-; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: uzp1 p0.s, p7.s, p0.s
-; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p0.h, p2.h, p0.h
-; CHECK-SVE2-NEXT: uzp1 p0.b, p1.b, p0.b
-; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE2-NEXT: addvl sp, sp, #1
-; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_16_expand:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE-NEXT: asr x8, x8, #1
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z1.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: incd z1.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE-NEXT: mov z0.d, z3.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
-; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: incd z0.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
-; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
-; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_16_expand:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.h, x0, x1
+; CHECK-NEXT: incb x0
+; CHECK-NEXT: whilewr p1.h, x0, x1
+; CHECK-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 2)
ret <vscale x 16 x i1> %0
}
define <vscale x 32 x i1> @whilewr_16_expand2(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_16_expand2:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE2-NEXT: addvl sp, sp, #-1
-; CHECK-SVE2-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE2-NEXT: .cfi_offset w29, -16
-; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: sub x8, x1, x0
-; CHECK-SVE2-NEXT: incb x0, all, mul #2
-; CHECK-SVE2-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE2-NEXT: ptrue p0.d
-; CHECK-SVE2-NEXT: asr x8, x8, #1
-; CHECK-SVE2-NEXT: sub x9, x1, x0
-; CHECK-SVE2-NEXT: mov z1.d, z0.d
-; CHECK-SVE2-NEXT: mov z2.d, z0.d
-; CHECK-SVE2-NEXT: mov z3.d, z0.d
-; CHECK-SVE2-NEXT: mov z5.d, x8
-; CHECK-SVE2-NEXT: add x9, x9, x9, lsr #63
-; CHECK-SVE2-NEXT: incd z1.d
-; CHECK-SVE2-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE2-NEXT: incd z3.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z5.d, z0.d
-; CHECK-SVE2-NEXT: asr x9, x9, #1
-; CHECK-SVE2-NEXT: mov z4.d, z1.d
-; CHECK-SVE2-NEXT: mov z6.d, z1.d
-; CHECK-SVE2-NEXT: mov z7.d, z2.d
-; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z5.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z5.d, z3.d
-; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z2.d
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE2-NEXT: incd z6.d, all, mul #4
-; CHECK-SVE2-NEXT: incd z7.d, all, mul #4
-; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE2-NEXT: mov z24.d, z4.d
-; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z6.d
-; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z4.d
-; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
-; CHECK-SVE2-NEXT: incd z24.d, all, mul #4
-; CHECK-SVE2-NEXT: uzp1 p2.s, p3.s, p4.s
-; CHECK-SVE2-NEXT: uzp1 p3.s, p5.s, p6.s
-; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z24.d
-; CHECK-SVE2-NEXT: mov z5.d, x9
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z24.d
-; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z7.d
-; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
-; CHECK-SVE2-NEXT: uzp1 p7.s, p7.s, p8.s
-; CHECK-SVE2-NEXT: cmphi p9.d, p0/z, z5.d, z3.d
-; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z5.d, z4.d
-; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z2.d
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p7.h
-; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
-; CHECK-SVE2-NEXT: uzp1 p4.s, p5.s, p4.s
-; CHECK-SVE2-NEXT: uzp1 p5.s, p9.s, p6.s
-; CHECK-SVE2-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: whilelo p6.b, xzr, x8
-; CHECK-SVE2-NEXT: uzp1 p3.s, p8.s, p3.s
-; CHECK-SVE2-NEXT: cmp x9, #1
-; CHECK-SVE2-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p7.s
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p4.h, p5.h, p4.h
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p3.h
-; CHECK-SVE2-NEXT: uzp1 p1.b, p1.b, p2.b
-; CHECK-SVE2-NEXT: uzp1 p2.b, p0.b, p4.b
-; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: whilelo p3.b, xzr, x8
-; CHECK-SVE2-NEXT: sel p0.b, p1, p1.b, p6.b
-; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: sel p1.b, p2, p2.b, p3.b
-; CHECK-SVE2-NEXT: addvl sp, sp, #1
-; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_16_expand2:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: sub x9, x1, x0
-; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE-NEXT: rdvl x10, #2
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: sub x9, x9, x10
-; CHECK-SVE-NEXT: asr x8, x8, #1
-; CHECK-SVE-NEXT: add x9, x9, x9, lsr #63
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, x8
-; CHECK-SVE-NEXT: asr x9, x9, #1
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: incd z3.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z5.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z1.d
-; CHECK-SVE-NEXT: mov z6.d, z1.d
-; CHECK-SVE-NEXT: mov z7.d, z2.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z5.d, z1.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z5.d, z3.d
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z2.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: incd z6.d, all, mul #4
-; CHECK-SVE-NEXT: incd z7.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE-NEXT: mov z24.d, z4.d
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z6.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z4.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
-; CHECK-SVE-NEXT: incd z24.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p2.s, p3.s, p4.s
-; CHECK-SVE-NEXT: uzp1 p3.s, p5.s, p6.s
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z24.d
-; CHECK-SVE-NEXT: mov z5.d, x9
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z24.d
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z7.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
-; CHECK-SVE-NEXT: uzp1 p7.s, p7.s, p8.s
-; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z5.d, z3.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z5.d, z4.d
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z2.d
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p7.h
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z1.d
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
-; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
-; CHECK-SVE-NEXT: uzp1 p5.s, p9.s, p6.s
-; CHECK-SVE-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: whilelo p6.b, xzr, x8
-; CHECK-SVE-NEXT: uzp1 p3.s, p8.s, p3.s
-; CHECK-SVE-NEXT: cmp x9, #1
-; CHECK-SVE-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.s, p0.s, p7.s
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p4.h, p5.h, p4.h
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p3.h
-; CHECK-SVE-NEXT: uzp1 p1.b, p1.b, p2.b
-; CHECK-SVE-NEXT: uzp1 p2.b, p0.b, p4.b
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: whilelo p3.b, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p1, p1.b, p6.b
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: sel p1.b, p2, p2.b, p3.b
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_16_expand2:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: mov x8, x0
+; CHECK-NEXT: whilewr p0.h, x0, x1
+; CHECK-NEXT: addvl x9, x0, #3
+; CHECK-NEXT: incb x8
+; CHECK-NEXT: incb x0, all, mul #2
+; CHECK-NEXT: whilewr p1.h, x9, x1
+; CHECK-NEXT: whilewr p2.h, x8, x1
+; CHECK-NEXT: whilewr p3.h, x0, x1
+; CHECK-NEXT: uzp1 p0.b, p0.b, p2.b
+; CHECK-NEXT: uzp1 p1.b, p3.b, p1.b
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 2)
ret <vscale x 32 x i1> %0
}
define <vscale x 8 x i1> @whilewr_32_expand(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_32_expand:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: subs x8, x1, x0
-; CHECK-SVE2-NEXT: ptrue p0.d
-; CHECK-SVE2-NEXT: add x9, x8, #3
-; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE2-NEXT: asr x8, x8, #2
-; CHECK-SVE2-NEXT: mov z1.d, z0.d
-; CHECK-SVE2-NEXT: mov z2.d, z0.d
-; CHECK-SVE2-NEXT: mov z3.d, x8
-; CHECK-SVE2-NEXT: incd z1.d
-; CHECK-SVE2-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
-; CHECK-SVE2-NEXT: mov z4.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE2-NEXT: uzp1 p1.s, p1.s, p2.s
-; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: uzp1 p0.s, p3.s, p0.s
-; CHECK-SVE2-NEXT: uzp1 p0.h, p1.h, p0.h
-; CHECK-SVE2-NEXT: whilelo p1.h, xzr, x8
-; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_32_expand:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #2
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
-; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
-; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_32_expand:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.s, x0, x1
+; CHECK-NEXT: incb x0
+; CHECK-NEXT: whilewr p1.s, x0, x1
+; CHECK-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 4)
ret <vscale x 8 x i1> %0
}
define <vscale x 16 x i1> @whilewr_32_expand2(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_32_expand2:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE2-NEXT: addvl sp, sp, #-1
-; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE2-NEXT: .cfi_offset w29, -16
-; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: subs x8, x1, x0
-; CHECK-SVE2-NEXT: ptrue p0.d
-; CHECK-SVE2-NEXT: add x9, x8, #3
-; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE2-NEXT: asr x8, x8, #2
-; CHECK-SVE2-NEXT: mov z1.d, z0.d
-; CHECK-SVE2-NEXT: mov z4.d, z0.d
-; CHECK-SVE2-NEXT: mov z5.d, z0.d
-; CHECK-SVE2-NEXT: mov z2.d, x8
-; CHECK-SVE2-NEXT: incd z1.d
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE2-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
-; CHECK-SVE2-NEXT: mov z3.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
-; CHECK-SVE2-NEXT: incd z1.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
-; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
-; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE2-NEXT: mov z0.d, z3.d
-; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
-; CHECK-SVE2-NEXT: uzp1 p2.s, p4.s, p5.s
-; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: incd z0.d, all, mul #4
-; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p6.s
-; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
-; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: uzp1 p0.s, p7.s, p0.s
-; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p0.h, p2.h, p0.h
-; CHECK-SVE2-NEXT: uzp1 p0.b, p1.b, p0.b
-; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE2-NEXT: addvl sp, sp, #1
-; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_32_expand2:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #2
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z1.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: incd z1.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE-NEXT: mov z0.d, z3.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
-; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: incd z0.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
-; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
-; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_32_expand2:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.s, x0, x1
+; CHECK-NEXT: mov x8, x0
+; CHECK-NEXT: addvl x9, x0, #3
+; CHECK-NEXT: incb x8, all, mul #2
+; CHECK-NEXT: incb x0
+; CHECK-NEXT: whilewr p1.s, x9, x1
+; CHECK-NEXT: uzp1 p0.h, p0.h, p0.h
+; CHECK-NEXT: uzp1 p0.b, p0.b, p0.b
+; CHECK-NEXT: whilewr p2.s, x8, x1
+; CHECK-NEXT: punpklo p0.h, p0.b
+; CHECK-NEXT: whilewr p3.s, x0, x1
+; CHECK-NEXT: punpklo p0.h, p0.b
+; CHECK-NEXT: uzp1 p1.h, p2.h, p1.h
+; CHECK-NEXT: uzp1 p0.h, p0.h, p3.h
+; CHECK-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 4)
ret <vscale x 16 x i1> %0
}
define <vscale x 32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_32_expand3:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE2-NEXT: addvl sp, sp, #-1
-; CHECK-SVE2-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE2-NEXT: .cfi_offset w29, -16
-; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: subs x8, x1, x0
-; CHECK-SVE2-NEXT: ptrue p0.d
-; CHECK-SVE2-NEXT: add x9, x8, #3
-; CHECK-SVE2-NEXT: incb x0, all, mul #4
-; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE2-NEXT: asr x8, x8, #2
-; CHECK-SVE2-NEXT: mov z1.d, z0.d
-; CHECK-SVE2-NEXT: mov z2.d, z0.d
-; CHECK-SVE2-NEXT: mov z4.d, z0.d
-; CHECK-SVE2-NEXT: mov z5.d, x8
-; CHECK-SVE2-NEXT: incd z1.d
-; CHECK-SVE2-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
-; CHECK-SVE2-NEXT: mov z3.d, z1.d
-; CHECK-SVE2-NEXT: mov z6.d, z2.d
-; CHECK-SVE2-NEXT: mov z7.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
-; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
-; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
-; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE2-NEXT: incd z6.d, all, mul #4
-; CHECK-SVE2-NEXT: incd z7.d, all, mul #4
-; CHECK-SVE2-NEXT: uzp1 p4.s, p5.s, p4.s
-; CHECK-SVE2-NEXT: mov z24.d, z3.d
-; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
-; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
-; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
-; CHECK-SVE2-NEXT: incd z24.d, all, mul #4
-; CHECK-SVE2-NEXT: uzp1 p2.s, p2.s, p7.s
-; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p8.s
-; CHECK-SVE2-NEXT: cmphi p9.d, p0/z, z5.d, z24.d
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: uzp1 p3.h, p4.h, p3.h
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: uzp1 p6.s, p6.s, p9.s
-; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE2-NEXT: subs x8, x1, x0
-; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p6.h
-; CHECK-SVE2-NEXT: add x9, x8, #3
-; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE2-NEXT: uzp1 p2.b, p3.b, p2.b
-; CHECK-SVE2-NEXT: asr x8, x8, #2
-; CHECK-SVE2-NEXT: mov z5.d, x8
-; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
-; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
-; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
-; CHECK-SVE2-NEXT: cmphi p9.d, p0/z, z5.d, z4.d
-; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z3.d
-; CHECK-SVE2-NEXT: cmphi p10.d, p0/z, z5.d, z2.d
-; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: uzp1 p5.s, p7.s, p5.s
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: uzp1 p7.s, p9.s, p8.s
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p4.s, p10.s, p4.s
-; CHECK-SVE2-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p6.s
-; CHECK-SVE2-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p5.h, p7.h, p5.h
-; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p4.h
-; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: whilelo p4.b, xzr, x8
-; CHECK-SVE2-NEXT: uzp1 p3.b, p0.b, p5.b
-; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: sel p0.b, p2, p2.b, p1.b
-; CHECK-SVE2-NEXT: sel p1.b, p3, p3.b, p4.b
-; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: addvl sp, sp, #1
-; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_32_expand3:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: rdvl x9, #4
-; CHECK-SVE-NEXT: asr x8, x8, #2
-; CHECK-SVE-NEXT: add x9, x0, x9
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z1.d
-; CHECK-SVE-NEXT: mov z6.d, z2.d
-; CHECK-SVE-NEXT: mov z7.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE-NEXT: incd z6.d, all, mul #4
-; CHECK-SVE-NEXT: incd z7.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
-; CHECK-SVE-NEXT: mov z24.d, z3.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
-; CHECK-SVE-NEXT: incd z24.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p2.s, p2.s, p7.s
-; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p8.s
-; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z5.d, z24.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: uzp1 p3.h, p4.h, p3.h
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p6.s, p6.s, p9.s
-; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: subs x8, x1, x9
-; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p6.h
-; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: uzp1 p2.b, p3.b, p2.b
-; CHECK-SVE-NEXT: asr x8, x8, #2
-; CHECK-SVE-NEXT: mov z5.d, x8
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
-; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z5.d, z4.d
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z3.d
-; CHECK-SVE-NEXT: cmphi p10.d, p0/z, z5.d, z2.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z1.d
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: uzp1 p5.s, p7.s, p5.s
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: uzp1 p7.s, p9.s, p8.s
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p4.s, p10.s, p4.s
-; CHECK-SVE-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.s, p0.s, p6.s
-; CHECK-SVE-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p5.h, p7.h, p5.h
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p4.h
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: whilelo p4.b, xzr, x8
-; CHECK-SVE-NEXT: uzp1 p3.b, p0.b, p5.b
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: sel p0.b, p2, p2.b, p1.b
-; CHECK-SVE-NEXT: sel p1.b, p3, p3.b, p4.b
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_32_expand3:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: addvl sp, sp, #-1
+; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_offset w29, -16
+; CHECK-NEXT: whilewr p0.s, x0, x1
+; CHECK-NEXT: mov x8, x0
+; CHECK-NEXT: addvl x9, x0, #3
+; CHECK-NEXT: incb x8, all, mul #2
+; CHECK-NEXT: mov x10, x0
+; CHECK-NEXT: whilewr p1.s, x9, x1
+; CHECK-NEXT: uzp1 p0.h, p0.h, p0.h
+; CHECK-NEXT: addvl x9, x0, #7
+; CHECK-NEXT: incb x10
+; CHECK-NEXT: whilewr p2.s, x9, x1
+; CHECK-NEXT: addvl x9, x0, #6
+; CHECK-NEXT: uzp1 p0.b, p0.b, p0.b
+; CHECK-NEXT: whilewr p3.s, x8, x1
+; CHECK-NEXT: addvl x8, x0, #5
+; CHECK-NEXT: incb x0, all, mul #4
+; CHECK-NEXT: punpklo p0.h, p0.b
+; CHECK-NEXT: whilewr p4.s, x10, x1
+; CHECK-NEXT: whilewr p5.s, x0, x1
+; CHECK-NEXT: punpklo p0.h, p0.b
+; CHECK-NEXT: uzp1 p1.h, p3.h, p1.h
+; CHECK-NEXT: uzp1 p3.h, p5.h, p0.h
+; CHECK-NEXT: uzp1 p0.h, p0.h, p4.h
+; CHECK-NEXT: whilewr p4.s, x9, x1
+; CHECK-NEXT: uzp1 p3.b, p3.b, p0.b
+; CHECK-NEXT: whilewr p5.s, x8, x1
+; CHECK-NEXT: punpklo p3.h, p3.b
+; CHECK-NEXT: uzp1 p2.h, p4.h, p2.h
+; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: punpklo p3.h, p3.b
+; CHECK-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-NEXT: uzp1 p3.h, p3.h, p5.h
+; CHECK-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p1.b, p3.b, p2.b
+; CHECK-NEXT: addvl sp, sp, #1
+; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 4)
ret <vscale x 32 x i1> %0
}
define <vscale x 4 x i1> @whilewr_64_expand(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_64_expand:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: subs x8, x1, x0
-; CHECK-SVE2-NEXT: ptrue p0.d
-; CHECK-SVE2-NEXT: add x9, x8, #7
-; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE2-NEXT: asr x8, x8, #3
-; CHECK-SVE2-NEXT: mov z1.d, z0.d
-; CHECK-SVE2-NEXT: mov z2.d, x8
-; CHECK-SVE2-NEXT: incd z1.d
-; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
-; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: uzp1 p0.s, p1.s, p0.s
-; CHECK-SVE2-NEXT: whilelo p1.s, xzr, x8
-; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_64_expand:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
-; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_64_expand:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.d, x0, x1
+; CHECK-NEXT: incb x0
+; CHECK-NEXT: whilewr p1.d, x0, x1
+; CHECK-NEXT: uzp1 p0.s, p0.s, p1.s
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 4 x i1> @llvm.loop.dependence.war.mask.nxv4i1(ptr %a, ptr %b, i64 8)
ret <vscale x 4 x i1> %0
}
define <vscale x 8 x i1> @whilewr_64_expand2(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_64_expand2:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: subs x8, x1, x0
-; CHECK-SVE2-NEXT: ptrue p0.d
-; CHECK-SVE2-NEXT: add x9, x8, #7
-; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE2-NEXT: asr x8, x8, #3
-; CHECK-SVE2-NEXT: mov z1.d, z0.d
-; CHECK-SVE2-NEXT: mov z2.d, z0.d
-; CHECK-SVE2-NEXT: mov z3.d, x8
-; CHECK-SVE2-NEXT: incd z1.d
-; CHECK-SVE2-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
-; CHECK-SVE2-NEXT: mov z4.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE2-NEXT: uzp1 p1.s, p1.s, p2.s
-; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: uzp1 p0.s, p3.s, p0.s
-; CHECK-SVE2-NEXT: uzp1 p0.h, p1.h, p0.h
-; CHECK-SVE2-NEXT: whilelo p1.h, xzr, x8
-; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_64_expand2:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
-; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
-; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_64_expand2:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.d, x0, x1
+; CHECK-NEXT: mov x8, x0
+; CHECK-NEXT: addvl x9, x0, #3
+; CHECK-NEXT: incb x8, all, mul #2
+; CHECK-NEXT: incb x0
+; CHECK-NEXT: whilewr p1.d, x9, x1
+; CHECK-NEXT: uzp1 p0.s, p0.s, p0.s
+; CHECK-NEXT: uzp1 p0.h, p0.h, p0.h
+; CHECK-NEXT: whilewr p2.d, x8, x1
+; CHECK-NEXT: punpklo p0.h, p0.b
+; CHECK-NEXT: whilewr p3.d, x0, x1
+; CHECK-NEXT: punpklo p0.h, p0.b
+; CHECK-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-NEXT: uzp1 p0.s, p0.s, p3.s
+; CHECK-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 8)
ret <vscale x 8 x i1> %0
}
define <vscale x 16 x i1> @whilewr_64_expand3(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_64_expand3:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE2-NEXT: addvl sp, sp, #-1
-; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE2-NEXT: .cfi_offset w29, -16
-; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: subs x8, x1, x0
-; CHECK-SVE2-NEXT: ptrue p0.d
-; CHECK-SVE2-NEXT: add x9, x8, #7
-; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE2-NEXT: asr x8, x8, #3
-; CHECK-SVE2-NEXT: mov z1.d, z0.d
-; CHECK-SVE2-NEXT: mov z4.d, z0.d
-; CHECK-SVE2-NEXT: mov z5.d, z0.d
-; CHECK-SVE2-NEXT: mov z2.d, x8
-; CHECK-SVE2-NEXT: incd z1.d
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE2-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
-; CHECK-SVE2-NEXT: mov z3.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
-; CHECK-SVE2-NEXT: incd z1.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
-; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
-; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE2-NEXT: mov z0.d, z3.d
-; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
-; CHECK-SVE2-NEXT: uzp1 p2.s, p4.s, p5.s
-; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: incd z0.d, all, mul #4
-; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p6.s
-; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
-; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: uzp1 p0.s, p7.s, p0.s
-; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p0.h, p2.h, p0.h
-; CHECK-SVE2-NEXT: uzp1 p0.b, p1.b, p0.b
-; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE2-NEXT: addvl sp, sp, #1
-; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_64_expand3:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z1.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: incd z1.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE-NEXT: mov z0.d, z3.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
-; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: incd z0.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
-; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
-; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_64_expand3:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: addvl sp, sp, #-1
+; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_offset w29, -16
+; CHECK-NEXT: whilewr p0.d, x0, x1
+; CHECK-NEXT: mov x8, x0
+; CHECK-NEXT: addvl x9, x0, #3
+; CHECK-NEXT: incb x8, all, mul #2
+; CHECK-NEXT: mov x10, x0
+; CHECK-NEXT: whilewr p1.d, x9, x1
+; CHECK-NEXT: uzp1 p0.s, p0.s, p0.s
+; CHECK-NEXT: addvl x9, x0, #7
+; CHECK-NEXT: incb x10
+; CHECK-NEXT: whilewr p2.d, x9, x1
+; CHECK-NEXT: addvl x9, x0, #6
+; CHECK-NEXT: uzp1 p0.h, p0.h, p0.h
+; CHECK-NEXT: whilewr p3.d, x8, x1
+; CHECK-NEXT: addvl x8, x0, #5
+; CHECK-NEXT: incb x0, all, mul #4
+; CHECK-NEXT: punpklo p0.h, p0.b
+; CHECK-NEXT: whilewr p4.d, x10, x1
+; CHECK-NEXT: whilewr p5.d, x0, x1
+; CHECK-NEXT: punpklo p0.h, p0.b
+; CHECK-NEXT: uzp1 p1.s, p3.s, p1.s
+; CHECK-NEXT: uzp1 p3.s, p5.s, p0.s
+; CHECK-NEXT: uzp1 p0.s, p0.s, p4.s
+; CHECK-NEXT: whilewr p4.d, x9, x1
+; CHECK-NEXT: uzp1 p3.h, p3.h, p0.h
+; CHECK-NEXT: whilewr p5.d, x8, x1
+; CHECK-NEXT: punpklo p3.h, p3.b
+; CHECK-NEXT: uzp1 p2.s, p4.s, p2.s
+; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: punpklo p3.h, p3.b
+; CHECK-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-NEXT: uzp1 p3.s, p3.s, p5.s
+; CHECK-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p1.h, p3.h, p2.h
+; CHECK-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-NEXT: addvl sp, sp, #1
+; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 8)
ret <vscale x 16 x i1> %0
}
define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_64_expand4:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE2-NEXT: addvl sp, sp, #-1
-; CHECK-SVE2-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE2-NEXT: .cfi_offset w29, -16
-; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: subs x8, x1, x0
-; CHECK-SVE2-NEXT: ptrue p0.d
-; CHECK-SVE2-NEXT: add x9, x8, #7
-; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE2-NEXT: addvl x9, x0, #8
-; CHECK-SVE2-NEXT: asr x8, x8, #3
-; CHECK-SVE2-NEXT: mov z1.d, z0.d
-; CHECK-SVE2-NEXT: mov z2.d, z0.d
-; CHECK-SVE2-NEXT: mov z4.d, z0.d
-; CHECK-SVE2-NEXT: mov z5.d, x8
-; CHECK-SVE2-NEXT: incd z1.d
-; CHECK-SVE2-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
-; CHECK-SVE2-NEXT: mov z3.d, z1.d
-; CHECK-SVE2-NEXT: mov z6.d, z2.d
-; CHECK-SVE2-NEXT: mov z7.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
-; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
-; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
-; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE2-NEXT: incd z6.d, all, mul #4
-; CHECK-SVE2-NEXT: incd z7.d, all, mul #4
-; CHECK-SVE2-NEXT: uzp1 p4.s, p5.s, p4.s
-; CHECK-SVE2-NEXT: mov z24.d, z3.d
-; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
-; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
-; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
-; CHECK-SVE2-NEXT: incd z24.d, all, mul #4
-; CHECK-SVE2-NEXT: uzp1 p2.s, p2.s, p7.s
-; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p8.s
-; CHECK-SVE2-NEXT: cmphi p9.d, p0/z, z5.d, z24.d
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: uzp1 p3.h, p4.h, p3.h
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: uzp1 p6.s, p6.s, p9.s
-; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE2-NEXT: subs x8, x1, x9
-; CHECK-SVE2-NEXT: uzp1 p2.h, p2.h, p6.h
-; CHECK-SVE2-NEXT: add x9, x8, #7
-; CHECK-SVE2-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE2-NEXT: uzp1 p2.b, p3.b, p2.b
-; CHECK-SVE2-NEXT: asr x8, x8, #3
-; CHECK-SVE2-NEXT: mov z5.d, x8
-; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
-; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
-; CHECK-SVE2-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
-; CHECK-SVE2-NEXT: cmphi p9.d, p0/z, z5.d, z4.d
-; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z5.d, z3.d
-; CHECK-SVE2-NEXT: cmphi p10.d, p0/z, z5.d, z2.d
-; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z5.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: uzp1 p5.s, p7.s, p5.s
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: uzp1 p7.s, p9.s, p8.s
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p4.s, p10.s, p4.s
-; CHECK-SVE2-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p0.s, p0.s, p6.s
-; CHECK-SVE2-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p5.h, p7.h, p5.h
-; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p0.h, p0.h, p4.h
-; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: whilelo p4.b, xzr, x8
-; CHECK-SVE2-NEXT: uzp1 p3.b, p0.b, p5.b
-; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: sel p0.b, p2, p2.b, p1.b
-; CHECK-SVE2-NEXT: sel p1.b, p3, p3.b, p4.b
-; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: addvl sp, sp, #1
-; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_64_expand4:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: rdvl x9, #8
-; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: add x9, x0, x9
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z1.d
-; CHECK-SVE-NEXT: mov z6.d, z2.d
-; CHECK-SVE-NEXT: mov z7.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE-NEXT: incd z6.d, all, mul #4
-; CHECK-SVE-NEXT: incd z7.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p4.s, p5.s, p4.s
-; CHECK-SVE-NEXT: mov z24.d, z3.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
-; CHECK-SVE-NEXT: incd z24.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p2.s, p2.s, p7.s
-; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p8.s
-; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z5.d, z24.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: uzp1 p3.h, p4.h, p3.h
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p6.s, p6.s, p9.s
-; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: subs x8, x1, x9
-; CHECK-SVE-NEXT: uzp1 p2.h, p2.h, p6.h
-; CHECK-SVE-NEXT: add x9, x8, #7
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: uzp1 p2.b, p3.b, p2.b
-; CHECK-SVE-NEXT: asr x8, x8, #3
-; CHECK-SVE-NEXT: mov z5.d, x8
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
-; CHECK-SVE-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
-; CHECK-SVE-NEXT: cmphi p9.d, p0/z, z5.d, z4.d
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z5.d, z3.d
-; CHECK-SVE-NEXT: cmphi p10.d, p0/z, z5.d, z2.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z5.d, z1.d
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: uzp1 p5.s, p7.s, p5.s
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: uzp1 p7.s, p9.s, p8.s
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p4.s, p10.s, p4.s
-; CHECK-SVE-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.s, p0.s, p6.s
-; CHECK-SVE-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p5.h, p7.h, p5.h
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p0.h, p4.h
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: whilelo p4.b, xzr, x8
-; CHECK-SVE-NEXT: uzp1 p3.b, p0.b, p5.b
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: sel p0.b, p2, p2.b, p1.b
-; CHECK-SVE-NEXT: sel p1.b, p3, p3.b, p4.b
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_64_expand4:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: addvl sp, sp, #-1
+; CHECK-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_offset w29, -16
+; CHECK-NEXT: whilewr p0.d, x0, x1
+; CHECK-NEXT: addvl x8, x0, #3
+; CHECK-NEXT: mov x9, x0
+; CHECK-NEXT: whilewr p1.d, x8, x1
+; CHECK-NEXT: mov x8, x0
+; CHECK-NEXT: addvl x10, x0, #7
+; CHECK-NEXT: uzp1 p0.s, p0.s, p0.s
+; CHECK-NEXT: incb x9, all, mul #2
+; CHECK-NEXT: incb x8
+; CHECK-NEXT: whilewr p2.d, x10, x1
+; CHECK-NEXT: mov x10, x0
+; CHECK-NEXT: uzp1 p0.h, p0.h, p0.h
+; CHECK-NEXT: incb x10, all, mul #4
+; CHECK-NEXT: whilewr p3.d, x9, x1
+; CHECK-NEXT: punpklo p0.h, p0.b
+; CHECK-NEXT: whilewr p4.d, x8, x1
+; CHECK-NEXT: addvl x8, x0, #6
+; CHECK-NEXT: punpklo p0.h, p0.b
+; CHECK-NEXT: whilewr p5.d, x10, x1
+; CHECK-NEXT: uzp1 p1.s, p3.s, p1.s
+; CHECK-NEXT: uzp1 p0.s, p0.s, p4.s
+; CHECK-NEXT: uzp1 p3.s, p5.s, p0.s
+; CHECK-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-NEXT: uzp1 p1.h, p3.h, p0.h
+; CHECK-NEXT: whilewr p3.d, x8, x1
+; CHECK-NEXT: addvl x8, x0, #5
+; CHECK-NEXT: punpklo p1.h, p1.b
+; CHECK-NEXT: whilewr p4.d, x8, x1
+; CHECK-NEXT: addvl x8, x0, #12
+; CHECK-NEXT: punpklo p1.h, p1.b
+; CHECK-NEXT: uzp1 p2.s, p3.s, p2.s
+; CHECK-NEXT: whilewr p3.d, x8, x1
+; CHECK-NEXT: addvl x8, x0, #15
+; CHECK-NEXT: uzp1 p1.s, p1.s, p4.s
+; CHECK-NEXT: uzp1 p3.s, p3.s, p0.s
+; CHECK-NEXT: uzp1 p1.h, p1.h, p2.h
+; CHECK-NEXT: whilewr p2.d, x8, x1
+; CHECK-NEXT: addvl x8, x0, #14
+; CHECK-NEXT: uzp1 p3.h, p3.h, p0.h
+; CHECK-NEXT: whilewr p4.d, x8, x1
+; CHECK-NEXT: addvl x8, x0, #13
+; CHECK-NEXT: punpklo p3.h, p3.b
+; CHECK-NEXT: uzp1 p2.s, p4.s, p2.s
+; CHECK-NEXT: whilewr p4.d, x8, x1
+; CHECK-NEXT: addvl x8, x0, #8
+; CHECK-NEXT: punpklo p3.h, p3.b
+; CHECK-NEXT: whilewr p5.d, x8, x1
+; CHECK-NEXT: addvl x8, x0, #11
+; CHECK-NEXT: uzp1 p3.s, p3.s, p4.s
+; CHECK-NEXT: uzp1 p4.s, p5.s, p0.s
+; CHECK-NEXT: whilewr p5.d, x8, x1
+; CHECK-NEXT: addvl x8, x0, #10
+; CHECK-NEXT: uzp1 p4.h, p4.h, p0.h
+; CHECK-NEXT: whilewr p6.d, x8, x1
+; CHECK-NEXT: addvl x8, x0, #9
+; CHECK-NEXT: punpklo p4.h, p4.b
+; CHECK-NEXT: whilewr p7.d, x8, x1
+; CHECK-NEXT: punpklo p4.h, p4.b
+; CHECK-NEXT: uzp1 p5.s, p6.s, p5.s
+; CHECK-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p4.s, p4.s, p7.s
+; CHECK-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p2.h, p3.h, p2.h
+; CHECK-NEXT: uzp1 p3.h, p4.h, p5.h
+; CHECK-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-NEXT: uzp1 p1.b, p3.b, p2.b
+; CHECK-NEXT: addvl sp, sp, #1
+; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 8)
ret <vscale x 32 x i1> %0
}
define <vscale x 9 x i1> @whilewr_8_widen(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_8_widen:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_8_widen:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: mov z2.d, x8
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, z0.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: incd z0.d, all, mul #4
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: incd z1.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z3.d
-; CHECK-SVE-NEXT: incd z3.d, all, mul #4
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z3.d
-; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p2.s, p5.s, p6.s
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p4.s
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
-; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
-; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_8_widen:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.b, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 9 x i1> @llvm.loop.dependence.war.mask.nxv9i1(ptr %a, ptr %b, i64 1)
ret <vscale x 9 x i1> %0
}
define <vscale x 7 x i1> @whilewr_16_widen(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_16_widen:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_16_widen:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE-NEXT: asr x8, x8, #1
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, z0.d
-; CHECK-SVE-NEXT: mov z3.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: incd z2.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
-; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
-; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_16_widen:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.h, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 7 x i1> @llvm.loop.dependence.war.mask.nxv7i1(ptr %a, ptr %b, i64 2)
ret <vscale x 7 x i1> %0
}
define <vscale x 3 x i1> @whilewr_32_widen(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_32_widen:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_32_widen:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: subs x8, x1, x0
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: add x9, x8, #3
-; CHECK-SVE-NEXT: csel x8, x9, x8, mi
-; CHECK-SVE-NEXT: asr x8, x8, #2
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z2.d, x8
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
-; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_32_widen:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.s, x0, x1
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 3 x i1> @llvm.loop.dependence.war.mask.nxv3i1(ptr %a, ptr %b, i64 4)
ret <vscale x 3 x i1> %0
}
define <vscale x 16 x i1> @whilewr_badimm(ptr %a, ptr %b) {
-; CHECK-SVE2-LABEL: whilewr_badimm:
-; CHECK-SVE2: // %bb.0: // %entry
-; CHECK-SVE2-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE2-NEXT: addvl sp, sp, #-1
-; CHECK-SVE2-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE2-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE2-NEXT: .cfi_offset w29, -16
-; CHECK-SVE2-NEXT: index z0.d, #0, #1
-; CHECK-SVE2-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
-; CHECK-SVE2-NEXT: sub x9, x1, x0
-; CHECK-SVE2-NEXT: movk x8, #21846
-; CHECK-SVE2-NEXT: ptrue p0.d
-; CHECK-SVE2-NEXT: smulh x8, x9, x8
-; CHECK-SVE2-NEXT: mov z1.d, z0.d
-; CHECK-SVE2-NEXT: mov z4.d, z0.d
-; CHECK-SVE2-NEXT: mov z5.d, z0.d
-; CHECK-SVE2-NEXT: incd z1.d
-; CHECK-SVE2-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE2-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE2-NEXT: mov z2.d, x8
-; CHECK-SVE2-NEXT: mov z3.d, z1.d
-; CHECK-SVE2-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
-; CHECK-SVE2-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
-; CHECK-SVE2-NEXT: incd z1.d, all, mul #4
-; CHECK-SVE2-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE2-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
-; CHECK-SVE2-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE2-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
-; CHECK-SVE2-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
-; CHECK-SVE2-NEXT: mov z0.d, z3.d
-; CHECK-SVE2-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
-; CHECK-SVE2-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
-; CHECK-SVE2-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE2-NEXT: incd z0.d, all, mul #4
-; CHECK-SVE2-NEXT: uzp1 p2.s, p4.s, p5.s
-; CHECK-SVE2-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p3.s, p3.s, p6.s
-; CHECK-SVE2-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
-; CHECK-SVE2-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE2-NEXT: cmp x8, #1
-; CHECK-SVE2-NEXT: cset w8, lt
-; CHECK-SVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE2-NEXT: uzp1 p0.s, p7.s, p0.s
-; CHECK-SVE2-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE2-NEXT: uzp1 p0.h, p2.h, p0.h
-; CHECK-SVE2-NEXT: uzp1 p0.b, p1.b, p0.b
-; CHECK-SVE2-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE2-NEXT: addvl sp, sp, #1
-; CHECK-SVE2-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE2-NEXT: ret
-;
-; CHECK-SVE-LABEL: whilewr_badimm:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
-; CHECK-SVE-NEXT: addvl sp, sp, #-1
-; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
-; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
-; CHECK-SVE-NEXT: .cfi_offset w29, -16
-; CHECK-SVE-NEXT: index z0.d, #0, #1
-; CHECK-SVE-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
-; CHECK-SVE-NEXT: sub x9, x1, x0
-; CHECK-SVE-NEXT: movk x8, #21846
-; CHECK-SVE-NEXT: ptrue p0.d
-; CHECK-SVE-NEXT: smulh x8, x9, x8
-; CHECK-SVE-NEXT: mov z1.d, z0.d
-; CHECK-SVE-NEXT: mov z4.d, z0.d
-; CHECK-SVE-NEXT: mov z5.d, z0.d
-; CHECK-SVE-NEXT: incd z1.d
-; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE-NEXT: incd z4.d, all, mul #2
-; CHECK-SVE-NEXT: incd z5.d, all, mul #4
-; CHECK-SVE-NEXT: mov z2.d, x8
-; CHECK-SVE-NEXT: mov z3.d, z1.d
-; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: incd z1.d, all, mul #4
-; CHECK-SVE-NEXT: incd z3.d, all, mul #2
-; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: incd z4.d, all, mul #4
-; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
-; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
-; CHECK-SVE-NEXT: mov z0.d, z3.d
-; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
-; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
-; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-SVE-NEXT: incd z0.d, all, mul #4
-; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
-; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
-; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
-; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w8, lt
-; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
-; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
-; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
-; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
-; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
-; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-SVE-NEXT: addvl sp, sp, #1
-; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
-; CHECK-SVE-NEXT: ret
+; CHECK-LABEL: whilewr_badimm:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: addvl sp, sp, #-1
+; CHECK-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_offset w29, -16
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: mov x8, #6148914691236517205 // =0x5555555555555555
+; CHECK-NEXT: sub x9, x1, x0
+; CHECK-NEXT: movk x8, #21846
+; CHECK-NEXT: ptrue p0.d
+; CHECK-NEXT: smulh x8, x9, x8
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, z0.d
+; CHECK-NEXT: incd z1.d
+; CHECK-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NEXT: incd z4.d, all, mul #2
+; CHECK-NEXT: incd z5.d, all, mul #4
+; CHECK-NEXT: mov z2.d, x8
+; CHECK-NEXT: mov z3.d, z1.d
+; CHECK-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-NEXT: incd z1.d, all, mul #4
+; CHECK-NEXT: incd z3.d, all, mul #2
+; CHECK-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-NEXT: incd z4.d, all, mul #4
+; CHECK-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-NEXT: mov z0.d, z3.d
+; CHECK-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
+; CHECK-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-NEXT: incd z0.d, all, mul #4
+; CHECK-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-NEXT: whilelo p1.b, xzr, x8
+; CHECK-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-NEXT: addvl sp, sp, #1
+; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 3)
ret <vscale x 16 x i1> %0
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable_nosve2.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable_nosve2.ll
new file mode 100644
index 0000000000000..8b5ea0bc3b3ce
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable_nosve2.ll
@@ -0,0 +1,63 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=aarch64 -mattr=+sve %s -o - | FileCheck %s
+
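+; Without SVE2's whilewr, the WAR mask is expanded: cmphi compares an
+; incrementing index vector against the splatted pointer difference,
+; uzp1 narrows the partial predicates, and whilelo/sel OR in an
+; all-true mask when the difference is at most zero (no hazard).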
+define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
+; CHECK-LABEL: whilewr_8:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: addvl sp, sp, #-1
+; CHECK-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_offset w29, -16
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: sub x8, x1, x0
+; CHECK-NEXT: ptrue p0.d
+; CHECK-NEXT: mov z2.d, x8
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z3.d, z0.d
+; CHECK-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-NEXT: incd z0.d, all, mul #4
+; CHECK-NEXT: incd z1.d
+; CHECK-NEXT: incd z3.d, all, mul #2
+; CHECK-NEXT: cmphi p5.d, p0/z, z2.d, z0.d
+; CHECK-NEXT: mov z4.d, z1.d
+; CHECK-NEXT: cmphi p2.d, p0/z, z2.d, z1.d
+; CHECK-NEXT: incd z1.d, all, mul #4
+; CHECK-NEXT: cmphi p3.d, p0/z, z2.d, z3.d
+; CHECK-NEXT: incd z3.d, all, mul #4
+; CHECK-NEXT: incd z4.d, all, mul #2
+; CHECK-NEXT: cmphi p6.d, p0/z, z2.d, z1.d
+; CHECK-NEXT: cmphi p7.d, p0/z, z2.d, z3.d
+; CHECK-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-NEXT: cmphi p4.d, p0/z, z2.d, z4.d
+; CHECK-NEXT: incd z4.d, all, mul #4
+; CHECK-NEXT: uzp1 p2.s, p5.s, p6.s
+; CHECK-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: cmphi p0.d, p0/z, z2.d, z4.d
+; CHECK-NEXT: uzp1 p3.s, p3.s, p4.s
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-NEXT: whilelo p1.b, xzr, x8
+; CHECK-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-NEXT: addvl sp, sp, #1
+; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
+; CHECK-NEXT: ret
+entry:
+ %0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 1)
+ ret <vscale x 16 x i1> %0
+}
>From 5402e276dbbad830a38cbe6e2760b3ffacdfb931 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Fri, 15 Aug 2025 16:53:17 +0100
Subject: [PATCH 41/43] Rebase
---
llvm/test/CodeGen/AArch64/alias_mask.ll | 740 ++++++++++--------
.../CodeGen/AArch64/alias_mask_scalable.ll | 627 ++++++++++-----
2 files changed, 845 insertions(+), 522 deletions(-)
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 750d67c1b8ce7..7646c9d4750e0 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -177,426 +177,480 @@ entry:
ret <64 x i1> %0
}
-define <16 x i1> @whilewr_16_split(ptr %a, ptr %b) {
-; CHECK-LABEL: whilewr_16_split:
+define <16 x i1> @whilewr_16_expand(ptr %a, ptr %b) {
+; CHECK-LABEL: whilewr_16_expand:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: add x8, x0, #16
-; CHECK-NEXT: whilewr p0.h, x0, x1
-; CHECK-NEXT: whilewr p1.h, x8, x1
-; CHECK-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z1.h, p1/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: sub x8, x1, x0
+; CHECK-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NEXT: asr x8, x8, #1
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, z0.d
+; CHECK-NEXT: mov z6.d, z0.d
+; CHECK-NEXT: mov z7.d, z0.d
+; CHECK-NEXT: mov z16.d, z0.d
+; CHECK-NEXT: dup v3.2d, x8
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-NEXT: add z4.d, z4.d, #8 // =0x8
+; CHECK-NEXT: add z5.d, z5.d, #6 // =0x6
+; CHECK-NEXT: add z6.d, z6.d, #4 // =0x4
+; CHECK-NEXT: add z7.d, z7.d, #2 // =0x2
+; CHECK-NEXT: add z16.d, z16.d, #14 // =0xe
+; CHECK-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-NEXT: cmhi v16.2d, v3.2d, v16.2d
+; CHECK-NEXT: cmhi v3.2d, v3.2d, v7.2d
+; CHECK-NEXT: uzp1 v2.4s, v4.4s, v2.4s
+; CHECK-NEXT: uzp1 v4.4s, v6.4s, v5.4s
+; CHECK-NEXT: uzp1 v1.4s, v1.4s, v16.4s
+; CHECK-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NEXT: uzp1 v1.8h, v2.8h, v1.8h
+; CHECK-NEXT: uzp1 v0.8h, v0.8h, v4.8h
; CHECK-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-NEXT: dup v1.16b, w8
+; CHECK-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NEXT: ret
entry:
%0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 2)
ret <16 x i1> %0
}
-define <32 x i1> @whilewr_16_split2(ptr %a, ptr %b) {
-; CHECK-LABEL: whilewr_16_split2:
+define <32 x i1> @whilewr_16_expand2(ptr %a, ptr %b) {
+; CHECK-LABEL: whilewr_16_expand2:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: add x9, x0, #48
-; CHECK-NEXT: add x10, x0, #16
-; CHECK-NEXT: whilewr p0.h, x0, x1
-; CHECK-NEXT: whilewr p1.h, x9, x1
-; CHECK-NEXT: add x9, x0, #32
-; CHECK-NEXT: whilewr p2.h, x9, x1
-; CHECK-NEXT: mov z2.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-NEXT: sub x9, x1, x0
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: sub x10, x9, #32
+; CHECK-NEXT: add x9, x9, x9, lsr #63
+; CHECK-NEXT: add x10, x10, x10, lsr #63
+; CHECK-NEXT: asr x9, x9, #1
+; CHECK-NEXT: asr x10, x10, #1
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z3.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, z0.d
+; CHECK-NEXT: mov z6.d, z0.d
+; CHECK-NEXT: dup v7.2d, x9
+; CHECK-NEXT: dup v16.2d, x10
+; CHECK-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-NEXT: cmp x10, #1
+; CHECK-NEXT: add z3.d, z3.d, #8 // =0x8
+; CHECK-NEXT: add z4.d, z4.d, #6 // =0x6
+; CHECK-NEXT: add z5.d, z5.d, #4 // =0x4
+; CHECK-NEXT: add z6.d, z6.d, #2 // =0x2
+; CHECK-NEXT: cmhi v17.2d, v7.2d, v0.2d
+; CHECK-NEXT: cmhi v18.2d, v16.2d, v0.2d
+; CHECK-NEXT: add z0.d, z0.d, #14 // =0xe
+; CHECK-NEXT: cmhi v19.2d, v7.2d, v1.2d
+; CHECK-NEXT: cmhi v20.2d, v7.2d, v2.2d
+; CHECK-NEXT: cmhi v21.2d, v7.2d, v3.2d
+; CHECK-NEXT: cmhi v22.2d, v7.2d, v4.2d
+; CHECK-NEXT: cmhi v23.2d, v7.2d, v5.2d
+; CHECK-NEXT: cmhi v24.2d, v7.2d, v6.2d
+; CHECK-NEXT: cmhi v1.2d, v16.2d, v1.2d
+; CHECK-NEXT: cmhi v2.2d, v16.2d, v2.2d
+; CHECK-NEXT: cmhi v3.2d, v16.2d, v3.2d
+; CHECK-NEXT: cmhi v4.2d, v16.2d, v4.2d
+; CHECK-NEXT: cmhi v7.2d, v7.2d, v0.2d
+; CHECK-NEXT: cmhi v5.2d, v16.2d, v5.2d
+; CHECK-NEXT: cmhi v6.2d, v16.2d, v6.2d
+; CHECK-NEXT: cset w10, lt
+; CHECK-NEXT: cmhi v0.2d, v16.2d, v0.2d
+; CHECK-NEXT: uzp1 v16.4s, v21.4s, v20.4s
+; CHECK-NEXT: cmp x9, #1
+; CHECK-NEXT: uzp1 v20.4s, v23.4s, v22.4s
+; CHECK-NEXT: uzp1 v17.4s, v17.4s, v24.4s
+; CHECK-NEXT: cset w9, lt
+; CHECK-NEXT: uzp1 v2.4s, v3.4s, v2.4s
+; CHECK-NEXT: uzp1 v3.4s, v19.4s, v7.4s
+; CHECK-NEXT: uzp1 v4.4s, v5.4s, v4.4s
+; CHECK-NEXT: uzp1 v5.4s, v18.4s, v6.4s
+; CHECK-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NEXT: uzp1 v1.8h, v17.8h, v20.8h
+; CHECK-NEXT: uzp1 v3.8h, v16.8h, v3.8h
+; CHECK-NEXT: uzp1 v4.8h, v5.8h, v4.8h
+; CHECK-NEXT: uzp1 v0.8h, v2.8h, v0.8h
+; CHECK-NEXT: dup v2.16b, w9
; CHECK-NEXT: adrp x9, .LCPI11_0
-; CHECK-NEXT: whilewr p3.h, x10, x1
-; CHECK-NEXT: mov z0.h, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z1.h, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z3.h, p3/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NEXT: uzp1 v1.16b, v2.16b, v3.16b
+; CHECK-NEXT: uzp1 v1.16b, v1.16b, v3.16b
+; CHECK-NEXT: dup v3.16b, w10
+; CHECK-NEXT: uzp1 v0.16b, v4.16b, v0.16b
+; CHECK-NEXT: orr v1.16b, v1.16b, v2.16b
; CHECK-NEXT: ldr q2, [x9, :lo12:.LCPI11_0]
-; CHECK-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NEXT: orr v0.16b, v0.16b, v3.16b
; CHECK-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NEXT: shl v0.16b, v0.16b, #7
; CHECK-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-NEXT: ext v2.16b, v0.16b, v0.16b, #8
-; CHECK-NEXT: ext v3.16b, v1.16b, v1.16b, #8
-; CHECK-NEXT: zip1 v0.16b, v0.16b, v2.16b
-; CHECK-NEXT: zip1 v1.16b, v1.16b, v3.16b
-; CHECK-NEXT: addv h0, v0.8h
+; CHECK-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-NEXT: ext v2.16b, v1.16b, v1.16b, #8
+; CHECK-NEXT: ext v3.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: zip1 v1.16b, v1.16b, v2.16b
+; CHECK-NEXT: zip1 v0.16b, v0.16b, v3.16b
; CHECK-NEXT: addv h1, v1.8h
-; CHECK-NEXT: str h0, [x8, #2]
+; CHECK-NEXT: addv h0, v0.8h
; CHECK-NEXT: str h1, [x8]
+; CHECK-NEXT: str h0, [x8, #2]
; CHECK-NEXT: ret
entry:
%0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 2)
ret <32 x i1> %0
}
-define <8 x i1> @whilewr_32_split(ptr %a, ptr %b) {
-; CHECK-LABEL: whilewr_32_split:
+define <8 x i1> @whilewr_32_expand(ptr %a, ptr %b) {
+; CHECK-LABEL: whilewr_32_expand:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.s, x0, x1
-; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov w8, v0.s[1]
-; CHECK-NEXT: mov v1.16b, v0.16b
-; CHECK-NEXT: mov w9, v0.s[2]
-; CHECK-NEXT: mov v1.h[1], w8
-; CHECK-NEXT: mov w8, v0.s[3]
-; CHECK-NEXT: mov v1.h[2], w9
-; CHECK-NEXT: add x9, x0, #16
-; CHECK-NEXT: whilewr p0.s, x9, x1
-; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov v1.h[3], w8
-; CHECK-NEXT: fmov w8, s0
-; CHECK-NEXT: mov w9, v0.s[1]
-; CHECK-NEXT: mov v1.h[4], w8
-; CHECK-NEXT: mov w8, v0.s[2]
-; CHECK-NEXT: mov v1.h[5], w9
-; CHECK-NEXT: mov w9, v0.s[3]
-; CHECK-NEXT: mov v1.h[6], w8
-; CHECK-NEXT: mov v1.h[7], w9
-; CHECK-NEXT: xtn v0.8b, v1.8h
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: add x9, x8, #3
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: asr x8, x8, #2
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z3.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: dup v1.2d, x8
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: add z4.d, z4.d, #6 // =0x6
+; CHECK-NEXT: add z2.d, z2.d, #4 // =0x4
+; CHECK-NEXT: add z3.d, z3.d, #2 // =0x2
+; CHECK-NEXT: cmhi v0.2d, v1.2d, v0.2d
+; CHECK-NEXT: cmhi v4.2d, v1.2d, v4.2d
+; CHECK-NEXT: cmhi v2.2d, v1.2d, v2.2d
+; CHECK-NEXT: cmhi v1.2d, v1.2d, v3.2d
+; CHECK-NEXT: uzp1 v2.4s, v2.4s, v4.4s
+; CHECK-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NEXT: dup v1.8b, w8
+; CHECK-NEXT: uzp1 v0.8h, v0.8h, v2.8h
+; CHECK-NEXT: xtn v0.8b, v0.8h
+; CHECK-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NEXT: ret
entry:
%0 = call <8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 4)
ret <8 x i1> %0
}
-define <16 x i1> @whilewr_32_split2(ptr %a, ptr %b) {
-; CHECK-LABEL: whilewr_32_split2:
+define <16 x i1> @whilewr_32_expand2(ptr %a, ptr %b) {
+; CHECK-LABEL: whilewr_32_expand2:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.s, x0, x1
-; CHECK-NEXT: add x8, x0, #32
-; CHECK-NEXT: whilewr p1.s, x8, x1
-; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z1.s, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov w8, v0.s[1]
-; CHECK-NEXT: mov v2.16b, v0.16b
-; CHECK-NEXT: mov w10, v0.s[2]
-; CHECK-NEXT: mov w9, v1.s[1]
-; CHECK-NEXT: mov v3.16b, v1.16b
-; CHECK-NEXT: mov w11, v1.s[2]
-; CHECK-NEXT: mov v2.h[1], w8
-; CHECK-NEXT: mov w8, v0.s[3]
-; CHECK-NEXT: mov v3.h[1], w9
-; CHECK-NEXT: mov w9, v1.s[3]
-; CHECK-NEXT: mov v2.h[2], w10
-; CHECK-NEXT: add x10, x0, #16
-; CHECK-NEXT: whilewr p0.s, x10, x1
-; CHECK-NEXT: add x10, x0, #48
-; CHECK-NEXT: mov v3.h[2], w11
-; CHECK-NEXT: whilewr p1.s, x10, x1
-; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z1.s, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov v2.h[3], w8
-; CHECK-NEXT: mov v3.h[3], w9
-; CHECK-NEXT: fmov w9, s0
-; CHECK-NEXT: mov w8, v0.s[1]
-; CHECK-NEXT: fmov w10, s1
-; CHECK-NEXT: mov w11, v1.s[1]
-; CHECK-NEXT: mov v2.h[4], w9
-; CHECK-NEXT: mov w9, v0.s[2]
-; CHECK-NEXT: mov v3.h[4], w10
-; CHECK-NEXT: mov w10, v1.s[2]
-; CHECK-NEXT: mov v2.h[5], w8
-; CHECK-NEXT: mov w8, v0.s[3]
-; CHECK-NEXT: mov v3.h[5], w11
-; CHECK-NEXT: mov w11, v1.s[3]
-; CHECK-NEXT: mov v2.h[6], w9
-; CHECK-NEXT: mov v3.h[6], w10
-; CHECK-NEXT: mov v2.h[7], w8
-; CHECK-NEXT: mov v3.h[7], w11
-; CHECK-NEXT: uzp1 v0.16b, v2.16b, v3.16b
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: add x9, x8, #3
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: asr x8, x8, #2
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, z0.d
+; CHECK-NEXT: mov z6.d, z0.d
+; CHECK-NEXT: mov z7.d, z0.d
+; CHECK-NEXT: mov z16.d, z0.d
+; CHECK-NEXT: dup v3.2d, x8
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-NEXT: add z4.d, z4.d, #8 // =0x8
+; CHECK-NEXT: add z5.d, z5.d, #6 // =0x6
+; CHECK-NEXT: add z6.d, z6.d, #4 // =0x4
+; CHECK-NEXT: add z7.d, z7.d, #2 // =0x2
+; CHECK-NEXT: add z16.d, z16.d, #14 // =0xe
+; CHECK-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-NEXT: cmhi v16.2d, v3.2d, v16.2d
+; CHECK-NEXT: cmhi v3.2d, v3.2d, v7.2d
+; CHECK-NEXT: uzp1 v2.4s, v4.4s, v2.4s
+; CHECK-NEXT: uzp1 v4.4s, v6.4s, v5.4s
+; CHECK-NEXT: uzp1 v1.4s, v1.4s, v16.4s
+; CHECK-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NEXT: uzp1 v1.8h, v2.8h, v1.8h
+; CHECK-NEXT: uzp1 v0.8h, v0.8h, v4.8h
+; CHECK-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-NEXT: dup v1.16b, w8
+; CHECK-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NEXT: ret
entry:
%0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 4)
ret <16 x i1> %0
}
-define <32 x i1> @whilewr_32_split3(ptr %a, ptr %b) {
-; CHECK-LABEL: whilewr_32_split3:
+define <32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
+; CHECK-LABEL: whilewr_32_expand3:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: add x9, x0, #96
-; CHECK-NEXT: whilewr p0.s, x0, x1
-; CHECK-NEXT: add x10, x0, #64
-; CHECK-NEXT: whilewr p1.s, x9, x1
-; CHECK-NEXT: add x9, x0, #32
-; CHECK-NEXT: whilewr p3.s, x9, x1
-; CHECK-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p2.s, x10, x1
-; CHECK-NEXT: mov z4.s, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z6.s, p3/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z5.s, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov w11, v0.s[1]
-; CHECK-NEXT: mov w12, v0.s[2]
-; CHECK-NEXT: mov w9, v4.s[1]
-; CHECK-NEXT: mov v1.16b, v4.16b
-; CHECK-NEXT: mov w14, v0.s[3]
-; CHECK-NEXT: mov w13, v6.s[1]
-; CHECK-NEXT: mov v2.16b, v6.16b
-; CHECK-NEXT: // kill: def $q0 killed $q0 killed $z0
-; CHECK-NEXT: mov w15, v4.s[2]
-; CHECK-NEXT: mov w10, v5.s[1]
-; CHECK-NEXT: mov v3.16b, v5.16b
-; CHECK-NEXT: mov w16, v5.s[2]
-; CHECK-NEXT: mov v0.h[1], w11
-; CHECK-NEXT: mov w11, v4.s[3]
-; CHECK-NEXT: mov w17, v5.s[3]
-; CHECK-NEXT: mov v1.h[1], w9
-; CHECK-NEXT: mov w9, v6.s[2]
-; CHECK-NEXT: mov v2.h[1], w13
-; CHECK-NEXT: add x13, x0, #16
-; CHECK-NEXT: mov v3.h[1], w10
-; CHECK-NEXT: whilewr p0.s, x13, x1
-; CHECK-NEXT: add x13, x0, #112
-; CHECK-NEXT: mov v0.h[2], w12
-; CHECK-NEXT: add x12, x0, #48
-; CHECK-NEXT: mov w10, v6.s[3]
-; CHECK-NEXT: whilewr p1.s, x13, x1
-; CHECK-NEXT: mov v1.h[2], w15
-; CHECK-NEXT: mov z4.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov v2.h[2], w9
-; CHECK-NEXT: add x9, x0, #80
-; CHECK-NEXT: whilewr p0.s, x12, x1
-; CHECK-NEXT: mov v3.h[2], w16
-; CHECK-NEXT: whilewr p2.s, x9, x1
-; CHECK-NEXT: mov z5.s, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z7.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov v0.h[3], w14
-; CHECK-NEXT: mov w9, v4.s[1]
-; CHECK-NEXT: mov z6.s, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov v1.h[3], w11
-; CHECK-NEXT: mov v2.h[3], w10
-; CHECK-NEXT: fmov w10, s4
-; CHECK-NEXT: fmov w12, s5
-; CHECK-NEXT: mov v3.h[3], w17
-; CHECK-NEXT: fmov w15, s7
-; CHECK-NEXT: mov w11, v5.s[1]
-; CHECK-NEXT: fmov w14, s6
-; CHECK-NEXT: mov w13, v6.s[1]
-; CHECK-NEXT: mov v1.h[4], w12
-; CHECK-NEXT: mov w12, v7.s[1]
-; CHECK-NEXT: mov v0.h[4], w10
-; CHECK-NEXT: mov v2.h[4], w15
-; CHECK-NEXT: mov w10, v4.s[2]
-; CHECK-NEXT: mov w15, v6.s[2]
-; CHECK-NEXT: mov v3.h[4], w14
-; CHECK-NEXT: mov w14, v5.s[2]
-; CHECK-NEXT: mov v1.h[5], w11
-; CHECK-NEXT: mov w11, v7.s[2]
-; CHECK-NEXT: mov v0.h[5], w9
-; CHECK-NEXT: mov v2.h[5], w12
-; CHECK-NEXT: mov w9, v4.s[3]
-; CHECK-NEXT: mov w12, v5.s[3]
-; CHECK-NEXT: mov v3.h[5], w13
-; CHECK-NEXT: mov w13, v6.s[3]
-; CHECK-NEXT: mov v1.h[6], w14
-; CHECK-NEXT: mov w14, v7.s[3]
-; CHECK-NEXT: mov v0.h[6], w10
-; CHECK-NEXT: mov v2.h[6], w11
-; CHECK-NEXT: mov v3.h[6], w15
-; CHECK-NEXT: mov v1.h[7], w12
-; CHECK-NEXT: mov v0.h[7], w9
+; CHECK-NEXT: subs x9, x1, x0
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: add x10, x9, #3
+; CHECK-NEXT: sub x11, x9, #61
+; CHECK-NEXT: csel x10, x10, x9, mi
+; CHECK-NEXT: subs x9, x9, #64
+; CHECK-NEXT: csel x9, x11, x9, mi
+; CHECK-NEXT: asr x10, x10, #2
+; CHECK-NEXT: asr x9, x9, #2
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z3.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, z0.d
+; CHECK-NEXT: mov z6.d, z0.d
+; CHECK-NEXT: dup v7.2d, x10
+; CHECK-NEXT: dup v16.2d, x9
+; CHECK-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-NEXT: cmp x9, #1
+; CHECK-NEXT: add z3.d, z3.d, #8 // =0x8
+; CHECK-NEXT: add z4.d, z4.d, #6 // =0x6
+; CHECK-NEXT: add z5.d, z5.d, #4 // =0x4
+; CHECK-NEXT: add z6.d, z6.d, #2 // =0x2
+; CHECK-NEXT: cmhi v17.2d, v7.2d, v0.2d
+; CHECK-NEXT: cmhi v18.2d, v16.2d, v0.2d
+; CHECK-NEXT: add z0.d, z0.d, #14 // =0xe
+; CHECK-NEXT: cmhi v19.2d, v7.2d, v1.2d
+; CHECK-NEXT: cmhi v20.2d, v7.2d, v2.2d
+; CHECK-NEXT: cmhi v21.2d, v7.2d, v3.2d
+; CHECK-NEXT: cmhi v22.2d, v7.2d, v4.2d
+; CHECK-NEXT: cmhi v23.2d, v7.2d, v5.2d
+; CHECK-NEXT: cmhi v24.2d, v7.2d, v6.2d
+; CHECK-NEXT: cmhi v1.2d, v16.2d, v1.2d
+; CHECK-NEXT: cmhi v2.2d, v16.2d, v2.2d
+; CHECK-NEXT: cmhi v3.2d, v16.2d, v3.2d
+; CHECK-NEXT: cmhi v4.2d, v16.2d, v4.2d
+; CHECK-NEXT: cmhi v7.2d, v7.2d, v0.2d
+; CHECK-NEXT: cmhi v5.2d, v16.2d, v5.2d
+; CHECK-NEXT: cmhi v6.2d, v16.2d, v6.2d
+; CHECK-NEXT: cset w9, lt
+; CHECK-NEXT: cmhi v0.2d, v16.2d, v0.2d
+; CHECK-NEXT: uzp1 v16.4s, v21.4s, v20.4s
+; CHECK-NEXT: cmp x10, #1
+; CHECK-NEXT: uzp1 v20.4s, v23.4s, v22.4s
+; CHECK-NEXT: uzp1 v17.4s, v17.4s, v24.4s
+; CHECK-NEXT: cset w10, lt
+; CHECK-NEXT: uzp1 v2.4s, v3.4s, v2.4s
+; CHECK-NEXT: uzp1 v3.4s, v19.4s, v7.4s
+; CHECK-NEXT: uzp1 v4.4s, v5.4s, v4.4s
+; CHECK-NEXT: uzp1 v5.4s, v18.4s, v6.4s
+; CHECK-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NEXT: uzp1 v1.8h, v17.8h, v20.8h
+; CHECK-NEXT: uzp1 v3.8h, v16.8h, v3.8h
+; CHECK-NEXT: uzp1 v4.8h, v5.8h, v4.8h
+; CHECK-NEXT: uzp1 v0.8h, v2.8h, v0.8h
+; CHECK-NEXT: dup v2.16b, w10
+; CHECK-NEXT: uzp1 v1.16b, v1.16b, v3.16b
+; CHECK-NEXT: dup v3.16b, w9
; CHECK-NEXT: adrp x9, .LCPI14_0
-; CHECK-NEXT: mov v2.h[7], w14
-; CHECK-NEXT: mov v3.h[7], w13
-; CHECK-NEXT: uzp1 v0.16b, v0.16b, v2.16b
+; CHECK-NEXT: uzp1 v0.16b, v4.16b, v0.16b
+; CHECK-NEXT: orr v1.16b, v1.16b, v2.16b
; CHECK-NEXT: ldr q2, [x9, :lo12:.LCPI14_0]
-; CHECK-NEXT: uzp1 v1.16b, v3.16b, v1.16b
-; CHECK-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NEXT: orr v0.16b, v0.16b, v3.16b
; CHECK-NEXT: shl v1.16b, v1.16b, #7
-; CHECK-NEXT: cmlt v0.16b, v0.16b, #0
+; CHECK-NEXT: shl v0.16b, v0.16b, #7
; CHECK-NEXT: cmlt v1.16b, v1.16b, #0
-; CHECK-NEXT: and v0.16b, v0.16b, v2.16b
+; CHECK-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-NEXT: and v1.16b, v1.16b, v2.16b
-; CHECK-NEXT: ext v3.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT: and v0.16b, v0.16b, v2.16b
; CHECK-NEXT: ext v2.16b, v1.16b, v1.16b, #8
-; CHECK-NEXT: zip1 v0.16b, v0.16b, v3.16b
+; CHECK-NEXT: ext v3.16b, v0.16b, v0.16b, #8
; CHECK-NEXT: zip1 v1.16b, v1.16b, v2.16b
-; CHECK-NEXT: addv h0, v0.8h
+; CHECK-NEXT: zip1 v0.16b, v0.16b, v3.16b
; CHECK-NEXT: addv h1, v1.8h
-; CHECK-NEXT: str h0, [x8]
-; CHECK-NEXT: str h1, [x8, #2]
+; CHECK-NEXT: addv h0, v0.8h
+; CHECK-NEXT: str h1, [x8]
+; CHECK-NEXT: str h0, [x8, #2]
; CHECK-NEXT: ret
entry:
%0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 4)
ret <32 x i1> %0
}
-define <4 x i1> @whilewr_64_split(ptr %a, ptr %b) {
-; CHECK-LABEL: whilewr_64_split:
+define <4 x i1> @whilewr_64_expand(ptr %a, ptr %b) {
+; CHECK-LABEL: whilewr_64_expand:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.d, x0, x1
-; CHECK-NEXT: add x8, x0, #16
-; CHECK-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p0.d, x8, x1
-; CHECK-NEXT: mov z1.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov v0.s[1], v0.s[2]
-; CHECK-NEXT: mov v0.s[2], v1.s[0]
-; CHECK-NEXT: mov v0.s[3], v1.s[2]
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: add x9, x8, #7
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: asr x8, x8, #3
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: dup v2.2d, x8
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: add z1.d, z1.d, #2 // =0x2
+; CHECK-NEXT: cmhi v0.2d, v2.2d, v0.2d
+; CHECK-NEXT: cmhi v1.2d, v2.2d, v1.2d
+; CHECK-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NEXT: dup v1.4h, w8
; CHECK-NEXT: xtn v0.4h, v0.4s
+; CHECK-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NEXT: ret
entry:
%0 = call <4 x i1> @llvm.loop.dependence.war.mask.v4i1(ptr %a, ptr %b, i64 8)
ret <4 x i1> %0
}
-define <8 x i1> @whilewr_64_split2(ptr %a, ptr %b) {
-; CHECK-LABEL: whilewr_64_split2:
+define <8 x i1> @whilewr_64_expand2(ptr %a, ptr %b) {
+; CHECK-LABEL: whilewr_64_expand2:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: add x8, x0, #32
-; CHECK-NEXT: whilewr p0.d, x0, x1
-; CHECK-NEXT: whilewr p1.d, x8, x1
-; CHECK-NEXT: add x8, x0, #16
-; CHECK-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p0.d, x8, x1
-; CHECK-NEXT: add x8, x0, #48
-; CHECK-NEXT: mov z1.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p1.d, x8, x1
-; CHECK-NEXT: mov z2.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov v0.s[1], v0.s[2]
-; CHECK-NEXT: mov z3.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov v1.s[1], v1.s[2]
-; CHECK-NEXT: mov v0.s[2], v2.s[0]
-; CHECK-NEXT: mov v1.s[2], v3.s[0]
-; CHECK-NEXT: mov v0.s[3], v2.s[2]
-; CHECK-NEXT: mov v1.s[3], v3.s[2]
-; CHECK-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: add x9, x8, #7
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: asr x8, x8, #3
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z3.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: dup v1.2d, x8
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: add z4.d, z4.d, #6 // =0x6
+; CHECK-NEXT: add z2.d, z2.d, #4 // =0x4
+; CHECK-NEXT: add z3.d, z3.d, #2 // =0x2
+; CHECK-NEXT: cmhi v0.2d, v1.2d, v0.2d
+; CHECK-NEXT: cmhi v4.2d, v1.2d, v4.2d
+; CHECK-NEXT: cmhi v2.2d, v1.2d, v2.2d
+; CHECK-NEXT: cmhi v1.2d, v1.2d, v3.2d
+; CHECK-NEXT: uzp1 v2.4s, v2.4s, v4.4s
+; CHECK-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NEXT: dup v1.8b, w8
+; CHECK-NEXT: uzp1 v0.8h, v0.8h, v2.8h
; CHECK-NEXT: xtn v0.8b, v0.8h
+; CHECK-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NEXT: ret
entry:
%0 = call <8 x i1> @llvm.loop.dependence.war.mask.v8i1(ptr %a, ptr %b, i64 8)
ret <8 x i1> %0
}
-define <16 x i1> @whilewr_64_split3(ptr %a, ptr %b) {
-; CHECK-LABEL: whilewr_64_split3:
+define <16 x i1> @whilewr_64_expand3(ptr %a, ptr %b) {
+; CHECK-LABEL: whilewr_64_expand3:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: add x8, x0, #96
-; CHECK-NEXT: whilewr p0.d, x0, x1
-; CHECK-NEXT: add x9, x0, #48
-; CHECK-NEXT: whilewr p1.d, x8, x1
-; CHECK-NEXT: add x8, x0, #64
-; CHECK-NEXT: whilewr p2.d, x8, x1
-; CHECK-NEXT: add x8, x0, #32
-; CHECK-NEXT: mov z0.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p0.d, x8, x1
-; CHECK-NEXT: add x8, x0, #112
-; CHECK-NEXT: mov z1.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p1.d, x8, x1
-; CHECK-NEXT: add x8, x0, #80
-; CHECK-NEXT: mov z2.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z3.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p2.d, x8, x1
-; CHECK-NEXT: add x8, x0, #16
-; CHECK-NEXT: whilewr p0.d, x8, x1
-; CHECK-NEXT: mov z4.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov v1.s[1], v1.s[2]
-; CHECK-NEXT: whilewr p1.d, x9, x1
-; CHECK-NEXT: mov v0.s[1], v0.s[2]
-; CHECK-NEXT: mov v2.s[1], v2.s[2]
-; CHECK-NEXT: mov v3.s[1], v3.s[2]
-; CHECK-NEXT: mov z5.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z6.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z7.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov v1.s[2], v4.s[0]
-; CHECK-NEXT: mov v0.s[2], v6.s[0]
-; CHECK-NEXT: mov v2.s[2], v5.s[0]
-; CHECK-NEXT: mov v3.s[2], v7.s[0]
-; CHECK-NEXT: mov v1.s[3], v4.s[2]
-; CHECK-NEXT: mov v2.s[3], v5.s[2]
-; CHECK-NEXT: mov v0.s[3], v6.s[2]
-; CHECK-NEXT: mov v3.s[3], v7.s[2]
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: add x9, x8, #7
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: asr x8, x8, #3
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, z0.d
+; CHECK-NEXT: mov z6.d, z0.d
+; CHECK-NEXT: mov z7.d, z0.d
+; CHECK-NEXT: mov z16.d, z0.d
+; CHECK-NEXT: dup v3.2d, x8
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-NEXT: add z4.d, z4.d, #8 // =0x8
+; CHECK-NEXT: add z5.d, z5.d, #6 // =0x6
+; CHECK-NEXT: add z6.d, z6.d, #4 // =0x4
+; CHECK-NEXT: add z7.d, z7.d, #2 // =0x2
+; CHECK-NEXT: add z16.d, z16.d, #14 // =0xe
+; CHECK-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-NEXT: cmhi v16.2d, v3.2d, v16.2d
+; CHECK-NEXT: cmhi v3.2d, v3.2d, v7.2d
+; CHECK-NEXT: uzp1 v2.4s, v4.4s, v2.4s
+; CHECK-NEXT: uzp1 v4.4s, v6.4s, v5.4s
+; CHECK-NEXT: uzp1 v1.4s, v1.4s, v16.4s
+; CHECK-NEXT: uzp1 v0.4s, v0.4s, v3.4s
; CHECK-NEXT: uzp1 v1.8h, v2.8h, v1.8h
-; CHECK-NEXT: uzp1 v0.8h, v0.8h, v3.8h
+; CHECK-NEXT: uzp1 v0.8h, v0.8h, v4.8h
; CHECK-NEXT: uzp1 v0.16b, v0.16b, v1.16b
+; CHECK-NEXT: dup v1.16b, w8
+; CHECK-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NEXT: ret
entry:
%0 = call <16 x i1> @llvm.loop.dependence.war.mask.v16i1(ptr %a, ptr %b, i64 8)
ret <16 x i1> %0
}
-define <32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
-; CHECK-LABEL: whilewr_64_split4:
+define <32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
+; CHECK-LABEL: whilewr_64_expand4:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: add x9, x0, #96
-; CHECK-NEXT: whilewr p2.d, x0, x1
-; CHECK-NEXT: whilewr p1.d, x9, x1
-; CHECK-NEXT: add x9, x0, #112
-; CHECK-NEXT: whilewr p0.d, x9, x1
-; CHECK-NEXT: add x9, x0, #64
-; CHECK-NEXT: mov z1.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z0.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p1.d, x9, x1
-; CHECK-NEXT: add x9, x0, #32
-; CHECK-NEXT: whilewr p3.d, x9, x1
-; CHECK-NEXT: add x9, x0, #80
-; CHECK-NEXT: mov z3.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p0.d, x9, x1
-; CHECK-NEXT: add x9, x0, #224
-; CHECK-NEXT: mov z2.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p1.d, x9, x1
-; CHECK-NEXT: add x9, x0, #240
-; CHECK-NEXT: mov z4.d, p3/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p2.d, x9, x1
-; CHECK-NEXT: add x9, x0, #192
-; CHECK-NEXT: mov v0.s[1], v0.s[2]
-; CHECK-NEXT: mov z5.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p1.d, x9, x1
-; CHECK-NEXT: add x9, x0, #208
-; CHECK-NEXT: mov z6.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p2.d, x9, x1
-; CHECK-NEXT: add x9, x0, #160
-; CHECK-NEXT: mov z7.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p1.d, x9, x1
-; CHECK-NEXT: add x9, x0, #128
-; CHECK-NEXT: mov z16.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p2.d, x9, x1
-; CHECK-NEXT: add x9, x0, #176
-; CHECK-NEXT: mov z17.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p1.d, x9, x1
-; CHECK-NEXT: add x9, x0, #144
-; CHECK-NEXT: mov z18.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p2.d, x9, x1
-; CHECK-NEXT: add x9, x0, #16
-; CHECK-NEXT: mov z19.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p1.d, x9, x1
-; CHECK-NEXT: add x9, x0, #48
-; CHECK-NEXT: mov z20.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: whilewr p2.d, x9, x1
-; CHECK-NEXT: mov v5.s[1], v5.s[2]
-; CHECK-NEXT: mov v7.s[1], v7.s[2]
-; CHECK-NEXT: mov v17.s[1], v17.s[2]
-; CHECK-NEXT: mov v18.s[1], v18.s[2]
-; CHECK-NEXT: mov v2.s[1], v2.s[2]
-; CHECK-NEXT: mov v1.s[1], v1.s[2]
-; CHECK-NEXT: adrp x9, .LCPI18_0
-; CHECK-NEXT: mov v4.s[1], v4.s[2]
-; CHECK-NEXT: mov z21.d, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z22.d, p1/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov z23.d, p2/z, #-1 // =0xffffffffffffffff
-; CHECK-NEXT: mov v0.s[2], v3.s[0]
-; CHECK-NEXT: mov v5.s[2], v6.s[0]
-; CHECK-NEXT: mov v7.s[2], v16.s[0]
-; CHECK-NEXT: mov v17.s[2], v19.s[0]
-; CHECK-NEXT: mov v18.s[2], v20.s[0]
-; CHECK-NEXT: mov v2.s[2], v21.s[0]
-; CHECK-NEXT: mov v1.s[2], v22.s[0]
-; CHECK-NEXT: mov v4.s[2], v23.s[0]
-; CHECK-NEXT: mov v0.s[3], v3.s[2]
-; CHECK-NEXT: mov v5.s[3], v6.s[2]
-; CHECK-NEXT: mov v7.s[3], v16.s[2]
-; CHECK-NEXT: mov v17.s[3], v19.s[2]
-; CHECK-NEXT: mov v18.s[3], v20.s[2]
-; CHECK-NEXT: mov v2.s[3], v21.s[2]
-; CHECK-NEXT: mov v1.s[3], v22.s[2]
-; CHECK-NEXT: mov v4.s[3], v23.s[2]
-; CHECK-NEXT: uzp1 v3.8h, v7.8h, v5.8h
-; CHECK-NEXT: uzp1 v5.8h, v18.8h, v17.8h
+; CHECK-NEXT: subs x9, x1, x0
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: add x10, x9, #7
+; CHECK-NEXT: sub x11, x9, #121
+; CHECK-NEXT: csel x10, x10, x9, mi
+; CHECK-NEXT: subs x9, x9, #128
+; CHECK-NEXT: csel x9, x11, x9, mi
+; CHECK-NEXT: asr x10, x10, #3
+; CHECK-NEXT: asr x9, x9, #3
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z3.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, z0.d
+; CHECK-NEXT: mov z6.d, z0.d
+; CHECK-NEXT: dup v7.2d, x10
+; CHECK-NEXT: dup v16.2d, x9
+; CHECK-NEXT: add z1.d, z1.d, #12 // =0xc
+; CHECK-NEXT: add z2.d, z2.d, #10 // =0xa
+; CHECK-NEXT: cmp x9, #1
+; CHECK-NEXT: add z3.d, z3.d, #8 // =0x8
+; CHECK-NEXT: add z4.d, z4.d, #6 // =0x6
+; CHECK-NEXT: add z5.d, z5.d, #4 // =0x4
+; CHECK-NEXT: add z6.d, z6.d, #2 // =0x2
+; CHECK-NEXT: cmhi v17.2d, v7.2d, v0.2d
+; CHECK-NEXT: cmhi v18.2d, v16.2d, v0.2d
+; CHECK-NEXT: add z0.d, z0.d, #14 // =0xe
+; CHECK-NEXT: cmhi v19.2d, v7.2d, v1.2d
+; CHECK-NEXT: cmhi v20.2d, v7.2d, v2.2d
+; CHECK-NEXT: cmhi v21.2d, v7.2d, v3.2d
+; CHECK-NEXT: cmhi v22.2d, v7.2d, v4.2d
+; CHECK-NEXT: cmhi v23.2d, v7.2d, v5.2d
+; CHECK-NEXT: cmhi v24.2d, v7.2d, v6.2d
+; CHECK-NEXT: cmhi v1.2d, v16.2d, v1.2d
+; CHECK-NEXT: cmhi v2.2d, v16.2d, v2.2d
+; CHECK-NEXT: cmhi v3.2d, v16.2d, v3.2d
+; CHECK-NEXT: cmhi v4.2d, v16.2d, v4.2d
+; CHECK-NEXT: cmhi v7.2d, v7.2d, v0.2d
+; CHECK-NEXT: cmhi v5.2d, v16.2d, v5.2d
+; CHECK-NEXT: cmhi v6.2d, v16.2d, v6.2d
+; CHECK-NEXT: cset w9, lt
+; CHECK-NEXT: cmhi v0.2d, v16.2d, v0.2d
+; CHECK-NEXT: uzp1 v16.4s, v21.4s, v20.4s
+; CHECK-NEXT: cmp x10, #1
+; CHECK-NEXT: uzp1 v20.4s, v23.4s, v22.4s
+; CHECK-NEXT: uzp1 v17.4s, v17.4s, v24.4s
+; CHECK-NEXT: cset w10, lt
+; CHECK-NEXT: uzp1 v2.4s, v3.4s, v2.4s
+; CHECK-NEXT: uzp1 v3.4s, v19.4s, v7.4s
+; CHECK-NEXT: uzp1 v4.4s, v5.4s, v4.4s
+; CHECK-NEXT: uzp1 v5.4s, v18.4s, v6.4s
+; CHECK-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NEXT: uzp1 v1.8h, v17.8h, v20.8h
+; CHECK-NEXT: uzp1 v3.8h, v16.8h, v3.8h
+; CHECK-NEXT: uzp1 v4.8h, v5.8h, v4.8h
; CHECK-NEXT: uzp1 v0.8h, v2.8h, v0.8h
-; CHECK-NEXT: uzp1 v1.8h, v1.8h, v4.8h
-; CHECK-NEXT: uzp1 v2.16b, v5.16b, v3.16b
-; CHECK-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NEXT: shl v1.16b, v2.16b, #7
+; CHECK-NEXT: dup v2.16b, w10
+; CHECK-NEXT: uzp1 v1.16b, v1.16b, v3.16b
+; CHECK-NEXT: dup v3.16b, w9
+; CHECK-NEXT: adrp x9, .LCPI18_0
+; CHECK-NEXT: uzp1 v0.16b, v4.16b, v0.16b
+; CHECK-NEXT: orr v1.16b, v1.16b, v2.16b
; CHECK-NEXT: ldr q2, [x9, :lo12:.LCPI18_0]
+; CHECK-NEXT: orr v0.16b, v0.16b, v3.16b
+; CHECK-NEXT: shl v1.16b, v1.16b, #7
; CHECK-NEXT: shl v0.16b, v0.16b, #7
; CHECK-NEXT: cmlt v1.16b, v1.16b, #0
; CHECK-NEXT: cmlt v0.16b, v0.16b, #0
@@ -608,8 +662,8 @@ define <32 x i1> @whilewr_64_split4(ptr %a, ptr %b) {
; CHECK-NEXT: zip1 v0.16b, v0.16b, v3.16b
; CHECK-NEXT: addv h1, v1.8h
; CHECK-NEXT: addv h0, v0.8h
-; CHECK-NEXT: str h1, [x8, #2]
-; CHECK-NEXT: str h0, [x8]
+; CHECK-NEXT: str h1, [x8]
+; CHECK-NEXT: str h0, [x8, #2]
; CHECK-NEXT: ret
entry:
%0 = call <32 x i1> @llvm.loop.dependence.war.mask.v32i1(ptr %a, ptr %b, i64 8)
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index ef2f729da1a62..179dcfa11c108 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -113,10 +113,58 @@ entry:
define <vscale x 16 x i1> @whilewr_16_expand(ptr %a, ptr %b) {
; CHECK-LABEL: whilewr_16_expand:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.h, x0, x1
-; CHECK-NEXT: incb x0
-; CHECK-NEXT: whilewr p1.h, x0, x1
-; CHECK-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: addvl sp, sp, #-1
+; CHECK-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_offset w29, -16
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: sub x8, x1, x0
+; CHECK-NEXT: ptrue p0.d
+; CHECK-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NEXT: asr x8, x8, #1
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, z0.d
+; CHECK-NEXT: mov z2.d, x8
+; CHECK-NEXT: incd z1.d
+; CHECK-NEXT: incd z4.d, all, mul #2
+; CHECK-NEXT: incd z5.d, all, mul #4
+; CHECK-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-NEXT: mov z3.d, z1.d
+; CHECK-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-NEXT: incd z1.d, all, mul #4
+; CHECK-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-NEXT: incd z4.d, all, mul #4
+; CHECK-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-NEXT: incd z3.d, all, mul #2
+; CHECK-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
+; CHECK-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-NEXT: mov z0.d, z3.d
+; CHECK-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: incd z0.d, all, mul #4
+; CHECK-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-NEXT: whilelo p1.b, xzr, x8
+; CHECK-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-NEXT: addvl sp, sp, #1
+; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 2)
@@ -126,16 +174,89 @@ entry:
define <vscale x 32 x i1> @whilewr_16_expand2(ptr %a, ptr %b) {
; CHECK-LABEL: whilewr_16_expand2:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: mov x8, x0
-; CHECK-NEXT: whilewr p0.h, x0, x1
-; CHECK-NEXT: addvl x9, x0, #3
-; CHECK-NEXT: incb x8
+; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: addvl sp, sp, #-1
+; CHECK-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_offset w29, -16
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: sub x8, x1, x0
; CHECK-NEXT: incb x0, all, mul #2
-; CHECK-NEXT: whilewr p1.h, x9, x1
-; CHECK-NEXT: whilewr p2.h, x8, x1
-; CHECK-NEXT: whilewr p3.h, x0, x1
-; CHECK-NEXT: uzp1 p0.b, p0.b, p2.b
-; CHECK-NEXT: uzp1 p1.b, p3.b, p1.b
+; CHECK-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NEXT: ptrue p0.d
+; CHECK-NEXT: asr x8, x8, #1
+; CHECK-NEXT: sub x9, x1, x0
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z3.d, z0.d
+; CHECK-NEXT: mov z5.d, x8
+; CHECK-NEXT: add x9, x9, x9, lsr #63
+; CHECK-NEXT: incd z1.d
+; CHECK-NEXT: incd z2.d, all, mul #2
+; CHECK-NEXT: incd z3.d, all, mul #4
+; CHECK-NEXT: cmphi p2.d, p0/z, z5.d, z0.d
+; CHECK-NEXT: asr x9, x9, #1
+; CHECK-NEXT: mov z4.d, z1.d
+; CHECK-NEXT: mov z6.d, z1.d
+; CHECK-NEXT: mov z7.d, z2.d
+; CHECK-NEXT: cmphi p1.d, p0/z, z5.d, z1.d
+; CHECK-NEXT: cmphi p3.d, p0/z, z5.d, z3.d
+; CHECK-NEXT: cmphi p5.d, p0/z, z5.d, z2.d
+; CHECK-NEXT: incd z4.d, all, mul #2
+; CHECK-NEXT: incd z6.d, all, mul #4
+; CHECK-NEXT: incd z7.d, all, mul #4
+; CHECK-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-NEXT: mov z24.d, z4.d
+; CHECK-NEXT: cmphi p4.d, p0/z, z5.d, z6.d
+; CHECK-NEXT: cmphi p6.d, p0/z, z5.d, z4.d
+; CHECK-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
+; CHECK-NEXT: incd z24.d, all, mul #4
+; CHECK-NEXT: uzp1 p2.s, p3.s, p4.s
+; CHECK-NEXT: uzp1 p3.s, p5.s, p6.s
+; CHECK-NEXT: cmphi p8.d, p0/z, z5.d, z24.d
+; CHECK-NEXT: mov z5.d, x9
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: cmphi p4.d, p0/z, z5.d, z24.d
+; CHECK-NEXT: cmphi p5.d, p0/z, z5.d, z7.d
+; CHECK-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
+; CHECK-NEXT: uzp1 p7.s, p7.s, p8.s
+; CHECK-NEXT: cmphi p9.d, p0/z, z5.d, z3.d
+; CHECK-NEXT: cmphi p3.d, p0/z, z5.d, z4.d
+; CHECK-NEXT: cmphi p8.d, p0/z, z5.d, z2.d
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: uzp1 p2.h, p2.h, p7.h
+; CHECK-NEXT: cmphi p7.d, p0/z, z5.d, z1.d
+; CHECK-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
+; CHECK-NEXT: uzp1 p4.s, p5.s, p4.s
+; CHECK-NEXT: uzp1 p5.s, p9.s, p6.s
+; CHECK-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: whilelo p6.b, xzr, x8
+; CHECK-NEXT: uzp1 p3.s, p8.s, p3.s
+; CHECK-NEXT: cmp x9, #1
+; CHECK-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p0.s, p0.s, p7.s
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p4.h, p5.h, p4.h
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p0.h, p0.h, p3.h
+; CHECK-NEXT: uzp1 p1.b, p1.b, p2.b
+; CHECK-NEXT: uzp1 p2.b, p0.b, p4.b
+; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: whilelo p3.b, xzr, x8
+; CHECK-NEXT: sel p0.b, p1, p1.b, p6.b
+; CHECK-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: sel p1.b, p2, p2.b, p3.b
+; CHECK-NEXT: addvl sp, sp, #1
+; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-NEXT: ret
entry:
%0 = call <vscale x 32 x i1> @llvm.loop.dependence.war.mask.nxv32i1(ptr %a, ptr %b, i64 2)
@@ -145,10 +266,31 @@ entry:
define <vscale x 8 x i1> @whilewr_32_expand(ptr %a, ptr %b) {
; CHECK-LABEL: whilewr_32_expand:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.s, x0, x1
-; CHECK-NEXT: incb x0
-; CHECK-NEXT: whilewr p1.s, x0, x1
-; CHECK-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: ptrue p0.d
+; CHECK-NEXT: add x9, x8, #3
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: asr x8, x8, #2
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z3.d, x8
+; CHECK-NEXT: incd z1.d
+; CHECK-NEXT: incd z2.d, all, mul #2
+; CHECK-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-NEXT: mov z4.d, z1.d
+; CHECK-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-NEXT: incd z4.d, all, mul #2
+; CHECK-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-NEXT: uzp1 p0.h, p1.h, p0.h
+; CHECK-NEXT: whilelo p1.h, xzr, x8
+; CHECK-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 4)
@@ -158,21 +300,59 @@ entry:
define <vscale x 16 x i1> @whilewr_32_expand2(ptr %a, ptr %b) {
; CHECK-LABEL: whilewr_32_expand2:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.s, x0, x1
-; CHECK-NEXT: mov x8, x0
-; CHECK-NEXT: addvl x9, x0, #3
-; CHECK-NEXT: incb x8, all, mul #2
-; CHECK-NEXT: incb x0
-; CHECK-NEXT: whilewr p1.s, x9, x1
-; CHECK-NEXT: uzp1 p0.h, p0.h, p0.h
-; CHECK-NEXT: uzp1 p0.b, p0.b, p0.b
-; CHECK-NEXT: whilewr p2.s, x8, x1
-; CHECK-NEXT: punpklo p0.h, p0.b
-; CHECK-NEXT: whilewr p3.s, x0, x1
-; CHECK-NEXT: punpklo p0.h, p0.b
-; CHECK-NEXT: uzp1 p1.h, p2.h, p1.h
-; CHECK-NEXT: uzp1 p0.h, p0.h, p3.h
-; CHECK-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-NEXT: addvl sp, sp, #-1
+; CHECK-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-NEXT: .cfi_offset w29, -16
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: ptrue p0.d
+; CHECK-NEXT: add x9, x8, #3
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: asr x8, x8, #2
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, z0.d
+; CHECK-NEXT: mov z2.d, x8
+; CHECK-NEXT: incd z1.d
+; CHECK-NEXT: incd z4.d, all, mul #2
+; CHECK-NEXT: incd z5.d, all, mul #4
+; CHECK-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-NEXT: mov z3.d, z1.d
+; CHECK-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-NEXT: incd z1.d, all, mul #4
+; CHECK-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-NEXT: incd z4.d, all, mul #4
+; CHECK-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-NEXT: incd z3.d, all, mul #2
+; CHECK-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
+; CHECK-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-NEXT: mov z0.d, z3.d
+; CHECK-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: incd z0.d, all, mul #4
+; CHECK-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-NEXT: whilelo p1.b, xzr, x8
+; CHECK-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-NEXT: addvl sp, sp, #1
+; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.loop.dependence.war.mask.nxv16i1(ptr %a, ptr %b, i64 4)
@@ -184,43 +364,89 @@ define <vscale x 32 x i1> @whilewr_32_expand3(ptr %a, ptr %b) {
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
+; CHECK-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: whilewr p0.s, x0, x1
-; CHECK-NEXT: mov x8, x0
-; CHECK-NEXT: addvl x9, x0, #3
-; CHECK-NEXT: incb x8, all, mul #2
-; CHECK-NEXT: mov x10, x0
-; CHECK-NEXT: whilewr p1.s, x9, x1
-; CHECK-NEXT: uzp1 p0.h, p0.h, p0.h
-; CHECK-NEXT: addvl x9, x0, #7
-; CHECK-NEXT: incb x10
-; CHECK-NEXT: whilewr p2.s, x9, x1
-; CHECK-NEXT: addvl x9, x0, #6
-; CHECK-NEXT: uzp1 p0.b, p0.b, p0.b
-; CHECK-NEXT: whilewr p3.s, x8, x1
-; CHECK-NEXT: addvl x8, x0, #5
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: ptrue p0.d
+; CHECK-NEXT: add x9, x8, #3
; CHECK-NEXT: incb x0, all, mul #4
-; CHECK-NEXT: punpklo p0.h, p0.b
-; CHECK-NEXT: whilewr p4.s, x10, x1
-; CHECK-NEXT: whilewr p5.s, x0, x1
-; CHECK-NEXT: punpklo p0.h, p0.b
-; CHECK-NEXT: uzp1 p1.h, p3.h, p1.h
-; CHECK-NEXT: uzp1 p3.h, p5.h, p0.h
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: asr x8, x8, #2
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, x8
+; CHECK-NEXT: incd z1.d
+; CHECK-NEXT: incd z2.d, all, mul #2
+; CHECK-NEXT: incd z4.d, all, mul #4
+; CHECK-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
+; CHECK-NEXT: mov z3.d, z1.d
+; CHECK-NEXT: mov z6.d, z2.d
+; CHECK-NEXT: mov z7.d, z1.d
+; CHECK-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
+; CHECK-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
+; CHECK-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
+; CHECK-NEXT: incd z3.d, all, mul #2
+; CHECK-NEXT: incd z6.d, all, mul #4
+; CHECK-NEXT: incd z7.d, all, mul #4
+; CHECK-NEXT: uzp1 p4.s, p5.s, p4.s
+; CHECK-NEXT: mov z24.d, z3.d
+; CHECK-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
+; CHECK-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
+; CHECK-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
+; CHECK-NEXT: incd z24.d, all, mul #4
+; CHECK-NEXT: uzp1 p2.s, p2.s, p7.s
+; CHECK-NEXT: uzp1 p3.s, p3.s, p8.s
+; CHECK-NEXT: cmphi p9.d, p0/z, z5.d, z24.d
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: uzp1 p3.h, p4.h, p3.h
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: uzp1 p6.s, p6.s, p9.s
+; CHECK-NEXT: whilelo p1.b, xzr, x8
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: uzp1 p2.h, p2.h, p6.h
+; CHECK-NEXT: add x9, x8, #3
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: uzp1 p2.b, p3.b, p2.b
+; CHECK-NEXT: asr x8, x8, #2
+; CHECK-NEXT: mov z5.d, x8
+; CHECK-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
+; CHECK-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
+; CHECK-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
+; CHECK-NEXT: cmphi p9.d, p0/z, z5.d, z4.d
+; CHECK-NEXT: cmphi p4.d, p0/z, z5.d, z3.d
+; CHECK-NEXT: cmphi p10.d, p0/z, z5.d, z2.d
+; CHECK-NEXT: cmphi p6.d, p0/z, z5.d, z1.d
+; CHECK-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: uzp1 p5.s, p7.s, p5.s
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: uzp1 p7.s, p9.s, p8.s
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p4.s, p10.s, p4.s
+; CHECK-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p0.s, p0.s, p6.s
+; CHECK-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p5.h, p7.h, p5.h
+; CHECK-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
; CHECK-NEXT: uzp1 p0.h, p0.h, p4.h
-; CHECK-NEXT: whilewr p4.s, x9, x1
-; CHECK-NEXT: uzp1 p3.b, p3.b, p0.b
-; CHECK-NEXT: whilewr p5.s, x8, x1
-; CHECK-NEXT: punpklo p3.h, p3.b
-; CHECK-NEXT: uzp1 p2.h, p4.h, p2.h
-; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-NEXT: punpklo p3.h, p3.b
-; CHECK-NEXT: uzp1 p0.b, p0.b, p1.b
-; CHECK-NEXT: uzp1 p3.h, p3.h, p5.h
+; CHECK-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: whilelo p4.b, xzr, x8
+; CHECK-NEXT: uzp1 p3.b, p0.b, p5.b
; CHECK-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-NEXT: uzp1 p1.b, p3.b, p2.b
+; CHECK-NEXT: sel p0.b, p2, p2.b, p1.b
+; CHECK-NEXT: sel p1.b, p3, p3.b, p4.b
+; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
; CHECK-NEXT: addvl sp, sp, #1
; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-NEXT: ret
@@ -232,10 +458,23 @@ entry:
define <vscale x 4 x i1> @whilewr_64_expand(ptr %a, ptr %b) {
; CHECK-LABEL: whilewr_64_expand:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.d, x0, x1
-; CHECK-NEXT: incb x0
-; CHECK-NEXT: whilewr p1.d, x0, x1
-; CHECK-NEXT: uzp1 p0.s, p0.s, p1.s
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: ptrue p0.d
+; CHECK-NEXT: add x9, x8, #7
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: asr x8, x8, #3
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, x8
+; CHECK-NEXT: incd z1.d
+; CHECK-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: uzp1 p0.s, p1.s, p0.s
+; CHECK-NEXT: whilelo p1.s, xzr, x8
+; CHECK-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-NEXT: ret
entry:
%0 = call <vscale x 4 x i1> @llvm.loop.dependence.war.mask.nxv4i1(ptr %a, ptr %b, i64 8)
@@ -245,21 +484,31 @@ entry:
define <vscale x 8 x i1> @whilewr_64_expand2(ptr %a, ptr %b) {
; CHECK-LABEL: whilewr_64_expand2:
; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.d, x0, x1
-; CHECK-NEXT: mov x8, x0
-; CHECK-NEXT: addvl x9, x0, #3
-; CHECK-NEXT: incb x8, all, mul #2
-; CHECK-NEXT: incb x0
-; CHECK-NEXT: whilewr p1.d, x9, x1
-; CHECK-NEXT: uzp1 p0.s, p0.s, p0.s
-; CHECK-NEXT: uzp1 p0.h, p0.h, p0.h
-; CHECK-NEXT: whilewr p2.d, x8, x1
-; CHECK-NEXT: punpklo p0.h, p0.b
-; CHECK-NEXT: whilewr p3.d, x0, x1
-; CHECK-NEXT: punpklo p0.h, p0.b
-; CHECK-NEXT: uzp1 p1.s, p2.s, p1.s
-; CHECK-NEXT: uzp1 p0.s, p0.s, p3.s
-; CHECK-NEXT: uzp1 p0.h, p0.h, p1.h
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: ptrue p0.d
+; CHECK-NEXT: add x9, x8, #7
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: asr x8, x8, #3
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z3.d, x8
+; CHECK-NEXT: incd z1.d
+; CHECK-NEXT: incd z2.d, all, mul #2
+; CHECK-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-NEXT: mov z4.d, z1.d
+; CHECK-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-NEXT: incd z4.d, all, mul #2
+; CHECK-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-NEXT: uzp1 p0.h, p1.h, p0.h
+; CHECK-NEXT: whilelo p1.h, xzr, x8
+; CHECK-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.loop.dependence.war.mask.nxv8i1(ptr %a, ptr %b, i64 8)
@@ -271,44 +520,55 @@ define <vscale x 16 x i1> @whilewr_64_expand3(ptr %a, ptr %b) {
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
+; CHECK-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: whilewr p0.d, x0, x1
-; CHECK-NEXT: mov x8, x0
-; CHECK-NEXT: addvl x9, x0, #3
-; CHECK-NEXT: incb x8, all, mul #2
-; CHECK-NEXT: mov x10, x0
-; CHECK-NEXT: whilewr p1.d, x9, x1
-; CHECK-NEXT: uzp1 p0.s, p0.s, p0.s
-; CHECK-NEXT: addvl x9, x0, #7
-; CHECK-NEXT: incb x10
-; CHECK-NEXT: whilewr p2.d, x9, x1
-; CHECK-NEXT: addvl x9, x0, #6
-; CHECK-NEXT: uzp1 p0.h, p0.h, p0.h
-; CHECK-NEXT: whilewr p3.d, x8, x1
-; CHECK-NEXT: addvl x8, x0, #5
-; CHECK-NEXT: incb x0, all, mul #4
-; CHECK-NEXT: punpklo p0.h, p0.b
-; CHECK-NEXT: whilewr p4.d, x10, x1
-; CHECK-NEXT: whilewr p5.d, x0, x1
-; CHECK-NEXT: punpklo p0.h, p0.b
-; CHECK-NEXT: uzp1 p1.s, p3.s, p1.s
-; CHECK-NEXT: uzp1 p3.s, p5.s, p0.s
-; CHECK-NEXT: uzp1 p0.s, p0.s, p4.s
-; CHECK-NEXT: whilewr p4.d, x9, x1
-; CHECK-NEXT: uzp1 p3.h, p3.h, p0.h
-; CHECK-NEXT: whilewr p5.d, x8, x1
-; CHECK-NEXT: punpklo p3.h, p3.b
-; CHECK-NEXT: uzp1 p2.s, p4.s, p2.s
-; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-NEXT: punpklo p3.h, p3.b
-; CHECK-NEXT: uzp1 p0.h, p0.h, p1.h
-; CHECK-NEXT: uzp1 p3.s, p3.s, p5.s
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: ptrue p0.d
+; CHECK-NEXT: add x9, x8, #7
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: asr x8, x8, #3
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, z0.d
+; CHECK-NEXT: mov z2.d, x8
+; CHECK-NEXT: incd z1.d
+; CHECK-NEXT: incd z4.d, all, mul #2
+; CHECK-NEXT: incd z5.d, all, mul #4
+; CHECK-NEXT: cmphi p2.d, p0/z, z2.d, z0.d
+; CHECK-NEXT: mov z3.d, z1.d
+; CHECK-NEXT: cmphi p1.d, p0/z, z2.d, z1.d
+; CHECK-NEXT: incd z1.d, all, mul #4
+; CHECK-NEXT: cmphi p3.d, p0/z, z2.d, z4.d
+; CHECK-NEXT: incd z4.d, all, mul #4
+; CHECK-NEXT: cmphi p4.d, p0/z, z2.d, z5.d
+; CHECK-NEXT: incd z3.d, all, mul #2
+; CHECK-NEXT: cmphi p5.d, p0/z, z2.d, z1.d
+; CHECK-NEXT: cmphi p7.d, p0/z, z2.d, z4.d
+; CHECK-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-NEXT: mov z0.d, z3.d
+; CHECK-NEXT: cmphi p6.d, p0/z, z2.d, z3.d
+; CHECK-NEXT: uzp1 p2.s, p4.s, p5.s
; CHECK-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
-; CHECK-NEXT: uzp1 p1.h, p3.h, p2.h
-; CHECK-NEXT: uzp1 p0.b, p0.b, p1.b
+; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: incd z0.d, all, mul #4
+; CHECK-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: cmphi p0.d, p0/z, z2.d, z0.d
+; CHECK-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-NEXT: uzp1 p0.b, p1.b, p0.b
+; CHECK-NEXT: whilelo p1.b, xzr, x8
+; CHECK-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-NEXT: addvl sp, sp, #1
; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-NEXT: ret
@@ -322,80 +582,89 @@ define <vscale x 32 x i1> @whilewr_64_expand4(ptr %a, ptr %b) {
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
; CHECK-NEXT: addvl sp, sp, #-1
+; CHECK-NEXT: str p10, [sp, #1, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p9, [sp, #2, mul vl] // 2-byte Folded Spill
+; CHECK-NEXT: str p8, [sp, #3, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
; CHECK-NEXT: .cfi_escape 0x0f, 0x08, 0x8f, 0x10, 0x92, 0x2e, 0x00, 0x38, 0x1e, 0x22 // sp + 16 + 8 * VG
; CHECK-NEXT: .cfi_offset w29, -16
-; CHECK-NEXT: whilewr p0.d, x0, x1
-; CHECK-NEXT: addvl x8, x0, #3
-; CHECK-NEXT: mov x9, x0
-; CHECK-NEXT: whilewr p1.d, x8, x1
-; CHECK-NEXT: mov x8, x0
-; CHECK-NEXT: addvl x10, x0, #7
-; CHECK-NEXT: uzp1 p0.s, p0.s, p0.s
-; CHECK-NEXT: incb x9, all, mul #2
-; CHECK-NEXT: incb x8
-; CHECK-NEXT: whilewr p2.d, x10, x1
-; CHECK-NEXT: mov x10, x0
-; CHECK-NEXT: uzp1 p0.h, p0.h, p0.h
-; CHECK-NEXT: incb x10, all, mul #4
-; CHECK-NEXT: whilewr p3.d, x9, x1
-; CHECK-NEXT: punpklo p0.h, p0.b
-; CHECK-NEXT: whilewr p4.d, x8, x1
-; CHECK-NEXT: addvl x8, x0, #6
-; CHECK-NEXT: punpklo p0.h, p0.b
-; CHECK-NEXT: whilewr p5.d, x10, x1
-; CHECK-NEXT: uzp1 p1.s, p3.s, p1.s
-; CHECK-NEXT: uzp1 p0.s, p0.s, p4.s
-; CHECK-NEXT: uzp1 p3.s, p5.s, p0.s
-; CHECK-NEXT: uzp1 p0.h, p0.h, p1.h
-; CHECK-NEXT: uzp1 p1.h, p3.h, p0.h
-; CHECK-NEXT: whilewr p3.d, x8, x1
-; CHECK-NEXT: addvl x8, x0, #5
-; CHECK-NEXT: punpklo p1.h, p1.b
-; CHECK-NEXT: whilewr p4.d, x8, x1
-; CHECK-NEXT: addvl x8, x0, #12
-; CHECK-NEXT: punpklo p1.h, p1.b
-; CHECK-NEXT: uzp1 p2.s, p3.s, p2.s
-; CHECK-NEXT: whilewr p3.d, x8, x1
-; CHECK-NEXT: addvl x8, x0, #15
-; CHECK-NEXT: uzp1 p1.s, p1.s, p4.s
-; CHECK-NEXT: uzp1 p3.s, p3.s, p0.s
-; CHECK-NEXT: uzp1 p1.h, p1.h, p2.h
-; CHECK-NEXT: whilewr p2.d, x8, x1
-; CHECK-NEXT: addvl x8, x0, #14
-; CHECK-NEXT: uzp1 p3.h, p3.h, p0.h
-; CHECK-NEXT: whilewr p4.d, x8, x1
-; CHECK-NEXT: addvl x8, x0, #13
-; CHECK-NEXT: punpklo p3.h, p3.b
-; CHECK-NEXT: uzp1 p2.s, p4.s, p2.s
-; CHECK-NEXT: whilewr p4.d, x8, x1
-; CHECK-NEXT: addvl x8, x0, #8
-; CHECK-NEXT: punpklo p3.h, p3.b
-; CHECK-NEXT: whilewr p5.d, x8, x1
-; CHECK-NEXT: addvl x8, x0, #11
-; CHECK-NEXT: uzp1 p3.s, p3.s, p4.s
-; CHECK-NEXT: uzp1 p4.s, p5.s, p0.s
-; CHECK-NEXT: whilewr p5.d, x8, x1
-; CHECK-NEXT: addvl x8, x0, #10
-; CHECK-NEXT: uzp1 p4.h, p4.h, p0.h
-; CHECK-NEXT: whilewr p6.d, x8, x1
-; CHECK-NEXT: addvl x8, x0, #9
-; CHECK-NEXT: punpklo p4.h, p4.b
-; CHECK-NEXT: whilewr p7.d, x8, x1
-; CHECK-NEXT: punpklo p4.h, p4.b
-; CHECK-NEXT: uzp1 p5.s, p6.s, p5.s
-; CHECK-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
-; CHECK-NEXT: uzp1 p4.s, p4.s, p7.s
+; CHECK-NEXT: index z0.d, #0, #1
+; CHECK-NEXT: subs x8, x1, x0
+; CHECK-NEXT: ptrue p0.d
+; CHECK-NEXT: add x9, x8, #7
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: addvl x9, x0, #8
+; CHECK-NEXT: asr x8, x8, #3
+; CHECK-NEXT: mov z1.d, z0.d
+; CHECK-NEXT: mov z2.d, z0.d
+; CHECK-NEXT: mov z4.d, z0.d
+; CHECK-NEXT: mov z5.d, x8
+; CHECK-NEXT: incd z1.d
+; CHECK-NEXT: incd z2.d, all, mul #2
+; CHECK-NEXT: incd z4.d, all, mul #4
+; CHECK-NEXT: cmphi p5.d, p0/z, z5.d, z0.d
+; CHECK-NEXT: mov z3.d, z1.d
+; CHECK-NEXT: mov z6.d, z2.d
+; CHECK-NEXT: mov z7.d, z1.d
+; CHECK-NEXT: cmphi p2.d, p0/z, z5.d, z4.d
+; CHECK-NEXT: cmphi p3.d, p0/z, z5.d, z2.d
+; CHECK-NEXT: cmphi p4.d, p0/z, z5.d, z1.d
+; CHECK-NEXT: incd z3.d, all, mul #2
+; CHECK-NEXT: incd z6.d, all, mul #4
+; CHECK-NEXT: incd z7.d, all, mul #4
+; CHECK-NEXT: uzp1 p4.s, p5.s, p4.s
+; CHECK-NEXT: mov z24.d, z3.d
+; CHECK-NEXT: cmphi p6.d, p0/z, z5.d, z6.d
+; CHECK-NEXT: cmphi p7.d, p0/z, z5.d, z7.d
+; CHECK-NEXT: cmphi p8.d, p0/z, z5.d, z3.d
+; CHECK-NEXT: incd z24.d, all, mul #4
+; CHECK-NEXT: uzp1 p2.s, p2.s, p7.s
+; CHECK-NEXT: uzp1 p3.s, p3.s, p8.s
+; CHECK-NEXT: cmphi p9.d, p0/z, z5.d, z24.d
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: uzp1 p3.h, p4.h, p3.h
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: uzp1 p6.s, p6.s, p9.s
+; CHECK-NEXT: whilelo p1.b, xzr, x8
+; CHECK-NEXT: subs x8, x1, x9
+; CHECK-NEXT: uzp1 p2.h, p2.h, p6.h
+; CHECK-NEXT: add x9, x8, #7
+; CHECK-NEXT: csel x8, x9, x8, mi
+; CHECK-NEXT: uzp1 p2.b, p3.b, p2.b
+; CHECK-NEXT: asr x8, x8, #3
+; CHECK-NEXT: mov z5.d, x8
+; CHECK-NEXT: cmphi p5.d, p0/z, z5.d, z24.d
+; CHECK-NEXT: cmphi p7.d, p0/z, z5.d, z6.d
+; CHECK-NEXT: cmphi p8.d, p0/z, z5.d, z7.d
+; CHECK-NEXT: cmphi p9.d, p0/z, z5.d, z4.d
+; CHECK-NEXT: cmphi p4.d, p0/z, z5.d, z3.d
+; CHECK-NEXT: cmphi p10.d, p0/z, z5.d, z2.d
+; CHECK-NEXT: cmphi p6.d, p0/z, z5.d, z1.d
+; CHECK-NEXT: cmphi p0.d, p0/z, z5.d, z0.d
+; CHECK-NEXT: cmp x8, #1
+; CHECK-NEXT: uzp1 p5.s, p7.s, p5.s
+; CHECK-NEXT: cset w8, lt
+; CHECK-NEXT: uzp1 p7.s, p9.s, p8.s
+; CHECK-NEXT: sbfx x8, x8, #0, #1
+; CHECK-NEXT: ldr p9, [sp, #2, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p4.s, p10.s, p4.s
+; CHECK-NEXT: ldr p10, [sp, #1, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p0.s, p0.s, p6.s
+; CHECK-NEXT: ldr p8, [sp, #3, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: uzp1 p5.h, p7.h, p5.h
; CHECK-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
-; CHECK-NEXT: uzp1 p2.h, p3.h, p2.h
-; CHECK-NEXT: uzp1 p3.h, p4.h, p5.h
+; CHECK-NEXT: uzp1 p0.h, p0.h, p4.h
+; CHECK-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: whilelo p4.b, xzr, x8
+; CHECK-NEXT: uzp1 p3.b, p0.b, p5.b
; CHECK-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-NEXT: sel p0.b, p2, p2.b, p1.b
+; CHECK-NEXT: sel p1.b, p3, p3.b, p4.b
; CHECK-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
-; CHECK-NEXT: uzp1 p0.b, p0.b, p1.b
-; CHECK-NEXT: uzp1 p1.b, p3.b, p2.b
; CHECK-NEXT: addvl sp, sp, #1
; CHECK-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-NEXT: ret
>From 5075b5f821c0e3f9f53fecc3cf854a6c9e6ea6bf Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 18 Aug 2025 15:39:27 +0100
Subject: [PATCH 42/43] Remove unneeded lowering cases
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index a8aa1f67e342d..65a6bf431ddc1 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -28331,9 +28331,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
DAG.getNode(ISD::TRUNCATE, DL, MVT::i1, RuntimePStateSM));
return;
}
- case Intrinsic::experimental_vector_match:
- case Intrinsic::loop_dependence_raw_mask:
- case Intrinsic::loop_dependence_war_mask: {
+ case Intrinsic::experimental_vector_match: {
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
>From d85d375be973f4165acc1b4aec12e205807f5f0b Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Tue, 19 Aug 2025 14:57:05 +0100
Subject: [PATCH 43/43] Simplify lang ref again
---
llvm/docs/LangRef.rst | 36 ++++++++++++++----------------------
1 file changed, 14 insertions(+), 22 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 06c06cc7e5333..77fb2eb4f63e0 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -24126,8 +24126,9 @@ Overview:
"""""""""
Given a vector load from %ptrA, followed by a vector store to %ptrB, this
-intrinsic generates a mask where a true lane indicates that the accesses don't
-overlap for that lane.
+intrinsic generates a mask where an active lane indicates that the accesses can
+be made safely without that lane being stored to before it is read. Such a
+hazard can occur when the pointers alias within a vectorised loop iteration.
A write-after-read hazard occurs when a write-after-read sequence for a given
lane in a vector ends up being executed as a read-after-write sequence due to
@@ -24146,13 +24147,12 @@ Semantics:
The intrinsic returns ``poison`` if the distance between ``%ptrA`` and ``%ptrB``
is smaller than ``VF * %elementSize`` and either ``%ptrA + VF * %elementSize``
or ``%ptrB + VF * %elementSize`` wrap.
-The element of the result mask is active when no write-after-read hazard occurs,
-meaning that:
+An element of the result mask is active when loading from %ptrA and then
+storing to %ptrB is safe for that lane, meaning that (see the sketch below):
-* (ptrB - ptrA) <= 0 (guarantees that all lanes are loaded before any stores are
- committed), or
+* (ptrB - ptrA) <= 0 (guarantees that all lanes are loaded before any stores), or
* (ptrB - ptrA) >= elementSize * lane (guarantees that this lane is loaded
- before the store to the same address is committed)
+ before the store to the same address)
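
For illustration, a minimal scalar model of the two conditions above (a sketch
of the documented semantics only, not of the AArch64 lowering; the
``warLaneMask`` name and the explicit ``VF`` parameter are assumptions made for
this example)::

    #include <cstdint>
    #include <vector>

    // Lane L of the write-after-read mask is active iff
    //   (ptrB - ptrA) <= 0, or
    //   (ptrB - ptrA) >= elementSize * L.
    std::vector<bool> warLaneMask(int64_t PtrA, int64_t PtrB,
                                  int64_t ElementSize, unsigned VF) {
      std::vector<bool> Mask(VF);
      int64_t Diff = PtrB - PtrA; // signed distance in bytes
      for (unsigned Lane = 0; Lane < VF; ++Lane)
        Mask[Lane] = Diff <= 0 || Diff >= ElementSize * int64_t(Lane);
      return Mask;
    }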
Examples:
"""""""""
@@ -24184,18 +24184,10 @@ This is an overloaded intrinsic.
Overview:
"""""""""
-Given a scalar store to %ptrA, followed by a scalar load from %ptrB, this
-instruction generates a mask where an active lane indicates that there is no
-read-after-write hazard for this lane and that this lane does not introduce any
-new store-to-load forwarding hazard.
-
-A read-after-write hazard occurs when a read-after-write sequence for a given
-lane in a vector ends up being executed as a write-after-read sequence due to
-the aliasing of pointers.
-
-Note that the case where (ptrB - ptrA) < 0 does not result in any
-read-after-write hazards, but may introduce new store-to-load-forwarding stalls
-where both the store and load partially access the same addresses.
+Given a vector store to %ptrA, followed by a vector load from %ptrB, this
+intrinsic generates a mask where an active lane indicates that the accesses can
+be made safely without that lane being read from before it is stored to. Such a
+hazard can occur when the pointers alias within a vectorised loop iteration.
Arguments:
""""""""""
@@ -24210,11 +24202,11 @@ Semantics:
The intrinsic returns ``poison`` if the distance between ``%ptrA`` and ``%ptrB``
is smaller than ``VF * %elementSize`` and either ``%ptrA + VF * %elementSize``
or ``%ptrB + VF * %elementSize`` wrap.
-The element of the result mask is active when no read-after-write hazard occurs,
-meaning that:
+An element of the result mask is active when storing to %ptrA and then loading
+from %ptrB is safe for that lane, meaning that (see the sketch below):
abs(ptrB - ptrA) >= elementSize * lane (guarantees that the store of this lane
- is committed before loading from this address)
+ occurs before loading from this address)
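
For illustration, the matching scalar model of this condition (again a sketch of
the documented semantics only, not of the lowering; ``rawLaneMask`` and the
explicit ``VF`` parameter are assumptions made for this example)::

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Lane L of the read-after-write mask is active iff
    //   abs(ptrB - ptrA) >= elementSize * L.
    std::vector<bool> rawLaneMask(int64_t PtrA, int64_t PtrB,
                                  int64_t ElementSize, unsigned VF) {
      std::vector<bool> Mask(VF);
      int64_t Dist = std::llabs(PtrB - PtrA); // absolute distance in bytes
      for (unsigned Lane = 0; Lane < VF; ++Lane)
        Mask[Lane] = Dist >= ElementSize * int64_t(Lane);
      return Mask;
    }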
Examples:
"""""""""