[clang] [llvm] [Intrinsics][AArch64] Add intrinsic to mask off aliasing vector lanes (PR #117007)
Sam Tebbs via llvm-commits
llvm-commits at lists.llvm.org
Thu Jan 16 02:25:07 PST 2025
https://github.com/SamTebbs33 updated https://github.com/llvm/llvm-project/pull/117007
>From dc3dba57e8cd76191ba03270d512f9931d4f4600 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Fri, 15 Nov 2024 10:24:46 +0000
Subject: [PATCH 01/15] [Intrinsics][AArch64] Add intrinsic to mask off
aliasing vector lanes
It can be unsafe to load a vector from an address and write a vector to
an address if those two addresses have overlapping lanes within a
vectorised loop iteration.
This PR adds an intrinsic designed to create a mask with lanes disabled
if they overlap between the two pointer arguments, so that only safe
lanes are loaded, operated on and stored.
Along with the two pointer parameters, the intrinsic also takes an
immediate representing the size in bytes of the vector element type, as
well as an immediate i1 that is true if there is a write-after-read
hazard or false if there is a read-after-write hazard.
This will be used by #100579 and replaces the existing lowering for
whilewr, since that lowering is no longer needed now that we have the
intrinsic.
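For example, with the values used in the LangRef text below (%ptrA = 20 is
read from, %ptrB = 23 is stored to, an element size of 1 byte and a
write-after-read hazard), the call and the mask it produces would look
roughly like this sketch:

  ; diff = (23 - 20) / 1 = 3, so lanes 3 and upwards of an 8-lane vector
  ; overlap and are masked off
  %alias.mask = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64(i64 %ptrA, i64 %ptrB, i32 1, i1 1)
  ; %alias.mask is <i1 1, i1 1, i1 1, i1 0, i1 0, i1 0, i1 0, i1 0>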
---
llvm/docs/LangRef.rst | 80 ++
llvm/include/llvm/CodeGen/TargetLowering.h | 5 +
llvm/include/llvm/IR/Intrinsics.td | 5 +
.../SelectionDAG/SelectionDAGBuilder.cpp | 44 +
.../Target/AArch64/AArch64ISelLowering.cpp | 182 +--
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 6 +
.../lib/Target/AArch64/AArch64SVEInstrInfo.td | 9 +-
llvm/lib/Target/AArch64/SVEInstrFormats.td | 10 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 421 +++++++
.../CodeGen/AArch64/alias_mask_scalable.ll | 82 ++
llvm/test/CodeGen/AArch64/whilewr.ll | 1086 -----------------
11 files changed, 714 insertions(+), 1216 deletions(-)
create mode 100644 llvm/test/CodeGen/AArch64/alias_mask.ll
create mode 100644 llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
delete mode 100644 llvm/test/CodeGen/AArch64/whilewr.ll
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 9f4c90ba82a419..c9589d5af8ebbe 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23475,6 +23475,86 @@ Examples:
%active.lane.mask = call <4 x i1> @llvm.get.active.lane.mask.v4i1.i64(i64 %elem0, i64 429)
%wide.masked.load = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %3, i32 4, <4 x i1> %active.lane.mask, <4 x i32> poison)
+.. _int_experimental_get_alias_lane_mask:
+
+'``llvm.experimental.get.alias.lane.mask.*``' Intrinsics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Syntax:
+"""""""
+This is an overloaded intrinsic.
+
+::
+
+ declare <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+
+
+Overview:
+"""""""""
+
+Create a mask representing lanes that do or do not overlap between two pointers across one vector loop iteration.
+
+
+Arguments:
+""""""""""
+
+The first two arguments have the same scalar integer type.
+The third argument is an immediate giving the element size in bytes and the
+fourth is an immediate i1 selecting a write-after-read (true) or
+read-after-write (false) hazard. The result is a vector with the i1 element
+type.
+
+Semantics:
+""""""""""
+
+In the case that ``%writeAfterRead`` is true, the '``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically equivalent
+to:
+
+::
+
+ %diff = (%ptrB - %ptrA) / %elementSize
+ %m[i] = (icmp ult i, %diff) || (%diff <= 0)
+
+Otherwise they are semantically equivalent to:
+
+::
+
+ %diff = abs(%ptrB - %ptrA) / %elementSize
+ %m[i] = (icmp ult i, %diff) || (%diff == 0)
+
+where ``%m`` is a vector (mask) of active/inactive lanes with its elements
+indexed by ``i``, ``%ptrA`` and ``%ptrB`` are the two pointer arguments to
+``llvm.experimental.get.alias.lane.mask.*``, ``%elementSize`` is the i32
+argument, ``abs`` is the absolute difference operation, ``icmp`` is an integer
+compare and ``ult`` the unsigned less-than comparison operator. The subtraction
+of ``%ptrA`` from ``%ptrB`` may be negative. The ``%writeAfterRead`` argument is
+expected to be true if ``%ptrB`` is stored to after ``%ptrA`` is read from.
+The above is equivalent to:
+
+::
+
+ %m = @llvm.experimental.get.alias.lane.mask(%ptrA, %ptrB, %elementSize, %writeAfterRead)
+
+This can, for example, be emitted by the loop vectorizer, in which case
+``%ptrA`` is a pointer that is read from within the loop and ``%ptrB`` is a
+pointer that is stored to within the loop.
+If the difference between these pointers is less than the vector factor, then
+they overlap (alias) within a loop iteration.
+For example, if ``%ptrA`` is 20 and ``%ptrB`` is 23 with a vector factor of 8,
+then lanes 3, 4, 5, 6 and 7 of the vector loaded from ``%ptrA`` share addresses
+with lanes 0, 1, 2, 3 and 4 of the vector stored to at ``%ptrB``.
+An alias mask for these two pointers should be <1, 1, 1, 0, 0, 0, 0, 0> so that
+only the non-overlapping lanes are loaded and stored.
+This operation allows many loops to be vectorised when it would otherwise be
+unsafe to do so.
+
+To account for the fact that only a subset of lanes are operated on in each
+iteration, the loop's induction variable should be incremented by the popcount
+of the mask rather than by the vector factor, as shown at the end of the
+example below.
+
+This mask ``%m`` can, for example, be used in masked load/store instructions.
+
+
+Examples:
+"""""""""
+
+.. code-block:: llvm
+
+ %alias.lane.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64(i64 %ptrA, i64 %ptrB, i32 4, i1 1)
+ %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %alias.lane.mask, <4 x i32> poison)
+ [...]
+ call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %alias.lane.mask)
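+
+The induction variable of the loop can then be advanced by the popcount of the
+alias mask, as described in the semantics above. A sketch of one possible way
+to compute the popcount, continuing the example, is a vector reduction over the
+zero-extended mask:
+
+.. code-block:: llvm
+
+ ; Count the active lanes and step the induction variable %index by that amount.
+ %mask.ext = zext <4 x i1> %alias.lane.mask to <4 x i32>
+ %popcount = call i32 @llvm.vector.reduce.add.v4i32(<4 x i32> %mask.ext)
+ %stride = zext i32 %popcount to i64
+ %index.next = add i64 %index, %stride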
.. _int_experimental_vp_splice:
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index 6a41094ff933b0..0338310fd936df 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -468,6 +468,11 @@ class TargetLoweringBase {
return true;
}
+ /// Return true if the @llvm.experimental.get.alias.lane.mask intrinsic should be expanded using generic code in SelectionDAGBuilder.
+ virtual bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT, unsigned EltSize) const {
+ return true;
+ }
+
virtual bool shouldExpandGetVectorLength(EVT CountVT, unsigned VF,
bool IsScalable) const {
return true;
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index 1ca8c2565ab0b6..5f7073a531283e 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -2363,6 +2363,11 @@ let IntrProperties = [IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<1>>
llvm_i32_ty]>;
}
+def int_experimental_get_alias_lane_mask:
+ DefaultAttrsIntrinsic<[llvm_anyvector_ty],
+ [llvm_anyint_ty, LLVMMatchType<1>, llvm_anyint_ty, llvm_i1_ty],
+ [IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>, ImmArg<ArgIndex<3>>]>;
+
def int_get_active_lane_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
[llvm_anyint_ty, LLVMMatchType<1>],
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 9d729d448502d8..39e84e06a8de60 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8284,6 +8284,50 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
visitVectorExtractLastActive(I, Intrinsic);
return;
}
+ case Intrinsic::experimental_get_alias_lane_mask: {
+ SDValue SourceValue = getValue(I.getOperand(0));
+ SDValue SinkValue = getValue(I.getOperand(1));
+ SDValue EltSize = getValue(I.getOperand(2));
+ bool IsWriteAfterRead = cast<ConstantSDNode>(getValue(I.getOperand(3)))->getZExtValue() != 0;
+ auto IntrinsicVT = EVT::getEVT(I.getType());
+ auto PtrVT = SourceValue->getValueType(0);
+
+ if (!TLI.shouldExpandGetAliasLaneMask(IntrinsicVT, PtrVT, cast<ConstantSDNode>(EltSize)->getSExtValue())) {
+ visitTargetIntrinsic(I, Intrinsic);
+ return;
+ }
+
+ SDValue Diff = DAG.getNode(ISD::SUB, sdl,
+ PtrVT, SinkValue, SourceValue);
+ if (!IsWriteAfterRead)
+ Diff = DAG.getNode(ISD::ABS, sdl, PtrVT, Diff);
+
+ Diff = DAG.getNode(ISD::SDIV, sdl, PtrVT, Diff, EltSize);
+ SDValue Zero = DAG.getTargetConstant(0, sdl, PtrVT);
+
+ // If the difference is positive then some elements may alias
+ auto CmpVT = TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
+ PtrVT);
+ SDValue Cmp = DAG.getSetCC(sdl, CmpVT, Diff, Zero, IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
+
+ // Splat the compare result then OR it with a lane mask
+ SDValue Splat = DAG.getSplat(IntrinsicVT, sdl, Cmp);
+
+ SDValue DiffMask;
+ // Don't emit an active lane mask if the target doesn't support it
+ if (TLI.shouldExpandGetActiveLaneMask(IntrinsicVT, PtrVT)) {
+ EVT VecTy = EVT::getVectorVT(*DAG.getContext(), PtrVT,
+ IntrinsicVT.getVectorElementCount());
+ SDValue DiffSplat = DAG.getSplat(VecTy, sdl, Diff);
+ SDValue VectorStep = DAG.getStepVector(sdl, VecTy);
+ DiffMask = DAG.getSetCC(sdl, IntrinsicVT, VectorStep,
+ DiffSplat, ISD::CondCode::SETULT);
+ } else {
+ DiffMask = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, sdl, IntrinsicVT, DAG.getTargetConstant(Intrinsic::get_active_lane_mask, sdl, MVT::i64), Zero, Diff);
+ }
+ SDValue Or = DAG.getNode(ISD::OR, sdl, IntrinsicVT, DiffMask, Splat);
+ setValue(&I, Or);
+ }
}
}
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 7ab3fc06715ec8..66eaec0d5ae6c9 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -2033,6 +2033,24 @@ bool AArch64TargetLowering::shouldExpandGetActiveLaneMask(EVT ResVT,
return false;
}
+bool AArch64TargetLowering::shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT, unsigned EltSize) const {
+ if (!Subtarget->hasSVE2())
+ return true;
+
+ if (PtrVT != MVT::i64)
+ return true;
+
+ if (VT == MVT::v2i1 || VT == MVT::nxv2i1)
+ return EltSize != 8;
+ if (VT == MVT::v4i1 || VT == MVT::nxv4i1)
+ return EltSize != 4;
+ if (VT == MVT::v8i1 || VT == MVT::nxv8i1)
+ return EltSize != 2;
+ if (VT == MVT::v16i1 || VT == MVT::nxv16i1)
+ return EltSize != 1;
+ return true;
+}
+
bool AArch64TargetLowering::shouldExpandPartialReductionIntrinsic(
const IntrinsicInst *I) const {
if (I->getIntrinsicID() != Intrinsic::experimental_vector_partial_reduce_add)
@@ -2796,6 +2814,8 @@ const char *AArch64TargetLowering::getTargetNodeName(unsigned Opcode) const {
MAKE_CASE(AArch64ISD::LS64_BUILD)
MAKE_CASE(AArch64ISD::LS64_EXTRACT)
MAKE_CASE(AArch64ISD::TBL)
+ MAKE_CASE(AArch64ISD::WHILEWR)
+ MAKE_CASE(AArch64ISD::WHILERW)
MAKE_CASE(AArch64ISD::FADD_PRED)
MAKE_CASE(AArch64ISD::FADDA_PRED)
MAKE_CASE(AArch64ISD::FADDV_PRED)
@@ -5881,6 +5901,16 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
EVT PtrVT = getPointerTy(DAG.getDataLayout());
return DAG.getNode(AArch64ISD::THREAD_POINTER, dl, PtrVT);
}
+ case Intrinsic::aarch64_sve_whilewr_b:
+ case Intrinsic::aarch64_sve_whilewr_h:
+ case Intrinsic::aarch64_sve_whilewr_s:
+ case Intrinsic::aarch64_sve_whilewr_d:
+ return DAG.getNode(AArch64ISD::WHILEWR, dl, Op.getValueType(), Op.getOperand(1), Op.getOperand(2));
+ case Intrinsic::aarch64_sve_whilerw_b:
+ case Intrinsic::aarch64_sve_whilerw_h:
+ case Intrinsic::aarch64_sve_whilerw_s:
+ case Intrinsic::aarch64_sve_whilerw_d:
+ return DAG.getNode(AArch64ISD::WHILERW, dl, Op.getValueType(), Op.getOperand(1), Op.getOperand(2));
case Intrinsic::aarch64_neon_abs: {
EVT Ty = Op.getValueType();
if (Ty == MVT::i64) {
@@ -6340,16 +6370,39 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
return DAG.getNode(AArch64ISD::USDOT, dl, Op.getValueType(),
Op.getOperand(1), Op.getOperand(2), Op.getOperand(3));
}
+ case Intrinsic::experimental_get_alias_lane_mask:
case Intrinsic::get_active_lane_mask: {
+ unsigned IntrinsicID = Intrinsic::aarch64_sve_whilelo;
+ if (IntNo == Intrinsic::experimental_get_alias_lane_mask) {
+ uint64_t EltSize = Op.getOperand(3)->getAsZExtVal();
+ bool IsWriteAfterRead = Op.getOperand(4)->getAsZExtVal() == 1;
+ switch (EltSize) {
+ case 1:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b : Intrinsic::aarch64_sve_whilerw_b;
+ break;
+ case 2:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h : Intrinsic::aarch64_sve_whilerw_h;
+ break;
+ case 4:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s : Intrinsic::aarch64_sve_whilerw_s;
+ break;
+ case 8:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d : Intrinsic::aarch64_sve_whilerw_d;
+ break;
+ default:
+ llvm_unreachable("Unexpected element size for get.alias.lane.mask");
+ break;
+ }
+ }
SDValue ID =
- DAG.getTargetConstant(Intrinsic::aarch64_sve_whilelo, dl, MVT::i64);
+ DAG.getTargetConstant(IntrinsicID, dl, MVT::i64);
EVT VT = Op.getValueType();
if (VT.isScalableVector())
return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, dl, VT, ID, Op.getOperand(1),
Op.getOperand(2));
- // We can use the SVE whilelo instruction to lower this intrinsic by
+ // We can use the SVE whilelo/whilewr/whilerw instruction to lower this intrinsic by
// creating the appropriate sequence of scalable vector operations and
// then extracting a fixed-width subvector from the scalable vector.
@@ -14241,128 +14294,8 @@ static SDValue tryLowerToSLI(SDNode *N, SelectionDAG &DAG) {
return ResultSLI;
}
-/// Try to lower the construction of a pointer alias mask to a WHILEWR.
-/// The mask's enabled lanes represent the elements that will not overlap across
-/// one loop iteration. This tries to match:
-/// or (splat (setcc_lt (sub ptrA, ptrB), -(element_size - 1))),
-/// (get_active_lane_mask 0, (div (sub ptrA, ptrB), element_size))
-SDValue tryWhileWRFromOR(SDValue Op, SelectionDAG &DAG,
- const AArch64Subtarget &Subtarget) {
- if (!Subtarget.hasSVE2())
- return SDValue();
- SDValue LaneMask = Op.getOperand(0);
- SDValue Splat = Op.getOperand(1);
-
- if (Splat.getOpcode() != ISD::SPLAT_VECTOR)
- std::swap(LaneMask, Splat);
-
- if (LaneMask.getOpcode() != ISD::INTRINSIC_WO_CHAIN ||
- LaneMask.getConstantOperandVal(0) != Intrinsic::get_active_lane_mask ||
- Splat.getOpcode() != ISD::SPLAT_VECTOR)
- return SDValue();
-
- SDValue Cmp = Splat.getOperand(0);
- if (Cmp.getOpcode() != ISD::SETCC)
- return SDValue();
-
- CondCodeSDNode *Cond = cast<CondCodeSDNode>(Cmp.getOperand(2));
-
- auto ComparatorConst = dyn_cast<ConstantSDNode>(Cmp.getOperand(1));
- if (!ComparatorConst || ComparatorConst->getSExtValue() > 0 ||
- Cond->get() != ISD::CondCode::SETLT)
- return SDValue();
- unsigned CompValue = std::abs(ComparatorConst->getSExtValue());
- unsigned EltSize = CompValue + 1;
- if (!isPowerOf2_64(EltSize) || EltSize > 8)
- return SDValue();
-
- SDValue Diff = Cmp.getOperand(0);
- if (Diff.getOpcode() != ISD::SUB || Diff.getValueType() != MVT::i64)
- return SDValue();
-
- if (!isNullConstant(LaneMask.getOperand(1)) ||
- (EltSize != 1 && LaneMask.getOperand(2).getOpcode() != ISD::SRA))
- return SDValue();
-
- // The number of elements that alias is calculated by dividing the positive
- // difference between the pointers by the element size. An alias mask for i8
- // elements omits the division because it would just divide by 1
- if (EltSize > 1) {
- SDValue DiffDiv = LaneMask.getOperand(2);
- auto DiffDivConst = dyn_cast<ConstantSDNode>(DiffDiv.getOperand(1));
- if (!DiffDivConst || DiffDivConst->getZExtValue() != Log2_64(EltSize))
- return SDValue();
- if (EltSize > 2) {
- // When masking i32 or i64 elements, the positive value of the
- // possibly-negative difference comes from a select of the difference if
- // it's positive, otherwise the difference plus the element size if it's
- // negative: pos_diff = diff < 0 ? (diff + 7) : diff
- SDValue Select = DiffDiv.getOperand(0);
- // Make sure the difference is being compared by the select
- if (Select.getOpcode() != ISD::SELECT_CC || Select.getOperand(3) != Diff)
- return SDValue();
- // Make sure it's checking if the difference is less than 0
- if (!isNullConstant(Select.getOperand(1)) ||
- cast<CondCodeSDNode>(Select.getOperand(4))->get() !=
- ISD::CondCode::SETLT)
- return SDValue();
- // An add creates a positive value from the negative difference
- SDValue Add = Select.getOperand(2);
- if (Add.getOpcode() != ISD::ADD || Add.getOperand(0) != Diff)
- return SDValue();
- if (auto *AddConst = dyn_cast<ConstantSDNode>(Add.getOperand(1));
- !AddConst || AddConst->getZExtValue() != EltSize - 1)
- return SDValue();
- } else {
- // When masking i16 elements, this positive value comes from adding the
- // difference's sign bit to the difference itself. This is equivalent to
- // the 32 bit and 64 bit case: pos_diff = diff + sign_bit (diff)
- SDValue Add = DiffDiv.getOperand(0);
- if (Add.getOpcode() != ISD::ADD || Add.getOperand(0) != Diff)
- return SDValue();
- // A logical right shift by 63 extracts the sign bit from the difference
- SDValue Shift = Add.getOperand(1);
- if (Shift.getOpcode() != ISD::SRL || Shift.getOperand(0) != Diff)
- return SDValue();
- if (auto *ShiftConst = dyn_cast<ConstantSDNode>(Shift.getOperand(1));
- !ShiftConst || ShiftConst->getZExtValue() != 63)
- return SDValue();
- }
- } else if (LaneMask.getOperand(2) != Diff)
- return SDValue();
-
- SDValue StorePtr = Diff.getOperand(0);
- SDValue ReadPtr = Diff.getOperand(1);
-
- unsigned IntrinsicID = 0;
- switch (EltSize) {
- case 1:
- IntrinsicID = Intrinsic::aarch64_sve_whilewr_b;
- break;
- case 2:
- IntrinsicID = Intrinsic::aarch64_sve_whilewr_h;
- break;
- case 4:
- IntrinsicID = Intrinsic::aarch64_sve_whilewr_s;
- break;
- case 8:
- IntrinsicID = Intrinsic::aarch64_sve_whilewr_d;
- break;
- default:
- return SDValue();
- }
- SDLoc DL(Op);
- SDValue ID = DAG.getConstant(IntrinsicID, DL, MVT::i32);
- return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, Op.getValueType(), ID,
- StorePtr, ReadPtr);
-}
-
SDValue AArch64TargetLowering::LowerVectorOR(SDValue Op,
SelectionDAG &DAG) const {
- if (SDValue SV =
- tryWhileWRFromOR(Op, DAG, DAG.getSubtarget<AArch64Subtarget>()))
- return SV;
-
if (useSVEForFixedLengthVectorVT(Op.getValueType(),
!Subtarget->isNeonAvailable()))
return LowerToScalableOp(Op, DAG);
@@ -19609,7 +19542,9 @@ static bool isPredicateCCSettingOp(SDValue N) {
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilels ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilelt ||
// get_active_lane_mask is lowered to a whilelo instruction.
- N.getConstantOperandVal(0) == Intrinsic::get_active_lane_mask)))
+ N.getConstantOperandVal(0) == Intrinsic::get_active_lane_mask ||
+ // get_alias_lane_mask is lowered to a whilewr/rw instruction.
+ N.getConstantOperandVal(0) == Intrinsic::experimental_get_alias_lane_mask)))
return true;
return false;
@@ -27175,6 +27110,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
return;
}
case Intrinsic::experimental_vector_match:
+ case Intrinsic::experimental_get_alias_lane_mask:
case Intrinsic::get_active_lane_mask: {
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index cb0b9e965277aa..b2f766b22911ff 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -297,6 +297,10 @@ enum NodeType : unsigned {
SMAXV,
UMAXV,
+ // Alias lane masks
+ WHILEWR,
+ WHILERW,
+
SADDV_PRED,
UADDV_PRED,
SMAXV_PRED,
@@ -980,6 +984,8 @@ class AArch64TargetLowering : public TargetLowering {
bool shouldExpandGetActiveLaneMask(EVT VT, EVT OpVT) const override;
+ bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT, unsigned EltSize) const override;
+
bool
shouldExpandPartialReductionIntrinsic(const IntrinsicInst *I) const override;
diff --git a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
index 564fb33758ad57..99b1e0618ab34b 100644
--- a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
@@ -140,6 +140,11 @@ def AArch64st1q_scatter : SDNode<"AArch64ISD::SST1Q_PRED", SDT_AArch64_SCATTER_V
// AArch64 SVE/SVE2 - the remaining node definitions
//
+// Alias masks
+def SDT_AArch64Mask : SDTypeProfile<1, 2, [SDTCisVec<0>, SDTCisInt<1>, SDTCisSameAs<2, 1>, SDTCVecEltisVT<0,i1>]>;
+def AArch64whilewr : SDNode<"AArch64ISD::WHILEWR", SDT_AArch64Mask>;
+def AArch64whilerw : SDNode<"AArch64ISD::WHILERW", SDT_AArch64Mask>;
+
// SVE CNT/INC/RDVL
def sve_rdvl_imm : ComplexPattern<i64, 1, "SelectRDVLImm<-32, 31, 16>">;
def sve_cnth_imm : ComplexPattern<i64, 1, "SelectRDVLImm<1, 16, 8>">;
@@ -3913,8 +3918,8 @@ let Predicates = [HasSVE2orSME] in {
defm WHILEHI_PXX : sve_int_while8_rr<0b101, "whilehi", int_aarch64_sve_whilehi, int_aarch64_sve_whilelo>;
// SVE2 pointer conflict compare
- defm WHILEWR_PXX : sve2_int_while_rr<0b0, "whilewr", "int_aarch64_sve_whilewr">;
- defm WHILERW_PXX : sve2_int_while_rr<0b1, "whilerw", "int_aarch64_sve_whilerw">;
+ defm WHILEWR_PXX : sve2_int_while_rr<0b0, "whilewr", AArch64whilewr>;
+ defm WHILERW_PXX : sve2_int_while_rr<0b1, "whilerw", AArch64whilerw>;
} // End HasSVE2orSME
let Predicates = [HasSVEAES, HasNonStreamingSVE2orSSVE_AES] in {
diff --git a/llvm/lib/Target/AArch64/SVEInstrFormats.td b/llvm/lib/Target/AArch64/SVEInstrFormats.td
index 1ddb913f013f5e..a0d5bb1504089d 100644
--- a/llvm/lib/Target/AArch64/SVEInstrFormats.td
+++ b/llvm/lib/Target/AArch64/SVEInstrFormats.td
@@ -5765,16 +5765,16 @@ class sve2_int_while_rr<bits<2> sz8_64, bits<1> rw, string asm,
let isWhile = 1;
}
-multiclass sve2_int_while_rr<bits<1> rw, string asm, string op> {
+multiclass sve2_int_while_rr<bits<1> rw, string asm, SDPatternOperator op> {
def _B : sve2_int_while_rr<0b00, rw, asm, PPR8>;
def _H : sve2_int_while_rr<0b01, rw, asm, PPR16>;
def _S : sve2_int_while_rr<0b10, rw, asm, PPR32>;
def _D : sve2_int_while_rr<0b11, rw, asm, PPR64>;
- def : SVE_2_Op_Pat<nxv16i1, !cast<SDPatternOperator>(op # _b), i64, i64, !cast<Instruction>(NAME # _B)>;
- def : SVE_2_Op_Pat<nxv8i1, !cast<SDPatternOperator>(op # _h), i64, i64, !cast<Instruction>(NAME # _H)>;
- def : SVE_2_Op_Pat<nxv4i1, !cast<SDPatternOperator>(op # _s), i64, i64, !cast<Instruction>(NAME # _S)>;
- def : SVE_2_Op_Pat<nxv2i1, !cast<SDPatternOperator>(op # _d), i64, i64, !cast<Instruction>(NAME # _D)>;
+ def : SVE_2_Op_Pat<nxv16i1, op, i64, i64, !cast<Instruction>(NAME # _B)>;
+ def : SVE_2_Op_Pat<nxv8i1, op, i64, i64, !cast<Instruction>(NAME # _H)>;
+ def : SVE_2_Op_Pat<nxv4i1, op, i64, i64, !cast<Instruction>(NAME # _S)>;
+ def : SVE_2_Op_Pat<nxv2i1, op, i64, i64, !cast<Instruction>(NAME # _D)>;
}
//===----------------------------------------------------------------------===//
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
new file mode 100644
index 00000000000000..59ad2bf82db92e
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -0,0 +1,421 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s | FileCheck %s --check-prefix=CHECK-SVE
+; RUN: llc -mtriple=aarch64 %s | FileCheck %s --check-prefix=CHECK-NOSVE
+
+define <16 x i1> @whilewr_8(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $q0 killed $q0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_8:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_1
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI0_0]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_2
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI0_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x8, :lo12:.LCPI0_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_4
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_3
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI0_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_5
+; CHECK-NOSVE-NEXT: dup v2.2d, x9
+; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI0_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_6
+; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI0_5]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI0_6]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI0_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ ret <16 x i1> %0
+}
+
+define <8 x i1> @whilewr_16(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_16:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI1_1
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI1_2
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI1_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI1_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI1_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI1_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ ret <8 x i1> %0
+}
+
+define <4 x i1> @whilewr_32(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_32:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI2_0
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI2_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI2_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI2_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ ret <4 x i1> %0
+}
+
+define <2 x i1> @whilewr_64(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_64:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI3_0
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI3_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: dup v1.2s, w8
+; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ ret <2 x i1> %0
+}
+
+define <16 x i1> @whilerw_8(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $q0 killed $q0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_8:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_1
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI4_0]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_2
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI4_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_3
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI4_1]
+; CHECK-NOSVE-NEXT: ldr q4, [x8, :lo12:.LCPI4_3]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_4
+; CHECK-NOSVE-NEXT: dup v3.2d, x9
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI4_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_5
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_6
+; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI4_5]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI4_6]
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI4_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v3.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v3.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v2.4s
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v3.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ ret <16 x i1> %0
+}
+
+define <8 x i1> @whilerw_16(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_16:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI5_1
+; CHECK-NOSVE-NEXT: cneg x8, x8, mi
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI5_2
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI5_0]
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI5_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI5_2]
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI5_3]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 0)
+ ret <8 x i1> %0
+}
+
+define <4 x i1> @whilerw_32(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.h, x0, x1
+; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_32:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI6_0
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI6_0]
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI6_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI6_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ ret <4 x i1> %0
+}
+
+define <2 x i1> @whilerw_64(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.s, x0, x1
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_64:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI7_0
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI7_0]
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: dup v1.2s, w8
+; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ ret <2 x i1> %0
+}
+
+define <16 x i1> @not_whilewr_wrong_eltsize(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: not_whilewr_wrong_eltsize:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
+; CHECK-SVE-NEXT: dup v0.16b, w9
+; CHECK-SVE-NEXT: mov z1.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: orr v0.16b, v1.16b, v0.16b
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: not_whilewr_wrong_eltsize:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_1
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI8_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_2
+; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI8_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_4
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI8_1]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_3
+; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI8_4]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_6
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI8_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_5
+; CHECK-NOSVE-NEXT: dup v4.2d, x8
+; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI8_6]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_7
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI8_5]
+; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI8_7]
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ ret <16 x i1> %0
+}
+
+define <2 x i1> @not_whilerw_ptr32(i32 %a, i32 %b) {
+; CHECK-SVE-LABEL: not_whilerw_ptr32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs w8, w1, w0
+; CHECK-SVE-NEXT: cneg w8, w8, mi
+; CHECK-SVE-NEXT: add w9, w8, #7
+; CHECK-SVE-NEXT: cmp w8, #0
+; CHECK-SVE-NEXT: csel w8, w9, w8, lt
+; CHECK-SVE-NEXT: asr w8, w8, #3
+; CHECK-SVE-NEXT: cmp w8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p0.s, #0, w8
+; CHECK-SVE-NEXT: dup v0.2s, w9
+; CHECK-SVE-NEXT: mov z1.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: orr v0.8b, v1.8b, v0.8b
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: not_whilerw_ptr32:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs w9, w1, w0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI9_0
+; CHECK-NOSVE-NEXT: cneg w9, w9, mi
+; CHECK-NOSVE-NEXT: ldr d1, [x8, :lo12:.LCPI9_0]
+; CHECK-NOSVE-NEXT: add w10, w9, #7
+; CHECK-NOSVE-NEXT: cmp w9, #0
+; CHECK-NOSVE-NEXT: csel w9, w10, w9, lt
+; CHECK-NOSVE-NEXT: asr w9, w9, #3
+; CHECK-NOSVE-NEXT: dup v0.2s, w9
+; CHECK-NOSVE-NEXT: cmp w9, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: dup v2.2s, w8
+; CHECK-NOSVE-NEXT: cmhi v0.2s, v0.2s, v1.2s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v2.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i32.i32(i32 %a, i32 %b, i32 8, i1 0)
+ ret <2 x i1> %0
+}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
new file mode 100644
index 00000000000000..4912bc0f59d40d
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -0,0 +1,82 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s | FileCheck %s
+
+define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
+; CHECK-LABEL: whilewr_8:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.b, x0, x1
+; CHECK-NEXT: ret
+entry:
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ ret <vscale x 16 x i1> %0
+}
+
+define <vscale x 8 x i1> @whilewr_16(i64 %a, i64 %b) {
+; CHECK-LABEL: whilewr_16:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.h, x0, x1
+; CHECK-NEXT: ret
+entry:
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ ret <vscale x 8 x i1> %0
+}
+
+define <vscale x 4 x i1> @whilewr_32(i64 %a, i64 %b) {
+; CHECK-LABEL: whilewr_32:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.s, x0, x1
+; CHECK-NEXT: ret
+entry:
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ ret <vscale x 4 x i1> %0
+}
+
+define <vscale x 2 x i1> @whilewr_64(i64 %a, i64 %b) {
+; CHECK-LABEL: whilewr_64:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilewr p0.d, x0, x1
+; CHECK-NEXT: ret
+entry:
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ ret <vscale x 2 x i1> %0
+}
+
+define <vscale x 16 x i1> @whilerw_8(i64 %a, i64 %b) {
+; CHECK-LABEL: whilerw_8:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.b, x0, x1
+; CHECK-NEXT: ret
+entry:
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ ret <vscale x 16 x i1> %0
+}
+
+define <vscale x 8 x i1> @whilerw_16(i64 %a, i64 %b) {
+; CHECK-LABEL: whilerw_16:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.h, x0, x1
+; CHECK-NEXT: ret
+entry:
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 0)
+ ret <vscale x 8 x i1> %0
+}
+
+define <vscale x 4 x i1> @whilerw_32(i64 %a, i64 %b) {
+; CHECK-LABEL: whilerw_32:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.s, x0, x1
+; CHECK-NEXT: ret
+entry:
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ ret <vscale x 4 x i1> %0
+}
+
+define <vscale x 2 x i1> @whilerw_64(i64 %a, i64 %b) {
+; CHECK-LABEL: whilerw_64:
+; CHECK: // %bb.0: // %entry
+; CHECK-NEXT: whilerw p0.d, x0, x1
+; CHECK-NEXT: ret
+entry:
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ ret <vscale x 2 x i1> %0
+}
diff --git a/llvm/test/CodeGen/AArch64/whilewr.ll b/llvm/test/CodeGen/AArch64/whilewr.ll
deleted file mode 100644
index 9f1ea850792384..00000000000000
--- a/llvm/test/CodeGen/AArch64/whilewr.ll
+++ /dev/null
@@ -1,1086 +0,0 @@
-; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 4
-; RUN: llc %s -mtriple=aarch64-linux-gnu -mattr=+sve2 -o - | FileCheck %s
-; RUN: llc %s -mtriple=aarch64-linux-gnu -mattr=+sve -o - | FileCheck %s --check-prefix=CHECK-NOSVE2
-
-define <vscale x 16 x i1> @whilewr_8(ptr noalias %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_8:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.b, x1, x2
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_8:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: sub x8, x1, x2
-; CHECK-NOSVE2-NEXT: cmp x8, #0
-; CHECK-NOSVE2-NEXT: cset w9, lt
-; CHECK-NOSVE2-NEXT: whilelo p0.b, xzr, x8
-; CHECK-NOSVE2-NEXT: sbfx x8, x9, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p1.b, xzr, x8
-; CHECK-NOSVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %c14 = ptrtoint ptr %c to i64
- %b15 = ptrtoint ptr %b to i64
- %sub.diff = sub i64 %b15, %c14
- %neg.compare = icmp slt i64 %sub.diff, 0
- %.splatinsert = insertelement <vscale x 16 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 16 x i1> %.splatinsert, <vscale x 16 x i1> poison, <vscale x 16 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64 0, i64 %sub.diff)
- %active.lane.mask.alias = or <vscale x 16 x i1> %ptr.diff.lane.mask, %.splat
- ret <vscale x 16 x i1> %active.lane.mask.alias
-}
-
-define <vscale x 16 x i1> @whilewr_commutative(ptr noalias %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_commutative:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.b, x1, x2
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_commutative:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: sub x8, x1, x2
-; CHECK-NOSVE2-NEXT: cmp x8, #0
-; CHECK-NOSVE2-NEXT: cset w9, lt
-; CHECK-NOSVE2-NEXT: whilelo p0.b, xzr, x8
-; CHECK-NOSVE2-NEXT: sbfx x8, x9, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p1.b, xzr, x8
-; CHECK-NOSVE2-NEXT: mov p0.b, p1/m, p1.b
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %c14 = ptrtoint ptr %c to i64
- %b15 = ptrtoint ptr %b to i64
- %sub.diff = sub i64 %b15, %c14
- %neg.compare = icmp slt i64 %sub.diff, 0
- %.splatinsert = insertelement <vscale x 16 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 16 x i1> %.splatinsert, <vscale x 16 x i1> poison, <vscale x 16 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64 0, i64 %sub.diff)
- %active.lane.mask.alias = or <vscale x 16 x i1> %.splat, %ptr.diff.lane.mask
- ret <vscale x 16 x i1> %active.lane.mask.alias
-}
-
-define <vscale x 8 x i1> @whilewr_16(ptr noalias %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_16:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.h, x1, x2
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_16:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: sub x8, x1, x2
-; CHECK-NOSVE2-NEXT: cmn x8, #1
-; CHECK-NOSVE2-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE2-NEXT: cset w9, lt
-; CHECK-NOSVE2-NEXT: sbfx x9, x9, #0, #1
-; CHECK-NOSVE2-NEXT: asr x8, x8, #1
-; CHECK-NOSVE2-NEXT: whilelo p0.h, xzr, x9
-; CHECK-NOSVE2-NEXT: whilelo p1.h, xzr, x8
-; CHECK-NOSVE2-NEXT: mov p0.b, p1/m, p1.b
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %b14 = ptrtoint ptr %b to i64
- %c15 = ptrtoint ptr %c to i64
- %sub.diff = sub i64 %b14, %c15
- %diff = sdiv i64 %sub.diff, 2
- %neg.compare = icmp slt i64 %sub.diff, -1
- %.splatinsert = insertelement <vscale x 8 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 8 x i1> %.splatinsert, <vscale x 8 x i1> poison, <vscale x 8 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 %diff)
- %active.lane.mask.alias = or <vscale x 8 x i1> %ptr.diff.lane.mask, %.splat
- ret <vscale x 8 x i1> %active.lane.mask.alias
-}
-
-define <vscale x 4 x i1> @whilewr_32(ptr noalias %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_32:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.s, x1, x2
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_32:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: sub x8, x1, x2
-; CHECK-NOSVE2-NEXT: add x9, x8, #3
-; CHECK-NOSVE2-NEXT: cmp x8, #0
-; CHECK-NOSVE2-NEXT: csel x9, x9, x8, lt
-; CHECK-NOSVE2-NEXT: cmn x8, #3
-; CHECK-NOSVE2-NEXT: cset w8, lt
-; CHECK-NOSVE2-NEXT: asr x9, x9, #2
-; CHECK-NOSVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p1.s, xzr, x9
-; CHECK-NOSVE2-NEXT: whilelo p0.s, xzr, x8
-; CHECK-NOSVE2-NEXT: mov p0.b, p1/m, p1.b
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %b12 = ptrtoint ptr %b to i64
- %c13 = ptrtoint ptr %c to i64
- %sub.diff = sub i64 %b12, %c13
- %diff = sdiv i64 %sub.diff, 4
- %neg.compare = icmp slt i64 %sub.diff, -3
- %.splatinsert = insertelement <vscale x 4 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 4 x i1> %.splatinsert, <vscale x 4 x i1> poison, <vscale x 4 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 %diff)
- %active.lane.mask.alias = or <vscale x 4 x i1> %ptr.diff.lane.mask, %.splat
- ret <vscale x 4 x i1> %active.lane.mask.alias
-}
-
-define <vscale x 2 x i1> @whilewr_64(ptr noalias %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_64:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.d, x1, x2
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_64:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: sub x8, x1, x2
-; CHECK-NOSVE2-NEXT: add x9, x8, #7
-; CHECK-NOSVE2-NEXT: cmp x8, #0
-; CHECK-NOSVE2-NEXT: csel x9, x9, x8, lt
-; CHECK-NOSVE2-NEXT: cmn x8, #7
-; CHECK-NOSVE2-NEXT: cset w8, lt
-; CHECK-NOSVE2-NEXT: asr x9, x9, #3
-; CHECK-NOSVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p1.d, xzr, x9
-; CHECK-NOSVE2-NEXT: whilelo p0.d, xzr, x8
-; CHECK-NOSVE2-NEXT: mov p0.b, p1/m, p1.b
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %b12 = ptrtoint ptr %b to i64
- %c13 = ptrtoint ptr %c to i64
- %sub.diff = sub i64 %b12, %c13
- %diff = sdiv i64 %sub.diff, 8
- %neg.compare = icmp slt i64 %sub.diff, -7
- %.splatinsert = insertelement <vscale x 2 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 2 x i1> %.splatinsert, <vscale x 2 x i1> poison, <vscale x 2 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 2 x i1> @llvm.get.active.lane.mask.nxv2i1.i64(i64 0, i64 %diff)
- %active.lane.mask.alias = or <vscale x 2 x i1> %ptr.diff.lane.mask, %.splat
- ret <vscale x 2 x i1> %active.lane.mask.alias
-}
-
-define <vscale x 1 x i1> @no_whilewr_128(ptr noalias %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: no_whilewr_128:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: sub x8, x1, x2
-; CHECK-NEXT: index z0.d, #0, #1
-; CHECK-NEXT: ptrue p0.d
-; CHECK-NEXT: add x9, x8, #15
-; CHECK-NEXT: cmp x8, #0
-; CHECK-NEXT: csel x9, x9, x8, lt
-; CHECK-NEXT: cmn x8, #15
-; CHECK-NEXT: asr x9, x9, #4
-; CHECK-NEXT: cset w8, lt
-; CHECK-NEXT: sbfx x8, x8, #0, #1
-; CHECK-NEXT: mov z1.d, x9
-; CHECK-NEXT: whilelo p1.d, xzr, x8
-; CHECK-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
-; CHECK-NEXT: punpklo p1.h, p1.b
-; CHECK-NEXT: punpklo p0.h, p0.b
-; CHECK-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: no_whilewr_128:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: sub x8, x1, x2
-; CHECK-NOSVE2-NEXT: index z0.d, #0, #1
-; CHECK-NOSVE2-NEXT: ptrue p0.d
-; CHECK-NOSVE2-NEXT: add x9, x8, #15
-; CHECK-NOSVE2-NEXT: cmp x8, #0
-; CHECK-NOSVE2-NEXT: csel x9, x9, x8, lt
-; CHECK-NOSVE2-NEXT: cmn x8, #15
-; CHECK-NOSVE2-NEXT: asr x9, x9, #4
-; CHECK-NOSVE2-NEXT: cset w8, lt
-; CHECK-NOSVE2-NEXT: sbfx x8, x8, #0, #1
-; CHECK-NOSVE2-NEXT: mov z1.d, x9
-; CHECK-NOSVE2-NEXT: whilelo p1.d, xzr, x8
-; CHECK-NOSVE2-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
-; CHECK-NOSVE2-NEXT: punpklo p1.h, p1.b
-; CHECK-NOSVE2-NEXT: punpklo p0.h, p0.b
-; CHECK-NOSVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %b12 = ptrtoint ptr %b to i64
- %c13 = ptrtoint ptr %c to i64
- %sub.diff = sub i64 %b12, %c13
- %diff = sdiv i64 %sub.diff, 16
- %neg.compare = icmp slt i64 %sub.diff, -15
- %.splatinsert = insertelement <vscale x 1 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 1 x i1> %.splatinsert, <vscale x 1 x i1> poison, <vscale x 1 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 1 x i1> @llvm.get.active.lane.mask.nxv1i1.i64(i64 0, i64 %diff)
- %active.lane.mask.alias = or <vscale x 1 x i1> %ptr.diff.lane.mask, %.splat
- ret <vscale x 1 x i1> %active.lane.mask.alias
-}
-
-define void @whilewr_loop_8(ptr noalias %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_loop_8:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: cmp w3, #1
-; CHECK-NEXT: b.lt .LBB6_3
-; CHECK-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NEXT: whilewr p0.b, x1, x2
-; CHECK-NEXT: mov w9, w3
-; CHECK-NEXT: mov x8, xzr
-; CHECK-NEXT: whilelo p1.b, xzr, x9
-; CHECK-NEXT: cntp x10, p0, p0.b
-; CHECK-NEXT: and x10, x10, #0xff
-; CHECK-NEXT: .LBB6_2: // %vector.body
-; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: and p1.b, p1/z, p1.b, p0.b
-; CHECK-NEXT: ld1b { z0.b }, p1/z, [x0, x8]
-; CHECK-NEXT: ld1b { z1.b }, p1/z, [x1, x8]
-; CHECK-NEXT: add z0.b, z1.b, z0.b
-; CHECK-NEXT: st1b { z0.b }, p1, [x2, x8]
-; CHECK-NEXT: add x8, x8, x10
-; CHECK-NEXT: whilelo p1.b, x8, x9
-; CHECK-NEXT: b.mi .LBB6_2
-; CHECK-NEXT: .LBB6_3: // %for.cond.cleanup
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_loop_8:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: cmp w3, #1
-; CHECK-NOSVE2-NEXT: b.lt .LBB6_3
-; CHECK-NOSVE2-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NOSVE2-NEXT: sub x9, x1, x2
-; CHECK-NOSVE2-NEXT: mov x8, xzr
-; CHECK-NOSVE2-NEXT: cmp x9, #0
-; CHECK-NOSVE2-NEXT: cset w10, lt
-; CHECK-NOSVE2-NEXT: whilelo p0.b, xzr, x9
-; CHECK-NOSVE2-NEXT: sbfx x9, x10, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p1.b, xzr, x9
-; CHECK-NOSVE2-NEXT: mov w9, w3
-; CHECK-NOSVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-NOSVE2-NEXT: whilelo p1.b, xzr, x9
-; CHECK-NOSVE2-NEXT: cntp x10, p0, p0.b
-; CHECK-NOSVE2-NEXT: and x10, x10, #0xff
-; CHECK-NOSVE2-NEXT: .LBB6_2: // %vector.body
-; CHECK-NOSVE2-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NOSVE2-NEXT: and p1.b, p1/z, p1.b, p0.b
-; CHECK-NOSVE2-NEXT: ld1b { z0.b }, p1/z, [x0, x8]
-; CHECK-NOSVE2-NEXT: ld1b { z1.b }, p1/z, [x1, x8]
-; CHECK-NOSVE2-NEXT: add z0.b, z1.b, z0.b
-; CHECK-NOSVE2-NEXT: st1b { z0.b }, p1, [x2, x8]
-; CHECK-NOSVE2-NEXT: add x8, x8, x10
-; CHECK-NOSVE2-NEXT: whilelo p1.b, x8, x9
-; CHECK-NOSVE2-NEXT: b.mi .LBB6_2
-; CHECK-NOSVE2-NEXT: .LBB6_3: // %for.cond.cleanup
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %cmp11 = icmp sgt i32 %n, 0
- br i1 %cmp11, label %for.body.preheader, label %for.cond.cleanup
-
-for.body.preheader:
- %c14 = ptrtoint ptr %c to i64
- %b15 = ptrtoint ptr %b to i64
- %wide.trip.count = zext nneg i32 %n to i64
- %sub.diff = sub i64 %b15, %c14
- %neg.compare = icmp slt i64 %sub.diff, 0
- %.splatinsert = insertelement <vscale x 16 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 16 x i1> %.splatinsert, <vscale x 16 x i1> poison, <vscale x 16 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64 0, i64 %sub.diff)
- %active.lane.mask.alias = or <vscale x 16 x i1> %ptr.diff.lane.mask, %.splat
- %active.lane.mask.entry = tail call <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64 0, i64 %wide.trip.count)
- %0 = zext <vscale x 16 x i1> %active.lane.mask.alias to <vscale x 16 x i8>
- %1 = tail call i8 @llvm.vector.reduce.add.nxv16i8(<vscale x 16 x i8> %0)
- %2 = zext i8 %1 to i64
- br label %vector.body
-
-vector.body:
- %index = phi i64 [ 0, %for.body.preheader ], [ %index.next, %vector.body ]
- %active.lane.mask = phi <vscale x 16 x i1> [ %active.lane.mask.entry, %for.body.preheader ], [ %active.lane.mask.next, %vector.body ]
- %3 = and <vscale x 16 x i1> %active.lane.mask, %active.lane.mask.alias
- %4 = getelementptr inbounds i8, ptr %a, i64 %index
- %wide.masked.load = tail call <vscale x 16 x i8> @llvm.masked.load.nxv16i8.p0(ptr %4, i32 1, <vscale x 16 x i1> %3, <vscale x 16 x i8> poison)
- %5 = getelementptr inbounds i8, ptr %b, i64 %index
- %wide.masked.load16 = tail call <vscale x 16 x i8> @llvm.masked.load.nxv16i8.p0(ptr %5, i32 1, <vscale x 16 x i1> %3, <vscale x 16 x i8> poison)
- %6 = add <vscale x 16 x i8> %wide.masked.load16, %wide.masked.load
- %7 = getelementptr inbounds i8, ptr %c, i64 %index
- tail call void @llvm.masked.store.nxv16i8.p0(<vscale x 16 x i8> %6, ptr %7, i32 1, <vscale x 16 x i1> %3)
- %index.next = add i64 %index, %2
- %active.lane.mask.next = tail call <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64 %index.next, i64 %wide.trip.count)
- %8 = extractelement <vscale x 16 x i1> %active.lane.mask.next, i64 0
- br i1 %8, label %vector.body, label %for.cond.cleanup
-
-for.cond.cleanup:
- ret void
-}
-
-define void @whilewr_loop_16(ptr noalias %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_loop_16:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: cmp w3, #1
-; CHECK-NEXT: b.lt .LBB7_3
-; CHECK-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NEXT: mov w8, w3
-; CHECK-NEXT: whilewr p1.h, x1, x2
-; CHECK-NEXT: mov x9, xzr
-; CHECK-NEXT: whilelo p0.h, xzr, x8
-; CHECK-NEXT: and p0.b, p1/z, p1.b, p0.b
-; CHECK-NEXT: .LBB7_2: // %vector.body
-; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: ld1h { z0.h }, p0/z, [x0, x9, lsl #1]
-; CHECK-NEXT: ld1h { z1.h }, p0/z, [x1, x9, lsl #1]
-; CHECK-NEXT: add z0.h, z1.h, z0.h
-; CHECK-NEXT: st1h { z0.h }, p0, [x2, x9, lsl #1]
-; CHECK-NEXT: inch x9
-; CHECK-NEXT: whilelo p0.h, x9, x8
-; CHECK-NEXT: b.mi .LBB7_2
-; CHECK-NEXT: .LBB7_3: // %for.cond.cleanup
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_loop_16:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: cmp w3, #1
-; CHECK-NOSVE2-NEXT: b.lt .LBB7_3
-; CHECK-NOSVE2-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NOSVE2-NEXT: mov w9, w3
-; CHECK-NOSVE2-NEXT: sub x10, x1, x2
-; CHECK-NOSVE2-NEXT: mov x8, xzr
-; CHECK-NOSVE2-NEXT: whilelo p0.h, xzr, x9
-; CHECK-NOSVE2-NEXT: cmn x10, #1
-; CHECK-NOSVE2-NEXT: add x10, x10, x10, lsr #63
-; CHECK-NOSVE2-NEXT: cset w11, lt
-; CHECK-NOSVE2-NEXT: sbfx x11, x11, #0, #1
-; CHECK-NOSVE2-NEXT: asr x10, x10, #1
-; CHECK-NOSVE2-NEXT: whilelo p1.h, xzr, x11
-; CHECK-NOSVE2-NEXT: whilelo p2.h, xzr, x10
-; CHECK-NOSVE2-NEXT: cnth x10
-; CHECK-NOSVE2-NEXT: mov p1.b, p2/m, p2.b
-; CHECK-NOSVE2-NEXT: and p0.b, p1/z, p1.b, p0.b
-; CHECK-NOSVE2-NEXT: .LBB7_2: // %vector.body
-; CHECK-NOSVE2-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NOSVE2-NEXT: ld1h { z0.h }, p0/z, [x0, x8, lsl #1]
-; CHECK-NOSVE2-NEXT: ld1h { z1.h }, p0/z, [x1, x8, lsl #1]
-; CHECK-NOSVE2-NEXT: add z0.h, z1.h, z0.h
-; CHECK-NOSVE2-NEXT: st1h { z0.h }, p0, [x2, x8, lsl #1]
-; CHECK-NOSVE2-NEXT: add x8, x8, x10
-; CHECK-NOSVE2-NEXT: whilelo p0.h, x8, x9
-; CHECK-NOSVE2-NEXT: b.mi .LBB7_2
-; CHECK-NOSVE2-NEXT: .LBB7_3: // %for.cond.cleanup
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %cmp11 = icmp sgt i32 %n, 0
- br i1 %cmp11, label %for.body.preheader, label %for.cond.cleanup
-
-for.body.preheader:
- %b14 = ptrtoint ptr %b to i64
- %c15 = ptrtoint ptr %c to i64
- %wide.trip.count = zext nneg i32 %n to i64
- %0 = tail call i64 @llvm.vscale.i64()
- %1 = shl nuw nsw i64 %0, 3
- %active.lane.mask.entry = tail call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 %wide.trip.count)
- %sub.diff = sub i64 %b14, %c15
- %diff = sdiv i64 %sub.diff, 2
- %neg.compare = icmp slt i64 %sub.diff, -1
- %.splatinsert = insertelement <vscale x 8 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 8 x i1> %.splatinsert, <vscale x 8 x i1> poison, <vscale x 8 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 %diff)
- %active.lane.mask.alias = or <vscale x 8 x i1> %ptr.diff.lane.mask, %.splat
- %2 = and <vscale x 8 x i1> %active.lane.mask.alias, %active.lane.mask.entry
- br label %vector.body
-
-vector.body:
- %index = phi i64 [ 0, %for.body.preheader ], [ %index.next, %vector.body ]
- %active.lane.mask = phi <vscale x 8 x i1> [ %2, %for.body.preheader ], [ %active.lane.mask.next, %vector.body ]
- %3 = getelementptr inbounds i16, ptr %a, i64 %index
- %wide.masked.load = tail call <vscale x 8 x i16> @llvm.masked.load.nxv8i16.p0(ptr %3, i32 2, <vscale x 8 x i1> %active.lane.mask, <vscale x 8 x i16> poison)
- %4 = getelementptr inbounds i16, ptr %b, i64 %index
- %wide.masked.load16 = tail call <vscale x 8 x i16> @llvm.masked.load.nxv8i16.p0(ptr %4, i32 2, <vscale x 8 x i1> %active.lane.mask, <vscale x 8 x i16> poison)
- %5 = add <vscale x 8 x i16> %wide.masked.load16, %wide.masked.load
- %6 = getelementptr inbounds i16, ptr %c, i64 %index
- tail call void @llvm.masked.store.nxv8i16.p0(<vscale x 8 x i16> %5, ptr %6, i32 2, <vscale x 8 x i1> %active.lane.mask)
- %index.next = add i64 %index, %1
- %active.lane.mask.next = tail call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 %index.next, i64 %wide.trip.count)
- %7 = extractelement <vscale x 8 x i1> %active.lane.mask.next, i64 0
- br i1 %7, label %vector.body, label %for.cond.cleanup
-
-for.cond.cleanup:
- ret void
-}
-
-define void @whilewr_loop_32(ptr noalias %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_loop_32:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: cmp w3, #1
-; CHECK-NEXT: b.lt .LBB8_3
-; CHECK-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NEXT: mov w8, w3
-; CHECK-NEXT: whilewr p1.s, x1, x2
-; CHECK-NEXT: mov x9, xzr
-; CHECK-NEXT: whilelo p0.s, xzr, x8
-; CHECK-NEXT: and p0.b, p1/z, p1.b, p0.b
-; CHECK-NEXT: .LBB8_2: // %vector.body
-; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: ld1w { z0.s }, p0/z, [x0, x9, lsl #2]
-; CHECK-NEXT: ld1w { z1.s }, p0/z, [x1, x9, lsl #2]
-; CHECK-NEXT: add z0.s, z1.s, z0.s
-; CHECK-NEXT: st1w { z0.s }, p0, [x2, x9, lsl #2]
-; CHECK-NEXT: incw x9
-; CHECK-NEXT: whilelo p0.s, x9, x8
-; CHECK-NEXT: b.mi .LBB8_2
-; CHECK-NEXT: .LBB8_3: // %for.cond.cleanup
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_loop_32:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: cmp w3, #1
-; CHECK-NOSVE2-NEXT: b.lt .LBB8_3
-; CHECK-NOSVE2-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NOSVE2-NEXT: mov w9, w3
-; CHECK-NOSVE2-NEXT: sub x10, x1, x2
-; CHECK-NOSVE2-NEXT: mov x8, xzr
-; CHECK-NOSVE2-NEXT: whilelo p0.s, xzr, x9
-; CHECK-NOSVE2-NEXT: add x11, x10, #3
-; CHECK-NOSVE2-NEXT: cmp x10, #0
-; CHECK-NOSVE2-NEXT: csel x11, x11, x10, lt
-; CHECK-NOSVE2-NEXT: cmn x10, #3
-; CHECK-NOSVE2-NEXT: cset w10, lt
-; CHECK-NOSVE2-NEXT: asr x11, x11, #2
-; CHECK-NOSVE2-NEXT: sbfx x10, x10, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p2.s, xzr, x11
-; CHECK-NOSVE2-NEXT: whilelo p1.s, xzr, x10
-; CHECK-NOSVE2-NEXT: cntw x10
-; CHECK-NOSVE2-NEXT: mov p1.b, p2/m, p2.b
-; CHECK-NOSVE2-NEXT: and p0.b, p1/z, p1.b, p0.b
-; CHECK-NOSVE2-NEXT: .LBB8_2: // %vector.body
-; CHECK-NOSVE2-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NOSVE2-NEXT: ld1w { z0.s }, p0/z, [x0, x8, lsl #2]
-; CHECK-NOSVE2-NEXT: ld1w { z1.s }, p0/z, [x1, x8, lsl #2]
-; CHECK-NOSVE2-NEXT: add z0.s, z1.s, z0.s
-; CHECK-NOSVE2-NEXT: st1w { z0.s }, p0, [x2, x8, lsl #2]
-; CHECK-NOSVE2-NEXT: add x8, x8, x10
-; CHECK-NOSVE2-NEXT: whilelo p0.s, x8, x9
-; CHECK-NOSVE2-NEXT: b.mi .LBB8_2
-; CHECK-NOSVE2-NEXT: .LBB8_3: // %for.cond.cleanup
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %cmp9 = icmp sgt i32 %n, 0
- br i1 %cmp9, label %for.body.preheader, label %for.cond.cleanup
-
-for.body.preheader:
- %b12 = ptrtoint ptr %b to i64
- %c13 = ptrtoint ptr %c to i64
- %wide.trip.count = zext nneg i32 %n to i64
- %0 = tail call i64 @llvm.vscale.i64()
- %1 = shl nuw nsw i64 %0, 2
- %active.lane.mask.entry = tail call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 %wide.trip.count)
- %sub.diff = sub i64 %b12, %c13
- %diff = sdiv i64 %sub.diff, 4
- %neg.compare = icmp slt i64 %sub.diff, -3
- %.splatinsert = insertelement <vscale x 4 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 4 x i1> %.splatinsert, <vscale x 4 x i1> poison, <vscale x 4 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 %diff)
- %active.lane.mask.alias = or <vscale x 4 x i1> %ptr.diff.lane.mask, %.splat
- %2 = and <vscale x 4 x i1> %active.lane.mask.alias, %active.lane.mask.entry
- br label %vector.body
-
-vector.body:
- %index = phi i64 [ 0, %for.body.preheader ], [ %index.next, %vector.body ]
- %active.lane.mask = phi <vscale x 4 x i1> [ %2, %for.body.preheader ], [ %active.lane.mask.next, %vector.body ]
- %3 = getelementptr inbounds i32, ptr %a, i64 %index
- %wide.masked.load = tail call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0(ptr %3, i32 4, <vscale x 4 x i1> %active.lane.mask, <vscale x 4 x i32> poison)
- %4 = getelementptr inbounds i32, ptr %b, i64 %index
- %wide.masked.load14 = tail call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0(ptr %4, i32 4, <vscale x 4 x i1> %active.lane.mask, <vscale x 4 x i32> poison)
- %5 = add <vscale x 4 x i32> %wide.masked.load14, %wide.masked.load
- %6 = getelementptr inbounds i32, ptr %c, i64 %index
- tail call void @llvm.masked.store.nxv4i32.p0(<vscale x 4 x i32> %5, ptr %6, i32 4, <vscale x 4 x i1> %active.lane.mask)
- %index.next = add i64 %index, %1
- %active.lane.mask.next = tail call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 %index.next, i64 %wide.trip.count)
- %7 = extractelement <vscale x 4 x i1> %active.lane.mask.next, i64 0
- br i1 %7, label %vector.body, label %for.cond.cleanup
-
-for.cond.cleanup:
- ret void
-}
-
-define void @whilewr_loop_64(ptr noalias %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_loop_64:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: cmp w3, #1
-; CHECK-NEXT: b.lt .LBB9_3
-; CHECK-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NEXT: mov w8, w3
-; CHECK-NEXT: whilewr p1.d, x1, x2
-; CHECK-NEXT: mov x9, xzr
-; CHECK-NEXT: whilelo p0.d, xzr, x8
-; CHECK-NEXT: and p0.b, p1/z, p1.b, p0.b
-; CHECK-NEXT: .LBB9_2: // %vector.body
-; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: ld1d { z0.d }, p0/z, [x0, x9, lsl #3]
-; CHECK-NEXT: ld1d { z1.d }, p0/z, [x1, x9, lsl #3]
-; CHECK-NEXT: add z0.d, z1.d, z0.d
-; CHECK-NEXT: st1d { z0.d }, p0, [x2, x9, lsl #3]
-; CHECK-NEXT: incd x9
-; CHECK-NEXT: whilelo p0.d, x9, x8
-; CHECK-NEXT: b.mi .LBB9_2
-; CHECK-NEXT: .LBB9_3: // %for.cond.cleanup
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_loop_64:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: cmp w3, #1
-; CHECK-NOSVE2-NEXT: b.lt .LBB9_3
-; CHECK-NOSVE2-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NOSVE2-NEXT: mov w9, w3
-; CHECK-NOSVE2-NEXT: sub x10, x1, x2
-; CHECK-NOSVE2-NEXT: mov x8, xzr
-; CHECK-NOSVE2-NEXT: whilelo p0.d, xzr, x9
-; CHECK-NOSVE2-NEXT: add x11, x10, #7
-; CHECK-NOSVE2-NEXT: cmp x10, #0
-; CHECK-NOSVE2-NEXT: csel x11, x11, x10, lt
-; CHECK-NOSVE2-NEXT: cmn x10, #7
-; CHECK-NOSVE2-NEXT: cset w10, lt
-; CHECK-NOSVE2-NEXT: asr x11, x11, #3
-; CHECK-NOSVE2-NEXT: sbfx x10, x10, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p2.d, xzr, x11
-; CHECK-NOSVE2-NEXT: whilelo p1.d, xzr, x10
-; CHECK-NOSVE2-NEXT: cntd x10
-; CHECK-NOSVE2-NEXT: mov p1.b, p2/m, p2.b
-; CHECK-NOSVE2-NEXT: and p0.b, p1/z, p1.b, p0.b
-; CHECK-NOSVE2-NEXT: .LBB9_2: // %vector.body
-; CHECK-NOSVE2-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NOSVE2-NEXT: ld1d { z0.d }, p0/z, [x0, x8, lsl #3]
-; CHECK-NOSVE2-NEXT: ld1d { z1.d }, p0/z, [x1, x8, lsl #3]
-; CHECK-NOSVE2-NEXT: add z0.d, z1.d, z0.d
-; CHECK-NOSVE2-NEXT: st1d { z0.d }, p0, [x2, x8, lsl #3]
-; CHECK-NOSVE2-NEXT: add x8, x8, x10
-; CHECK-NOSVE2-NEXT: whilelo p0.d, x8, x9
-; CHECK-NOSVE2-NEXT: b.mi .LBB9_2
-; CHECK-NOSVE2-NEXT: .LBB9_3: // %for.cond.cleanup
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %cmp9 = icmp sgt i32 %n, 0
- br i1 %cmp9, label %for.body.preheader, label %for.cond.cleanup
-
-for.body.preheader:
- %b12 = ptrtoint ptr %b to i64
- %c13 = ptrtoint ptr %c to i64
- %wide.trip.count = zext nneg i32 %n to i64
- %0 = tail call i64 @llvm.vscale.i64()
- %1 = shl nuw nsw i64 %0, 1
- %active.lane.mask.entry = tail call <vscale x 2 x i1> @llvm.get.active.lane.mask.nxv2i1.i64(i64 0, i64 %wide.trip.count)
- %sub.diff = sub i64 %b12, %c13
- %diff = sdiv i64 %sub.diff, 8
- %neg.compare = icmp slt i64 %sub.diff, -7
- %.splatinsert = insertelement <vscale x 2 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 2 x i1> %.splatinsert, <vscale x 2 x i1> poison, <vscale x 2 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 2 x i1> @llvm.get.active.lane.mask.nxv2i1.i64(i64 0, i64 %diff)
- %active.lane.mask.alias = or <vscale x 2 x i1> %ptr.diff.lane.mask, %.splat
- %2 = and <vscale x 2 x i1> %active.lane.mask.alias, %active.lane.mask.entry
- br label %vector.body
-
-vector.body:
- %index = phi i64 [ 0, %for.body.preheader ], [ %index.next, %vector.body ]
- %active.lane.mask = phi <vscale x 2 x i1> [ %2, %for.body.preheader ], [ %active.lane.mask.next, %vector.body ]
- %3 = getelementptr inbounds i64, ptr %a, i64 %index
- %wide.masked.load = tail call <vscale x 2 x i64> @llvm.masked.load.nxv2i64.p0(ptr %3, i32 8, <vscale x 2 x i1> %active.lane.mask, <vscale x 2 x i64> poison)
- %4 = getelementptr inbounds i64, ptr %b, i64 %index
- %wide.masked.load14 = tail call <vscale x 2 x i64> @llvm.masked.load.nxv2i64.p0(ptr %4, i32 8, <vscale x 2 x i1> %active.lane.mask, <vscale x 2 x i64> poison)
- %5 = add <vscale x 2 x i64> %wide.masked.load14, %wide.masked.load
- %6 = getelementptr inbounds i64, ptr %c, i64 %index
- tail call void @llvm.masked.store.nxv2i64.p0(<vscale x 2 x i64> %5, ptr %6, i32 8, <vscale x 2 x i1> %active.lane.mask)
- %index.next = add i64 %index, %1
- %active.lane.mask.next = tail call <vscale x 2 x i1> @llvm.get.active.lane.mask.nxv2i1.i64(i64 %index.next, i64 %wide.trip.count)
- %7 = extractelement <vscale x 2 x i1> %active.lane.mask.next, i64 0
- br i1 %7, label %vector.body, label %for.cond.cleanup
-
-for.cond.cleanup:
- ret void
-}
-
-define void @whilewr_loop_multiple_8(ptr %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_loop_multiple_8:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: cmp w3, #1
-; CHECK-NEXT: b.lt .LBB10_3
-; CHECK-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NEXT: whilewr p0.b, x0, x2
-; CHECK-NEXT: mov w9, w3
-; CHECK-NEXT: mov x8, xzr
-; CHECK-NEXT: whilewr p1.b, x1, x2
-; CHECK-NEXT: and p0.b, p0/z, p0.b, p1.b
-; CHECK-NEXT: whilelo p1.b, xzr, x9
-; CHECK-NEXT: cntp x10, p0, p0.b
-; CHECK-NEXT: and x10, x10, #0xff
-; CHECK-NEXT: .LBB10_2: // %vector.body
-; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: and p1.b, p1/z, p1.b, p0.b
-; CHECK-NEXT: ld1b { z0.b }, p1/z, [x0, x8]
-; CHECK-NEXT: ld1b { z1.b }, p1/z, [x1, x8]
-; CHECK-NEXT: add z0.b, z1.b, z0.b
-; CHECK-NEXT: st1b { z0.b }, p1, [x2, x8]
-; CHECK-NEXT: add x8, x8, x10
-; CHECK-NEXT: whilelo p1.b, x8, x9
-; CHECK-NEXT: b.mi .LBB10_2
-; CHECK-NEXT: .LBB10_3: // %for.cond.cleanup
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_loop_multiple_8:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: cmp w3, #1
-; CHECK-NOSVE2-NEXT: b.lt .LBB10_3
-; CHECK-NOSVE2-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NOSVE2-NEXT: sub x9, x0, x2
-; CHECK-NOSVE2-NEXT: mov x8, xzr
-; CHECK-NOSVE2-NEXT: cmp x9, #0
-; CHECK-NOSVE2-NEXT: cset w10, lt
-; CHECK-NOSVE2-NEXT: whilelo p0.b, xzr, x9
-; CHECK-NOSVE2-NEXT: sub x9, x1, x2
-; CHECK-NOSVE2-NEXT: sbfx x10, x10, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p1.b, xzr, x10
-; CHECK-NOSVE2-NEXT: cmp x9, #0
-; CHECK-NOSVE2-NEXT: cset w10, lt
-; CHECK-NOSVE2-NEXT: whilelo p3.b, xzr, x9
-; CHECK-NOSVE2-NEXT: mov w9, w3
-; CHECK-NOSVE2-NEXT: sbfx x10, x10, #0, #1
-; CHECK-NOSVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-NOSVE2-NEXT: whilelo p2.b, xzr, x10
-; CHECK-NOSVE2-NEXT: sel p1.b, p3, p3.b, p2.b
-; CHECK-NOSVE2-NEXT: and p0.b, p0/z, p0.b, p1.b
-; CHECK-NOSVE2-NEXT: whilelo p1.b, xzr, x9
-; CHECK-NOSVE2-NEXT: cntp x10, p0, p0.b
-; CHECK-NOSVE2-NEXT: and x10, x10, #0xff
-; CHECK-NOSVE2-NEXT: .LBB10_2: // %vector.body
-; CHECK-NOSVE2-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NOSVE2-NEXT: and p1.b, p1/z, p1.b, p0.b
-; CHECK-NOSVE2-NEXT: ld1b { z0.b }, p1/z, [x0, x8]
-; CHECK-NOSVE2-NEXT: ld1b { z1.b }, p1/z, [x1, x8]
-; CHECK-NOSVE2-NEXT: add z0.b, z1.b, z0.b
-; CHECK-NOSVE2-NEXT: st1b { z0.b }, p1, [x2, x8]
-; CHECK-NOSVE2-NEXT: add x8, x8, x10
-; CHECK-NOSVE2-NEXT: whilelo p1.b, x8, x9
-; CHECK-NOSVE2-NEXT: b.mi .LBB10_2
-; CHECK-NOSVE2-NEXT: .LBB10_3: // %for.cond.cleanup
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %cmp11 = icmp sgt i32 %n, 0
- br i1 %cmp11, label %for.body.preheader, label %for.cond.cleanup
-
-for.body.preheader:
- %c14 = ptrtoint ptr %c to i64
- %a15 = ptrtoint ptr %a to i64
- %b16 = ptrtoint ptr %b to i64
- %wide.trip.count = zext nneg i32 %n to i64
- %sub.diff = sub i64 %a15, %c14
- %neg.compare = icmp slt i64 %sub.diff, 0
- %.splatinsert = insertelement <vscale x 16 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 16 x i1> %.splatinsert, <vscale x 16 x i1> poison, <vscale x 16 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64 0, i64 %sub.diff)
- %active.lane.mask.alias = or <vscale x 16 x i1> %ptr.diff.lane.mask, %.splat
- %sub.diff18 = sub i64 %b16, %c14
- %neg.compare20 = icmp slt i64 %sub.diff18, 0
- %.splatinsert21 = insertelement <vscale x 16 x i1> poison, i1 %neg.compare20, i64 0
- %.splat22 = shufflevector <vscale x 16 x i1> %.splatinsert21, <vscale x 16 x i1> poison, <vscale x 16 x i32> zeroinitializer
- %ptr.diff.lane.mask23 = tail call <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64 0, i64 %sub.diff18)
- %active.lane.mask.alias24 = or <vscale x 16 x i1> %ptr.diff.lane.mask23, %.splat22
- %0 = and <vscale x 16 x i1> %active.lane.mask.alias, %active.lane.mask.alias24
- %active.lane.mask.entry = tail call <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64 0, i64 %wide.trip.count)
- %1 = zext <vscale x 16 x i1> %0 to <vscale x 16 x i8>
- %2 = tail call i8 @llvm.vector.reduce.add.nxv16i8(<vscale x 16 x i8> %1)
- %3 = zext i8 %2 to i64
- br label %vector.body
-
-vector.body:
- %index = phi i64 [ 0, %for.body.preheader ], [ %index.next, %vector.body ]
- %active.lane.mask = phi <vscale x 16 x i1> [ %active.lane.mask.entry, %for.body.preheader ], [ %active.lane.mask.next, %vector.body ]
- %4 = and <vscale x 16 x i1> %active.lane.mask, %0
- %5 = getelementptr inbounds i8, ptr %a, i64 %index
- %wide.masked.load = tail call <vscale x 16 x i8> @llvm.masked.load.nxv16i8.p0(ptr %5, i32 1, <vscale x 16 x i1> %4, <vscale x 16 x i8> poison)
- %6 = getelementptr inbounds i8, ptr %b, i64 %index
- %wide.masked.load25 = tail call <vscale x 16 x i8> @llvm.masked.load.nxv16i8.p0(ptr %6, i32 1, <vscale x 16 x i1> %4, <vscale x 16 x i8> poison)
- %7 = add <vscale x 16 x i8> %wide.masked.load25, %wide.masked.load
- %8 = getelementptr inbounds i8, ptr %c, i64 %index
- tail call void @llvm.masked.store.nxv16i8.p0(<vscale x 16 x i8> %7, ptr %8, i32 1, <vscale x 16 x i1> %4)
- %index.next = add i64 %index, %3
- %active.lane.mask.next = tail call <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64 %index.next, i64 %wide.trip.count)
- %9 = extractelement <vscale x 16 x i1> %active.lane.mask.next, i64 0
- br i1 %9, label %vector.body, label %for.cond.cleanup
-
-for.cond.cleanup:
- ret void
-}
-
-define void @whilewr_loop_multiple_16(ptr %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_loop_multiple_16:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: cmp w3, #1
-; CHECK-NEXT: b.lt .LBB11_3
-; CHECK-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NEXT: whilewr p0.h, x0, x2
-; CHECK-NEXT: mov w9, w3
-; CHECK-NEXT: mov x8, xzr
-; CHECK-NEXT: whilewr p1.h, x1, x2
-; CHECK-NEXT: and p0.b, p0/z, p0.b, p1.b
-; CHECK-NEXT: whilelo p1.h, xzr, x9
-; CHECK-NEXT: cntp x10, p0, p0.h
-; CHECK-NEXT: and x10, x10, #0xff
-; CHECK-NEXT: .LBB11_2: // %vector.body
-; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: and p1.b, p1/z, p1.b, p0.b
-; CHECK-NEXT: ld1h { z0.h }, p1/z, [x0, x8, lsl #1]
-; CHECK-NEXT: ld1h { z1.h }, p1/z, [x1, x8, lsl #1]
-; CHECK-NEXT: add z0.h, z1.h, z0.h
-; CHECK-NEXT: st1h { z0.h }, p1, [x2, x8, lsl #1]
-; CHECK-NEXT: add x8, x8, x10
-; CHECK-NEXT: whilelo p1.h, x8, x9
-; CHECK-NEXT: b.mi .LBB11_2
-; CHECK-NEXT: .LBB11_3: // %for.cond.cleanup
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_loop_multiple_16:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: cmp w3, #1
-; CHECK-NOSVE2-NEXT: b.lt .LBB11_3
-; CHECK-NOSVE2-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NOSVE2-NEXT: sub x9, x0, x2
-; CHECK-NOSVE2-NEXT: mov x8, xzr
-; CHECK-NOSVE2-NEXT: cmn x9, #1
-; CHECK-NOSVE2-NEXT: add x9, x9, x9, lsr #63
-; CHECK-NOSVE2-NEXT: cset w10, lt
-; CHECK-NOSVE2-NEXT: sbfx x10, x10, #0, #1
-; CHECK-NOSVE2-NEXT: asr x9, x9, #1
-; CHECK-NOSVE2-NEXT: whilelo p0.h, xzr, x10
-; CHECK-NOSVE2-NEXT: sub x10, x1, x2
-; CHECK-NOSVE2-NEXT: whilelo p1.h, xzr, x9
-; CHECK-NOSVE2-NEXT: add x9, x10, x10, lsr #63
-; CHECK-NOSVE2-NEXT: cmn x10, #1
-; CHECK-NOSVE2-NEXT: cset w10, lt
-; CHECK-NOSVE2-NEXT: asr x9, x9, #1
-; CHECK-NOSVE2-NEXT: mov p0.b, p1/m, p1.b
-; CHECK-NOSVE2-NEXT: sbfx x10, x10, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p3.h, xzr, x9
-; CHECK-NOSVE2-NEXT: mov w9, w3
-; CHECK-NOSVE2-NEXT: whilelo p2.h, xzr, x10
-; CHECK-NOSVE2-NEXT: sel p1.b, p3, p3.b, p2.b
-; CHECK-NOSVE2-NEXT: and p0.b, p0/z, p0.b, p1.b
-; CHECK-NOSVE2-NEXT: whilelo p1.h, xzr, x9
-; CHECK-NOSVE2-NEXT: cntp x10, p0, p0.h
-; CHECK-NOSVE2-NEXT: and x10, x10, #0xff
-; CHECK-NOSVE2-NEXT: .LBB11_2: // %vector.body
-; CHECK-NOSVE2-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NOSVE2-NEXT: and p1.b, p1/z, p1.b, p0.b
-; CHECK-NOSVE2-NEXT: ld1h { z0.h }, p1/z, [x0, x8, lsl #1]
-; CHECK-NOSVE2-NEXT: ld1h { z1.h }, p1/z, [x1, x8, lsl #1]
-; CHECK-NOSVE2-NEXT: add z0.h, z1.h, z0.h
-; CHECK-NOSVE2-NEXT: st1h { z0.h }, p1, [x2, x8, lsl #1]
-; CHECK-NOSVE2-NEXT: add x8, x8, x10
-; CHECK-NOSVE2-NEXT: whilelo p1.h, x8, x9
-; CHECK-NOSVE2-NEXT: b.mi .LBB11_2
-; CHECK-NOSVE2-NEXT: .LBB11_3: // %for.cond.cleanup
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %cmp11 = icmp sgt i32 %n, 0
- br i1 %cmp11, label %for.body.preheader, label %for.cond.cleanup
-
-for.body.preheader:
- %c14 = ptrtoint ptr %c to i64
- %a15 = ptrtoint ptr %a to i64
- %b16 = ptrtoint ptr %b to i64
- %wide.trip.count = zext nneg i32 %n to i64
- %sub.diff = sub i64 %a15, %c14
- %diff = sdiv i64 %sub.diff, 2
- %neg.compare = icmp slt i64 %sub.diff, -1
- %.splatinsert = insertelement <vscale x 8 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 8 x i1> %.splatinsert, <vscale x 8 x i1> poison, <vscale x 8 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 %diff)
- %active.lane.mask.alias = or <vscale x 8 x i1> %ptr.diff.lane.mask, %.splat
- %sub.diff18 = sub i64 %b16, %c14
- %diff19 = sdiv i64 %sub.diff18, 2
- %neg.compare20 = icmp slt i64 %sub.diff18, -1
- %.splatinsert21 = insertelement <vscale x 8 x i1> poison, i1 %neg.compare20, i64 0
- %.splat22 = shufflevector <vscale x 8 x i1> %.splatinsert21, <vscale x 8 x i1> poison, <vscale x 8 x i32> zeroinitializer
- %ptr.diff.lane.mask23 = tail call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 %diff19)
- %active.lane.mask.alias24 = or <vscale x 8 x i1> %ptr.diff.lane.mask23, %.splat22
- %0 = and <vscale x 8 x i1> %active.lane.mask.alias, %active.lane.mask.alias24
- %active.lane.mask.entry = tail call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 0, i64 %wide.trip.count)
- %1 = zext <vscale x 8 x i1> %0 to <vscale x 8 x i8>
- %2 = tail call i8 @llvm.vector.reduce.add.nxv8i8(<vscale x 8 x i8> %1)
- %3 = zext i8 %2 to i64
- br label %vector.body
-
-vector.body:
- %index = phi i64 [ 0, %for.body.preheader ], [ %index.next, %vector.body ]
- %active.lane.mask = phi <vscale x 8 x i1> [ %active.lane.mask.entry, %for.body.preheader ], [ %active.lane.mask.next, %vector.body ]
- %4 = and <vscale x 8 x i1> %active.lane.mask, %0
- %5 = getelementptr inbounds i16, ptr %a, i64 %index
- %wide.masked.load = tail call <vscale x 8 x i16> @llvm.masked.load.nxv8i16.p0(ptr %5, i32 2, <vscale x 8 x i1> %4, <vscale x 8 x i16> poison)
- %6 = getelementptr inbounds i16, ptr %b, i64 %index
- %wide.masked.load25 = tail call <vscale x 8 x i16> @llvm.masked.load.nxv8i16.p0(ptr %6, i32 2, <vscale x 8 x i1> %4, <vscale x 8 x i16> poison)
- %7 = add <vscale x 8 x i16> %wide.masked.load25, %wide.masked.load
- %8 = getelementptr inbounds i16, ptr %c, i64 %index
- tail call void @llvm.masked.store.nxv8i16.p0(<vscale x 8 x i16> %7, ptr %8, i32 2, <vscale x 8 x i1> %4)
- %index.next = add i64 %index, %3
- %active.lane.mask.next = tail call <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64 %index.next, i64 %wide.trip.count)
- %9 = extractelement <vscale x 8 x i1> %active.lane.mask.next, i64 0
- br i1 %9, label %vector.body, label %for.cond.cleanup
-
-for.cond.cleanup:
- ret void
-}
-
-define void @whilewr_loop_multiple_32(ptr %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_loop_multiple_32:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: cmp w3, #1
-; CHECK-NEXT: b.lt .LBB12_3
-; CHECK-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NEXT: whilewr p0.s, x0, x2
-; CHECK-NEXT: mov w9, w3
-; CHECK-NEXT: mov x8, xzr
-; CHECK-NEXT: whilewr p1.s, x1, x2
-; CHECK-NEXT: and p0.b, p0/z, p0.b, p1.b
-; CHECK-NEXT: whilelo p1.s, xzr, x9
-; CHECK-NEXT: cntp x10, p0, p0.s
-; CHECK-NEXT: and x10, x10, #0xff
-; CHECK-NEXT: .LBB12_2: // %vector.body
-; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: and p1.b, p1/z, p1.b, p0.b
-; CHECK-NEXT: ld1w { z0.s }, p1/z, [x0, x8, lsl #2]
-; CHECK-NEXT: ld1w { z1.s }, p1/z, [x1, x8, lsl #2]
-; CHECK-NEXT: add z0.s, z1.s, z0.s
-; CHECK-NEXT: st1w { z0.s }, p1, [x2, x8, lsl #2]
-; CHECK-NEXT: add x8, x8, x10
-; CHECK-NEXT: whilelo p1.s, x8, x9
-; CHECK-NEXT: b.mi .LBB12_2
-; CHECK-NEXT: .LBB12_3: // %for.cond.cleanup
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_loop_multiple_32:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: cmp w3, #1
-; CHECK-NOSVE2-NEXT: b.lt .LBB12_3
-; CHECK-NOSVE2-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NOSVE2-NEXT: sub x9, x0, x2
-; CHECK-NOSVE2-NEXT: mov x8, xzr
-; CHECK-NOSVE2-NEXT: add x10, x9, #3
-; CHECK-NOSVE2-NEXT: cmp x9, #0
-; CHECK-NOSVE2-NEXT: csel x10, x10, x9, lt
-; CHECK-NOSVE2-NEXT: cmn x9, #3
-; CHECK-NOSVE2-NEXT: asr x9, x10, #2
-; CHECK-NOSVE2-NEXT: cset w10, lt
-; CHECK-NOSVE2-NEXT: sbfx x10, x10, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p0.s, xzr, x9
-; CHECK-NOSVE2-NEXT: sub x9, x1, x2
-; CHECK-NOSVE2-NEXT: whilelo p1.s, xzr, x10
-; CHECK-NOSVE2-NEXT: add x10, x9, #3
-; CHECK-NOSVE2-NEXT: cmp x9, #0
-; CHECK-NOSVE2-NEXT: csel x10, x10, x9, lt
-; CHECK-NOSVE2-NEXT: cmn x9, #3
-; CHECK-NOSVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-NOSVE2-NEXT: cset w9, lt
-; CHECK-NOSVE2-NEXT: asr x10, x10, #2
-; CHECK-NOSVE2-NEXT: sbfx x9, x9, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p3.s, xzr, x10
-; CHECK-NOSVE2-NEXT: whilelo p2.s, xzr, x9
-; CHECK-NOSVE2-NEXT: mov w9, w3
-; CHECK-NOSVE2-NEXT: sel p1.b, p3, p3.b, p2.b
-; CHECK-NOSVE2-NEXT: and p0.b, p0/z, p0.b, p1.b
-; CHECK-NOSVE2-NEXT: whilelo p1.s, xzr, x9
-; CHECK-NOSVE2-NEXT: cntp x10, p0, p0.s
-; CHECK-NOSVE2-NEXT: and x10, x10, #0xff
-; CHECK-NOSVE2-NEXT: .LBB12_2: // %vector.body
-; CHECK-NOSVE2-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NOSVE2-NEXT: and p1.b, p1/z, p1.b, p0.b
-; CHECK-NOSVE2-NEXT: ld1w { z0.s }, p1/z, [x0, x8, lsl #2]
-; CHECK-NOSVE2-NEXT: ld1w { z1.s }, p1/z, [x1, x8, lsl #2]
-; CHECK-NOSVE2-NEXT: add z0.s, z1.s, z0.s
-; CHECK-NOSVE2-NEXT: st1w { z0.s }, p1, [x2, x8, lsl #2]
-; CHECK-NOSVE2-NEXT: add x8, x8, x10
-; CHECK-NOSVE2-NEXT: whilelo p1.s, x8, x9
-; CHECK-NOSVE2-NEXT: b.mi .LBB12_2
-; CHECK-NOSVE2-NEXT: .LBB12_3: // %for.cond.cleanup
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %cmp9 = icmp sgt i32 %n, 0
- br i1 %cmp9, label %for.body.preheader, label %for.cond.cleanup
-
-for.body.preheader:
- %c12 = ptrtoint ptr %c to i64
- %a13 = ptrtoint ptr %a to i64
- %b14 = ptrtoint ptr %b to i64
- %wide.trip.count = zext nneg i32 %n to i64
- %sub.diff = sub i64 %a13, %c12
- %diff = sdiv i64 %sub.diff, 4
- %neg.compare = icmp slt i64 %sub.diff, -3
- %.splatinsert = insertelement <vscale x 4 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 4 x i1> %.splatinsert, <vscale x 4 x i1> poison, <vscale x 4 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 %diff)
- %active.lane.mask.alias = or <vscale x 4 x i1> %ptr.diff.lane.mask, %.splat
- %sub.diff16 = sub i64 %b14, %c12
- %diff17 = sdiv i64 %sub.diff16, 4
- %neg.compare18 = icmp slt i64 %sub.diff16, -3
- %.splatinsert19 = insertelement <vscale x 4 x i1> poison, i1 %neg.compare18, i64 0
- %.splat20 = shufflevector <vscale x 4 x i1> %.splatinsert19, <vscale x 4 x i1> poison, <vscale x 4 x i32> zeroinitializer
- %ptr.diff.lane.mask21 = tail call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 %diff17)
- %active.lane.mask.alias22 = or <vscale x 4 x i1> %ptr.diff.lane.mask21, %.splat20
- %0 = and <vscale x 4 x i1> %active.lane.mask.alias, %active.lane.mask.alias22
- %active.lane.mask.entry = tail call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 0, i64 %wide.trip.count)
- %1 = zext <vscale x 4 x i1> %0 to <vscale x 4 x i8>
- %2 = tail call i8 @llvm.vector.reduce.add.nxv4i8(<vscale x 4 x i8> %1)
- %3 = zext i8 %2 to i64
- br label %vector.body
-
-vector.body:
- %index = phi i64 [ 0, %for.body.preheader ], [ %index.next, %vector.body ]
- %active.lane.mask = phi <vscale x 4 x i1> [ %active.lane.mask.entry, %for.body.preheader ], [ %active.lane.mask.next, %vector.body ]
- %4 = and <vscale x 4 x i1> %active.lane.mask, %0
- %5 = getelementptr inbounds i32, ptr %a, i64 %index
- %wide.masked.load = tail call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0(ptr %5, i32 4, <vscale x 4 x i1> %4, <vscale x 4 x i32> poison)
- %6 = getelementptr inbounds i32, ptr %b, i64 %index
- %wide.masked.load23 = tail call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0(ptr %6, i32 4, <vscale x 4 x i1> %4, <vscale x 4 x i32> poison)
- %7 = add <vscale x 4 x i32> %wide.masked.load23, %wide.masked.load
- %8 = getelementptr inbounds i32, ptr %c, i64 %index
- tail call void @llvm.masked.store.nxv4i32.p0(<vscale x 4 x i32> %7, ptr %8, i32 4, <vscale x 4 x i1> %4)
- %index.next = add i64 %index, %3
- %active.lane.mask.next = tail call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64 %index.next, i64 %wide.trip.count)
- %9 = extractelement <vscale x 4 x i1> %active.lane.mask.next, i64 0
- br i1 %9, label %vector.body, label %for.cond.cleanup
-
-for.cond.cleanup:
- ret void
-}
-
-define void @whilewr_loop_multiple_64(ptr %a, ptr %b, ptr %c, i32 %n) {
-; CHECK-LABEL: whilewr_loop_multiple_64:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: cmp w3, #1
-; CHECK-NEXT: b.lt .LBB13_3
-; CHECK-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NEXT: whilewr p0.d, x0, x2
-; CHECK-NEXT: mov w9, w3
-; CHECK-NEXT: mov x8, xzr
-; CHECK-NEXT: whilewr p1.d, x1, x2
-; CHECK-NEXT: and p0.b, p0/z, p0.b, p1.b
-; CHECK-NEXT: whilelo p1.d, xzr, x9
-; CHECK-NEXT: cntp x10, p0, p0.d
-; CHECK-NEXT: and x10, x10, #0xff
-; CHECK-NEXT: .LBB13_2: // %vector.body
-; CHECK-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NEXT: and p1.b, p1/z, p1.b, p0.b
-; CHECK-NEXT: ld1d { z0.d }, p1/z, [x0, x8, lsl #3]
-; CHECK-NEXT: ld1d { z1.d }, p1/z, [x1, x8, lsl #3]
-; CHECK-NEXT: add z0.d, z1.d, z0.d
-; CHECK-NEXT: st1d { z0.d }, p1, [x2, x8, lsl #3]
-; CHECK-NEXT: add x8, x8, x10
-; CHECK-NEXT: whilelo p1.d, x8, x9
-; CHECK-NEXT: b.mi .LBB13_2
-; CHECK-NEXT: .LBB13_3: // %for.cond.cleanup
-; CHECK-NEXT: ret
-;
-; CHECK-NOSVE2-LABEL: whilewr_loop_multiple_64:
-; CHECK-NOSVE2: // %bb.0: // %entry
-; CHECK-NOSVE2-NEXT: cmp w3, #1
-; CHECK-NOSVE2-NEXT: b.lt .LBB13_3
-; CHECK-NOSVE2-NEXT: // %bb.1: // %for.body.preheader
-; CHECK-NOSVE2-NEXT: sub x9, x0, x2
-; CHECK-NOSVE2-NEXT: mov x8, xzr
-; CHECK-NOSVE2-NEXT: add x10, x9, #7
-; CHECK-NOSVE2-NEXT: cmp x9, #0
-; CHECK-NOSVE2-NEXT: csel x10, x10, x9, lt
-; CHECK-NOSVE2-NEXT: cmn x9, #7
-; CHECK-NOSVE2-NEXT: asr x9, x10, #3
-; CHECK-NOSVE2-NEXT: cset w10, lt
-; CHECK-NOSVE2-NEXT: sbfx x10, x10, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p0.d, xzr, x9
-; CHECK-NOSVE2-NEXT: sub x9, x1, x2
-; CHECK-NOSVE2-NEXT: whilelo p1.d, xzr, x10
-; CHECK-NOSVE2-NEXT: add x10, x9, #7
-; CHECK-NOSVE2-NEXT: cmp x9, #0
-; CHECK-NOSVE2-NEXT: csel x10, x10, x9, lt
-; CHECK-NOSVE2-NEXT: cmn x9, #7
-; CHECK-NOSVE2-NEXT: sel p0.b, p0, p0.b, p1.b
-; CHECK-NOSVE2-NEXT: cset w9, lt
-; CHECK-NOSVE2-NEXT: asr x10, x10, #3
-; CHECK-NOSVE2-NEXT: sbfx x9, x9, #0, #1
-; CHECK-NOSVE2-NEXT: whilelo p3.d, xzr, x10
-; CHECK-NOSVE2-NEXT: whilelo p2.d, xzr, x9
-; CHECK-NOSVE2-NEXT: mov w9, w3
-; CHECK-NOSVE2-NEXT: sel p1.b, p3, p3.b, p2.b
-; CHECK-NOSVE2-NEXT: and p0.b, p0/z, p0.b, p1.b
-; CHECK-NOSVE2-NEXT: whilelo p1.d, xzr, x9
-; CHECK-NOSVE2-NEXT: cntp x10, p0, p0.d
-; CHECK-NOSVE2-NEXT: and x10, x10, #0xff
-; CHECK-NOSVE2-NEXT: .LBB13_2: // %vector.body
-; CHECK-NOSVE2-NEXT: // =>This Inner Loop Header: Depth=1
-; CHECK-NOSVE2-NEXT: and p1.b, p1/z, p1.b, p0.b
-; CHECK-NOSVE2-NEXT: ld1d { z0.d }, p1/z, [x0, x8, lsl #3]
-; CHECK-NOSVE2-NEXT: ld1d { z1.d }, p1/z, [x1, x8, lsl #3]
-; CHECK-NOSVE2-NEXT: add z0.d, z1.d, z0.d
-; CHECK-NOSVE2-NEXT: st1d { z0.d }, p1, [x2, x8, lsl #3]
-; CHECK-NOSVE2-NEXT: add x8, x8, x10
-; CHECK-NOSVE2-NEXT: whilelo p1.d, x8, x9
-; CHECK-NOSVE2-NEXT: b.mi .LBB13_2
-; CHECK-NOSVE2-NEXT: .LBB13_3: // %for.cond.cleanup
-; CHECK-NOSVE2-NEXT: ret
-entry:
- %cmp9 = icmp sgt i32 %n, 0
- br i1 %cmp9, label %for.body.preheader, label %for.cond.cleanup
-
-for.body.preheader:
- %c12 = ptrtoint ptr %c to i64
- %a13 = ptrtoint ptr %a to i64
- %b14 = ptrtoint ptr %b to i64
- %wide.trip.count = zext nneg i32 %n to i64
- %sub.diff = sub i64 %a13, %c12
- %diff = sdiv i64 %sub.diff, 8
- %neg.compare = icmp slt i64 %sub.diff, -7
- %.splatinsert = insertelement <vscale x 2 x i1> poison, i1 %neg.compare, i64 0
- %.splat = shufflevector <vscale x 2 x i1> %.splatinsert, <vscale x 2 x i1> poison, <vscale x 2 x i32> zeroinitializer
- %ptr.diff.lane.mask = tail call <vscale x 2 x i1> @llvm.get.active.lane.mask.nxv2i1.i64(i64 0, i64 %diff)
- %active.lane.mask.alias = or <vscale x 2 x i1> %ptr.diff.lane.mask, %.splat
- %sub.diff16 = sub i64 %b14, %c12
- %diff17 = sdiv i64 %sub.diff16, 8
- %neg.compare18 = icmp slt i64 %sub.diff16, -7
- %.splatinsert19 = insertelement <vscale x 2 x i1> poison, i1 %neg.compare18, i64 0
- %.splat20 = shufflevector <vscale x 2 x i1> %.splatinsert19, <vscale x 2 x i1> poison, <vscale x 2 x i32> zeroinitializer
- %ptr.diff.lane.mask21 = tail call <vscale x 2 x i1> @llvm.get.active.lane.mask.nxv2i1.i64(i64 0, i64 %diff17)
- %active.lane.mask.alias22 = or <vscale x 2 x i1> %ptr.diff.lane.mask21, %.splat20
- %0 = and <vscale x 2 x i1> %active.lane.mask.alias, %active.lane.mask.alias22
- %active.lane.mask.entry = tail call <vscale x 2 x i1> @llvm.get.active.lane.mask.nxv2i1.i64(i64 0, i64 %wide.trip.count)
- %1 = zext <vscale x 2 x i1> %0 to <vscale x 2 x i8>
- %2 = tail call i8 @llvm.vector.reduce.add.nxv2i8(<vscale x 2 x i8> %1)
- %3 = zext i8 %2 to i64
- br label %vector.body
-
-vector.body:
- %index = phi i64 [ 0, %for.body.preheader ], [ %index.next, %vector.body ]
- %active.lane.mask = phi <vscale x 2 x i1> [ %active.lane.mask.entry, %for.body.preheader ], [ %active.lane.mask.next, %vector.body ]
- %4 = and <vscale x 2 x i1> %active.lane.mask, %0
- %5 = getelementptr inbounds i64, ptr %a, i64 %index
- %wide.masked.load = tail call <vscale x 2 x i64> @llvm.masked.load.nxv2i64.p0(ptr %5, i32 8, <vscale x 2 x i1> %4, <vscale x 2 x i64> poison)
- %6 = getelementptr inbounds i64, ptr %b, i64 %index
- %wide.masked.load23 = tail call <vscale x 2 x i64> @llvm.masked.load.nxv2i64.p0(ptr %6, i32 8, <vscale x 2 x i1> %4, <vscale x 2 x i64> poison)
- %7 = add <vscale x 2 x i64> %wide.masked.load23, %wide.masked.load
- %8 = getelementptr inbounds i64, ptr %c, i64 %index
- tail call void @llvm.masked.store.nxv2i64.p0(<vscale x 2 x i64> %7, ptr %8, i32 8, <vscale x 2 x i1> %4)
- %index.next = add i64 %index, %3
- %active.lane.mask.next = tail call <vscale x 2 x i1> @llvm.get.active.lane.mask.nxv2i1.i64(i64 %index.next, i64 %wide.trip.count)
- %9 = extractelement <vscale x 2 x i1> %active.lane.mask.next, i64 0
- br i1 %9, label %vector.body, label %for.cond.cleanup
-
-for.cond.cleanup:
- ret void
-}
-
-declare i64 @llvm.vscale.i64()
-
-declare <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64, i64)
-
-declare <vscale x 16 x i8> @llvm.masked.load.nxv16i8.p0(ptr nocapture, i32 immarg, <vscale x 16 x i1>, <vscale x 16 x i8>)
-
-declare void @llvm.masked.store.nxv16i8.p0(<vscale x 16 x i8>, ptr nocapture, i32 immarg, <vscale x 16 x i1>)
-
-declare <vscale x 8 x i1> @llvm.get.active.lane.mask.nxv8i1.i64(i64, i64)
-
-declare <vscale x 8 x i16> @llvm.masked.load.nxv8i16.p0(ptr nocapture, i32 immarg, <vscale x 8 x i1>, <vscale x 8 x i16>)
-
-declare void @llvm.masked.store.nxv8i16.p0(<vscale x 8 x i16>, ptr nocapture, i32 immarg, <vscale x 8 x i1>)
-
-declare <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i64(i64, i64)
-
-declare <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0(ptr nocapture, i32 immarg, <vscale x 4 x i1>, <vscale x 4 x i32>)
-
-declare void @llvm.masked.store.nxv4i32.p0(<vscale x 4 x i32>, ptr nocapture, i32 immarg, <vscale x 4 x i1>)
-
-declare <vscale x 2 x i1> @llvm.get.active.lane.mask.nxv2i1.i64(i64, i64)
-
-declare <vscale x 2 x i64> @llvm.masked.load.nxv2i64.p0(ptr nocapture, i32 immarg, <vscale x 2 x i1>, <vscale x 2 x i64>)
-
-declare void @llvm.masked.store.nxv2i64.p0(<vscale x 2 x i64>, ptr nocapture, i32 immarg, <vscale x 2 x i1>)
>From e984b6120792f90ef00cd9ac23fde16adee73a5a Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 21 Nov 2024 13:53:11 +0000
Subject: [PATCH 02/15] Fix intrinsic signature in docs
---
llvm/docs/LangRef.rst | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index c9589d5af8ebbe..1ad4b5c3b7e860 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23477,7 +23477,7 @@ Examples:
.. _int_experimental_get_alias_lane_mask:
-'``llvm.get.alias.lane.mask.*``' Intrinsics
+'``llvm.experimental.get.alias.lane.mask.*``' Intrinsics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Syntax:
@@ -23486,10 +23486,10 @@ This is an overloaded intrinsic.
::
- declare <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %ptrA, i64 %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %ptrA, i64 %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
Overview:
>From 2ea522b993b1f835e519ed51ac6f167556fc3772 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 21 Nov 2024 14:57:06 +0000
Subject: [PATCH 03/15] Add -o - to test run lines
---
llvm/docs/LangRef.rst | 4 ++--
llvm/test/CodeGen/AArch64/alias_mask.ll | 4 ++--
llvm/test/CodeGen/AArch64/alias_mask_scalable.ll | 2 +-
3 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 1ad4b5c3b7e860..71c1c22cbac026 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23524,7 +23524,7 @@ Otherwise they are semantically equivalent to:
where ``%m`` is a vector (mask) of active/inactive lanes with its elements
indexed by ``i``, and ``%ptrA``, ``%ptrB`` are the two i64 arguments to
-``llvm.experimental.get.alias.lane.mask.*``, ``%elementSize`` is the i32 argument, ``%abs`` is the absolute difference operation, ``%icmp`` is an integer compare and ``ult``
+``llvm.experimental.get.alias.lane.mask.*``, ``%elementSize`` is the first immediate argument, ``%abs`` is the absolute difference operation, ``%icmp`` is an integer compare and ``ult``
the unsigned less-than comparison operator. The subtraction between ``%ptrA`` and ``%ptrB`` could be negative. The ``%writeAfterRead`` argument is expected to be true if the ``%ptrB`` is stored to after ``%ptrA`` is read from.
The above is equivalent to:
@@ -23551,7 +23551,7 @@ Examples:
.. code-block:: llvm
- %alias.lane.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64(i64 %ptrA, i64 %ptrB, i32 4, i1 1)
+ %alias.lane.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 4, i1 1)
%vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %alias.lane.mask, <4 x i32> poison)
[...]
call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %alias.lane.mask)
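As a rough reference for the semantics documented in the LangRef hunk above, the sketch below models the per-lane behaviour in scalar C++. It is illustrative only and not part of the patch: the function name, the fixed lane count, and comparing the element-scaled (truncated) distance are assumptions based on the wording above and on the AArch64 test output later in the series.

```cpp
// Illustrative sketch, not part of the patch: a scalar model of the
// per-lane alias-mask semantics described above.
#include <cstdint>
#include <vector>

std::vector<bool> aliasLaneMaskRef(uint64_t PtrA, uint64_t PtrB,
                                   int64_t ElementSize, bool WriteAfterRead,
                                   unsigned Lanes) {
  int64_t Diff = static_cast<int64_t>(PtrB - PtrA);  // may be negative
  if (!WriteAfterRead)
    Diff = Diff < 0 ? -Diff : Diff;                  // the %abs for the rw form
  int64_t DiffInElts = Diff / ElementSize;           // truncates toward zero
  // A non-positive distance (write-after-read) or a zero distance
  // (read-after-write) means every lane is safe.
  bool AllLanes = WriteAfterRead ? DiffInElts <= 0 : DiffInElts == 0;
  std::vector<bool> Mask(Lanes);
  for (unsigned I = 0; I < Lanes; ++I)
    // Otherwise lane I is enabled when I ult the element distance.
    Mask[I] = AllLanes ||
              static_cast<uint64_t>(I) < static_cast<uint64_t>(DiffInElts);
  return Mask;
}
```

For example, with ptrB - ptrA = 6 bytes, a 4-byte element size and writeAfterRead set, the element distance truncates to 1 and only lane 0 is enabled.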
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 59ad2bf82db92e..84a22822f1702b 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -1,6 +1,6 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
-; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s | FileCheck %s --check-prefix=CHECK-SVE
-; RUN: llc -mtriple=aarch64 %s | FileCheck %s --check-prefix=CHECK-NOSVE
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE
+; RUN: llc -mtriple=aarch64 %s -o - | FileCheck %s --check-prefix=CHECK-NOSVE
define <16 x i1> @whilewr_8(i64 %a, i64 %b) {
; CHECK-SVE-LABEL: whilewr_8:
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index 4912bc0f59d40d..e4ef5292dee277 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -1,5 +1,5 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
-; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s | FileCheck %s
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s
define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
; CHECK-LABEL: whilewr_8:
>From 4c4ab8aeed1b4fd66a90334dfcd889481f7e35b6 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 21 Nov 2024 17:01:47 +0000
Subject: [PATCH 04/15] Format
---
llvm/include/llvm/CodeGen/TargetLowering.h | 6 +-
.../SelectionDAG/SelectionDAGBuilder.cpp | 34 ++++++-----
.../Target/AArch64/AArch64ISelLowering.cpp | 58 +++++++++++--------
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 3 +-
4 files changed, 59 insertions(+), 42 deletions(-)
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index 0338310fd936df..51e716f50814e1 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -468,8 +468,10 @@ class TargetLoweringBase {
return true;
}
- /// Return true if the @llvm.experimental.get.alias.lane.mask intrinsic should be expanded using generic code in SelectionDAGBuilder.
- virtual bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT, unsigned EltSize) const {
+ /// Return true if the @llvm.experimental.get.alias.lane.mask intrinsic should
+ /// be expanded using generic code in SelectionDAGBuilder.
+ virtual bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT,
+ unsigned EltSize) const {
return true;
}
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 39e84e06a8de60..f942ab05149c23 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8288,17 +8288,19 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
SDValue SourceValue = getValue(I.getOperand(0));
SDValue SinkValue = getValue(I.getOperand(1));
SDValue EltSize = getValue(I.getOperand(2));
- bool IsWriteAfterRead = cast<ConstantSDNode>(getValue(I.getOperand(3)))->getZExtValue() != 0;
+ bool IsWriteAfterRead =
+ cast<ConstantSDNode>(getValue(I.getOperand(3)))->getZExtValue() != 0;
auto IntrinsicVT = EVT::getEVT(I.getType());
auto PtrVT = SourceValue->getValueType(0);
- if (!TLI.shouldExpandGetAliasLaneMask(IntrinsicVT, PtrVT, cast<ConstantSDNode>(EltSize)->getSExtValue())) {
+ if (!TLI.shouldExpandGetAliasLaneMask(
+ IntrinsicVT, PtrVT,
+ cast<ConstantSDNode>(EltSize)->getSExtValue())) {
visitTargetIntrinsic(I, Intrinsic);
return;
}
- SDValue Diff = DAG.getNode(ISD::SUB, sdl,
- PtrVT, SinkValue, SourceValue);
+ SDValue Diff = DAG.getNode(ISD::SUB, sdl, PtrVT, SinkValue, SourceValue);
if (!IsWriteAfterRead)
Diff = DAG.getNode(ISD::ABS, sdl, PtrVT, Diff);
@@ -8306,9 +8308,10 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
SDValue Zero = DAG.getTargetConstant(0, sdl, PtrVT);
// If the difference is positive then some elements may alias
- auto CmpVT = TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
- PtrVT);
- SDValue Cmp = DAG.getSetCC(sdl, CmpVT, Diff, Zero, IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
+ auto CmpVT =
+ TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), PtrVT);
+ SDValue Cmp = DAG.getSetCC(sdl, CmpVT, Diff, Zero,
+ IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
// Splat the compare result then OR it with a lane mask
SDValue Splat = DAG.getSplat(IntrinsicVT, sdl, Cmp);
@@ -8316,14 +8319,17 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
SDValue DiffMask;
// Don't emit an active lane mask if the target doesn't support it
if (TLI.shouldExpandGetActiveLaneMask(IntrinsicVT, PtrVT)) {
- EVT VecTy = EVT::getVectorVT(*DAG.getContext(), PtrVT,
- IntrinsicVT.getVectorElementCount());
- SDValue DiffSplat = DAG.getSplat(VecTy, sdl, Diff);
- SDValue VectorStep = DAG.getStepVector(sdl, VecTy);
- DiffMask = DAG.getSetCC(sdl, IntrinsicVT, VectorStep,
- DiffSplat, ISD::CondCode::SETULT);
+ EVT VecTy = EVT::getVectorVT(*DAG.getContext(), PtrVT,
+ IntrinsicVT.getVectorElementCount());
+ SDValue DiffSplat = DAG.getSplat(VecTy, sdl, Diff);
+ SDValue VectorStep = DAG.getStepVector(sdl, VecTy);
+ DiffMask = DAG.getSetCC(sdl, IntrinsicVT, VectorStep, DiffSplat,
+ ISD::CondCode::SETULT);
} else {
- DiffMask = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, sdl, IntrinsicVT, DAG.getTargetConstant(Intrinsic::get_active_lane_mask, sdl, MVT::i64), Zero, Diff);
+ DiffMask = DAG.getNode(
+ ISD::INTRINSIC_WO_CHAIN, sdl, IntrinsicVT,
+ DAG.getTargetConstant(Intrinsic::get_active_lane_mask, sdl, MVT::i64),
+ Zero, Diff);
}
SDValue Or = DAG.getNode(ISD::OR, sdl, IntrinsicVT, DiffMask, Splat);
setValue(&I, Or);
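To relate the node sequence above to concrete values, here is a hedged trace with made-up inputs; the element-size scaling of the difference is assumed to happen before the compare (it is not part of the hunks shown, but the AArch64 test output later in the series suggests it).

```cpp
// Illustrative trace only, not part of the patch. Assumed inputs:
// ptrA = 0x1000, ptrB = 0x1006, element size = 4, writeAfterRead = true,
// result type <4 x i1>.
static_assert(0x1006 - 0x1000 == 6, "ISD::SUB: byte difference");
static_assert(6 / 4 == 1, "element distance, truncated toward zero");
// SETLE against zero:          1 <= 0 -> false, so the splat is all-false.
// Step vector {0,1,2,3} ult 1  ->        {1,0,0,0}
//   (or get.active.lane.mask(0, 1) when the target prefers that form).
// Final ISD::OR                ->        {1,0,0,0}: only lane 0 is enabled.
```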
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 66eaec0d5ae6c9..020635400eedae 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -2033,7 +2033,8 @@ bool AArch64TargetLowering::shouldExpandGetActiveLaneMask(EVT ResVT,
return false;
}
-bool AArch64TargetLowering::shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT, unsigned EltSize) const {
+bool AArch64TargetLowering::shouldExpandGetAliasLaneMask(
+ EVT VT, EVT PtrVT, unsigned EltSize) const {
if (!Subtarget->hasSVE2())
return true;
@@ -2042,7 +2043,7 @@ bool AArch64TargetLowering::shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT, unsi
if (VT == MVT::v2i1 || VT == MVT::nxv2i1)
return EltSize != 8;
- if( VT == MVT::v4i1 || VT == MVT::nxv4i1)
+ if (VT == MVT::v4i1 || VT == MVT::nxv4i1)
return EltSize != 4;
if (VT == MVT::v8i1 || VT == MVT::nxv8i1)
return EltSize != 2;
@@ -5905,12 +5906,14 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
case Intrinsic::aarch64_sve_whilewr_h:
case Intrinsic::aarch64_sve_whilewr_s:
case Intrinsic::aarch64_sve_whilewr_d:
- return DAG.getNode(AArch64ISD::WHILEWR, dl, Op.getValueType(), Op.getOperand(1), Op.getOperand(2));
+ return DAG.getNode(AArch64ISD::WHILEWR, dl, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2));
case Intrinsic::aarch64_sve_whilerw_b:
case Intrinsic::aarch64_sve_whilerw_h:
case Intrinsic::aarch64_sve_whilerw_s:
case Intrinsic::aarch64_sve_whilerw_d:
- return DAG.getNode(AArch64ISD::WHILERW, dl, Op.getValueType(), Op.getOperand(1), Op.getOperand(2));
+ return DAG.getNode(AArch64ISD::WHILERW, dl, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2));
case Intrinsic::aarch64_neon_abs: {
EVT Ty = Op.getValueType();
if (Ty == MVT::i64) {
@@ -6377,34 +6380,38 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
uint64_t EltSize = Op.getOperand(3)->getAsZExtVal();
bool IsWriteAfterRead = Op.getOperand(4)->getAsZExtVal() == 1;
switch (EltSize) {
- case 1:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b : Intrinsic::aarch64_sve_whilerw_b;
- break;
- case 2:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h : Intrinsic::aarch64_sve_whilerw_h;
- break;
- case 4:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s : Intrinsic::aarch64_sve_whilerw_s;
- break;
- case 8:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d : Intrinsic::aarch64_sve_whilerw_d;
- break;
- default:
- llvm_unreachable("Unexpected element size for get.alias.lane.mask");
- break;
+ case 1:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
+ : Intrinsic::aarch64_sve_whilerw_b;
+ break;
+ case 2:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
+ : Intrinsic::aarch64_sve_whilerw_h;
+ break;
+ case 4:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
+ : Intrinsic::aarch64_sve_whilerw_s;
+ break;
+ case 8:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
+ : Intrinsic::aarch64_sve_whilerw_d;
+ break;
+ default:
+ llvm_unreachable("Unexpected element size for get.alias.lane.mask");
+ break;
}
}
- SDValue ID =
- DAG.getTargetConstant(IntrinsicID, dl, MVT::i64);
+ SDValue ID = DAG.getTargetConstant(IntrinsicID, dl, MVT::i64);
EVT VT = Op.getValueType();
if (VT.isScalableVector())
return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, dl, VT, ID, Op.getOperand(1),
Op.getOperand(2));
- // We can use the SVE whilelo/whilewr/whilerw instruction to lower this intrinsic by
- // creating the appropriate sequence of scalable vector operations and
- // then extracting a fixed-width subvector from the scalable vector.
+ // We can use the SVE whilelo/whilewr/whilerw instruction to lower this
+ // intrinsic by creating the appropriate sequence of scalable vector
+ // operations and then extracting a fixed-width subvector from the scalable
+ // vector.
EVT ContainerVT = getContainerForFixedLengthVector(DAG, VT);
EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
@@ -19544,7 +19551,8 @@ static bool isPredicateCCSettingOp(SDValue N) {
// get_active_lane_mask is lowered to a whilelo instruction.
N.getConstantOperandVal(0) == Intrinsic::get_active_lane_mask ||
// get_alias_lane_mask is lowered to a whilewr/rw instruction.
- N.getConstantOperandVal(0) == Intrinsic::experimental_get_alias_lane_mask)))
+ N.getConstantOperandVal(0) ==
+ Intrinsic::experimental_get_alias_lane_mask)))
return true;
return false;
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index b2f766b22911ff..41f56349215903 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -984,7 +984,8 @@ class AArch64TargetLowering : public TargetLowering {
bool shouldExpandGetActiveLaneMask(EVT VT, EVT OpVT) const override;
- bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT, unsigned EltSize) const override;
+ bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT,
+ unsigned EltSize) const override;
bool
shouldExpandPartialReductionIntrinsic(const IntrinsicInst *I) const override;
>From 4be260b7f1881839de0fc115839bc185e1ab74af Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Tue, 26 Nov 2024 10:22:07 +0000
Subject: [PATCH 05/15] Extend underline
---
llvm/docs/LangRef.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 71c1c22cbac026..957441462a6d11 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23478,7 +23478,7 @@ Examples:
.. _int_experimental_get_alias_lane_mask:
'``llvm.experimental.get.alias.lane.mask.*``' Intrinsics
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Syntax:
"""""""
>From a2ca34f731307ab3ec51456645d435b5c93820a7 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 28 Nov 2024 13:38:46 +0000
Subject: [PATCH 06/15] Add SVE run line
---
.../CodeGen/AArch64/alias_mask_scalable.ll | 179 ++++++++++++++----
1 file changed, 146 insertions(+), 33 deletions(-)
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index e4ef5292dee277..be5ec8b2a82bf2 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -1,81 +1,194 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
-; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE2
+; RUN: llc -mtriple=aarch64 -mattr=+sve %s -o - | FileCheck %s --check-prefix=CHECK-SVE
define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
-; CHECK-LABEL: whilewr_8:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.b, x0, x1
-; CHECK-NEXT: ret
+; CHECK-SVE2-LABEL: whilewr_8:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
ret <vscale x 16 x i1> %0
}
define <vscale x 8 x i1> @whilewr_16(i64 %a, i64 %b) {
-; CHECK-LABEL: whilewr_16:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.h, x0, x1
-; CHECK-NEXT: ret
+; CHECK-SVE2-LABEL: whilewr_16:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
ret <vscale x 8 x i1> %0
}
define <vscale x 4 x i1> @whilewr_32(i64 %a, i64 %b) {
-; CHECK-LABEL: whilewr_32:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.s, x0, x1
-; CHECK-NEXT: ret
+; CHECK-SVE2-LABEL: whilewr_32:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
entry:
%0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
ret <vscale x 4 x i1> %0
}
define <vscale x 2 x i1> @whilewr_64(i64 %a, i64 %b) {
-; CHECK-LABEL: whilewr_64:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilewr p0.d, x0, x1
-; CHECK-NEXT: ret
+; CHECK-SVE2-LABEL: whilewr_64:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
entry:
%0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
ret <vscale x 2 x i1> %0
}
define <vscale x 16 x i1> @whilerw_8(i64 %a, i64 %b) {
-; CHECK-LABEL: whilerw_8:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilerw p0.b, x0, x1
-; CHECK-NEXT: ret
+; CHECK-SVE2-LABEL: whilerw_8:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.b, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
entry:
%0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
ret <vscale x 16 x i1> %0
}
define <vscale x 8 x i1> @whilerw_16(i64 %a, i64 %b) {
-; CHECK-LABEL: whilerw_16:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilerw p0.h, x0, x1
-; CHECK-NEXT: ret
+; CHECK-SVE2-LABEL: whilerw_16:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.h, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
entry:
%0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 0)
ret <vscale x 8 x i1> %0
}
define <vscale x 4 x i1> @whilerw_32(i64 %a, i64 %b) {
-; CHECK-LABEL: whilerw_32:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilerw p0.s, x0, x1
-; CHECK-NEXT: ret
+; CHECK-SVE2-LABEL: whilerw_32:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.s, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
entry:
%0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
ret <vscale x 4 x i1> %0
}
define <vscale x 2 x i1> @whilerw_64(i64 %a, i64 %b) {
-; CHECK-LABEL: whilerw_64:
-; CHECK: // %bb.0: // %entry
-; CHECK-NEXT: whilerw p0.d, x0, x1
-; CHECK-NEXT: ret
+; CHECK-SVE2-LABEL: whilerw_64:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.d, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
entry:
%0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
ret <vscale x 2 x i1> %0
>From 1494e9e20dc647ae53964892002d308050e38537 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 28 Nov 2024 16:23:15 +0000
Subject: [PATCH 07/15] 80 columns in lang ref and remove some extra usage info
---
llvm/docs/LangRef.rst | 35 +++++++++++++++++++----------------
1 file changed, 19 insertions(+), 16 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 957441462a6d11..c680f10eaf0590 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23495,7 +23495,8 @@ This is an overloaded intrinsic.
Overview:
"""""""""
-Create a mask representing lanes that do or not overlap between two pointers across one vector loop iteration.
+Create a mask representing lanes that do or do not overlap between two
+pointers across one vector loop iteration.
Arguments:
@@ -23507,8 +23508,9 @@ The final two are immediates and the result is a vector with the i1 element type
Semantics:
""""""""""
-In the case that ``%writeAfterRead`` is true, the '``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically equivalent
-to:
+In the case that ``%writeAfterRead`` is true, the
+'``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically
+equivalent to:
::
@@ -23524,8 +23526,12 @@ Otherwise they are semantically equivalent to:
where ``%m`` is a vector (mask) of active/inactive lanes with its elements
indexed by ``i``, and ``%ptrA``, ``%ptrB`` are the two i64 arguments to
-``llvm.experimental.get.alias.lane.mask.*``, ``%elementSize`` is the first immediate argument, ``%abs`` is the absolute difference operation, ``%icmp`` is an integer compare and ``ult``
-the unsigned less-than comparison operator. The subtraction between ``%ptrA`` and ``%ptrB`` could be negative. The ``%writeAfterRead`` argument is expected to be true if the ``%ptrB`` is stored to after ``%ptrA`` is read from.
+``llvm.experimental.get.alias.lane.mask.*``, ``%elementSize`` is the first
+immediate argument, ``%abs`` is the absolute difference operation, ``%icmp`` is
+an integer compare and ``ult`` the unsigned less-than comparison operator. The
+subtraction between ``%ptrA`` and ``%ptrB`` could be negative. The
+``%writeAfterRead`` argument is expected to be true if the ``%ptrB`` is stored
+to after ``%ptrA`` is read from.
The above is equivalent to:
::
@@ -23533,17 +23539,14 @@ The above is equivalent to:
%m = @llvm.experimental.get.alias.lane.mask(%ptrA, %ptrB, %elementSize, %writeAfterRead)
This can, for example, be emitted by the loop vectorizer in which case
-``%ptrA`` is a pointer that is read from within the loop, and ``%ptrB`` is a pointer that is stored to within the loop.
-If the difference between these pointers is less than the vector factor, then they overlap (alias) within a loop iteration.
-An example is if ``%ptrA`` is 20 and ``%ptrB`` is 23 with a vector factor of 8, then lanes 3, 4, 5, 6 and 7 of the vector loaded from ``%ptrA``
-share addresses with lanes 0, 1, 2, 3, 4 and 5 from the vector stored to at ``%ptrB``.
-An alias mask of these two pointers should be <1, 1, 1, 0, 0, 0, 0, 0> so that only the non-overlapping lanes are loaded and stored.
-This operation allows many loops to be vectorised when it would otherwise be unsafe to do so.
-
-To account for the fact that only a subset of lanes have been operated on in an iteration,
-the loop's induction variable should be incremented by the popcount of the mask rather than the vector factor.
-
-This mask ``%m`` can e.g. be used in masked load/store instructions.
+``%ptrA`` is a pointer that is read from within the loop, and ``%ptrB`` is a
+pointer that is stored to within the loop.
+If the difference between these pointers is less than the vector factor, then
+they overlap (alias) within a loop iteration.
+An example is if ``%ptrA`` is 20 and ``%ptrB`` is 23 with a vector factor of 8,
+then lanes 3, 4, 5, 6 and 7 of the vector loaded from ``%ptrA``
+share addresses with lanes 0, 1, 2, 3 and 4 from the vector stored to at
+``%ptrB``.
Examples:
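For reference, a minimal sketch of how a vectorised loop body might consume this
mask, assuming the mangling and the i64 element-size operand used by the tests in
this series (VF = 4, 4-byte elements, write-after-read; ``%a`` and ``%b`` are the
two addresses as i64, ``%pA`` and ``%pB`` the same addresses as pointers; all
names here are illustrative):
::
      %alias.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
      %v = call <4 x i32> @llvm.masked.load.v4i32.p0(ptr %pA, i32 4, <4 x i1> %alias.mask, <4 x i32> poison)
      %r = add <4 x i32> %v, <i32 1, i32 1, i32 1, i32 1>
      call void @llvm.masked.store.v4i32.p0(<4 x i32> %r, ptr %pB, i32 4, <4 x i1> %alias.mask)
The loop's induction variable would then advance by the popcount of
``%alias.mask`` rather than by the full vector factor.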
>From e8857b535e7afcd776f54443f8b76cdbf6bd38d9 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Fri, 29 Nov 2024 15:04:55 +0000
Subject: [PATCH 08/15] Add note about poison return
---
llvm/docs/LangRef.rst | 3 +++
1 file changed, 3 insertions(+)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index c680f10eaf0590..06ef75d74f41b6 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23548,6 +23548,9 @@ then lanes 3, 4, 5, 6 and 7 of the vector loaded from ``%ptrA``
share addresses with lanes 0, 1, 2, 3 and 4 from the vector stored to at
``%ptrB``.
+The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
+VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
+
Examples:
"""""""""
>From c3f3575ca88a60ca62fa7cd939fc021beb29f4a5 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 2 Dec 2024 15:33:31 +0000
Subject: [PATCH 09/15] Reorganise poison note
---
llvm/docs/LangRef.rst | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 06ef75d74f41b6..71eee7caf285cb 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23508,7 +23508,9 @@ The final two are immediates and the result is a vector with the i1 element type
Semantics:
""""""""""
-In the case that ``%writeAfterRead`` is true, the
+The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
+VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
+In other cases when ``%writeAfterRead`` is true, the
'``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically
equivalent to:
@@ -23517,7 +23519,9 @@ equivalent to:
%diff = (%ptrB - %ptrA) / %elementSize
%m[i] = (icmp ult i, %diff) || (%diff <= 0)
-Otherwise they are semantically equivalent to:
+When the return value is not poison and ``%writeAfterRead`` is false, the
+'``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically
+equivalent to:
::
@@ -23526,12 +23530,9 @@ Otherwise they are semantically equivalent to:
where ``%m`` is a vector (mask) of active/inactive lanes with its elements
indexed by ``i``, and ``%ptrA``, ``%ptrB`` are the two i64 arguments to
-``llvm.experimental.get.alias.lane.mask.*``, ``%elementSize`` is the first
-immediate argument, ``%abs`` is the absolute difference operation, ``%icmp`` is
-an integer compare and ``ult`` the unsigned less-than comparison operator. The
-subtraction between ``%ptrA`` and ``%ptrB`` could be negative. The
-``%writeAfterRead`` argument is expected to be true if the ``%ptrB`` is stored
-to after ``%ptrA`` is read from.
+``llvm.experimental.get.alias.lane.mask.*`` and ``%elementSize`` is the first
+immediate argument. The ``%writeAfterRead`` argument is expected to be true if
+``%ptrB`` is stored to after ``%ptrA`` is read from.
The above is equivalent to:
::
@@ -23548,9 +23549,6 @@ then lanes 3, 4, 5, 6 and 7 of the vector loaded from ``%ptrA``
share addresses with lanes 0, 1, 2, 3 and 4 from the vector stored to at
``%ptrB``.
-The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
-VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
-
Examples:
"""""""""
>From 24d33ca82b865e45bcf524196cd8d2cdfec8c3ec Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Fri, 10 Jan 2025 11:37:37 +0000
Subject: [PATCH 10/15] Rework lowering location
---
llvm/include/llvm/CodeGen/ISDOpcodes.h | 5 +
llvm/include/llvm/IR/IntrinsicsAArch64.td | 10 +-
.../SelectionDAG/LegalizeIntegerTypes.cpp | 22 ++
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h | 2 +
.../SelectionDAG/LegalizeVectorOps.cpp | 41 ++++
.../SelectionDAG/SelectionDAGBuilder.cpp | 53 +----
.../SelectionDAG/SelectionDAGDumper.cpp | 3 +
llvm/lib/CodeGen/TargetLoweringBase.cpp | 1 +
.../Target/AArch64/AArch64ISelLowering.cpp | 123 ++++++----
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 1 +
llvm/test/CodeGen/AArch64/alias_mask.ll | 120 ++--------
.../CodeGen/AArch64/alias_mask_scalable.ll | 210 ++++++++++++++----
12 files changed, 353 insertions(+), 238 deletions(-)
diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index 0b6d155b6d161e..9e4a5e88223ddf 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -1480,6 +1480,11 @@ enum NodeType {
// Output: Output Chain
EXPERIMENTAL_VECTOR_HISTOGRAM,
+ // The `llvm.experimental.get.alias.lane.mask.*` intrinsics
+ // Operands: Load pointer, Store pointer, Element size, Write after read
+ // Output: Mask
+ EXPERIMENTAL_ALIAS_LANE_MASK,
+
// llvm.clear_cache intrinsic
// Operands: Input Chain, Start Addres, End Address
// Outputs: Output Chain
diff --git a/llvm/include/llvm/IR/IntrinsicsAArch64.td b/llvm/include/llvm/IR/IntrinsicsAArch64.td
index 6a09a8647096f9..04ce5271be795a 100644
--- a/llvm/include/llvm/IR/IntrinsicsAArch64.td
+++ b/llvm/include/llvm/IR/IntrinsicsAArch64.td
@@ -2862,14 +2862,8 @@ def int_aarch64_sve_stnt1_pn_x4 : SVE2p1_Store_PN_X4_Intrinsic;
// SVE2 - Contiguous conflict detection
//
-def int_aarch64_sve_whilerw_b : SVE2_CONFLICT_DETECT_Intrinsic;
-def int_aarch64_sve_whilerw_h : SVE2_CONFLICT_DETECT_Intrinsic;
-def int_aarch64_sve_whilerw_s : SVE2_CONFLICT_DETECT_Intrinsic;
-def int_aarch64_sve_whilerw_d : SVE2_CONFLICT_DETECT_Intrinsic;
-def int_aarch64_sve_whilewr_b : SVE2_CONFLICT_DETECT_Intrinsic;
-def int_aarch64_sve_whilewr_h : SVE2_CONFLICT_DETECT_Intrinsic;
-def int_aarch64_sve_whilewr_s : SVE2_CONFLICT_DETECT_Intrinsic;
-def int_aarch64_sve_whilewr_d : SVE2_CONFLICT_DETECT_Intrinsic;
+def int_aarch64_sve_whilerw : SVE2_CONFLICT_DETECT_Intrinsic;
+def int_aarch64_sve_whilewr : SVE2_CONFLICT_DETECT_Intrinsic;
// Scalable Matrix Extension (SME) Intrinsics
let TargetPrefix = "aarch64" in {
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index 648719bcabc373..328a779b6a426a 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -55,6 +55,9 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
N->dump(&DAG); dbgs() << "\n";
#endif
report_fatal_error("Do not know how to promote this operator!");
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ Res = PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(N);
+ break;
case ISD::MERGE_VALUES:Res = PromoteIntRes_MERGE_VALUES(N, ResNo); break;
case ISD::AssertSext: Res = PromoteIntRes_AssertSext(N); break;
case ISD::AssertZext: Res = PromoteIntRes_AssertZext(N); break;
@@ -355,6 +358,14 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(SDNode *N,
return GetPromotedInteger(Op);
}
+SDValue
+DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
+ EVT VT = N->getValueType(0);
+ EVT NewVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
+ return DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, SDLoc(N), NewVT,
+ N->ops());
+}
+
SDValue DAGTypeLegalizer::PromoteIntRes_AssertSext(SDNode *N) {
// Sign-extend the new bits, and continue the assertion.
SDValue Op = SExtPromotedInteger(N->getOperand(0));
@@ -2069,6 +2080,9 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
case ISD::EXPERIMENTAL_VECTOR_HISTOGRAM:
Res = PromoteIntOp_VECTOR_HISTOGRAM(N, OpNo);
break;
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ Res = DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(N, OpNo);
+ break;
}
// If the result is null, the sub-method took care of registering results etc.
@@ -2802,6 +2816,14 @@ SDValue DAGTypeLegalizer::PromoteIntOp_VECTOR_HISTOGRAM(SDNode *N,
return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
}
+SDValue
+DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N,
+ unsigned OpNo) {
+ SmallVector<SDValue, 4> NewOps(N->ops());
+ NewOps[OpNo] = GetPromotedInteger(N->getOperand(OpNo));
+ return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
+}
+
//===----------------------------------------------------------------------===//
// Integer Result Expansion
//===----------------------------------------------------------------------===//
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index 1703149aca7463..4eb1e08024cc13 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -378,6 +378,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntRes_VPFunnelShift(SDNode *N);
SDValue PromoteIntRes_IS_FPCLASS(SDNode *N);
SDValue PromoteIntRes_PATCHPOINT(SDNode *N);
+ SDValue PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N);
// Integer Operand Promotion.
bool PromoteIntegerOperand(SDNode *N, unsigned OpNo);
@@ -428,6 +429,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntOp_VP_STRIDED(SDNode *N, unsigned OpNo);
SDValue PromoteIntOp_VP_SPLICE(SDNode *N, unsigned OpNo);
SDValue PromoteIntOp_VECTOR_HISTOGRAM(SDNode *N, unsigned OpNo);
+ SDValue PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N, unsigned OpNo);
void SExtOrZExtPromotedOperands(SDValue &LHS, SDValue &RHS);
void PromoteSetCCOperands(SDValue &LHS,SDValue &RHS, ISD::CondCode Code);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index db21e708970648..4099e57da38d24 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -138,6 +138,7 @@ class VectorLegalizer {
SDValue ExpandVP_FNEG(SDNode *Node);
SDValue ExpandVP_FABS(SDNode *Node);
SDValue ExpandVP_FCOPYSIGN(SDNode *Node);
+ SDValue ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N);
SDValue ExpandSELECT(SDNode *Node);
std::pair<SDValue, SDValue> ExpandLoad(SDNode *N);
SDValue ExpandStore(SDNode *N);
@@ -465,6 +466,7 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
case ISD::VECTOR_COMPRESS:
case ISD::SCMP:
case ISD::UCMP:
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
break;
case ISD::SMULFIX:
@@ -1206,6 +1208,9 @@ void VectorLegalizer::Expand(SDNode *Node, SmallVectorImpl<SDValue> &Results) {
case ISD::UCMP:
Results.push_back(TLI.expandCMP(Node, DAG));
return;
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ Results.push_back(ExpandEXPERIMENTAL_ALIAS_LANE_MASK(Node));
+ return;
case ISD::FADD:
case ISD::FMUL:
@@ -1713,6 +1718,42 @@ SDValue VectorLegalizer::ExpandVP_FCOPYSIGN(SDNode *Node) {
return DAG.getNode(ISD::BITCAST, DL, VT, CopiedSign);
}
+SDValue VectorLegalizer::ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
+ SDLoc DL(N);
+ SDValue SourceValue = N->getOperand(0);
+ SDValue SinkValue = N->getOperand(1);
+ SDValue EltSize = N->getOperand(2);
+
+ bool IsWriteAfterRead =
+ cast<ConstantSDNode>(N->getOperand(3))->getZExtValue() != 0;
+ auto VT = N->getValueType(0);
+ auto PtrVT = SourceValue->getValueType(0);
+
+ SDValue Diff = DAG.getNode(ISD::SUB, DL, PtrVT, SinkValue, SourceValue);
+ if (!IsWriteAfterRead)
+ Diff = DAG.getNode(ISD::ABS, DL, PtrVT, Diff);
+
+ Diff = DAG.getNode(ISD::SDIV, DL, PtrVT, Diff, EltSize);
+ SDValue Zero = DAG.getTargetConstant(0, DL, PtrVT);
+
+ // If the difference is positive then some elements may alias
+ auto CmpVT = TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
+ Diff.getValueType());
+ SDValue Cmp = DAG.getSetCC(DL, CmpVT, Diff, Zero,
+ IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
+
+ EVT SplatTY =
+ EVT::getVectorVT(*DAG.getContext(), PtrVT, VT.getVectorElementCount());
+ SDValue DiffSplat = DAG.getSplat(SplatTY, DL, Diff);
+ SDValue VectorStep = DAG.getStepVector(DL, SplatTY);
+ SDValue DiffMask =
+ DAG.getSetCC(DL, VT, VectorStep, DiffSplat, ISD::CondCode::SETULT);
+
+ // Splat the compare result then OR it with a lane mask
+ SDValue Splat = DAG.getSplat(VT, DL, Cmp);
+ return DAG.getNode(ISD::OR, DL, VT, DiffMask, Splat);
+}
+
void VectorLegalizer::ExpandFP_TO_UINT(SDNode *Node,
SmallVectorImpl<SDValue> &Results) {
// Attempt to expand using TargetLowering.
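For a fixed <4 x i1> mask with a 4-byte element size and ``%writeAfterRead`` =
true, the generic expansion above computes roughly the IR equivalent of the
following (a sketch of the logic, not the actual DAG nodes):
::
      %diff.bytes = sub i64 %ptrB, %ptrA
      %diff = sdiv i64 %diff.bytes, 4
      %noalias = icmp sle i64 %diff, 0            ; SETLE: no hazard if diff <= 0
      %d.ins = insertelement <4 x i64> poison, i64 %diff, i64 0
      %d.splat = shufflevector <4 x i64> %d.ins, <4 x i64> poison, <4 x i32> zeroinitializer
      %lanes = icmp ult <4 x i64> <i64 0, i64 1, i64 2, i64 3>, %d.splat
      %n.ins = insertelement <4 x i1> poison, i1 %noalias, i64 0
      %n.splat = shufflevector <4 x i1> %n.ins, <4 x i1> poison, <4 x i32> zeroinitializer
      %mask = or <4 x i1> %lanes, %n.splat
With ``%writeAfterRead`` = false the difference is passed through an abs first
and the scalar compare becomes an equality test against zero.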
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index f942ab05149c23..6e791bcc70cff5 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8285,54 +8285,13 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
return;
}
case Intrinsic::experimental_get_alias_lane_mask: {
- SDValue SourceValue = getValue(I.getOperand(0));
- SDValue SinkValue = getValue(I.getOperand(1));
- SDValue EltSize = getValue(I.getOperand(2));
- bool IsWriteAfterRead =
- cast<ConstantSDNode>(getValue(I.getOperand(3)))->getZExtValue() != 0;
auto IntrinsicVT = EVT::getEVT(I.getType());
- auto PtrVT = SourceValue->getValueType(0);
-
- if (!TLI.shouldExpandGetAliasLaneMask(
- IntrinsicVT, PtrVT,
- cast<ConstantSDNode>(EltSize)->getSExtValue())) {
- visitTargetIntrinsic(I, Intrinsic);
- return;
- }
-
- SDValue Diff = DAG.getNode(ISD::SUB, sdl, PtrVT, SinkValue, SourceValue);
- if (!IsWriteAfterRead)
- Diff = DAG.getNode(ISD::ABS, sdl, PtrVT, Diff);
-
- Diff = DAG.getNode(ISD::SDIV, sdl, PtrVT, Diff, EltSize);
- SDValue Zero = DAG.getTargetConstant(0, sdl, PtrVT);
-
- // If the difference is positive then some elements may alias
- auto CmpVT =
- TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), PtrVT);
- SDValue Cmp = DAG.getSetCC(sdl, CmpVT, Diff, Zero,
- IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
-
- // Splat the compare result then OR it with a lane mask
- SDValue Splat = DAG.getSplat(IntrinsicVT, sdl, Cmp);
-
- SDValue DiffMask;
- // Don't emit an active lane mask if the target doesn't support it
- if (TLI.shouldExpandGetActiveLaneMask(IntrinsicVT, PtrVT)) {
- EVT VecTy = EVT::getVectorVT(*DAG.getContext(), PtrVT,
- IntrinsicVT.getVectorElementCount());
- SDValue DiffSplat = DAG.getSplat(VecTy, sdl, Diff);
- SDValue VectorStep = DAG.getStepVector(sdl, VecTy);
- DiffMask = DAG.getSetCC(sdl, IntrinsicVT, VectorStep, DiffSplat,
- ISD::CondCode::SETULT);
- } else {
- DiffMask = DAG.getNode(
- ISD::INTRINSIC_WO_CHAIN, sdl, IntrinsicVT,
- DAG.getTargetConstant(Intrinsic::get_active_lane_mask, sdl, MVT::i64),
- Zero, Diff);
- }
- SDValue Or = DAG.getNode(ISD::OR, sdl, IntrinsicVT, DiffMask, Splat);
- setValue(&I, Or);
+ SmallVector<SDValue, 4> Ops;
+ for (auto &Op : I.operands())
+ Ops.push_back(getValue(Op));
+ SDValue Mask =
+ DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, sdl, IntrinsicVT, Ops);
+ setValue(&I, Mask);
}
}
}
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index 580ff19065557b..eb7e26bc2ce38e 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -567,6 +567,9 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
case ISD::EXPERIMENTAL_VECTOR_HISTOGRAM:
return "histogram";
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ return "alias_mask";
+
// Vector Predication
#define BEGIN_REGISTER_VP_SDNODE(SDID, LEGALARG, NAME, ...) \
case ISD::SDID: \
diff --git a/llvm/lib/CodeGen/TargetLoweringBase.cpp b/llvm/lib/CodeGen/TargetLoweringBase.cpp
index 392cfbdd21273d..853c8f494f8cbb 100644
--- a/llvm/lib/CodeGen/TargetLoweringBase.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringBase.cpp
@@ -818,6 +818,7 @@ void TargetLoweringBase::initActions() {
setOperationAction(ISD::SDOPC, VT, Expand);
#include "llvm/IR/VPIntrinsics.def"
+ setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Expand);
// FP environment operations default to expand.
setOperationAction(ISD::GET_FPENV, VT, Expand);
setOperationAction(ISD::SET_FPENV, VT, Expand);
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 020635400eedae..357af0c96309b4 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1802,6 +1802,13 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
setOperationAction(ISD::INTRINSIC_WO_CHAIN, VT, Custom);
}
+ if (Subtarget->hasSVE2() || (Subtarget->hasSME() && Subtarget->isStreaming())) {
+ for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8, MVT::nxv2i1,
+ MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
+ setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Custom);
+ }
+ }
+
// Handle operations that are only available in non-streaming SVE mode.
if (Subtarget->isSVEAvailable()) {
for (auto VT : {MVT::nxv16i8, MVT::nxv8i16, MVT::nxv4i32, MVT::nxv2i64,
@@ -5149,6 +5156,59 @@ SDValue AArch64TargetLowering::LowerFSINCOS(SDValue Op,
static MVT getSVEContainerType(EVT ContentTy);
+SDValue AArch64TargetLowering::LowerALIAS_LANE_MASK(SDValue Op,
+ SelectionDAG &DAG) const {
+ SDLoc DL(Op);
+ unsigned IntrinsicID = 0;
+ uint64_t EltSize = Op.getOperand(2)->getAsZExtVal();
+ bool IsWriteAfterRead = Op.getOperand(3)->getAsZExtVal() == 1;
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr
+ : Intrinsic::aarch64_sve_whilerw;
+ EVT VT = Op.getValueType();
+ MVT SimpleVT = VT.getSimpleVT();
+ // Make sure that the promoted mask size and element size match
+ switch (EltSize) {
+ case 1:
+ assert((SimpleVT == MVT::v16i8 || SimpleVT == MVT::nxv16i1) &&
+ "Unexpected mask or element size");
+ break;
+ case 2:
+ assert((SimpleVT == MVT::v8i8 || SimpleVT == MVT::nxv8i1) &&
+ "Unexpected mask or element size");
+ break;
+ case 4:
+ assert((SimpleVT == MVT::v4i16 || SimpleVT == MVT::nxv4i1) &&
+ "Unexpected mask or element size");
+ break;
+ case 8:
+ assert((SimpleVT == MVT::v2i32 || SimpleVT == MVT::nxv2i1) &&
+ "Unexpected mask or element size");
+ break;
+ default:
+ llvm_unreachable("Unexpected element size for get.alias.lane.mask");
+ break;
+ }
+ SDValue ID = DAG.getTargetConstant(IntrinsicID, DL, MVT::i64);
+
+ if (VT.isScalableVector())
+ return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, VT, ID, Op.getOperand(0),
+ Op.getOperand(1));
+
+ // We can use the SVE whilewr/whilerw instruction to lower this
+ // intrinsic by creating the appropriate sequence of scalable vector
+ // operations and then extracting a fixed-width subvector from the scalable
+ // vector.
+
+ EVT ContainerVT = getContainerForFixedLengthVector(DAG, VT);
+ EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
+
+ SDValue Mask = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, WhileVT, ID,
+ Op.getOperand(0), Op.getOperand(1));
+ SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, DL, ContainerVT, Mask);
+ return DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, MaskAsInt,
+ DAG.getVectorIdxConstant(0, DL));
+}
+
SDValue AArch64TargetLowering::LowerBITCAST(SDValue Op,
SelectionDAG &DAG) const {
EVT OpVT = Op.getValueType();
@@ -5902,16 +5962,10 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
EVT PtrVT = getPointerTy(DAG.getDataLayout());
return DAG.getNode(AArch64ISD::THREAD_POINTER, dl, PtrVT);
}
- case Intrinsic::aarch64_sve_whilewr_b:
- case Intrinsic::aarch64_sve_whilewr_h:
- case Intrinsic::aarch64_sve_whilewr_s:
- case Intrinsic::aarch64_sve_whilewr_d:
+ case Intrinsic::aarch64_sve_whilewr:
return DAG.getNode(AArch64ISD::WHILEWR, dl, Op.getValueType(),
Op.getOperand(1), Op.getOperand(2));
- case Intrinsic::aarch64_sve_whilerw_b:
- case Intrinsic::aarch64_sve_whilerw_h:
- case Intrinsic::aarch64_sve_whilerw_s:
- case Intrinsic::aarch64_sve_whilerw_d:
+ case Intrinsic::aarch64_sve_whilerw:
return DAG.getNode(AArch64ISD::WHILERW, dl, Op.getValueType(),
Op.getOperand(1), Op.getOperand(2));
case Intrinsic::aarch64_neon_abs: {
@@ -6376,31 +6430,6 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
case Intrinsic::experimental_get_alias_lane_mask:
case Intrinsic::get_active_lane_mask: {
unsigned IntrinsicID = Intrinsic::aarch64_sve_whilelo;
- if (IntNo == Intrinsic::experimental_get_alias_lane_mask) {
- uint64_t EltSize = Op.getOperand(3)->getAsZExtVal();
- bool IsWriteAfterRead = Op.getOperand(4)->getAsZExtVal() == 1;
- switch (EltSize) {
- case 1:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
- : Intrinsic::aarch64_sve_whilerw_b;
- break;
- case 2:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
- : Intrinsic::aarch64_sve_whilerw_h;
- break;
- case 4:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
- : Intrinsic::aarch64_sve_whilerw_s;
- break;
- case 8:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
- : Intrinsic::aarch64_sve_whilerw_d;
- break;
- default:
- llvm_unreachable("Unexpected element size for get.alias.lane.mask");
- break;
- }
- }
SDValue ID = DAG.getTargetConstant(IntrinsicID, dl, MVT::i64);
EVT VT = Op.getValueType();
@@ -6408,7 +6437,7 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, dl, VT, ID, Op.getOperand(1),
Op.getOperand(2));
- // We can use the SVE whilelo/whilewr/whilerw instruction to lower this
+ // We can use the SVE whilelo instruction to lower this
// intrinsic by creating the appropriate sequence of scalable vector
// operations and then extracting a fixed-width subvector from the scalable
// vector.
@@ -7232,6 +7261,8 @@ SDValue AArch64TargetLowering::LowerOperation(SDValue Op,
default:
llvm_unreachable("unimplemented operand");
return SDValue();
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ return LowerALIAS_LANE_MASK(Op, DAG);
case ISD::BITCAST:
return LowerBITCAST(Op, DAG);
case ISD::GlobalAddress:
@@ -19538,7 +19569,8 @@ static SDValue getPTest(SelectionDAG &DAG, EVT VT, SDValue Pg, SDValue Op,
AArch64CC::CondCode Cond);
static bool isPredicateCCSettingOp(SDValue N) {
- if ((N.getOpcode() == ISD::SETCC) ||
+ if ((N.getOpcode() == ISD::SETCC ||
+ N.getOpcode() == ISD::EXPERIMENTAL_ALIAS_LANE_MASK) ||
(N.getOpcode() == ISD::INTRINSIC_WO_CHAIN &&
(N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilege ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilegt ||
@@ -19549,10 +19581,7 @@ static bool isPredicateCCSettingOp(SDValue N) {
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilels ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilelt ||
// get_active_lane_mask is lowered to a whilelo instruction.
- N.getConstantOperandVal(0) == Intrinsic::get_active_lane_mask ||
- // get_alias_lane_mask is lowered to a whilewr/rw instruction.
- N.getConstantOperandVal(0) ==
- Intrinsic::experimental_get_alias_lane_mask)))
+ N.getConstantOperandVal(0) == Intrinsic::get_active_lane_mask)))
return true;
return false;
@@ -27071,6 +27100,22 @@ void AArch64TargetLowering::ReplaceNodeResults(
// CONCAT_VECTORS -- but delegate to common code for result type
// legalisation
return;
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK: {
+ EVT VT = N->getValueType(0);
+ if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
+ return;
+
+ // NOTE: Only trivial type promotion is supported.
+ EVT NewVT = getTypeToTransformTo(*DAG.getContext(), VT);
+ if (NewVT.getVectorNumElements() != VT.getVectorNumElements())
+ return;
+
+ SDLoc DL(N);
+ auto V =
+ DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, DL, NewVT, N->ops());
+ Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, VT, V));
+ return;
+ }
case ISD::INTRINSIC_WO_CHAIN: {
EVT VT = N->getValueType(0);
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index 41f56349215903..9812f85aff379f 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -1201,6 +1201,7 @@ class AArch64TargetLowering : public TargetLowering {
SDValue LowerXOR(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerCONCAT_VECTORS(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerFSINCOS(SDValue Op, SelectionDAG &DAG) const;
+ SDValue LowerALIAS_LANE_MASK(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerBITCAST(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerVSCALE(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerTRUNCATE(SDValue Op, SelectionDAG &DAG) const;
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 84a22822f1702b..9b344f03da077b 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -48,10 +48,12 @@ define <16 x i1> @whilewr_8(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
ret <16 x i1> %0
}
@@ -88,6 +90,8 @@ define <8 x i1> @whilewr_16(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
; CHECK-NOSVE-NEXT: dup v1.8b, w8
; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: shl v0.8b, v0.8b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.8b, v0.8b, #0
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
@@ -125,7 +129,7 @@ define <4 x i1> @whilewr_32(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
ret <4 x i1> %0
}
@@ -155,7 +159,7 @@ define <2 x i1> @whilewr_64(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
ret <2 x i1> %0
}
@@ -206,10 +210,12 @@ define <16 x i1> @whilerw_8(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
ret <16 x i1> %0
}
@@ -247,6 +253,8 @@ define <8 x i1> @whilerw_16(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
; CHECK-NOSVE-NEXT: dup v1.8b, w8
; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: shl v0.8b, v0.8b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.8b, v0.8b, #0
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
@@ -285,7 +293,7 @@ define <4 x i1> @whilerw_32(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
ret <4 x i1> %0
}
@@ -316,106 +324,6 @@ define <2 x i1> @whilerw_64(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
- ret <2 x i1> %0
-}
-
-define <16 x i1> @not_whilewr_wrong_eltsize(i64 %a, i64 %b) {
-; CHECK-SVE-LABEL: not_whilewr_wrong_eltsize:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE-NEXT: asr x8, x8, #1
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
-; CHECK-SVE-NEXT: dup v0.16b, w9
-; CHECK-SVE-NEXT: mov z1.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: orr v0.16b, v1.16b, v0.16b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: not_whilewr_wrong_eltsize:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_1
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI8_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_2
-; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI8_2]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_4
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI8_1]
-; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_3
-; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI8_4]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_6
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI8_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_5
-; CHECK-NOSVE-NEXT: dup v4.2d, x8
-; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI8_6]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_7
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI8_5]
-; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI8_7]
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
-entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
- ret <16 x i1> %0
-}
-
-define <2 x i1> @not_whilerw_ptr32(i32 %a, i32 %b) {
-; CHECK-SVE-LABEL: not_whilerw_ptr32:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: subs w8, w1, w0
-; CHECK-SVE-NEXT: cneg w8, w8, mi
-; CHECK-SVE-NEXT: add w9, w8, #7
-; CHECK-SVE-NEXT: cmp w8, #0
-; CHECK-SVE-NEXT: csel w8, w9, w8, lt
-; CHECK-SVE-NEXT: asr w8, w8, #3
-; CHECK-SVE-NEXT: cmp w8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p0.s, #0, w8
-; CHECK-SVE-NEXT: dup v0.2s, w9
-; CHECK-SVE-NEXT: mov z1.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: orr v0.8b, v1.8b, v0.8b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: not_whilerw_ptr32:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs w9, w1, w0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI9_0
-; CHECK-NOSVE-NEXT: cneg w9, w9, mi
-; CHECK-NOSVE-NEXT: ldr d1, [x8, :lo12:.LCPI9_0]
-; CHECK-NOSVE-NEXT: add w10, w9, #7
-; CHECK-NOSVE-NEXT: cmp w9, #0
-; CHECK-NOSVE-NEXT: csel w9, w10, w9, lt
-; CHECK-NOSVE-NEXT: asr w9, w9, #3
-; CHECK-NOSVE-NEXT: dup v0.2s, w9
-; CHECK-NOSVE-NEXT: cmp w9, #0
-; CHECK-NOSVE-NEXT: cset w8, eq
-; CHECK-NOSVE-NEXT: dup v2.2s, w8
-; CHECK-NOSVE-NEXT: cmhi v0.2s, v0.2s, v1.2s
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v2.8b
-; CHECK-NOSVE-NEXT: ret
-entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i32.i32(i32 %a, i32 %b, i32 8, i1 0)
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
ret <2 x i1> %0
}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index be5ec8b2a82bf2..a7c9c5e3cdd33b 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -10,16 +10,57 @@ define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilewr_8:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z0.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #4
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p2.s, p5.s, p6.s
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p4.s
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
ret <vscale x 16 x i1> %0
}
@@ -31,13 +72,28 @@ define <vscale x 8 x i1> @whilewr_16(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilewr_16:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
@@ -54,20 +110,27 @@ define <vscale x 4 x i1> @whilewr_32(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilewr_32:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #3
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
+; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
ret <vscale x 4 x i1> %0
}
@@ -80,19 +143,22 @@ define <vscale x 2 x i1> @whilewr_64(i64 %a, i64 %b) {
; CHECK-SVE-LABEL: whilewr_64:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #7
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z1.d, x8
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
ret <vscale x 2 x i1> %0
}
@@ -104,17 +170,60 @@ define <vscale x 16 x i1> @whilerw_8(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilerw_8:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: mov z1.d, x8
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z1.d, z0.d
+; CHECK-SVE-NEXT: incd z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE-NEXT: mov z3.d, z2.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z1.d, z2.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z1.d, z4.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z1.d, z5.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z1.d, z2.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z1.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE-NEXT: mov z0.d, z3.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z1.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
ret <vscale x 16 x i1> %0
}
@@ -126,14 +235,29 @@ define <vscale x 8 x i1> @whilerw_16(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilerw_16:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: mov z3.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
@@ -150,21 +274,28 @@ define <vscale x 4 x i1> @whilerw_32(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilerw_32:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
; CHECK-SVE-NEXT: add x9, x8, #3
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
+; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
ret <vscale x 4 x i1> %0
}
@@ -177,19 +308,22 @@ define <vscale x 2 x i1> @whilerw_64(i64 %a, i64 %b) {
; CHECK-SVE-LABEL: whilerw_64:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
; CHECK-SVE-NEXT: add x9, x8, #7
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z1.d, x8
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
ret <vscale x 2 x i1> %0
}
>From b00864f85224cda88bd129a8648a7deb7d003fb9 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Tue, 14 Jan 2025 16:16:50 +0000
Subject: [PATCH 11/15] Update whilewr/rw intrinsic name in aarch64 tests
---
clang/include/clang/Basic/arm_sve.td | 20 ++++-----
.../acle_sve2_whilerw-bfloat.c | 4 +-
.../sve2-intrinsics/acle_sve2_whilerw.c | 44 +++++++++----------
.../acle_sve2_whilewr-bfloat.c | 4 +-
.../sve2-intrinsics/acle_sve2_whilewr.c | 44 +++++++++----------
5 files changed, 58 insertions(+), 58 deletions(-)
diff --git a/clang/include/clang/Basic/arm_sve.td b/clang/include/clang/Basic/arm_sve.td
index d492fae4145b92..9efb9db9ddc336 100644
--- a/clang/include/clang/Basic/arm_sve.td
+++ b/clang/include/clang/Basic/arm_sve.td
@@ -1915,20 +1915,20 @@ def SVNMATCH : SInst<"svnmatch[_{d}]", "PPdd", "csUcUs", MergeNone, "aarch64_sve
////////////////////////////////////////////////////////////////////////////////
// SVE2 - Contiguous conflict detection
let SVETargetGuard = "sve2", SMETargetGuard = "sme" in {
-def SVWHILERW_B : SInst<"svwhilerw[_{1}]", "Pcc", "cUc", MergeNone, "aarch64_sve_whilerw_b", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILERW_H : SInst<"svwhilerw[_{1}]", "Pcc", "sUsh", MergeNone, "aarch64_sve_whilerw_h", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILERW_S : SInst<"svwhilerw[_{1}]", "Pcc", "iUif", MergeNone, "aarch64_sve_whilerw_s", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILERW_D : SInst<"svwhilerw[_{1}]", "Pcc", "lUld", MergeNone, "aarch64_sve_whilerw_d", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILERW_B : SInst<"svwhilerw[_{1}]", "Pcc", "cUc", MergeNone, "aarch64_sve_whilerw", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILERW_H : SInst<"svwhilerw[_{1}]", "Pcc", "sUsh", MergeNone, "aarch64_sve_whilerw", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILERW_S : SInst<"svwhilerw[_{1}]", "Pcc", "iUif", MergeNone, "aarch64_sve_whilerw", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILERW_D : SInst<"svwhilerw[_{1}]", "Pcc", "lUld", MergeNone, "aarch64_sve_whilerw", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILEWR_B : SInst<"svwhilewr[_{1}]", "Pcc", "cUc", MergeNone, "aarch64_sve_whilewr_b", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILEWR_H : SInst<"svwhilewr[_{1}]", "Pcc", "sUsh", MergeNone, "aarch64_sve_whilewr_h", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILEWR_S : SInst<"svwhilewr[_{1}]", "Pcc", "iUif", MergeNone, "aarch64_sve_whilewr_s", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILEWR_D : SInst<"svwhilewr[_{1}]", "Pcc", "lUld", MergeNone, "aarch64_sve_whilewr_d", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILEWR_B : SInst<"svwhilewr[_{1}]", "Pcc", "cUc", MergeNone, "aarch64_sve_whilewr", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILEWR_H : SInst<"svwhilewr[_{1}]", "Pcc", "sUsh", MergeNone, "aarch64_sve_whilewr", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILEWR_S : SInst<"svwhilewr[_{1}]", "Pcc", "iUif", MergeNone, "aarch64_sve_whilewr", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILEWR_D : SInst<"svwhilewr[_{1}]", "Pcc", "lUld", MergeNone, "aarch64_sve_whilewr", [IsOverloadWhileRW, VerifyRuntimeMode]>;
}
let SVETargetGuard = "sve2,bf16", SMETargetGuard = "sme,bf16" in {
-def SVWHILERW_H_BF16 : SInst<"svwhilerw[_{1}]", "Pcc", "b", MergeNone, "aarch64_sve_whilerw_h", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILEWR_H_BF16 : SInst<"svwhilewr[_{1}]", "Pcc", "b", MergeNone, "aarch64_sve_whilewr_h", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILERW_H_BF16 : SInst<"svwhilerw[_{1}]", "Pcc", "b", MergeNone, "aarch64_sve_whilerw", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILEWR_H_BF16 : SInst<"svwhilewr[_{1}]", "Pcc", "b", MergeNone, "aarch64_sve_whilewr", [IsOverloadWhileRW, VerifyRuntimeMode]>;
}
////////////////////////////////////////////////////////////////////////////////
diff --git a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw-bfloat.c b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw-bfloat.c
index 95b0f53abdce0f..de6bad440fa55f 100644
--- a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw-bfloat.c
+++ b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw-bfloat.c
@@ -17,13 +17,13 @@
// CHECK-LABEL: @test_svwhilerw_bf16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z19test_svwhilerw_bf16PKu6__bf16S0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
diff --git a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw.c b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw.c
index 13f1984db94cc8..6118a807a72b2f 100644
--- a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw.c
+++ b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw.c
@@ -17,12 +17,12 @@
// CHECK-LABEL: @test_svwhilerw_s8(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
// CPP-CHECK-LABEL: @_Z17test_svwhilerw_s8PKaS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
svbool_t test_svwhilerw_s8(const int8_t *op1, const int8_t *op2)
@@ -32,13 +32,13 @@ svbool_t test_svwhilerw_s8(const int8_t *op1, const int8_t *op2)
// CHECK-LABEL: @test_svwhilerw_s16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_s16PKsS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -49,13 +49,13 @@ svbool_t test_svwhilerw_s16(const int16_t *op1, const int16_t *op2)
// CHECK-LABEL: @test_svwhilerw_s32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_s32PKiS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -66,13 +66,13 @@ svbool_t test_svwhilerw_s32(const int32_t *op1, const int32_t *op2)
// CHECK-LABEL: @test_svwhilerw_s64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_s64PKlS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -83,12 +83,12 @@ svbool_t test_svwhilerw_s64(const int64_t *op1, const int64_t *op2)
// CHECK-LABEL: @test_svwhilerw_u8(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
// CPP-CHECK-LABEL: @_Z17test_svwhilerw_u8PKhS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
svbool_t test_svwhilerw_u8(const uint8_t *op1, const uint8_t *op2)
@@ -98,13 +98,13 @@ svbool_t test_svwhilerw_u8(const uint8_t *op1, const uint8_t *op2)
// CHECK-LABEL: @test_svwhilerw_u16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_u16PKtS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -115,13 +115,13 @@ svbool_t test_svwhilerw_u16(const uint16_t *op1, const uint16_t *op2)
// CHECK-LABEL: @test_svwhilerw_u32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_u32PKjS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -132,13 +132,13 @@ svbool_t test_svwhilerw_u32(const uint32_t *op1, const uint32_t *op2)
// CHECK-LABEL: @test_svwhilerw_u64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_u64PKmS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -149,13 +149,13 @@ svbool_t test_svwhilerw_u64(const uint64_t *op1, const uint64_t *op2)
// CHECK-LABEL: @test_svwhilerw_f16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_f16PKDhS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -166,13 +166,13 @@ svbool_t test_svwhilerw_f16(const float16_t *op1, const float16_t *op2)
// CHECK-LABEL: @test_svwhilerw_f32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_f32PKfS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -183,13 +183,13 @@ svbool_t test_svwhilerw_f32(const float32_t *op1, const float32_t *op2)
// CHECK-LABEL: @test_svwhilerw_f64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_f64PKdS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
diff --git a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr-bfloat.c b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr-bfloat.c
index 647f2aef98d812..2948de3adee3fd 100644
--- a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr-bfloat.c
+++ b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr-bfloat.c
@@ -17,13 +17,13 @@
// CHECK-LABEL: @test_svwhilewr_bf16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z19test_svwhilewr_bf16PKu6__bf16S0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
diff --git a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr.c b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr.c
index fddede6a4dc090..20d73e95cb66b1 100644
--- a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr.c
+++ b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr.c
@@ -17,12 +17,12 @@
// CHECK-LABEL: @test_svwhilewr_s8(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
// CPP-CHECK-LABEL: @_Z17test_svwhilewr_s8PKaS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
svbool_t test_svwhilewr_s8(const int8_t *op1, const int8_t *op2)
@@ -32,13 +32,13 @@ svbool_t test_svwhilewr_s8(const int8_t *op1, const int8_t *op2)
// CHECK-LABEL: @test_svwhilewr_s16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_s16PKsS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -49,13 +49,13 @@ svbool_t test_svwhilewr_s16(const int16_t *op1, const int16_t *op2)
// CHECK-LABEL: @test_svwhilewr_s32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_s32PKiS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -66,13 +66,13 @@ svbool_t test_svwhilewr_s32(const int32_t *op1, const int32_t *op2)
// CHECK-LABEL: @test_svwhilewr_s64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_s64PKlS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -83,12 +83,12 @@ svbool_t test_svwhilewr_s64(const int64_t *op1, const int64_t *op2)
// CHECK-LABEL: @test_svwhilewr_u8(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
// CPP-CHECK-LABEL: @_Z17test_svwhilewr_u8PKhS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
svbool_t test_svwhilewr_u8(const uint8_t *op1, const uint8_t *op2)
@@ -98,13 +98,13 @@ svbool_t test_svwhilewr_u8(const uint8_t *op1, const uint8_t *op2)
// CHECK-LABEL: @test_svwhilewr_u16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_u16PKtS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -115,13 +115,13 @@ svbool_t test_svwhilewr_u16(const uint16_t *op1, const uint16_t *op2)
// CHECK-LABEL: @test_svwhilewr_u32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_u32PKjS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -132,13 +132,13 @@ svbool_t test_svwhilewr_u32(const uint32_t *op1, const uint32_t *op2)
// CHECK-LABEL: @test_svwhilewr_u64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_u64PKmS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -149,13 +149,13 @@ svbool_t test_svwhilewr_u64(const uint64_t *op1, const uint64_t *op2)
// CHECK-LABEL: @test_svwhilewr_f16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_f16PKDhS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -166,13 +166,13 @@ svbool_t test_svwhilewr_f16(const float16_t *op1, const float16_t *op2)
// CHECK-LABEL: @test_svwhilewr_f32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_f32PKfS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -183,13 +183,13 @@ svbool_t test_svwhilewr_f32(const float32_t *op1, const float32_t *op2)
// CHECK-LABEL: @test_svwhilewr_f64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_f64PKdS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
>From 0315f4f74ae13332be05c687103e215385e0e3d2 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 15 Jan 2025 11:27:56 +0000
Subject: [PATCH 12/15] Format
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 357af0c96309b4..ffe88c518897b1 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1802,7 +1802,8 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
setOperationAction(ISD::INTRINSIC_WO_CHAIN, VT, Custom);
}
- if (Subtarget->hasSVE2() || (Subtarget->hasSME() && Subtarget->isStreaming())) {
+ if (Subtarget->hasSVE2() ||
+ (Subtarget->hasSME() && Subtarget->isStreaming())) {
for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8, MVT::nxv2i1,
MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Custom);
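As a reference for the mask types listed above, a minimal IR-level use of the
new intrinsic might look like the following (a sketch only, not taken from the
patch; %pA and %pB are the two pointers, element size 1 byte, write-after-read
hazard):

  %a = ptrtoint ptr %pA to i64
  %b = ptrtoint ptr %pB to i64
  %alias.mask = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64(i64 %a, i64 %b, i32 1, i1 1)
  %val = call <16 x i8> @llvm.masked.load.v16i8.p0(ptr %pA, i32 1, <16 x i1> %alias.mask, <16 x i8> poison)

Only the lanes that cannot alias between the two pointers within one vector
iteration are enabled in %alias.mask, so the masked load stays safe without a
separate runtime memory check.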
>From ef887cf7e1dbda13a319dfeba18240f63df85d7f Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 15 Jan 2025 16:16:31 +0000
Subject: [PATCH 13/15] Fix ISD node name string and remove shouldExpand
function
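With the string fixed, a node produced for the intrinsic, e.g. (illustrative
IR only):

  %m = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64(i64 %a, i64 %b, i32 1, i1 1)

should now print as "alias_lane_mask" rather than "alias_mask" in SelectionDAG
debug output, matching the ISD::EXPERIMENTAL_ALIAS_LANE_MASK opcode name.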
---
.../SelectionDAG/SelectionDAGDumper.cpp | 2 +-
.../Target/AArch64/AArch64ISelLowering.cpp | 19 -------------------
2 files changed, 1 insertion(+), 20 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index eb7e26bc2ce38e..dd90f260b86754 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -568,7 +568,7 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
return "histogram";
case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- return "alias_mask";
+ return "alias_lane_mask";
// Vector Predication
#define BEGIN_REGISTER_VP_SDNODE(SDID, LEGALARG, NAME, ...) \
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index ffe88c518897b1..a4d32679128dae 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -2041,25 +2041,6 @@ bool AArch64TargetLowering::shouldExpandGetActiveLaneMask(EVT ResVT,
return false;
}
-bool AArch64TargetLowering::shouldExpandGetAliasLaneMask(
- EVT VT, EVT PtrVT, unsigned EltSize) const {
- if (!Subtarget->hasSVE2())
- return true;
-
- if (PtrVT != MVT::i64)
- return true;
-
- if (VT == MVT::v2i1 || VT == MVT::nxv2i1)
- return EltSize != 8;
- if (VT == MVT::v4i1 || VT == MVT::nxv4i1)
- return EltSize != 4;
- if (VT == MVT::v8i1 || VT == MVT::nxv8i1)
- return EltSize != 2;
- if (VT == MVT::v16i1 || VT == MVT::nxv16i1)
- return EltSize != 1;
- return true;
-}
-
bool AArch64TargetLowering::shouldExpandPartialReductionIntrinsic(
const IntrinsicInst *I) const {
if (I->getIntrinsicID() != Intrinsic::experimental_vector_partial_reduce_add)
>From 3dd4702ca85cb44c09f59a1fc15b07e1c2f5b97a Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 15 Jan 2025 16:27:57 +0000
Subject: [PATCH 14/15] Remove intrinsic changes
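This restores the per-element-size aarch64_sve_whilerw_{b,h,s,d} and
aarch64_sve_whilewr_{b,h,s,d} intrinsic names, so the Clang builtins and the
tests below go back to the suffixed form, e.g. for the s8 case:

  %0 = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.b.nxv16i1.p0(ptr %op1, ptr %op2)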
---
clang/include/clang/Basic/arm_sve.td | 20 ++++-----
.../acle_sve2_whilerw-bfloat.c | 4 +-
.../sve2-intrinsics/acle_sve2_whilerw.c | 44 +++++++++----------
.../acle_sve2_whilewr-bfloat.c | 4 +-
.../sve2-intrinsics/acle_sve2_whilewr.c | 44 +++++++++----------
llvm/include/llvm/IR/IntrinsicsAArch64.td | 10 ++++-
.../Target/AArch64/AArch64ISelLowering.cpp | 16 +++++--
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 3 --
8 files changed, 78 insertions(+), 67 deletions(-)
diff --git a/clang/include/clang/Basic/arm_sve.td b/clang/include/clang/Basic/arm_sve.td
index 9efb9db9ddc336..d492fae4145b92 100644
--- a/clang/include/clang/Basic/arm_sve.td
+++ b/clang/include/clang/Basic/arm_sve.td
@@ -1915,20 +1915,20 @@ def SVNMATCH : SInst<"svnmatch[_{d}]", "PPdd", "csUcUs", MergeNone, "aarch64_sve
////////////////////////////////////////////////////////////////////////////////
// SVE2 - Contiguous conflict detection
let SVETargetGuard = "sve2", SMETargetGuard = "sme" in {
-def SVWHILERW_B : SInst<"svwhilerw[_{1}]", "Pcc", "cUc", MergeNone, "aarch64_sve_whilerw", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILERW_H : SInst<"svwhilerw[_{1}]", "Pcc", "sUsh", MergeNone, "aarch64_sve_whilerw", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILERW_S : SInst<"svwhilerw[_{1}]", "Pcc", "iUif", MergeNone, "aarch64_sve_whilerw", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILERW_D : SInst<"svwhilerw[_{1}]", "Pcc", "lUld", MergeNone, "aarch64_sve_whilerw", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILERW_B : SInst<"svwhilerw[_{1}]", "Pcc", "cUc", MergeNone, "aarch64_sve_whilerw_b", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILERW_H : SInst<"svwhilerw[_{1}]", "Pcc", "sUsh", MergeNone, "aarch64_sve_whilerw_h", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILERW_S : SInst<"svwhilerw[_{1}]", "Pcc", "iUif", MergeNone, "aarch64_sve_whilerw_s", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILERW_D : SInst<"svwhilerw[_{1}]", "Pcc", "lUld", MergeNone, "aarch64_sve_whilerw_d", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILEWR_B : SInst<"svwhilewr[_{1}]", "Pcc", "cUc", MergeNone, "aarch64_sve_whilewr", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILEWR_H : SInst<"svwhilewr[_{1}]", "Pcc", "sUsh", MergeNone, "aarch64_sve_whilewr", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILEWR_S : SInst<"svwhilewr[_{1}]", "Pcc", "iUif", MergeNone, "aarch64_sve_whilewr", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILEWR_D : SInst<"svwhilewr[_{1}]", "Pcc", "lUld", MergeNone, "aarch64_sve_whilewr", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILEWR_B : SInst<"svwhilewr[_{1}]", "Pcc", "cUc", MergeNone, "aarch64_sve_whilewr_b", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILEWR_H : SInst<"svwhilewr[_{1}]", "Pcc", "sUsh", MergeNone, "aarch64_sve_whilewr_h", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILEWR_S : SInst<"svwhilewr[_{1}]", "Pcc", "iUif", MergeNone, "aarch64_sve_whilewr_s", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILEWR_D : SInst<"svwhilewr[_{1}]", "Pcc", "lUld", MergeNone, "aarch64_sve_whilewr_d", [IsOverloadWhileRW, VerifyRuntimeMode]>;
}
let SVETargetGuard = "sve2,bf16", SMETargetGuard = "sme,bf16" in {
-def SVWHILERW_H_BF16 : SInst<"svwhilerw[_{1}]", "Pcc", "b", MergeNone, "aarch64_sve_whilerw", [IsOverloadWhileRW, VerifyRuntimeMode]>;
-def SVWHILEWR_H_BF16 : SInst<"svwhilewr[_{1}]", "Pcc", "b", MergeNone, "aarch64_sve_whilewr", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILERW_H_BF16 : SInst<"svwhilerw[_{1}]", "Pcc", "b", MergeNone, "aarch64_sve_whilerw_h", [IsOverloadWhileRW, VerifyRuntimeMode]>;
+def SVWHILEWR_H_BF16 : SInst<"svwhilewr[_{1}]", "Pcc", "b", MergeNone, "aarch64_sve_whilewr_h", [IsOverloadWhileRW, VerifyRuntimeMode]>;
}
////////////////////////////////////////////////////////////////////////////////
diff --git a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw-bfloat.c b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw-bfloat.c
index de6bad440fa55f..95b0f53abdce0f 100644
--- a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw-bfloat.c
+++ b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw-bfloat.c
@@ -17,13 +17,13 @@
// CHECK-LABEL: @test_svwhilerw_bf16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z19test_svwhilerw_bf16PKu6__bf16S0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
diff --git a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw.c b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw.c
index 6118a807a72b2f..13f1984db94cc8 100644
--- a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw.c
+++ b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilerw.c
@@ -17,12 +17,12 @@
// CHECK-LABEL: @test_svwhilerw_s8(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
// CPP-CHECK-LABEL: @_Z17test_svwhilerw_s8PKaS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
svbool_t test_svwhilerw_s8(const int8_t *op1, const int8_t *op2)
@@ -32,13 +32,13 @@ svbool_t test_svwhilerw_s8(const int8_t *op1, const int8_t *op2)
// CHECK-LABEL: @test_svwhilerw_s16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_s16PKsS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -49,13 +49,13 @@ svbool_t test_svwhilerw_s16(const int16_t *op1, const int16_t *op2)
// CHECK-LABEL: @test_svwhilerw_s32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_s32PKiS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -66,13 +66,13 @@ svbool_t test_svwhilerw_s32(const int32_t *op1, const int32_t *op2)
// CHECK-LABEL: @test_svwhilerw_s64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_s64PKlS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -83,12 +83,12 @@ svbool_t test_svwhilerw_s64(const int64_t *op1, const int64_t *op2)
// CHECK-LABEL: @test_svwhilerw_u8(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
// CPP-CHECK-LABEL: @_Z17test_svwhilerw_u8PKhS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
svbool_t test_svwhilerw_u8(const uint8_t *op1, const uint8_t *op2)
@@ -98,13 +98,13 @@ svbool_t test_svwhilerw_u8(const uint8_t *op1, const uint8_t *op2)
// CHECK-LABEL: @test_svwhilerw_u16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_u16PKtS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -115,13 +115,13 @@ svbool_t test_svwhilerw_u16(const uint16_t *op1, const uint16_t *op2)
// CHECK-LABEL: @test_svwhilerw_u32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_u32PKjS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -132,13 +132,13 @@ svbool_t test_svwhilerw_u32(const uint32_t *op1, const uint32_t *op2)
// CHECK-LABEL: @test_svwhilerw_u64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_u64PKmS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -149,13 +149,13 @@ svbool_t test_svwhilerw_u64(const uint64_t *op1, const uint64_t *op2)
// CHECK-LABEL: @test_svwhilerw_f16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_f16PKDhS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilerw.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -166,13 +166,13 @@ svbool_t test_svwhilerw_f16(const float16_t *op1, const float16_t *op2)
// CHECK-LABEL: @test_svwhilerw_f32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_f32PKfS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilerw.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -183,13 +183,13 @@ svbool_t test_svwhilerw_f32(const float32_t *op1, const float32_t *op2)
// CHECK-LABEL: @test_svwhilerw_f64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilerw_f64PKdS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilerw.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
diff --git a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr-bfloat.c b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr-bfloat.c
index 2948de3adee3fd..647f2aef98d812 100644
--- a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr-bfloat.c
+++ b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr-bfloat.c
@@ -17,13 +17,13 @@
// CHECK-LABEL: @test_svwhilewr_bf16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z19test_svwhilewr_bf16PKu6__bf16S0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
diff --git a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr.c b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr.c
index 20d73e95cb66b1..fddede6a4dc090 100644
--- a/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr.c
+++ b/clang/test/CodeGen/AArch64/sve2-intrinsics/acle_sve2_whilewr.c
@@ -17,12 +17,12 @@
// CHECK-LABEL: @test_svwhilewr_s8(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
// CPP-CHECK-LABEL: @_Z17test_svwhilewr_s8PKaS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
svbool_t test_svwhilewr_s8(const int8_t *op1, const int8_t *op2)
@@ -32,13 +32,13 @@ svbool_t test_svwhilewr_s8(const int8_t *op1, const int8_t *op2)
// CHECK-LABEL: @test_svwhilewr_s16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_s16PKsS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -49,13 +49,13 @@ svbool_t test_svwhilewr_s16(const int16_t *op1, const int16_t *op2)
// CHECK-LABEL: @test_svwhilewr_s32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_s32PKiS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -66,13 +66,13 @@ svbool_t test_svwhilewr_s32(const int32_t *op1, const int32_t *op2)
// CHECK-LABEL: @test_svwhilewr_s64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_s64PKlS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -83,12 +83,12 @@ svbool_t test_svwhilewr_s64(const int64_t *op1, const int64_t *op2)
// CHECK-LABEL: @test_svwhilewr_u8(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
// CPP-CHECK-LABEL: @_Z17test_svwhilewr_u8PKhS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.b.nxv16i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP0]]
//
svbool_t test_svwhilewr_u8(const uint8_t *op1, const uint8_t *op2)
@@ -98,13 +98,13 @@ svbool_t test_svwhilewr_u8(const uint8_t *op1, const uint8_t *op2)
// CHECK-LABEL: @test_svwhilewr_u16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_u16PKtS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -115,13 +115,13 @@ svbool_t test_svwhilewr_u16(const uint16_t *op1, const uint16_t *op2)
// CHECK-LABEL: @test_svwhilewr_u32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_u32PKjS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -132,13 +132,13 @@ svbool_t test_svwhilewr_u32(const uint32_t *op1, const uint32_t *op2)
// CHECK-LABEL: @test_svwhilewr_u64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_u64PKmS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -149,13 +149,13 @@ svbool_t test_svwhilewr_u64(const uint64_t *op1, const uint64_t *op2)
// CHECK-LABEL: @test_svwhilewr_f16(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_f16PKDhS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv8i1(<vscale x 8 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -166,13 +166,13 @@ svbool_t test_svwhilewr_f16(const float16_t *op1, const float16_t *op2)
// CHECK-LABEL: @test_svwhilewr_f32(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_f32PKfS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv4i1(<vscale x 4 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
@@ -183,13 +183,13 @@ svbool_t test_svwhilewr_f32(const float32_t *op1, const float32_t *op2)
// CHECK-LABEL: @test_svwhilewr_f64(
// CHECK-NEXT: entry:
-// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
// CPP-CHECK-LABEL: @_Z18test_svwhilewr_f64PKdS0_(
// CPP-CHECK-NEXT: entry:
-// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
+// CPP-CHECK-NEXT: [[TMP0:%.*]] = tail call <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr [[OP1:%.*]], ptr [[OP2:%.*]])
// CPP-CHECK-NEXT: [[TMP1:%.*]] = tail call <vscale x 16 x i1> @llvm.aarch64.sve.convert.to.svbool.nxv2i1(<vscale x 2 x i1> [[TMP0]])
// CPP-CHECK-NEXT: ret <vscale x 16 x i1> [[TMP1]]
//
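The CHECK-line updates above boil down to a rename: each whilewr builtin now lowers to an element-size-suffixed intrinsic. A minimal IR sketch of the new spellings, with the intrinsic names and signatures taken from the CHECK lines and the function name and pointer arguments purely illustrative:

    ; One whilewr variant per element size, each producing the matching
    ; scalable predicate type.
    declare <vscale x 16 x i1> @llvm.aarch64.sve.whilewr.b.nxv16i1.p0(ptr, ptr)
    declare <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr, ptr)
    declare <vscale x 4 x i1> @llvm.aarch64.sve.whilewr.s.nxv4i1.p0(ptr, ptr)
    declare <vscale x 2 x i1> @llvm.aarch64.sve.whilewr.d.nxv2i1.p0(ptr, ptr)

    define <vscale x 8 x i1> @whilewr_h_example(ptr %a, ptr %b) {
      ; Compute the write-after-read conflict mask for 16-bit elements.
      %mask = call <vscale x 8 x i1> @llvm.aarch64.sve.whilewr.h.nxv8i1.p0(ptr %a, ptr %b)
      ret <vscale x 8 x i1> %mask
    }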
diff --git a/llvm/include/llvm/IR/IntrinsicsAArch64.td b/llvm/include/llvm/IR/IntrinsicsAArch64.td
index 04ce5271be795a..6a09a8647096f9 100644
--- a/llvm/include/llvm/IR/IntrinsicsAArch64.td
+++ b/llvm/include/llvm/IR/IntrinsicsAArch64.td
@@ -2862,8 +2862,14 @@ def int_aarch64_sve_stnt1_pn_x4 : SVE2p1_Store_PN_X4_Intrinsic;
// SVE2 - Contiguous conflict detection
//
-def int_aarch64_sve_whilerw : SVE2_CONFLICT_DETECT_Intrinsic;
-def int_aarch64_sve_whilewr : SVE2_CONFLICT_DETECT_Intrinsic;
+def int_aarch64_sve_whilerw_b : SVE2_CONFLICT_DETECT_Intrinsic;
+def int_aarch64_sve_whilerw_h : SVE2_CONFLICT_DETECT_Intrinsic;
+def int_aarch64_sve_whilerw_s : SVE2_CONFLICT_DETECT_Intrinsic;
+def int_aarch64_sve_whilerw_d : SVE2_CONFLICT_DETECT_Intrinsic;
+def int_aarch64_sve_whilewr_b : SVE2_CONFLICT_DETECT_Intrinsic;
+def int_aarch64_sve_whilewr_h : SVE2_CONFLICT_DETECT_Intrinsic;
+def int_aarch64_sve_whilewr_s : SVE2_CONFLICT_DETECT_Intrinsic;
+def int_aarch64_sve_whilewr_d : SVE2_CONFLICT_DETECT_Intrinsic;
// Scalable Matrix Extension (SME) Intrinsics
let TargetPrefix = "aarch64" in {
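The whilerw definitions added here follow the same per-element-size split as whilewr. By analogy with the whilewr CHECK lines earlier, the read-after-write variants would be spelled as below; this exact overload suffix is an extrapolation rather than something shown verbatim in the tests above:

    ; Sketch, assuming the whilerw overload suffixes mirror the whilewr tests.
    declare <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.b.nxv16i1.p0(ptr, ptr)

    define <vscale x 16 x i1> @whilerw_b_example(ptr %a, ptr %b) {
      ; Compute the read-after-write conflict mask for byte elements.
      %mask = call <vscale x 16 x i1> @llvm.aarch64.sve.whilerw.b.nxv16i1.p0(ptr %a, ptr %b)
      ret <vscale x 16 x i1> %mask
    }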
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index a4d32679128dae..80a98670e3d182 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5144,25 +5144,27 @@ SDValue AArch64TargetLowering::LowerALIAS_LANE_MASK(SDValue Op,
unsigned IntrinsicID = 0;
uint64_t EltSize = Op.getOperand(2)->getAsZExtVal();
bool IsWriteAfterRead = Op.getOperand(3)->getAsZExtVal() == 1;
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr
- : Intrinsic::aarch64_sve_whilerw;
EVT VT = Op.getValueType();
MVT SimpleVT = VT.getSimpleVT();
// Make sure that the promoted mask size and element size match
switch (EltSize) {
case 1:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b : Intrinsic::aarch64_sve_whilerw_b;
assert((SimpleVT == MVT::v16i8 || SimpleVT == MVT::nxv16i1) &&
"Unexpected mask or element size");
break;
case 2:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h : Intrinsic::aarch64_sve_whilerw_h;
assert((SimpleVT == MVT::v8i8 || SimpleVT == MVT::nxv8i1) &&
"Unexpected mask or element size");
break;
case 4:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s : Intrinsic::aarch64_sve_whilerw_s;
assert((SimpleVT == MVT::v4i16 || SimpleVT == MVT::nxv4i1) &&
"Unexpected mask or element size");
break;
case 8:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d : Intrinsic::aarch64_sve_whilerw_d;
assert((SimpleVT == MVT::v2i32 || SimpleVT == MVT::nxv2i1) &&
"Unexpected mask or element size");
break;
@@ -5944,10 +5946,16 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
EVT PtrVT = getPointerTy(DAG.getDataLayout());
return DAG.getNode(AArch64ISD::THREAD_POINTER, dl, PtrVT);
}
- case Intrinsic::aarch64_sve_whilewr:
+ case Intrinsic::aarch64_sve_whilewr_b:
+ case Intrinsic::aarch64_sve_whilewr_h:
+ case Intrinsic::aarch64_sve_whilewr_s:
+ case Intrinsic::aarch64_sve_whilewr_d:
return DAG.getNode(AArch64ISD::WHILEWR, dl, Op.getValueType(),
Op.getOperand(1), Op.getOperand(2));
- case Intrinsic::aarch64_sve_whilerw:
+ case Intrinsic::aarch64_sve_whilerw_b:
+ case Intrinsic::aarch64_sve_whilerw_h:
+ case Intrinsic::aarch64_sve_whilerw_s:
+ case Intrinsic::aarch64_sve_whilerw_d:
return DAG.getNode(AArch64ISD::WHILERW, dl, Op.getValueType(),
Op.getOperand(1), Op.getOperand(2));
case Intrinsic::aarch64_neon_abs: {
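Taken together with the LowerALIAS_LANE_MASK switch above, the element size selects both the mask type and the intrinsic variant: 1 byte pairs with an nxv16i1 mask and the .b form, 2 with nxv8i1/.h, 4 with nxv4i1/.s, and 8 with nxv2i1/.d, while the write-after-read flag chooses between whilewr and whilerw. A hedged IR sketch of the scalable cases follows; the get.alias.lane.mask overload suffixes other than nxv16i1 are extrapolated from the documented overloading pattern, and %a/%b stand for pointers already converted to i64:

    declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64(i64, i64, i32, i1)
    declare <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.nxv8i1.i64(i64, i64, i32, i1)
    declare <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.nxv4i1.i64(i64, i64, i32, i1)
    declare <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.nxv2i1.i64(i64, i64, i32, i1)

    define void @alias_mask_by_element_size(i64 %a, i64 %b) {
      ; writeAfterRead = true, so each call would select the whilewr form
      ; of the matching element size during lowering.
      %m.b = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64(i64 %a, i64 %b, i32 1, i1 true)
      %m.h = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.nxv8i1.i64(i64 %a, i64 %b, i32 2, i1 true)
      %m.s = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.nxv4i1.i64(i64 %a, i64 %b, i32 4, i1 true)
      %m.d = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.nxv2i1.i64(i64 %a, i64 %b, i32 8, i1 true)
      ret void
    }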
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index 9812f85aff379f..e0a0cc93d07cb2 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -984,9 +984,6 @@ class AArch64TargetLowering : public TargetLowering {
bool shouldExpandGetActiveLaneMask(EVT VT, EVT OpVT) const override;
- bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT,
- unsigned EltSize) const override;
-
bool
shouldExpandPartialReductionIntrinsic(const IntrinsicInst *I) const override;
>From 87aa8c76b99141a8f220925878db0daebe539d73 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 16 Jan 2025 10:24:59 +0000
Subject: [PATCH 15/15] Format
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 80a98670e3d182..48dab7b326a427 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5149,22 +5149,26 @@ SDValue AArch64TargetLowering::LowerALIAS_LANE_MASK(SDValue Op,
// Make sure that the promoted mask size and element size match
switch (EltSize) {
case 1:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b : Intrinsic::aarch64_sve_whilerw_b;
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
+ : Intrinsic::aarch64_sve_whilerw_b;
assert((SimpleVT == MVT::v16i8 || SimpleVT == MVT::nxv16i1) &&
"Unexpected mask or element size");
break;
case 2:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h : Intrinsic::aarch64_sve_whilerw_h;
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
+ : Intrinsic::aarch64_sve_whilerw_h;
assert((SimpleVT == MVT::v8i8 || SimpleVT == MVT::nxv8i1) &&
"Unexpected mask or element size");
break;
case 4:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s : Intrinsic::aarch64_sve_whilerw_s;
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
+ : Intrinsic::aarch64_sve_whilerw_s;
assert((SimpleVT == MVT::v4i16 || SimpleVT == MVT::nxv4i1) &&
"Unexpected mask or element size");
break;
case 8:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d : Intrinsic::aarch64_sve_whilerw_d;
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
+ : Intrinsic::aarch64_sve_whilerw_d;
assert((SimpleVT == MVT::v2i32 || SimpleVT == MVT::nxv2i1) &&
"Unexpected mask or element size");
break;