[llvm] [Intrinsics][AArch64] Add intrinsic to mask off aliasing vector lanes (PR #117007)
Sam Tebbs via llvm-commits
llvm-commits at lists.llvm.org
Thu Jan 30 08:58:32 PST 2025
https://github.com/SamTebbs33 updated https://github.com/llvm/llvm-project/pull/117007
>From 7b25fff5a40457fddd62537f3ef77a078e8ad9e9 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Fri, 15 Nov 2024 10:24:46 +0000
Subject: [PATCH 01/18] [Intrinsics][AArch64] Add intrinsic to mask off
aliasing vector lanes
It can be unsafe to load a vector from one address and write a vector to
another address if the lanes of those two accesses overlap within a
vectorised loop iteration.
This PR adds an intrinsic designed to create a mask with lanes disabled
if they overlap between the two pointer arguments, so that only safe
lanes are loaded, operated on and stored.
Along with the two pointer parameters, the intrinsic also takes an
immediate that represents the size in bytes of the vector element type,
as well as an i1 immediate that is true if there is a write-after-read
hazard and false if there is a read-after-write hazard.
This will be used by #100579 and replaces the existing lowering for
whilewr since that isn't needed now that we have the intrinsic.
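
As a rough sketch of the intended use (hypothetical IR with a fixed VF of 4
and i32 elements, so an element size of 4 bytes; the vectoriser's actual
output will differ), the mask guards the loop's masked memory operations:

  %a = ptrtoint ptr %src to i64
  %b = ptrtoint ptr %dst to i64
  ; write-after-read hazard: %dst is stored to after %src is read from
  %alias.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i32(i64 %a, i64 %b, i32 4, i1 1)
  %vec = call <4 x i32> @llvm.masked.load.v4i32.p0(ptr %src, i32 4, <4 x i1> %alias.mask, <4 x i32> poison)
  %sum = add <4 x i32> %vec, %vec
  call void @llvm.masked.store.v4i32.p0(<4 x i32> %sum, ptr %dst, i32 4, <4 x i1> %alias.mask)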
---
llvm/docs/LangRef.rst | 84 ++++
llvm/include/llvm/CodeGen/TargetLowering.h | 7 +
llvm/include/llvm/IR/Intrinsics.td | 5 +
.../SelectionDAG/SelectionDAGBuilder.cpp | 50 +++
.../Target/AArch64/AArch64ISelLowering.cpp | 76 +++-
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 7 +
.../lib/Target/AArch64/AArch64SVEInstrInfo.td | 11 +-
llvm/lib/Target/AArch64/SVEInstrFormats.td | 10 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 421 ++++++++++++++++++
.../CodeGen/AArch64/alias_mask_scalable.ll | 195 ++++++++
10 files changed, 852 insertions(+), 14 deletions(-)
create mode 100644 llvm/test/CodeGen/AArch64/alias_mask.ll
create mode 100644 llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index b922636d6c914b..e2df2e0aefb1be 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23624,6 +23624,90 @@ Examples:
%active.lane.mask = call <4 x i1> @llvm.get.active.lane.mask.v4i1.i64(i64 %elem0, i64 429)
%wide.masked.load = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %3, i32 4, <4 x i1> %active.lane.mask, <4 x i32> poison)
+.. _int_experimental_get_alias_lane_mask:
+
+'``llvm.experimental.get.alias.lane.mask.*``' Intrinsics
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Syntax:
+"""""""
+This is an overloaded intrinsic.
+
+::
+
+ declare <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %ptrA, i64 %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %ptrA, i64 %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+
+
+Overview:
+"""""""""
+
+Create a mask representing lanes that do or do not overlap between two pointers
+across one vector loop iteration.
+
+
+Arguments:
+""""""""""
+
+The first two arguments have the same scalar integer type.
+The final two are immediates and the result is a vector with the i1 element type.
+
+Semantics:
+""""""""""
+
+The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
+VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
+In other cases when ``%writeAfterRead`` is true, the
+'``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically
+equivalent to:
+
+::
+
+ %diff = (%ptrB - %ptrA) / %elementSize
+ %m[i] = (icmp ult i, %diff) || (%diff <= 0)
+
+When the return value is not poison and ``%writeAfterRead`` is false, the
+'``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically
+equivalent to:
+
+::
+
+ %diff = abs(%ptrB - %ptrA) / %elementSize
+ %m[i] = (icmp ult i, %diff) || (%diff == 0)
+
+where ``%m`` is a vector (mask) of active/inactive lanes with its elements
+indexed by ``i``, ``%ptrA`` and ``%ptrB`` are the two scalar integer arguments to
+``llvm.experimental.get.alias.lane.mask.*``, and ``%elementSize`` is the first
+immediate argument. The ``%writeAfterRead`` argument is expected to be true if
+``%ptrB`` is stored to after ``%ptrA`` is read from.
+The above is equivalent to:
+
+::
+
+ %m = @llvm.experimental.get.alias.lane.mask(%ptrA, %ptrB, %elementSize, %writeAfterRead)
+
+This can, for example, be emitted by the loop vectorizer, in which case
+``%ptrA`` is a pointer that is read from within the loop, and ``%ptrB`` is a
+pointer that is stored to within the loop.
+If the difference between these pointers is less than the vector factor, then
+they overlap (alias) within a loop iteration.
+For example, if ``%ptrA`` is 20 and ``%ptrB`` is 23 with a vector factor of 8,
+then lanes 3, 4, 5, 6 and 7 of the vector loaded from ``%ptrA``
+share addresses with lanes 0, 1, 2, 3 and 4 of the vector stored to at
+``%ptrB``.
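+
+Applying the write-after-read semantics above to that example (assuming an
+``%elementSize`` of 1), ``%diff`` is (23 - 20) / 1 = 3, so a mask along the
+lines of the following is produced, disabling the overlapping lanes:
+
+::
+
+ %m = <i1 true, i1 true, i1 true, i1 false, i1 false, i1 false, i1 false, i1 false>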
+
+
+Examples:
+"""""""""
+
+.. code-block:: llvm
+
+ %alias.lane.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 4, i1 1)
+ %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %alias.lane.mask, <4 x i32> poison)
+ [...]
+ call void @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %alias.lane.mask)
.. _int_experimental_vp_splice:
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index 38ac90f0c081b3..3fe5b3d108ba10 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -468,6 +468,13 @@ class TargetLoweringBase {
return true;
}
+ /// Return true if the @llvm.experimental.get.alias.lane.mask intrinsic should
+ /// be expanded using generic code in SelectionDAGBuilder.
+ virtual bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT,
+ unsigned EltSize) const {
+ return true;
+ }
+
virtual bool shouldExpandGetVectorLength(EVT CountVT, unsigned VF,
bool IsScalable) const {
return true;
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index ee877349a33149..b7a8efbd51b83c 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -2363,6 +2363,11 @@ let IntrProperties = [IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<1>>
llvm_i32_ty]>;
}
+def int_experimental_get_alias_lane_mask:
+ DefaultAttrsIntrinsic<[llvm_anyvector_ty],
+ [llvm_anyint_ty, LLVMMatchType<1>, llvm_anyint_ty, llvm_i1_ty],
+ [IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>, ImmArg<ArgIndex<3>>]>;
+
def int_get_active_lane_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
[llvm_anyint_ty, LLVMMatchType<1>],
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 9f1aadcb279a99..25eddba2b921a4 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8276,6 +8276,56 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
visitVectorExtractLastActive(I, Intrinsic);
return;
}
+ case Intrinsic::experimental_get_alias_lane_mask: {
+ SDValue SourceValue = getValue(I.getOperand(0));
+ SDValue SinkValue = getValue(I.getOperand(1));
+ SDValue EltSize = getValue(I.getOperand(2));
+ bool IsWriteAfterRead =
+ cast<ConstantSDNode>(getValue(I.getOperand(3)))->getZExtValue() != 0;
+ auto IntrinsicVT = EVT::getEVT(I.getType());
+ auto PtrVT = SourceValue->getValueType(0);
+
+ if (!TLI.shouldExpandGetAliasLaneMask(
+ IntrinsicVT, PtrVT,
+ cast<ConstantSDNode>(EltSize)->getSExtValue())) {
+ visitTargetIntrinsic(I, Intrinsic);
+ return;
+ }
+
+ SDValue Diff = DAG.getNode(ISD::SUB, sdl, PtrVT, SinkValue, SourceValue);
+ if (!IsWriteAfterRead)
+ Diff = DAG.getNode(ISD::ABS, sdl, PtrVT, Diff);
+
+ Diff = DAG.getNode(ISD::SDIV, sdl, PtrVT, Diff, EltSize);
+ SDValue Zero = DAG.getTargetConstant(0, sdl, PtrVT);
+
+ // If the difference is positive then some elements may alias
+ auto CmpVT =
+ TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), PtrVT);
+ SDValue Cmp = DAG.getSetCC(sdl, CmpVT, Diff, Zero,
+ IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
+
+ // Splat the compare result then OR it with a lane mask
+ SDValue Splat = DAG.getSplat(IntrinsicVT, sdl, Cmp);
+
+ SDValue DiffMask;
+ // Don't emit an active lane mask if the target doesn't support it
+ if (TLI.shouldExpandGetActiveLaneMask(IntrinsicVT, PtrVT)) {
+ EVT VecTy = EVT::getVectorVT(*DAG.getContext(), PtrVT,
+ IntrinsicVT.getVectorElementCount());
+ SDValue DiffSplat = DAG.getSplat(VecTy, sdl, Diff);
+ SDValue VectorStep = DAG.getStepVector(sdl, VecTy);
+ DiffMask = DAG.getSetCC(sdl, IntrinsicVT, VectorStep, DiffSplat,
+ ISD::CondCode::SETULT);
+ } else {
+ DiffMask = DAG.getNode(
+ ISD::INTRINSIC_WO_CHAIN, sdl, IntrinsicVT,
+ DAG.getTargetConstant(Intrinsic::get_active_lane_mask, sdl, MVT::i64),
+ Zero, Diff);
+ }
+ SDValue Or = DAG.getNode(ISD::OR, sdl, IntrinsicVT, DiffMask, Splat);
+ setValue(&I, Or);
+ }
}
}
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 9a0bb73087980d..84a719401aa341 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -2038,6 +2038,25 @@ bool AArch64TargetLowering::shouldExpandGetActiveLaneMask(EVT ResVT,
return false;
}
+bool AArch64TargetLowering::shouldExpandGetAliasLaneMask(
+ EVT VT, EVT PtrVT, unsigned EltSize) const {
+ if (!Subtarget->hasSVE2())
+ return true;
+
+ if (PtrVT != MVT::i64)
+ return true;
+
+ if (VT == MVT::v2i1 || VT == MVT::nxv2i1)
+ return EltSize != 8;
+ if (VT == MVT::v4i1 || VT == MVT::nxv4i1)
+ return EltSize != 4;
+ if (VT == MVT::v8i1 || VT == MVT::nxv8i1)
+ return EltSize != 2;
+ if (VT == MVT::v16i1 || VT == MVT::nxv16i1)
+ return EltSize != 1;
+ return true;
+}
+
bool AArch64TargetLowering::shouldExpandPartialReductionIntrinsic(
const IntrinsicInst *I) const {
if (I->getIntrinsicID() != Intrinsic::experimental_vector_partial_reduce_add)
@@ -2818,6 +2837,8 @@ const char *AArch64TargetLowering::getTargetNodeName(unsigned Opcode) const {
MAKE_CASE(AArch64ISD::LS64_BUILD)
MAKE_CASE(AArch64ISD::LS64_EXTRACT)
MAKE_CASE(AArch64ISD::TBL)
+ MAKE_CASE(AArch64ISD::WHILEWR)
+ MAKE_CASE(AArch64ISD::WHILERW)
MAKE_CASE(AArch64ISD::FADD_PRED)
MAKE_CASE(AArch64ISD::FADDA_PRED)
MAKE_CASE(AArch64ISD::FADDV_PRED)
@@ -6016,6 +6037,18 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
EVT PtrVT = getPointerTy(DAG.getDataLayout());
return DAG.getNode(AArch64ISD::THREAD_POINTER, dl, PtrVT);
}
+ case Intrinsic::aarch64_sve_whilewr_b:
+ case Intrinsic::aarch64_sve_whilewr_h:
+ case Intrinsic::aarch64_sve_whilewr_s:
+ case Intrinsic::aarch64_sve_whilewr_d:
+ return DAG.getNode(AArch64ISD::WHILEWR, dl, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2));
+ case Intrinsic::aarch64_sve_whilerw_b:
+ case Intrinsic::aarch64_sve_whilerw_h:
+ case Intrinsic::aarch64_sve_whilerw_s:
+ case Intrinsic::aarch64_sve_whilerw_d:
+ return DAG.getNode(AArch64ISD::WHILERW, dl, Op.getValueType(),
+ Op.getOperand(1), Op.getOperand(2));
case Intrinsic::aarch64_neon_abs: {
EVT Ty = Op.getValueType();
if (Ty == MVT::i64) {
@@ -6475,18 +6508,45 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
return DAG.getNode(AArch64ISD::USDOT, dl, Op.getValueType(),
Op.getOperand(1), Op.getOperand(2), Op.getOperand(3));
}
+ case Intrinsic::experimental_get_alias_lane_mask:
case Intrinsic::get_active_lane_mask: {
- SDValue ID =
- DAG.getTargetConstant(Intrinsic::aarch64_sve_whilelo, dl, MVT::i64);
+ unsigned IntrinsicID = Intrinsic::aarch64_sve_whilelo;
+ if (IntNo == Intrinsic::experimental_get_alias_lane_mask) {
+ uint64_t EltSize = Op.getOperand(3)->getAsZExtVal();
+ bool IsWriteAfterRead = Op.getOperand(4)->getAsZExtVal() == 1;
+ switch (EltSize) {
+ case 1:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
+ : Intrinsic::aarch64_sve_whilerw_b;
+ break;
+ case 2:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
+ : Intrinsic::aarch64_sve_whilerw_h;
+ break;
+ case 4:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
+ : Intrinsic::aarch64_sve_whilerw_s;
+ break;
+ case 8:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
+ : Intrinsic::aarch64_sve_whilerw_d;
+ break;
+ default:
+ llvm_unreachable("Unexpected element size for get.alias.lane.mask");
+ break;
+ }
+ }
+ SDValue ID = DAG.getTargetConstant(IntrinsicID, dl, MVT::i64);
EVT VT = Op.getValueType();
if (VT.isScalableVector())
return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, dl, VT, ID, Op.getOperand(1),
Op.getOperand(2));
- // We can use the SVE whilelo instruction to lower this intrinsic by
- // creating the appropriate sequence of scalable vector operations and
- // then extracting a fixed-width subvector from the scalable vector.
+ // We can use the SVE whilelo/whilewr/whilerw instruction to lower this
+ // intrinsic by creating the appropriate sequence of scalable vector
+ // operations and then extracting a fixed-width subvector from the scalable
+ // vector.
EVT ContainerVT = getContainerForFixedLengthVector(DAG, VT);
EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
@@ -19872,7 +19932,10 @@ static bool isPredicateCCSettingOp(SDValue N) {
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilels ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilelt ||
// get_active_lane_mask is lowered to a whilelo instruction.
- N.getConstantOperandVal(0) == Intrinsic::get_active_lane_mask)))
+ N.getConstantOperandVal(0) == Intrinsic::get_active_lane_mask ||
+ // get_alias_lane_mask is lowered to a whilewr/rw instruction.
+ N.getConstantOperandVal(0) ==
+ Intrinsic::experimental_get_alias_lane_mask)))
return true;
return false;
@@ -27652,6 +27715,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
return;
}
case Intrinsic::experimental_vector_match:
+ case Intrinsic::experimental_get_alias_lane_mask:
case Intrinsic::get_active_lane_mask: {
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index 61579de50db17e..3df8f2da365f31 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -298,6 +298,10 @@ enum NodeType : unsigned {
SMAXV,
UMAXV,
+ // Alias lane masks
+ WHILEWR,
+ WHILERW,
+
SADDV_PRED,
UADDV_PRED,
SMAXV_PRED,
@@ -993,6 +997,9 @@ class AArch64TargetLowering : public TargetLowering {
bool shouldExpandGetActiveLaneMask(EVT VT, EVT OpVT) const override;
+ bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT,
+ unsigned EltSize) const override;
+
bool
shouldExpandPartialReductionIntrinsic(const IntrinsicInst *I) const override;
diff --git a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
index 6d5e2697160ab6..919e8876f9737b 100644
--- a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
@@ -140,6 +140,11 @@ def AArch64st1q_scatter : SDNode<"AArch64ISD::SST1Q_PRED", SDT_AArch64_SCATTER_V
// AArch64 SVE/SVE2 - the remaining node definitions
//
+// Alias masks
+def SDT_AArch64Mask : SDTypeProfile<1, 2, [SDTCisVec<0>, SDTCisInt<1>, SDTCisSameAs<2, 1>, SDTCVecEltisVT<0,i1>]>;
+def AArch64whilewr : SDNode<"AArch64ISD::WHILEWR", SDT_AArch64Mask>;
+def AArch64whilerw : SDNode<"AArch64ISD::WHILERW", SDT_AArch64Mask>;
+
// SVE CNT/INC/RDVL
def sve_rdvl_imm : ComplexPattern<i64, 1, "SelectRDVLImm<-32, 31, 16>">;
def sve_cnth_imm : ComplexPattern<i64, 1, "SelectRDVLImm<1, 16, 8>">;
@@ -3914,9 +3919,9 @@ let Predicates = [HasSVE2_or_SME] in {
defm WHILEHI_PXX : sve_int_while8_rr<0b101, "whilehi", int_aarch64_sve_whilehi, int_aarch64_sve_whilelo>;
// SVE2 pointer conflict compare
- defm WHILEWR_PXX : sve2_int_while_rr<0b0, "whilewr", "int_aarch64_sve_whilewr">;
- defm WHILERW_PXX : sve2_int_while_rr<0b1, "whilerw", "int_aarch64_sve_whilerw">;
-} // End HasSVE2_or_SME
+ defm WHILEWR_PXX : sve2_int_while_rr<0b0, "whilewr", AArch64whilewr>;
+ defm WHILERW_PXX : sve2_int_while_rr<0b1, "whilerw", AArch64whilerw>;
+} // End HasSVE2_or_SME
let Predicates = [HasSVEAES, HasNonStreamingSVE2_or_SSVE_AES] in {
// SVE2 crypto destructive binary operations
diff --git a/llvm/lib/Target/AArch64/SVEInstrFormats.td b/llvm/lib/Target/AArch64/SVEInstrFormats.td
index 2ee9910da50795..a0a1ac4b15612d 100644
--- a/llvm/lib/Target/AArch64/SVEInstrFormats.td
+++ b/llvm/lib/Target/AArch64/SVEInstrFormats.td
@@ -5849,16 +5849,16 @@ class sve2_int_while_rr<bits<2> sz8_64, bits<1> rw, string asm,
let isWhile = 1;
}
-multiclass sve2_int_while_rr<bits<1> rw, string asm, string op> {
+multiclass sve2_int_while_rr<bits<1> rw, string asm, SDPatternOperator op> {
def _B : sve2_int_while_rr<0b00, rw, asm, PPR8>;
def _H : sve2_int_while_rr<0b01, rw, asm, PPR16>;
def _S : sve2_int_while_rr<0b10, rw, asm, PPR32>;
def _D : sve2_int_while_rr<0b11, rw, asm, PPR64>;
- def : SVE_2_Op_Pat<nxv16i1, !cast<SDPatternOperator>(op # _b), i64, i64, !cast<Instruction>(NAME # _B)>;
- def : SVE_2_Op_Pat<nxv8i1, !cast<SDPatternOperator>(op # _h), i64, i64, !cast<Instruction>(NAME # _H)>;
- def : SVE_2_Op_Pat<nxv4i1, !cast<SDPatternOperator>(op # _s), i64, i64, !cast<Instruction>(NAME # _S)>;
- def : SVE_2_Op_Pat<nxv2i1, !cast<SDPatternOperator>(op # _d), i64, i64, !cast<Instruction>(NAME # _D)>;
+ def : SVE_2_Op_Pat<nxv16i1, op, i64, i64, !cast<Instruction>(NAME # _B)>;
+ def : SVE_2_Op_Pat<nxv8i1, op, i64, i64, !cast<Instruction>(NAME # _H)>;
+ def : SVE_2_Op_Pat<nxv4i1, op, i64, i64, !cast<Instruction>(NAME # _S)>;
+ def : SVE_2_Op_Pat<nxv2i1, op, i64, i64, !cast<Instruction>(NAME # _D)>;
}
//===----------------------------------------------------------------------===//
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
new file mode 100644
index 00000000000000..84a22822f1702b
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -0,0 +1,421 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE
+; RUN: llc -mtriple=aarch64 %s -o - | FileCheck %s --check-prefix=CHECK-NOSVE
+
+define <16 x i1> @whilewr_8(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $q0 killed $q0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_8:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_1
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI0_0]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_2
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI0_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x8, :lo12:.LCPI0_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_4
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_3
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI0_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_5
+; CHECK-NOSVE-NEXT: dup v2.2d, x9
+; CHECK-NOSVE-NEXT: ldr q4, [x10, :lo12:.LCPI0_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI0_6
+; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI0_5]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI0_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI0_6]
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI0_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v2.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v2.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v2.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v2.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v2.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v2.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v2.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v2.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v2.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ ret <16 x i1> %0
+}
+
+define <8 x i1> @whilewr_16(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_16:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI1_1
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI1_2
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI1_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI1_3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI1_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI1_2]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI1_3]
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ ret <8 x i1> %0
+}
+
+define <4 x i1> @whilewr_32(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_32:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI2_0
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI2_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI2_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI2_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ ret <4 x i1> %0
+}
+
+define <2 x i1> @whilewr_64(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilewr_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilewr_64:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI3_0
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI3_0]
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: dup v1.2s, w8
+; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ ret <2 x i1> %0
+}
+
+define <16 x i1> @whilerw_8(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $q0 killed $q0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_8:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_0
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_1
+; CHECK-NOSVE-NEXT: ldr q0, [x8, :lo12:.LCPI4_0]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_2
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q2, [x8, :lo12:.LCPI4_2]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_3
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI4_1]
+; CHECK-NOSVE-NEXT: ldr q4, [x8, :lo12:.LCPI4_3]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_4
+; CHECK-NOSVE-NEXT: dup v3.2d, x9
+; CHECK-NOSVE-NEXT: ldr q5, [x8, :lo12:.LCPI4_4]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_5
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI4_6
+; CHECK-NOSVE-NEXT: ldr q6, [x8, :lo12:.LCPI4_5]
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI4_7
+; CHECK-NOSVE-NEXT: ldr q7, [x10, :lo12:.LCPI4_6]
+; CHECK-NOSVE-NEXT: ldr q16, [x8, :lo12:.LCPI4_7]
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v3.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v3.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v3.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v3.2d, v4.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v3.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v3.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v3.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v3.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v4.4s, v2.4s
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v3.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ ret <16 x i1> %0
+}
+
+define <8 x i1> @whilerw_16(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
+; CHECK-SVE-NEXT: mov z0.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_16:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI5_1
+; CHECK-NOSVE-NEXT: cneg x8, x8, mi
+; CHECK-NOSVE-NEXT: adrp x11, .LCPI5_2
+; CHECK-NOSVE-NEXT: ldr q1, [x9, :lo12:.LCPI5_0]
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI5_3
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI5_1]
+; CHECK-NOSVE-NEXT: ldr q3, [x11, :lo12:.LCPI5_2]
+; CHECK-NOSVE-NEXT: ldr q4, [x9, :lo12:.LCPI5_3]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: dup v0.2d, x8
+; CHECK-NOSVE-NEXT: cmp x8, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v0.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v4.2d
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v2.4s, v1.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v3.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
+; CHECK-NOSVE-NEXT: dup v1.8b, w8
+; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 0)
+ ret <8 x i1> %0
+}
+
+define <4 x i1> @whilerw_32(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.h, x0, x1
+; CHECK-SVE-NEXT: mov z0.h, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_32:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI6_0
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI6_0]
+; CHECK-NOSVE-NEXT: add x10, x9, #3
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI6_1
+; CHECK-NOSVE-NEXT: asr x9, x9, #2
+; CHECK-NOSVE-NEXT: ldr q2, [x10, :lo12:.LCPI6_1]
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v2.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v0.4s, v1.4s
+; CHECK-NOSVE-NEXT: dup v1.4h, w8
+; CHECK-NOSVE-NEXT: xtn v0.4h, v0.4s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ ret <4 x i1> %0
+}
+
+define <2 x i1> @whilerw_64(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: whilerw_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: whilerw p0.s, x0, x1
+; CHECK-SVE-NEXT: mov z0.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: whilerw_64:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs x9, x1, x0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI7_0
+; CHECK-NOSVE-NEXT: cneg x9, x9, mi
+; CHECK-NOSVE-NEXT: ldr q1, [x8, :lo12:.LCPI7_0]
+; CHECK-NOSVE-NEXT: add x10, x9, #7
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: csel x9, x10, x9, lt
+; CHECK-NOSVE-NEXT: asr x9, x9, #3
+; CHECK-NOSVE-NEXT: dup v0.2d, x9
+; CHECK-NOSVE-NEXT: cmp x9, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v0.2d, v1.2d
+; CHECK-NOSVE-NEXT: dup v1.2s, w8
+; CHECK-NOSVE-NEXT: xtn v0.2s, v0.2d
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ ret <2 x i1> %0
+}
+
+define <16 x i1> @not_whilewr_wrong_eltsize(i64 %a, i64 %b) {
+; CHECK-SVE-LABEL: not_whilewr_wrong_eltsize:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
+; CHECK-SVE-NEXT: dup v0.16b, w9
+; CHECK-SVE-NEXT: mov z1.b, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: orr v0.16b, v1.16b, v0.16b
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: not_whilewr_wrong_eltsize:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: sub x8, x1, x0
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_0
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_1
+; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI8_0]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_2
+; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI8_2]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_4
+; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI8_1]
+; CHECK-NOSVE-NEXT: asr x8, x8, #1
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_3
+; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI8_4]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_6
+; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI8_3]
+; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_5
+; CHECK-NOSVE-NEXT: dup v4.2d, x8
+; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI8_6]
+; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_7
+; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI8_5]
+; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI8_7]
+; CHECK-NOSVE-NEXT: cmp x8, #1
+; CHECK-NOSVE-NEXT: cset w8, lt
+; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
+; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
+; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
+; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
+; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
+; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
+; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
+; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
+; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
+; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
+; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
+; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
+; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
+; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
+; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
+; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ ret <16 x i1> %0
+}
+
+define <2 x i1> @not_whilerw_ptr32(i32 %a, i32 %b) {
+; CHECK-SVE-LABEL: not_whilerw_ptr32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs w8, w1, w0
+; CHECK-SVE-NEXT: cneg w8, w8, mi
+; CHECK-SVE-NEXT: add w9, w8, #7
+; CHECK-SVE-NEXT: cmp w8, #0
+; CHECK-SVE-NEXT: csel w8, w9, w8, lt
+; CHECK-SVE-NEXT: asr w8, w8, #3
+; CHECK-SVE-NEXT: cmp w8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p0.s, #0, w8
+; CHECK-SVE-NEXT: dup v0.2s, w9
+; CHECK-SVE-NEXT: mov z1.s, p0/z, #-1 // =0xffffffffffffffff
+; CHECK-SVE-NEXT: orr v0.8b, v1.8b, v0.8b
+; CHECK-SVE-NEXT: ret
+;
+; CHECK-NOSVE-LABEL: not_whilerw_ptr32:
+; CHECK-NOSVE: // %bb.0: // %entry
+; CHECK-NOSVE-NEXT: subs w9, w1, w0
+; CHECK-NOSVE-NEXT: adrp x8, .LCPI9_0
+; CHECK-NOSVE-NEXT: cneg w9, w9, mi
+; CHECK-NOSVE-NEXT: ldr d1, [x8, :lo12:.LCPI9_0]
+; CHECK-NOSVE-NEXT: add w10, w9, #7
+; CHECK-NOSVE-NEXT: cmp w9, #0
+; CHECK-NOSVE-NEXT: csel w9, w10, w9, lt
+; CHECK-NOSVE-NEXT: asr w9, w9, #3
+; CHECK-NOSVE-NEXT: dup v0.2s, w9
+; CHECK-NOSVE-NEXT: cmp w9, #0
+; CHECK-NOSVE-NEXT: cset w8, eq
+; CHECK-NOSVE-NEXT: dup v2.2s, w8
+; CHECK-NOSVE-NEXT: cmhi v0.2s, v0.2s, v1.2s
+; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v2.8b
+; CHECK-NOSVE-NEXT: ret
+entry:
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i32.i32(i32 %a, i32 %b, i32 8, i1 0)
+ ret <2 x i1> %0
+}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
new file mode 100644
index 00000000000000..be5ec8b2a82bf2
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -0,0 +1,195 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE2
+; RUN: llc -mtriple=aarch64 -mattr=+sve %s -o - | FileCheck %s --check-prefix=CHECK-SVE
+
+define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilewr_8:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ ret <vscale x 16 x i1> %0
+}
+
+define <vscale x 8 x i1> @whilewr_16(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilewr_16:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ ret <vscale x 8 x i1> %0
+}
+
+define <vscale x 4 x i1> @whilewr_32(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilewr_32:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ ret <vscale x 4 x i1> %0
+}
+
+define <vscale x 2 x i1> @whilewr_64(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilewr_64:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilewr_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #1
+; CHECK-SVE-NEXT: cset w9, lt
+; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ ret <vscale x 2 x i1> %0
+}
+
+define <vscale x 16 x i1> @whilerw_8(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilerw_8:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.b, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_8:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ ret <vscale x 16 x i1> %0
+}
+
+define <vscale x 8 x i1> @whilerw_16(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilerw_16:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.h, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_16:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
+; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 0)
+ ret <vscale x 8 x i1> %0
+}
+
+define <vscale x 4 x i1> @whilerw_32(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilerw_32:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.s, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_32:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: add x9, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ ret <vscale x 4 x i1> %0
+}
+
+define <vscale x 2 x i1> @whilerw_64(i64 %a, i64 %b) {
+; CHECK-SVE2-LABEL: whilerw_64:
+; CHECK-SVE2: // %bb.0: // %entry
+; CHECK-SVE2-NEXT: whilerw p0.d, x0, x1
+; CHECK-SVE2-NEXT: ret
+;
+; CHECK-SVE-LABEL: whilerw_64:
+; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: add x9, x8, #7
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: cmp x8, #0
+; CHECK-SVE-NEXT: cset w9, eq
+; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
+; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
+; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
+; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: ret
+entry:
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ ret <vscale x 2 x i1> %0
+}
>From 8d10d44396bd8faae1c1a7801e52a3bfa8f312e2 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Fri, 10 Jan 2025 11:37:37 +0000
Subject: [PATCH 02/18] Rework lowering location
---
llvm/include/llvm/CodeGen/ISDOpcodes.h | 5 +
.../SelectionDAG/LegalizeIntegerTypes.cpp | 22 ++
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h | 2 +
.../SelectionDAG/LegalizeVectorOps.cpp | 41 ++++
.../SelectionDAG/SelectionDAGBuilder.cpp | 53 +----
.../SelectionDAG/SelectionDAGDumper.cpp | 3 +
llvm/lib/CodeGen/TargetLoweringBase.cpp | 3 +
.../Target/AArch64/AArch64ISelLowering.cpp | 113 +++++++---
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 1 +
llvm/test/CodeGen/AArch64/alias_mask.ll | 120 ++--------
.../CodeGen/AArch64/alias_mask_scalable.ll | 210 ++++++++++++++----
11 files changed, 351 insertions(+), 222 deletions(-)
diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index fd8784a4c10034..b717a329378928 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -1484,6 +1484,11 @@ enum NodeType {
// Operands: Mask
VECTOR_FIND_LAST_ACTIVE,
+ // The `llvm.experimental.get.alias.lane.mask.*` intrinsics
+ // Operands: Load pointer, Store pointer, Element size, Write after read
+ // Output: Mask
+ EXPERIMENTAL_ALIAS_LANE_MASK,
+
// llvm.clear_cache intrinsic
// Operands: Input Chain, Start Addres, End Address
// Outputs: Output Chain
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index b0a624680231e9..bb5b535a06dc28 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -55,6 +55,9 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
N->dump(&DAG); dbgs() << "\n";
#endif
report_fatal_error("Do not know how to promote this operator!");
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ Res = PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(N);
+ break;
case ISD::MERGE_VALUES:Res = PromoteIntRes_MERGE_VALUES(N, ResNo); break;
case ISD::AssertSext: Res = PromoteIntRes_AssertSext(N); break;
case ISD::AssertZext: Res = PromoteIntRes_AssertZext(N); break;
@@ -359,6 +362,14 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(SDNode *N,
return GetPromotedInteger(Op);
}
+SDValue
+DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
+ EVT VT = N->getValueType(0);
+ EVT NewVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
+ return DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, SDLoc(N), NewVT,
+ N->ops());
+}
+
SDValue DAGTypeLegalizer::PromoteIntRes_AssertSext(SDNode *N) {
// Sign-extend the new bits, and continue the assertion.
SDValue Op = SExtPromotedInteger(N->getOperand(0));
@@ -2076,6 +2087,9 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
case ISD::VECTOR_FIND_LAST_ACTIVE:
Res = PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(N, OpNo);
break;
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ Res = DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(N, OpNo);
+ break;
}
// If the result is null, the sub-method took care of registering results etc.
@@ -2824,6 +2838,14 @@ SDValue DAGTypeLegalizer::PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(SDNode *N,
return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
}
+SDValue
+DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N,
+ unsigned OpNo) {
+ SmallVector<SDValue, 4> NewOps(N->ops());
+ NewOps[OpNo] = GetPromotedInteger(N->getOperand(OpNo));
+ return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
+}
+
//===----------------------------------------------------------------------===//
// Integer Result Expansion
//===----------------------------------------------------------------------===//
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index f13f70e66cfaa6..7959e9cf214091 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -379,6 +379,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntRes_IS_FPCLASS(SDNode *N);
SDValue PromoteIntRes_PATCHPOINT(SDNode *N);
SDValue PromoteIntRes_VECTOR_FIND_LAST_ACTIVE(SDNode *N);
+ SDValue PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N);
// Integer Operand Promotion.
bool PromoteIntegerOperand(SDNode *N, unsigned OpNo);
@@ -430,6 +431,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntOp_VP_SPLICE(SDNode *N, unsigned OpNo);
SDValue PromoteIntOp_VECTOR_HISTOGRAM(SDNode *N, unsigned OpNo);
SDValue PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(SDNode *N, unsigned OpNo);
+ SDValue PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N, unsigned OpNo);
void SExtOrZExtPromotedOperands(SDValue &LHS, SDValue &RHS);
void PromoteSetCCOperands(SDValue &LHS,SDValue &RHS, ISD::CondCode Code);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index 6ad08bce44b0a4..71b9cc3c7a6716 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -138,6 +138,7 @@ class VectorLegalizer {
SDValue ExpandVP_FNEG(SDNode *Node);
SDValue ExpandVP_FABS(SDNode *Node);
SDValue ExpandVP_FCOPYSIGN(SDNode *Node);
+ SDValue ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N);
SDValue ExpandSELECT(SDNode *Node);
std::pair<SDValue, SDValue> ExpandLoad(SDNode *N);
SDValue ExpandStore(SDNode *N);
@@ -467,6 +468,7 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
case ISD::VECTOR_COMPRESS:
case ISD::SCMP:
case ISD::UCMP:
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
break;
case ISD::SMULFIX:
@@ -1233,6 +1235,9 @@ void VectorLegalizer::Expand(SDNode *Node, SmallVectorImpl<SDValue> &Results) {
case ISD::UCMP:
Results.push_back(TLI.expandCMP(Node, DAG));
return;
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ Results.push_back(ExpandEXPERIMENTAL_ALIAS_LANE_MASK(Node));
+ return;
case ISD::FADD:
case ISD::FMUL:
@@ -1740,6 +1745,42 @@ SDValue VectorLegalizer::ExpandVP_FCOPYSIGN(SDNode *Node) {
return DAG.getNode(ISD::BITCAST, DL, VT, CopiedSign);
}
+SDValue VectorLegalizer::ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
+ SDLoc DL(N);
+ SDValue SourceValue = N->getOperand(0);
+ SDValue SinkValue = N->getOperand(1);
+ SDValue EltSize = N->getOperand(2);
+
+ bool IsWriteAfterRead =
+ cast<ConstantSDNode>(N->getOperand(3))->getZExtValue() != 0;
+ auto VT = N->getValueType(0);
+ auto PtrVT = SourceValue->getValueType(0);
+
+ SDValue Diff = DAG.getNode(ISD::SUB, DL, PtrVT, SinkValue, SourceValue);
+ if (!IsWriteAfterRead)
+ Diff = DAG.getNode(ISD::ABS, DL, PtrVT, Diff);
+
+ Diff = DAG.getNode(ISD::SDIV, DL, PtrVT, Diff, EltSize);
+ SDValue Zero = DAG.getTargetConstant(0, DL, PtrVT);
+
+ // If the difference is positive then some elements may alias
+ auto CmpVT = TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
+ Diff.getValueType());
+ SDValue Cmp = DAG.getSetCC(DL, CmpVT, Diff, Zero,
+ IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
+
+ EVT SplatTY =
+ EVT::getVectorVT(*DAG.getContext(), PtrVT, VT.getVectorElementCount());
+ SDValue DiffSplat = DAG.getSplat(SplatTY, DL, Diff);
+ SDValue VectorStep = DAG.getStepVector(DL, SplatTY);
+ SDValue DiffMask =
+ DAG.getSetCC(DL, VT, VectorStep, DiffSplat, ISD::CondCode::SETULT);
+
+ // Splat the compare result then OR it with a lane mask
+ SDValue Splat = DAG.getSplat(VT, DL, Cmp);
+ return DAG.getNode(ISD::OR, DL, VT, DiffMask, Splat);
+}
+
void VectorLegalizer::ExpandFP_TO_UINT(SDNode *Node,
SmallVectorImpl<SDValue> &Results) {
// Attempt to expand using TargetLowering.
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 25eddba2b921a4..eeb295a8757a9b 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8277,54 +8277,13 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
return;
}
case Intrinsic::experimental_get_alias_lane_mask: {
- SDValue SourceValue = getValue(I.getOperand(0));
- SDValue SinkValue = getValue(I.getOperand(1));
- SDValue EltSize = getValue(I.getOperand(2));
- bool IsWriteAfterRead =
- cast<ConstantSDNode>(getValue(I.getOperand(3)))->getZExtValue() != 0;
auto IntrinsicVT = EVT::getEVT(I.getType());
- auto PtrVT = SourceValue->getValueType(0);
-
- if (!TLI.shouldExpandGetAliasLaneMask(
- IntrinsicVT, PtrVT,
- cast<ConstantSDNode>(EltSize)->getSExtValue())) {
- visitTargetIntrinsic(I, Intrinsic);
- return;
- }
-
- SDValue Diff = DAG.getNode(ISD::SUB, sdl, PtrVT, SinkValue, SourceValue);
- if (!IsWriteAfterRead)
- Diff = DAG.getNode(ISD::ABS, sdl, PtrVT, Diff);
-
- Diff = DAG.getNode(ISD::SDIV, sdl, PtrVT, Diff, EltSize);
- SDValue Zero = DAG.getTargetConstant(0, sdl, PtrVT);
-
- // If the difference is positive then some elements may alias
- auto CmpVT =
- TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), PtrVT);
- SDValue Cmp = DAG.getSetCC(sdl, CmpVT, Diff, Zero,
- IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
-
- // Splat the compare result then OR it with a lane mask
- SDValue Splat = DAG.getSplat(IntrinsicVT, sdl, Cmp);
-
- SDValue DiffMask;
- // Don't emit an active lane mask if the target doesn't support it
- if (TLI.shouldExpandGetActiveLaneMask(IntrinsicVT, PtrVT)) {
- EVT VecTy = EVT::getVectorVT(*DAG.getContext(), PtrVT,
- IntrinsicVT.getVectorElementCount());
- SDValue DiffSplat = DAG.getSplat(VecTy, sdl, Diff);
- SDValue VectorStep = DAG.getStepVector(sdl, VecTy);
- DiffMask = DAG.getSetCC(sdl, IntrinsicVT, VectorStep, DiffSplat,
- ISD::CondCode::SETULT);
- } else {
- DiffMask = DAG.getNode(
- ISD::INTRINSIC_WO_CHAIN, sdl, IntrinsicVT,
- DAG.getTargetConstant(Intrinsic::get_active_lane_mask, sdl, MVT::i64),
- Zero, Diff);
- }
- SDValue Or = DAG.getNode(ISD::OR, sdl, IntrinsicVT, DiffMask, Splat);
- setValue(&I, Or);
+ SmallVector<SDValue, 4> Ops;
+ for (auto &Op : I.operands())
+ Ops.push_back(getValue(Op));
+ SDValue Mask =
+ DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, sdl, IntrinsicVT, Ops);
+ setValue(&I, Mask);
}
}
}
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index f63c8dd3df1c83..be59ff7b055dc6 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -570,6 +570,9 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
case ISD::VECTOR_FIND_LAST_ACTIVE:
return "find_last_active";
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ return "alias_mask";
+
// Vector Predication
#define BEGIN_REGISTER_VP_SDNODE(SDID, LEGALARG, NAME, ...) \
case ISD::SDID: \
diff --git a/llvm/lib/CodeGen/TargetLoweringBase.cpp b/llvm/lib/CodeGen/TargetLoweringBase.cpp
index 73af0a9a714074..db245956d65348 100644
--- a/llvm/lib/CodeGen/TargetLoweringBase.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringBase.cpp
@@ -821,6 +821,9 @@ void TargetLoweringBase::initActions() {
// Masked vector extracts default to expand.
setOperationAction(ISD::VECTOR_FIND_LAST_ACTIVE, VT, Expand);
+ // Aliasing lane masks default to expand
+ setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Expand);
+
// FP environment operations default to expand.
setOperationAction(ISD::GET_FPENV, VT, Expand);
setOperationAction(ISD::SET_FPENV, VT, Expand);
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 84a719401aa341..d16de1b7148623 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1807,6 +1807,13 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
setOperationAction(ISD::INTRINSIC_WO_CHAIN, VT, Custom);
}
+ if (Subtarget->hasSVE2() || (Subtarget->hasSME() && Subtarget->isStreaming())) {
+ for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8, MVT::nxv2i1,
+ MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
+ setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Custom);
+ }
+ }
+
// Handle operations that are only available in non-streaming SVE mode.
if (Subtarget->isSVEAvailable()) {
for (auto VT : {MVT::nxv16i8, MVT::nxv8i16, MVT::nxv4i32, MVT::nxv2i64,
@@ -5284,6 +5291,59 @@ SDValue AArch64TargetLowering::LowerFSINCOS(SDValue Op,
static MVT getSVEContainerType(EVT ContentTy);
+SDValue AArch64TargetLowering::LowerALIAS_LANE_MASK(SDValue Op,
+ SelectionDAG &DAG) const {
+ SDLoc DL(Op);
+ unsigned IntrinsicID = 0;
+ uint64_t EltSize = Op.getOperand(2)->getAsZExtVal();
+ bool IsWriteAfterRead = Op.getOperand(3)->getAsZExtVal() == 1;
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr
+ : Intrinsic::aarch64_sve_whilerw;
+ EVT VT = Op.getValueType();
+ MVT SimpleVT = VT.getSimpleVT();
+ // Make sure that the promoted mask size and element size match
+ switch (EltSize) {
+ case 1:
+ assert((SimpleVT == MVT::v16i8 || SimpleVT == MVT::nxv16i1) &&
+ "Unexpected mask or element size");
+ break;
+ case 2:
+ assert((SimpleVT == MVT::v8i8 || SimpleVT == MVT::nxv8i1) &&
+ "Unexpected mask or element size");
+ break;
+ case 4:
+ assert((SimpleVT == MVT::v4i16 || SimpleVT == MVT::nxv4i1) &&
+ "Unexpected mask or element size");
+ break;
+ case 8:
+ assert((SimpleVT == MVT::v2i32 || SimpleVT == MVT::nxv2i1) &&
+ "Unexpected mask or element size");
+ break;
+ default:
+ llvm_unreachable("Unexpected element size for get.alias.lane.mask");
+ break;
+ }
+ SDValue ID = DAG.getTargetConstant(IntrinsicID, DL, MVT::i64);
+
+ if (VT.isScalableVector())
+ return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, VT, ID, Op.getOperand(0),
+ Op.getOperand(1));
+
+ // We can use the SVE whilewr/whilerw instruction to lower this
+ // intrinsic by creating the appropriate sequence of scalable vector
+ // operations and then extracting a fixed-width subvector from the scalable
+ // vector.
+
+ EVT ContainerVT = getContainerForFixedLengthVector(DAG, VT);
+ EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
+
+ SDValue Mask = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, WhileVT, ID,
+ Op.getOperand(0), Op.getOperand(1));
+ SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, DL, ContainerVT, Mask);
+ return DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, MaskAsInt,
+ DAG.getVectorIdxConstant(0, DL));
+}
+
SDValue AArch64TargetLowering::LowerBITCAST(SDValue Op,
SelectionDAG &DAG) const {
EVT OpVT = Op.getValueType();
@@ -6511,31 +6571,6 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
case Intrinsic::experimental_get_alias_lane_mask:
case Intrinsic::get_active_lane_mask: {
unsigned IntrinsicID = Intrinsic::aarch64_sve_whilelo;
- if (IntNo == Intrinsic::experimental_get_alias_lane_mask) {
- uint64_t EltSize = Op.getOperand(3)->getAsZExtVal();
- bool IsWriteAfterRead = Op.getOperand(4)->getAsZExtVal() == 1;
- switch (EltSize) {
- case 1:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
- : Intrinsic::aarch64_sve_whilerw_b;
- break;
- case 2:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
- : Intrinsic::aarch64_sve_whilerw_h;
- break;
- case 4:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
- : Intrinsic::aarch64_sve_whilerw_s;
- break;
- case 8:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
- : Intrinsic::aarch64_sve_whilerw_d;
- break;
- default:
- llvm_unreachable("Unexpected element size for get.alias.lane.mask");
- break;
- }
- }
SDValue ID = DAG.getTargetConstant(IntrinsicID, dl, MVT::i64);
EVT VT = Op.getValueType();
@@ -6543,7 +6578,7 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, dl, VT, ID, Op.getOperand(1),
Op.getOperand(2));
- // We can use the SVE whilelo/whilewr/whilerw instruction to lower this
+ // We can use the SVE whilelo instruction to lower this
// intrinsic by creating the appropriate sequence of scalable vector
// operations and then extracting a fixed-width subvector from the scalable
// vector.
@@ -7374,6 +7409,8 @@ SDValue AArch64TargetLowering::LowerOperation(SDValue Op,
default:
llvm_unreachable("unimplemented operand");
return SDValue();
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ return LowerALIAS_LANE_MASK(Op, DAG);
case ISD::BITCAST:
return LowerBITCAST(Op, DAG);
case ISD::GlobalAddress:
@@ -19921,7 +19958,8 @@ static SDValue getPTest(SelectionDAG &DAG, EVT VT, SDValue Pg, SDValue Op,
AArch64CC::CondCode Cond);
static bool isPredicateCCSettingOp(SDValue N) {
- if ((N.getOpcode() == ISD::SETCC) ||
+ if ((N.getOpcode() == ISD::SETCC ||
+ N.getOpcode() == ISD::EXPERIMENTAL_ALIAS_LANE_MASK) ||
(N.getOpcode() == ISD::INTRINSIC_WO_CHAIN &&
(N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilege ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilegt ||
@@ -19932,10 +19970,7 @@ static bool isPredicateCCSettingOp(SDValue N) {
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilels ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilelt ||
// get_active_lane_mask is lowered to a whilelo instruction.
- N.getConstantOperandVal(0) == Intrinsic::get_active_lane_mask ||
- // get_alias_lane_mask is lowered to a whilewr/rw instruction.
- N.getConstantOperandVal(0) ==
- Intrinsic::experimental_get_alias_lane_mask)))
+ N.getConstantOperandVal(0) == Intrinsic::get_active_lane_mask)))
return true;
return false;
@@ -27659,6 +27694,22 @@ void AArch64TargetLowering::ReplaceNodeResults(
// CONCAT_VECTORS -- but delegate to common code for result type
// legalisation
return;
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK: {
+ EVT VT = N->getValueType(0);
+ if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
+ return;
+
+ // NOTE: Only trivial type promotion is supported.
+ EVT NewVT = getTypeToTransformTo(*DAG.getContext(), VT);
+ if (NewVT.getVectorNumElements() != VT.getVectorNumElements())
+ return;
+
+ SDLoc DL(N);
+ auto V =
+ DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, DL, NewVT, N->ops());
+ Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, VT, V));
+ return;
+ }
case ISD::INTRINSIC_WO_CHAIN: {
EVT VT = N->getValueType(0);
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index 3df8f2da365f31..075470a9199114 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -1214,6 +1214,7 @@ class AArch64TargetLowering : public TargetLowering {
SDValue LowerXOR(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerCONCAT_VECTORS(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerFSINCOS(SDValue Op, SelectionDAG &DAG) const;
+ SDValue LowerALIAS_LANE_MASK(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerBITCAST(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerVSCALE(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerTRUNCATE(SDValue Op, SelectionDAG &DAG) const;
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 84a22822f1702b..9b344f03da077b 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -48,10 +48,12 @@ define <16 x i1> @whilewr_8(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v1.8h, v2.8h, v3.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
ret <16 x i1> %0
}
@@ -88,6 +90,8 @@ define <8 x i1> @whilewr_16(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
; CHECK-NOSVE-NEXT: dup v1.8b, w8
; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: shl v0.8b, v0.8b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.8b, v0.8b, #0
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
@@ -125,7 +129,7 @@ define <4 x i1> @whilewr_32(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
ret <4 x i1> %0
}
@@ -155,7 +159,7 @@ define <2 x i1> @whilewr_64(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
ret <2 x i1> %0
}
@@ -206,10 +210,12 @@ define <16 x i1> @whilerw_8(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
; CHECK-NOSVE-NEXT: dup v1.16b, w8
+; CHECK-NOSVE-NEXT: shl v0.16b, v0.16b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.16b, v0.16b, #0
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
ret <16 x i1> %0
}
@@ -247,6 +253,8 @@ define <8 x i1> @whilerw_16(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: uzp1 v0.8h, v0.8h, v1.8h
; CHECK-NOSVE-NEXT: dup v1.8b, w8
; CHECK-NOSVE-NEXT: xtn v0.8b, v0.8h
+; CHECK-NOSVE-NEXT: shl v0.8b, v0.8b, #7
+; CHECK-NOSVE-NEXT: cmlt v0.8b, v0.8b, #0
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
@@ -285,7 +293,7 @@ define <4 x i1> @whilerw_32(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
ret <4 x i1> %0
}
@@ -316,106 +324,6 @@ define <2 x i1> @whilerw_64(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
- ret <2 x i1> %0
-}
-
-define <16 x i1> @not_whilewr_wrong_eltsize(i64 %a, i64 %b) {
-; CHECK-SVE-LABEL: not_whilewr_wrong_eltsize:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: sub x8, x1, x0
-; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-SVE-NEXT: asr x8, x8, #1
-; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
-; CHECK-SVE-NEXT: dup v0.16b, w9
-; CHECK-SVE-NEXT: mov z1.b, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: orr v0.16b, v1.16b, v0.16b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: not_whilewr_wrong_eltsize:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: sub x8, x1, x0
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_0
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_1
-; CHECK-NOSVE-NEXT: add x8, x8, x8, lsr #63
-; CHECK-NOSVE-NEXT: ldr q0, [x9, :lo12:.LCPI8_0]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_2
-; CHECK-NOSVE-NEXT: ldr q2, [x9, :lo12:.LCPI8_2]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_4
-; CHECK-NOSVE-NEXT: ldr q1, [x10, :lo12:.LCPI8_1]
-; CHECK-NOSVE-NEXT: asr x8, x8, #1
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_3
-; CHECK-NOSVE-NEXT: ldr q5, [x9, :lo12:.LCPI8_4]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_6
-; CHECK-NOSVE-NEXT: ldr q3, [x10, :lo12:.LCPI8_3]
-; CHECK-NOSVE-NEXT: adrp x10, .LCPI8_5
-; CHECK-NOSVE-NEXT: dup v4.2d, x8
-; CHECK-NOSVE-NEXT: ldr q7, [x9, :lo12:.LCPI8_6]
-; CHECK-NOSVE-NEXT: adrp x9, .LCPI8_7
-; CHECK-NOSVE-NEXT: ldr q6, [x10, :lo12:.LCPI8_5]
-; CHECK-NOSVE-NEXT: ldr q16, [x9, :lo12:.LCPI8_7]
-; CHECK-NOSVE-NEXT: cmp x8, #1
-; CHECK-NOSVE-NEXT: cset w8, lt
-; CHECK-NOSVE-NEXT: cmhi v0.2d, v4.2d, v0.2d
-; CHECK-NOSVE-NEXT: cmhi v1.2d, v4.2d, v1.2d
-; CHECK-NOSVE-NEXT: cmhi v2.2d, v4.2d, v2.2d
-; CHECK-NOSVE-NEXT: cmhi v3.2d, v4.2d, v3.2d
-; CHECK-NOSVE-NEXT: cmhi v5.2d, v4.2d, v5.2d
-; CHECK-NOSVE-NEXT: cmhi v6.2d, v4.2d, v6.2d
-; CHECK-NOSVE-NEXT: cmhi v7.2d, v4.2d, v7.2d
-; CHECK-NOSVE-NEXT: cmhi v4.2d, v4.2d, v16.2d
-; CHECK-NOSVE-NEXT: uzp1 v0.4s, v1.4s, v0.4s
-; CHECK-NOSVE-NEXT: uzp1 v1.4s, v3.4s, v2.4s
-; CHECK-NOSVE-NEXT: uzp1 v2.4s, v6.4s, v5.4s
-; CHECK-NOSVE-NEXT: uzp1 v3.4s, v4.4s, v7.4s
-; CHECK-NOSVE-NEXT: uzp1 v0.8h, v1.8h, v0.8h
-; CHECK-NOSVE-NEXT: uzp1 v1.8h, v3.8h, v2.8h
-; CHECK-NOSVE-NEXT: uzp1 v0.16b, v1.16b, v0.16b
-; CHECK-NOSVE-NEXT: dup v1.16b, w8
-; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
-; CHECK-NOSVE-NEXT: ret
-entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
- ret <16 x i1> %0
-}
-
-define <2 x i1> @not_whilerw_ptr32(i32 %a, i32 %b) {
-; CHECK-SVE-LABEL: not_whilerw_ptr32:
-; CHECK-SVE: // %bb.0: // %entry
-; CHECK-SVE-NEXT: subs w8, w1, w0
-; CHECK-SVE-NEXT: cneg w8, w8, mi
-; CHECK-SVE-NEXT: add w9, w8, #7
-; CHECK-SVE-NEXT: cmp w8, #0
-; CHECK-SVE-NEXT: csel w8, w9, w8, lt
-; CHECK-SVE-NEXT: asr w8, w8, #3
-; CHECK-SVE-NEXT: cmp w8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p0.s, #0, w8
-; CHECK-SVE-NEXT: dup v0.2s, w9
-; CHECK-SVE-NEXT: mov z1.s, p0/z, #-1 // =0xffffffffffffffff
-; CHECK-SVE-NEXT: orr v0.8b, v1.8b, v0.8b
-; CHECK-SVE-NEXT: ret
-;
-; CHECK-NOSVE-LABEL: not_whilerw_ptr32:
-; CHECK-NOSVE: // %bb.0: // %entry
-; CHECK-NOSVE-NEXT: subs w9, w1, w0
-; CHECK-NOSVE-NEXT: adrp x8, .LCPI9_0
-; CHECK-NOSVE-NEXT: cneg w9, w9, mi
-; CHECK-NOSVE-NEXT: ldr d1, [x8, :lo12:.LCPI9_0]
-; CHECK-NOSVE-NEXT: add w10, w9, #7
-; CHECK-NOSVE-NEXT: cmp w9, #0
-; CHECK-NOSVE-NEXT: csel w9, w10, w9, lt
-; CHECK-NOSVE-NEXT: asr w9, w9, #3
-; CHECK-NOSVE-NEXT: dup v0.2s, w9
-; CHECK-NOSVE-NEXT: cmp w9, #0
-; CHECK-NOSVE-NEXT: cset w8, eq
-; CHECK-NOSVE-NEXT: dup v2.2s, w8
-; CHECK-NOSVE-NEXT: cmhi v0.2s, v0.2s, v1.2s
-; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v2.8b
-; CHECK-NOSVE-NEXT: ret
-entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i32.i32(i32 %a, i32 %b, i32 8, i1 0)
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
ret <2 x i1> %0
}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index be5ec8b2a82bf2..a7c9c5e3cdd33b 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -10,16 +10,57 @@ define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilewr_8:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, z0.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: incd z1.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #4
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z2.d, z1.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z2.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p2.s, p5.s, p6.s
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p4.s
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
ret <vscale x 16 x i1> %0
}
@@ -31,13 +72,28 @@ define <vscale x 8 x i1> @whilewr_16(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilewr_16:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z3.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
@@ -54,20 +110,27 @@ define <vscale x 4 x i1> @whilewr_32(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilewr_32:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #3
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
+; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
ret <vscale x 4 x i1> %0
}
@@ -80,19 +143,22 @@ define <vscale x 2 x i1> @whilewr_64(i64 %a, i64 %b) {
; CHECK-SVE-LABEL: whilewr_64:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: sub x8, x1, x0
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: add x9, x8, #7
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z1.d, x8
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
; CHECK-SVE-NEXT: cmp x8, #1
-; CHECK-SVE-NEXT: cset w9, lt
-; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, lt
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
ret <vscale x 2 x i1> %0
}
@@ -104,17 +170,60 @@ define <vscale x 16 x i1> @whilerw_8(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilerw_8:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: str x29, [sp, #-16]! // 8-byte Folded Spill
+; CHECK-SVE-NEXT: addvl sp, sp, #-1
+; CHECK-SVE-NEXT: str p7, [sp, #4, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p6, [sp, #5, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p5, [sp, #6, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: str p4, [sp, #7, mul vl] // 2-byte Folded Spill
+; CHECK-SVE-NEXT: .cfi_escape 0x0f, 0x0c, 0x8f, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0x2e, 0x00, 0x1e, 0x22 // sp + 16 + 8 * VG
+; CHECK-SVE-NEXT: .cfi_offset w29, -16
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
+; CHECK-SVE-NEXT: mov z1.d, x8
+; CHECK-SVE-NEXT: mov z2.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z0.d
+; CHECK-SVE-NEXT: mov z5.d, z0.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z1.d, z0.d
+; CHECK-SVE-NEXT: incd z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: incd z5.d, all, mul #4
+; CHECK-SVE-NEXT: mov z3.d, z2.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z1.d, z2.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z1.d, z4.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #4
+; CHECK-SVE-NEXT: cmphi p4.d, p0/z, z1.d, z5.d
+; CHECK-SVE-NEXT: incd z3.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p5.d, p0/z, z1.d, z2.d
+; CHECK-SVE-NEXT: cmphi p7.d, p0/z, z1.d, z4.d
+; CHECK-SVE-NEXT: uzp1 p1.s, p2.s, p1.s
+; CHECK-SVE-NEXT: mov z0.d, z3.d
+; CHECK-SVE-NEXT: cmphi p6.d, p0/z, z1.d, z3.d
+; CHECK-SVE-NEXT: uzp1 p2.s, p4.s, p5.s
+; CHECK-SVE-NEXT: ldr p5, [sp, #6, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: ldr p4, [sp, #7, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: incd z0.d, all, mul #4
+; CHECK-SVE-NEXT: uzp1 p3.s, p3.s, p6.s
+; CHECK-SVE-NEXT: ldr p6, [sp, #5, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
+; CHECK-SVE-NEXT: uzp1 p1.h, p1.h, p3.h
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p0.b, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p7.s, p0.s
+; CHECK-SVE-NEXT: ldr p7, [sp, #4, mul vl] // 2-byte Folded Reload
+; CHECK-SVE-NEXT: uzp1 p0.h, p2.h, p0.h
+; CHECK-SVE-NEXT: uzp1 p0.b, p1.b, p0.b
; CHECK-SVE-NEXT: whilelo p1.b, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
+; CHECK-SVE-NEXT: addvl sp, sp, #1
+; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
ret <vscale x 16 x i1> %0
}
@@ -126,14 +235,29 @@ define <vscale x 8 x i1> @whilerw_16(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilerw_16:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
; CHECK-SVE-NEXT: add x8, x8, x8, lsr #63
+; CHECK-SVE-NEXT: mov z1.d, z0.d
+; CHECK-SVE-NEXT: mov z2.d, z0.d
; CHECK-SVE-NEXT: asr x8, x8, #1
+; CHECK-SVE-NEXT: mov z3.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: incd z2.d, all, mul #2
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z3.d, z0.d
+; CHECK-SVE-NEXT: mov z4.d, z1.d
+; CHECK-SVE-NEXT: cmphi p2.d, p0/z, z3.d, z1.d
+; CHECK-SVE-NEXT: cmphi p3.d, p0/z, z3.d, z2.d
+; CHECK-SVE-NEXT: incd z4.d, all, mul #2
+; CHECK-SVE-NEXT: uzp1 p1.s, p1.s, p2.s
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z3.d, z4.d
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p0.h, #0, x8
-; CHECK-SVE-NEXT: sbfx x8, x9, #0, #1
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p3.s, p0.s
+; CHECK-SVE-NEXT: uzp1 p0.h, p1.h, p0.h
; CHECK-SVE-NEXT: whilelo p1.h, xzr, x8
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
@@ -150,21 +274,28 @@ define <vscale x 4 x i1> @whilerw_32(i64 %a, i64 %b) {
;
; CHECK-SVE-LABEL: whilerw_32:
; CHECK-SVE: // %bb.0: // %entry
+; CHECK-SVE-NEXT: index z0.d, #0, #1
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
; CHECK-SVE-NEXT: add x9, x8, #3
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
+; CHECK-SVE-NEXT: mov z1.d, z0.d
; CHECK-SVE-NEXT: asr x8, x8, #2
+; CHECK-SVE-NEXT: mov z2.d, x8
+; CHECK-SVE-NEXT: incd z1.d
+; CHECK-SVE-NEXT: cmphi p1.d, p0/z, z2.d, z0.d
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z2.d, z1.d
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p1.s, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.s, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: uzp1 p0.s, p1.s, p0.s
+; CHECK-SVE-NEXT: whilelo p1.s, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
ret <vscale x 4 x i1> %0
}
@@ -177,19 +308,22 @@ define <vscale x 2 x i1> @whilerw_64(i64 %a, i64 %b) {
; CHECK-SVE-LABEL: whilerw_64:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: subs x8, x1, x0
+; CHECK-SVE-NEXT: index z0.d, #0, #1
+; CHECK-SVE-NEXT: ptrue p0.d
; CHECK-SVE-NEXT: cneg x8, x8, mi
; CHECK-SVE-NEXT: add x9, x8, #7
; CHECK-SVE-NEXT: cmp x8, #0
; CHECK-SVE-NEXT: csel x8, x9, x8, lt
; CHECK-SVE-NEXT: asr x8, x8, #3
+; CHECK-SVE-NEXT: mov z1.d, x8
+; CHECK-SVE-NEXT: cmphi p0.d, p0/z, z1.d, z0.d
; CHECK-SVE-NEXT: cmp x8, #0
-; CHECK-SVE-NEXT: cset w9, eq
-; CHECK-SVE-NEXT: whilelo p1.d, #0, x8
-; CHECK-SVE-NEXT: sbfx x9, x9, #0, #1
-; CHECK-SVE-NEXT: whilelo p0.d, xzr, x9
-; CHECK-SVE-NEXT: mov p0.b, p1/m, p1.b
+; CHECK-SVE-NEXT: cset w8, eq
+; CHECK-SVE-NEXT: sbfx x8, x8, #0, #1
+; CHECK-SVE-NEXT: whilelo p1.d, xzr, x8
+; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
ret <vscale x 2 x i1> %0
}
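
(Aside, not part of the patch: the fixed-width lowering added above behaves, conceptually, like computing the scalable form of the mask and then taking its low fixed-width part, as the in-code comment describes. A rough IR-level sketch of that idea, with hypothetical operands %a and %b -- the real work is done on SelectionDAG nodes, not IR:

  %p     = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
  %wide  = sext <vscale x 16 x i1> %p to <vscale x 16 x i8>   ; promote the i1 lanes
  %fixed = call <16 x i8> @llvm.vector.extract.v16i8.nxv16i8(<vscale x 16 x i8> %wide, i64 0)
  %mask  = trunc <16 x i8> %fixed to <16 x i1>                ; low fixed-width subvector

This mirrors the sign-extend plus EXTRACT_SUBVECTOR sequence in LowerALIAS_LANE_MASK.)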
>From 3eebc8ec416f0c2d8beac53c68bb4be0c8c4ebca Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 15 Jan 2025 11:27:56 +0000
Subject: [PATCH 03/18] Format
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index d16de1b7148623..6e332a3c4b7001 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1807,7 +1807,8 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
setOperationAction(ISD::INTRINSIC_WO_CHAIN, VT, Custom);
}
- if (Subtarget->hasSVE2() || (Subtarget->hasSME() && Subtarget->isStreaming())) {
+ if (Subtarget->hasSVE2() ||
+ (Subtarget->hasSME() && Subtarget->isStreaming())) {
for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8, MVT::nxv2i1,
MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Custom);
>From b3d7791f53090996131c4a62849a9ba91e001421 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 15 Jan 2025 16:16:31 +0000
Subject: [PATCH 04/18] Fix ISD node name string and remove shouldExpand
function
---
.../SelectionDAG/SelectionDAGDumper.cpp | 2 +-
.../Target/AArch64/AArch64ISelLowering.cpp | 19 -------------------
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 3 ---
3 files changed, 1 insertion(+), 23 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index be59ff7b055dc6..70dff1018390c5 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -571,7 +571,7 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
return "find_last_active";
case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- return "alias_mask";
+ return "alias_lane_mask";
// Vector Predication
#define BEGIN_REGISTER_VP_SDNODE(SDID, LEGALARG, NAME, ...) \
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 6e332a3c4b7001..ef71975553d025 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -2046,25 +2046,6 @@ bool AArch64TargetLowering::shouldExpandGetActiveLaneMask(EVT ResVT,
return false;
}
-bool AArch64TargetLowering::shouldExpandGetAliasLaneMask(
- EVT VT, EVT PtrVT, unsigned EltSize) const {
- if (!Subtarget->hasSVE2())
- return true;
-
- if (PtrVT != MVT::i64)
- return true;
-
- if (VT == MVT::v2i1 || VT == MVT::nxv2i1)
- return EltSize != 8;
- if (VT == MVT::v4i1 || VT == MVT::nxv4i1)
- return EltSize != 4;
- if (VT == MVT::v8i1 || VT == MVT::nxv8i1)
- return EltSize != 2;
- if (VT == MVT::v16i1 || VT == MVT::nxv16i1)
- return EltSize != 1;
- return true;
-}
-
bool AArch64TargetLowering::shouldExpandPartialReductionIntrinsic(
const IntrinsicInst *I) const {
if (I->getIntrinsicID() != Intrinsic::experimental_vector_partial_reduce_add)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index 075470a9199114..17d64e8cfadbb2 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -997,9 +997,6 @@ class AArch64TargetLowering : public TargetLowering {
bool shouldExpandGetActiveLaneMask(EVT VT, EVT OpVT) const override;
- bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT,
- unsigned EltSize) const override;
-
bool
shouldExpandPartialReductionIntrinsic(const IntrinsicInst *I) const override;
>From 7124e2c3a74fbee71f295a510227bfd8fa98f6f4 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 16 Jan 2025 10:24:59 +0000
Subject: [PATCH 05/18] Format
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index ef71975553d025..8791e5cd739240 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5279,25 +5279,31 @@ SDValue AArch64TargetLowering::LowerALIAS_LANE_MASK(SDValue Op,
unsigned IntrinsicID = 0;
uint64_t EltSize = Op.getOperand(2)->getAsZExtVal();
bool IsWriteAfterRead = Op.getOperand(3)->getAsZExtVal() == 1;
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr
- : Intrinsic::aarch64_sve_whilerw;
EVT VT = Op.getValueType();
MVT SimpleVT = VT.getSimpleVT();
// Make sure that the promoted mask size and element size match
switch (EltSize) {
case 1:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
+ : Intrinsic::aarch64_sve_whilerw_b;
assert((SimpleVT == MVT::v16i8 || SimpleVT == MVT::nxv16i1) &&
"Unexpected mask or element size");
break;
case 2:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
+ : Intrinsic::aarch64_sve_whilerw_h;
assert((SimpleVT == MVT::v8i8 || SimpleVT == MVT::nxv8i1) &&
"Unexpected mask or element size");
break;
case 4:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
+ : Intrinsic::aarch64_sve_whilerw_s;
assert((SimpleVT == MVT::v4i16 || SimpleVT == MVT::nxv4i1) &&
"Unexpected mask or element size");
break;
case 8:
+ IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
+ : Intrinsic::aarch64_sve_whilerw_d;
assert((SimpleVT == MVT::v2i32 || SimpleVT == MVT::nxv2i1) &&
"Unexpected mask or element size");
break;
>From 59c7eac2c3aaaee7f341ca1319a59dac6e752fb1 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 27 Jan 2025 14:17:16 +0000
Subject: [PATCH 06/18] Move promote case
---
llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index bb5b535a06dc28..bc433f09fe74f5 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -55,9 +55,6 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
N->dump(&DAG); dbgs() << "\n";
#endif
report_fatal_error("Do not know how to promote this operator!");
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- Res = PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(N);
- break;
case ISD::MERGE_VALUES:Res = PromoteIntRes_MERGE_VALUES(N, ResNo); break;
case ISD::AssertSext: Res = PromoteIntRes_AssertSext(N); break;
case ISD::AssertZext: Res = PromoteIntRes_AssertZext(N); break;
@@ -315,6 +312,10 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
Res = PromoteIntRes_VP_REDUCE(N);
break;
+ case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ Res = PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(N);
+ break;
+
case ISD::FREEZE:
Res = PromoteIntRes_FREEZE(N);
break;
>From 4b613c27334034a9157ce1f700b461dea8e470bc Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 27 Jan 2025 14:17:30 +0000
Subject: [PATCH 07/18] Fix tablegen comment
---
llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
index 919e8876f9737b..3a99ee8c1c3eb2 100644
--- a/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
+++ b/llvm/lib/Target/AArch64/AArch64SVEInstrInfo.td
@@ -3921,7 +3921,7 @@ let Predicates = [HasSVE2_or_SME] in {
// SVE2 pointer conflict compare
defm WHILEWR_PXX : sve2_int_while_rr<0b0, "whilewr", AArch64whilewr>;
defm WHILERW_PXX : sve2_int_while_rr<0b1, "whilerw", AArch64whilerw>;
-} // End HasSVE2orSME
+} // End HasSVE2_or_SME
let Predicates = [HasSVEAES, HasNonStreamingSVE2_or_SSVE_AES] in {
// SVE2 crypto destructive binary operations
>From c62abc689f04b72203cc5c858c5e25ce4b0f2be2 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 27 Jan 2025 14:20:31 +0000
Subject: [PATCH 08/18] Remove DAGTypeLegalizer::
---
llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index bc433f09fe74f5..14e1c3ee1eb2bc 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -2089,7 +2089,7 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
Res = PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(N, OpNo);
break;
case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- Res = DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(N, OpNo);
+ Res = PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(N, OpNo);
break;
}
>From 63a0a8bb3bc3b25ae53abdbbbc52c36e4fcf67a3 Mon Sep 17 00:00:00 2001
From: Sam Tebbs <samuel.tebbs at arm.com>
Date: Mon, 27 Jan 2025 14:20:39 +0000
Subject: [PATCH 09/18] Use getConstantOperandVal
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 8791e5cd739240..3f2d7ca78897de 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5277,8 +5277,8 @@ SDValue AArch64TargetLowering::LowerALIAS_LANE_MASK(SDValue Op,
SelectionDAG &DAG) const {
SDLoc DL(Op);
unsigned IntrinsicID = 0;
- uint64_t EltSize = Op.getOperand(2)->getAsZExtVal();
- bool IsWriteAfterRead = Op.getOperand(3)->getAsZExtVal() == 1;
+ uint64_t EltSize = Op.getConstantOperandVal(2);
+ bool IsWriteAfterRead = Op.getConstantOperandVal(3) == 1;
EVT VT = Op.getValueType();
MVT SimpleVT = VT.getSimpleVT();
// Make sure that the promoted mask size and element size match
>From 55def3d504e47086d4a915bc2d7a5c932c01aa2d Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Wed, 29 Jan 2025 11:40:32 +0000
Subject: [PATCH 10/18] Remove isPredicateCCSettingOp case
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 1 -
1 file changed, 1 deletion(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 3f2d7ca78897de..87f161a47722a5 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -19947,7 +19947,6 @@ static SDValue getPTest(SelectionDAG &DAG, EVT VT, SDValue Pg, SDValue Op,
static bool isPredicateCCSettingOp(SDValue N) {
if ((N.getOpcode() == ISD::SETCC ||
- N.getOpcode() == ISD::EXPERIMENTAL_ALIAS_LANE_MASK) ||
(N.getOpcode() == ISD::INTRINSIC_WO_CHAIN &&
(N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilege ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilegt ||
>From dd2c62476b83c1caad0d09fa8223a4f3785cfd6c Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 14:40:12 +0000
Subject: [PATCH 11/18] Remove overloads for pointer and element size
parameters
---
llvm/docs/LangRef.rst | 12 +++----
llvm/include/llvm/IR/Intrinsics.td | 2 +-
.../SelectionDAG/LegalizeVectorOps.cpp | 11 ++++---
.../Target/AArch64/AArch64ISelLowering.cpp | 2 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 32 +++++++++----------
.../CodeGen/AArch64/alias_mask_scalable.ll | 32 +++++++++----------
6 files changed, 47 insertions(+), 44 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index e2df2e0aefb1be..a2d649c93ec187 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23635,10 +23635,10 @@ This is an overloaded intrinsic.
::
- declare <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %ptrA, i64 %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %ptrA, i64 %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
Overview:
@@ -23678,7 +23678,7 @@ equivalent to:
%m[i] = (icmp ult i, %diff) || (%diff == 0)
where ``%m`` is a vector (mask) of active/inactive lanes with its elements
-indexed by ``i``, and ``%ptrA``, ``%ptrB`` are the two i64 arguments to
+indexed by ``i``, and ``%ptrA``, ``%ptrB`` are the two ptr arguments to
``llvm.experimental.get.alias.lane.mask.*`` and ``%elementSize`` is the first
immediate argument. The ``%writeAfterRead`` argument is expected to be true if
``%ptrB`` is stored to after ``%ptrA`` is read from.
@@ -23704,7 +23704,7 @@ Examples:
.. code-block:: llvm
- %alias.lane.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i32(i64 %ptrA, i64 %ptrB, i32 4, i1 1)
+ %alias.lane.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)
%vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %alias.lane.mask, <4 x i32> poison)
[...]
call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %alias.lane.mask)
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index b7a8efbd51b83c..6a70b9d966bf0b 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -2365,7 +2365,7 @@ let IntrProperties = [IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<1>>
def int_experimental_get_alias_lane_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
- [llvm_anyint_ty, LLVMMatchType<1>, llvm_anyint_ty, llvm_i1_ty],
+ [llvm_anyptr_ty, LLVMMatchType<1>, llvm_i64_ty, llvm_i1_ty],
[IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>, ImmArg<ArgIndex<3>>]>;
def int_get_active_lane_mask:
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index 71b9cc3c7a6716..3ba201c0d572d9 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -1751,8 +1751,7 @@ SDValue VectorLegalizer::ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
SDValue SinkValue = N->getOperand(1);
SDValue EltSize = N->getOperand(2);
- bool IsWriteAfterRead =
- cast<ConstantSDNode>(N->getOperand(3))->getZExtValue() != 0;
+ bool IsWriteAfterRead = N->getConstantOperandVal(3) != 0;
auto VT = N->getValueType(0);
auto PtrVT = SourceValue->getValueType(0);
@@ -1761,14 +1760,15 @@ SDValue VectorLegalizer::ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
Diff = DAG.getNode(ISD::ABS, DL, PtrVT, Diff);
Diff = DAG.getNode(ISD::SDIV, DL, PtrVT, Diff, EltSize);
- SDValue Zero = DAG.getTargetConstant(0, DL, PtrVT);
// If the difference is positive then some elements may alias
auto CmpVT = TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
Diff.getValueType());
+ SDValue Zero = DAG.getTargetConstant(0, DL, PtrVT);
SDValue Cmp = DAG.getSetCC(DL, CmpVT, Diff, Zero,
IsWriteAfterRead ? ISD::SETLE : ISD::SETEQ);
+ // Create the lane mask
EVT SplatTY =
EVT::getVectorVT(*DAG.getContext(), PtrVT, VT.getVectorElementCount());
SDValue DiffSplat = DAG.getSplat(SplatTY, DL, Diff);
@@ -1776,7 +1776,10 @@ SDValue VectorLegalizer::ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
SDValue DiffMask =
DAG.getSetCC(DL, VT, VectorStep, DiffSplat, ISD::CondCode::SETULT);
- // Splat the compare result then OR it with a lane mask
+ // Splat the compare result then OR it with the lane mask
+ auto VTElementTy = VT.getVectorElementType();
+ if (CmpVT.getScalarSizeInBits() < VTElementTy.getScalarSizeInBits())
+ Cmp = DAG.getNode(ISD::ZERO_EXTEND, DL, VTElementTy, Cmp);
SDValue Splat = DAG.getSplat(VT, DL, Cmp);
return DAG.getNode(ISD::OR, DL, VT, DiffMask, Splat);
}
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 87f161a47722a5..21bca367e90117 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -19946,7 +19946,7 @@ static SDValue getPTest(SelectionDAG &DAG, EVT VT, SDValue Pg, SDValue Op,
AArch64CC::CondCode Cond);
static bool isPredicateCCSettingOp(SDValue N) {
- if ((N.getOpcode() == ISD::SETCC ||
+ if (N.getOpcode() == ISD::SETCC ||
(N.getOpcode() == ISD::INTRINSIC_WO_CHAIN &&
(N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilege ||
N.getConstantOperandVal(0) == Intrinsic::aarch64_sve_whilegt ||
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index 9b344f03da077b..f88baeece03563 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -2,7 +2,7 @@
; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE
; RUN: llc -mtriple=aarch64 %s -o - | FileCheck %s --check-prefix=CHECK-NOSVE
-define <16 x i1> @whilewr_8(i64 %a, i64 %b) {
+define <16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_8:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
@@ -53,11 +53,11 @@ define <16 x i1> @whilewr_8(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
ret <16 x i1> %0
}
-define <8 x i1> @whilewr_16(i64 %a, i64 %b) {
+define <8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_16:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilewr p0.b, x0, x1
@@ -95,11 +95,11 @@ define <8 x i1> @whilewr_16(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
ret <8 x i1> %0
}
-define <4 x i1> @whilewr_32(i64 %a, i64 %b) {
+define <4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_32:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilewr p0.h, x0, x1
@@ -129,11 +129,11 @@ define <4 x i1> @whilewr_32(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
ret <4 x i1> %0
}
-define <2 x i1> @whilewr_64(i64 %a, i64 %b) {
+define <2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilewr_64:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilewr p0.s, x0, x1
@@ -159,11 +159,11 @@ define <2 x i1> @whilewr_64(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
ret <2 x i1> %0
}
-define <16 x i1> @whilerw_8(i64 %a, i64 %b) {
+define <16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilerw_8:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
@@ -215,11 +215,11 @@ define <16 x i1> @whilerw_8(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
ret <16 x i1> %0
}
-define <8 x i1> @whilerw_16(i64 %a, i64 %b) {
+define <8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilerw_16:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilerw p0.b, x0, x1
@@ -258,11 +258,11 @@ define <8 x i1> @whilerw_16(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 0)
+ %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
ret <8 x i1> %0
}
-define <4 x i1> @whilerw_32(i64 %a, i64 %b) {
+define <4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilerw_32:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilerw p0.h, x0, x1
@@ -293,11 +293,11 @@ define <4 x i1> @whilerw_32(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
ret <4 x i1> %0
}
-define <2 x i1> @whilerw_64(i64 %a, i64 %b) {
+define <2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-LABEL: whilerw_64:
; CHECK-SVE: // %bb.0: // %entry
; CHECK-SVE-NEXT: whilerw p0.s, x0, x1
@@ -324,6 +324,6 @@ define <2 x i1> @whilerw_64(i64 %a, i64 %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
ret <2 x i1> %0
}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index a7c9c5e3cdd33b..3d0f293b4687a3 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -2,7 +2,7 @@
; RUN: llc -mtriple=aarch64 -mattr=+sve2 %s -o - | FileCheck %s --check-prefix=CHECK-SVE2
; RUN: llc -mtriple=aarch64 -mattr=+sve %s -o - | FileCheck %s --check-prefix=CHECK-SVE
-define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
+define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_8:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilewr p0.b, x0, x1
@@ -60,11 +60,11 @@ define <vscale x 16 x i1> @whilewr_8(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 1)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
ret <vscale x 16 x i1> %0
}
-define <vscale x 8 x i1> @whilewr_16(i64 %a, i64 %b) {
+define <vscale x 8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_16:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilewr p0.h, x0, x1
@@ -98,11 +98,11 @@ define <vscale x 8 x i1> @whilewr_16(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 1)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
ret <vscale x 8 x i1> %0
}
-define <vscale x 4 x i1> @whilewr_32(i64 %a, i64 %b) {
+define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_32:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilewr p0.s, x0, x1
@@ -130,11 +130,11 @@ define <vscale x 4 x i1> @whilewr_32(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 1)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
ret <vscale x 4 x i1> %0
}
-define <vscale x 2 x i1> @whilewr_64(i64 %a, i64 %b) {
+define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilewr_64:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilewr p0.d, x0, x1
@@ -158,11 +158,11 @@ define <vscale x 2 x i1> @whilewr_64(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 1)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
ret <vscale x 2 x i1> %0
}
-define <vscale x 16 x i1> @whilerw_8(i64 %a, i64 %b) {
+define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilerw_8:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilerw p0.b, x0, x1
@@ -223,11 +223,11 @@ define <vscale x 16 x i1> @whilerw_8(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1.i64.i64(i64 %a, i64 %b, i64 1, i1 0)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
ret <vscale x 16 x i1> %0
}
-define <vscale x 8 x i1> @whilerw_16(i64 %a, i64 %b) {
+define <vscale x 8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilerw_16:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilerw p0.h, x0, x1
@@ -262,11 +262,11 @@ define <vscale x 8 x i1> @whilerw_16(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1.i64.i64(i64 %a, i64 %b, i64 2, i1 0)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
ret <vscale x 8 x i1> %0
}
-define <vscale x 4 x i1> @whilerw_32(i64 %a, i64 %b) {
+define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilerw_32:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilerw p0.s, x0, x1
@@ -295,11 +295,11 @@ define <vscale x 4 x i1> @whilerw_32(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1.i64.i64(i64 %a, i64 %b, i64 4, i1 0)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
ret <vscale x 4 x i1> %0
}
-define <vscale x 2 x i1> @whilerw_64(i64 %a, i64 %b) {
+define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE2-LABEL: whilerw_64:
; CHECK-SVE2: // %bb.0: // %entry
; CHECK-SVE2-NEXT: whilerw p0.d, x0, x1
@@ -324,6 +324,6 @@ define <vscale x 2 x i1> @whilerw_64(i64 %a, i64 %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1.i64.i64(i64 %a, i64 %b, i64 8, i1 0)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
ret <vscale x 2 x i1> %0
}
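
(Aside, not part of the patch: to make the generic, non-SVE2 path easier to follow, here is a rough IR-level picture of what the expansion in ExpandEXPERIMENTAL_ALIAS_LANE_MASK computes for a <4 x i1> write-after-read mask with 4-byte elements. This is an illustrative sketch with hypothetical %ptrA/%ptrB; the actual code builds the equivalent SelectionDAG nodes:

  %a.int = ptrtoint ptr %ptrA to i64
  %b.int = ptrtoint ptr %ptrB to i64
  %diff  = sub i64 %b.int, %a.int
  %elts  = sdiv i64 %diff, 4                     ; pointer distance in elements
  %all   = icmp sle i64 %elts, 0                 ; distance <= 0: no write-after-read hazard, enable all lanes
  %d.ins = insertelement <4 x i64> poison, i64 %elts, i64 0
  %d.spl = shufflevector <4 x i64> %d.ins, <4 x i64> poison, <4 x i32> zeroinitializer
  %lt    = icmp ult <4 x i64> <i64 0, i64 1, i64 2, i64 3>, %d.spl
  %a.ins = insertelement <4 x i1> poison, i1 %all, i64 0
  %a.spl = shufflevector <4 x i1> %a.ins, <4 x i1> poison, <4 x i32> zeroinitializer
  %mask  = or <4 x i1> %lt, %a.spl

i.e. a lane is enabled if its index is below the element distance, or if the distance shows no hazard at all.)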
>From 04774073b9e8a158db41a1f5689e4407c5f58e5c Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 15:16:57 +0000
Subject: [PATCH 12/18] Clarify elementSize and writeAfterRead = 0
---
llvm/docs/LangRef.rst | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index a2d649c93ec187..3e9963c936e334 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23657,6 +23657,7 @@ The final two are immediates and the result is a vector with the i1 element type
Semantics:
""""""""""
+``%elementSize`` is the size of the accessed elements in bytes.
The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
In other cases when ``%writeAfterRead`` is true, the
@@ -23681,7 +23682,8 @@ where ``%m`` is a vector (mask) of active/inactive lanes with its elements
indexed by ``i``, and ``%ptrA``, ``%ptrB`` are the two ptr arguments to
``llvm.experimental.get.alias.lane.mask.*`` and ``%elementSize`` is the first
immediate argument. The ``%writeAfterRead`` argument is expected to be true if
-``%ptrB`` is stored to after ``%ptrA`` is read from.
+``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise it is false for
+a read after write.
The above is equivalent to:
::
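
(Aside, not part of the patch: a quick worked example of the clarified semantics, using hypothetical values. If %ptrA is read and %ptrB is written, %ptrB - %ptrA is 6 bytes and the element size is 2 bytes, then %diff = 3 and only the first three of eight lanes are enabled:

  %m = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 2, i1 1)
  ; %m = <i1 1, i1 1, i1 1, i1 0, i1 0, i1 0, i1 0, i1 0>
)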
>From 8b7562d2ba7ad34eb53a0073bf783152db0b70c5 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 15:23:46 +0000
Subject: [PATCH 13/18] Add i=0 to VF-1
---
llvm/docs/LangRef.rst | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index 3e9963c936e334..fe862bc2a07806 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23679,11 +23679,11 @@ equivalent to:
%m[i] = (icmp ult i, %diff) || (%diff == 0)
where ``%m`` is a vector (mask) of active/inactive lanes with its elements
-indexed by ``i``, and ``%ptrA``, ``%ptrB`` are the two ptr arguments to
-``llvm.experimental.get.alias.lane.mask.*`` and ``%elementSize`` is the first
-immediate argument. The ``%writeAfterRead`` argument is expected to be true if
-``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise it is false for
-a read after write.
+indexed by ``i`` (i = 0 to VF - 1), and ``%ptrA``, ``%ptrB`` are the two ptr
+arguments to ``llvm.experimental.get.alias.lane.mask.*`` and ``%elementSize``
+is the first immediate argument. The ``%writeAfterRead`` argument is expected
+to be true if ``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise
+it is false for a read after write.
The above is equivalent to:
::
>From 45fa7d7e8c516d305ac063bd7357798d4d8e4f2b Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 16:08:47 +0000
Subject: [PATCH 14/18] Rename to get.nonalias.lane.mask
---
llvm/docs/LangRef.rst | 28 +++++++++----------
llvm/include/llvm/CodeGen/ISDOpcodes.h | 2 +-
llvm/include/llvm/IR/Intrinsics.td | 4 +--
.../SelectionDAG/LegalizeIntegerTypes.cpp | 16 +++++------
llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h | 5 ++--
.../SelectionDAG/LegalizeVectorOps.cpp | 10 +++----
.../SelectionDAG/SelectionDAGBuilder.cpp | 6 ++--
.../SelectionDAG/SelectionDAGDumper.cpp | 2 +-
llvm/lib/CodeGen/TargetLoweringBase.cpp | 4 +--
.../Target/AArch64/AArch64ISelLowering.cpp | 17 +++++------
llvm/lib/Target/AArch64/AArch64ISelLowering.h | 2 +-
llvm/test/CodeGen/AArch64/alias_mask.ll | 16 +++++------
.../CodeGen/AArch64/alias_mask_scalable.ll | 16 +++++------
13 files changed, 65 insertions(+), 63 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index fe862bc2a07806..a752c56677090a 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23624,9 +23624,9 @@ Examples:
%active.lane.mask = call <4 x i1> @llvm.get.active.lane.mask.v4i1.i64(i64 %elem0, i64 429)
%wide.masked.load = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %3, i32 4, <4 x i1> %active.lane.mask, <4 x i32> poison)
-.. _int_experimental_get_alias_lane_mask:
+.. _int_experimental_get_nonalias_lane_mask:
-'``llvm.experimental.get.alias.lane.mask.*``' Intrinsics
+'``llvm.experimental.get.nonalias.lane.mask.*``' Intrinsics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Syntax:
@@ -23635,16 +23635,16 @@ This is an overloaded intrinsic.
::
- declare <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
- declare <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
+ declare <vscale x 16 x i1> @llvm.experimental.get.nonalias.lane.mask.nxv16i1(ptr %ptrA, ptr %ptrB, i64 immarg %elementSize, i1 immarg %writeAfterRead)
Overview:
"""""""""
-Create a mask representing lanes that do or not overlap between two pointers
+Create a mask enabling lanes that do not overlap between two pointers
across one vector loop iteration.
@@ -23661,7 +23661,7 @@ Semantics:
The intrinsic will return poison if ``%ptrA`` and ``%ptrB`` are within
VF * ``%elementSize`` of each other and ``%ptrA`` + VF * ``%elementSize`` wraps.
In other cases when ``%writeAfterRead`` is true, the
-'``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically
+'``llvm.experimental.get.nonalias.lane.mask.*``' intrinsics are semantically
equivalent to:
::
@@ -23670,7 +23670,7 @@ equivalent to:
%m[i] = (icmp ult i, %diff) || (%diff <= 0)
When the return value is not poison and ``%writeAfterRead`` is false, the
-'``llvm.experimental.get.alias.lane.mask.*``' intrinsics are semantically
+'``llvm.experimental.get.nonalias.lane.mask.*``' intrinsics are semantically
equivalent to:
::
@@ -23680,7 +23680,7 @@ equivalent to:
where ``%m`` is a vector (mask) of active/inactive lanes with its elements
indexed by ``i`` (i = 0 to VF - 1), and ``%ptrA``, ``%ptrB`` are the two ptr
-arguments to ``llvm.experimental.get.alias.lane.mask.*`` and ``%elementSize``
+arguments to ``llvm.experimental.get.nonalias.lane.mask.*`` and ``%elementSize``
is the first immediate argument. The ``%writeAfterRead`` argument is expected
to be true if ``%ptrB`` is stored to after ``%ptrA`` is read from, otherwise
it is false for a read after write.
@@ -23688,7 +23688,7 @@ The above is equivalent to:
::
- %m = @llvm.experimental.get.alias.lane.mask(%ptrA, %ptrB, %elementSize, %writeAfterRead)
+ %m = @llvm.experimental.get.nonalias.lane.mask(%ptrA, %ptrB, %elementSize, %writeAfterRead)
This can, for example, be emitted by the loop vectorizer in which case
``%ptrA`` is a pointer that is read from within the loop, and ``%ptrB`` is a
@@ -23706,10 +23706,10 @@ Examples:
.. code-block:: llvm
- %alias.lane.mask = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)
- %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %alias.lane.mask, <4 x i32> poison)
+ %nonalias.lane.mask = call <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)
+ %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %nonalias.lane.mask, <4 x i32> poison)
[...]
- call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %alias.lane.mask)
+ call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %nonalias.lane.mask)
.. _int_experimental_vp_splice:
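As a hypothetical illustration (not part of the patch) of the loop shape the masked load/store example above guards, consider a write-after-read hazard where the destination may partially overlap the source, so a full-width vector store could clobber elements that later iterations still need to read:

    // Dst may partially overlap Src; blind vectorization is unsafe.
    void copyAddOne(int *Dst, const int *Src, int N) {
      for (int I = 0; I < N; ++I)
        Dst[I] = Src[I] + 1;
    }

With the non-alias lane mask, such a loop can still be vectorized when the pointers may overlap; lanes left inactive by the mask are simply handled by later iterations instead of requiring a scalar fallback.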
diff --git a/llvm/include/llvm/CodeGen/ISDOpcodes.h b/llvm/include/llvm/CodeGen/ISDOpcodes.h
index b717a329378928..22965880064127 100644
--- a/llvm/include/llvm/CodeGen/ISDOpcodes.h
+++ b/llvm/include/llvm/CodeGen/ISDOpcodes.h
@@ -1487,7 +1487,7 @@ enum NodeType {
// The `llvm.experimental.get.alias.lane.mask.*` intrinsics
// Operands: Load pointer, Store pointer, Element size, Write after read
// Output: Mask
- EXPERIMENTAL_ALIAS_LANE_MASK,
+ EXPERIMENTAL_NONALIAS_LANE_MASK,
// llvm.clear_cache intrinsic
// Operands: Input Chain, Start Addres, End Address
diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td
index 6a70b9d966bf0b..4a8b599ab359dc 100644
--- a/llvm/include/llvm/IR/Intrinsics.td
+++ b/llvm/include/llvm/IR/Intrinsics.td
@@ -2363,9 +2363,9 @@ let IntrProperties = [IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<1>>
llvm_i32_ty]>;
}
-def int_experimental_get_alias_lane_mask:
+def int_experimental_get_nonalias_lane_mask:
DefaultAttrsIntrinsic<[llvm_anyvector_ty],
- [llvm_anyptr_ty, LLVMMatchType<1>, llvm_i64_ty, llvm_i1_ty],
+ [llvm_ptr_ty, llvm_ptr_ty, llvm_i64_ty, llvm_i1_ty],
[IntrNoMem, IntrNoSync, IntrWillReturn, ImmArg<ArgIndex<2>>, ImmArg<ArgIndex<3>>]>;
def int_get_active_lane_mask:
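A hedged sketch of how a pass might emit the renamed intrinsic with the pointer-typed signature above via IRBuilder. It assumes the overload is keyed only on the returned mask type, as the .td change suggests, and that Intrinsic::getOrInsertDeclaration is the declaration helper in the tree this applies to; the function name is illustrative.

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Intrinsics.h"
    #include "llvm/IR/Module.h"

    static llvm::Value *emitNonAliasLaneMask(llvm::IRBuilderBase &B,
                                             llvm::Module &M, llvm::Value *PtrA,
                                             llvm::Value *PtrB, uint64_t EltSize,
                                             bool WriteAfterRead,
                                             llvm::VectorType *MaskTy) {
      // The only overloaded type is the returned mask vector type.
      llvm::Function *Decl = llvm::Intrinsic::getOrInsertDeclaration(
          &M, llvm::Intrinsic::experimental_get_nonalias_lane_mask, {MaskTy});
      return B.CreateCall(Decl, {PtrA, PtrB, B.getInt64(EltSize),
                                 B.getInt1(WriteAfterRead)});
    }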
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index 14e1c3ee1eb2bc..808e4daab258bb 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -312,8 +312,8 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
Res = PromoteIntRes_VP_REDUCE(N);
break;
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- Res = PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(N);
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
+ Res = PromoteIntRes_EXPERIMENTAL_NONALIAS_LANE_MASK(N);
break;
case ISD::FREEZE:
@@ -364,10 +364,10 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(SDNode *N,
}
SDValue
-DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
+DAGTypeLegalizer::PromoteIntRes_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N) {
EVT VT = N->getValueType(0);
EVT NewVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
- return DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, SDLoc(N), NewVT,
+ return DAG.getNode(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, SDLoc(N), NewVT,
N->ops());
}
@@ -2088,8 +2088,8 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
case ISD::VECTOR_FIND_LAST_ACTIVE:
Res = PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(N, OpNo);
break;
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- Res = PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(N, OpNo);
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
+ Res = PromoteIntOp_EXPERIMENTAL_NONALIAS_LANE_MASK(N, OpNo);
break;
}
@@ -2840,8 +2840,8 @@ SDValue DAGTypeLegalizer::PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(SDNode *N,
}
SDValue
-DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N,
- unsigned OpNo) {
+DAGTypeLegalizer::PromoteIntOp_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N,
+ unsigned OpNo) {
SmallVector<SDValue, 4> NewOps(N->ops());
NewOps[OpNo] = GetPromotedInteger(N->getOperand(OpNo));
return SDValue(DAG.UpdateNodeOperands(N, NewOps), 0);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index 7959e9cf214091..c41ac15bc511fa 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -379,7 +379,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntRes_IS_FPCLASS(SDNode *N);
SDValue PromoteIntRes_PATCHPOINT(SDNode *N);
SDValue PromoteIntRes_VECTOR_FIND_LAST_ACTIVE(SDNode *N);
- SDValue PromoteIntRes_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N);
+ SDValue PromoteIntRes_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N);
// Integer Operand Promotion.
bool PromoteIntegerOperand(SDNode *N, unsigned OpNo);
@@ -431,7 +431,8 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
SDValue PromoteIntOp_VP_SPLICE(SDNode *N, unsigned OpNo);
SDValue PromoteIntOp_VECTOR_HISTOGRAM(SDNode *N, unsigned OpNo);
SDValue PromoteIntOp_VECTOR_FIND_LAST_ACTIVE(SDNode *N, unsigned OpNo);
- SDValue PromoteIntOp_EXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N, unsigned OpNo);
+ SDValue PromoteIntOp_EXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N,
+ unsigned OpNo);
void SExtOrZExtPromotedOperands(SDValue &LHS, SDValue &RHS);
void PromoteSetCCOperands(SDValue &LHS,SDValue &RHS, ISD::CondCode Code);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index 3ba201c0d572d9..ad0ca0496a7da3 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -138,7 +138,7 @@ class VectorLegalizer {
SDValue ExpandVP_FNEG(SDNode *Node);
SDValue ExpandVP_FABS(SDNode *Node);
SDValue ExpandVP_FCOPYSIGN(SDNode *Node);
- SDValue ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N);
+ SDValue ExpandEXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N);
SDValue ExpandSELECT(SDNode *Node);
std::pair<SDValue, SDValue> ExpandLoad(SDNode *N);
SDValue ExpandStore(SDNode *N);
@@ -468,7 +468,7 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
case ISD::VECTOR_COMPRESS:
case ISD::SCMP:
case ISD::UCMP:
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
break;
case ISD::SMULFIX:
@@ -1235,8 +1235,8 @@ void VectorLegalizer::Expand(SDNode *Node, SmallVectorImpl<SDValue> &Results) {
case ISD::UCMP:
Results.push_back(TLI.expandCMP(Node, DAG));
return;
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- Results.push_back(ExpandEXPERIMENTAL_ALIAS_LANE_MASK(Node));
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
+ Results.push_back(ExpandEXPERIMENTAL_NONALIAS_LANE_MASK(Node));
return;
case ISD::FADD:
@@ -1745,7 +1745,7 @@ SDValue VectorLegalizer::ExpandVP_FCOPYSIGN(SDNode *Node) {
return DAG.getNode(ISD::BITCAST, DL, VT, CopiedSign);
}
-SDValue VectorLegalizer::ExpandEXPERIMENTAL_ALIAS_LANE_MASK(SDNode *N) {
+SDValue VectorLegalizer::ExpandEXPERIMENTAL_NONALIAS_LANE_MASK(SDNode *N) {
SDLoc DL(N);
SDValue SourceValue = N->getOperand(0);
SDValue SinkValue = N->getOperand(1);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index eeb295a8757a9b..04f211b0e65177 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -8276,13 +8276,13 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
visitVectorExtractLastActive(I, Intrinsic);
return;
}
- case Intrinsic::experimental_get_alias_lane_mask: {
+ case Intrinsic::experimental_get_nonalias_lane_mask: {
auto IntrinsicVT = EVT::getEVT(I.getType());
SmallVector<SDValue, 4> Ops;
for (auto &Op : I.operands())
Ops.push_back(getValue(Op));
- SDValue Mask =
- DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, sdl, IntrinsicVT, Ops);
+ SDValue Mask = DAG.getNode(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, sdl,
+ IntrinsicVT, Ops);
setValue(&I, Mask);
}
}
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index 70dff1018390c5..6e413d2407fa31 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -570,7 +570,7 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
case ISD::VECTOR_FIND_LAST_ACTIVE:
return "find_last_active";
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
return "alias_lane_mask";
// Vector Predication
diff --git a/llvm/lib/CodeGen/TargetLoweringBase.cpp b/llvm/lib/CodeGen/TargetLoweringBase.cpp
index db245956d65348..a6dfe66222cffb 100644
--- a/llvm/lib/CodeGen/TargetLoweringBase.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringBase.cpp
@@ -821,8 +821,8 @@ void TargetLoweringBase::initActions() {
// Masked vector extracts default to expand.
setOperationAction(ISD::VECTOR_FIND_LAST_ACTIVE, VT, Expand);
- // Aliasing lanes mask default to expand
- setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Expand);
+ // Non-aliasing lanes mask default to expand
+ setOperationAction(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, VT, Expand);
// FP environment operations default to expand.
setOperationAction(ISD::GET_FPENV, VT, Expand);
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 21bca367e90117..891e1c99dfe740 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -1811,7 +1811,7 @@ AArch64TargetLowering::AArch64TargetLowering(const TargetMachine &TM,
(Subtarget->hasSME() && Subtarget->isStreaming())) {
for (auto VT : {MVT::v2i32, MVT::v4i16, MVT::v8i8, MVT::v16i8, MVT::nxv2i1,
MVT::nxv4i1, MVT::nxv8i1, MVT::nxv16i1}) {
- setOperationAction(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, VT, Custom);
+ setOperationAction(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, VT, Custom);
}
}
@@ -5273,8 +5273,9 @@ SDValue AArch64TargetLowering::LowerFSINCOS(SDValue Op,
static MVT getSVEContainerType(EVT ContentTy);
-SDValue AArch64TargetLowering::LowerALIAS_LANE_MASK(SDValue Op,
- SelectionDAG &DAG) const {
+SDValue
+AArch64TargetLowering::LowerNONALIAS_LANE_MASK(SDValue Op,
+ SelectionDAG &DAG) const {
SDLoc DL(Op);
unsigned IntrinsicID = 0;
uint64_t EltSize = Op.getConstantOperandVal(2);
@@ -7397,8 +7398,8 @@ SDValue AArch64TargetLowering::LowerOperation(SDValue Op,
default:
llvm_unreachable("unimplemented operand");
return SDValue();
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK:
- return LowerALIAS_LANE_MASK(Op, DAG);
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK:
+ return LowerNONALIAS_LANE_MASK(Op, DAG);
case ISD::BITCAST:
return LowerBITCAST(Op, DAG);
case ISD::GlobalAddress:
@@ -27681,7 +27682,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
// CONCAT_VECTORS -- but delegate to common code for result type
// legalisation
return;
- case ISD::EXPERIMENTAL_ALIAS_LANE_MASK: {
+ case ISD::EXPERIMENTAL_NONALIAS_LANE_MASK: {
EVT VT = N->getValueType(0);
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
@@ -27693,7 +27694,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
SDLoc DL(N);
auto V =
- DAG.getNode(ISD::EXPERIMENTAL_ALIAS_LANE_MASK, DL, NewVT, N->ops());
+ DAG.getNode(ISD::EXPERIMENTAL_NONALIAS_LANE_MASK, DL, NewVT, N->ops());
Results.push_back(DAG.getNode(ISD::TRUNCATE, DL, VT, V));
return;
}
@@ -27753,7 +27754,7 @@ void AArch64TargetLowering::ReplaceNodeResults(
return;
}
case Intrinsic::experimental_vector_match:
- case Intrinsic::experimental_get_alias_lane_mask:
+ case Intrinsic::experimental_get_nonalias_lane_mask:
case Intrinsic::get_active_lane_mask: {
if (!VT.isFixedLengthVector() || VT.getVectorElementType() != MVT::i1)
return;
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.h b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
index 17d64e8cfadbb2..98f2841ecf2561 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.h
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.h
@@ -1211,7 +1211,7 @@ class AArch64TargetLowering : public TargetLowering {
SDValue LowerXOR(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerCONCAT_VECTORS(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerFSINCOS(SDValue Op, SelectionDAG &DAG) const;
- SDValue LowerALIAS_LANE_MASK(SDValue Op, SelectionDAG &DAG) const;
+ SDValue LowerNONALIAS_LANE_MASK(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerBITCAST(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerVSCALE(SDValue Op, SelectionDAG &DAG) const;
SDValue LowerTRUNCATE(SDValue Op, SelectionDAG &DAG) const;
diff --git a/llvm/test/CodeGen/AArch64/alias_mask.ll b/llvm/test/CodeGen/AArch64/alias_mask.ll
index f88baeece03563..5ef6b588fe767f 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask.ll
@@ -53,7 +53,7 @@ define <16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
+ %0 = call <16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
ret <16 x i1> %0
}
@@ -95,7 +95,7 @@ define <8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
+ %0 = call <8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
ret <8 x i1> %0
}
@@ -129,7 +129,7 @@ define <4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
+ %0 = call <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
ret <4 x i1> %0
}
@@ -159,7 +159,7 @@ define <2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
+ %0 = call <2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
ret <2 x i1> %0
}
@@ -215,7 +215,7 @@ define <16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.16b, v0.16b, v1.16b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
+ %0 = call <16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
ret <16 x i1> %0
}
@@ -258,7 +258,7 @@ define <8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
+ %0 = call <8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
ret <8 x i1> %0
}
@@ -293,7 +293,7 @@ define <4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
+ %0 = call <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
ret <4 x i1> %0
}
@@ -324,6 +324,6 @@ define <2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-NOSVE-NEXT: orr v0.8b, v0.8b, v1.8b
; CHECK-NOSVE-NEXT: ret
entry:
- %0 = call <2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
+ %0 = call <2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
ret <2 x i1> %0
}
diff --git a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
index 3d0f293b4687a3..6884f14d685b51 100644
--- a/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
+++ b/llvm/test/CodeGen/AArch64/alias_mask_scalable.ll
@@ -60,7 +60,7 @@ define <vscale x 16 x i1> @whilewr_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 1)
ret <vscale x 16 x i1> %0
}
@@ -98,7 +98,7 @@ define <vscale x 8 x i1> @whilewr_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 1)
ret <vscale x 8 x i1> %0
}
@@ -130,7 +130,7 @@ define <vscale x 4 x i1> @whilewr_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 1)
ret <vscale x 4 x i1> %0
}
@@ -158,7 +158,7 @@ define <vscale x 2 x i1> @whilewr_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 1)
ret <vscale x 2 x i1> %0
}
@@ -223,7 +223,7 @@ define <vscale x 16 x i1> @whilerw_8(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: ldr x29, [sp], #16 // 8-byte Folded Reload
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 16 x i1> @llvm.experimental.get.alias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
+ %0 = call <vscale x 16 x i1> @llvm.experimental.get.nonalias.lane.mask.v16i1(ptr %a, ptr %b, i64 1, i1 0)
ret <vscale x 16 x i1> %0
}
@@ -262,7 +262,7 @@ define <vscale x 8 x i1> @whilerw_16(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 8 x i1> @llvm.experimental.get.alias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
+ %0 = call <vscale x 8 x i1> @llvm.experimental.get.nonalias.lane.mask.v8i1(ptr %a, ptr %b, i64 2, i1 0)
ret <vscale x 8 x i1> %0
}
@@ -295,7 +295,7 @@ define <vscale x 4 x i1> @whilerw_32(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 4 x i1> @llvm.experimental.get.alias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
+ %0 = call <vscale x 4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %a, ptr %b, i64 4, i1 0)
ret <vscale x 4 x i1> %0
}
@@ -324,6 +324,6 @@ define <vscale x 2 x i1> @whilerw_64(ptr %a, ptr %b) {
; CHECK-SVE-NEXT: sel p0.b, p0, p0.b, p1.b
; CHECK-SVE-NEXT: ret
entry:
- %0 = call <vscale x 2 x i1> @llvm.experimental.get.alias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
+ %0 = call <vscale x 2 x i1> @llvm.experimental.get.nonalias.lane.mask.v2i1(ptr %a, ptr %b, i64 8, i1 0)
ret <vscale x 2 x i1> %0
}
>From d7013e0f4bd16045b423e2479863bc6442aa29a5 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 16:15:26 +0000
Subject: [PATCH 15/18] Fix pointer types in example
---
llvm/docs/LangRef.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst
index a752c56677090a..1d14600237c9eb 100644
--- a/llvm/docs/LangRef.rst
+++ b/llvm/docs/LangRef.rst
@@ -23707,9 +23707,9 @@ Examples:
.. code-block:: llvm
%nonalias.lane.mask = call <4 x i1> @llvm.experimental.get.nonalias.lane.mask.v4i1(ptr %ptrA, ptr %ptrB, i64 4, i1 1)
- %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(<4 x i32>* %ptrA, i32 4, <4 x i1> %nonalias.lane.mask, <4 x i32> poison)
+ %vecA = call <4 x i32> @llvm.masked.load.v4i32.p0v4i32(ptr %ptrA, i32 4, <4 x i1> %nonalias.lane.mask, <4 x i32> poison)
[...]
- call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, <4 x i32>* %ptrB, i32 4, <4 x i1> %nonalias.lane.mask)
+ call @llvm.masked.store.v4i32.p0v4i32(<4 x i32> %vecA, ptr %ptrB, i32 4, <4 x i1> %nonalias.lane.mask)
.. _int_experimental_vp_splice:
>From de0328a92e1b75ce28963b91c61e2dc21379fa28 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 16:15:35 +0000
Subject: [PATCH 16/18] Remove shouldExpandGetAliasLaneMask
---
llvm/include/llvm/CodeGen/TargetLowering.h | 7 -------
1 file changed, 7 deletions(-)
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index 3fe5b3d108ba10..38ac90f0c081b3 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -468,13 +468,6 @@ class TargetLoweringBase {
return true;
}
- /// Return true if the @llvm.experimental.get.alias.lane.mask intrinsic should
- /// be expanded using generic code in SelectionDAGBuilder.
- virtual bool shouldExpandGetAliasLaneMask(EVT VT, EVT PtrVT,
- unsigned EltSize) const {
- return true;
- }
-
virtual bool shouldExpandGetVectorLength(EVT CountVT, unsigned VF,
bool IsScalable) const {
return true;
>From 1ea522f34787582d36f377fa62de703b2629a228 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 16:25:35 +0000
Subject: [PATCH 17/18] Lower to ISD node rather than intrinsic
---
.../Target/AArch64/AArch64ISelLowering.cpp | 19 +++++--------------
1 file changed, 5 insertions(+), 14 deletions(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index 891e1c99dfe740..defceb68a447d4 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -5277,34 +5277,27 @@ SDValue
AArch64TargetLowering::LowerNONALIAS_LANE_MASK(SDValue Op,
SelectionDAG &DAG) const {
SDLoc DL(Op);
- unsigned IntrinsicID = 0;
uint64_t EltSize = Op.getConstantOperandVal(2);
bool IsWriteAfterRead = Op.getConstantOperandVal(3) == 1;
+ unsigned Opcode =
+ IsWriteAfterRead ? AArch64ISD::WHILEWR : AArch64ISD::WHILERW;
EVT VT = Op.getValueType();
MVT SimpleVT = VT.getSimpleVT();
// Make sure that the promoted mask size and element size match
switch (EltSize) {
case 1:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_b
- : Intrinsic::aarch64_sve_whilerw_b;
assert((SimpleVT == MVT::v16i8 || SimpleVT == MVT::nxv16i1) &&
"Unexpected mask or element size");
break;
case 2:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_h
- : Intrinsic::aarch64_sve_whilerw_h;
assert((SimpleVT == MVT::v8i8 || SimpleVT == MVT::nxv8i1) &&
"Unexpected mask or element size");
break;
case 4:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_s
- : Intrinsic::aarch64_sve_whilerw_s;
assert((SimpleVT == MVT::v4i16 || SimpleVT == MVT::nxv4i1) &&
"Unexpected mask or element size");
break;
case 8:
- IntrinsicID = IsWriteAfterRead ? Intrinsic::aarch64_sve_whilewr_d
- : Intrinsic::aarch64_sve_whilerw_d;
assert((SimpleVT == MVT::v2i32 || SimpleVT == MVT::nxv2i1) &&
"Unexpected mask or element size");
break;
@@ -5312,11 +5305,9 @@ AArch64TargetLowering::LowerNONALIAS_LANE_MASK(SDValue Op,
llvm_unreachable("Unexpected element size for get.alias.lane.mask");
break;
}
- SDValue ID = DAG.getTargetConstant(IntrinsicID, DL, MVT::i64);
if (VT.isScalableVector())
- return DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, VT, ID, Op.getOperand(0),
- Op.getOperand(1));
+ return DAG.getNode(Opcode, DL, VT, Op.getOperand(0), Op.getOperand(1));
// We can use the SVE whilewr/whilerw instruction to lower this
// intrinsic by creating the appropriate sequence of scalable vector
@@ -5326,8 +5317,8 @@ AArch64TargetLowering::LowerNONALIAS_LANE_MASK(SDValue Op,
EVT ContainerVT = getContainerForFixedLengthVector(DAG, VT);
EVT WhileVT = ContainerVT.changeElementType(MVT::i1);
- SDValue Mask = DAG.getNode(ISD::INTRINSIC_WO_CHAIN, DL, WhileVT, ID,
- Op.getOperand(0), Op.getOperand(1));
+ SDValue Mask =
+ DAG.getNode(Opcode, DL, WhileVT, Op.getOperand(0), Op.getOperand(1));
SDValue MaskAsInt = DAG.getNode(ISD::SIGN_EXTEND, DL, ContainerVT, Mask);
return DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, MaskAsInt,
DAG.getVectorIdxConstant(0, DL));
>From 6b89fef88bd1817ee8d3e2b315bb6489de030422 Mon Sep 17 00:00:00 2001
From: Samuel Tebbs <samuel.tebbs at arm.com>
Date: Thu, 30 Jan 2025 16:25:58 +0000
Subject: [PATCH 18/18] Remove get_alias_lane_mask case from aarch64 intrinsic
lowering
---
llvm/lib/Target/AArch64/AArch64ISelLowering.cpp | 1 -
1 file changed, 1 deletion(-)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index defceb68a447d4..0830426df1022d 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -6548,7 +6548,6 @@ SDValue AArch64TargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op,
return DAG.getNode(AArch64ISD::USDOT, dl, Op.getValueType(),
Op.getOperand(1), Op.getOperand(2), Op.getOperand(3));
}
- case Intrinsic::experimental_get_alias_lane_mask:
case Intrinsic::get_active_lane_mask: {
unsigned IntrinsicID = Intrinsic::aarch64_sve_whilelo;
SDValue ID = DAG.getTargetConstant(IntrinsicID, dl, MVT::i64);