[llvm] [Intrinsics][AArch64] Add intrinsic to mask off aliasing vector lanes (PR #117007)

Sander de Smalen via llvm-commits llvm-commits at lists.llvm.org
Fri Aug 1 03:19:37 PDT 2025


================
@@ -167,6 +167,11 @@ def AArch64st1q_scatter : SDNode<"AArch64ISD::SST1Q_PRED", SDT_AArch64_SCATTER_V
 // AArch64 SVE/SVE2 - the remaining node definitions
 //
 
+// Alias masks
+def SDT_AArch64Mask : SDTypeProfile<1, 2, [SDTCisVec<0>, SDTCisInt<1>, SDTCisSameAs<2, 1>, SDTCVecEltisVT<0,i1>]>;
+def AArch64whilewr : SDNode<"AArch64ISD::WHILEWR", SDT_AArch64Mask>;
+def AArch64whilerw : SDNode<"AArch64ISD::WHILERW", SDT_AArch64Mask>;
----------------
sdesmalen-arm wrote:

I don't see the need for the AArch64-specific `AArch64ISD::WHILEWR`/`AArch64whilewr` and `AArch64ISD::WHILERW`/`AArch64whilerw` nodes. You should be able to use `ISD::LOOP_DEPENDENCE_WAR_MASK` and `ISD::LOOP_DEPENDENCE_RAW_MASK` directly (and create generic `SDNode` definitions for those, e.g. `def loop_dependence_war_mask : SDNode<...>`).
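
For illustration, a minimal sketch of what such generic definitions might look like, reusing the type profile from the quoted hunk above. The `SDT_LoopDepMask` name and the exact placement (generic nodes of this kind usually live in `llvm/include/llvm/Target/TargetSelectionDAG.td`) are assumptions here, not part of the patch:

```tablegen
// Assumed generic type profile, mirroring SDT_AArch64Mask from the hunk above:
// one result (a vector of i1 lanes) and two integer pointer operands of the
// same type.
def SDT_LoopDepMask : SDTypeProfile<1, 2, [SDTCisVec<0>, SDTCisInt<1>,
                                           SDTCisSameAs<2, 1>,
                                           SDTCVecEltisVT<0, i1>]>;

// Generic SDNodes bound directly to the target-independent ISD opcodes,
// so no AArch64ISD::WHILEWR/WHILERW wrappers are needed.
def loop_dependence_war_mask
    : SDNode<"ISD::LOOP_DEPENDENCE_WAR_MASK", SDT_LoopDepMask>;
def loop_dependence_raw_mask
    : SDNode<"ISD::LOOP_DEPENDENCE_RAW_MASK", SDT_LoopDepMask>;
```

AArch64 patterns could then match `loop_dependence_war_mask`/`loop_dependence_raw_mask` directly when selecting the SVE `whilewr`/`whilerw` instructions.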

https://github.com/llvm/llvm-project/pull/117007
