[llvm] [AArch64] Add @llvm.experimental.vector.match (PR #101974)

David Sherwood via llvm-commits llvm-commits at lists.llvm.org
Wed Oct 23 01:22:24 PDT 2024


================
@@ -8137,6 +8137,42 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
              DAG.getNode(ISD::EXTRACT_SUBVECTOR, sdl, ResultVT, Vec, Index));
     return;
   }
+  case Intrinsic::experimental_vector_match: {
+    SDValue Op1 = getValue(I.getOperand(0));
+    SDValue Op2 = getValue(I.getOperand(1));
+    SDValue Mask = getValue(I.getOperand(2));
+    EVT Op1VT = Op1.getValueType();
+    EVT Op2VT = Op2.getValueType();
+    EVT ResVT = Mask.getValueType();
+    unsigned SearchSize = Op2VT.getVectorNumElements();
+
+    LLVMContext &Ctx = *DAG.getContext();
+    const auto &TTI =
+        TLI.getTargetMachine().getTargetTransformInfo(*I.getFunction());
+
+    // If the target has native support for this vector match operation, lower
+    // the intrinsic directly; otherwise, lower it below.
+    if (TTI.hasVectorMatch(cast<VectorType>(Op1VT.getTypeForEVT(Ctx)),
----------------
david-arm wrote:

Thinking about this some more: in order to use this intrinsic from the loop vectoriser you'll need a cost model for it anyway, and the cost model for the generic lowering case will likely be so high that it prevents vectorisation.

However, if you're planning to use this from LoopIdiomVectorize then I agree it's difficult to decide whether to use the match intrinsic. It's up to you whether you use TLI or TTI, but you will still need to add a cost model either way. It's just that if you go down the TTI route we will be adding an interface that will likely only ever be used by LoopIdiomVectorize.

https://github.com/llvm/llvm-project/pull/101974
