[llvm] [LV][EVL] Support call instruction with EVL-vectorization (PR #110412)
Florian Hahn via llvm-commits
llvm-commits at lists.llvm.org
Mon Nov 11 02:37:50 PST 2024
================
@@ -1039,14 +1073,14 @@ InstructionCost VPWidenIntrinsicRecipe::computeCost(ElementCount VF,
   Type *RetTy = ToVectorTy(Ctx.Types.inferScalarType(this), VF);
   SmallVector<Type *> ParamTys;
-  for (unsigned I = 0; I != getNumOperands(); ++I)
+  for (unsigned I = 0; I != NumOperands; ++I)
     ParamTys.push_back(
         ToVectorTy(Ctx.Types.inferScalarType(getOperand(I)), VF));
   // TODO: Rework TTI interface to avoid reliance on underlying IntrinsicInst.
   FastMathFlags FMF = hasFastMathFlags() ? getFastMathFlags() : FastMathFlags();
   IntrinsicCostAttributes CostAttrs(
-      VectorIntrinsicID, RetTy, Arguments, ParamTys, FMF,
+      FID, RetTy, Arguments, ParamTys, FMF,
       dyn_cast_or_null<IntrinsicInst>(getUnderlyingValue()));
   return Ctx.TTI.getIntrinsicInstrCost(CostAttrs, CostKind);
----------------
fhahn wrote:
Is the VPlan-based cost more accurate? If so, can the legacy cost model be fixed?
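
For context on the renamed values in the hunk above: after the EVL transform the recipe presumably carries the vp.* form of the intrinsic, whose trailing mask and EVL operands should not feed the TTI query. A minimal sketch of that reading, using the existing VPIntrinsic helpers rather than the patch's exact code:

  // Sketch only; an assumption about the patch's intent, not its actual code.
  Intrinsic::ID FID = VectorIntrinsicID;
  unsigned NumOperands = getNumOperands();
  if (VPIntrinsic::isVPIntrinsic(VectorIntrinsicID)) {
    // Cost the functional (non-VP) intrinsic so the query matches what the
    // legacy cost model computes for the original call.
    if (std::optional<Intrinsic::ID> FuncID =
            VPIntrinsic::getFunctionalIntrinsicIDForVP(VectorIntrinsicID))
      FID = *FuncID;
    // Assumes both a mask and an EVL operand were appended as the last two
    // operands of the vp.* form; leave them out of ParamTys.
    NumOperands -= 2;
  }

Under that reading the VPlan-based cost should line up with the legacy model's cost for the original call, except for vp.* intrinsics that have no functional counterpart.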
https://github.com/llvm/llvm-project/pull/110412