[llvm-branch-commits] [llvm] [AArch64] SLP can vectorize frem (PR #82488)
Paul Walker via llvm-branch-commits
llvm-branch-commits at lists.llvm.org
Thu Feb 22 03:49:32 PST 2024
================
@@ -869,6 +870,18 @@ TargetTransformInfo::getOperandInfo(const Value *V) {
return {OpInfo, OpProps};
}
+InstructionCost TargetTransformInfo::getVecLibCallCost(
+    const int OpCode, const TargetLibraryInfo *TLI, VectorType *VecTy,
+    TTI::TargetCostKind CostKind) {
+  Type *ScalarTy = VecTy->getScalarType();
+  LibFunc Func;
+  if (TLI->getLibFunc(OpCode, ScalarTy, Func) &&
+      TLI->isFunctionVectorizable(TLI->getName(Func), VecTy->getElementCount()))
----------------
paulwalker-arm wrote:
TTI should be costing known entities and not trying to second guess what a transformation is doing. Whilst TLI provides a route for target specific queries relating to the availability of vector math routines, that doesn’t mean those mapping will be used. The only source of truth is the transformation pass itself and thus it needs to ask the TTI the correct question based on this source of truth (i.e. whether it intends to use TLI mappings[1]). Both TTI and TLI abstract away the target specific queries relating to their design so I don’t understand what is the target specific “magic” you are worried about.
[1] This is very important: if the vector function is not used, then a scalable-vector FREM must have an invalid cost, because there is no code generation support for it.
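For illustration only, a rough sketch of what I mean by the pass asking the question based on its own intent. This is not the patch; the helper name getFRemVectorCost is hypothetical and the exact TTI queries are illustrative, but the decision about whether a TLI mapping will be used lives in the transformation, not in TTI:

#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Hypothetical helper inside the transformation pass (the source of truth).
static InstructionCost getFRemVectorCost(const TargetTransformInfo &TTI,
                                          const TargetLibraryInfo &TLI,
                                          VectorType *VecTy,
                                          TTI::TargetCostKind CostKind) {
  Type *ScalarTy = VecTy->getScalarType();
  LibFunc Func;
  // The pass decides whether it intends to use a TLI vector math mapping.
  bool UsesVecLib =
      TLI.getLibFunc(Instruction::FRem, ScalarTy, Func) &&
      TLI.isFunctionVectorizable(TLI.getName(Func), VecTy->getElementCount());

  if (UsesVecLib)
    // The pass will emit a call to the vector math routine, so ask TTI to
    // cost a call (two vector operands, vector result), not an frem.
    return TTI.getCallInstrCost(/*F=*/nullptr, VecTy, {VecTy, VecTy},
                                CostKind);

  // No mapping will be used: scalable-vector FREM has no code generation
  // support, so the cost must be invalid.
  if (isa<ScalableVectorType>(VecTy))
    return InstructionCost::getInvalid();

  return TTI.getArithmeticInstrCost(Instruction::FRem, VecTy, CostKind);
}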
https://github.com/llvm/llvm-project/pull/82488