[llvm] [SLP] Correctly detect minnum/maxnum patterns for select/cmp operations on floats. (PR #98570)
Simon Pilgrim via llvm-commits
llvm-commits at lists.llvm.org
Tue Jul 16 04:41:21 PDT 2024
================
@@ -9636,9 +9636,27 @@ BoUpSLP::getEntryCost(const TreeEntry *E, ArrayRef<Value *> VectorizedVals,
                    ? CmpInst::BAD_FCMP_PREDICATE
                    : CmpInst::BAD_ICMP_PREDICATE;
-      return TTI->getCmpSelInstrCost(E->getOpcode(), OrigScalarTy,
-                                     Builder.getInt1Ty(), CurrentPred, CostKind,
-                                     VI);
+      InstructionCost ScalarCost = TTI->getCmpSelInstrCost(
+          E->getOpcode(), OrigScalarTy, Builder.getInt1Ty(), CurrentPred,
+          CostKind, VI);
+      auto IntrinsicAndUse = canConvertToMinOrMaxIntrinsic(VI);
----------------
RKSimon wrote:
Is this any cleaner if we use a structured binding? (And similarly below in GetVectorCost.)
```
auto [MinMaxID, SelectOnly] = canConvertToMinOrMaxIntrinsic(VI);
```
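
For illustration, a minimal sketch of how the binding could read at this call site, assuming canConvertToMinOrMaxIntrinsic(VI) returns a pair of the matched intrinsic ID and a select-only flag (as the diff above suggests); the follow-up intrinsic cost query shown here is an assumption for the sketch, not necessarily what the PR does:
```
// Hypothetical sketch only: assumes canConvertToMinOrMaxIntrinsic(VI)
// returns std::pair<Intrinsic::ID, bool> (matched intrinsic, select-only flag).
auto [MinMaxID, SelectOnly] = canConvertToMinOrMaxIntrinsic(VI);
if (MinMaxID != Intrinsic::not_intrinsic) {
  // Cost the scalar op as the matched min/max intrinsic rather than as a
  // separate cmp + select pair, keeping whichever estimate is cheaper.
  IntrinsicCostAttributes CostAttrs(MinMaxID, OrigScalarTy,
                                    {OrigScalarTy, OrigScalarTy});
  ScalarCost = std::min(ScalarCost,
                        TTI->getIntrinsicInstrCost(CostAttrs, CostKind));
}
return ScalarCost;
```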
https://github.com/llvm/llvm-project/pull/98570