[llvm] [LoopVectorize] Don't discount instructions scalarized due to tail folding (PR #109289)

David Sherwood via llvm-commits llvm-commits at lists.llvm.org
Fri Sep 27 08:38:07 PDT 2024


================
@@ -5501,10 +5501,14 @@ InstructionCost LoopVectorizationCostModel::computePredInstDiscount(
     // Scale the total scalar cost by block probability.
     ScalarCost /= getReciprocalPredBlockProb();
 
-    // Compute the discount. A non-negative discount means the vector version
-    // of the instruction costs more, and scalarizing would be beneficial.
-    Discount += VectorCost - ScalarCost;
-    ScalarCosts[I] = ScalarCost;
+    // Compute the discount, unless this instruction must be scalarized due to
+    // tail folding, as then the vector cost is already the scalar cost. A
+    // non-negative discount means the vector version of the instruction costs
+    // more, and scalarizing would be beneficial.
+    if (!foldTailByMasking() || getWideningDecision(I, VF) != CM_Scalarize) {
----------------
david-arm wrote:

I must admit I'm not too familiar with this code, and I'll need some time to understand what effect this change has on the costs, but I'll take a deeper look next week!
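For readers following along, the change in the diff above can be sketched as a standalone model: when tail folding forces an instruction to be scalarized, its "vector" cost is already the scalar cost, so accumulating a discount for it would double-count. The names below (`Inst`, `MustScalarizeForTailFolding`, `computeDiscount`) are illustrative only, not the actual LLVM cost-model API.

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for an instruction's costs in the predicated-block
// discount computation. MustScalarizeForTailFolding models the case where
// getWideningDecision(I, VF) == CM_Scalarize under foldTailByMasking().
struct Inst {
  int VectorCost;
  int ScalarCost;
  bool MustScalarizeForTailFolding;
};

// Accumulate the discount, skipping instructions whose vector cost is
// already their scalar cost due to tail-folding scalarization.
int computeDiscount(const std::vector<Inst> &Insts, bool FoldTailByMasking) {
  int Discount = 0;
  for (const Inst &I : Insts) {
    if (!FoldTailByMasking || !I.MustScalarizeForTailFolding)
      Discount += I.VectorCost - I.ScalarCost;
  }
  return Discount;
}
```

A non-negative discount still means the vector form costs more and scalarizing would be beneficial; the guard merely excludes instructions whose costs are already identical by construction.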

https://github.com/llvm/llvm-project/pull/109289