[llvm] [LoopVectorize] Don't discount instructions scalarized due to tail folding (PR #109289)

David Sherwood via llvm-commits llvm-commits at lists.llvm.org
Mon Sep 30 04:06:24 PDT 2024


================
@@ -5501,10 +5501,14 @@ InstructionCost LoopVectorizationCostModel::computePredInstDiscount(
     // Scale the total scalar cost by block probability.
     ScalarCost /= getReciprocalPredBlockProb();
 
-    // Compute the discount. A non-negative discount means the vector version
-    // of the instruction costs more, and scalarizing would be beneficial.
-    Discount += VectorCost - ScalarCost;
-    ScalarCosts[I] = ScalarCost;
+    // Compute the discount, unless this instruction must be scalarized due to
+    // tail folding, as then the vector cost is already the scalar cost. A
+    // non-negative discount means the vector version of the instruction costs
+    // more, and scalarizing would be beneficial.
+    if (!foldTailByMasking() || getWideningDecision(I, VF) != CM_Scalarize) {
----------------
david-arm wrote:

OK, so this patch is saying that if a block needs predication due to tail-folding and we've already decided to scalarise the instruction for the vector VF, then we shouldn't apply a discount. However, it feels like there are two problems with this:

1. I don't see why we should restrict this to tail-folding only. If we've also made the widening decision to scalarise for non-tail-folded loops then surely we'd also not want to calculate the discount?
2. In theory the discount should really be close to zero if we've made the decision to scalarise. Unless I've misunderstood something, the more fundamental question here is why VectorCost is larger than ScalarCost for the scenario you're interested in. Perhaps ScalarCost is too low? (See the simplified sketch after this list.)
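
To make the arithmetic behind point 2 concrete, here is a minimal standalone sketch of the discount computation (plain C++, not the real LLVM cost-model API; the `predInstDiscount` helper and all cost values are hypothetical, and the reciprocal block probability of 2 just mirrors the common default):

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical stand-in for the discount logic in
// LoopVectorizationCostModel::computePredInstDiscount.
int64_t predInstDiscount(int64_t VectorCost, int64_t ScalarCost,
                         int64_t ReciprocalPredBlockProb = 2) {
  // Scale the scalarised cost by the block probability, as in the snippet
  // above: the predicated block only runs on some iterations.
  ScalarCost /= ReciprocalPredBlockProb;
  // A non-negative result means the vector version of the instruction
  // costs more, so scalarising it (and its operand chain) looks beneficial.
  return VectorCost - ScalarCost;
}

int main() {
  // An instruction whose vector form is genuinely expensive: a large
  // discount, so scalarising it wins.
  std::cout << predInstDiscount(/*VectorCost=*/20, /*ScalarCost=*/8) << "\n"; // 16

  // An instruction that must be scalarised for this VF anyway: if
  // VectorCost is really just the (probability-scaled) scalarisation cost,
  // the discount comes out as roughly zero.
  std::cout << predInstDiscount(/*VectorCost=*/4, /*ScalarCost=*/8) << "\n"; // 0
  return 0;
}
```

That second case is why I'd expect a near-zero discount once the widening decision is already CM_Scalarize; if VectorCost instead comes out much larger than ScalarCost, that difference seems worth understanding before gating the discount on tail folding.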

I think it would be helpful to add some cost model tests to this patch that have some debug output showing how the costs for each VF change. What seems to be happening with this change is that we're now reporting higher loop costs for VF > 1, and this is leading to the decision not to vectorise at all.

https://github.com/llvm/llvm-project/pull/109289

