[llvm] [LoopVectorize] Don't discount instructions scalarized due to tail folding (PR #109289)

John Brawn via llvm-commits llvm-commits at lists.llvm.org
Mon Sep 30 09:24:06 PDT 2024


================
@@ -5501,10 +5501,14 @@ InstructionCost LoopVectorizationCostModel::computePredInstDiscount(
     // Scale the total scalar cost by block probability.
     ScalarCost /= getReciprocalPredBlockProb();
 
-    // Compute the discount. A non-negative discount means the vector version
-    // of the instruction costs more, and scalarizing would be beneficial.
-    Discount += VectorCost - ScalarCost;
-    ScalarCosts[I] = ScalarCost;
+    // Compute the discount, unless this instruction must be scalarized due to
+    // tail folding, as then the vector cost is already the scalar cost. A
+    // non-negative discount means the vector version of the instruction costs
+    // more, and scalarizing would be beneficial.
+    if (!foldTailByMasking() || getWideningDecision(I, VF) != CM_Scalarize) {
----------------
john-brawn-arm wrote:

> 1. I don't see why we should restrict this to tail-folding only. If we've also made the widening decision to scalarise for non-tail-folded loops then surely we'd also not want to calculate the discount?

This is possibly true, but when I tried that there were a lot more test changes as a result, and on briefly looking at them it wasn't immediately obvious whether they were better or worse.

> 2. In theory the discount should really be close to zero if we've made the decision to scalarise. Unless I've misunderstood something, it seems like a more fundamental problem here is why VectorCost is larger than ScalarCost for the scenario you're interested in? Perhaps ScalarCost is too low?

Looking at low_trip_count_store in llvm/test/Transforms/LoopVectorize/AArch64/conditional-branches-cost.ll, the current sequence of events (for a vectorization factor of 4) is:
 * getMemInstScalarizationCost is called on the store, which must be scalarized and predicated because of tail folding by masking. It calculates a cost of 20, which includes the cost of the compare and branch into the predicated block.
 * computePredInstDiscount uses this as the VectorCost, since it uses getInstructionCost, which for scalarized instructions returns the cost value in InstsToScalarize.
 * The calculation of ScalarCost assumes that the predicated block already exists, so it only calculates the cost of moving the instruction into the predicated block and scalarizing it there, which it calculates as 4 (for the 4 copies of the store).
 * This results in a discount of 16, causing 20-16=4 to be used as the cost of the scalarized store.

ScalarCost is too low for the scalarized store because it assumes the predicated block already exists, but the cost of that predicated block is exactly what we need to take into account to avoid pointless tail folding by masking.

> I think it would be helpful to add some cost model tests to this patch that have some debug output showing how the costs for each VF change. What seems to be happening with this change is that we're now reporting higher loop costs for VF > 1, and this is leading to the decision not to vectorise at all.

I'll add these tests.


https://github.com/llvm/llvm-project/pull/109289
