[llvm] [LoopVectorize] Don't scalarize predicated instruction with optsize (PR #129265)

David Sherwood via llvm-commits llvm-commits at lists.llvm.org
Thu Mar 6 04:02:37 PST 2025


================
@@ -986,13 +977,18 @@ class LoopVectorizationCostModel {
                              AssumptionCache *AC,
                              OptimizationRemarkEmitter *ORE, const Function *F,
                              const LoopVectorizeHints *Hints,
-                             InterleavedAccessInfo &IAI)
+                             InterleavedAccessInfo &IAI,
+                             ProfileSummaryInfo *PSI, BlockFrequencyInfo *BFI)
       : ScalarEpilogueStatus(SEL), TheLoop(L), PSE(PSE), LI(LI), Legal(Legal),
         TTI(TTI), TLI(TLI), DB(DB), AC(AC), ORE(ORE), TheFunction(F),
         Hints(Hints), InterleaveInfo(IAI) {
     if (TTI.supportsScalableVectors() || ForceTargetSupportsScalableVectors)
       initializeVScaleForTuning();
     CostKind = F->hasMinSize() ? TTI::TCK_CodeSize : TTI::TCK_RecipThroughput;
+    // Query this against the original loop and save it here because the profile
+    // of the original loop header may change as the transformation happens.
+    OptForSize = llvm::shouldOptimizeForSize(L->getHeader(), PSI, BFI,
----------------
david-arm wrote:

I think the cost model changes look like a useful clean-up on their own since, as you say, shouldOptimizeForSize already checks the function attribute. Could you create a separate NFC patch for that part? Then this patch shrinks to just the test and getMemInstScalarizationCost changes.
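For context (not part of the patch), the shape of the clean-up being suggested is roughly the following sketch: call sites that pair an explicit optsize attribute check with a PGSO query can collapse to a single shouldOptimizeForSize call, assuming (per the comment above) that the helper already folds in the function attribute. The helper names optForSizeBefore/optForSizeAfter are hypothetical and only illustrate the before/after pattern.

```cpp
#include "llvm/Analysis/BlockFrequencyInfo.h"
#include "llvm/Analysis/ProfileSummaryInfo.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"
#include "llvm/Transforms/Utils/SizeOpts.h"

using namespace llvm;

// Hypothetical "before" shape: the optsize attribute is checked explicitly
// in addition to the profile-guided size-optimization query.
static bool optForSizeBefore(const BasicBlock *BB, ProfileSummaryInfo *PSI,
                             BlockFrequencyInfo *BFI) {
  return BB->getParent()->hasOptSize() ||
         shouldOptimizeForSize(BB, PSI, BFI, PGSOQueryType::IRPass);
}

// Hypothetical "after" shape: a single query, relying on
// shouldOptimizeForSize to also account for the function attribute
// (as stated in the review comment above).
static bool optForSizeAfter(const BasicBlock *BB, ProfileSummaryInfo *PSI,
                            BlockFrequencyInfo *BFI) {
  return shouldOptimizeForSize(BB, PSI, BFI, PGSOQueryType::IRPass);
}
```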

https://github.com/llvm/llvm-project/pull/129265