[llvm] [VPlan] Use BlockFrequencyInfo in getPredBlockCostDivisor (PR #158690)

David Sherwood via llvm-commits llvm-commits at lists.llvm.org
Tue Nov 25 02:31:18 PST 2025


================
@@ -12,8 +12,86 @@ define dso_local void @test(ptr %start, ptr %end) #0 {
 ; AVX-NEXT:  entry:
 ; AVX-NEXT:    [[I11_NOT1:%.*]] = icmp eq ptr [[START:%.*]], [[END:%.*]]
 ; AVX-NEXT:    br i1 [[I11_NOT1]], label [[EXIT:%.*]], label [[BB12:%.*]]
+; AVX:       iter.check:
+; AVX-NEXT:    [[END3:%.*]] = ptrtoint ptr [[END]] to i64
+; AVX-NEXT:    [[START4:%.*]] = ptrtoint ptr [[START]] to i64
+; AVX-NEXT:    [[TMP0:%.*]] = add i64 [[END3]], -4
+; AVX-NEXT:    [[TMP1:%.*]] = sub i64 [[TMP0]], [[START4]]
+; AVX-NEXT:    [[TMP2:%.*]] = lshr i64 [[TMP1]], 2
+; AVX-NEXT:    [[TMP3:%.*]] = add nuw nsw i64 [[TMP2]], 1
+; AVX-NEXT:    [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 [[TMP1]], 28
+; AVX-NEXT:    br i1 [[MIN_ITERS_CHECK]], label [[BB12_PREHEADER:%.*]], label [[VECTOR_MAIN_LOOP_ITER_CHECK:%.*]]
+; AVX:       vector.main.loop.iter.check:
+; AVX-NEXT:    [[MIN_ITERS_CHECK5:%.*]] = icmp ult i64 [[TMP1]], 124
+; AVX-NEXT:    br i1 [[MIN_ITERS_CHECK5]], label [[VEC_EPILOG_PH:%.*]], label [[VECTOR_PH:%.*]]
+; AVX:       vector.ph:
+; AVX-NEXT:    [[N_VEC_REMAINING:%.*]] = and i64 [[TMP3]], 24
+; AVX-NEXT:    [[N_VEC:%.*]] = and i64 [[TMP3]], 9223372036854775776
+; AVX-NEXT:    br label [[VECTOR_BODY:%.*]]
+; AVX:       vector.body:
+; AVX-NEXT:    [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
+; AVX-NEXT:    [[TMP4:%.*]] = shl i64 [[INDEX]], 2
+; AVX-NEXT:    [[NEXT_GEP:%.*]] = getelementptr i8, ptr [[START]], i64 [[TMP4]]
+; AVX-NEXT:    [[TMP5:%.*]] = getelementptr i8, ptr [[NEXT_GEP]], i64 32
+; AVX-NEXT:    [[TMP6:%.*]] = getelementptr i8, ptr [[NEXT_GEP]], i64 64
+; AVX-NEXT:    [[TMP7:%.*]] = getelementptr i8, ptr [[NEXT_GEP]], i64 96
+; AVX-NEXT:    [[WIDE_LOAD:%.*]] = load <8 x i32>, ptr [[NEXT_GEP]], align 4
+; AVX-NEXT:    [[WIDE_LOAD6:%.*]] = load <8 x i32>, ptr [[TMP5]], align 4
+; AVX-NEXT:    [[WIDE_LOAD7:%.*]] = load <8 x i32>, ptr [[TMP6]], align 4
+; AVX-NEXT:    [[WIDE_LOAD8:%.*]] = load <8 x i32>, ptr [[TMP7]], align 4
+; AVX-NEXT:    [[TMP8:%.*]] = icmp eq <8 x i32> [[WIDE_LOAD]], splat (i32 -12)
+; AVX-NEXT:    [[TMP9:%.*]] = icmp eq <8 x i32> [[WIDE_LOAD6]], splat (i32 -12)
+; AVX-NEXT:    [[TMP10:%.*]] = icmp eq <8 x i32> [[WIDE_LOAD7]], splat (i32 -12)
+; AVX-NEXT:    [[TMP11:%.*]] = icmp eq <8 x i32> [[WIDE_LOAD8]], splat (i32 -12)
+; AVX-NEXT:    [[TMP12:%.*]] = icmp eq <8 x i32> [[WIDE_LOAD]], splat (i32 13)
+; AVX-NEXT:    [[TMP13:%.*]] = icmp eq <8 x i32> [[WIDE_LOAD6]], splat (i32 13)
+; AVX-NEXT:    [[TMP14:%.*]] = icmp eq <8 x i32> [[WIDE_LOAD7]], splat (i32 13)
+; AVX-NEXT:    [[TMP15:%.*]] = icmp eq <8 x i32> [[WIDE_LOAD8]], splat (i32 13)
+; AVX-NEXT:    [[TMP16:%.*]] = or <8 x i1> [[TMP8]], [[TMP12]]
+; AVX-NEXT:    [[TMP17:%.*]] = or <8 x i1> [[TMP9]], [[TMP13]]
+; AVX-NEXT:    [[TMP18:%.*]] = or <8 x i1> [[TMP10]], [[TMP14]]
+; AVX-NEXT:    [[TMP19:%.*]] = or <8 x i1> [[TMP11]], [[TMP15]]
+; AVX-NEXT:    tail call void @llvm.masked.store.v8i32.p0(<8 x i32> splat (i32 42), ptr align 4 [[NEXT_GEP]], <8 x i1> [[TMP16]])
----------------
david-arm wrote:

I'm very surprised we now start vectorising this loop. It seems like a mistake to me and likely a regression. The chance of any store taking place is tiny, i.e. roughly 1 out of every UINT_MAX iterations - see the code:

```llvm
  %val = load i32, ptr %ptr, align 4
  %c1 = icmp eq i32 %val, 13
  %c2 = icmp eq i32 %val, -12
  %c3 = or i1 %c1, %c2
  br i1 %c3, label %store, label %latch
```

It would be good to find out what's wrong here.

https://github.com/llvm/llvm-project/pull/158690

