[llvm] [VPlan] Compute blend masks from minimum set of edge masks (PR #184838)
via llvm-commits
llvm-commits at lists.llvm.org
Thu Mar 5 10:01:31 PST 2026
llvmbot wrote:
@llvm/pr-subscribers-llvm-transforms
Author: Luke Lau (lukel97)
Given a phi in a CFG like this:
```mermaid
flowchart
a --> b & c
a["`
a:
br i1 %c0, label %b, label %c
`"]
b & c --> d
b["`
b:
%x = ...
br i1 %c1, label %d, label %e
`"]
c["`
c:
%y = ...
br i1 %c2, label %d, label %e
`"]
d["d:
phi = [%x, %b], [%y, %c]"] --> e
b & c --> e
```
We generate a blend like:
```
BLEND = %x/(c0 && c1) %y/(!c0 && c2)
```
This is because we use the edge mask as the blend mask, which in turn is the logical and of the incoming block's in-mask and the branch condition.
However, blend masks are slightly different from block in-masks: we don't need to guarantee dominance, only that each incoming mask is disjoint from the rest. So we don't need to check c1 or c2, and can determine the path taken to the phi from c0 alone:
```
BLEND = %x/c0 %y/!c0
```
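To see why this is sound, here's a minimal standalone sketch (not part of the patch) that enumerates every assignment of c0, c1 and c2: it checks that the original edge masks are disjoint, and that whenever the phi in d is reached at all, c0 alone selects the same incoming value.
```c++
#include <cassert>

int main() {
  // Enumerate all 8 assignments of the branch conditions c0, c1, c2.
  for (int Bits = 0; Bits < 8; ++Bits) {
    bool C0 = Bits & 1, C1 = Bits & 2, C2 = Bits & 4;

    // Original blend masks: the edge masks of b->d and c->d.
    bool MaskX = C0 && C1;  // reached d via a -> b -> d
    bool MaskY = !C0 && C2; // reached d via a -> c -> d

    // The edge masks are disjoint, which is all a blend requires.
    assert(!(MaskX && MaskY));

    // Whenever d is actually reached, c0 alone identifies the path,
    // so the simplified masks select the same incoming value.
    if (MaskX || MaskY) {
      assert(MaskX == C0);
      assert(MaskY == !C0);
    }
  }
  return 0;
}
```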
The idea in this patch is to partition the CFG into subgraphs, where each subgraph uniquely leads to one incoming edge.
We can then collect the incoming edges to each of these subgraphs and use their disjunction as the minimal blend mask for the original incoming edge.
In the example below, there are 3 subgraphs for 3 incoming edges. The edges marked with an X indicate the incoming edges to each subgraph that are or'ed into the resulting mask.
```mermaid
flowchart
a --> b
a --x c & d
b --x e & f
subgraph g unique
c & d --> g
end
g --> i
subgraph e unique
e
end
e --> i
subgraph h unique
f --> h
end
h --> i
```
We work out the subgraphs by traversing upwards from each incoming block and computing the set of "non-unique" nodes that other incoming blocks can also reach; these nodes form the limits of each subgraph.
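Stripped of the VPlan machinery, that computation looks roughly like the sketch below. This is a standalone illustration on a plain predecessor map; `Block`, `Preds` and `computeNonUnique` are hypothetical names, and the real patch walks `VPBlockBase` predecessors via `vp_inverse_depth_first_shallow` instead.
```c++
#include <map>
#include <set>
#include <string>
#include <vector>

using Block = std::string;
// Maps each block to its list of predecessors.
using Preds = std::map<Block, std::vector<Block>>;

// Collect every ancestor reachable from more than one incoming block.
// These "non-unique" nodes bound the subgraph that uniquely leads to
// each incoming edge.
std::set<Block> computeNonUnique(const Preds &P,
                                 const std::vector<Block> &Incoming) {
  std::map<Block, unsigned> Freq;
  std::set<Block> NonUnique;
  for (const Block &In : Incoming) {
    // Depth-first walk upwards through predecessors, starting at In.
    std::set<Block> Visited;
    std::vector<Block> Worklist = {In};
    while (!Worklist.empty()) {
      Block B = Worklist.back();
      Worklist.pop_back();
      if (!Visited.insert(B).second)
        continue;
      // Seen from a second incoming block: no longer unique.
      if (++Freq[B] > 1)
        NonUnique.insert(B);
      if (auto It = P.find(B); It != P.end())
        Worklist.insert(Worklist.end(), It->second.begin(),
                        It->second.end());
    }
  }
  return NonUnique;
}
```
In the first example above, walking up from b visits {b, a} and from c visits {c, a}, so a is the only non-unique node; the minimal mask for the b edge is then just the a->b edge mask, i.e. c0.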
This removes redundant masks on RISC-V across llvm-test-suite and SPEC CPU 2017. E.g. here's a diff on 526.blender_r:
```diff
--- build.rva23u64-O3-a/External/SPEC/CFP2017rate/526.blender_r/CMakeFiles/526.blender_r.dir/Users/luke/Developer/cpu2017/benchspec/CPU/526.blender_r/src/blender/source/blender
/editors/curve/editcurve.s
+++ build.rva23u64-O3-b/External/SPEC/CFP2017rate/526.blender_r/CMakeFiles/526.blender_r.dir/Users/luke/Developer/cpu2017/benchspec/CPU/526.blender_r/src/blender/source/blender
/editors/curve/editcurve.s
@@ -2882,20 +2882,15 @@
.LBB10_10: # %vector.body37
# Parent Loop BB10_4 Depth=1
# => This Inner Loop Header: Depth=2
- vsetvli a0, a5, e16, m2, ta, mu
+ vsetvli a0, a5, e16, m2, ta, ma
addi a2, a1, 24
- vlse16.v v8, (a2), a3
addi a4, a1, 26
- vlse16.v v10, (a4), a3
+ vlse16.v v8, (a4), a3
+ vlse16.v v10, (a2), a3
sub a5, a5, a0
- vand.vi v12, v8, 1
- vmsne.vi v14, v12, 0
- vmsne.vi v12, v10, 0
- vmorn.mm v0, v12, v14
- vand.vi v8, v8, -2
- vor.vi v8, v8, 1, v0.t
- vmseq.vi v0, v10, 0
- sh3add a0, a0, a0
+ sh3add a0, a0, a0
+ vmseq.vi v0, v8, 0
+ vxor.vi v8, v10, 1
vsse16.v v8, (a2), a3, v0.t
sh2add a1, a0, a1
bnez a5, .LBB10_10
```
The main motivation for this, though, is to allow us to do more complex CFG transformations in #172454 without regressions.
---
Patch is 34.64 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/184838.diff
7 Files Affected:
- (modified) llvm/lib/Transforms/Vectorize/VPlanCFG.h (+27-1)
- (modified) llvm/lib/Transforms/Vectorize/VPlanPredicator.cpp (+77-1)
- (modified) llvm/test/Transforms/LoopVectorize/AArch64/blend-costs.ll (+35-16)
- (modified) llvm/test/Transforms/LoopVectorize/AArch64/force-target-instruction-cost.ll (+1-1)
- (modified) llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-complex-mask.ll (+2-2)
- (modified) llvm/test/Transforms/LoopVectorize/VPlan/predicator.ll (+93-12)
- (added) llvm/test/Transforms/LoopVectorize/predicator.ll (+204)
``````````diff
diff --git a/llvm/lib/Transforms/Vectorize/VPlanCFG.h b/llvm/lib/Transforms/Vectorize/VPlanCFG.h
index 963d84675693a..13281e4a9e99f 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanCFG.h
+++ b/llvm/lib/Transforms/Vectorize/VPlanCFG.h
@@ -207,7 +207,7 @@ template <typename BlockTy> class VPBlockShallowTraversalWrapper {
public:
VPBlockShallowTraversalWrapper(BlockTy Entry) : Entry(Entry) {}
- BlockTy getEntry() { return Entry; }
+ BlockTy getEntry() const { return Entry; }
};
template <> struct GraphTraits<VPBlockShallowTraversalWrapper<VPBlockBase *>> {
@@ -246,6 +246,25 @@ struct GraphTraits<VPBlockShallowTraversalWrapper<const VPBlockBase *>> {
}
};
+template <>
+struct GraphTraits<Inverse<VPBlockShallowTraversalWrapper<VPBlockBase *>>> {
+ using NodeRef = VPBlockBase *;
+ using ChildIteratorType = SmallVectorImpl<VPBlockBase *>::const_iterator;
+
+ static NodeRef
+ getEntryNode(Inverse<VPBlockShallowTraversalWrapper<VPBlockBase *>> N) {
+ return N.Graph.getEntry();
+ }
+
+ static inline ChildIteratorType child_begin(NodeRef N) {
+ return N->getPredecessors().begin();
+ }
+
+ static inline ChildIteratorType child_end(NodeRef N) {
+ return N->getPredecessors().end();
+ }
+};
+
/// Returns an iterator range to traverse the graph starting at \p G in
/// depth-first order. The iterator won't traverse through region blocks.
inline iterator_range<
@@ -259,6 +278,13 @@ vp_depth_first_shallow(const VPBlockBase *G) {
return depth_first(VPBlockShallowTraversalWrapper<const VPBlockBase *>(G));
}
+/// Returns an iterator range to traverse the graph **upwards through
+/// predecessors** starting at \p G in depth-first order. The iterator won't
+/// traverse through region blocks.
+inline auto vp_inverse_depth_first_shallow(VPBlockBase *G) {
+ return inverse_depth_first(VPBlockShallowTraversalWrapper<VPBlockBase *>(G));
+}
+
/// Returns an iterator range to traverse the graph starting at \p G in
/// post order. The iterator won't traverse through region blocks.
inline iterator_range<
diff --git a/llvm/lib/Transforms/Vectorize/VPlanPredicator.cpp b/llvm/lib/Transforms/Vectorize/VPlanPredicator.cpp
index f22a33fa8eec3..276f820379497 100644
--- a/llvm/lib/Transforms/Vectorize/VPlanPredicator.cpp
+++ b/llvm/lib/Transforms/Vectorize/VPlanPredicator.cpp
@@ -76,6 +76,11 @@ class VPPredicator {
/// Compute the predicate of \p VPBB.
void createBlockInMask(VPBasicBlock *VPBB);
+  /// Compute the masks for a VPBlendRecipe in \p VPBB from the minimum number
+ /// of edge masks required.
+ DenseMap<const VPBasicBlock *, VPValue *>
+ computeBlendMasks(VPBasicBlock *VPBB);
+
/// Convert phi recipes in \p VPBB to VPBlendRecipes.
void convertPhisToBlends(VPBasicBlock *VPBB);
};
@@ -205,10 +210,81 @@ void VPPredicator::createSwitchEdgeMasks(const VPInstruction *SI) {
setEdgeMask(Src, DefaultDst, DefaultMask);
}
+DenseMap<const VPBasicBlock *, VPValue *>
+VPPredicator::computeBlendMasks(VPBasicBlock *VPBB) {
+ // First compute the set of ancestors which are reachable from multiple
+ // incoming blocks. This is where we can no longer determine the unique
+ // incoming edge.
+ SmallPtrSet<VPBlockBase *, 8> NonUnique;
+ DenseMap<VPBlockBase *, unsigned> Freq;
+ for (VPBlockBase *InVPBB : VPBB->predecessors()) {
+    for (VPBlockBase *Ancestor : vp_inverse_depth_first_shallow(InVPBB)) {
+      Freq[Ancestor]++;
+      if (Freq[Ancestor] > 1)
+        NonUnique.insert(Ancestor);
+ }
+ }
+
+ auto IsNonUnique = [&NonUnique](VPBlockBase *VPBB) {
+ return NonUnique.contains(VPBB);
+ };
+
+ // Then for each incoming block, compute the disjunction of the incoming edges
+ // to its "unique" subgraph.
+ DenseMap<const VPBasicBlock *, VPValue *> Masks;
+ for (VPBlockBase *InVPBBBase : VPBB->predecessors()) {
+ auto *InVPBB = cast<VPBasicBlock>(InVPBBBase);
+
+ // If the incoming block isn't unique, we need to use the incoming edge
+ // mask.
+ if (NonUnique.contains(InVPBB)) {
+ Masks[InVPBB] = getEdgeMask(InVPBB, VPBB);
+ continue;
+ }
+
+ // Traverse upwards and find the edges where the path is no longer unique to
+ // that incoming edge.
+ VPValue *Mask = nullptr;
+ SmallVector<VPBasicBlock *> Worklist = {InVPBB};
+ SmallPtrSet<VPBasicBlock *, 8> Visited;
+ while (!Worklist.empty()) {
+ VPBasicBlock *Unique = Worklist.pop_back_val();
+ if (!Visited.insert(Unique).second)
+ continue;
+
+      // If all the predecessors are non-unique, just use the block mask.
+ if (all_of(Unique->predecessors(), IsNonUnique)) {
+ Mask = Mask ? Builder.createOr(Mask, getBlockInMask(Unique))
+ : getBlockInMask(Unique);
+ continue;
+ }
+
+ for (VPBlockBase *PredBase : Unique->predecessors()) {
+ auto *Pred = cast<VPBasicBlock>(PredBase);
+ if (NonUnique.contains(Pred)) {
+ // We've reached a non-unique node. Stop and add that edge mask.
+ VPValue *Edge = getEdgeMask(Pred, Unique);
+ Mask = Mask ? Builder.createOr(Mask, Edge) : Edge;
+ } else {
+ Worklist.push_back(Pred);
+ }
+ }
+ }
+ Masks[InVPBB] = Mask;
+ }
+
+ return Masks;
+}
+
void VPPredicator::convertPhisToBlends(VPBasicBlock *VPBB) {
SmallVector<VPPhi *> Phis;
for (VPRecipeBase &R : VPBB->phis())
Phis.push_back(cast<VPPhi>(&R));
+
+ DenseMap<const VPBasicBlock *, VPValue *> BlendMasks;
+ if (!Phis.empty())
+ BlendMasks = computeBlendMasks(VPBB);
+
for (VPPhi *PhiR : Phis) {
// The non-header Phi is converted into a Blend recipe below,
// so we don't have to worry about the insertion order and we can just use
@@ -229,7 +305,7 @@ void VPPredicator::convertPhisToBlends(VPBasicBlock *VPBB) {
SmallVector<VPValue *, 2> OperandsWithMask;
for (const auto &[InVPV, InVPBB] : PhiR->incoming_values_and_blocks()) {
OperandsWithMask.push_back(InVPV);
- OperandsWithMask.push_back(getEdgeMask(InVPBB, VPBB));
+ OperandsWithMask.push_back(BlendMasks[InVPBB]);
}
PHINode *IRPhi = cast_or_null<PHINode>(PhiR->getUnderlyingValue());
auto *Blend =
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/blend-costs.ll b/llvm/test/Transforms/LoopVectorize/AArch64/blend-costs.ll
index 886401bff72e3..f64defe3705d8 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/blend-costs.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/blend-costs.ll
@@ -16,106 +16,125 @@ define void @test_blend_feeding_replicated_store_1(i64 %N, ptr noalias %src, ptr
; CHECK-NEXT: [[TMP1:%.*]] = icmp eq i64 [[N_MOD_VF]], 0
; CHECK-NEXT: [[TMP2:%.*]] = select i1 [[TMP1]], i64 16, i64 [[N_MOD_VF]]
; CHECK-NEXT: [[N_VEC:%.*]] = sub i64 [[TMP43]], [[TMP2]]
+; CHECK-NEXT: [[BROADCAST_SPLATINSERT:%.*]] = insertelement <16 x ptr> poison, ptr [[DST]], i64 0
+; CHECK-NEXT: [[BROADCAST_SPLAT:%.*]] = shufflevector <16 x ptr> [[BROADCAST_SPLATINSERT]], <16 x ptr> poison, <16 x i32> zeroinitializer
; CHECK-NEXT: br label %[[VECTOR_BODY:.*]]
; CHECK: [[VECTOR_BODY]]:
; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[PRED_STORE_CONTINUE30:.*]] ]
; CHECK-NEXT: [[TMP4:%.*]] = getelementptr inbounds i32, ptr [[SRC]], i64 [[INDEX]]
; CHECK-NEXT: [[WIDE_LOAD:%.*]] = load <16 x i32>, ptr [[TMP4]], align 4
; CHECK-NEXT: [[TMP5:%.*]] = icmp sge <16 x i32> [[WIDE_LOAD]], zeroinitializer
+; CHECK-NEXT: [[PREDPHI:%.*]] = select <16 x i1> [[TMP5]], <16 x ptr> zeroinitializer, <16 x ptr> [[BROADCAST_SPLAT]]
; CHECK-NEXT: [[TMP21:%.*]] = extractelement <16 x i1> [[TMP5]], i32 0
; CHECK-NEXT: br i1 [[TMP21]], label %[[PRED_STORE_IF:.*]], label %[[PRED_STORE_CONTINUE:.*]]
; CHECK: [[PRED_STORE_IF]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP23:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 0
+; CHECK-NEXT: store i8 0, ptr [[TMP23]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE]]
; CHECK: [[PRED_STORE_CONTINUE]]:
; CHECK-NEXT: [[TMP6:%.*]] = extractelement <16 x i1> [[TMP5]], i32 1
; CHECK-NEXT: br i1 [[TMP6]], label %[[PRED_STORE_IF1:.*]], label %[[PRED_STORE_CONTINUE2:.*]]
; CHECK: [[PRED_STORE_IF1]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP25:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 1
+; CHECK-NEXT: store i8 0, ptr [[TMP25]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE2]]
; CHECK: [[PRED_STORE_CONTINUE2]]:
; CHECK-NEXT: [[TMP7:%.*]] = extractelement <16 x i1> [[TMP5]], i32 2
; CHECK-NEXT: br i1 [[TMP7]], label %[[PRED_STORE_IF3:.*]], label %[[PRED_STORE_CONTINUE4:.*]]
; CHECK: [[PRED_STORE_IF3]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP27:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 2
+; CHECK-NEXT: store i8 0, ptr [[TMP27]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE4]]
; CHECK: [[PRED_STORE_CONTINUE4]]:
; CHECK-NEXT: [[TMP8:%.*]] = extractelement <16 x i1> [[TMP5]], i32 3
; CHECK-NEXT: br i1 [[TMP8]], label %[[PRED_STORE_IF5:.*]], label %[[PRED_STORE_CONTINUE6:.*]]
; CHECK: [[PRED_STORE_IF5]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP29:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 3
+; CHECK-NEXT: store i8 0, ptr [[TMP29]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE6]]
; CHECK: [[PRED_STORE_CONTINUE6]]:
; CHECK-NEXT: [[TMP9:%.*]] = extractelement <16 x i1> [[TMP5]], i32 4
; CHECK-NEXT: br i1 [[TMP9]], label %[[PRED_STORE_IF7:.*]], label %[[PRED_STORE_CONTINUE8:.*]]
; CHECK: [[PRED_STORE_IF7]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP31:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 4
+; CHECK-NEXT: store i8 0, ptr [[TMP31]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE8]]
; CHECK: [[PRED_STORE_CONTINUE8]]:
; CHECK-NEXT: [[TMP10:%.*]] = extractelement <16 x i1> [[TMP5]], i32 5
; CHECK-NEXT: br i1 [[TMP10]], label %[[PRED_STORE_IF9:.*]], label %[[PRED_STORE_CONTINUE10:.*]]
; CHECK: [[PRED_STORE_IF9]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP33:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 5
+; CHECK-NEXT: store i8 0, ptr [[TMP33]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE10]]
; CHECK: [[PRED_STORE_CONTINUE10]]:
; CHECK-NEXT: [[TMP11:%.*]] = extractelement <16 x i1> [[TMP5]], i32 6
; CHECK-NEXT: br i1 [[TMP11]], label %[[PRED_STORE_IF11:.*]], label %[[PRED_STORE_CONTINUE12:.*]]
; CHECK: [[PRED_STORE_IF11]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP35:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 6
+; CHECK-NEXT: store i8 0, ptr [[TMP35]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE12]]
; CHECK: [[PRED_STORE_CONTINUE12]]:
; CHECK-NEXT: [[TMP12:%.*]] = extractelement <16 x i1> [[TMP5]], i32 7
; CHECK-NEXT: br i1 [[TMP12]], label %[[PRED_STORE_IF13:.*]], label %[[PRED_STORE_CONTINUE14:.*]]
; CHECK: [[PRED_STORE_IF13]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP37:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 7
+; CHECK-NEXT: store i8 0, ptr [[TMP37]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE14]]
; CHECK: [[PRED_STORE_CONTINUE14]]:
; CHECK-NEXT: [[TMP13:%.*]] = extractelement <16 x i1> [[TMP5]], i32 8
; CHECK-NEXT: br i1 [[TMP13]], label %[[PRED_STORE_IF15:.*]], label %[[PRED_STORE_CONTINUE16:.*]]
; CHECK: [[PRED_STORE_IF15]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP22:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 8
+; CHECK-NEXT: store i8 0, ptr [[TMP22]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE16]]
; CHECK: [[PRED_STORE_CONTINUE16]]:
; CHECK-NEXT: [[TMP14:%.*]] = extractelement <16 x i1> [[TMP5]], i32 9
; CHECK-NEXT: br i1 [[TMP14]], label %[[PRED_STORE_IF17:.*]], label %[[PRED_STORE_CONTINUE18:.*]]
; CHECK: [[PRED_STORE_IF17]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP24:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 9
+; CHECK-NEXT: store i8 0, ptr [[TMP24]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE18]]
; CHECK: [[PRED_STORE_CONTINUE18]]:
; CHECK-NEXT: [[TMP15:%.*]] = extractelement <16 x i1> [[TMP5]], i32 10
; CHECK-NEXT: br i1 [[TMP15]], label %[[PRED_STORE_IF19:.*]], label %[[PRED_STORE_CONTINUE20:.*]]
; CHECK: [[PRED_STORE_IF19]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP26:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 10
+; CHECK-NEXT: store i8 0, ptr [[TMP26]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE20]]
; CHECK: [[PRED_STORE_CONTINUE20]]:
; CHECK-NEXT: [[TMP16:%.*]] = extractelement <16 x i1> [[TMP5]], i32 11
; CHECK-NEXT: br i1 [[TMP16]], label %[[PRED_STORE_IF21:.*]], label %[[PRED_STORE_CONTINUE22:.*]]
; CHECK: [[PRED_STORE_IF21]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP28:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 11
+; CHECK-NEXT: store i8 0, ptr [[TMP28]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE22]]
; CHECK: [[PRED_STORE_CONTINUE22]]:
; CHECK-NEXT: [[TMP17:%.*]] = extractelement <16 x i1> [[TMP5]], i32 12
; CHECK-NEXT: br i1 [[TMP17]], label %[[PRED_STORE_IF23:.*]], label %[[PRED_STORE_CONTINUE24:.*]]
; CHECK: [[PRED_STORE_IF23]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP30:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 12
+; CHECK-NEXT: store i8 0, ptr [[TMP30]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE24]]
; CHECK: [[PRED_STORE_CONTINUE24]]:
; CHECK-NEXT: [[TMP18:%.*]] = extractelement <16 x i1> [[TMP5]], i32 13
; CHECK-NEXT: br i1 [[TMP18]], label %[[PRED_STORE_IF25:.*]], label %[[PRED_STORE_CONTINUE26:.*]]
; CHECK: [[PRED_STORE_IF25]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP32:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 13
+; CHECK-NEXT: store i8 0, ptr [[TMP32]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE26]]
; CHECK: [[PRED_STORE_CONTINUE26]]:
; CHECK-NEXT: [[TMP19:%.*]] = extractelement <16 x i1> [[TMP5]], i32 14
; CHECK-NEXT: br i1 [[TMP19]], label %[[PRED_STORE_IF27:.*]], label %[[PRED_STORE_CONTINUE28:.*]]
; CHECK: [[PRED_STORE_IF27]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP34:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 14
+; CHECK-NEXT: store i8 0, ptr [[TMP34]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE28]]
; CHECK: [[PRED_STORE_CONTINUE28]]:
; CHECK-NEXT: [[TMP20:%.*]] = extractelement <16 x i1> [[TMP5]], i32 15
; CHECK-NEXT: br i1 [[TMP20]], label %[[PRED_STORE_IF29:.*]], label %[[PRED_STORE_CONTINUE30]]
; CHECK: [[PRED_STORE_IF29]]:
-; CHECK-NEXT: store i8 0, ptr null, align 1
+; CHECK-NEXT: [[TMP36:%.*]] = extractelement <16 x ptr> [[PREDPHI]], i32 15
+; CHECK-NEXT: store i8 0, ptr [[TMP36]], align 1
; CHECK-NEXT: br label %[[PRED_STORE_CONTINUE30]]
; CHECK: [[PRED_STORE_CONTINUE30]]:
; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 16
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/force-target-instruction-cost.ll b/llvm/test/Transforms/LoopVectorize/AArch64/force-target-instruction-cost.ll
index 0a62ac9804524..046d44c12afbe 100644
--- a/llvm/test/Transforms/LoopVectorize/AArch64/force-target-instruction-cost.ll
+++ b/llvm/test/Transforms/LoopVectorize/AArch64/force-target-instruction-cost.ll
@@ -223,7 +223,7 @@ define void @test_exit_branch_cost(ptr %dst, ptr noalias %x.ptr, ptr noalias %y.
; COMMON-NEXT: [[TMP22:%.*]] = select <2 x i1> [[TMP7]], <2 x i1> [[BROADCAST_SPLAT]], <2 x i1> zeroinitializer
; COMMON-NEXT: [[TMP13:%.*]] = select <2 x i1> [[TMP22]], <2 x i1> [[BROADCAST_SPLAT3]], <2 x i1> zeroinitializer
; COMMON-NEXT: [[TMP14:%.*]] = or <2 x i1> [[TMP6]], [[TMP13]]
-; COMMON-NEXT: [[PREDPHI:%.*]] = select <2 x i1> [[TMP13]], <2 x i64> zeroinitializer, <2 x i64> splat (i64 1)
+; COMMON-NEXT: [[PREDPHI:%.*]] = select <2 x i1> [[TMP6]], <2 x i64> splat (i64 1), <2 x i64> zeroinitializer
; COMMON-NEXT: [[TMP15:%.*]] = extractelement <2 x i1> [[TMP14]], i32 0
; COMMON-NEXT: br i1 [[TMP15]], label %[[PRED_STORE_IF10:.*]], label %[[PRED_STORE_CONTINUE11:.*]]
; COMMON: [[PRED_STORE_IF10]]:
diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-complex-mask.ll b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-complex-mask.ll
index 1aa53e1ef95a0..ba11f7fd3b87d 100644
--- a/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-complex-mask.ll
+++ b/llvm/test/Transforms/LoopVectorize/RISCV/tail-folding-complex-mask.ll
@@ -36,7 +36,7 @@ define void @test(i64 %n, ptr noalias %src0, ptr noalias %src1, ptr noalias %src
; IF-EVL-NEXT: [[TMP9:%.*]] = icmp ult <vscale x 4 x i32> [[TMP8]], [[BROADCAST_SPLAT6]]
; IF-EVL-NEXT: [[TMP10:%.*]] = getelementptr i32, ptr [[SRC0]], i64 [[EVL_BASED_IV]]
; IF-EVL-NEXT: [[VP_OP_LOAD:%.*]] = call <vscale x 4 x i32> @llvm.vp.load.nxv4i32.p0(ptr align 4 [[TMP10]], <vscale x 4 x i1> [[BROADCAST_SPLAT]], i32 [[TMP7]])
-; IF-EVL-NEXT: [[PREDPHI:%.*]] = select <vscale x 4 x i1> [[TMP3]], <vscale x 4 x i32> zeroinitializer, <vscale x 4 x i32> [[VP_OP_LOAD]]
+; IF-EVL-NEXT: [[PREDPHI:%.*]] = select i1 [[C1]], <vscale x 4 x i32> [[VP_OP_LOAD]], <vscale x 4 x i32> zeroinitializer
; IF-EVL-NEXT: [[TMP11:%.*]] = getelementptr i32, ptr [[SRC1]], i64 [[EVL_BASED_IV]]
; IF-EVL-NEXT: [[VP_OP_LOAD7:%.*]] = call <vscale x 4 x i32> @llvm.vp.load.nxv4i32.p0(ptr align 4 [[TMP11]], <vscale x 4 x i1> [[TMP4]], i32 [[TMP7]])
; IF-EVL-NEXT: [[TMP12:%.*]] = add <vscale x 4 x i32> [[VP_OP_LOAD7]], [[PREDPHI]]
@@ -93,7 +93,7 @@ define void @test(i64 %n, ptr noalias %src0, ptr noalias %src1, ptr noalias %src
; NO-VP-NEXT: [[INDEX:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[VECTOR_BODY]] ]
; NO-VP-NEXT: [[TMP13:%.*]] = getelementptr i32, ptr [[SRC0]], i64 [[INDEX]]
; NO-VP-NEXT: [[WIDE_MASKED_LOAD:%.*]] = call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0(ptr align 4 [[TMP13]], <vscale x 4 x i1> [[BROADCAST_SPLAT2]], <vscale x 4 x i32> poison)
-; NO-VP-NEXT: [[PREDPHI:%.*]] = select <vscale x 4 x i1> [[TMP7]], <vscale x 4 x i32> zeroinitializer, <vscale x 4 x i32> [[WIDE_MASKED_LOAD]]
+; NO-VP-NEXT: [[PREDPHI:%.*]] = select i1 [[C1]], <vscale x 4 x i32> [[WIDE_MASKED_LOAD]], <vscale x 4 x i32> zeroinitializer
; NO-VP-NEXT: [[TMP14:%.*]] = getelementptr i32, ptr [[SRC1]], i64 [[INDEX]]
; NO-VP-NEXT: [[WIDE_MASKED_LOAD5:%.*]] = call <vscale x 4 x i32> @llvm.masked.load.nxv4i32.p0(ptr align 4 [[TMP14]], <vscale x 4 x i1> [[TMP8]], <vscale x 4 x i32> poison)
; NO-VP-NEXT: [[TMP15:%.*]] = add <vscale x 4 x i32> [[WIDE_MASKED_LOAD5]], [[PREDPHI]]
diff --git a/llvm/test/Transforms/LoopVectorize/VPlan/predicator.ll b/llvm/test/Transforms/LoopVectorize/VPlan/predicator.ll
index ac12dd5f98bfe..56d99fb615f0d 100644
--- a/llvm/test/Transforms/LoopVectorize/VPlan/predicator.ll
+++ b/llvm/test/Transforms/LoopVectorize/VPlan/predicator.ll
@@ -98,7 +98,7 @@ define void @mask_reuse(ptr %a) {
; CHECK-NEXT: bb4:
; CHECK-NEXT: EMIT vp<[[VP8:%[0-9]+]]> = not ir<%c0>
; CHECK-NEXT: EMIT vp<[[VP9:%[0-9]+]]> = or vp<[[VP7]]>, vp<[[VP8]]>
-; CHECK-NEXT: BLEND ir<%phi4> = ir<%add3>/vp<[[VP7]]> ir<%iv>/vp<[[VP8]]>
+; CHECK-NEXT: BLEND ir<%phi4> = ir<%add3>/ir<%c0> ir<%iv>/vp<[[VP8]]>
; CHECK-NEXT: EMIT store ir<%phi4>, ir<%gep>, vp<[[VP9]]>
; CHECK-NEXT: EMIT ir<%iv.next> = add nuw nsw ir<%iv>, ir<1>, vp<[[VP9]]>
; CHECK-NEXT: EMIT ir<%ec> = icmp eq ir<%iv.next>, ir<128>, vp<[[VP9]]>
@@ -189,7 +189,7 @@ define void @optimized_mask(ptr %a) {
; CHECK-NEXT: bb4:
; CHECK-NEXT: EMIT vp<[[VP8:%[0-9]+]]> = logical-and vp<[[VP6]]>, ir<%c3>
; CHECK-NEXT: EMIT vp<[[VP9:%[0-9]+]]> = or vp<[[VP8]]>, vp<[[VP7]]>
-; CHECK-NEXT: BLEND ir<%phi4> = ir<%add3>/vp<[[VP8]]> ir<%add2>/vp<[[VP7]]>
+; CHECK-NEXT: BLEND ir<%phi4> = ir<%add3>/vp<[[VP6]]> ir<%add2>/vp<[[VP7]]>
; CHECK-NEXT: EMIT i...
[truncated]
``````````
https://github.com/llvm/llvm-project/pull/184838