[llvm] f0505c3 - [RISCV] Form vredsum from explode_vector + scalar (left) reduce (#67821)

via llvm-commits llvm-commits at lists.llvm.org
Sun Oct 1 17:42:12 PDT 2023


Author: Philip Reames
Date: 2023-10-01T17:42:07-07:00
New Revision: f0505c3dbe50f533e55d21dfcb584fcac44bd80c

URL: https://github.com/llvm/llvm-project/commit/f0505c3dbe50f533e55d21dfcb584fcac44bd80c
DIFF: https://github.com/llvm/llvm-project/commit/f0505c3dbe50f533e55d21dfcb584fcac44bd80c.diff

LOG: [RISCV] Form vredsum from explode_vector + scalar (left) reduce (#67821)

This change adds two related DAG combines which together take a
left-reduce scalar add tree of an explode_vector and incrementally form
a vector reduction over the vector prefix. If the entire vector is
consumed, the result is a reduction over the entire vector.
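
As an illustrative sketch (the types and value names here are invented
for exposition, not taken from the patch), the scalar form being matched
looks like this in IR; the combine itself fires on the equivalent
SelectionDAG nodes:

```llvm
define i32 @left_reduce(<4 x i32> %v) {
  ; explode_vector: extract every element...
  %e0 = extractelement <4 x i32> %v, i32 0
  %e1 = extractelement <4 x i32> %v, i32 1
  %e2 = extractelement <4 x i32> %v, i32 2
  %e3 = extractelement <4 x i32> %v, i32 3
  ; ...followed by a left-reduce scalar add tree
  %a0 = add i32 %e0, %e1
  %a1 = add i32 %a0, %e2
  %a2 = add i32 %a1, %e3
  ret i32 %a2
}

; Since the whole vector is consumed, this is equivalent to:
;   %r = call i32 @llvm.vector.reduce.add.v4i32(<4 x i32> %v)
; which can lower to a single vredsum on RISC-V.
```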

Profitability-wise, this relies on vredsum being cheaper than a pair of
extracts plus a scalar add. Given that vredsum is linear in LMUL, and
the vslidedown required for the extract is *also* linear in LMUL, this
is clearly true at higher index values. At N=2 it's a bit questionable,
but I think the vredsum form is probably the better canonical form
anyways.

Note that this only matches left reduces. That happens to be the
motivating example I have (from spec2017 x264). The approach could be
generalized to handle right reduces without much effort, and to handle
any reduce whose tree starts with adjacent elements if desired. It
fails for a reduce such as (A+C)+(B+D) because we can't find a root to
seed the reduction without scanning the entire associative add
expression. We could maybe explore using masked reduces for the root
node, but that seems of questionable profitability. (As in, worth
questioning - I haven't explored it in any detail.)
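
A concrete non-matching shape, again with invented names: a balanced
tree pairs non-adjacent elements at its leaves, so no single add of
elements 0 and 1 exists to seed the root.

```llvm
; (A+C)+(B+D): neither leaf add combines adjacent elements starting
; at index 0, so the root-forming combine never fires.
%ac  = add i32 %e0, %e2
%bd  = add i32 %e1, %e3
%sum = add i32 %ac, %bd
```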

This is covering up a deficiency in SLP. If SLP encounters the scalar
form of reduce_or(A) + reduce_sum(A), where A is some common
vectorizable tree, it will sometimes fail to revisit one of the
reductions after vectorizing the other. Fixing this in SLP is hard, and
there's no good reason not to handle the easy cases in the backend.

Another option here would be to do this in VectorCombine or the generic
DAG combiner. I chose not to because the profitability of the non-legal-typed
prefix cases is very target-dependent. I think this makes sense as a
starting point, even if we move it elsewhere later.

This is currently restricted to add reductions only, but it obviously
makes sense for any associative reduction operator. Once this is
approved, I plan to extend it in that manner. I'm simply staging the
work in case we decide to go in another direction.
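
The two combines can be summarized as rewrite rules over DAG nodes
(pseudo-notation for exposition, not literal IR):

```llvm
; Root rule (N=2):
;   add (extractelt V, 0), (extractelt V, 1)
;     --> vecreduce.add (extract_subvector V, 0, len=2)
;
; Growth rule (one element at a time):
;   add (vecreduce.add (extract_subvector V, 0, len=N)), (extractelt V, N)
;     --> vecreduce.add (extract_subvector V, 0, len=N+1)
```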

Added: 
    

Modified: 
    llvm/lib/Target/RISCV/RISCVISelLowering.cpp
    llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-explodevector.ll
    llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-formation.ll

Removed: 
    


################################################################################
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 5e3975df1c4425d..fb7ed79c7888421 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -11122,6 +11122,85 @@ void RISCVTargetLowering::ReplaceNodeResults(SDNode *N,
   }
 }
 
+/// Perform two related transforms whose purpose is to incrementally recognize
+/// an explode_vector followed by scalar reduction as a vector reduction node.
+/// This exists to recover from a deficiency in SLP which can't handle
+/// forests with multiple roots sharing common nodes.  In some cases, one
+/// of the trees will be vectorized, and the other will remain (unprofitably)
+/// scalarized.
+static SDValue
+combineBinOpOfExtractToReduceTree(SDNode *N, SelectionDAG &DAG,
+                                  const RISCVSubtarget &Subtarget) {
+
+  // This transform needs to run before all integer types have been legalized
+  // to i64 (so that the vector element type matches the add type), and while
+  // it's still safe to introduce odd-sized vector types.
+  if (DAG.NewNodesMustHaveLegalTypes)
+    return SDValue();
+
+  const SDLoc DL(N);
+  const EVT VT = N->getValueType(0);
+  [[maybe_unused]] const unsigned Opc = N->getOpcode();
+  assert(Opc == ISD::ADD && "extend this to other reduction types");
+  const SDValue LHS = N->getOperand(0);
+  const SDValue RHS = N->getOperand(1);
+
+  if (!LHS.hasOneUse() || !RHS.hasOneUse())
+    return SDValue();
+
+  if (RHS.getOpcode() != ISD::EXTRACT_VECTOR_ELT ||
+      !isa<ConstantSDNode>(RHS.getOperand(1)))
+    return SDValue();
+
+  SDValue SrcVec = RHS.getOperand(0);
+  EVT SrcVecVT = SrcVec.getValueType();
+  assert(SrcVecVT.getVectorElementType() == VT);
+  if (SrcVecVT.isScalableVector())
+    return SDValue();
+
+  if (SrcVecVT.getScalarSizeInBits() > Subtarget.getELen())
+    return SDValue();
+
+  // Match binop (extract_vector_elt V, 0), (extract_vector_elt V, 1) to
+  // reduce_op (extract_subvector [2 x VT] from V).  This will form the
+  // root of our reduction tree. TODO: We could extend this to any two
+  // adjacent constant indices if desired.
+  if (LHS.getOpcode() == ISD::EXTRACT_VECTOR_ELT &&
+      LHS.getOperand(0) == SrcVec && isNullConstant(LHS.getOperand(1)) &&
+      isOneConstant(RHS.getOperand(1))) {
+    EVT ReduceVT = EVT::getVectorVT(*DAG.getContext(), VT, 2);
+    SDValue Vec = DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, ReduceVT, SrcVec,
+                              DAG.getVectorIdxConstant(0, DL));
+    return DAG.getNode(ISD::VECREDUCE_ADD, DL, VT, Vec);
+  }
+
+  // Match (binop (reduce (extract_subvector V, 0),
+  //                      (extract_vector_elt V, sizeof(SubVec))))
+  // into a reduction of one more element from the original vector V.
+  if (LHS.getOpcode() != ISD::VECREDUCE_ADD)
+    return SDValue();
+
+  SDValue ReduceVec = LHS.getOperand(0);
+  if (ReduceVec.getOpcode() == ISD::EXTRACT_SUBVECTOR &&
+      ReduceVec.hasOneUse() && ReduceVec.getOperand(0) == RHS.getOperand(0) &&
+      isNullConstant(ReduceVec.getOperand(1))) {
+    uint64_t Idx = cast<ConstantSDNode>(RHS.getOperand(1))->getLimitedValue();
+    if (ReduceVec.getValueType().getVectorNumElements() == Idx) {
+      // For illegal types (e.g. 3xi32), most will be combined again into a
+      // wider (hopefully legal) type.  If this is a terminal state, we are
+      // relying on type legalization here to produce something reasonable
+      // and this lowering quality could probably be improved. (TODO)
+      EVT ReduceVT = EVT::getVectorVT(*DAG.getContext(), VT, Idx + 1);
+      SDValue Vec = DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, ReduceVT, SrcVec,
+                                DAG.getVectorIdxConstant(0, DL));
+      return DAG.getNode(ISD::VECREDUCE_ADD, DL, VT, Vec);
+    }
+  }
+
+  return SDValue();
+}
+
+
 // Try to fold (<bop> x, (reduction.<bop> vec, start))
 static SDValue combineBinOpToReduce(SDNode *N, SelectionDAG &DAG,
                                     const RISCVSubtarget &Subtarget) {
@@ -11449,6 +11528,9 @@ static SDValue performADDCombine(SDNode *N, SelectionDAG &DAG,
     return V;
   if (SDValue V = combineBinOpToReduce(N, DAG, Subtarget))
     return V;
+  if (SDValue V = combineBinOpOfExtractToReduceTree(N, DAG, Subtarget))
+    return V;
+
   // fold (add (select lhs, rhs, cc, 0, y), x) ->
   //      (select lhs, rhs, cc, x, (add x, y))
   return combineSelectAndUseCommutative(N, DAG, /*AllOnes*/ false, Subtarget);

diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-explodevector.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-explodevector.ll
index d22505eac047886..ab137b1ac818299 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-explodevector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-explodevector.ll
@@ -9,11 +9,11 @@ define i8 @explode_2xi8(<2 x i8> %v) {
 ; CHECK-NEXT:    vmv.x.s a0, v8
 ; CHECK-NEXT:    vslidedown.vi v8, v8, 1
 ; CHECK-NEXT:    vmv.x.s a1, v8
-; CHECK-NEXT:    add a0, a0, a1
+; CHECK-NEXT:    xor a0, a0, a1
 ; CHECK-NEXT:    ret
   %e0 = extractelement <2 x i8> %v, i32 0
   %e1 = extractelement <2 x i8> %v, i32 1
-  %add0 = add i8 %e0, %e1
+  %add0 = xor i8 %e0, %e1
   ret i8 %add0
 }
 
@@ -28,16 +28,16 @@ define i8 @explode_4xi8(<4 x i8> %v) {
 ; CHECK-NEXT:    vmv.x.s a2, v9
 ; CHECK-NEXT:    vslidedown.vi v8, v8, 3
 ; CHECK-NEXT:    vmv.x.s a3, v8
-; CHECK-NEXT:    add a0, a0, a1
-; CHECK-NEXT:    xor a0, a0, a2
-; CHECK-NEXT:    add a0, a0, a3
+; CHECK-NEXT:    xor a0, a0, a1
+; CHECK-NEXT:    add a2, a2, a3
+; CHECK-NEXT:    add a0, a0, a2
 ; CHECK-NEXT:    ret
   %e0 = extractelement <4 x i8> %v, i32 0
   %e1 = extractelement <4 x i8> %v, i32 1
   %e2 = extractelement <4 x i8> %v, i32 2
   %e3 = extractelement <4 x i8> %v, i32 3
-  %add0 = add i8 %e0, %e1
-  %add1 = xor i8 %add0, %e2
+  %add0 = xor i8 %e0, %e1
+  %add1 = add i8 %add0, %e2
   %add2 = add i8 %add1, %e3
   ret i8 %add2
 }
@@ -62,13 +62,13 @@ define i8 @explode_8xi8(<8 x i8> %v) {
 ; CHECK-NEXT:    vmv.x.s a6, v9
 ; CHECK-NEXT:    vslidedown.vi v8, v8, 7
 ; CHECK-NEXT:    vmv.x.s a7, v8
-; CHECK-NEXT:    add a0, a0, a1
-; CHECK-NEXT:    xor a0, a0, a2
-; CHECK-NEXT:    add a3, a3, a4
-; CHECK-NEXT:    add a3, a3, a5
-; CHECK-NEXT:    add a0, a0, a3
-; CHECK-NEXT:    add a6, a6, a7
-; CHECK-NEXT:    add a0, a0, a6
+; CHECK-NEXT:    xor a0, a0, a1
+; CHECK-NEXT:    add a2, a2, a3
+; CHECK-NEXT:    add a0, a0, a2
+; CHECK-NEXT:    add a4, a4, a5
+; CHECK-NEXT:    add a4, a4, a6
+; CHECK-NEXT:    add a0, a0, a4
+; CHECK-NEXT:    add a0, a0, a7
 ; CHECK-NEXT:    ret
   %e0 = extractelement <8 x i8> %v, i32 0
   %e1 = extractelement <8 x i8> %v, i32 1
@@ -78,8 +78,8 @@ define i8 @explode_8xi8(<8 x i8> %v) {
   %e5 = extractelement <8 x i8> %v, i32 5
   %e6 = extractelement <8 x i8> %v, i32 6
   %e7 = extractelement <8 x i8> %v, i32 7
-  %add0 = add i8 %e0, %e1
-  %add1 = xor i8 %add0, %e2
+  %add0 = xor i8 %e0, %e1
+  %add1 = add i8 %add0, %e2
   %add2 = add i8 %add1, %e3
   %add3 = add i8 %add2, %e4
   %add4 = add i8 %add3, %e5
@@ -127,21 +127,21 @@ define i8 @explode_16xi8(<16 x i8> %v) {
 ; RV32-NEXT:    vmv.x.s t6, v9
 ; RV32-NEXT:    vslidedown.vi v8, v8, 15
 ; RV32-NEXT:    vmv.x.s s0, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    xor a0, a0, a2
-; RV32-NEXT:    add a3, a3, a4
-; RV32-NEXT:    add a3, a3, a5
-; RV32-NEXT:    add a0, a0, a3
-; RV32-NEXT:    add a6, a6, a7
-; RV32-NEXT:    add a6, a6, t0
-; RV32-NEXT:    add a6, a6, t1
-; RV32-NEXT:    add a0, a0, a6
-; RV32-NEXT:    add t2, t2, t3
-; RV32-NEXT:    add t2, t2, t4
-; RV32-NEXT:    add t2, t2, t5
-; RV32-NEXT:    add t2, t2, t6
-; RV32-NEXT:    add a0, a0, t2
-; RV32-NEXT:    add a0, a0, s0
+; RV32-NEXT:    xor a0, a0, a1
+; RV32-NEXT:    add a2, a2, a3
+; RV32-NEXT:    add a0, a0, a2
+; RV32-NEXT:    add a4, a4, a5
+; RV32-NEXT:    add a4, a4, a6
+; RV32-NEXT:    add a0, a0, a4
+; RV32-NEXT:    add a7, a7, t0
+; RV32-NEXT:    add a7, a7, t1
+; RV32-NEXT:    add a7, a7, t2
+; RV32-NEXT:    add a0, a0, a7
+; RV32-NEXT:    add t3, t3, t4
+; RV32-NEXT:    add t3, t3, t5
+; RV32-NEXT:    add t3, t3, t6
+; RV32-NEXT:    add t3, t3, s0
+; RV32-NEXT:    add a0, a0, t3
 ; RV32-NEXT:    lw s0, 12(sp) # 4-byte Folded Reload
 ; RV32-NEXT:    addi sp, sp, 16
 ; RV32-NEXT:    ret
@@ -184,21 +184,21 @@ define i8 @explode_16xi8(<16 x i8> %v) {
 ; RV64-NEXT:    vmv.x.s t6, v9
 ; RV64-NEXT:    vslidedown.vi v8, v8, 15
 ; RV64-NEXT:    vmv.x.s s0, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    xor a0, a0, a2
-; RV64-NEXT:    add a3, a3, a4
-; RV64-NEXT:    add a3, a3, a5
-; RV64-NEXT:    add a0, a0, a3
-; RV64-NEXT:    add a6, a6, a7
-; RV64-NEXT:    add a6, a6, t0
-; RV64-NEXT:    add a6, a6, t1
-; RV64-NEXT:    add a0, a0, a6
-; RV64-NEXT:    add t2, t2, t3
-; RV64-NEXT:    add t2, t2, t4
-; RV64-NEXT:    add t2, t2, t5
-; RV64-NEXT:    add t2, t2, t6
-; RV64-NEXT:    add a0, a0, t2
-; RV64-NEXT:    add a0, a0, s0
+; RV64-NEXT:    xor a0, a0, a1
+; RV64-NEXT:    add a2, a2, a3
+; RV64-NEXT:    add a0, a0, a2
+; RV64-NEXT:    add a4, a4, a5
+; RV64-NEXT:    add a4, a4, a6
+; RV64-NEXT:    add a0, a0, a4
+; RV64-NEXT:    add a7, a7, t0
+; RV64-NEXT:    add a7, a7, t1
+; RV64-NEXT:    add a7, a7, t2
+; RV64-NEXT:    add a0, a0, a7
+; RV64-NEXT:    add t3, t3, t4
+; RV64-NEXT:    add t3, t3, t5
+; RV64-NEXT:    add t3, t3, t6
+; RV64-NEXT:    add t3, t3, s0
+; RV64-NEXT:    add a0, a0, t3
 ; RV64-NEXT:    ld s0, 8(sp) # 8-byte Folded Reload
 ; RV64-NEXT:    addi sp, sp, 16
 ; RV64-NEXT:    ret
@@ -218,8 +218,8 @@ define i8 @explode_16xi8(<16 x i8> %v) {
   %e13 = extractelement <16 x i8> %v, i32 13
   %e14 = extractelement <16 x i8> %v, i32 14
   %e15 = extractelement <16 x i8> %v, i32 15
-  %add0 = add i8 %e0, %e1
-  %add1 = xor i8 %add0, %e2
+  %add0 = xor i8 %e0, %e1
+  %add1 = add i8 %add0, %e2
   %add2 = add i8 %add1, %e3
   %add3 = add i8 %add2, %e4
   %add4 = add i8 %add3, %e5
@@ -243,11 +243,11 @@ define i16 @explode_2xi16(<2 x i16> %v) {
 ; CHECK-NEXT:    vmv.x.s a0, v8
 ; CHECK-NEXT:    vslidedown.vi v8, v8, 1
 ; CHECK-NEXT:    vmv.x.s a1, v8
-; CHECK-NEXT:    add a0, a0, a1
+; CHECK-NEXT:    xor a0, a0, a1
 ; CHECK-NEXT:    ret
   %e0 = extractelement <2 x i16> %v, i32 0
   %e1 = extractelement <2 x i16> %v, i32 1
-  %add0 = add i16 %e0, %e1
+  %add0 = xor i16 %e0, %e1
   ret i16 %add0
 }
 
@@ -262,16 +262,16 @@ define i16 @explode_4xi16(<4 x i16> %v) {
 ; CHECK-NEXT:    vmv.x.s a2, v9
 ; CHECK-NEXT:    vslidedown.vi v8, v8, 3
 ; CHECK-NEXT:    vmv.x.s a3, v8
-; CHECK-NEXT:    add a0, a0, a1
-; CHECK-NEXT:    xor a0, a0, a2
-; CHECK-NEXT:    add a0, a0, a3
+; CHECK-NEXT:    xor a0, a0, a1
+; CHECK-NEXT:    add a2, a2, a3
+; CHECK-NEXT:    add a0, a0, a2
 ; CHECK-NEXT:    ret
   %e0 = extractelement <4 x i16> %v, i32 0
   %e1 = extractelement <4 x i16> %v, i32 1
   %e2 = extractelement <4 x i16> %v, i32 2
   %e3 = extractelement <4 x i16> %v, i32 3
-  %add0 = add i16 %e0, %e1
-  %add1 = xor i16 %add0, %e2
+  %add0 = xor i16 %e0, %e1
+  %add1 = add i16 %add0, %e2
   %add2 = add i16 %add1, %e3
   ret i16 %add2
 }
@@ -296,13 +296,13 @@ define i16 @explode_8xi16(<8 x i16> %v) {
 ; CHECK-NEXT:    vmv.x.s a6, v9
 ; CHECK-NEXT:    vslidedown.vi v8, v8, 7
 ; CHECK-NEXT:    vmv.x.s a7, v8
-; CHECK-NEXT:    add a0, a0, a1
-; CHECK-NEXT:    xor a0, a0, a2
-; CHECK-NEXT:    add a3, a3, a4
-; CHECK-NEXT:    add a3, a3, a5
-; CHECK-NEXT:    add a0, a0, a3
-; CHECK-NEXT:    add a6, a6, a7
-; CHECK-NEXT:    add a0, a0, a6
+; CHECK-NEXT:    xor a0, a0, a1
+; CHECK-NEXT:    add a2, a2, a3
+; CHECK-NEXT:    add a0, a0, a2
+; CHECK-NEXT:    add a4, a4, a5
+; CHECK-NEXT:    add a4, a4, a6
+; CHECK-NEXT:    add a0, a0, a4
+; CHECK-NEXT:    add a0, a0, a7
 ; CHECK-NEXT:    ret
   %e0 = extractelement <8 x i16> %v, i32 0
   %e1 = extractelement <8 x i16> %v, i32 1
@@ -312,8 +312,8 @@ define i16 @explode_8xi16(<8 x i16> %v) {
   %e5 = extractelement <8 x i16> %v, i32 5
   %e6 = extractelement <8 x i16> %v, i32 6
   %e7 = extractelement <8 x i16> %v, i32 7
-  %add0 = add i16 %e0, %e1
-  %add1 = xor i16 %add0, %e2
+  %add0 = xor i16 %e0, %e1
+  %add1 = add i16 %add0, %e2
   %add2 = add i16 %add1, %e3
   %add3 = add i16 %add2, %e4
   %add4 = add i16 %add3, %e5
@@ -362,21 +362,21 @@ define i16 @explode_16xi16(<16 x i16> %v) {
 ; RV32-NEXT:    vmv.x.s t6, v10
 ; RV32-NEXT:    vslidedown.vi v8, v8, 15
 ; RV32-NEXT:    vmv.x.s s0, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    xor a0, a0, a2
-; RV32-NEXT:    add a3, a3, a4
-; RV32-NEXT:    add a3, a3, a5
-; RV32-NEXT:    add a0, a0, a3
-; RV32-NEXT:    add a6, a6, a7
-; RV32-NEXT:    add a6, a6, t0
-; RV32-NEXT:    add a6, a6, t1
-; RV32-NEXT:    add a0, a0, a6
-; RV32-NEXT:    add t2, t2, t3
-; RV32-NEXT:    add t2, t2, t4
-; RV32-NEXT:    add t2, t2, t5
-; RV32-NEXT:    add t2, t2, t6
-; RV32-NEXT:    add a0, a0, t2
-; RV32-NEXT:    add a0, a0, s0
+; RV32-NEXT:    xor a0, a0, a1
+; RV32-NEXT:    add a2, a2, a3
+; RV32-NEXT:    add a0, a0, a2
+; RV32-NEXT:    add a4, a4, a5
+; RV32-NEXT:    add a4, a4, a6
+; RV32-NEXT:    add a0, a0, a4
+; RV32-NEXT:    add a7, a7, t0
+; RV32-NEXT:    add a7, a7, t1
+; RV32-NEXT:    add a7, a7, t2
+; RV32-NEXT:    add a0, a0, a7
+; RV32-NEXT:    add t3, t3, t4
+; RV32-NEXT:    add t3, t3, t5
+; RV32-NEXT:    add t3, t3, t6
+; RV32-NEXT:    add t3, t3, s0
+; RV32-NEXT:    add a0, a0, t3
 ; RV32-NEXT:    lw s0, 12(sp) # 4-byte Folded Reload
 ; RV32-NEXT:    addi sp, sp, 16
 ; RV32-NEXT:    ret
@@ -420,21 +420,21 @@ define i16 @explode_16xi16(<16 x i16> %v) {
 ; RV64-NEXT:    vmv.x.s t6, v10
 ; RV64-NEXT:    vslidedown.vi v8, v8, 15
 ; RV64-NEXT:    vmv.x.s s0, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    xor a0, a0, a2
-; RV64-NEXT:    add a3, a3, a4
-; RV64-NEXT:    add a3, a3, a5
-; RV64-NEXT:    add a0, a0, a3
-; RV64-NEXT:    add a6, a6, a7
-; RV64-NEXT:    add a6, a6, t0
-; RV64-NEXT:    add a6, a6, t1
-; RV64-NEXT:    add a0, a0, a6
-; RV64-NEXT:    add t2, t2, t3
-; RV64-NEXT:    add t2, t2, t4
-; RV64-NEXT:    add t2, t2, t5
-; RV64-NEXT:    add t2, t2, t6
-; RV64-NEXT:    add a0, a0, t2
-; RV64-NEXT:    add a0, a0, s0
+; RV64-NEXT:    xor a0, a0, a1
+; RV64-NEXT:    add a2, a2, a3
+; RV64-NEXT:    add a0, a0, a2
+; RV64-NEXT:    add a4, a4, a5
+; RV64-NEXT:    add a4, a4, a6
+; RV64-NEXT:    add a0, a0, a4
+; RV64-NEXT:    add a7, a7, t0
+; RV64-NEXT:    add a7, a7, t1
+; RV64-NEXT:    add a7, a7, t2
+; RV64-NEXT:    add a0, a0, a7
+; RV64-NEXT:    add t3, t3, t4
+; RV64-NEXT:    add t3, t3, t5
+; RV64-NEXT:    add t3, t3, t6
+; RV64-NEXT:    add t3, t3, s0
+; RV64-NEXT:    add a0, a0, t3
 ; RV64-NEXT:    ld s0, 8(sp) # 8-byte Folded Reload
 ; RV64-NEXT:    addi sp, sp, 16
 ; RV64-NEXT:    ret
@@ -454,8 +454,8 @@ define i16 @explode_16xi16(<16 x i16> %v) {
   %e13 = extractelement <16 x i16> %v, i32 13
   %e14 = extractelement <16 x i16> %v, i32 14
   %e15 = extractelement <16 x i16> %v, i32 15
-  %add0 = add i16 %e0, %e1
-  %add1 = xor i16 %add0, %e2
+  %add0 = xor i16 %e0, %e1
+  %add1 = add i16 %add0, %e2
   %add2 = add i16 %add1, %e3
   %add3 = add i16 %add2, %e4
   %add4 = add i16 %add3, %e5
@@ -473,26 +473,17 @@ define i16 @explode_16xi16(<16 x i16> %v) {
 }
 
 define i32 @explode_2xi32(<2 x i32> %v) {
-; RV32-LABEL: explode_2xi32:
-; RV32:       # %bb.0:
-; RV32-NEXT:    vsetivli zero, 1, e32, mf2, ta, ma
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vslidedown.vi v8, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: explode_2xi32:
-; RV64:       # %bb.0:
-; RV64-NEXT:    vsetivli zero, 1, e32, mf2, ta, ma
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vslidedown.vi v8, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v8
-; RV64-NEXT:    addw a0, a0, a1
-; RV64-NEXT:    ret
+; CHECK-LABEL: explode_2xi32:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 1, e32, mf2, ta, ma
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    vslidedown.vi v8, v8, 1
+; CHECK-NEXT:    vmv.x.s a1, v8
+; CHECK-NEXT:    xor a0, a0, a1
+; CHECK-NEXT:    ret
   %e0 = extractelement <2 x i32> %v, i32 0
   %e1 = extractelement <2 x i32> %v, i32 1
-  %add0 = add i32 %e0, %e1
+  %add0 = xor i32 %e0, %e1
   ret i32 %add0
 }
 
@@ -507,9 +498,9 @@ define i32 @explode_4xi32(<4 x i32> %v) {
 ; RV32-NEXT:    vmv.x.s a2, v9
 ; RV32-NEXT:    vslidedown.vi v8, v8, 3
 ; RV32-NEXT:    vmv.x.s a3, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    xor a0, a0, a2
-; RV32-NEXT:    add a0, a0, a3
+; RV32-NEXT:    xor a0, a0, a1
+; RV32-NEXT:    add a2, a2, a3
+; RV32-NEXT:    add a0, a0, a2
 ; RV32-NEXT:    ret
 ;
 ; RV64-LABEL: explode_4xi32:
@@ -522,16 +513,16 @@ define i32 @explode_4xi32(<4 x i32> %v) {
 ; RV64-NEXT:    vmv.x.s a2, v9
 ; RV64-NEXT:    vslidedown.vi v8, v8, 3
 ; RV64-NEXT:    vmv.x.s a3, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    xor a0, a0, a2
-; RV64-NEXT:    addw a0, a0, a3
+; RV64-NEXT:    xor a0, a0, a1
+; RV64-NEXT:    add a2, a2, a3
+; RV64-NEXT:    addw a0, a0, a2
 ; RV64-NEXT:    ret
   %e0 = extractelement <4 x i32> %v, i32 0
   %e1 = extractelement <4 x i32> %v, i32 1
   %e2 = extractelement <4 x i32> %v, i32 2
   %e3 = extractelement <4 x i32> %v, i32 3
-  %add0 = add i32 %e0, %e1
-  %add1 = xor i32 %add0, %e2
+  %add0 = xor i32 %e0, %e1
+  %add1 = add i32 %add0, %e2
   %add2 = add i32 %add1, %e3
   ret i32 %add2
 }
@@ -557,13 +548,13 @@ define i32 @explode_8xi32(<8 x i32> %v) {
 ; RV32-NEXT:    vmv.x.s a6, v10
 ; RV32-NEXT:    vslidedown.vi v8, v8, 7
 ; RV32-NEXT:    vmv.x.s a7, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    xor a0, a0, a2
-; RV32-NEXT:    add a3, a3, a4
-; RV32-NEXT:    add a3, a3, a5
-; RV32-NEXT:    add a0, a0, a3
-; RV32-NEXT:    add a6, a6, a7
-; RV32-NEXT:    add a0, a0, a6
+; RV32-NEXT:    xor a0, a0, a1
+; RV32-NEXT:    add a2, a2, a3
+; RV32-NEXT:    add a0, a0, a2
+; RV32-NEXT:    add a4, a4, a5
+; RV32-NEXT:    add a4, a4, a6
+; RV32-NEXT:    add a0, a0, a4
+; RV32-NEXT:    add a0, a0, a7
 ; RV32-NEXT:    ret
 ;
 ; RV64-LABEL: explode_8xi32:
@@ -585,13 +576,13 @@ define i32 @explode_8xi32(<8 x i32> %v) {
 ; RV64-NEXT:    vmv.x.s a6, v10
 ; RV64-NEXT:    vslidedown.vi v8, v8, 7
 ; RV64-NEXT:    vmv.x.s a7, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    xor a0, a0, a2
-; RV64-NEXT:    add a3, a3, a4
-; RV64-NEXT:    add a3, a3, a5
-; RV64-NEXT:    add a0, a0, a3
-; RV64-NEXT:    add a6, a6, a7
-; RV64-NEXT:    addw a0, a0, a6
+; RV64-NEXT:    xor a0, a0, a1
+; RV64-NEXT:    add a2, a2, a3
+; RV64-NEXT:    add a0, a0, a2
+; RV64-NEXT:    add a4, a4, a5
+; RV64-NEXT:    add a4, a4, a6
+; RV64-NEXT:    add a0, a0, a4
+; RV64-NEXT:    addw a0, a0, a7
 ; RV64-NEXT:    ret
   %e0 = extractelement <8 x i32> %v, i32 0
   %e1 = extractelement <8 x i32> %v, i32 1
@@ -601,8 +592,8 @@ define i32 @explode_8xi32(<8 x i32> %v) {
   %e5 = extractelement <8 x i32> %v, i32 5
   %e6 = extractelement <8 x i32> %v, i32 6
   %e7 = extractelement <8 x i32> %v, i32 7
-  %add0 = add i32 %e0, %e1
-  %add1 = xor i32 %add0, %e2
+  %add0 = xor i32 %e0, %e1
+  %add1 = add i32 %add0, %e2
   %add2 = add i32 %add1, %e3
   %add3 = add i32 %add2, %e4
   %add4 = add i32 %add3, %e5
@@ -653,14 +644,14 @@ define i32 @explode_16xi32(<16 x i32> %v) {
 ; RV32-NEXT:    lw t5, 52(sp)
 ; RV32-NEXT:    lw t6, 56(sp)
 ; RV32-NEXT:    lw s2, 60(sp)
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    xor a0, a0, a2
-; RV32-NEXT:    add a3, a3, a4
-; RV32-NEXT:    add a3, a3, a5
-; RV32-NEXT:    add a0, a0, a3
-; RV32-NEXT:    add a6, a6, a7
-; RV32-NEXT:    add a6, a6, t0
-; RV32-NEXT:    add a0, a0, a6
+; RV32-NEXT:    xor a0, a0, a1
+; RV32-NEXT:    add a2, a2, a3
+; RV32-NEXT:    add a0, a0, a2
+; RV32-NEXT:    add a4, a4, a5
+; RV32-NEXT:    add a4, a4, a6
+; RV32-NEXT:    add a0, a0, a4
+; RV32-NEXT:    add a7, a7, t0
+; RV32-NEXT:    add a0, a0, a7
 ; RV32-NEXT:    add t1, t1, t2
 ; RV32-NEXT:    add t1, t1, t3
 ; RV32-NEXT:    add a0, a0, t1
@@ -716,14 +707,14 @@ define i32 @explode_16xi32(<16 x i32> %v) {
 ; RV64-NEXT:    lw t5, 52(sp)
 ; RV64-NEXT:    lw t6, 56(sp)
 ; RV64-NEXT:    lw s2, 60(sp)
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    xor a0, a0, a2
-; RV64-NEXT:    add a3, a3, a4
-; RV64-NEXT:    add a3, a3, a5
-; RV64-NEXT:    add a0, a0, a3
-; RV64-NEXT:    add a6, a6, a7
-; RV64-NEXT:    add a6, a6, t0
-; RV64-NEXT:    add a0, a0, a6
+; RV64-NEXT:    xor a0, a0, a1
+; RV64-NEXT:    add a2, a2, a3
+; RV64-NEXT:    add a0, a0, a2
+; RV64-NEXT:    add a4, a4, a5
+; RV64-NEXT:    add a4, a4, a6
+; RV64-NEXT:    add a0, a0, a4
+; RV64-NEXT:    add a7, a7, t0
+; RV64-NEXT:    add a0, a0, a7
 ; RV64-NEXT:    add t1, t1, t2
 ; RV64-NEXT:    add t1, t1, t3
 ; RV64-NEXT:    add a0, a0, t1
@@ -753,8 +744,8 @@ define i32 @explode_16xi32(<16 x i32> %v) {
   %e13 = extractelement <16 x i32> %v, i32 13
   %e14 = extractelement <16 x i32> %v, i32 14
   %e15 = extractelement <16 x i32> %v, i32 15
-  %add0 = add i32 %e0, %e1
-  %add1 = xor i32 %add0, %e2
+  %add0 = xor i32 %e0, %e1
+  %add1 = add i32 %add0, %e2
   %add2 = add i32 %add1, %e3
   %add3 = add i32 %add2, %e4
   %add4 = add i32 %add3, %e5
@@ -783,10 +774,8 @@ define i64 @explode_2xi64(<2 x i64> %v) {
 ; RV32-NEXT:    vsrl.vx v9, v8, a0
 ; RV32-NEXT:    vmv.x.s a0, v9
 ; RV32-NEXT:    vmv.x.s a3, v8
-; RV32-NEXT:    add a1, a1, a0
-; RV32-NEXT:    add a0, a2, a3
-; RV32-NEXT:    sltu a2, a0, a2
-; RV32-NEXT:    add a1, a1, a2
+; RV32-NEXT:    xor a1, a1, a0
+; RV32-NEXT:    xor a0, a2, a3
 ; RV32-NEXT:    ret
 ;
 ; RV64-LABEL: explode_2xi64:
@@ -795,11 +784,11 @@ define i64 @explode_2xi64(<2 x i64> %v) {
 ; RV64-NEXT:    vmv.x.s a0, v8
 ; RV64-NEXT:    vslidedown.vi v8, v8, 1
 ; RV64-NEXT:    vmv.x.s a1, v8
-; RV64-NEXT:    add a0, a0, a1
+; RV64-NEXT:    xor a0, a0, a1
 ; RV64-NEXT:    ret
   %e0 = extractelement <2 x i64> %v, i32 0
   %e1 = extractelement <2 x i64> %v, i32 1
-  %add0 = add i64 %e0, %e1
+  %add0 = xor i64 %e0, %e1
   ret i64 %add0
 }
 
@@ -821,17 +810,17 @@ define i64 @explode_4xi64(<4 x i64> %v) {
 ; RV32-NEXT:    vmv.x.s a6, v10
 ; RV32-NEXT:    vslidedown.vi v8, v8, 3
 ; RV32-NEXT:    vsrl.vx v10, v8, a0
-; RV32-NEXT:    vmv.x.s a7, v10
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    add a1, a1, a3
-; RV32-NEXT:    add a4, a2, a4
-; RV32-NEXT:    sltu a2, a4, a2
+; RV32-NEXT:    vmv.x.s a0, v10
+; RV32-NEXT:    vmv.x.s a7, v8
+; RV32-NEXT:    xor a1, a1, a3
+; RV32-NEXT:    xor a2, a2, a4
+; RV32-NEXT:    add a6, a2, a6
+; RV32-NEXT:    sltu a2, a6, a2
+; RV32-NEXT:    add a1, a1, a5
 ; RV32-NEXT:    add a1, a1, a2
-; RV32-NEXT:    xor a1, a1, a5
-; RV32-NEXT:    xor a2, a4, a6
-; RV32-NEXT:    add a0, a2, a0
-; RV32-NEXT:    sltu a2, a0, a2
-; RV32-NEXT:    add a1, a1, a7
+; RV32-NEXT:    add a1, a1, a0
+; RV32-NEXT:    add a0, a6, a7
+; RV32-NEXT:    sltu a2, a0, a6
 ; RV32-NEXT:    add a1, a1, a2
 ; RV32-NEXT:    ret
 ;
@@ -846,16 +835,16 @@ define i64 @explode_4xi64(<4 x i64> %v) {
 ; RV64-NEXT:    vmv.x.s a2, v10
 ; RV64-NEXT:    vslidedown.vi v8, v8, 3
 ; RV64-NEXT:    vmv.x.s a3, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    xor a0, a0, a2
-; RV64-NEXT:    add a0, a0, a3
+; RV64-NEXT:    xor a0, a0, a1
+; RV64-NEXT:    add a2, a2, a3
+; RV64-NEXT:    add a0, a0, a2
 ; RV64-NEXT:    ret
   %e0 = extractelement <4 x i64> %v, i32 0
   %e1 = extractelement <4 x i64> %v, i32 1
   %e2 = extractelement <4 x i64> %v, i32 2
   %e3 = extractelement <4 x i64> %v, i32 3
-  %add0 = add i64 %e0, %e1
-  %add1 = xor i64 %add0, %e2
+  %add0 = xor i64 %e0, %e1
+  %add1 = add i64 %add0, %e2
   %add2 = add i64 %add1, %e3
   ret i64 %add2
 }
@@ -901,15 +890,15 @@ define i64 @explode_8xi64(<8 x i64> %v) {
 ; RV32-NEXT:    vsrl.vx v12, v8, a0
 ; RV32-NEXT:    vmv.x.s a0, v12
 ; RV32-NEXT:    vmv.x.s s0, v8
-; RV32-NEXT:    add a1, a1, a3
-; RV32-NEXT:    add a4, a2, a4
-; RV32-NEXT:    sltu a2, a4, a2
+; RV32-NEXT:    xor a1, a1, a3
+; RV32-NEXT:    xor a2, a2, a4
+; RV32-NEXT:    add a6, a2, a6
+; RV32-NEXT:    sltu a2, a6, a2
+; RV32-NEXT:    add a1, a1, a5
 ; RV32-NEXT:    add a1, a1, a2
-; RV32-NEXT:    xor a1, a1, a5
-; RV32-NEXT:    xor a2, a4, a6
-; RV32-NEXT:    add t0, a2, t0
-; RV32-NEXT:    sltu a2, t0, a2
 ; RV32-NEXT:    add a1, a1, a7
+; RV32-NEXT:    add t0, a6, t0
+; RV32-NEXT:    sltu a2, t0, a6
 ; RV32-NEXT:    add a2, a2, t1
 ; RV32-NEXT:    add a1, a1, a2
 ; RV32-NEXT:    add t2, t0, t2
@@ -958,13 +947,13 @@ define i64 @explode_8xi64(<8 x i64> %v) {
 ; RV64-NEXT:    ld a5, 40(sp)
 ; RV64-NEXT:    ld a6, 48(sp)
 ; RV64-NEXT:    ld a7, 56(sp)
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    xor a0, a0, a2
-; RV64-NEXT:    add a0, a0, a3
-; RV64-NEXT:    add a4, a4, a5
+; RV64-NEXT:    xor a0, a0, a1
+; RV64-NEXT:    add a2, a2, a3
+; RV64-NEXT:    add a0, a0, a2
 ; RV64-NEXT:    add a0, a0, a4
-; RV64-NEXT:    add a6, a6, a7
-; RV64-NEXT:    add a0, a0, a6
+; RV64-NEXT:    add a5, a5, a6
+; RV64-NEXT:    add a0, a0, a5
+; RV64-NEXT:    add a0, a0, a7
 ; RV64-NEXT:    addi sp, s0, -128
 ; RV64-NEXT:    ld ra, 120(sp) # 8-byte Folded Reload
 ; RV64-NEXT:    ld s0, 112(sp) # 8-byte Folded Reload
@@ -978,8 +967,8 @@ define i64 @explode_8xi64(<8 x i64> %v) {
   %e5 = extractelement <8 x i64> %v, i32 5
   %e6 = extractelement <8 x i64> %v, i32 6
   %e7 = extractelement <8 x i64> %v, i32 7
-  %add0 = add i64 %e0, %e1
-  %add1 = xor i64 %add0, %e2
+  %add0 = xor i64 %e0, %e1
+  %add1 = add i64 %add0, %e2
   %add2 = add i64 %add1, %e3
   %add3 = add i64 %add2, %e4
   %add4 = add i64 %add3, %e5
@@ -1019,126 +1008,126 @@ define i64 @explode_16xi64(<16 x i64> %v) {
 ; RV32-NEXT:    .cfi_offset s9, -44
 ; RV32-NEXT:    .cfi_offset s10, -48
 ; RV32-NEXT:    .cfi_offset s11, -52
-; RV32-NEXT:    li a1, 32
+; RV32-NEXT:    li a0, 32
 ; RV32-NEXT:    vsetivli zero, 1, e64, m8, ta, ma
-; RV32-NEXT:    vsrl.vx v16, v8, a1
-; RV32-NEXT:    vmv.x.s a0, v16
-; RV32-NEXT:    sw a0, 8(sp) # 4-byte Folded Spill
-; RV32-NEXT:    vmv.x.s a0, v8
+; RV32-NEXT:    vsrl.vx v16, v8, a0
+; RV32-NEXT:    vmv.x.s a1, v16
+; RV32-NEXT:    sw a1, 8(sp) # 4-byte Folded Spill
+; RV32-NEXT:    vmv.x.s a2, v8
 ; RV32-NEXT:    vslidedown.vi v16, v8, 1
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s a3, v24
 ; RV32-NEXT:    vmv.x.s a4, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 2
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s a5, v24
 ; RV32-NEXT:    vmv.x.s a6, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 3
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s a7, v24
 ; RV32-NEXT:    vmv.x.s t0, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 4
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s s3, v24
 ; RV32-NEXT:    vmv.x.s t1, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 5
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s s4, v24
 ; RV32-NEXT:    vmv.x.s t2, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 6
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s s5, v24
 ; RV32-NEXT:    vmv.x.s t3, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 7
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s s6, v24
 ; RV32-NEXT:    vmv.x.s t4, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 8
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s s7, v24
 ; RV32-NEXT:    vmv.x.s t5, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 9
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s s8, v24
 ; RV32-NEXT:    vmv.x.s t6, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 10
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s s9, v24
 ; RV32-NEXT:    vmv.x.s s0, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 11
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s s10, v24
 ; RV32-NEXT:    vmv.x.s s1, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 12
-; RV32-NEXT:    vsrl.vx v24, v16, a1
+; RV32-NEXT:    vsrl.vx v24, v16, a0
 ; RV32-NEXT:    vmv.x.s s11, v24
 ; RV32-NEXT:    vmv.x.s s2, v16
 ; RV32-NEXT:    vslidedown.vi v24, v8, 13
-; RV32-NEXT:    vsrl.vx v16, v24, a1
+; RV32-NEXT:    vsrl.vx v16, v24, a0
 ; RV32-NEXT:    vmv.x.s ra, v16
 ; RV32-NEXT:    vslidedown.vi v16, v8, 14
-; RV32-NEXT:    vsrl.vx v0, v16, a1
+; RV32-NEXT:    vsrl.vx v0, v16, a0
 ; RV32-NEXT:    vslidedown.vi v8, v8, 15
-; RV32-NEXT:    vmv.x.s a2, v24
-; RV32-NEXT:    vsrl.vx v24, v8, a1
-; RV32-NEXT:    lw a1, 8(sp) # 4-byte Folded Reload
-; RV32-NEXT:    add a3, a1, a3
-; RV32-NEXT:    add a4, a0, a4
-; RV32-NEXT:    sltu a0, a4, a0
-; RV32-NEXT:    add a0, a3, a0
-; RV32-NEXT:    xor a0, a0, a5
-; RV32-NEXT:    xor a1, a4, a6
+; RV32-NEXT:    vmv.x.s a1, v24
+; RV32-NEXT:    vsrl.vx v24, v8, a0
+; RV32-NEXT:    lw a0, 8(sp) # 4-byte Folded Reload
+; RV32-NEXT:    xor a0, a0, a3
+; RV32-NEXT:    xor a2, a2, a4
+; RV32-NEXT:    add a0, a0, a5
+; RV32-NEXT:    add a6, a2, a6
+; RV32-NEXT:    sltu a2, a6, a2
+; RV32-NEXT:    add a0, a0, a2
 ; RV32-NEXT:    add a0, a0, a7
-; RV32-NEXT:    add t0, a1, t0
-; RV32-NEXT:    sltu a1, t0, a1
-; RV32-NEXT:    add a1, a1, s3
-; RV32-NEXT:    add a0, a0, a1
+; RV32-NEXT:    add t0, a6, t0
+; RV32-NEXT:    sltu a2, t0, a6
+; RV32-NEXT:    add a2, a2, s3
+; RV32-NEXT:    add a0, a0, a2
 ; RV32-NEXT:    add t1, t0, t1
-; RV32-NEXT:    sltu a1, t1, t0
-; RV32-NEXT:    add a1, a1, s4
-; RV32-NEXT:    add a0, a0, a1
+; RV32-NEXT:    sltu a2, t1, t0
+; RV32-NEXT:    add a2, a2, s4
+; RV32-NEXT:    add a0, a0, a2
 ; RV32-NEXT:    add t2, t1, t2
-; RV32-NEXT:    sltu a1, t2, t1
-; RV32-NEXT:    add a1, a1, s5
-; RV32-NEXT:    add a0, a0, a1
+; RV32-NEXT:    sltu a2, t2, t1
+; RV32-NEXT:    add a2, a2, s5
+; RV32-NEXT:    add a0, a0, a2
 ; RV32-NEXT:    add t3, t2, t3
-; RV32-NEXT:    sltu a1, t3, t2
-; RV32-NEXT:    add a1, a1, s6
-; RV32-NEXT:    add a0, a0, a1
+; RV32-NEXT:    sltu a2, t3, t2
+; RV32-NEXT:    add a2, a2, s6
+; RV32-NEXT:    add a0, a0, a2
 ; RV32-NEXT:    add t4, t3, t4
-; RV32-NEXT:    sltu a1, t4, t3
-; RV32-NEXT:    add a1, a1, s7
-; RV32-NEXT:    add a0, a0, a1
+; RV32-NEXT:    sltu a2, t4, t3
+; RV32-NEXT:    add a2, a2, s7
+; RV32-NEXT:    add a0, a0, a2
 ; RV32-NEXT:    add t5, t4, t5
-; RV32-NEXT:    sltu a1, t5, t4
-; RV32-NEXT:    add a1, a1, s8
-; RV32-NEXT:    add a0, a0, a1
+; RV32-NEXT:    sltu a2, t5, t4
+; RV32-NEXT:    add a2, a2, s8
+; RV32-NEXT:    add a0, a0, a2
 ; RV32-NEXT:    add t6, t5, t6
-; RV32-NEXT:    sltu a1, t6, t5
-; RV32-NEXT:    add a1, a1, s9
-; RV32-NEXT:    add a0, a0, a1
+; RV32-NEXT:    sltu a2, t6, t5
+; RV32-NEXT:    add a2, a2, s9
+; RV32-NEXT:    add a0, a0, a2
 ; RV32-NEXT:    add s0, t6, s0
-; RV32-NEXT:    sltu a1, s0, t6
-; RV32-NEXT:    add a1, a1, s10
-; RV32-NEXT:    add a0, a0, a1
+; RV32-NEXT:    sltu a2, s0, t6
+; RV32-NEXT:    add a2, a2, s10
+; RV32-NEXT:    add a0, a0, a2
 ; RV32-NEXT:    add s1, s0, s1
-; RV32-NEXT:    sltu a1, s1, s0
-; RV32-NEXT:    add a1, a1, s11
-; RV32-NEXT:    add a0, a0, a1
+; RV32-NEXT:    sltu a2, s1, s0
+; RV32-NEXT:    add a2, a2, s11
+; RV32-NEXT:    add a0, a0, a2
 ; RV32-NEXT:    add s2, s1, s2
-; RV32-NEXT:    sltu a1, s2, s1
-; RV32-NEXT:    add a1, a1, ra
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    vmv.x.s a1, v0
-; RV32-NEXT:    add a2, s2, a2
-; RV32-NEXT:    sltu a3, a2, s2
-; RV32-NEXT:    add a1, a3, a1
+; RV32-NEXT:    sltu a2, s2, s1
+; RV32-NEXT:    add a2, a2, ra
+; RV32-NEXT:    add a0, a0, a2
+; RV32-NEXT:    vmv.x.s a2, v0
+; RV32-NEXT:    add a1, s2, a1
+; RV32-NEXT:    sltu a3, a1, s2
+; RV32-NEXT:    add a2, a3, a2
 ; RV32-NEXT:    vmv.x.s a3, v16
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    vmv.x.s a1, v24
-; RV32-NEXT:    add a3, a2, a3
-; RV32-NEXT:    sltu a2, a3, a2
-; RV32-NEXT:    add a1, a2, a1
+; RV32-NEXT:    add a0, a0, a2
+; RV32-NEXT:    vmv.x.s a2, v24
+; RV32-NEXT:    add a3, a1, a3
+; RV32-NEXT:    sltu a1, a3, a1
+; RV32-NEXT:    add a1, a1, a2
 ; RV32-NEXT:    add a1, a0, a1
 ; RV32-NEXT:    vmv.x.s a0, v8
 ; RV32-NEXT:    add a0, a3, a0
@@ -1197,21 +1186,21 @@ define i64 @explode_16xi64(<16 x i64> %v) {
 ; RV64-NEXT:    ld t5, 104(sp)
 ; RV64-NEXT:    ld t6, 112(sp)
 ; RV64-NEXT:    ld s2, 120(sp)
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    xor a0, a0, a2
-; RV64-NEXT:    add a0, a0, a3
-; RV64-NEXT:    add a4, a4, a5
+; RV64-NEXT:    xor a0, a0, a1
+; RV64-NEXT:    add a2, a2, a3
+; RV64-NEXT:    add a0, a0, a2
 ; RV64-NEXT:    add a0, a0, a4
-; RV64-NEXT:    add a6, a6, a7
-; RV64-NEXT:    add a6, a6, t0
-; RV64-NEXT:    add a0, a0, a6
-; RV64-NEXT:    add t1, t1, t2
-; RV64-NEXT:    add t1, t1, t3
-; RV64-NEXT:    add t1, t1, t4
-; RV64-NEXT:    add a0, a0, t1
-; RV64-NEXT:    add t5, t5, t6
-; RV64-NEXT:    add t5, t5, s2
-; RV64-NEXT:    add a0, a0, t5
+; RV64-NEXT:    add a5, a5, a6
+; RV64-NEXT:    add a0, a0, a5
+; RV64-NEXT:    add a7, a7, t0
+; RV64-NEXT:    add a7, a7, t1
+; RV64-NEXT:    add a0, a0, a7
+; RV64-NEXT:    add t2, t2, t3
+; RV64-NEXT:    add t2, t2, t4
+; RV64-NEXT:    add t2, t2, t5
+; RV64-NEXT:    add a0, a0, t2
+; RV64-NEXT:    add t6, t6, s2
+; RV64-NEXT:    add a0, a0, t6
 ; RV64-NEXT:    addi sp, s0, -256
 ; RV64-NEXT:    ld ra, 248(sp) # 8-byte Folded Reload
 ; RV64-NEXT:    ld s0, 240(sp) # 8-byte Folded Reload
@@ -1234,8 +1223,8 @@ define i64 @explode_16xi64(<16 x i64> %v) {
   %e13 = extractelement <16 x i64> %v, i32 13
   %e14 = extractelement <16 x i64> %v, i32 14
   %e15 = extractelement <16 x i64> %v, i32 15
-  %add0 = add i64 %e0, %e1
-  %add1 = xor i64 %add0, %e2
+  %add0 = xor i64 %e0, %e1
+  %add1 = add i64 %add0, %e2
   %add2 = add i64 %add1, %e3
   %add3 = add i64 %add2, %e4
   %add4 = add i64 %add3, %e5

diff  --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-formation.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-formation.ll
index 5ef6b291e309517..173b70def03d4c5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-formation.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-formation.ll
@@ -1,25 +1,15 @@
 ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
-; RUN: llc -mtriple=riscv32 -mattr=+v,+m -verify-machineinstrs < %s | FileCheck %s --check-prefixes=CHECK,RV32
-; RUN: llc -mtriple=riscv64 -mattr=+v,+m -verify-machineinstrs < %s | FileCheck %s --check-prefixes=CHECK,RV64
+; RUN: llc -mtriple=riscv32 -mattr=+v,+m -verify-machineinstrs < %s | FileCheck %s
+; RUN: llc -mtriple=riscv64 -mattr=+v,+m -verify-machineinstrs < %s | FileCheck %s
 
 define i32 @reduce_sum_2xi32(<2 x i32> %v) {
-; RV32-LABEL: reduce_sum_2xi32:
-; RV32:       # %bb.0:
-; RV32-NEXT:    vsetivli zero, 1, e32, mf2, ta, ma
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vslidedown.vi v8, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_2xi32:
-; RV64:       # %bb.0:
-; RV64-NEXT:    vsetivli zero, 1, e32, mf2, ta, ma
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vslidedown.vi v8, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v8
-; RV64-NEXT:    addw a0, a0, a1
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_2xi32:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 2, e32, mf2, ta, ma
+; CHECK-NEXT:    vmv.s.x v9, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v9
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %e0 = extractelement <2 x i32> %v, i32 0
   %e1 = extractelement <2 x i32> %v, i32 1
   %add0 = add i32 %e0, %e1
@@ -27,35 +17,13 @@ define i32 @reduce_sum_2xi32(<2 x i32> %v) {
 }
 
 define i32 @reduce_sum_4xi32(<4 x i32> %v) {
-; RV32-LABEL: reduce_sum_4xi32:
-; RV32:       # %bb.0:
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vslidedown.vi v9, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v9
-; RV32-NEXT:    vslidedown.vi v9, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v9
-; RV32-NEXT:    vslidedown.vi v8, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_4xi32:
-; RV64:       # %bb.0:
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vslidedown.vi v9, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v9
-; RV64-NEXT:    vslidedown.vi v9, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v9
-; RV64-NEXT:    vslidedown.vi v8, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    addw a0, a0, a2
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_4xi32:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 4, e32, m1, ta, ma
+; CHECK-NEXT:    vmv.s.x v9, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v9
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %e0 = extractelement <4 x i32> %v, i32 0
   %e1 = extractelement <4 x i32> %v, i32 1
   %e2 = extractelement <4 x i32> %v, i32 2
@@ -68,61 +36,13 @@ define i32 @reduce_sum_4xi32(<4 x i32> %v) {
 
 
 define i32 @reduce_sum_8xi32(<8 x i32> %v) {
-; RV32-LABEL: reduce_sum_8xi32:
-; RV32:       # %bb.0:
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vslidedown.vi v10, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v10
-; RV32-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV32-NEXT:    vslidedown.vi v10, v8, 4
-; RV32-NEXT:    vmv.x.s a4, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 5
-; RV32-NEXT:    vmv.x.s a5, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 6
-; RV32-NEXT:    vmv.x.s a6, v10
-; RV32-NEXT:    vslidedown.vi v8, v8, 7
-; RV32-NEXT:    vmv.x.s a7, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    add a4, a4, a5
-; RV32-NEXT:    add a4, a4, a6
-; RV32-NEXT:    add a0, a0, a4
-; RV32-NEXT:    add a0, a0, a7
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_8xi32:
-; RV64:       # %bb.0:
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vslidedown.vi v10, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v10
-; RV64-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV64-NEXT:    vslidedown.vi v10, v8, 4
-; RV64-NEXT:    vmv.x.s a4, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 5
-; RV64-NEXT:    vmv.x.s a5, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 6
-; RV64-NEXT:    vmv.x.s a6, v10
-; RV64-NEXT:    vslidedown.vi v8, v8, 7
-; RV64-NEXT:    vmv.x.s a7, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    add a0, a0, a2
-; RV64-NEXT:    add a4, a4, a5
-; RV64-NEXT:    add a4, a4, a6
-; RV64-NEXT:    add a0, a0, a4
-; RV64-NEXT:    addw a0, a0, a7
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_8xi32:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 8, e32, m2, ta, ma
+; CHECK-NEXT:    vmv.s.x v10, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v10
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %e0 = extractelement <8 x i32> %v, i32 0
   %e1 = extractelement <8 x i32> %v, i32 1
   %e2 = extractelement <8 x i32> %v, i32 2
@@ -142,131 +62,13 @@ define i32 @reduce_sum_8xi32(<8 x i32> %v) {
 }
 
 define i32 @reduce_sum_16xi32(<16 x i32> %v) {
-; RV32-LABEL: reduce_sum_16xi32:
-; RV32:       # %bb.0:
-; RV32-NEXT:    addi sp, sp, -128
-; RV32-NEXT:    .cfi_def_cfa_offset 128
-; RV32-NEXT:    sw ra, 124(sp) # 4-byte Folded Spill
-; RV32-NEXT:    sw s0, 120(sp) # 4-byte Folded Spill
-; RV32-NEXT:    sw s2, 116(sp) # 4-byte Folded Spill
-; RV32-NEXT:    .cfi_offset ra, -4
-; RV32-NEXT:    .cfi_offset s0, -8
-; RV32-NEXT:    .cfi_offset s2, -12
-; RV32-NEXT:    addi s0, sp, 128
-; RV32-NEXT:    .cfi_def_cfa s0, 0
-; RV32-NEXT:    andi sp, sp, -64
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vslidedown.vi v12, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v12
-; RV32-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV32-NEXT:    vslidedown.vi v12, v8, 4
-; RV32-NEXT:    vmv.x.s a4, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 5
-; RV32-NEXT:    vmv.x.s a5, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 6
-; RV32-NEXT:    vmv.x.s a6, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 7
-; RV32-NEXT:    vmv.x.s a7, v12
-; RV32-NEXT:    mv t0, sp
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vse32.v v8, (t0)
-; RV32-NEXT:    lw t0, 32(sp)
-; RV32-NEXT:    lw t1, 36(sp)
-; RV32-NEXT:    lw t2, 40(sp)
-; RV32-NEXT:    lw t3, 44(sp)
-; RV32-NEXT:    lw t4, 48(sp)
-; RV32-NEXT:    lw t5, 52(sp)
-; RV32-NEXT:    lw t6, 56(sp)
-; RV32-NEXT:    lw s2, 60(sp)
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    add a4, a4, a5
-; RV32-NEXT:    add a4, a4, a6
-; RV32-NEXT:    add a0, a0, a4
-; RV32-NEXT:    add a7, a7, t0
-; RV32-NEXT:    add a0, a0, a7
-; RV32-NEXT:    add t1, t1, t2
-; RV32-NEXT:    add t1, t1, t3
-; RV32-NEXT:    add a0, a0, t1
-; RV32-NEXT:    add t4, t4, t5
-; RV32-NEXT:    add t4, t4, t6
-; RV32-NEXT:    add t4, t4, s2
-; RV32-NEXT:    add a0, a0, t4
-; RV32-NEXT:    addi sp, s0, -128
-; RV32-NEXT:    lw ra, 124(sp) # 4-byte Folded Reload
-; RV32-NEXT:    lw s0, 120(sp) # 4-byte Folded Reload
-; RV32-NEXT:    lw s2, 116(sp) # 4-byte Folded Reload
-; RV32-NEXT:    addi sp, sp, 128
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32:
-; RV64:       # %bb.0:
-; RV64-NEXT:    addi sp, sp, -128
-; RV64-NEXT:    .cfi_def_cfa_offset 128
-; RV64-NEXT:    sd ra, 120(sp) # 8-byte Folded Spill
-; RV64-NEXT:    sd s0, 112(sp) # 8-byte Folded Spill
-; RV64-NEXT:    sd s2, 104(sp) # 8-byte Folded Spill
-; RV64-NEXT:    .cfi_offset ra, -8
-; RV64-NEXT:    .cfi_offset s0, -16
-; RV64-NEXT:    .cfi_offset s2, -24
-; RV64-NEXT:    addi s0, sp, 128
-; RV64-NEXT:    .cfi_def_cfa s0, 0
-; RV64-NEXT:    andi sp, sp, -64
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vslidedown.vi v12, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v12
-; RV64-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV64-NEXT:    vslidedown.vi v12, v8, 4
-; RV64-NEXT:    vmv.x.s a4, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 5
-; RV64-NEXT:    vmv.x.s a5, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 6
-; RV64-NEXT:    vmv.x.s a6, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 7
-; RV64-NEXT:    vmv.x.s a7, v12
-; RV64-NEXT:    mv t0, sp
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vse32.v v8, (t0)
-; RV64-NEXT:    lw t0, 32(sp)
-; RV64-NEXT:    lw t1, 36(sp)
-; RV64-NEXT:    lw t2, 40(sp)
-; RV64-NEXT:    lw t3, 44(sp)
-; RV64-NEXT:    lw t4, 48(sp)
-; RV64-NEXT:    lw t5, 52(sp)
-; RV64-NEXT:    lw t6, 56(sp)
-; RV64-NEXT:    lw s2, 60(sp)
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    add a0, a0, a2
-; RV64-NEXT:    add a4, a4, a5
-; RV64-NEXT:    add a4, a4, a6
-; RV64-NEXT:    add a0, a0, a4
-; RV64-NEXT:    add a7, a7, t0
-; RV64-NEXT:    add a0, a0, a7
-; RV64-NEXT:    add t1, t1, t2
-; RV64-NEXT:    add t1, t1, t3
-; RV64-NEXT:    add a0, a0, t1
-; RV64-NEXT:    add t4, t4, t5
-; RV64-NEXT:    add t4, t4, t6
-; RV64-NEXT:    add t4, t4, s2
-; RV64-NEXT:    addw a0, a0, t4
-; RV64-NEXT:    addi sp, s0, -128
-; RV64-NEXT:    ld ra, 120(sp) # 8-byte Folded Reload
-; RV64-NEXT:    ld s0, 112(sp) # 8-byte Folded Reload
-; RV64-NEXT:    ld s2, 104(sp) # 8-byte Folded Reload
-; RV64-NEXT:    addi sp, sp, 128
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
+; CHECK-NEXT:    vmv.s.x v12, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v12
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
   %e2 = extractelement <16 x i32> %v, i32 2
@@ -302,27 +104,14 @@ define i32 @reduce_sum_16xi32(<16 x i32> %v) {
 }
 
 define i32 @reduce_sum_16xi32_prefix2(ptr %p) {
-; RV32-LABEL: reduce_sum_16xi32_prefix2:
-; RV32:       # %bb.0:
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vle32.v v8, (a0)
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vslidedown.vi v8, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32_prefix2:
-; RV64:       # %bb.0:
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vle32.v v8, (a0)
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vslidedown.vi v8, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v8
-; RV64-NEXT:    addw a0, a0, a1
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32_prefix2:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 2, e32, mf2, ta, ma
+; CHECK-NEXT:    vle32.v v8, (a0)
+; CHECK-NEXT:    vmv.s.x v9, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v9
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %v = load <16 x i32>, ptr %p, align 256
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
@@ -331,33 +120,15 @@ define i32 @reduce_sum_16xi32_prefix2(ptr %p) {
 }
 
 define i32 @reduce_sum_16xi32_prefix3(ptr %p) {
-; RV32-LABEL: reduce_sum_16xi32_prefix3:
-; RV32:       # %bb.0:
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vle32.v v8, (a0)
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vslidedown.vi v9, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v9
-; RV32-NEXT:    vslidedown.vi v8, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32_prefix3:
-; RV64:       # %bb.0:
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vle32.v v8, (a0)
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vslidedown.vi v9, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v9
-; RV64-NEXT:    vslidedown.vi v8, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    addw a0, a0, a2
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32_prefix3:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 4, e32, m1, ta, ma
+; CHECK-NEXT:    vle32.v v8, (a0)
+; CHECK-NEXT:    vmv.s.x v9, zero
+; CHECK-NEXT:    vslideup.vi v8, v9, 3
+; CHECK-NEXT:    vredsum.vs v8, v8, v9
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %v = load <16 x i32>, ptr %p, align 256
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
@@ -368,39 +139,14 @@ define i32 @reduce_sum_16xi32_prefix3(ptr %p) {
 }
 
 define i32 @reduce_sum_16xi32_prefix4(ptr %p) {
-; RV32-LABEL: reduce_sum_16xi32_prefix4:
-; RV32:       # %bb.0:
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vle32.v v8, (a0)
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vslidedown.vi v9, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v9
-; RV32-NEXT:    vslidedown.vi v9, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v9
-; RV32-NEXT:    vslidedown.vi v8, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32_prefix4:
-; RV64:       # %bb.0:
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vle32.v v8, (a0)
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vslidedown.vi v9, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v9
-; RV64-NEXT:    vslidedown.vi v9, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v9
-; RV64-NEXT:    vslidedown.vi v8, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    addw a0, a0, a2
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32_prefix4:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 4, e32, m1, ta, ma
+; CHECK-NEXT:    vle32.v v8, (a0)
+; CHECK-NEXT:    vmv.s.x v9, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v9
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %v = load <16 x i32>, ptr %p, align 256
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
@@ -413,47 +159,21 @@ define i32 @reduce_sum_16xi32_prefix4(ptr %p) {
 }
 
 define i32 @reduce_sum_16xi32_prefix5(ptr %p) {
-; RV32-LABEL: reduce_sum_16xi32_prefix5:
-; RV32:       # %bb.0:
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vle32.v v8, (a0)
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vslidedown.vi v10, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v10
-; RV32-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV32-NEXT:    vslidedown.vi v8, v8, 4
-; RV32-NEXT:    vmv.x.s a4, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    add a0, a0, a4
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32_prefix5:
-; RV64:       # %bb.0:
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vle32.v v8, (a0)
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vslidedown.vi v10, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v10
-; RV64-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV64-NEXT:    vslidedown.vi v8, v8, 4
-; RV64-NEXT:    vmv.x.s a4, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    add a0, a0, a2
-; RV64-NEXT:    addw a0, a0, a4
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32_prefix5:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    li a1, 224
+; CHECK-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
+; CHECK-NEXT:    vmv.s.x v0, a1
+; CHECK-NEXT:    vmv.v.i v8, -1
+; CHECK-NEXT:    vmerge.vim v8, v8, 0, v0
+; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
+; CHECK-NEXT:    vle32.v v10, (a0)
+; CHECK-NEXT:    vsext.vf4 v12, v8
+; CHECK-NEXT:    vand.vv v8, v10, v12
+; CHECK-NEXT:    vmv.s.x v10, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v10
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %v = load <16 x i32>, ptr %p, align 256
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
@@ -468,53 +188,21 @@ define i32 @reduce_sum_16xi32_prefix5(ptr %p) {
 }
 
 define i32 @reduce_sum_16xi32_prefix6(ptr %p) {
-; RV32-LABEL: reduce_sum_16xi32_prefix6:
-; RV32:       # %bb.0:
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vle32.v v8, (a0)
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vslidedown.vi v10, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v10
-; RV32-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV32-NEXT:    vslidedown.vi v10, v8, 4
-; RV32-NEXT:    vmv.x.s a4, v10
-; RV32-NEXT:    vslidedown.vi v8, v8, 5
-; RV32-NEXT:    vmv.x.s a5, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    add a4, a4, a5
-; RV32-NEXT:    add a0, a0, a4
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32_prefix6:
-; RV64:       # %bb.0:
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vle32.v v8, (a0)
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vslidedown.vi v10, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v10
-; RV64-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV64-NEXT:    vslidedown.vi v10, v8, 4
-; RV64-NEXT:    vmv.x.s a4, v10
-; RV64-NEXT:    vslidedown.vi v8, v8, 5
-; RV64-NEXT:    vmv.x.s a5, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    add a0, a0, a2
-; RV64-NEXT:    add a4, a4, a5
-; RV64-NEXT:    addw a0, a0, a4
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32_prefix6:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    li a1, 192
+; CHECK-NEXT:    vsetivli zero, 8, e8, mf2, ta, ma
+; CHECK-NEXT:    vmv.s.x v0, a1
+; CHECK-NEXT:    vmv.v.i v8, -1
+; CHECK-NEXT:    vmerge.vim v8, v8, 0, v0
+; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
+; CHECK-NEXT:    vle32.v v10, (a0)
+; CHECK-NEXT:    vsext.vf4 v12, v8
+; CHECK-NEXT:    vand.vv v8, v10, v12
+; CHECK-NEXT:    vmv.s.x v10, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v10
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %v = load <16 x i32>, ptr %p, align 256
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
@@ -531,59 +219,15 @@ define i32 @reduce_sum_16xi32_prefix6(ptr %p) {
 }
 
 define i32 @reduce_sum_16xi32_prefix7(ptr %p) {
-; RV32-LABEL: reduce_sum_16xi32_prefix7:
-; RV32:       # %bb.0:
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vle32.v v8, (a0)
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vslidedown.vi v10, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v10
-; RV32-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV32-NEXT:    vslidedown.vi v10, v8, 4
-; RV32-NEXT:    vmv.x.s a4, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 5
-; RV32-NEXT:    vmv.x.s a5, v10
-; RV32-NEXT:    vslidedown.vi v8, v8, 6
-; RV32-NEXT:    vmv.x.s a6, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    add a4, a4, a5
-; RV32-NEXT:    add a4, a4, a6
-; RV32-NEXT:    add a0, a0, a4
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32_prefix7:
-; RV64:       # %bb.0:
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vle32.v v8, (a0)
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vslidedown.vi v10, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v10
-; RV64-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV64-NEXT:    vslidedown.vi v10, v8, 4
-; RV64-NEXT:    vmv.x.s a4, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 5
-; RV64-NEXT:    vmv.x.s a5, v10
-; RV64-NEXT:    vslidedown.vi v8, v8, 6
-; RV64-NEXT:    vmv.x.s a6, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    add a0, a0, a2
-; RV64-NEXT:    add a4, a4, a5
-; RV64-NEXT:    add a4, a4, a6
-; RV64-NEXT:    addw a0, a0, a4
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32_prefix7:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 8, e32, m2, ta, ma
+; CHECK-NEXT:    vle32.v v8, (a0)
+; CHECK-NEXT:    vmv.s.x v10, zero
+; CHECK-NEXT:    vslideup.vi v8, v10, 7
+; CHECK-NEXT:    vredsum.vs v8, v8, v10
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %v = load <16 x i32>, ptr %p, align 256
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
@@ -602,65 +246,14 @@ define i32 @reduce_sum_16xi32_prefix7(ptr %p) {
 }
 
 define i32 @reduce_sum_16xi32_prefix8(ptr %p) {
-; RV32-LABEL: reduce_sum_16xi32_prefix8:
-; RV32:       # %bb.0:
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vle32.v v8, (a0)
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vslidedown.vi v10, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v10
-; RV32-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV32-NEXT:    vslidedown.vi v10, v8, 4
-; RV32-NEXT:    vmv.x.s a4, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 5
-; RV32-NEXT:    vmv.x.s a5, v10
-; RV32-NEXT:    vslidedown.vi v10, v8, 6
-; RV32-NEXT:    vmv.x.s a6, v10
-; RV32-NEXT:    vslidedown.vi v8, v8, 7
-; RV32-NEXT:    vmv.x.s a7, v8
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    add a4, a4, a5
-; RV32-NEXT:    add a4, a4, a6
-; RV32-NEXT:    add a0, a0, a4
-; RV32-NEXT:    add a0, a0, a7
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32_prefix8:
-; RV64:       # %bb.0:
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vle32.v v8, (a0)
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vslidedown.vi v10, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v10
-; RV64-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV64-NEXT:    vslidedown.vi v10, v8, 4
-; RV64-NEXT:    vmv.x.s a4, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 5
-; RV64-NEXT:    vmv.x.s a5, v10
-; RV64-NEXT:    vslidedown.vi v10, v8, 6
-; RV64-NEXT:    vmv.x.s a6, v10
-; RV64-NEXT:    vslidedown.vi v8, v8, 7
-; RV64-NEXT:    vmv.x.s a7, v8
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    add a0, a0, a2
-; RV64-NEXT:    add a4, a4, a5
-; RV64-NEXT:    add a4, a4, a6
-; RV64-NEXT:    add a0, a0, a4
-; RV64-NEXT:    addw a0, a0, a7
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32_prefix8:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 8, e32, m2, ta, ma
+; CHECK-NEXT:    vle32.v v8, (a0)
+; CHECK-NEXT:    vmv.s.x v10, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v10
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %v = load <16 x i32>, ptr %p, align 256
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
@@ -681,101 +274,22 @@ define i32 @reduce_sum_16xi32_prefix8(ptr %p) {
 }
 
 define i32 @reduce_sum_16xi32_prefix9(ptr %p) {
-; RV32-LABEL: reduce_sum_16xi32_prefix9:
-; RV32:       # %bb.0:
-; RV32-NEXT:    addi sp, sp, -128
-; RV32-NEXT:    .cfi_def_cfa_offset 128
-; RV32-NEXT:    sw ra, 124(sp) # 4-byte Folded Spill
-; RV32-NEXT:    sw s0, 120(sp) # 4-byte Folded Spill
-; RV32-NEXT:    .cfi_offset ra, -4
-; RV32-NEXT:    .cfi_offset s0, -8
-; RV32-NEXT:    addi s0, sp, 128
-; RV32-NEXT:    .cfi_def_cfa s0, 0
-; RV32-NEXT:    andi sp, sp, -64
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vle32.v v8, (a0)
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vslidedown.vi v12, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v12
-; RV32-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV32-NEXT:    vslidedown.vi v12, v8, 4
-; RV32-NEXT:    vmv.x.s a4, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 5
-; RV32-NEXT:    vmv.x.s a5, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 6
-; RV32-NEXT:    vmv.x.s a6, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 7
-; RV32-NEXT:    vmv.x.s a7, v12
-; RV32-NEXT:    mv t0, sp
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vse32.v v8, (t0)
-; RV32-NEXT:    lw t0, 32(sp)
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    add a4, a4, a5
-; RV32-NEXT:    add a4, a4, a6
-; RV32-NEXT:    add a0, a0, a4
-; RV32-NEXT:    add a7, a7, t0
-; RV32-NEXT:    add a0, a0, a7
-; RV32-NEXT:    addi sp, s0, -128
-; RV32-NEXT:    lw ra, 124(sp) # 4-byte Folded Reload
-; RV32-NEXT:    lw s0, 120(sp) # 4-byte Folded Reload
-; RV32-NEXT:    addi sp, sp, 128
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32_prefix9:
-; RV64:       # %bb.0:
-; RV64-NEXT:    addi sp, sp, -128
-; RV64-NEXT:    .cfi_def_cfa_offset 128
-; RV64-NEXT:    sd ra, 120(sp) # 8-byte Folded Spill
-; RV64-NEXT:    sd s0, 112(sp) # 8-byte Folded Spill
-; RV64-NEXT:    .cfi_offset ra, -8
-; RV64-NEXT:    .cfi_offset s0, -16
-; RV64-NEXT:    addi s0, sp, 128
-; RV64-NEXT:    .cfi_def_cfa s0, 0
-; RV64-NEXT:    andi sp, sp, -64
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vle32.v v8, (a0)
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vslidedown.vi v12, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v12
-; RV64-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV64-NEXT:    vslidedown.vi v12, v8, 4
-; RV64-NEXT:    vmv.x.s a4, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 5
-; RV64-NEXT:    vmv.x.s a5, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 6
-; RV64-NEXT:    vmv.x.s a6, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 7
-; RV64-NEXT:    vmv.x.s a7, v12
-; RV64-NEXT:    mv t0, sp
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vse32.v v8, (t0)
-; RV64-NEXT:    lw t0, 32(sp)
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    add a0, a0, a2
-; RV64-NEXT:    add a4, a4, a5
-; RV64-NEXT:    add a4, a4, a6
-; RV64-NEXT:    add a0, a0, a4
-; RV64-NEXT:    add a7, a7, t0
-; RV64-NEXT:    addw a0, a0, a7
-; RV64-NEXT:    addi sp, s0, -128
-; RV64-NEXT:    ld ra, 120(sp) # 8-byte Folded Reload
-; RV64-NEXT:    ld s0, 112(sp) # 8-byte Folded Reload
-; RV64-NEXT:    addi sp, sp, 128
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32_prefix9:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
+; CHECK-NEXT:    vle32.v v8, (a0)
+; CHECK-NEXT:    li a0, -512
+; CHECK-NEXT:    vmv.s.x v0, a0
+; CHECK-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
+; CHECK-NEXT:    vmv.v.i v12, -1
+; CHECK-NEXT:    vmerge.vim v12, v12, 0, v0
+; CHECK-NEXT:    vsetvli zero, zero, e32, m4, ta, ma
+; CHECK-NEXT:    vsext.vf4 v16, v12
+; CHECK-NEXT:    vand.vv v8, v8, v16
+; CHECK-NEXT:    vmv.s.x v12, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v12
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %v = load <16 x i32>, ptr %p, align 256
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
@@ -798,117 +312,22 @@ define i32 @reduce_sum_16xi32_prefix9(ptr %p) {
 }
 
 define i32 @reduce_sum_16xi32_prefix13(ptr %p) {
-; RV32-LABEL: reduce_sum_16xi32_prefix13:
-; RV32:       # %bb.0:
-; RV32-NEXT:    addi sp, sp, -128
-; RV32-NEXT:    .cfi_def_cfa_offset 128
-; RV32-NEXT:    sw ra, 124(sp) # 4-byte Folded Spill
-; RV32-NEXT:    sw s0, 120(sp) # 4-byte Folded Spill
-; RV32-NEXT:    .cfi_offset ra, -4
-; RV32-NEXT:    .cfi_offset s0, -8
-; RV32-NEXT:    addi s0, sp, 128
-; RV32-NEXT:    .cfi_def_cfa s0, 0
-; RV32-NEXT:    andi sp, sp, -64
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vle32.v v8, (a0)
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vslidedown.vi v12, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v12
-; RV32-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV32-NEXT:    vslidedown.vi v12, v8, 4
-; RV32-NEXT:    vmv.x.s a4, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 5
-; RV32-NEXT:    vmv.x.s a5, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 6
-; RV32-NEXT:    vmv.x.s a6, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 7
-; RV32-NEXT:    vmv.x.s a7, v12
-; RV32-NEXT:    mv t0, sp
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vse32.v v8, (t0)
-; RV32-NEXT:    lw t0, 32(sp)
-; RV32-NEXT:    lw t1, 36(sp)
-; RV32-NEXT:    lw t2, 40(sp)
-; RV32-NEXT:    lw t3, 44(sp)
-; RV32-NEXT:    lw t4, 48(sp)
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    add a4, a4, a5
-; RV32-NEXT:    add a4, a4, a6
-; RV32-NEXT:    add a0, a0, a4
-; RV32-NEXT:    add a7, a7, t0
-; RV32-NEXT:    add a7, a7, t1
-; RV32-NEXT:    add a7, a7, t2
-; RV32-NEXT:    add a0, a0, a7
-; RV32-NEXT:    add t3, t3, t4
-; RV32-NEXT:    add a0, a0, t3
-; RV32-NEXT:    addi sp, s0, -128
-; RV32-NEXT:    lw ra, 124(sp) # 4-byte Folded Reload
-; RV32-NEXT:    lw s0, 120(sp) # 4-byte Folded Reload
-; RV32-NEXT:    addi sp, sp, 128
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32_prefix13:
-; RV64:       # %bb.0:
-; RV64-NEXT:    addi sp, sp, -128
-; RV64-NEXT:    .cfi_def_cfa_offset 128
-; RV64-NEXT:    sd ra, 120(sp) # 8-byte Folded Spill
-; RV64-NEXT:    sd s0, 112(sp) # 8-byte Folded Spill
-; RV64-NEXT:    .cfi_offset ra, -8
-; RV64-NEXT:    .cfi_offset s0, -16
-; RV64-NEXT:    addi s0, sp, 128
-; RV64-NEXT:    .cfi_def_cfa s0, 0
-; RV64-NEXT:    andi sp, sp, -64
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vle32.v v8, (a0)
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vslidedown.vi v12, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v12
-; RV64-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV64-NEXT:    vslidedown.vi v12, v8, 4
-; RV64-NEXT:    vmv.x.s a4, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 5
-; RV64-NEXT:    vmv.x.s a5, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 6
-; RV64-NEXT:    vmv.x.s a6, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 7
-; RV64-NEXT:    vmv.x.s a7, v12
-; RV64-NEXT:    mv t0, sp
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vse32.v v8, (t0)
-; RV64-NEXT:    lw t0, 32(sp)
-; RV64-NEXT:    lw t1, 36(sp)
-; RV64-NEXT:    lw t2, 40(sp)
-; RV64-NEXT:    lw t3, 44(sp)
-; RV64-NEXT:    lw t4, 48(sp)
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    add a0, a0, a2
-; RV64-NEXT:    add a4, a4, a5
-; RV64-NEXT:    add a4, a4, a6
-; RV64-NEXT:    add a0, a0, a4
-; RV64-NEXT:    add a7, a7, t0
-; RV64-NEXT:    add a7, a7, t1
-; RV64-NEXT:    add a7, a7, t2
-; RV64-NEXT:    add a0, a0, a7
-; RV64-NEXT:    add t3, t3, t4
-; RV64-NEXT:    addw a0, a0, t3
-; RV64-NEXT:    addi sp, s0, -128
-; RV64-NEXT:    ld ra, 120(sp) # 8-byte Folded Reload
-; RV64-NEXT:    ld s0, 112(sp) # 8-byte Folded Reload
-; RV64-NEXT:    addi sp, sp, 128
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32_prefix13:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
+; CHECK-NEXT:    vle32.v v8, (a0)
+; CHECK-NEXT:    lui a0, 14
+; CHECK-NEXT:    vmv.s.x v0, a0
+; CHECK-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
+; CHECK-NEXT:    vmv.v.i v12, -1
+; CHECK-NEXT:    vmerge.vim v12, v12, 0, v0
+; CHECK-NEXT:    vsetvli zero, zero, e32, m4, ta, ma
+; CHECK-NEXT:    vsext.vf4 v16, v12
+; CHECK-NEXT:    vand.vv v8, v8, v16
+; CHECK-NEXT:    vmv.s.x v12, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v12
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %v = load <16 x i32>, ptr %p, align 256
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
@@ -940,121 +359,22 @@ define i32 @reduce_sum_16xi32_prefix13(ptr %p) {
 
 
 define i32 @reduce_sum_16xi32_prefix14(ptr %p) {
-; RV32-LABEL: reduce_sum_16xi32_prefix14:
-; RV32:       # %bb.0:
-; RV32-NEXT:    addi sp, sp, -128
-; RV32-NEXT:    .cfi_def_cfa_offset 128
-; RV32-NEXT:    sw ra, 124(sp) # 4-byte Folded Spill
-; RV32-NEXT:    sw s0, 120(sp) # 4-byte Folded Spill
-; RV32-NEXT:    .cfi_offset ra, -4
-; RV32-NEXT:    .cfi_offset s0, -8
-; RV32-NEXT:    addi s0, sp, 128
-; RV32-NEXT:    .cfi_def_cfa s0, 0
-; RV32-NEXT:    andi sp, sp, -64
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vle32.v v8, (a0)
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vslidedown.vi v12, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v12
-; RV32-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV32-NEXT:    vslidedown.vi v12, v8, 4
-; RV32-NEXT:    vmv.x.s a4, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 5
-; RV32-NEXT:    vmv.x.s a5, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 6
-; RV32-NEXT:    vmv.x.s a6, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 7
-; RV32-NEXT:    vmv.x.s a7, v12
-; RV32-NEXT:    mv t0, sp
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vse32.v v8, (t0)
-; RV32-NEXT:    lw t0, 32(sp)
-; RV32-NEXT:    lw t1, 36(sp)
-; RV32-NEXT:    lw t2, 40(sp)
-; RV32-NEXT:    lw t3, 44(sp)
-; RV32-NEXT:    lw t4, 48(sp)
-; RV32-NEXT:    lw t5, 52(sp)
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    add a4, a4, a5
-; RV32-NEXT:    add a4, a4, a6
-; RV32-NEXT:    add a0, a0, a4
-; RV32-NEXT:    add a7, a7, t0
-; RV32-NEXT:    add a7, a7, t1
-; RV32-NEXT:    add a7, a7, t2
-; RV32-NEXT:    add a0, a0, a7
-; RV32-NEXT:    add t3, t3, t4
-; RV32-NEXT:    add t3, t3, t5
-; RV32-NEXT:    add a0, a0, t3
-; RV32-NEXT:    addi sp, s0, -128
-; RV32-NEXT:    lw ra, 124(sp) # 4-byte Folded Reload
-; RV32-NEXT:    lw s0, 120(sp) # 4-byte Folded Reload
-; RV32-NEXT:    addi sp, sp, 128
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32_prefix14:
-; RV64:       # %bb.0:
-; RV64-NEXT:    addi sp, sp, -128
-; RV64-NEXT:    .cfi_def_cfa_offset 128
-; RV64-NEXT:    sd ra, 120(sp) # 8-byte Folded Spill
-; RV64-NEXT:    sd s0, 112(sp) # 8-byte Folded Spill
-; RV64-NEXT:    .cfi_offset ra, -8
-; RV64-NEXT:    .cfi_offset s0, -16
-; RV64-NEXT:    addi s0, sp, 128
-; RV64-NEXT:    .cfi_def_cfa s0, 0
-; RV64-NEXT:    andi sp, sp, -64
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vle32.v v8, (a0)
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vslidedown.vi v12, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v12
-; RV64-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV64-NEXT:    vslidedown.vi v12, v8, 4
-; RV64-NEXT:    vmv.x.s a4, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 5
-; RV64-NEXT:    vmv.x.s a5, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 6
-; RV64-NEXT:    vmv.x.s a6, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 7
-; RV64-NEXT:    vmv.x.s a7, v12
-; RV64-NEXT:    mv t0, sp
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vse32.v v8, (t0)
-; RV64-NEXT:    lw t0, 32(sp)
-; RV64-NEXT:    lw t1, 36(sp)
-; RV64-NEXT:    lw t2, 40(sp)
-; RV64-NEXT:    lw t3, 44(sp)
-; RV64-NEXT:    lw t4, 48(sp)
-; RV64-NEXT:    lw t5, 52(sp)
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    add a0, a0, a2
-; RV64-NEXT:    add a4, a4, a5
-; RV64-NEXT:    add a4, a4, a6
-; RV64-NEXT:    add a0, a0, a4
-; RV64-NEXT:    add a7, a7, t0
-; RV64-NEXT:    add a7, a7, t1
-; RV64-NEXT:    add a7, a7, t2
-; RV64-NEXT:    add a0, a0, a7
-; RV64-NEXT:    add t3, t3, t4
-; RV64-NEXT:    add t3, t3, t5
-; RV64-NEXT:    addw a0, a0, t3
-; RV64-NEXT:    addi sp, s0, -128
-; RV64-NEXT:    ld ra, 120(sp) # 8-byte Folded Reload
-; RV64-NEXT:    ld s0, 112(sp) # 8-byte Folded Reload
-; RV64-NEXT:    addi sp, sp, 128
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32_prefix14:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
+; CHECK-NEXT:    vle32.v v8, (a0)
+; CHECK-NEXT:    lui a0, 12
+; CHECK-NEXT:    vmv.s.x v0, a0
+; CHECK-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
+; CHECK-NEXT:    vmv.v.i v12, -1
+; CHECK-NEXT:    vmerge.vim v12, v12, 0, v0
+; CHECK-NEXT:    vsetvli zero, zero, e32, m4, ta, ma
+; CHECK-NEXT:    vsext.vf4 v16, v12
+; CHECK-NEXT:    vand.vv v8, v8, v16
+; CHECK-NEXT:    vmv.s.x v12, zero
+; CHECK-NEXT:    vredsum.vs v8, v8, v12
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %v = load <16 x i32>, ptr %p, align 256
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
@@ -1087,125 +407,15 @@ define i32 @reduce_sum_16xi32_prefix14(ptr %p) {
 }
 
 define i32 @reduce_sum_16xi32_prefix15(ptr %p) {
-; RV32-LABEL: reduce_sum_16xi32_prefix15:
-; RV32:       # %bb.0:
-; RV32-NEXT:    addi sp, sp, -128
-; RV32-NEXT:    .cfi_def_cfa_offset 128
-; RV32-NEXT:    sw ra, 124(sp) # 4-byte Folded Spill
-; RV32-NEXT:    sw s0, 120(sp) # 4-byte Folded Spill
-; RV32-NEXT:    .cfi_offset ra, -4
-; RV32-NEXT:    .cfi_offset s0, -8
-; RV32-NEXT:    addi s0, sp, 128
-; RV32-NEXT:    .cfi_def_cfa s0, 0
-; RV32-NEXT:    andi sp, sp, -64
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vle32.v v8, (a0)
-; RV32-NEXT:    vmv.x.s a0, v8
-; RV32-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV32-NEXT:    vslidedown.vi v12, v8, 1
-; RV32-NEXT:    vmv.x.s a1, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 2
-; RV32-NEXT:    vmv.x.s a2, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 3
-; RV32-NEXT:    vmv.x.s a3, v12
-; RV32-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV32-NEXT:    vslidedown.vi v12, v8, 4
-; RV32-NEXT:    vmv.x.s a4, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 5
-; RV32-NEXT:    vmv.x.s a5, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 6
-; RV32-NEXT:    vmv.x.s a6, v12
-; RV32-NEXT:    vslidedown.vi v12, v8, 7
-; RV32-NEXT:    vmv.x.s a7, v12
-; RV32-NEXT:    mv t0, sp
-; RV32-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT:    vse32.v v8, (t0)
-; RV32-NEXT:    lw t0, 32(sp)
-; RV32-NEXT:    lw t1, 36(sp)
-; RV32-NEXT:    lw t2, 40(sp)
-; RV32-NEXT:    lw t3, 44(sp)
-; RV32-NEXT:    lw t4, 48(sp)
-; RV32-NEXT:    lw t5, 52(sp)
-; RV32-NEXT:    lw t6, 56(sp)
-; RV32-NEXT:    add a0, a0, a1
-; RV32-NEXT:    add a2, a2, a3
-; RV32-NEXT:    add a0, a0, a2
-; RV32-NEXT:    add a4, a4, a5
-; RV32-NEXT:    add a4, a4, a6
-; RV32-NEXT:    add a0, a0, a4
-; RV32-NEXT:    add a7, a7, t0
-; RV32-NEXT:    add a7, a7, t1
-; RV32-NEXT:    add a7, a7, t2
-; RV32-NEXT:    add a0, a0, a7
-; RV32-NEXT:    add t3, t3, t4
-; RV32-NEXT:    add t3, t3, t5
-; RV32-NEXT:    add t3, t3, t6
-; RV32-NEXT:    add a0, a0, t3
-; RV32-NEXT:    addi sp, s0, -128
-; RV32-NEXT:    lw ra, 124(sp) # 4-byte Folded Reload
-; RV32-NEXT:    lw s0, 120(sp) # 4-byte Folded Reload
-; RV32-NEXT:    addi sp, sp, 128
-; RV32-NEXT:    ret
-;
-; RV64-LABEL: reduce_sum_16xi32_prefix15:
-; RV64:       # %bb.0:
-; RV64-NEXT:    addi sp, sp, -128
-; RV64-NEXT:    .cfi_def_cfa_offset 128
-; RV64-NEXT:    sd ra, 120(sp) # 8-byte Folded Spill
-; RV64-NEXT:    sd s0, 112(sp) # 8-byte Folded Spill
-; RV64-NEXT:    .cfi_offset ra, -8
-; RV64-NEXT:    .cfi_offset s0, -16
-; RV64-NEXT:    addi s0, sp, 128
-; RV64-NEXT:    .cfi_def_cfa s0, 0
-; RV64-NEXT:    andi sp, sp, -64
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vle32.v v8, (a0)
-; RV64-NEXT:    vmv.x.s a0, v8
-; RV64-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
-; RV64-NEXT:    vslidedown.vi v12, v8, 1
-; RV64-NEXT:    vmv.x.s a1, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 2
-; RV64-NEXT:    vmv.x.s a2, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 3
-; RV64-NEXT:    vmv.x.s a3, v12
-; RV64-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
-; RV64-NEXT:    vslidedown.vi v12, v8, 4
-; RV64-NEXT:    vmv.x.s a4, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 5
-; RV64-NEXT:    vmv.x.s a5, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 6
-; RV64-NEXT:    vmv.x.s a6, v12
-; RV64-NEXT:    vslidedown.vi v12, v8, 7
-; RV64-NEXT:    vmv.x.s a7, v12
-; RV64-NEXT:    mv t0, sp
-; RV64-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
-; RV64-NEXT:    vse32.v v8, (t0)
-; RV64-NEXT:    lw t0, 32(sp)
-; RV64-NEXT:    lw t1, 36(sp)
-; RV64-NEXT:    lw t2, 40(sp)
-; RV64-NEXT:    lw t3, 44(sp)
-; RV64-NEXT:    lw t4, 48(sp)
-; RV64-NEXT:    lw t5, 52(sp)
-; RV64-NEXT:    lw t6, 56(sp)
-; RV64-NEXT:    add a0, a0, a1
-; RV64-NEXT:    add a2, a2, a3
-; RV64-NEXT:    add a0, a0, a2
-; RV64-NEXT:    add a4, a4, a5
-; RV64-NEXT:    add a4, a4, a6
-; RV64-NEXT:    add a0, a0, a4
-; RV64-NEXT:    add a7, a7, t0
-; RV64-NEXT:    add a7, a7, t1
-; RV64-NEXT:    add a7, a7, t2
-; RV64-NEXT:    add a0, a0, a7
-; RV64-NEXT:    add t3, t3, t4
-; RV64-NEXT:    add t3, t3, t5
-; RV64-NEXT:    add t3, t3, t6
-; RV64-NEXT:    addw a0, a0, t3
-; RV64-NEXT:    addi sp, s0, -128
-; RV64-NEXT:    ld ra, 120(sp) # 8-byte Folded Reload
-; RV64-NEXT:    ld s0, 112(sp) # 8-byte Folded Reload
-; RV64-NEXT:    addi sp, sp, 128
-; RV64-NEXT:    ret
+; CHECK-LABEL: reduce_sum_16xi32_prefix15:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
+; CHECK-NEXT:    vle32.v v8, (a0)
+; CHECK-NEXT:    vmv.s.x v12, zero
+; CHECK-NEXT:    vslideup.vi v8, v12, 15
+; CHECK-NEXT:    vredsum.vs v8, v8, v12
+; CHECK-NEXT:    vmv.x.s a0, v8
+; CHECK-NEXT:    ret
   %v = load <16 x i32>, ptr %p, align 256
   %e0 = extractelement <16 x i32> %v, i32 0
   %e1 = extractelement <16 x i32> %v, i32 1
@@ -1238,6 +448,3 @@ define i32 @reduce_sum_16xi32_prefix15(ptr %p) {
   %add13 = add i32 %add12, %e14
   ret i32 %add13
 }
-
-;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line:
-; CHECK: {{.*}}