[llvm] [RISCV][DAG][TLI] Avoid scalarizing length decreasing shuffles (PR #115532)
Philip Reames via llvm-commits
llvm-commits at lists.llvm.org
Fri Nov 8 10:58:24 PST 2024
https://github.com/preames created https://github.com/llvm/llvm-project/pull/115532
This change adds a TLI hook so that a target can require that a length decreasing shuffle be padded to the source type instead of scalarized. The existing lowering path scalarizes many of these (there's an early check for cases which can make two operand shuffles), and then relies on a DAG combine to reconstruct those profitable to lower as a shuffle.
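As a quick illustration of the padded form (a hand-written sketch, not taken from the patch or the test suite; the function name and mask are hypothetical), consider a length decreasing shuffle such as:

  define <4 x i32> @pick_evens(<8 x i32> %v) {
    %r = shufflevector <8 x i32> %v, <8 x i32> poison, <4 x i32> <i32 0, i32 2, i32 4, i32 6>
    ret <4 x i32> %r
  }

With the hook returning false, SDAG construction now roughly builds the shuffle at the source width, with the mask resized to the source length and padded with -1 (undef), then extracts the destination type:

  ; sketch of the resulting nodes, not actual -debug output
  t1 = vector_shuffle<0,2,4,6,-1,-1,-1,-1> %v, undef   ; still <8 x i32>
  t2 = extract_subvector t1, 0                         ; back to <4 x i32>

instead of falling back to per-element extracts feeding a build_vector, which a later combine then has to try to reconstruct.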
For RISCV, this is undesirable mostly because scalarization is so dang expensive. Each scalar element is at least one vslide* instruction (and often two) - which is linear in LMUL. On the other hand, a generic shuffle is at worst one vrgather - which is quadratic in LMUL. For VL>8, the shuffle will always win. For smaller VL values, we can often use cheaper lowerings. Not always, but usually. We also have the option of scalarizing when profitable - though we don't today.
There are two unprofitable cases exposed in the test changes:
* The first is the increase in VL across the vnsrl (deinterleave) shuffles. This is an artifact of the fact that the single source lowering doesn't try to pick a smaller VL based on the undef tail. We could fix this, but honestly, I doubt it matters since we're not changing LMUL. (Once enabled, VLOptimizer should undo these too.) A concrete instance is sketched after this list.
* The second is more interesting. For high shuffles formed via the DAG combine, we previously formed a two operand shuffle - which results in two LMUL/2 shuffles. For m2 and above, this was profitable. For m1 and below, it was actively unprofitable. I briefly explored splitting high LMUL shuffles, but doing so interacts badly with our existing recursion for two operand shuffles during lowering. We can come back to this if desired, but I don't think this needs to be blocking.
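To make the first bullet concrete, here's the shuffle from the load_factor2_v3 test in the diff below (the IR line is copied from the test; the padded form is my sketch of what gets built, not compiler output):

  %v0 = shufflevector <6 x i32> %interleaved.vec, <6 x i32> poison, <3 x i32> <i32 0, i32 2, i32 4>

  ; sketch: padded to the <6 x i32> source type as
  ;   vector_shuffle<0,2,4,-1,-1,-1> %interleaved.vec, undef
  ; followed by an extract_subvector back to <3 x i32>. The vnsrl lowering then
  ; derives its VL from the padded source length rather than from the 3 defined
  ; elements, which is the VL increase called out in the first bullet.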
>From e30c3d522304ba9537b5581949d0004b3df72d4e Mon Sep 17 00:00:00 2001
From: Philip Reames <preames at rivosinc.com>
Date: Fri, 8 Nov 2024 10:42:10 -0800
Subject: [PATCH] [RISCV][DAG][TLI] Avoid scalarizing length decreasing
shuffles
This change adds a TLI hook so that a target can require that a length
decreasing shuffle be padded to the source type instead of scalarized. The
existing lowering path scalarizes many of these (there's an early check for
cases which can make two operand shuffles), and then relies on a DAG combine
to reconstruct those profitable to lower as a shuffle.
For RISCV, this is undesirable mostly because scalarization is so dang
expensive. Each scalar element is at least one vslide* instruction
(and often two) - which is linear in LMUL. On the other hand, a
generic shuffle is at worst one vrgather - which is quadratic in LMUL.
For VL>8, the shuffle will always win. For smaller VL values,
we can often use cheaper lowerings. Not always, but usually. We
also have the option of scalarizing when profitable - though we don't
today.
There are two unprofitable cases exposed in the test changes:
* The first is the increase in VL across the vnsrl (deinterleave)
shuffles. This is an artifact of the fact that the single source
lowering doesn't try to pick a smaller VL based on the undef tail.
We could fix this, but honestly, I doubt it matters since we're
not changing LMUL. (Once enabled, VLOptimizer should undo these
too.)
* The second is more interesting. For high shuffles formed via the
DAG combine, we previously formed a two operand shuffle - which
results in two LMUL/2 shuffles. For m2 and above, this was
profitable. For m1 and below, it was actively unprofitable. I
briefly explored splitting high LMUL shuffles, but doing so
interacts badly with our existing recursion for two operand
shuffles during lowering. We can come back to this if desired,
but I don't think this needs to be blocking.
---
llvm/include/llvm/CodeGen/TargetLowering.h | 8 +
.../SelectionDAG/SelectionDAGBuilder.cpp | 19 +-
llvm/lib/Target/RISCV/RISCVISelLowering.h | 6 +
.../RISCV/rvv/fixed-vectors-int-shuffles.ll | 18 +-
.../rvv/fixed-vectors-interleaved-access.ll | 1056 ++++++-----------
.../fixed-vectors-shuffle-changes-length.ll | 245 ++--
.../rvv/fixed-vectors-shuffle-deinterleave.ll | 129 +-
.../rvv/fixed-vectors-shufflevector-vnsrl.ll | 198 ++--
llvm/test/CodeGen/RISCV/rvv/vl-opt-op-info.ll | 6 +-
9 files changed, 599 insertions(+), 1086 deletions(-)
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index e0b638201a0474..5fe2b7949cf284 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -545,6 +545,14 @@ class TargetLoweringBase {
return DefinedValues < 3;
}
+  /// In SDAG construction, should length decreasing shuffles be expanded
+  /// to a sequence of extracts and inserts if they can't be recognized
+  /// via the two extracted operand form?
+  virtual bool
+  shouldScalarizeLengthDecreasingShuffle() const {
+    return true;
+  }
+
/// Return true if integer divide is usually cheaper than a sequence of
/// several shifts, adds, and multiplies for this target.
/// The definition of "cheaper" may depend on whether we're optimizing
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 3b046aa25f5444..f4e011e4cd91f5 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -4181,9 +4181,24 @@ void SelectionDAGBuilder::visitShuffleVector(const User &I) {
return;
}
- // We can't use either concat vectors or extract subvectors so fall back to
+ // If the target prefers, emit a padded shuffle vector at the
+ // source operand width, then extract the original destination
+ // type.
+  if (!TLI.shouldScalarizeLengthDecreasingShuffle()) {
+ EVT PaddedVT = EVT::getVectorVT(*DAG.getContext(), VT.getScalarType(),
+ SrcNumElts);
+ SmallVector<int, 8> ExtendedMask(Mask);
+ ExtendedMask.resize(SrcNumElts, -1);
+ SDValue Result =
+ DAG.getVectorShuffle(PaddedVT, DL, Src1, Src2, ExtendedMask);
+ Result = DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, VT, Result,
+ DAG.getVectorIdxConstant(0, DL));
+ setValue(&I, Result);
+ return;
+ }
+
+ // We can't use either padding or extract subvectors so fall back to
// replacing the shuffle with extract and build vector.
- // to insert and build vector.
EVT EltVT = VT.getVectorElementType();
SmallVector<SDValue,8> Ops;
for (int Idx : Mask) {
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.h b/llvm/lib/Target/RISCV/RISCVISelLowering.h
index 9ae70d257fa442..a8f385a59c7a63 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.h
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.h
@@ -586,6 +586,12 @@ class RISCVTargetLowering : public TargetLowering {
shouldExpandBuildVectorWithShuffles(EVT VT,
unsigned DefinedValues) const override;
+ bool
+  shouldScalarizeLengthDecreasingShuffle() const override {
+ // We always prefer to pad as a canonical form.
+ return false;
+ }
+
bool shouldExpandCttzElements(EVT VT) const override;
/// Return the cost of LMUL for linear operations.
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
index 1e77b3710928d2..ea5d509dca2d0c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
@@ -721,21 +721,11 @@ define <8 x i32> @shuffle_v8i32_2(<8 x i32> %x, <8 x i32> %y) {
define <8 x i8> @shuffle_v64i8_v8i8(<64 x i8> %wide.vec) {
; CHECK-LABEL: shuffle_v64i8_v8i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: li a0, 32
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
-; CHECK-NEXT: vid.v v12
-; CHECK-NEXT: vsll.vi v14, v12, 3
-; CHECK-NEXT: vrgather.vv v12, v8, v14
+; CHECK-NEXT: li a0, 64
; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
-; CHECK-NEXT: vslidedown.vx v8, v8, a0
-; CHECK-NEXT: li a1, 240
-; CHECK-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; CHECK-NEXT: vmv.s.x v0, a1
-; CHECK-NEXT: lui a1, 98561
-; CHECK-NEXT: addi a1, a1, -2048
-; CHECK-NEXT: vmv.v.x v10, a1
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, mu
-; CHECK-NEXT: vrgather.vv v12, v8, v10, v0.t
+; CHECK-NEXT: vid.v v12
+; CHECK-NEXT: vsll.vi v16, v12, 3
+; CHECK-NEXT: vrgather.vv v12, v8, v16
; CHECK-NEXT: vmv1r.v v8, v12
; CHECK-NEXT: ret
%s = shufflevector <64 x i8> %wide.vec, <64 x i8> poison, <8 x i32> <i32 0, i32 8, i32 16, i32 24, i32 32, i32 40, i32 48, i32 56>
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-interleaved-access.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-interleaved-access.ll
index b56814ea4c372a..bd0c71d8d89903 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-interleaved-access.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-interleaved-access.ll
@@ -8,25 +8,16 @@
; FIXME: This should be widened to a vlseg2 of <4 x i32> with VL set to 3
define {<3 x i32>, <3 x i32>} @load_factor2_v3(ptr %ptr) {
-; RV32-LABEL: load_factor2_v3:
-; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 6, e32, m2, ta, ma
-; RV32-NEXT: vle32.v v10, (a0)
-; RV32-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; RV32-NEXT: vnsrl.wi v8, v10, 0
-; RV32-NEXT: li a0, 32
-; RV32-NEXT: vnsrl.wx v9, v10, a0
-; RV32-NEXT: ret
-;
-; RV64-LABEL: load_factor2_v3:
-; RV64: # %bb.0:
-; RV64-NEXT: vsetivli zero, 6, e32, m2, ta, ma
-; RV64-NEXT: vle32.v v10, (a0)
-; RV64-NEXT: li a0, 32
-; RV64-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; RV64-NEXT: vnsrl.wx v9, v10, a0
-; RV64-NEXT: vnsrl.wi v8, v10, 0
-; RV64-NEXT: ret
+; CHECK-LABEL: load_factor2_v3:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 6, e32, m2, ta, ma
+; CHECK-NEXT: vle32.v v12, (a0)
+; CHECK-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; CHECK-NEXT: vnsrl.wi v8, v12, 0
+; CHECK-NEXT: li a0, 32
+; CHECK-NEXT: vnsrl.wx v10, v12, a0
+; CHECK-NEXT: vmv1r.v v9, v10
+; CHECK-NEXT: ret
%interleaved.vec = load <6 x i32>, ptr %ptr
%v0 = shufflevector <6 x i32> %interleaved.vec, <6 x i32> poison, <3 x i32> <i32 0, i32 2, i32 4>
%v1 = shufflevector <6 x i32> %interleaved.vec, <6 x i32> poison, <3 x i32> <i32 1, i32 3, i32 5>
@@ -183,478 +174,302 @@ define {<8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>} @load_
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
; RV32-NEXT: csrr a2, vlenb
-; RV32-NEXT: li a3, 84
+; RV32-NEXT: li a3, 61
; RV32-NEXT: mul a2, a2, a3
; RV32-NEXT: sub sp, sp, a2
-; RV32-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0xd4, 0x00, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 84 * vlenb
-; RV32-NEXT: addi a3, a1, 256
-; RV32-NEXT: li a2, 32
-; RV32-NEXT: vsetvli zero, a2, e32, m8, ta, ma
-; RV32-NEXT: vle32.v v8, (a3)
+; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x3d, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 61 * vlenb
+; RV32-NEXT: addi a2, a1, 256
+; RV32-NEXT: addi a3, a1, 128
+; RV32-NEXT: vsetivli zero, 16, e16, m2, ta, ma
+; RV32-NEXT: vle64.v v24, (a3)
; RV32-NEXT: csrr a3, vlenb
-; RV32-NEXT: li a4, 76
+; RV32-NEXT: li a4, 53
; RV32-NEXT: mul a3, a3, a4
; RV32-NEXT: add a3, sp, a3
; RV32-NEXT: addi a3, a3, 16
-; RV32-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
-; RV32-NEXT: addi a3, a1, 128
-; RV32-NEXT: vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT: vslideup.vi v4, v8, 4
-; RV32-NEXT: csrr a4, vlenb
-; RV32-NEXT: li a5, 40
-; RV32-NEXT: mul a4, a4, a5
-; RV32-NEXT: add a4, sp, a4
-; RV32-NEXT: addi a4, a4, 16
-; RV32-NEXT: vs4r.v v4, (a4) # Unknown-size Folded Spill
-; RV32-NEXT: lui a4, 12
-; RV32-NEXT: vmv.s.x v0, a4
-; RV32-NEXT: csrr a4, vlenb
-; RV32-NEXT: li a5, 24
-; RV32-NEXT: mul a4, a4, a5
-; RV32-NEXT: add a4, sp, a4
-; RV32-NEXT: addi a4, a4, 16
-; RV32-NEXT: vs1r.v v0, (a4) # Unknown-size Folded Spill
-; RV32-NEXT: vsetivli zero, 16, e32, m8, ta, ma
-; RV32-NEXT: vslidedown.vi v8, v8, 16
-; RV32-NEXT: csrr a4, vlenb
-; RV32-NEXT: li a5, 48
-; RV32-NEXT: mul a4, a4, a5
-; RV32-NEXT: add a4, sp, a4
-; RV32-NEXT: addi a4, a4, 16
-; RV32-NEXT: vs8r.v v8, (a4) # Unknown-size Folded Spill
-; RV32-NEXT: vsetivli zero, 16, e32, m4, ta, mu
-; RV32-NEXT: vslideup.vi v4, v8, 10, v0.t
-; RV32-NEXT: lui a4, %hi(.LCPI8_0)
-; RV32-NEXT: addi a4, a4, %lo(.LCPI8_0)
-; RV32-NEXT: vsetvli zero, a2, e32, m8, ta, mu
-; RV32-NEXT: vle16.v v0, (a4)
-; RV32-NEXT: lui a4, %hi(.LCPI8_1)
-; RV32-NEXT: addi a4, a4, %lo(.LCPI8_1)
-; RV32-NEXT: lui a5, 1
-; RV32-NEXT: vle16.v v8, (a4)
-; RV32-NEXT: csrr a4, vlenb
-; RV32-NEXT: li a6, 56
-; RV32-NEXT: mul a4, a4, a6
-; RV32-NEXT: add a4, sp, a4
-; RV32-NEXT: addi a4, a4, 16
-; RV32-NEXT: vs4r.v v8, (a4) # Unknown-size Folded Spill
-; RV32-NEXT: vle32.v v8, (a1)
+; RV32-NEXT: vs8r.v v24, (a3) # Unknown-size Folded Spill
+; RV32-NEXT: vle64.v v8, (a1)
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a4, 68
-; RV32-NEXT: mul a1, a1, a4
+; RV32-NEXT: li a3, 45
+; RV32-NEXT: mul a1, a1, a3
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
; RV32-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: vle32.v v24, (a3)
+; RV32-NEXT: vid.v v16
+; RV32-NEXT: li a1, 6
+; RV32-NEXT: vmul.vx v4, v16, a1
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, ma
+; RV32-NEXT: vrgatherei16.vv v16, v8, v4
+; RV32-NEXT: li a1, 56
+; RV32-NEXT: vmv.s.x v7, a1
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
+; RV32-NEXT: vadd.vi v8, v4, -16
+; RV32-NEXT: vmv1r.v v0, v7
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, mu
+; RV32-NEXT: vrgatherei16.vv v16, v24, v8, v0.t
+; RV32-NEXT: li a1, 192
+; RV32-NEXT: lui a3, 160
+; RV32-NEXT: vle64.v v24, (a2)
+; RV32-NEXT: csrr a2, vlenb
+; RV32-NEXT: li a4, 37
+; RV32-NEXT: mul a2, a2, a4
+; RV32-NEXT: add a2, sp, a2
+; RV32-NEXT: addi a2, a2, 16
+; RV32-NEXT: vs8r.v v24, (a2) # Unknown-size Folded Spill
+; RV32-NEXT: addi a3, a3, 4
+; RV32-NEXT: vmv.s.x v6, a1
+; RV32-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV32-NEXT: vmv.v.x v8, a3
+; RV32-NEXT: vmv1r.v v0, v6
+; RV32-NEXT: vsetivli zero, 16, e64, m8, ta, mu
+; RV32-NEXT: vrgatherei16.vv v16, v24, v8, v0.t
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 60
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 29
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs8r.v v24, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: addi a1, a5, -64
-; RV32-NEXT: vmv.s.x v16, a1
+; RV32-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
+; RV32-NEXT: vadd.vi v8, v4, 1
+; RV32-NEXT: vadd.vi v2, v4, -15
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 44
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 45
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs1r.v v16, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: vrgatherei16.vv v16, v8, v0
+; RV32-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, mu
+; RV32-NEXT: vrgatherei16.vv v24, v16, v8
+; RV32-NEXT: vmv1r.v v0, v7
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 44
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 53
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl1r.v v0, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 56
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vrgatherei16.vv v24, v8, v2, v0.t
+; RV32-NEXT: lui a1, 176
+; RV32-NEXT: addi a1, a1, 5
+; RV32-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV32-NEXT: vmv.v.x v2, a1
+; RV32-NEXT: vmv1r.v v0, v6
+; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: li a2, 37
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl4r.v v8, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vrgatherei16.vv v16, v24, v8, v0.t
-; RV32-NEXT: vsetivli zero, 12, e32, m4, tu, ma
-; RV32-NEXT: vmv.v.v v4, v16
+; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vsetivli zero, 16, e64, m8, ta, mu
+; RV32-NEXT: vrgatherei16.vv v24, v8, v2, v0.t
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 36
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 21
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs4r.v v4, (a1) # Unknown-size Folded Spill
+; RV32-NEXT: vs8r.v v24, (a1) # Unknown-size Folded Spill
+; RV32-NEXT: li a1, 24
+; RV32-NEXT: lui a2, %hi(.LCPI8_0)
+; RV32-NEXT: addi a2, a2, %lo(.LCPI8_0)
+; RV32-NEXT: li a3, 224
+; RV32-NEXT: vmv.s.x v0, a1
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 76
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: slli a4, a1, 2
+; RV32-NEXT: add a1, a4, a1
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vsetivli zero, 16, e32, m4, ta, mu
-; RV32-NEXT: vslideup.vi v12, v8, 2
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 24
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: vs1r.v v0, (a1) # Unknown-size Folded Spill
+; RV32-NEXT: vmv.s.x v1, a3
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
+; RV32-NEXT: vadd.vi v8, v4, 2
+; RV32-NEXT: vle16.v v6, (a2)
+; RV32-NEXT: vadd.vi v12, v4, -14
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, mu
+; RV32-NEXT: vrgatherei16.vv v24, v16, v8
+; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: li a2, 53
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl1r.v v1, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vrgatherei16.vv v24, v16, v12, v0.t
; RV32-NEXT: vmv1r.v v0, v1
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 48
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 37
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vslideup.vi v12, v16, 8, v0.t
+; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vrgatherei16.vv v24, v8, v6, v0.t
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 56
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 13
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs4r.v v12, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: lui a1, %hi(.LCPI8_2)
-; RV32-NEXT: addi a1, a1, %lo(.LCPI8_2)
-; RV32-NEXT: lui a3, %hi(.LCPI8_3)
-; RV32-NEXT: addi a3, a3, %lo(.LCPI8_3)
-; RV32-NEXT: vsetvli zero, a2, e16, m4, ta, ma
-; RV32-NEXT: vle16.v v12, (a1)
-; RV32-NEXT: vle16.v v8, (a3)
+; RV32-NEXT: vs8r.v v24, (a1) # Unknown-size Folded Spill
+; RV32-NEXT: lui a1, %hi(.LCPI8_1)
+; RV32-NEXT: addi a1, a1, %lo(.LCPI8_1)
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
+; RV32-NEXT: vadd.vi v2, v4, 3
+; RV32-NEXT: vle16.v v6, (a1)
+; RV32-NEXT: vadd.vi v16, v4, -13
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 28
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: slli a1, a1, 1
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs4r.v v8, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: lui a1, %hi(.LCPI8_4)
-; RV32-NEXT: addi a1, a1, %lo(.LCPI8_4)
-; RV32-NEXT: vsetivli zero, 16, e16, m2, ta, ma
-; RV32-NEXT: vle16.v v2, (a1)
+; RV32-NEXT: vs2r.v v16, (a1) # Unknown-size Folded Spill
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 68
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 45
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
; RV32-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vsetvli zero, a2, e32, m8, ta, mu
-; RV32-NEXT: vrgatherei16.vv v24, v16, v12
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, mu
+; RV32-NEXT: vrgatherei16.vv v24, v16, v2
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 44
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: slli a2, a1, 2
+; RV32-NEXT: add a1, a2, a1
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
; RV32-NEXT: vl1r.v v0, (a1) # Unknown-size Folded Reload
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 60
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 28
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl4r.v v4, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vrgatherei16.vv v24, v8, v4, v0.t
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 56
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 53
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl4r.v v8, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vsetivli zero, 12, e32, m4, tu, ma
-; RV32-NEXT: vmv.v.v v8, v24
+; RV32-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 56
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: slli a1, a1, 1
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs4r.v v8, (a1) # Unknown-size Folded Spill
+; RV32-NEXT: vl2r.v v2, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vrgatherei16.vv v24, v16, v2, v0.t
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 76
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: slli a1, a1, 2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vsetivli zero, 16, e32, m4, ta, mu
-; RV32-NEXT: vrgatherei16.vv v8, v24, v2
+; RV32-NEXT: vs1r.v v1, (a1) # Unknown-size Folded Spill
; RV32-NEXT: vmv1r.v v0, v1
+; RV32-NEXT: vrgatherei16.vv v24, v8, v6, v0.t
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 48
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vslideup.vi v8, v24, 6, v0.t
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 44
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: slli a2, a1, 2
+; RV32-NEXT: add a1, a2, a1
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs4r.v v8, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: lui a1, %hi(.LCPI8_5)
-; RV32-NEXT: addi a1, a1, %lo(.LCPI8_5)
-; RV32-NEXT: lui a3, %hi(.LCPI8_6)
-; RV32-NEXT: addi a3, a3, %lo(.LCPI8_6)
-; RV32-NEXT: vsetvli zero, a2, e32, m8, ta, mu
-; RV32-NEXT: vle16.v v24, (a1)
-; RV32-NEXT: vle16.v v4, (a3)
-; RV32-NEXT: li a1, 960
-; RV32-NEXT: vmv.s.x v0, a1
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 4
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs1r.v v0, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: vrgatherei16.vv v8, v16, v24
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 60
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vrgatherei16.vv v8, v24, v4, v0.t
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 28
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: lui a1, %hi(.LCPI8_7)
-; RV32-NEXT: addi a1, a1, %lo(.LCPI8_7)
-; RV32-NEXT: lui a3, %hi(.LCPI8_8)
-; RV32-NEXT: addi a3, a3, %lo(.LCPI8_8)
-; RV32-NEXT: vsetivli zero, 16, e16, m2, ta, ma
-; RV32-NEXT: vle16.v v16, (a1)
-; RV32-NEXT: lui a1, %hi(.LCPI8_9)
-; RV32-NEXT: addi a1, a1, %lo(.LCPI8_9)
-; RV32-NEXT: vsetvli zero, a2, e16, m4, ta, ma
-; RV32-NEXT: vle16.v v8, (a3)
-; RV32-NEXT: csrr a3, vlenb
-; RV32-NEXT: slli a3, a3, 2
-; RV32-NEXT: add a3, sp, a3
-; RV32-NEXT: addi a3, a3, 16
-; RV32-NEXT: vs4r.v v8, (a3) # Unknown-size Folded Spill
-; RV32-NEXT: vle16.v v8, (a1)
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
+; RV32-NEXT: vs8r.v v24, (a1) # Unknown-size Folded Spill
+; RV32-NEXT: li a1, 28
+; RV32-NEXT: lui a2, %hi(.LCPI8_2)
+; RV32-NEXT: addi a2, a2, %lo(.LCPI8_2)
+; RV32-NEXT: vmv.s.x v24, a1
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
+; RV32-NEXT: vadd.vi v28, v4, 4
+; RV32-NEXT: vle16.v v30, (a2)
+; RV32-NEXT: vadd.vi v6, v4, -12
+; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: li a2, 45
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs4r.v v8, (a1) # Unknown-size Folded Spill
+; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, mu
+; RV32-NEXT: vrgatherei16.vv v16, v8, v28
+; RV32-NEXT: vmv1r.v v0, v24
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 76
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 53
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vsetivli zero, 16, e32, m4, ta, mu
-; RV32-NEXT: vrgatherei16.vv v12, v8, v16
+; RV32-NEXT: vrgatherei16.vv v16, v8, v6, v0.t
; RV32-NEXT: vmv1r.v v0, v1
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 48
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 37
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vslideup.vi v12, v16, 4, v0.t
+; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vrgatherei16.vv v16, v8, v30, v0.t
+; RV32-NEXT: lui a1, %hi(.LCPI8_3)
+; RV32-NEXT: addi a1, a1, %lo(.LCPI8_3)
+; RV32-NEXT: vsetvli zero, zero, e16, m2, ta, ma
+; RV32-NEXT: vadd.vi v28, v4, 5
+; RV32-NEXT: vle16.v v8, (a1)
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 24
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: slli a1, a1, 1
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs4r.v v12, (a1) # Unknown-size Folded Spill
+; RV32-NEXT: vs2r.v v8, (a1) # Unknown-size Folded Spill
+; RV32-NEXT: vadd.vi v8, v4, -11
+; RV32-NEXT: addi a1, sp, 16
+; RV32-NEXT: vs2r.v v8, (a1) # Unknown-size Folded Spill
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 68
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 45
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
; RV32-NEXT: vl8r.v v0, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vsetvli zero, zero, e64, m8, ta, mu
+; RV32-NEXT: vrgatherei16.vv v8, v0, v28
+; RV32-NEXT: vmv1r.v v0, v24
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 2
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl4r.v v20, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vsetvli zero, a2, e32, m8, ta, mu
-; RV32-NEXT: vrgatherei16.vv v8, v0, v20
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 4
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl1r.v v0, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl4r.v v20, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vrgatherei16.vv v8, v24, v20, v0.t
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 4
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: lui a1, %hi(.LCPI8_10)
-; RV32-NEXT: addi a1, a1, %lo(.LCPI8_10)
-; RV32-NEXT: vsetivli zero, 16, e32, m4, ta, mu
-; RV32-NEXT: vle16.v v12, (a1)
-; RV32-NEXT: lui a1, 15
-; RV32-NEXT: vmv.s.x v3, a1
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 76
-; RV32-NEXT: mul a1, a1, a3
+; RV32-NEXT: li a2, 53
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
; RV32-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vslideup.vi v8, v24, 6
-; RV32-NEXT: vmv1r.v v0, v3
-; RV32-NEXT: vrgatherei16.vv v8, v16, v12, v0.t
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 76
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs4r.v v8, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: vmv4r.v v24, v16
-; RV32-NEXT: lui a1, %hi(.LCPI8_11)
-; RV32-NEXT: addi a1, a1, %lo(.LCPI8_11)
-; RV32-NEXT: lui a3, %hi(.LCPI8_12)
-; RV32-NEXT: addi a3, a3, %lo(.LCPI8_12)
-; RV32-NEXT: vsetvli zero, a2, e32, m8, ta, mu
-; RV32-NEXT: vle16.v v28, (a1)
-; RV32-NEXT: vle16.v v4, (a3)
-; RV32-NEXT: li a1, 1008
-; RV32-NEXT: vmv.s.x v0, a1
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 2
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs1r.v v0, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 68
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vrgatherei16.vv v8, v16, v28
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 60
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vrgatherei16.vv v8, v16, v4, v0.t
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: lui a1, %hi(.LCPI8_13)
-; RV32-NEXT: addi a1, a1, %lo(.LCPI8_13)
-; RV32-NEXT: lui a3, %hi(.LCPI8_14)
-; RV32-NEXT: addi a3, a3, %lo(.LCPI8_14)
-; RV32-NEXT: vsetivli zero, 16, e16, m2, ta, ma
-; RV32-NEXT: vle16.v v8, (a1)
-; RV32-NEXT: lui a1, %hi(.LCPI8_15)
-; RV32-NEXT: addi a1, a1, %lo(.LCPI8_15)
-; RV32-NEXT: vsetvli zero, a2, e16, m4, ta, ma
-; RV32-NEXT: vle16.v v28, (a3)
-; RV32-NEXT: vle16.v v12, (a1)
; RV32-NEXT: addi a1, sp, 16
-; RV32-NEXT: vs4r.v v12, (a1) # Unknown-size Folded Spill
-; RV32-NEXT: vmv1r.v v0, v3
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 40
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl4r.v v16, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vsetivli zero, 16, e32, m4, ta, mu
-; RV32-NEXT: vrgatherei16.vv v16, v24, v8, v0.t
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 44
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl4r.v v20, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 28
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vsetivli zero, 10, e32, m4, tu, ma
-; RV32-NEXT: vmv.v.v v20, v8
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a3, 68
-; RV32-NEXT: mul a1, a1, a3
-; RV32-NEXT: add a1, sp, a1
-; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v0, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vsetvli zero, a2, e32, m8, ta, mu
-; RV32-NEXT: vrgatherei16.vv v8, v0, v28
+; RV32-NEXT: vl2r.v v6, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vrgatherei16.vv v8, v24, v6, v0.t
; RV32-NEXT: csrr a1, vlenb
; RV32-NEXT: slli a1, a1, 2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
; RV32-NEXT: vl1r.v v0, (a1) # Unknown-size Folded Reload
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a2, 60
+; RV32-NEXT: li a2, 37
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
; RV32-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: addi a1, sp, 16
-; RV32-NEXT: vl4r.v v4, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vrgatherei16.vv v8, v24, v4, v0.t
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a2, 24
-; RV32-NEXT: mul a1, a1, a2
+; RV32-NEXT: slli a1, a1, 1
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl4r.v v28, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vl2r.v v6, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vrgatherei16.vv v8, v24, v6, v0.t
+; RV32-NEXT: vslideup.vi v16, v8, 8
+; RV32-NEXT: addi a1, a0, 256
+; RV32-NEXT: vse64.v v16, (a1)
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 4
+; RV32-NEXT: slli a2, a1, 2
+; RV32-NEXT: add a1, a2, a1
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v0, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vsetivli zero, 10, e32, m4, tu, ma
-; RV32-NEXT: vmv.v.v v28, v0
+; RV32-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a2, 76
+; RV32-NEXT: li a2, 13
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl4r.v v24, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vslideup.vi v8, v16, 8
+; RV32-NEXT: addi a1, a0, 128
+; RV32-NEXT: vse64.v v8, (a1)
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
+; RV32-NEXT: li a2, 21
+; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl8r.v v0, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vmv.v.v v24, v0
-; RV32-NEXT: vmv.v.v v16, v8
-; RV32-NEXT: addi a1, a0, 320
-; RV32-NEXT: vsetivli zero, 16, e32, m4, ta, ma
-; RV32-NEXT: vse32.v v16, (a1)
-; RV32-NEXT: addi a1, a0, 256
-; RV32-NEXT: vse32.v v24, (a1)
-; RV32-NEXT: addi a1, a0, 192
-; RV32-NEXT: vse32.v v28, (a1)
-; RV32-NEXT: addi a1, a0, 128
-; RV32-NEXT: vse32.v v20, (a1)
-; RV32-NEXT: addi a1, a0, 64
-; RV32-NEXT: csrr a2, vlenb
-; RV32-NEXT: li a3, 56
-; RV32-NEXT: mul a2, a2, a3
-; RV32-NEXT: add a2, sp, a2
-; RV32-NEXT: addi a2, a2, 16
-; RV32-NEXT: vl4r.v v8, (a2) # Unknown-size Folded Reload
-; RV32-NEXT: vse32.v v8, (a1)
+; RV32-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: li a2, 36
+; RV32-NEXT: li a2, 29
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: add a1, sp, a1
; RV32-NEXT: addi a1, a1, 16
-; RV32-NEXT: vl4r.v v8, (a1) # Unknown-size Folded Reload
-; RV32-NEXT: vse32.v v8, (a0)
+; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV32-NEXT: vslideup.vi v8, v16, 8
+; RV32-NEXT: vse64.v v8, (a0)
; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: li a1, 84
+; RV32-NEXT: li a1, 61
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: .cfi_def_cfa sp, 16
@@ -667,432 +482,253 @@ define {<8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>} @load_
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
; RV64-NEXT: csrr a2, vlenb
-; RV64-NEXT: slli a3, a2, 6
-; RV64-NEXT: add a2, a3, a2
+; RV64-NEXT: li a3, 40
+; RV64-NEXT: mul a2, a2, a3
; RV64-NEXT: sub sp, sp, a2
-; RV64-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc1, 0x00, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 65 * vlenb
+; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x28, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 40 * vlenb
; RV64-NEXT: addi a2, a1, 256
-; RV64-NEXT: vsetivli zero, 16, e64, m8, ta, ma
-; RV64-NEXT: vle64.v v16, (a2)
-; RV64-NEXT: csrr a2, vlenb
-; RV64-NEXT: li a3, 21
-; RV64-NEXT: mul a2, a2, a3
-; RV64-NEXT: add a2, sp, a2
-; RV64-NEXT: addi a2, a2, 16
-; RV64-NEXT: vs8r.v v16, (a2) # Unknown-size Folded Spill
-; RV64-NEXT: addi a2, a1, 128
-; RV64-NEXT: vle64.v v8, (a1)
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a3, 57
-; RV64-NEXT: mul a1, a1, a3
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vsetivli zero, 8, e64, m4, ta, ma
-; RV64-NEXT: vrgather.vi v12, v16, 4
-; RV64-NEXT: li a1, 128
-; RV64-NEXT: vmv.s.x v0, a1
-; RV64-NEXT: addi a1, sp, 16
-; RV64-NEXT: vs1r.v v0, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vsetivli zero, 8, e64, m8, ta, ma
-; RV64-NEXT: vslidedown.vi v16, v16, 8
+; RV64-NEXT: addi a3, a1, 128
+; RV64-NEXT: vsetivli zero, 16, e16, m2, ta, ma
+; RV64-NEXT: vle64.v v8, (a3)
+; RV64-NEXT: csrr a3, vlenb
+; RV64-NEXT: li a4, 24
+; RV64-NEXT: mul a3, a3, a4
+; RV64-NEXT: add a3, sp, a3
+; RV64-NEXT: addi a3, a3, 16
+; RV64-NEXT: vs8r.v v8, (a3) # Unknown-size Folded Spill
+; RV64-NEXT: vle64.v v16, (a1)
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a3, 37
-; RV64-NEXT: mul a1, a1, a3
+; RV64-NEXT: slli a1, a1, 5
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
; RV64-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vsetivli zero, 8, e64, m4, ta, mu
-; RV64-NEXT: vrgather.vi v12, v16, 2, v0.t
-; RV64-NEXT: vsetivli zero, 16, e16, m2, ta, ma
-; RV64-NEXT: vid.v v10
+; RV64-NEXT: vid.v v24
; RV64-NEXT: li a1, 6
-; RV64-NEXT: vmul.vx v8, v10, a1
+; RV64-NEXT: vmul.vx v4, v24, a1
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
+; RV64-NEXT: vrgatherei16.vv v24, v16, v4
; RV64-NEXT: li a1, 56
-; RV64-NEXT: vle64.v v24, (a2)
-; RV64-NEXT: csrr a2, vlenb
-; RV64-NEXT: li a3, 45
-; RV64-NEXT: mul a2, a2, a3
-; RV64-NEXT: add a2, sp, a2
-; RV64-NEXT: addi a2, a2, 16
-; RV64-NEXT: vs8r.v v24, (a2) # Unknown-size Folded Spill
-; RV64-NEXT: vmv.s.x v10, a1
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 53
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs1r.v v10, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vadd.vi v10, v8, -16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 57
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl8r.v v0, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vmv.s.x v1, a1
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
+; RV64-NEXT: vadd.vi v16, v4, -16
+; RV64-NEXT: vmv1r.v v0, v1
; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, mu
-; RV64-NEXT: vrgatherei16.vv v16, v0, v8
-; RV64-NEXT: vmv2r.v v4, v8
+; RV64-NEXT: vrgatherei16.vv v24, v8, v16, v0.t
+; RV64-NEXT: li a1, 192
+; RV64-NEXT: lui a3, 160
+; RV64-NEXT: vle64.v v16, (a2)
+; RV64-NEXT: addi a3, a3, 4
+; RV64-NEXT: vmv.s.x v0, a1
+; RV64-NEXT: addi a1, sp, 16
+; RV64-NEXT: vs1r.v v0, (a1) # Unknown-size Folded Spill
+; RV64-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64-NEXT: vmv.v.x v10, a3
+; RV64-NEXT: vsetivli zero, 16, e64, m8, ta, mu
+; RV64-NEXT: vrgatherei16.vv v24, v16, v10, v0.t
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 53
-; RV64-NEXT: mul a1, a1, a2
+; RV64-NEXT: slli a1, a1, 3
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl1r.v v6, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vmv1r.v v0, v6
-; RV64-NEXT: vrgatherei16.vv v16, v24, v10, v0.t
-; RV64-NEXT: vsetivli zero, 6, e64, m4, tu, ma
-; RV64-NEXT: vmv.v.v v12, v16
+; RV64-NEXT: vs8r.v v24, (a1) # Unknown-size Folded Spill
+; RV64-NEXT: vmv8r.v v8, v16
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a2, a1, 4
-; RV64-NEXT: add a1, a2, a1
+; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs4r.v v12, (a1) # Unknown-size Folded Spill
+; RV64-NEXT: vs8r.v v16, (a1) # Unknown-size Folded Spill
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
+; RV64-NEXT: vadd.vi v2, v4, 1
+; RV64-NEXT: vadd.vi v6, v4, -15
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 21
-; RV64-NEXT: mul a1, a1, a2
+; RV64-NEXT: slli a1, a1, 5
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vsetivli zero, 8, e64, m4, ta, mu
-; RV64-NEXT: vrgather.vi v12, v8, 5
-; RV64-NEXT: addi a1, sp, 16
-; RV64-NEXT: vl1r.v v1, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, mu
+; RV64-NEXT: vrgatherei16.vv v16, v24, v2
; RV64-NEXT: vmv1r.v v0, v1
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 37
+; RV64-NEXT: li a2, 24
; RV64-NEXT: mul a1, a1, a2
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vrgather.vi v12, v16, 3, v0.t
-; RV64-NEXT: vmv.v.v v28, v12
-; RV64-NEXT: vsetivli zero, 16, e16, m2, ta, ma
-; RV64-NEXT: vadd.vi v24, v4, 1
-; RV64-NEXT: vadd.vi v26, v4, -15
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 57
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, mu
-; RV64-NEXT: vrgatherei16.vv v16, v8, v24
-; RV64-NEXT: vmv1r.v v0, v6
+; RV64-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vrgatherei16.vv v16, v24, v6, v0.t
+; RV64-NEXT: lui a1, 176
+; RV64-NEXT: addi a1, a1, 5
+; RV64-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64-NEXT: vmv.v.x v24, a1
+; RV64-NEXT: addi a1, sp, 16
+; RV64-NEXT: vl1r.v v0, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vsetivli zero, 16, e64, m8, ta, mu
+; RV64-NEXT: vrgatherei16.vv v16, v8, v24, v0.t
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 45
-; RV64-NEXT: mul a1, a1, a2
+; RV64-NEXT: slli a1, a1, 3
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vrgatherei16.vv v16, v8, v26, v0.t
-; RV64-NEXT: vsetivli zero, 6, e64, m4, tu, ma
-; RV64-NEXT: vmv.v.v v28, v16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 13
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs4r.v v28, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: lui a1, 16
-; RV64-NEXT: addi a1, a1, 7
-; RV64-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; RV64-NEXT: vmv.v.i v9, 6
-; RV64-NEXT: vmv.v.x v10, a1
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 21
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vsetivli zero, 8, e64, m4, ta, ma
-; RV64-NEXT: vrgatherei16.vv v12, v16, v9
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 53
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs4r.v v12, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vrgatherei16.vv v12, v16, v10
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a2, a1, 3
-; RV64-NEXT: add a1, a2, a1
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs4r.v v12, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vmv4r.v v8, v16
-; RV64-NEXT: vrgather.vi v12, v16, 2
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a2, a1, 5
-; RV64-NEXT: add a1, a2, a1
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs4r.v v12, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vrgather.vi v12, v16, 3
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 29
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs4r.v v12, (a1) # Unknown-size Folded Spill
+; RV64-NEXT: vslideup.vi v8, v16, 8
+; RV64-NEXT: vse64.v v8, (a0)
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
+; RV64-NEXT: vadd.vi v16, v4, 2
+; RV64-NEXT: vadd.vi v2, v4, -14
+; RV64-NEXT: vmv2r.v v6, v4
; RV64-NEXT: li a1, 24
; RV64-NEXT: vmv.s.x v0, a1
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 21
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
+; RV64-NEXT: addi a1, sp, 16
; RV64-NEXT: vs1r.v v0, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vsetivli zero, 16, e16, m2, ta, ma
-; RV64-NEXT: vadd.vi v16, v4, 2
-; RV64-NEXT: vadd.vi v2, v4, -14
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 57
-; RV64-NEXT: mul a1, a1, a2
+; RV64-NEXT: slli a1, a1, 5
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
; RV64-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; RV64-NEXT: vrgatherei16.vv v8, v24, v16
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 45
+; RV64-NEXT: li a2, 24
; RV64-NEXT: mul a1, a1, a2
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
; RV64-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
; RV64-NEXT: vrgatherei16.vv v8, v16, v2, v0.t
; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: slli a1, a1, 3
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
; RV64-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
+; RV64-NEXT: li a1, 224
+; RV64-NEXT: lui a2, 393219
+; RV64-NEXT: slli a2, a2, 21
+; RV64-NEXT: vmv.s.x v1, a1
+; RV64-NEXT: vsetivli zero, 4, e64, m2, ta, ma
+; RV64-NEXT: vmv.v.x v4, a2
; RV64-NEXT: vmv1r.v v0, v1
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 37
-; RV64-NEXT: mul a1, a1, a2
+; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
; RV64-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 53
-; RV64-NEXT: mul a1, a1, a2
+; RV64-NEXT: slli a1, a1, 3
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl4r.v v28, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vsetivli zero, 8, e64, m4, ta, mu
-; RV64-NEXT: vrgather.vi v28, v24, 4, v0.t
+; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vsetivli zero, 16, e64, m8, ta, mu
+; RV64-NEXT: vrgatherei16.vv v8, v24, v4, v0.t
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 53
-; RV64-NEXT: mul a1, a1, a2
+; RV64-NEXT: slli a1, a1, 3
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs4r.v v28, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vmv2r.v v8, v4
-; RV64-NEXT: vsetivli zero, 16, e16, m2, ta, ma
-; RV64-NEXT: vadd.vi v4, v4, 3
-; RV64-NEXT: vadd.vi v6, v8, -13
-; RV64-NEXT: vmv2r.v v2, v8
+; RV64-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
+; RV64-NEXT: vadd.vi v2, v6, 3
+; RV64-NEXT: vadd.vi v4, v6, -13
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 57
-; RV64-NEXT: mul a1, a1, a2
+; RV64-NEXT: slli a1, a1, 5
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
; RV64-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, mu
-; RV64-NEXT: vrgatherei16.vv v8, v24, v4
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 21
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
+; RV64-NEXT: vrgatherei16.vv v8, v24, v2
+; RV64-NEXT: addi a1, sp, 16
; RV64-NEXT: vl1r.v v0, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vrgatherei16.vv v8, v16, v6, v0.t
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 21
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vmv1r.v v0, v1
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 37
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a2, a1, 3
-; RV64-NEXT: add a1, a2, a1
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl4r.v v4, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vsetivli zero, 8, e64, m4, ta, mu
-; RV64-NEXT: vrgather.vi v4, v16, 5, v0.t
-; RV64-NEXT: lui a1, 96
-; RV64-NEXT: li a2, 192
-; RV64-NEXT: vmv.s.x v1, a2
-; RV64-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; RV64-NEXT: vmv.v.x v8, a1
+; RV64-NEXT: vrgatherei16.vv v8, v16, v4, v0.t
+; RV64-NEXT: lui a1, 208
+; RV64-NEXT: addi a1, a1, 7
+; RV64-NEXT: slli a1, a1, 16
+; RV64-NEXT: addi a1, a1, 1
+; RV64-NEXT: slli a1, a1, 16
+; RV64-NEXT: vsetivli zero, 4, e64, m2, ta, ma
+; RV64-NEXT: vmv.v.x v16, a1
; RV64-NEXT: vmv1r.v v0, v1
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a2, a1, 5
-; RV64-NEXT: add a1, a2, a1
+; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl4r.v v12, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vsetivli zero, 8, e64, m4, ta, mu
-; RV64-NEXT: vrgatherei16.vv v12, v16, v8, v0.t
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a2, a1, 5
-; RV64-NEXT: add a1, a2, a1
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs4r.v v12, (a1) # Unknown-size Folded Spill
+; RV64-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vsetivli zero, 16, e64, m8, ta, mu
+; RV64-NEXT: vrgatherei16.vv v8, v24, v16, v0.t
+; RV64-NEXT: addi a1, sp, 16
+; RV64-NEXT: vs8r.v v8, (a1) # Unknown-size Folded Spill
; RV64-NEXT: li a1, 28
-; RV64-NEXT: vmv.s.x v0, a1
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a2, a1, 3
-; RV64-NEXT: add a1, a2, a1
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs1r.v v0, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vsetivli zero, 16, e16, m2, ta, ma
-; RV64-NEXT: vadd.vi v22, v2, 4
-; RV64-NEXT: vadd.vi v20, v2, -12
+; RV64-NEXT: vmv.s.x v2, a1
+; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
+; RV64-NEXT: vadd.vi v8, v6, 4
+; RV64-NEXT: vadd.vi v4, v6, -12
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 57
-; RV64-NEXT: mul a1, a1, a2
+; RV64-NEXT: slli a1, a1, 5
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
; RV64-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, mu
-; RV64-NEXT: vrgatherei16.vv v8, v24, v22
+; RV64-NEXT: vrgatherei16.vv v16, v24, v8
+; RV64-NEXT: vmv1r.v v0, v2
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 45
+; RV64-NEXT: li a2, 24
; RV64-NEXT: mul a1, a1, a2
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vrgatherei16.vv v8, v24, v20, v0.t
-; RV64-NEXT: lui a1, 112
+; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vrgatherei16.vv v16, v8, v4, v0.t
+; RV64-NEXT: lui a1, 114689
+; RV64-NEXT: slli a1, a1, 6
; RV64-NEXT: addi a1, a1, 1
-; RV64-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; RV64-NEXT: vmv.v.x v12, a1
+; RV64-NEXT: slli a1, a1, 17
+; RV64-NEXT: vsetivli zero, 4, e64, m2, ta, ma
+; RV64-NEXT: vmv.v.x v4, a1
+; RV64-NEXT: vmv1r.v v3, v1
; RV64-NEXT: vmv1r.v v0, v1
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 29
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl4r.v v20, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vsetivli zero, 8, e64, m4, ta, mu
-; RV64-NEXT: vrgatherei16.vv v20, v16, v12, v0.t
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 29
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs4r.v v20, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 53
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl4r.v v12, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vsetivli zero, 5, e64, m4, tu, ma
-; RV64-NEXT: vmv.v.v v12, v24
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 53
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vs4r.v v12, (a1) # Unknown-size Folded Spill
-; RV64-NEXT: vsetivli zero, 16, e16, m2, ta, ma
-; RV64-NEXT: vadd.vi v12, v2, 5
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 57
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, ma
-; RV64-NEXT: vrgatherei16.vv v24, v16, v12
+; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vsetivli zero, 16, e64, m8, ta, mu
+; RV64-NEXT: vrgatherei16.vv v16, v8, v4, v0.t
; RV64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
-; RV64-NEXT: vadd.vi v12, v2, -11
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a2, a1, 3
-; RV64-NEXT: add a1, a2, a1
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl1r.v v0, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 45
-; RV64-NEXT: mul a1, a1, a2
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vadd.vi v0, v6, 5
+; RV64-NEXT: vadd.vi v4, v6, -11
; RV64-NEXT: vsetvli zero, zero, e64, m8, ta, mu
-; RV64-NEXT: vrgatherei16.vv v24, v16, v12, v0.t
-; RV64-NEXT: vmv4r.v v12, v4
+; RV64-NEXT: vrgatherei16.vv v8, v24, v0
+; RV64-NEXT: vmv1r.v v0, v2
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 21
+; RV64-NEXT: li a2, 24
; RV64-NEXT: mul a1, a1, a2
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl8r.v v0, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vsetivli zero, 5, e64, m4, tu, ma
-; RV64-NEXT: vmv.v.v v12, v0
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a2, a1, 5
-; RV64-NEXT: add a1, a2, a1
-; RV64-NEXT: add a1, sp, a1
-; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl4r.v v20, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vmv.v.v v20, v8
+; RV64-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vrgatherei16.vv v8, v24, v4, v0.t
+; RV64-NEXT: lui a1, 240
+; RV64-NEXT: addi a1, a1, 9
+; RV64-NEXT: slli a1, a1, 16
+; RV64-NEXT: addi a1, a1, 3
+; RV64-NEXT: slli a1, a1, 16
+; RV64-NEXT: vsetivli zero, 4, e64, m2, ta, ma
+; RV64-NEXT: vmv.v.x v6, a1
+; RV64-NEXT: vmv1r.v v0, v3
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: li a2, 29
-; RV64-NEXT: mul a1, a1, a2
+; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl4r.v v8, (a1) # Unknown-size Folded Reload
-; RV64-NEXT: vmv.v.v v8, v24
-; RV64-NEXT: addi a1, a0, 320
-; RV64-NEXT: vsetivli zero, 8, e64, m4, ta, ma
-; RV64-NEXT: vse64.v v8, (a1)
+; RV64-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vsetivli zero, 16, e64, m8, ta, mu
+; RV64-NEXT: vrgatherei16.vv v8, v24, v6, v0.t
+; RV64-NEXT: vslideup.vi v16, v8, 8
; RV64-NEXT: addi a1, a0, 256
-; RV64-NEXT: vse64.v v20, (a1)
-; RV64-NEXT: addi a1, a0, 192
-; RV64-NEXT: vse64.v v12, (a1)
-; RV64-NEXT: addi a1, a0, 128
-; RV64-NEXT: csrr a2, vlenb
-; RV64-NEXT: li a3, 53
-; RV64-NEXT: mul a2, a2, a3
-; RV64-NEXT: add a2, sp, a2
-; RV64-NEXT: addi a2, a2, 16
-; RV64-NEXT: vl4r.v v8, (a2) # Unknown-size Folded Reload
-; RV64-NEXT: vse64.v v8, (a1)
-; RV64-NEXT: addi a1, a0, 64
-; RV64-NEXT: csrr a2, vlenb
-; RV64-NEXT: li a3, 13
-; RV64-NEXT: mul a2, a2, a3
-; RV64-NEXT: add a2, sp, a2
-; RV64-NEXT: addi a2, a2, 16
-; RV64-NEXT: vl4r.v v8, (a2) # Unknown-size Folded Reload
-; RV64-NEXT: vse64.v v8, (a1)
+; RV64-NEXT: vse64.v v16, (a1)
+; RV64-NEXT: addi a1, sp, 16
+; RV64-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a2, a1, 4
-; RV64-NEXT: add a1, a2, a1
+; RV64-NEXT: slli a1, a1, 3
; RV64-NEXT: add a1, sp, a1
; RV64-NEXT: addi a1, a1, 16
-; RV64-NEXT: vl4r.v v8, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
+; RV64-NEXT: vslideup.vi v8, v16, 8
+; RV64-NEXT: addi a0, a0, 128
; RV64-NEXT: vse64.v v8, (a0)
; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a1, a0, 6
-; RV64-NEXT: add a0, a1, a0
+; RV64-NEXT: li a1, 40
+; RV64-NEXT: mul a0, a0, a1
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: .cfi_def_cfa sp, 16
; RV64-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
index c9e6a8730eec7e..e97a8f7deda510 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
@@ -6,90 +6,57 @@
define <8 x i1> @v8i1_v16i1(<16 x i1>) {
; RV32-LABEL: v8i1_v16i1:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV32-NEXT: vmv.x.s a0, v0
-; RV32-NEXT: slli a1, a0, 18
-; RV32-NEXT: srli a1, a1, 31
-; RV32-NEXT: srli a2, a0, 31
-; RV32-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; RV32-NEXT: vmv.v.x v8, a2
-; RV32-NEXT: vslide1down.vx v8, v8, a1
-; RV32-NEXT: slli a1, a0, 27
-; RV32-NEXT: srli a1, a1, 31
-; RV32-NEXT: vslide1down.vx v8, v8, a1
-; RV32-NEXT: slli a1, a0, 28
-; RV32-NEXT: srli a1, a1, 31
-; RV32-NEXT: vslide1down.vx v8, v8, a1
-; RV32-NEXT: slli a1, a0, 19
-; RV32-NEXT: srli a1, a1, 31
-; RV32-NEXT: slli a2, a0, 26
-; RV32-NEXT: srli a2, a2, 31
-; RV32-NEXT: vmv.v.x v9, a2
-; RV32-NEXT: vslide1down.vx v9, v9, a1
-; RV32-NEXT: slli a1, a0, 24
-; RV32-NEXT: srli a1, a1, 31
-; RV32-NEXT: vslide1down.vx v9, v9, a1
-; RV32-NEXT: slli a0, a0, 29
-; RV32-NEXT: srli a0, a0, 31
-; RV32-NEXT: vmv.v.i v0, 15
-; RV32-NEXT: vslide1down.vx v9, v9, a0
-; RV32-NEXT: vslidedown.vi v8, v9, 4, v0.t
-; RV32-NEXT: vand.vi v8, v8, 1
-; RV32-NEXT: vmsne.vi v0, v8, 0
+; RV32-NEXT: lui a0, %hi(.LCPI0_0)
+; RV32-NEXT: addi a0, a0, %lo(.LCPI0_0)
+; RV32-NEXT: vsetivli zero, 16, e8, m1, ta, ma
+; RV32-NEXT: vle8.v v8, (a0)
+; RV32-NEXT: vmv.v.i v9, 0
+; RV32-NEXT: vmerge.vim v9, v9, 1, v0
+; RV32-NEXT: vrgather.vv v10, v9, v8
+; RV32-NEXT: vmsne.vi v0, v10, 0
; RV32-NEXT: ret
;
; RV64-LABEL: v8i1_v16i1:
; RV64: # %bb.0:
-; RV64-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV64-NEXT: vmv.x.s a0, v0
-; RV64-NEXT: slli a1, a0, 50
-; RV64-NEXT: srli a1, a1, 63
-; RV64-NEXT: srli a2, a0, 63
-; RV64-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; RV64-NEXT: vmv.v.x v8, a2
-; RV64-NEXT: vslide1down.vx v8, v8, a1
-; RV64-NEXT: slli a1, a0, 59
-; RV64-NEXT: srli a1, a1, 63
-; RV64-NEXT: vslide1down.vx v8, v8, a1
-; RV64-NEXT: slli a1, a0, 60
-; RV64-NEXT: srli a1, a1, 63
-; RV64-NEXT: vslide1down.vx v8, v8, a1
-; RV64-NEXT: slli a1, a0, 51
-; RV64-NEXT: srli a1, a1, 63
-; RV64-NEXT: slli a2, a0, 58
-; RV64-NEXT: srli a2, a2, 63
-; RV64-NEXT: vmv.v.x v9, a2
-; RV64-NEXT: vslide1down.vx v9, v9, a1
-; RV64-NEXT: slli a1, a0, 56
-; RV64-NEXT: srli a1, a1, 63
-; RV64-NEXT: vslide1down.vx v9, v9, a1
-; RV64-NEXT: slli a0, a0, 61
-; RV64-NEXT: srli a0, a0, 63
-; RV64-NEXT: vmv.v.i v0, 15
-; RV64-NEXT: vslide1down.vx v9, v9, a0
-; RV64-NEXT: vslidedown.vi v8, v9, 4, v0.t
-; RV64-NEXT: vand.vi v8, v8, 1
-; RV64-NEXT: vmsne.vi v0, v8, 0
+; RV64-NEXT: lui a0, %hi(.LCPI0_0)
+; RV64-NEXT: ld a0, %lo(.LCPI0_0)(a0)
+; RV64-NEXT: vsetivli zero, 16, e8, m1, ta, ma
+; RV64-NEXT: vmv.v.i v8, 0
+; RV64-NEXT: vmerge.vim v8, v8, 1, v0
+; RV64-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; RV64-NEXT: vmv.v.x v9, a0
+; RV64-NEXT: vsetivli zero, 16, e8, m1, ta, ma
+; RV64-NEXT: vrgather.vv v10, v8, v9
+; RV64-NEXT: vmsne.vi v0, v10, 0
; RV64-NEXT: ret
%2 = shufflevector <16 x i1> %0, <16 x i1> poison, <8 x i32> <i32 5, i32 12, i32 7, i32 2, i32 15, i32 13, i32 4, i32 3>
ret <8 x i1> %2
}
define <4 x i32> @v4i32_v8i32(<8 x i32>) {
-; CHECK-LABEL: v4i32_v8i32:
-; CHECK: # %bb.0:
-; CHECK-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; CHECK-NEXT: vid.v v10
-; CHECK-NEXT: vsrl.vi v10, v10, 1
-; CHECK-NEXT: vrsub.vi v11, v10, 3
-; CHECK-NEXT: vrgather.vv v10, v8, v11
-; CHECK-NEXT: vmv.v.i v0, 5
-; CHECK-NEXT: vsetivli zero, 4, e32, m2, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 4
-; CHECK-NEXT: vsetivli zero, 4, e32, m1, ta, mu
-; CHECK-NEXT: vslidedown.vi v10, v8, 1, v0.t
-; CHECK-NEXT: vmv.v.v v8, v10
-; CHECK-NEXT: ret
+; RV32-LABEL: v4i32_v8i32:
+; RV32: # %bb.0:
+; RV32-NEXT: lui a0, %hi(.LCPI1_0)
+; RV32-NEXT: addi a0, a0, %lo(.LCPI1_0)
+; RV32-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV32-NEXT: vle16.v v12, (a0)
+; RV32-NEXT: vrgatherei16.vv v10, v8, v12
+; RV32-NEXT: vmv1r.v v8, v10
+; RV32-NEXT: ret
+;
+; RV64-LABEL: v4i32_v8i32:
+; RV64: # %bb.0:
+; RV64-NEXT: lui a0, 131079
+; RV64-NEXT: slli a0, a0, 4
+; RV64-NEXT: addi a0, a0, 3
+; RV64-NEXT: slli a0, a0, 16
+; RV64-NEXT: addi a0, a0, 5
+; RV64-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; RV64-NEXT: vmv.v.x v12, a0
+; RV64-NEXT: vsetivli zero, 8, e32, m2, ta, ma
+; RV64-NEXT: vrgatherei16.vv v10, v8, v12
+; RV64-NEXT: vmv1r.v v8, v10
+; RV64-NEXT: ret
%2 = shufflevector <8 x i32> %0, <8 x i32> poison, <4 x i32> <i32 5, i32 3, i32 7, i32 2>
ret <4 x i32> %2
}
@@ -97,41 +64,23 @@ define <4 x i32> @v4i32_v8i32(<8 x i32>) {
define <4 x i32> @v4i32_v16i32(<16 x i32>) {
; RV32-LABEL: v4i32_v16i32:
; RV32: # %bb.0:
-; RV32-NEXT: vsetivli zero, 8, e16, m1, ta, ma
-; RV32-NEXT: vmv.v.i v12, 1
-; RV32-NEXT: vmv.v.i v13, 6
-; RV32-NEXT: vsetivli zero, 2, e16, m1, tu, ma
-; RV32-NEXT: vslideup.vi v13, v12, 1
-; RV32-NEXT: vsetivli zero, 8, e32, m4, ta, ma
-; RV32-NEXT: vslidedown.vi v16, v8, 8
-; RV32-NEXT: vmv4r.v v20, v8
-; RV32-NEXT: li a0, 32
-; RV32-NEXT: vmv2r.v v22, v14
-; RV32-NEXT: vsetivli zero, 1, e8, mf8, ta, ma
-; RV32-NEXT: vmv.v.i v0, 10
-; RV32-NEXT: vsetivli zero, 8, e32, m2, ta, mu
-; RV32-NEXT: vnsrl.wx v8, v20, a0
-; RV32-NEXT: vrgatherei16.vv v8, v16, v13, v0.t
+; RV32-NEXT: lui a0, %hi(.LCPI2_0)
+; RV32-NEXT: addi a0, a0, %lo(.LCPI2_0)
+; RV32-NEXT: vsetivli zero, 16, e32, m4, ta, ma
+; RV32-NEXT: vle16.v v16, (a0)
+; RV32-NEXT: vrgatherei16.vv v12, v8, v16
+; RV32-NEXT: vmv1r.v v8, v12
; RV32-NEXT: ret
;
; RV64-LABEL: v4i32_v16i32:
; RV64: # %bb.0:
-; RV64-NEXT: vsetivli zero, 8, e32, m4, ta, ma
-; RV64-NEXT: vslidedown.vi v16, v8, 8
-; RV64-NEXT: vmv4r.v v20, v8
-; RV64-NEXT: li a0, 32
-; RV64-NEXT: vmv2r.v v22, v12
-; RV64-NEXT: vsetivli zero, 8, e32, m2, ta, ma
-; RV64-NEXT: vnsrl.wx v8, v20, a0
-; RV64-NEXT: vsetivli zero, 2, e64, m1, ta, ma
-; RV64-NEXT: vmv.v.i v0, 10
-; RV64-NEXT: li a0, 3
-; RV64-NEXT: slli a0, a0, 33
-; RV64-NEXT: addi a0, a0, 1
-; RV64-NEXT: slli a0, a0, 16
-; RV64-NEXT: vmv.v.x v10, a0
-; RV64-NEXT: vsetivli zero, 8, e32, m2, ta, mu
-; RV64-NEXT: vrgatherei16.vv v8, v16, v10, v0.t
+; RV64-NEXT: lui a0, %hi(.LCPI2_0)
+; RV64-NEXT: ld a0, %lo(.LCPI2_0)(a0)
+; RV64-NEXT: vsetivli zero, 4, e64, m2, ta, ma
+; RV64-NEXT: vmv.v.x v16, a0
+; RV64-NEXT: vsetivli zero, 16, e32, m4, ta, ma
+; RV64-NEXT: vrgatherei16.vv v12, v8, v16
+; RV64-NEXT: vmv1r.v v8, v12
; RV64-NEXT: ret
%2 = shufflevector <16 x i32> %0, <16 x i32> poison, <4 x i32> <i32 1, i32 9, i32 5, i32 14>
ret <4 x i32> %2
@@ -140,78 +89,28 @@ define <4 x i32> @v4i32_v16i32(<16 x i32>) {
define <4 x i32> @v4i32_v32i32(<32 x i32>) {
; RV32-LABEL: v4i32_v32i32:
; RV32: # %bb.0:
-; RV32-NEXT: addi sp, sp, -256
-; RV32-NEXT: .cfi_def_cfa_offset 256
-; RV32-NEXT: sw ra, 252(sp) # 4-byte Folded Spill
-; RV32-NEXT: sw s0, 248(sp) # 4-byte Folded Spill
-; RV32-NEXT: .cfi_offset ra, -4
-; RV32-NEXT: .cfi_offset s0, -8
-; RV32-NEXT: addi s0, sp, 256
-; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: andi sp, sp, -128
; RV32-NEXT: li a0, 32
-; RV32-NEXT: mv a1, sp
+; RV32-NEXT: lui a1, %hi(.LCPI3_0)
+; RV32-NEXT: addi a1, a1, %lo(.LCPI3_0)
; RV32-NEXT: vsetvli zero, a0, e32, m8, ta, ma
-; RV32-NEXT: vse32.v v8, (a1)
-; RV32-NEXT: lw a0, 36(sp)
-; RV32-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; RV32-NEXT: vslidedown.vi v10, v8, 1
-; RV32-NEXT: vmv.x.s a1, v10
-; RV32-NEXT: vmv.v.x v10, a1
-; RV32-NEXT: vslide1down.vx v10, v10, a0
-; RV32-NEXT: lw a0, 120(sp)
-; RV32-NEXT: vsetivli zero, 1, e32, m2, ta, ma
-; RV32-NEXT: vslidedown.vi v8, v8, 4
-; RV32-NEXT: vmv.x.s a1, v8
-; RV32-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; RV32-NEXT: vslide1down.vx v8, v10, a1
-; RV32-NEXT: vslide1down.vx v8, v8, a0
-; RV32-NEXT: addi sp, s0, -256
-; RV32-NEXT: .cfi_def_cfa sp, 256
-; RV32-NEXT: lw ra, 252(sp) # 4-byte Folded Reload
-; RV32-NEXT: lw s0, 248(sp) # 4-byte Folded Reload
-; RV32-NEXT: .cfi_restore ra
-; RV32-NEXT: .cfi_restore s0
-; RV32-NEXT: addi sp, sp, 256
-; RV32-NEXT: .cfi_def_cfa_offset 0
+; RV32-NEXT: vle16.v v24, (a1)
+; RV32-NEXT: vrgatherei16.vv v16, v8, v24
+; RV32-NEXT: vmv1r.v v8, v16
; RV32-NEXT: ret
;
; RV64-LABEL: v4i32_v32i32:
; RV64: # %bb.0:
-; RV64-NEXT: addi sp, sp, -256
-; RV64-NEXT: .cfi_def_cfa_offset 256
-; RV64-NEXT: sd ra, 248(sp) # 8-byte Folded Spill
-; RV64-NEXT: sd s0, 240(sp) # 8-byte Folded Spill
-; RV64-NEXT: .cfi_offset ra, -8
-; RV64-NEXT: .cfi_offset s0, -16
-; RV64-NEXT: addi s0, sp, 256
-; RV64-NEXT: .cfi_def_cfa s0, 0
-; RV64-NEXT: andi sp, sp, -128
+; RV64-NEXT: lui a0, 491521
+; RV64-NEXT: slli a0, a0, 6
+; RV64-NEXT: addi a0, a0, 9
+; RV64-NEXT: slli a0, a0, 16
+; RV64-NEXT: addi a0, a0, 1
+; RV64-NEXT: vsetivli zero, 8, e64, m4, ta, ma
+; RV64-NEXT: vmv.v.x v24, a0
; RV64-NEXT: li a0, 32
-; RV64-NEXT: mv a1, sp
; RV64-NEXT: vsetvli zero, a0, e32, m8, ta, ma
-; RV64-NEXT: vse32.v v8, (a1)
-; RV64-NEXT: lw a0, 36(sp)
-; RV64-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; RV64-NEXT: vslidedown.vi v10, v8, 1
-; RV64-NEXT: vmv.x.s a1, v10
-; RV64-NEXT: vmv.v.x v10, a1
-; RV64-NEXT: vslide1down.vx v10, v10, a0
-; RV64-NEXT: lw a0, 120(sp)
-; RV64-NEXT: vsetivli zero, 1, e32, m2, ta, ma
-; RV64-NEXT: vslidedown.vi v8, v8, 4
-; RV64-NEXT: vmv.x.s a1, v8
-; RV64-NEXT: vsetivli zero, 4, e32, m1, ta, ma
-; RV64-NEXT: vslide1down.vx v8, v10, a1
-; RV64-NEXT: vslide1down.vx v8, v8, a0
-; RV64-NEXT: addi sp, s0, -256
-; RV64-NEXT: .cfi_def_cfa sp, 256
-; RV64-NEXT: ld ra, 248(sp) # 8-byte Folded Reload
-; RV64-NEXT: ld s0, 240(sp) # 8-byte Folded Reload
-; RV64-NEXT: .cfi_restore ra
-; RV64-NEXT: .cfi_restore s0
-; RV64-NEXT: addi sp, sp, 256
-; RV64-NEXT: .cfi_def_cfa_offset 0
+; RV64-NEXT: vrgatherei16.vv v16, v8, v24
+; RV64-NEXT: vmv1r.v v8, v16
; RV64-NEXT: ret
%2 = shufflevector <32 x i32> %0, <32 x i32> poison, <4 x i32> <i32 1, i32 9, i32 4, i32 30>
ret <4 x i32> %2
@@ -312,10 +211,10 @@ define <32 x i32> @v32i32_v4i32(<4 x i32>) {
define <32 x i8> @vnsrl_v32i8_v64i8(<64 x i8> %in) {
; CHECK-LABEL: vnsrl_v32i8_v64i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: li a0, 32
-; CHECK-NEXT: vsetvli zero, a0, e8, m2, ta, ma
-; CHECK-NEXT: vnsrl.wi v12, v8, 8
-; CHECK-NEXT: vmv.v.v v8, v12
+; CHECK-NEXT: li a0, 64
+; CHECK-NEXT: vsetvli zero, a0, e8, m4, ta, ma
+; CHECK-NEXT: vnsrl.wi v16, v8, 8
+; CHECK-NEXT: vmv2r.v v8, v16
; CHECK-NEXT: ret
%res = shufflevector <64 x i8> %in, <64 x i8> poison, <32 x i32> <i32 1, i32 3, i32 5, i32 7, i32 9, i32 11, i32 13, i32 15, i32 17, i32 19, i32 21, i32 23, i32 25, i32 27, i32 29, i32 31, i32 33, i32 35, i32 37, i32 39, i32 41, i32 43, i32 45, i32 47, i32 49, i32 51, i32 53, i32 55, i32 57, i32 59, i32 61, i32 63>
ret <32 x i8> %res
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-deinterleave.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-deinterleave.ll
index a8f75f8d1c24d1..e3240f71b928cd 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-deinterleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-deinterleave.ll
@@ -11,18 +11,11 @@ define void @deinterleave3_0_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vid.v v9
; CHECK-NEXT: li a0, 3
; CHECK-NEXT: vmul.vx v9, v9, a0
; CHECK-NEXT: vrgather.vv v10, v8, v9
-; CHECK-NEXT: vadd.vi v9, v9, -8
-; CHECK-NEXT: li a0, 56
-; CHECK-NEXT: vmv.s.x v0, a0
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; CHECK-NEXT: vrgather.vv v10, v8, v9, v0.t
+; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vse8.v v10, (a1)
; CHECK-NEXT: ret
entry:
@@ -37,23 +30,13 @@ define void @deinterleave3_8_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.i v9, 1
; CHECK-NEXT: vid.v v10
; CHECK-NEXT: li a0, 3
; CHECK-NEXT: vmadd.vx v10, a0, v9
; CHECK-NEXT: vrgather.vv v9, v8, v10
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 8
-; CHECK-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
-; CHECK-NEXT: vsrl.vi v10, v8, 8
-; CHECK-NEXT: vsll.vi v8, v8, 8
-; CHECK-NEXT: li a0, 24
-; CHECK-NEXT: vmv.s.x v0, a0
-; CHECK-NEXT: vor.vv v8, v8, v10
; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
-; CHECK-NEXT: vmerge.vvm v8, v9, v8, v0
-; CHECK-NEXT: vse8.v v8, (a1)
+; CHECK-NEXT: vse8.v v9, (a1)
; CHECK-NEXT: ret
entry:
%0 = load <16 x i8>, ptr %in, align 1
@@ -67,21 +50,10 @@ define void @deinterleave4_0_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 4, e8, mf2, ta, ma
-; CHECK-NEXT: vslidedown.vi v9, v8, 4
-; CHECK-NEXT: vsetivli zero, 4, e8, mf4, ta, ma
-; CHECK-NEXT: vwaddu.vv v10, v8, v9
-; CHECK-NEXT: li a0, -1
-; CHECK-NEXT: vwmaccu.vx v10, a0, v9
-; CHECK-NEXT: vmv.v.i v0, 12
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vid.v v9
; CHECK-NEXT: vsll.vi v9, v9, 2
-; CHECK-NEXT: vadd.vi v9, v9, -8
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; CHECK-NEXT: vrgather.vv v10, v8, v9, v0.t
+; CHECK-NEXT: vrgather.vv v10, v8, v9
+; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vse8.v v10, (a1)
; CHECK-NEXT: ret
entry:
@@ -96,20 +68,14 @@ define void @deinterleave4_8_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
+; CHECK-NEXT: lui a0, 57488
+; CHECK-NEXT: addi a0, a0, 1281
+; CHECK-NEXT: vsetivli zero, 4, e32, m1, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
+; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
+; CHECK-NEXT: vrgather.vv v10, v8, v9
; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
-; CHECK-NEXT: vmv.v.i v9, -9
-; CHECK-NEXT: vid.v v10
-; CHECK-NEXT: li a0, 5
-; CHECK-NEXT: vmacc.vx v9, a0, v10
-; CHECK-NEXT: vsll.vi v10, v10, 2
-; CHECK-NEXT: vadd.vi v10, v10, 1
-; CHECK-NEXT: vrgather.vv v11, v8, v10
-; CHECK-NEXT: vmv.v.i v0, 12
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; CHECK-NEXT: vrgather.vv v11, v8, v9, v0.t
-; CHECK-NEXT: vse8.v v11, (a1)
+; CHECK-NEXT: vse8.v v10, (a1)
; CHECK-NEXT: ret
entry:
%0 = load <16 x i8>, ptr %in, align 1
@@ -123,17 +89,11 @@ define void @deinterleave5_0_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vid.v v9
; CHECK-NEXT: li a0, 5
; CHECK-NEXT: vmul.vx v9, v9, a0
; CHECK-NEXT: vrgather.vv v10, v8, v9
-; CHECK-NEXT: vadd.vi v9, v9, -8
-; CHECK-NEXT: vmv.v.i v0, 12
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; CHECK-NEXT: vrgather.vv v10, v8, v9, v0.t
+; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vse8.v v10, (a1)
; CHECK-NEXT: ret
entry:
@@ -148,17 +108,12 @@ define void @deinterleave5_8_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.i v9, 1
; CHECK-NEXT: vid.v v10
; CHECK-NEXT: li a0, 5
; CHECK-NEXT: vmadd.vx v10, a0, v9
; CHECK-NEXT: vrgather.vv v9, v8, v10
-; CHECK-NEXT: vmv.v.i v0, 4
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; CHECK-NEXT: vrgather.vi v9, v8, 3, v0.t
+; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vse8.v v9, (a1)
; CHECK-NEXT: ret
entry:
@@ -173,16 +128,11 @@ define void @deinterleave6_0_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vid.v v9
; CHECK-NEXT: li a0, 6
; CHECK-NEXT: vmul.vx v9, v9, a0
; CHECK-NEXT: vrgather.vv v10, v8, v9
-; CHECK-NEXT: vmv.v.i v0, 4
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; CHECK-NEXT: vrgather.vi v10, v8, 4, v0.t
+; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vse8.v v10, (a1)
; CHECK-NEXT: ret
entry:
@@ -197,17 +147,12 @@ define void @deinterleave6_8_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vmv.v.i v9, 1
; CHECK-NEXT: vid.v v10
; CHECK-NEXT: li a0, 6
; CHECK-NEXT: vmadd.vx v10, a0, v9
; CHECK-NEXT: vrgather.vv v9, v8, v10
-; CHECK-NEXT: vmv.v.i v0, 4
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; CHECK-NEXT: vrgather.vi v9, v8, 5, v0.t
+; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vse8.v v9, (a1)
; CHECK-NEXT: ret
entry:
@@ -222,16 +167,11 @@ define void @deinterleave7_0_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vid.v v9
; CHECK-NEXT: li a0, 7
; CHECK-NEXT: vmul.vx v9, v9, a0
; CHECK-NEXT: vrgather.vv v10, v8, v9
-; CHECK-NEXT: vmv.v.i v0, 4
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; CHECK-NEXT: vrgather.vi v10, v8, 6, v0.t
+; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vse8.v v10, (a1)
; CHECK-NEXT: ret
entry:
@@ -246,18 +186,14 @@ define void @deinterleave7_8_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
+; CHECK-NEXT: lui a0, 225
+; CHECK-NEXT: addi a0, a0, -2047
+; CHECK-NEXT: vsetivli zero, 4, e32, m1, ta, ma
+; CHECK-NEXT: vmv.v.x v9, a0
+; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
+; CHECK-NEXT: vrgather.vv v10, v8, v9
; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
-; CHECK-NEXT: vmv.v.i v9, -6
-; CHECK-NEXT: vid.v v10
-; CHECK-NEXT: li a0, 6
-; CHECK-NEXT: vmadd.vx v10, a0, v9
-; CHECK-NEXT: vmv.v.i v0, 6
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
-; CHECK-NEXT: vslidedown.vi v9, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; CHECK-NEXT: vrgather.vi v11, v8, 1
-; CHECK-NEXT: vrgather.vv v11, v9, v10, v0.t
-; CHECK-NEXT: vse8.v v11, (a1)
+; CHECK-NEXT: vse8.v v10, (a1)
; CHECK-NEXT: ret
entry:
%0 = load <16 x i8>, ptr %in, align 1
@@ -273,10 +209,11 @@ define void @deinterleave8_0_i8(ptr %in, ptr %out) {
; CHECK-NEXT: vle8.v v8, (a0)
; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
; CHECK-NEXT: vslidedown.vi v9, v8, 8
-; CHECK-NEXT: vsetivli zero, 2, e8, mf2, tu, ma
-; CHECK-NEXT: vslideup.vi v8, v9, 1
; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
-; CHECK-NEXT: vse8.v v8, (a1)
+; CHECK-NEXT: vwaddu.vv v10, v8, v9
+; CHECK-NEXT: li a0, -1
+; CHECK-NEXT: vwmaccu.vx v10, a0, v9
+; CHECK-NEXT: vse8.v v10, (a1)
; CHECK-NEXT: ret
entry:
%0 = load <16 x i8>, ptr %in, align 1
@@ -290,12 +227,12 @@ define void @deinterleave8_8_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vmv.v.i v0, -3
-; CHECK-NEXT: vsetivli zero, 8, e8, m1, ta, ma
-; CHECK-NEXT: vslidedown.vi v9, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
-; CHECK-NEXT: vrgather.vi v9, v8, 1, v0.t
-; CHECK-NEXT: vse8.v v9, (a1)
+; CHECK-NEXT: vid.v v9
+; CHECK-NEXT: vsll.vi v9, v9, 3
+; CHECK-NEXT: vadd.vi v9, v9, 1
+; CHECK-NEXT: vrgather.vv v10, v8, v9
+; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
+; CHECK-NEXT: vse8.v v10, (a1)
; CHECK-NEXT: ret
entry:
%0 = load <16 x i8>, ptr %in, align 1
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shufflevector-vnsrl.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shufflevector-vnsrl.ll
index 15c2c2298c0dd6..d92073c2f933c7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shufflevector-vnsrl.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shufflevector-vnsrl.ll
@@ -11,8 +11,8 @@ define void @vnsrl_0_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
; CHECK-NEXT: vnsrl.wi v8, v8, 0
+; CHECK-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
; CHECK-NEXT: vse8.v v8, (a1)
; CHECK-NEXT: ret
entry:
@@ -27,8 +27,8 @@ define void @vnsrl_8_i8(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
; CHECK-NEXT: vnsrl.wi v8, v8, 8
+; CHECK-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
; CHECK-NEXT: vse8.v v8, (a1)
; CHECK-NEXT: ret
entry:
@@ -43,8 +43,8 @@ define void @vnsrl_0_i16(ptr %in, ptr %out) {
; V: # %bb.0: # %entry
; V-NEXT: vsetivli zero, 8, e16, mf2, ta, ma
; V-NEXT: vle16.v v8, (a0)
-; V-NEXT: vsetivli zero, 4, e16, mf4, ta, ma
; V-NEXT: vnsrl.wi v8, v8, 0
+; V-NEXT: vsetivli zero, 4, e16, mf4, ta, ma
; V-NEXT: vse16.v v8, (a1)
; V-NEXT: ret
;
@@ -52,8 +52,8 @@ define void @vnsrl_0_i16(ptr %in, ptr %out) {
; ZVE32F: # %bb.0: # %entry
; ZVE32F-NEXT: vsetivli zero, 8, e16, mf2, ta, ma
; ZVE32F-NEXT: vle16.v v8, (a0)
-; ZVE32F-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
; ZVE32F-NEXT: vnsrl.wi v8, v8, 0
+; ZVE32F-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
; ZVE32F-NEXT: vse16.v v8, (a1)
; ZVE32F-NEXT: ret
entry:
@@ -68,8 +68,8 @@ define void @vnsrl_16_i16(ptr %in, ptr %out) {
; V: # %bb.0: # %entry
; V-NEXT: vsetivli zero, 8, e16, mf2, ta, ma
; V-NEXT: vle16.v v8, (a0)
-; V-NEXT: vsetivli zero, 4, e16, mf4, ta, ma
; V-NEXT: vnsrl.wi v8, v8, 16
+; V-NEXT: vsetivli zero, 4, e16, mf4, ta, ma
; V-NEXT: vse16.v v8, (a1)
; V-NEXT: ret
;
@@ -77,8 +77,8 @@ define void @vnsrl_16_i16(ptr %in, ptr %out) {
; ZVE32F: # %bb.0: # %entry
; ZVE32F-NEXT: vsetivli zero, 8, e16, mf2, ta, ma
; ZVE32F-NEXT: vle16.v v8, (a0)
-; ZVE32F-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
; ZVE32F-NEXT: vnsrl.wi v8, v8, 16
+; ZVE32F-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
; ZVE32F-NEXT: vse16.v v8, (a1)
; ZVE32F-NEXT: ret
entry:
@@ -93,8 +93,8 @@ define void @vnsrl_0_half(ptr %in, ptr %out) {
; V: # %bb.0: # %entry
; V-NEXT: vsetivli zero, 8, e16, mf2, ta, ma
; V-NEXT: vle16.v v8, (a0)
-; V-NEXT: vsetivli zero, 4, e16, mf4, ta, ma
; V-NEXT: vnsrl.wi v8, v8, 0
+; V-NEXT: vsetivli zero, 4, e16, mf4, ta, ma
; V-NEXT: vse16.v v8, (a1)
; V-NEXT: ret
;
@@ -102,8 +102,8 @@ define void @vnsrl_0_half(ptr %in, ptr %out) {
; ZVE32F: # %bb.0: # %entry
; ZVE32F-NEXT: vsetivli zero, 8, e16, mf2, ta, ma
; ZVE32F-NEXT: vle16.v v8, (a0)
-; ZVE32F-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
; ZVE32F-NEXT: vnsrl.wi v8, v8, 0
+; ZVE32F-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
; ZVE32F-NEXT: vse16.v v8, (a1)
; ZVE32F-NEXT: ret
entry:
@@ -118,8 +118,8 @@ define void @vnsrl_16_half(ptr %in, ptr %out) {
; V: # %bb.0: # %entry
; V-NEXT: vsetivli zero, 8, e16, mf2, ta, ma
; V-NEXT: vle16.v v8, (a0)
-; V-NEXT: vsetivli zero, 4, e16, mf4, ta, ma
; V-NEXT: vnsrl.wi v8, v8, 16
+; V-NEXT: vsetivli zero, 4, e16, mf4, ta, ma
; V-NEXT: vse16.v v8, (a1)
; V-NEXT: ret
;
@@ -127,8 +127,8 @@ define void @vnsrl_16_half(ptr %in, ptr %out) {
; ZVE32F: # %bb.0: # %entry
; ZVE32F-NEXT: vsetivli zero, 8, e16, mf2, ta, ma
; ZVE32F-NEXT: vle16.v v8, (a0)
-; ZVE32F-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
; ZVE32F-NEXT: vnsrl.wi v8, v8, 16
+; ZVE32F-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
; ZVE32F-NEXT: vse16.v v8, (a1)
; ZVE32F-NEXT: ret
entry:
@@ -143,8 +143,8 @@ define void @vnsrl_0_i32(ptr %in, ptr %out) {
; V: # %bb.0: # %entry
; V-NEXT: vsetivli zero, 4, e32, mf2, ta, ma
; V-NEXT: vle32.v v8, (a0)
-; V-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
; V-NEXT: vnsrl.wi v8, v8, 0
+; V-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
; V-NEXT: vse32.v v8, (a1)
; V-NEXT: ret
;
@@ -152,10 +152,11 @@ define void @vnsrl_0_i32(ptr %in, ptr %out) {
; ZVE32F: # %bb.0: # %entry
; ZVE32F-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; ZVE32F-NEXT: vle32.v v8, (a0)
+; ZVE32F-NEXT: vid.v v9
+; ZVE32F-NEXT: vadd.vv v9, v9, v9
+; ZVE32F-NEXT: vrgather.vv v10, v8, v9
; ZVE32F-NEXT: vsetivli zero, 2, e32, m1, ta, ma
-; ZVE32F-NEXT: vslidedown.vi v9, v8, 2
-; ZVE32F-NEXT: vslideup.vi v8, v9, 1
-; ZVE32F-NEXT: vse32.v v8, (a1)
+; ZVE32F-NEXT: vse32.v v10, (a1)
; ZVE32F-NEXT: ret
entry:
%0 = load <4 x i32>, ptr %in, align 4
@@ -170,8 +171,8 @@ define void @vnsrl_32_i32(ptr %in, ptr %out) {
; V-NEXT: vsetivli zero, 4, e32, mf2, ta, ma
; V-NEXT: vle32.v v8, (a0)
; V-NEXT: li a0, 32
-; V-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
; V-NEXT: vnsrl.wx v8, v8, a0
+; V-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
; V-NEXT: vse32.v v8, (a1)
; V-NEXT: ret
;
@@ -179,11 +180,12 @@ define void @vnsrl_32_i32(ptr %in, ptr %out) {
; ZVE32F: # %bb.0: # %entry
; ZVE32F-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; ZVE32F-NEXT: vle32.v v8, (a0)
-; ZVE32F-NEXT: vmv.v.i v0, 1
-; ZVE32F-NEXT: vsetivli zero, 2, e32, m1, ta, mu
-; ZVE32F-NEXT: vslidedown.vi v9, v8, 2
-; ZVE32F-NEXT: vrgather.vi v9, v8, 1, v0.t
-; ZVE32F-NEXT: vse32.v v9, (a1)
+; ZVE32F-NEXT: vid.v v9
+; ZVE32F-NEXT: vadd.vv v9, v9, v9
+; ZVE32F-NEXT: vadd.vi v9, v9, 1
+; ZVE32F-NEXT: vrgather.vv v10, v8, v9
+; ZVE32F-NEXT: vsetivli zero, 2, e32, m1, ta, ma
+; ZVE32F-NEXT: vse32.v v10, (a1)
; ZVE32F-NEXT: ret
entry:
%0 = load <4 x i32>, ptr %in, align 4
@@ -197,8 +199,8 @@ define void @vnsrl_0_float(ptr %in, ptr %out) {
; V: # %bb.0: # %entry
; V-NEXT: vsetivli zero, 4, e32, mf2, ta, ma
; V-NEXT: vle32.v v8, (a0)
-; V-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
; V-NEXT: vnsrl.wi v8, v8, 0
+; V-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
; V-NEXT: vse32.v v8, (a1)
; V-NEXT: ret
;
@@ -206,10 +208,11 @@ define void @vnsrl_0_float(ptr %in, ptr %out) {
; ZVE32F: # %bb.0: # %entry
; ZVE32F-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; ZVE32F-NEXT: vle32.v v8, (a0)
+; ZVE32F-NEXT: vid.v v9
+; ZVE32F-NEXT: vadd.vv v9, v9, v9
+; ZVE32F-NEXT: vrgather.vv v10, v8, v9
; ZVE32F-NEXT: vsetivli zero, 2, e32, m1, ta, ma
-; ZVE32F-NEXT: vslidedown.vi v9, v8, 2
-; ZVE32F-NEXT: vslideup.vi v8, v9, 1
-; ZVE32F-NEXT: vse32.v v8, (a1)
+; ZVE32F-NEXT: vse32.v v10, (a1)
; ZVE32F-NEXT: ret
entry:
%0 = load <4 x float>, ptr %in, align 4
@@ -224,8 +227,8 @@ define void @vnsrl_32_float(ptr %in, ptr %out) {
; V-NEXT: vsetivli zero, 4, e32, mf2, ta, ma
; V-NEXT: vle32.v v8, (a0)
; V-NEXT: li a0, 32
-; V-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
; V-NEXT: vnsrl.wx v8, v8, a0
+; V-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
; V-NEXT: vse32.v v8, (a1)
; V-NEXT: ret
;
@@ -233,11 +236,12 @@ define void @vnsrl_32_float(ptr %in, ptr %out) {
; ZVE32F: # %bb.0: # %entry
; ZVE32F-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; ZVE32F-NEXT: vle32.v v8, (a0)
-; ZVE32F-NEXT: vmv.v.i v0, 1
-; ZVE32F-NEXT: vsetivli zero, 2, e32, m1, ta, mu
-; ZVE32F-NEXT: vslidedown.vi v9, v8, 2
-; ZVE32F-NEXT: vrgather.vi v9, v8, 1, v0.t
-; ZVE32F-NEXT: vse32.v v9, (a1)
+; ZVE32F-NEXT: vid.v v9
+; ZVE32F-NEXT: vadd.vv v9, v9, v9
+; ZVE32F-NEXT: vadd.vi v9, v9, 1
+; ZVE32F-NEXT: vrgather.vv v10, v8, v9
+; ZVE32F-NEXT: vsetivli zero, 2, e32, m1, ta, ma
+; ZVE32F-NEXT: vse32.v v10, (a1)
; ZVE32F-NEXT: ret
entry:
%0 = load <4 x float>, ptr %in, align 4
@@ -251,10 +255,11 @@ define void @vnsrl_0_i64(ptr %in, ptr %out) {
; V: # %bb.0: # %entry
; V-NEXT: vsetivli zero, 4, e64, m1, ta, ma
; V-NEXT: vle64.v v8, (a0)
+; V-NEXT: vid.v v9
+; V-NEXT: vadd.vv v9, v9, v9
+; V-NEXT: vrgather.vv v10, v8, v9
; V-NEXT: vsetivli zero, 2, e64, m1, ta, ma
-; V-NEXT: vslidedown.vi v9, v8, 2
-; V-NEXT: vslideup.vi v8, v9, 1
-; V-NEXT: vse64.v v8, (a1)
+; V-NEXT: vse64.v v10, (a1)
; V-NEXT: ret
;
; ZVE32F-LABEL: vnsrl_0_i64:
@@ -276,11 +281,12 @@ define void @vnsrl_64_i64(ptr %in, ptr %out) {
; V: # %bb.0: # %entry
; V-NEXT: vsetivli zero, 4, e64, m1, ta, ma
; V-NEXT: vle64.v v8, (a0)
-; V-NEXT: vmv.v.i v0, 1
-; V-NEXT: vsetivli zero, 2, e64, m1, ta, mu
-; V-NEXT: vslidedown.vi v9, v8, 2
-; V-NEXT: vrgather.vi v9, v8, 1, v0.t
-; V-NEXT: vse64.v v9, (a1)
+; V-NEXT: vid.v v9
+; V-NEXT: vadd.vv v9, v9, v9
+; V-NEXT: vadd.vi v9, v9, 1
+; V-NEXT: vrgather.vv v10, v8, v9
+; V-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; V-NEXT: vse64.v v10, (a1)
; V-NEXT: ret
;
; ZVE32F-LABEL: vnsrl_64_i64:
@@ -302,10 +308,11 @@ define void @vnsrl_0_double(ptr %in, ptr %out) {
; V: # %bb.0: # %entry
; V-NEXT: vsetivli zero, 4, e64, m1, ta, ma
; V-NEXT: vle64.v v8, (a0)
+; V-NEXT: vid.v v9
+; V-NEXT: vadd.vv v9, v9, v9
+; V-NEXT: vrgather.vv v10, v8, v9
; V-NEXT: vsetivli zero, 2, e64, m1, ta, ma
-; V-NEXT: vslidedown.vi v9, v8, 2
-; V-NEXT: vslideup.vi v8, v9, 1
-; V-NEXT: vse64.v v8, (a1)
+; V-NEXT: vse64.v v10, (a1)
; V-NEXT: ret
;
; ZVE32F-LABEL: vnsrl_0_double:
@@ -327,11 +334,12 @@ define void @vnsrl_64_double(ptr %in, ptr %out) {
; V: # %bb.0: # %entry
; V-NEXT: vsetivli zero, 4, e64, m1, ta, ma
; V-NEXT: vle64.v v8, (a0)
-; V-NEXT: vmv.v.i v0, 1
-; V-NEXT: vsetivli zero, 2, e64, m1, ta, mu
-; V-NEXT: vslidedown.vi v9, v8, 2
-; V-NEXT: vrgather.vi v9, v8, 1, v0.t
-; V-NEXT: vse64.v v9, (a1)
+; V-NEXT: vid.v v9
+; V-NEXT: vadd.vv v9, v9, v9
+; V-NEXT: vadd.vi v9, v9, 1
+; V-NEXT: vrgather.vv v10, v8, v9
+; V-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; V-NEXT: vse64.v v10, (a1)
; V-NEXT: ret
;
; ZVE32F-LABEL: vnsrl_64_double:
@@ -353,8 +361,8 @@ define void @vnsrl_0_i8_undef(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
; CHECK-NEXT: vnsrl.wi v8, v8, 0
+; CHECK-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
; CHECK-NEXT: vse8.v v8, (a1)
; CHECK-NEXT: ret
entry:
@@ -369,8 +377,8 @@ define void @vnsrl_0_i8_undef2(ptr %in, ptr %out) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
; CHECK-NEXT: vnsrl.wi v8, v8, 0
+; CHECK-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
; CHECK-NEXT: vse8.v v8, (a1)
; CHECK-NEXT: ret
entry:
@@ -382,27 +390,34 @@ entry:
; TODO: Allow an undef initial element
define void @vnsrl_0_i8_undef3(ptr %in, ptr %out) {
-; CHECK-LABEL: vnsrl_0_i8_undef3:
-; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
-; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: lui a0, 24640
-; CHECK-NEXT: addi a0, a0, 6
-; CHECK-NEXT: vsetivli zero, 8, e32, m1, ta, ma
-; CHECK-NEXT: vmv.v.x v9, a0
-; CHECK-NEXT: vsetvli zero, zero, e8, mf4, ta, ma
-; CHECK-NEXT: vrgather.vv v10, v8, v9
-; CHECK-NEXT: vid.v v9
-; CHECK-NEXT: vadd.vv v9, v9, v9
-; CHECK-NEXT: vadd.vi v9, v9, -8
-; CHECK-NEXT: li a0, -32
-; CHECK-NEXT: vmv.s.x v0, a0
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf4, ta, mu
-; CHECK-NEXT: vrgather.vv v10, v8, v9, v0.t
-; CHECK-NEXT: vse8.v v10, (a1)
-; CHECK-NEXT: ret
+; V-LABEL: vnsrl_0_i8_undef3:
+; V: # %bb.0: # %entry
+; V-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
+; V-NEXT: vle8.v v8, (a0)
+; V-NEXT: lui a0, 28768
+; V-NEXT: addi a0, a0, 1283
+; V-NEXT: slli a0, a0, 15
+; V-NEXT: addi a0, a0, 385
+; V-NEXT: slli a0, a0, 18
+; V-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; V-NEXT: vmv.v.x v9, a0
+; V-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
+; V-NEXT: vrgather.vv v10, v8, v9
+; V-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
+; V-NEXT: vse8.v v10, (a1)
+; V-NEXT: ret
+;
+; ZVE32F-LABEL: vnsrl_0_i8_undef3:
+; ZVE32F: # %bb.0: # %entry
+; ZVE32F-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
+; ZVE32F-NEXT: vle8.v v8, (a0)
+; ZVE32F-NEXT: lui a0, %hi(.LCPI16_0)
+; ZVE32F-NEXT: addi a0, a0, %lo(.LCPI16_0)
+; ZVE32F-NEXT: vle8.v v9, (a0)
+; ZVE32F-NEXT: vrgather.vv v10, v8, v9
+; ZVE32F-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
+; ZVE32F-NEXT: vse8.v v10, (a1)
+; ZVE32F-NEXT: ret
entry:
%0 = load <16 x i8>, ptr %in, align 1
%shuffle.i5 = shufflevector <16 x i8> %0, <16 x i8> poison, <8 x i32> <i32 undef, i32 undef, i32 4, i32 6, i32 6, i32 10, i32 12, i32 14>
@@ -412,26 +427,31 @@ entry:
; Not a vnsrl (checking for a prior pattern matching bug)
define void @vnsrl_0_i8_undef_negative(ptr %in, ptr %out) {
-; CHECK-LABEL: vnsrl_0_i8_undef_negative:
-; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
-; CHECK-NEXT: vle8.v v8, (a0)
-; CHECK-NEXT: lui a0, %hi(.LCPI17_0)
-; CHECK-NEXT: addi a0, a0, %lo(.LCPI17_0)
-; CHECK-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
-; CHECK-NEXT: vle8.v v9, (a0)
-; CHECK-NEXT: vrgather.vv v10, v8, v9
-; CHECK-NEXT: vid.v v9
-; CHECK-NEXT: vadd.vv v9, v9, v9
-; CHECK-NEXT: vadd.vi v9, v9, -8
-; CHECK-NEXT: li a0, 48
-; CHECK-NEXT: vmv.s.x v0, a0
-; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
-; CHECK-NEXT: vslidedown.vi v8, v8, 8
-; CHECK-NEXT: vsetivli zero, 8, e8, mf4, ta, mu
-; CHECK-NEXT: vrgather.vv v10, v8, v9, v0.t
-; CHECK-NEXT: vse8.v v10, (a1)
-; CHECK-NEXT: ret
+; V-LABEL: vnsrl_0_i8_undef_negative:
+; V: # %bb.0: # %entry
+; V-NEXT: lui a2, %hi(.LCPI17_0)
+; V-NEXT: ld a2, %lo(.LCPI17_0)(a2)
+; V-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
+; V-NEXT: vle8.v v8, (a0)
+; V-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; V-NEXT: vmv.v.x v9, a2
+; V-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
+; V-NEXT: vrgather.vv v10, v8, v9
+; V-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
+; V-NEXT: vse8.v v10, (a1)
+; V-NEXT: ret
+;
+; ZVE32F-LABEL: vnsrl_0_i8_undef_negative:
+; ZVE32F: # %bb.0: # %entry
+; ZVE32F-NEXT: vsetivli zero, 16, e8, mf2, ta, ma
+; ZVE32F-NEXT: vle8.v v8, (a0)
+; ZVE32F-NEXT: lui a0, %hi(.LCPI17_0)
+; ZVE32F-NEXT: addi a0, a0, %lo(.LCPI17_0)
+; ZVE32F-NEXT: vle8.v v9, (a0)
+; ZVE32F-NEXT: vrgather.vv v10, v8, v9
+; ZVE32F-NEXT: vsetivli zero, 8, e8, mf4, ta, ma
+; ZVE32F-NEXT: vse8.v v10, (a1)
+; ZVE32F-NEXT: ret
entry:
%0 = load <16 x i8>, ptr %in, align 1
%shuffle.i5 = shufflevector <16 x i8> %0, <16 x i8> poison, <8 x i32> <i32 0, i32 2, i32 4, i32 6, i32 8, i32 10, i32 undef, i32 1>
diff --git a/llvm/test/CodeGen/RISCV/rvv/vl-opt-op-info.ll b/llvm/test/CodeGen/RISCV/rvv/vl-opt-op-info.ll
index 1a01a9bf77cff5..aab449668a9f8f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vl-opt-op-info.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vl-opt-op-info.ll
@@ -11,14 +11,16 @@
define <2 x i32> @vdot_lane_s32(<2 x i32> noundef %var_1, <8 x i8> noundef %var_3, <8 x i8> noundef %var_5, <8 x i16> %x) {
; CHECK-LABEL: vdot_lane_s32:
; CHECK: # %bb.0: # %entry
-; CHECK-NEXT: vsetivli zero, 4, e16, mf4, ta, ma
+; CHECK-NEXT: vsetivli zero, 8, e16, mf4, ta, ma
; CHECK-NEXT: vnsrl.wi v8, v11, 0
; CHECK-NEXT: vnsrl.wi v9, v11, 16
+; CHECK-NEXT: vsetivli zero, 4, e16, mf4, ta, ma
; CHECK-NEXT: vwadd.vv v10, v8, v9
-; CHECK-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
+; CHECK-NEXT: vsetvli zero, zero, e32, mf2, ta, ma
; CHECK-NEXT: vnsrl.wi v8, v10, 0
; CHECK-NEXT: li a0, 32
; CHECK-NEXT: vnsrl.wx v9, v10, a0
+; CHECK-NEXT: vsetivli zero, 2, e32, mf2, ta, ma
; CHECK-NEXT: vadd.vv v8, v8, v9
; CHECK-NEXT: ret
entry: