[llvm] [RISCV] Decompose single source shuffles (without exact VLEN) (PR #126951)

Philip Reames via llvm-commits llvm-commits at lists.llvm.org
Wed Feb 12 10:06:36 PST 2025


https://github.com/preames created https://github.com/llvm/llvm-project/pull/126951

(This is a re-review of what was https://github.com/llvm/llvm-project/pull/126108.  Apparently, GitHub doesn't allow reopening a PR after it has been merged and reverted.  The bug there was fairly major - despite the comments and review description, the code was using each register in the source register group, not only the first register.  This was completely wrong.)

This is a continuation of the work started in https://github.com/llvm/llvm-project/pull/125735 to lower selected VLA shuffles as a linear number of m1 components instead of generating O(LMUL^2) or O(LMUL*Log2(LMUL)) high-LMUL shuffles.

This patch targets shuffles where all the elements used across the entire destination register group come from a single register in the source register group. Such cases come up fairly frequently via e.g. the spread(N) and repeat(N) idioms; see the mask examples below.
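
For illustration, here are the two idioms written as shufflevector masks for an 8-element result drawn from a 4-element source (these mirror the shuffle_spread2_* and shuffle_repeat4_* tests updated below):

  spread(2): <0, undef, 1, undef, 2, undef, 3, undef>
  repeat(4): <0, 0, 0, 0, 1, 1, 1, 1>

In both masks every defined index is below the element count of a single source register, so each m1 chunk of the destination can be produced by an m1 gather from that one register.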

One subtlety in this patch is the handling of the index vector for vrgatherei16.vv. Because the index and source registers can have different EEW, the index vector for the Nth chunk of the destination is not guaranteed to be register aligned. In fact, it is common for e.g. an EEW=64 shuffle to have EEW=16 indices, which gives four index chunks per index register. Given this, we have to pay the cost of extracting each chunk into the low position before performing each shuffle.
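
As a concrete worked example (assuming VLEN=128): one m1 register holds 2 EEW=64 data elements but 8 EEW=16 indices, so a single m1 index register carries the indices for 64/16 = 4 destination chunks, and the indices for the Nth chunk start at element 2*N of the index vector rather than at a register boundary.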

I'd initially expressed this as a naive extract-subvector for each data-parallel piece. However, at high LMUL this quickly caused register pressure problems, since we could at worst need 4x the temporary registers for the index. Instead, this patch uses a repeating slidedown chained from previous iterations. This increases the critical path by at worst 3 slides (SEW=64 is the worst case), but reduces register pressure to at worst 2x - and only if the original index vector is reused elsewhere. I view this as arguably a bit of a workaround (since our scheduling should have done better with the plain extract variant), but a probably necessary one.
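
A minimal sketch of the resulting expansion (illustrative pseudocode only; names are made up, and the real SelectionDAG code is in the patch below):

  SubV1  = extract_subvector(V1, 0)           // the single m1 source, loop invariant
  Gather = undef
  for (i = 0; i < N; ++i) {
    if (i != 0)                               // chain from the previous iteration
      Indices = vslidedown(Indices, M1NumElts)  // shift the next chunk into the low position
    SubIdx = extract_subvector(Indices, 0)    // always read the low m1 chunk
    Sub    = vrgatherei16(SubV1, SubIdx)      // one m1 gather per destination chunk
    Gather = insert_subvector(Gather, Sub, i * M1NumElts)
  }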

From cb44b0adaee8ffcb382e79cfb67263c7728aedca Mon Sep 17 00:00:00 2001
From: Philip Reames <preames at rivosinc.com>
Date: Thu, 6 Feb 2025 09:45:59 -0800
Subject: [PATCH 1/4] [RISCV] Decompose single source shuffles (without exact
 VLEN)

This is a continuation of the work started in #125735 to lower
selected VLA shuffles in linear m1 components instead of generating
O(LMUL^2) or O(LMUL*Log2(LMUL)) high LMUL shuffles.

This pattern focuses on shuffles where all the elements being used
across the entire destination register group come from a single
register in the source register group.  Such cases come up fairly
frequently via e.g. the spread(N) and repeat(N) idioms.

One subtlety to this patch is the handling of the index vector
for vrgatherei16.vv.  Because the index and source registers can
have different EEW, the index vector for the Nth chunk of the
destination is not guaranteed to be register aligned.  In fact,
it is common for e.g. an EEW=64 shuffle to have EEW=16 indices,
which gives four index chunks per index register.  Given this, we have
to pay a cost for extracting these chunks into the low position
before performing each shuffle.

I'd initially expressed this as a naive extract sub-vector for each
data parallel piece.  However, at high LMUL, this quickly caused
register pressure problems since we could at worst need 4x the
temporary registers for the index.  Instead, this patch uses a
repeating slidedown chained from previous iterations.  This increases
the critical path by at worst 3 slides (SEW=64 is the worst case),
but reduces register pressure to at worst 2x - and only if the
original index vector is reused elsewhere.  I view this as arguably
a bit of a workaround (since our scheduling should have done better
with the plain extract variant), but a probably necessary one.
---
 llvm/lib/Target/RISCV/RISCVISelLowering.cpp   |  37 ++++++
 .../RISCV/rvv/fixed-vectors-fp-interleave.ll  |  19 ++--
 .../RISCV/rvv/fixed-vectors-int-interleave.ll |  19 ++--
 .../RISCV/rvv/fixed-vectors-int-shuffles.ll   |  58 +++++++---
 .../fixed-vectors-shuffle-changes-length.ll   | 105 ++++++++++++------
 5 files changed, 177 insertions(+), 61 deletions(-)

diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 165c71d8e03f1..30db4f6f13d85 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -5350,6 +5350,15 @@ static bool isLocalRepeatingShuffle(ArrayRef<int> Mask, int Span) {
   return true;
 }
 
+/// Is this mask only using elements from the first span of the input?
+static bool isLowSourceShuffle(ArrayRef<int> Mask, int Span) {
+  for (auto [I, M] : enumerate(Mask)) {
+    if (M != -1 && M >= Span)
+      return false;
+  }
+  return true;
+}
+
 /// Try to widen element type to get a new mask value for a better permutation
 /// sequence.  This doesn't try to inspect the widened mask for profitability;
 /// we speculate the widened form is equal or better.  This has the effect of
@@ -5766,6 +5775,34 @@ static SDValue lowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG,
         Gather = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, ContainerVT, Gather,
                              SubVec, SubIdx);
       }
+    } else if (ContainerVT.bitsGT(M1VT) && isLowSourceShuffle(Mask, VLMAX)) {
+      EVT SubIndexVT = M1VT.changeVectorElementType(IndexVT.getScalarType());
+      auto [InnerTrueMask, InnerVL] =
+          getDefaultScalableVLOps(M1VT, DL, DAG, Subtarget);
+      int N = ContainerVT.getVectorMinNumElements() /
+              M1VT.getVectorMinNumElements();
+      assert(isPowerOf2_32(N) && N <= 8);
+      Gather = DAG.getUNDEF(ContainerVT);
+      SDValue SlideAmt =
+          DAG.getElementCount(DL, XLenVT, M1VT.getVectorElementCount());
+      for (int i = 0; i < N; i++) {
+        if (i != 0)
+          LHSIndices = getVSlidedown(DAG, Subtarget, DL, IndexContainerVT,
+                                     DAG.getUNDEF(IndexContainerVT), LHSIndices,
+                                     SlideAmt, TrueMask, VL);
+        SDValue SubIdx =
+            DAG.getVectorIdxConstant(M1VT.getVectorMinNumElements() * i, DL);
+        SDValue SubV1 =
+            DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, M1VT, V1, SubIdx);
+        SDValue SubIndex =
+            DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, SubIndexVT, LHSIndices,
+                        DAG.getVectorIdxConstant(0, DL));
+        SDValue SubVec =
+            DAG.getNode(GatherVVOpc, DL, M1VT, SubV1, SubIndex,
+                        DAG.getUNDEF(M1VT), InnerTrueMask, InnerVL);
+        Gather = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, ContainerVT, Gather,
+                             SubVec, SubIdx);
+      }
     } else {
       Gather = DAG.getNode(GatherVVOpc, DL, ContainerVT, V1, LHSIndices,
                            DAG.getUNDEF(ContainerVT), TrueMask, VL);
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
index 4b09b571b9406..30b2181ece1eb 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
@@ -38,15 +38,20 @@ define <4 x float> @interleave_v2f32(<2 x float> %x, <2 x float> %y) {
 define <4 x double> @interleave_v2f64(<2 x double> %x, <2 x double> %y) {
 ; V128-LABEL: interleave_v2f64:
 ; V128:       # %bb.0:
+; V128-NEXT:    csrr a0, vlenb
 ; V128-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
-; V128-NEXT:    vmv1r.v v12, v9
-; V128-NEXT:    vid.v v9
+; V128-NEXT:    vid.v v10
+; V128-NEXT:    srli a0, a0, 3
+; V128-NEXT:    vsrl.vi v10, v10, 1
+; V128-NEXT:    vslidedown.vx v12, v10, a0
+; V128-NEXT:    vsetvli a0, zero, e64, m1, ta, ma
+; V128-NEXT:    vrgatherei16.vv v13, v11, v12
+; V128-NEXT:    vrgatherei16.vv v12, v9, v10
 ; V128-NEXT:    vmv.v.i v0, 10
-; V128-NEXT:    vsrl.vi v14, v9, 1
-; V128-NEXT:    vsetvli zero, zero, e64, m2, ta, mu
-; V128-NEXT:    vrgatherei16.vv v10, v8, v14
-; V128-NEXT:    vrgatherei16.vv v10, v12, v14, v0.t
-; V128-NEXT:    vmv.v.v v8, v10
+; V128-NEXT:    vrgatherei16.vv v14, v8, v10
+; V128-NEXT:    vmv.v.v v15, v13
+; V128-NEXT:    vsetivli zero, 4, e64, m2, ta, ma
+; V128-NEXT:    vmerge.vvm v8, v14, v12, v0
 ; V128-NEXT:    ret
 ;
 ; RV32-V512-LABEL: interleave_v2f64:
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
index da7cdf3ba8ec0..ac70e5a3081c4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
@@ -51,15 +51,20 @@ define <4 x i32> @interleave_v2i32(<2 x i32> %x, <2 x i32> %y) {
 define <4 x i64> @interleave_v2i64(<2 x i64> %x, <2 x i64> %y) {
 ; V128-LABEL: interleave_v2i64:
 ; V128:       # %bb.0:
+; V128-NEXT:    csrr a0, vlenb
 ; V128-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
-; V128-NEXT:    vmv1r.v v12, v9
-; V128-NEXT:    vid.v v9
+; V128-NEXT:    vid.v v10
+; V128-NEXT:    srli a0, a0, 3
+; V128-NEXT:    vsrl.vi v10, v10, 1
+; V128-NEXT:    vslidedown.vx v12, v10, a0
+; V128-NEXT:    vsetvli a0, zero, e64, m1, ta, ma
+; V128-NEXT:    vrgatherei16.vv v13, v11, v12
+; V128-NEXT:    vrgatherei16.vv v12, v9, v10
 ; V128-NEXT:    vmv.v.i v0, 10
-; V128-NEXT:    vsrl.vi v14, v9, 1
-; V128-NEXT:    vsetvli zero, zero, e64, m2, ta, mu
-; V128-NEXT:    vrgatherei16.vv v10, v8, v14
-; V128-NEXT:    vrgatherei16.vv v10, v12, v14, v0.t
-; V128-NEXT:    vmv.v.v v8, v10
+; V128-NEXT:    vrgatherei16.vv v14, v8, v10
+; V128-NEXT:    vmv.v.v v15, v13
+; V128-NEXT:    vsetivli zero, 4, e64, m2, ta, ma
+; V128-NEXT:    vmerge.vvm v8, v14, v12, v0
 ; V128-NEXT:    ret
 ;
 ; RV32-V512-LABEL: interleave_v2i64:
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
index a5039c58fccb1..a6a79f57a3da6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
@@ -817,13 +817,17 @@ define <8 x i32> @shuffle_spread2_singlesrc_e32_index1(<8 x i32> %v) {
 define <8 x i32> @shuffle_spread2_singlesrc_e32_index2(<8 x i32> %v) {
 ; CHECK-LABEL: shuffle_spread2_singlesrc_e32_index2:
 ; CHECK:       # %bb.0:
+; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
 ; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    srli a0, a0, 2
 ; CHECK-NEXT:    vsrl.vi v10, v10, 1
 ; CHECK-NEXT:    vadd.vi v12, v10, -1
-; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
 ; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 undef, i32 undef, i32 0, i32 undef, i32 1, i32 undef, i32 2, i32 undef>
   ret <8 x i32> %out
@@ -836,9 +840,13 @@ define <8 x i32> @shuffle_spread3_singlesrc_e32(<8 x i32> %v) {
 ; CHECK-NEXT:    vmv.v.i v10, 0
 ; CHECK-NEXT:    li a0, 1
 ; CHECK-NEXT:    vslide1down.vx v12, v10, a0
-; CHECK-NEXT:    vsetvli zero, zero, e64, m2, ta, ma
+; CHECK-NEXT:    csrr a0, vlenb
+; CHECK-NEXT:    srli a0, a0, 3
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetvli a0, zero, e64, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
 ; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 undef, i32 undef, i32 1, i32 undef, i32 undef, i32 2, i32 undef>
   ret <8 x i32> %out
@@ -848,12 +856,16 @@ define <8 x i32> @shuffle_spread3_singlesrc_e32(<8 x i32> %v) {
 define <8 x i32> @shuffle_spread4_singlesrc_e32(<8 x i32> %v) {
 ; CHECK-LABEL: shuffle_spread4_singlesrc_e32:
 ; CHECK:       # %bb.0:
+; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
 ; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    srli a0, a0, 2
 ; CHECK-NEXT:    vsrl.vi v12, v10, 2
-; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
 ; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 undef, i32 undef, i32 undef, i32 1, i32 undef, i32 undef, i32 undef>
   ret <8 x i32> %out
@@ -980,12 +992,16 @@ define <8 x i32> @shuffle_repeat3_singlesrc_e32(<8 x i32> %v) {
 ; CHECK-NEXT:    vmv.v.i v11, 1
 ; CHECK-NEXT:    li a0, 192
 ; CHECK-NEXT:    vmv.s.x v10, a0
+; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    vmerge.vim v11, v11, 0, v0
 ; CHECK-NEXT:    vmv.v.v v0, v10
 ; CHECK-NEXT:    vmerge.vim v12, v11, 2, v0
-; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
+; CHECK-NEXT:    srli a0, a0, 2
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
 ; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 0, i32 0, i32 1, i32 1, i32 1, i32 2, i32 2>
   ret <8 x i32> %out
@@ -994,12 +1010,16 @@ define <8 x i32> @shuffle_repeat3_singlesrc_e32(<8 x i32> %v) {
 define <8 x i32> @shuffle_repeat4_singlesrc_e32(<8 x i32> %v) {
 ; CHECK-LABEL: shuffle_repeat4_singlesrc_e32:
 ; CHECK:       # %bb.0:
+; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
 ; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    srli a0, a0, 2
 ; CHECK-NEXT:    vsrl.vi v12, v10, 2
-; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
 ; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 0, i32 0, i32 0, i32 1, i32 1, i32 1, i32 1>
   ret <8 x i32> %out
@@ -1291,11 +1311,23 @@ define void @shuffle_i128_splat(ptr %p) nounwind {
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    vsetivli zero, 8, e64, m4, ta, ma
 ; CHECK-NEXT:    vle64.v v8, (a0)
-; CHECK-NEXT:    lui a1, 16
+; CHECK-NEXT:    csrr a1, vlenb
+; CHECK-NEXT:    lui a2, 16
+; CHECK-NEXT:    srli a1, a1, 3
 ; CHECK-NEXT:    vsetivli zero, 4, e32, m1, ta, ma
-; CHECK-NEXT:    vmv.v.x v12, a1
-; CHECK-NEXT:    vsetivli zero, 8, e64, m4, ta, ma
+; CHECK-NEXT:    vmv.v.x v12, a2
+; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
+; CHECK-NEXT:    vslidedown.vx v13, v12, a1
+; CHECK-NEXT:    vslidedown.vx v14, v13, a1
+; CHECK-NEXT:    vsetvli a2, zero, e64, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v17, v9, v13
 ; CHECK-NEXT:    vrgatherei16.vv v16, v8, v12
+; CHECK-NEXT:    vrgatherei16.vv v18, v10, v14
+; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
+; CHECK-NEXT:    vslidedown.vx v8, v14, a1
+; CHECK-NEXT:    vsetvli a1, zero, e64, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v19, v11, v8
+; CHECK-NEXT:    vsetivli zero, 8, e64, m4, ta, ma
 ; CHECK-NEXT:    vse64.v v16, (a0)
 ; CHECK-NEXT:    ret
   %a = load <4 x i128>, ptr %p
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
index abbbfe8f252fb..b7b5ca870bd90 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
@@ -237,10 +237,15 @@ define <8 x i32> @v8i32_v4i32(<4 x i32>) {
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    lui a0, %hi(.LCPI5_0)
 ; CHECK-NEXT:    addi a0, a0, %lo(.LCPI5_0)
-; CHECK-NEXT:    vsetivli zero, 8, e32, m2, ta, ma
-; CHECK-NEXT:    vle16.v v12, (a0)
-; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
+; CHECK-NEXT:    vle16.v v9, (a0)
+; CHECK-NEXT:    csrr a0, vlenb
+; CHECK-NEXT:    srli a0, a0, 2
+; CHECK-NEXT:    vslidedown.vx v10, v9, a0
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v12, v10
+; CHECK-NEXT:    vrgatherei16.vv v10, v8, v9
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %2 = shufflevector <4 x i32> %0, <4 x i32> poison, <8 x i32> <i32 2, i32 3, i32 0, i32 1, i32 1, i32 2, i32 0, i32 3>
   ret <8 x i32> %2
@@ -249,30 +254,38 @@ define <8 x i32> @v8i32_v4i32(<4 x i32>) {
 define <16 x i32> @v16i32_v4i32(<4 x i32>) {
 ; CHECK-LABEL: v16i32_v4i32:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    lui a0, 2
 ; CHECK-NEXT:    vsetivli zero, 16, e8, m1, ta, ma
-; CHECK-NEXT:    vmv.v.i v9, 3
+; CHECK-NEXT:    vmv1r.v v10, v8
+; CHECK-NEXT:    lui a0, 2
+; CHECK-NEXT:    vmv.v.i v11, 3
 ; CHECK-NEXT:    addi a1, a0, 265
 ; CHECK-NEXT:    vsetvli zero, zero, e16, m2, ta, ma
 ; CHECK-NEXT:    vmv.s.x v0, a1
 ; CHECK-NEXT:    lui a1, 4
 ; CHECK-NEXT:    addi a1, a1, 548
-; CHECK-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
-; CHECK-NEXT:    vmerge.vim v9, v9, 2, v0
-; CHECK-NEXT:    vsetvli zero, zero, e16, m2, ta, ma
-; CHECK-NEXT:    vmv.s.x v0, a1
+; CHECK-NEXT:    vmv.s.x v8, a1
+; CHECK-NEXT:    csrr a1, vlenb
 ; CHECK-NEXT:    addi a0, a0, -1856
+; CHECK-NEXT:    srli a1, a1, 2
+; CHECK-NEXT:    vmv.s.x v9, a0
 ; CHECK-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
-; CHECK-NEXT:    vmerge.vim v9, v9, 0, v0
-; CHECK-NEXT:    vsetvli zero, zero, e16, m2, ta, ma
-; CHECK-NEXT:    vmv.s.x v0, a0
-; CHECK-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
-; CHECK-NEXT:    vmerge.vim v9, v9, 1, v0
+; CHECK-NEXT:    vmerge.vim v11, v11, 2, v0
+; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmerge.vim v8, v11, 0, v0
+; CHECK-NEXT:    vmv1r.v v0, v9
+; CHECK-NEXT:    vmerge.vim v8, v8, 1, v0
 ; CHECK-NEXT:    vsetvli zero, zero, e16, m2, ta, ma
-; CHECK-NEXT:    vsext.vf2 v16, v9
-; CHECK-NEXT:    vsetvli zero, zero, e32, m4, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v12, v8, v16
-; CHECK-NEXT:    vmv.v.v v8, v12
+; CHECK-NEXT:    vsext.vf2 v14, v8
+; CHECK-NEXT:    vslidedown.vx v16, v14, a1
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v9, v12, v16
+; CHECK-NEXT:    vrgatherei16.vv v8, v10, v14
+; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v12, v16, a1
+; CHECK-NEXT:    vslidedown.vx v14, v12, a1
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v10, v11, v12
+; CHECK-NEXT:    vrgatherei16.vv v11, v12, v14
 ; CHECK-NEXT:    ret
   %2 = shufflevector <4 x i32> %0, <4 x i32> poison, <16 x i32> <i32 2, i32 3, i32 0, i32 2, i32 3, i32 0, i32 1, i32 1, i32 2, i32 0, i32 3, i32 1, i32 1, i32 2, i32 0, i32 3>
   ret <16 x i32> %2
@@ -281,31 +294,55 @@ define <16 x i32> @v16i32_v4i32(<4 x i32>) {
 define <32 x i32> @v32i32_v4i32(<4 x i32>) {
 ; CHECK-LABEL: v32i32_v4i32:
 ; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 1, e32, m4, ta, ma
+; CHECK-NEXT:    vmv1r.v v10, v8
 ; CHECK-NEXT:    li a0, 32
 ; CHECK-NEXT:    lui a1, 135432
 ; CHECK-NEXT:    addi a1, a1, 1161
-; CHECK-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
 ; CHECK-NEXT:    vmv.s.x v0, a1
 ; CHECK-NEXT:    lui a1, 270865
 ; CHECK-NEXT:    addi a1, a1, 548
-; CHECK-NEXT:    vmv.s.x v9, a1
+; CHECK-NEXT:    vmv.s.x v8, a1
 ; CHECK-NEXT:    lui a1, 100550
+; CHECK-NEXT:    addi a1, a1, 64
+; CHECK-NEXT:    vmv.s.x v9, a1
+; CHECK-NEXT:    csrr a1, vlenb
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m2, ta, ma
-; CHECK-NEXT:    vmv.v.i v10, 3
-; CHECK-NEXT:    addi a0, a1, 64
-; CHECK-NEXT:    vmerge.vim v18, v10, 2, v0
-; CHECK-NEXT:    vsetvli zero, zero, e32, m8, ta, ma
-; CHECK-NEXT:    vmv.s.x v16, a0
+; CHECK-NEXT:    vmv.v.i v12, 3
+; CHECK-NEXT:    srli a1, a1, 2
+; CHECK-NEXT:    vmerge.vim v12, v12, 2, v0
+; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmerge.vim v12, v12, 0, v0
 ; CHECK-NEXT:    vmv1r.v v0, v9
-; CHECK-NEXT:    vsetvli zero, zero, e8, m2, ta, ma
-; CHECK-NEXT:    vmerge.vim v18, v18, 0, v0
-; CHECK-NEXT:    vmv1r.v v0, v16
-; CHECK-NEXT:    vmerge.vim v16, v18, 1, v0
+; CHECK-NEXT:    vmerge.vim v8, v12, 1, v0
 ; CHECK-NEXT:    vsetvli zero, zero, e16, m4, ta, ma
-; CHECK-NEXT:    vsext.vf2 v24, v16
-; CHECK-NEXT:    vsetvli zero, zero, e32, m8, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v16, v8, v24
-; CHECK-NEXT:    vmv.v.v v8, v16
+; CHECK-NEXT:    vsext.vf2 v16, v8
+; CHECK-NEXT:    vslidedown.vx v12, v16, a1
+; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v9, v11, v12
+; CHECK-NEXT:    vrgatherei16.vv v8, v10, v16
+; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v12, v12, a1
+; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v10, v11, v12
+; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v12, v12, a1
+; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v16, v12
+; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v20, v12, a1
+; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v12, v17, v20
+; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v20, v20, a1
+; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v13, v18, v20
+; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v20, v20, a1
+; CHECK-NEXT:    vslidedown.vx v24, v20, a1
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v14, v19, v20
+; CHECK-NEXT:    vrgatherei16.vv v15, v16, v24
 ; CHECK-NEXT:    ret
   %2 = shufflevector <4 x i32> %0, <4 x i32> poison, <32 x i32> <i32 2, i32 3, i32 0, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 2, i32 3, i32 0, i32 1, i32 1, i32 2, i32 0, i32 3, i32 1, i32 1, i32 2, i32 0, i32 3, i32 1, i32 2, i32 0, i32 3, i32 1, i32 1, i32 2, i32 0, i32 3>
   ret <32 x i32> %2

From 849eff1dc43f781ede22b875335b30ec19395499 Mon Sep 17 00:00:00 2001
From: Philip Reames <preames at rivosinc.com>
Date: Thu, 6 Feb 2025 10:46:59 -0800
Subject: [PATCH 2/4] Minor style and comment improvements

---
 llvm/lib/Target/RISCV/RISCVISelLowering.cpp | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 30db4f6f13d85..426519cd3a452 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -5352,11 +5352,8 @@ static bool isLocalRepeatingShuffle(ArrayRef<int> Mask, int Span) {
 
 /// Is this mask only using elements from the first span of the input?
 static bool isLowSourceShuffle(ArrayRef<int> Mask, int Span) {
-  for (auto [I, M] : enumerate(Mask)) {
-    if (M != -1 && M >= Span)
-      return false;
-  }
-  return true;
+  return !any_of(Mask,
+                 [&](const auto &Idx) { return Idx != -1 && Idx >= Span; });
 }
 
 /// Try to widen element type to get a new mask value for a better permutation
@@ -5776,6 +5773,11 @@ static SDValue lowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG,
                              SubVec, SubIdx);
       }
     } else if (ContainerVT.bitsGT(M1VT) && isLowSourceShuffle(Mask, VLMAX)) {
+      // If we have a shuffle which only uses the first register in our
+      // source register group, we can do a linear number of m1 vrgathers
+      // reusing the same source register (but with different indices)
+      // TODO: This can be generalized for m2 or m4, or for any shuffle
+      // for which we can do a vslidedown followed by this expansion.
       EVT SubIndexVT = M1VT.changeVectorElementType(IndexVT.getScalarType());
       auto [InnerTrueMask, InnerVL] =
           getDefaultScalableVLOps(M1VT, DL, DAG, Subtarget);

From c359d56faa58de6c0a4bae9a58e3ec9284411d73 Mon Sep 17 00:00:00 2001
From: Philip Reames <preames at rivosinc.com>
Date: Tue, 11 Feb 2025 13:33:35 -0800
Subject: [PATCH 3/4] Address review comment

---
 llvm/lib/Target/RISCV/RISCVISelLowering.cpp | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 426519cd3a452..4f0904ed0df5d 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -5352,8 +5352,8 @@ static bool isLocalRepeatingShuffle(ArrayRef<int> Mask, int Span) {
 
 /// Is this mask only using elements from the first span of the input?
 static bool isLowSourceShuffle(ArrayRef<int> Mask, int Span) {
-  return !any_of(Mask,
-                 [&](const auto &Idx) { return Idx != -1 && Idx >= Span; });
+  return all_of(Mask,
+                [&](const auto &Idx) { return Idx == -1 || Idx < Span; });
 }
 
 /// Try to widen element type to get a new mask value for a better permutation

From 7ba16acf776a77b5dbf7c1ccf9a43e0a2a6dd7a5 Mon Sep 17 00:00:00 2001
From: Philip Reames <preames at rivosinc.com>
Date: Wed, 12 Feb 2025 08:58:11 -0800
Subject: [PATCH 4/4] Fix serious bug

---
 llvm/lib/Target/RISCV/RISCVISelLowering.cpp   |  9 ++-
 .../RISCV/rvv/fixed-vectors-fp-interleave.ll  |  8 +-
 .../RISCV/rvv/fixed-vectors-int-interleave.ll |  8 +-
 .../RISCV/rvv/fixed-vectors-int-shuffles.ll   | 76 +++++++++----------
 .../fixed-vectors-shuffle-changes-length.ll   | 74 +++++++++---------
 5 files changed, 90 insertions(+), 85 deletions(-)

diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 4f0904ed0df5d..efe22cc972865 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -5787,21 +5787,22 @@ static SDValue lowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG,
       Gather = DAG.getUNDEF(ContainerVT);
       SDValue SlideAmt =
           DAG.getElementCount(DL, XLenVT, M1VT.getVectorElementCount());
+      SDValue SubV1 =
+          DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, M1VT, V1,
+                      DAG.getVectorIdxConstant(0, DL));
       for (int i = 0; i < N; i++) {
         if (i != 0)
           LHSIndices = getVSlidedown(DAG, Subtarget, DL, IndexContainerVT,
                                      DAG.getUNDEF(IndexContainerVT), LHSIndices,
                                      SlideAmt, TrueMask, VL);
-        SDValue SubIdx =
-            DAG.getVectorIdxConstant(M1VT.getVectorMinNumElements() * i, DL);
-        SDValue SubV1 =
-            DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, M1VT, V1, SubIdx);
         SDValue SubIndex =
             DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, SubIndexVT, LHSIndices,
                         DAG.getVectorIdxConstant(0, DL));
         SDValue SubVec =
             DAG.getNode(GatherVVOpc, DL, M1VT, SubV1, SubIndex,
                         DAG.getUNDEF(M1VT), InnerTrueMask, InnerVL);
+        SDValue SubIdx =
+            DAG.getVectorIdxConstant(M1VT.getVectorMinNumElements() * i, DL);
         Gather = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, ContainerVT, Gather,
                              SubVec, SubIdx);
       }
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
index 30b2181ece1eb..ac78a252cf9cd 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
@@ -41,15 +41,15 @@ define <4 x double> @interleave_v2f64(<2 x double> %x, <2 x double> %y) {
 ; V128-NEXT:    csrr a0, vlenb
 ; V128-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
 ; V128-NEXT:    vid.v v10
+; V128-NEXT:    vmv.v.i v0, 10
 ; V128-NEXT:    srli a0, a0, 3
 ; V128-NEXT:    vsrl.vi v10, v10, 1
-; V128-NEXT:    vslidedown.vx v12, v10, a0
+; V128-NEXT:    vslidedown.vx v11, v10, a0
 ; V128-NEXT:    vsetvli a0, zero, e64, m1, ta, ma
-; V128-NEXT:    vrgatherei16.vv v13, v11, v12
+; V128-NEXT:    vrgatherei16.vv v13, v9, v11
 ; V128-NEXT:    vrgatherei16.vv v12, v9, v10
-; V128-NEXT:    vmv.v.i v0, 10
+; V128-NEXT:    vrgatherei16.vv v15, v8, v11
 ; V128-NEXT:    vrgatherei16.vv v14, v8, v10
-; V128-NEXT:    vmv.v.v v15, v13
 ; V128-NEXT:    vsetivli zero, 4, e64, m2, ta, ma
 ; V128-NEXT:    vmerge.vvm v8, v14, v12, v0
 ; V128-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
index ac70e5a3081c4..beaf75d5b0cfa 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
@@ -54,15 +54,15 @@ define <4 x i64> @interleave_v2i64(<2 x i64> %x, <2 x i64> %y) {
 ; V128-NEXT:    csrr a0, vlenb
 ; V128-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
 ; V128-NEXT:    vid.v v10
+; V128-NEXT:    vmv.v.i v0, 10
 ; V128-NEXT:    srli a0, a0, 3
 ; V128-NEXT:    vsrl.vi v10, v10, 1
-; V128-NEXT:    vslidedown.vx v12, v10, a0
+; V128-NEXT:    vslidedown.vx v11, v10, a0
 ; V128-NEXT:    vsetvli a0, zero, e64, m1, ta, ma
-; V128-NEXT:    vrgatherei16.vv v13, v11, v12
+; V128-NEXT:    vrgatherei16.vv v13, v9, v11
 ; V128-NEXT:    vrgatherei16.vv v12, v9, v10
-; V128-NEXT:    vmv.v.i v0, 10
+; V128-NEXT:    vrgatherei16.vv v15, v8, v11
 ; V128-NEXT:    vrgatherei16.vv v14, v8, v10
-; V128-NEXT:    vmv.v.v v15, v13
 ; V128-NEXT:    vsetivli zero, 4, e64, m2, ta, ma
 ; V128-NEXT:    vmerge.vvm v8, v14, v12, v0
 ; V128-NEXT:    ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
index a6a79f57a3da6..51417424db14a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
@@ -819,14 +819,14 @@ define <8 x i32> @shuffle_spread2_singlesrc_e32_index2(<8 x i32> %v) {
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
-; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    vid.v v9
 ; CHECK-NEXT:    srli a0, a0, 2
-; CHECK-NEXT:    vsrl.vi v10, v10, 1
-; CHECK-NEXT:    vadd.vi v12, v10, -1
-; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsrl.vi v9, v9, 1
+; CHECK-NEXT:    vadd.vi v9, v9, -1
+; CHECK-NEXT:    vslidedown.vx v10, v9, a0
 ; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
-; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
+; CHECK-NEXT:    vrgatherei16.vv v11, v8, v10
+; CHECK-NEXT:    vrgatherei16.vv v10, v8, v9
 ; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 undef, i32 undef, i32 0, i32 undef, i32 1, i32 undef, i32 2, i32 undef>
@@ -837,15 +837,15 @@ define <8 x i32> @shuffle_spread3_singlesrc_e32(<8 x i32> %v) {
 ; CHECK-LABEL: shuffle_spread3_singlesrc_e32:
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
-; CHECK-NEXT:    vmv.v.i v10, 0
+; CHECK-NEXT:    vmv.v.i v9, 0
 ; CHECK-NEXT:    li a0, 1
-; CHECK-NEXT:    vslide1down.vx v12, v10, a0
+; CHECK-NEXT:    vslide1down.vx v9, v9, a0
 ; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    srli a0, a0, 3
-; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vslidedown.vx v10, v9, a0
 ; CHECK-NEXT:    vsetvli a0, zero, e64, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
-; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
+; CHECK-NEXT:    vrgatherei16.vv v11, v8, v10
+; CHECK-NEXT:    vrgatherei16.vv v10, v8, v9
 ; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 undef, i32 undef, i32 1, i32 undef, i32 undef, i32 2, i32 undef>
@@ -858,13 +858,13 @@ define <8 x i32> @shuffle_spread4_singlesrc_e32(<8 x i32> %v) {
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
-; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    vid.v v9
 ; CHECK-NEXT:    srli a0, a0, 2
-; CHECK-NEXT:    vsrl.vi v12, v10, 2
-; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsrl.vi v9, v9, 2
+; CHECK-NEXT:    vslidedown.vx v10, v9, a0
 ; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
-; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
+; CHECK-NEXT:    vrgatherei16.vv v11, v8, v10
+; CHECK-NEXT:    vrgatherei16.vv v10, v8, v9
 ; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 undef, i32 undef, i32 undef, i32 1, i32 undef, i32 undef, i32 undef>
@@ -989,18 +989,18 @@ define <8 x i32> @shuffle_repeat3_singlesrc_e32(<8 x i32> %v) {
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
 ; CHECK-NEXT:    vmv.v.i v0, 7
-; CHECK-NEXT:    vmv.v.i v11, 1
+; CHECK-NEXT:    vmv.v.i v10, 1
 ; CHECK-NEXT:    li a0, 192
-; CHECK-NEXT:    vmv.s.x v10, a0
+; CHECK-NEXT:    vmv.s.x v9, a0
 ; CHECK-NEXT:    csrr a0, vlenb
-; CHECK-NEXT:    vmerge.vim v11, v11, 0, v0
-; CHECK-NEXT:    vmv.v.v v0, v10
-; CHECK-NEXT:    vmerge.vim v12, v11, 2, v0
+; CHECK-NEXT:    vmerge.vim v10, v10, 0, v0
+; CHECK-NEXT:    vmv.v.v v0, v9
+; CHECK-NEXT:    vmerge.vim v9, v10, 2, v0
 ; CHECK-NEXT:    srli a0, a0, 2
-; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vslidedown.vx v10, v9, a0
 ; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
-; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
+; CHECK-NEXT:    vrgatherei16.vv v11, v8, v10
+; CHECK-NEXT:    vrgatherei16.vv v10, v8, v9
 ; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 0, i32 0, i32 1, i32 1, i32 1, i32 2, i32 2>
@@ -1012,13 +1012,13 @@ define <8 x i32> @shuffle_repeat4_singlesrc_e32(<8 x i32> %v) {
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
-; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    vid.v v9
 ; CHECK-NEXT:    srli a0, a0, 2
-; CHECK-NEXT:    vsrl.vi v12, v10, 2
-; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsrl.vi v9, v9, 2
+; CHECK-NEXT:    vslidedown.vx v10, v9, a0
 ; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
-; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
+; CHECK-NEXT:    vrgatherei16.vv v11, v8, v10
+; CHECK-NEXT:    vrgatherei16.vv v10, v8, v9
 ; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 0, i32 0, i32 0, i32 1, i32 1, i32 1, i32 1>
@@ -1315,20 +1315,20 @@ define void @shuffle_i128_splat(ptr %p) nounwind {
 ; CHECK-NEXT:    lui a2, 16
 ; CHECK-NEXT:    srli a1, a1, 3
 ; CHECK-NEXT:    vsetivli zero, 4, e32, m1, ta, ma
-; CHECK-NEXT:    vmv.v.x v12, a2
+; CHECK-NEXT:    vmv.v.x v9, a2
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
-; CHECK-NEXT:    vslidedown.vx v13, v12, a1
-; CHECK-NEXT:    vslidedown.vx v14, v13, a1
+; CHECK-NEXT:    vslidedown.vx v10, v9, a1
+; CHECK-NEXT:    vslidedown.vx v11, v10, a1
 ; CHECK-NEXT:    vsetvli a2, zero, e64, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v17, v9, v13
-; CHECK-NEXT:    vrgatherei16.vv v16, v8, v12
-; CHECK-NEXT:    vrgatherei16.vv v18, v10, v14
+; CHECK-NEXT:    vrgatherei16.vv v13, v8, v10
+; CHECK-NEXT:    vrgatherei16.vv v12, v8, v9
+; CHECK-NEXT:    vrgatherei16.vv v14, v8, v11
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
-; CHECK-NEXT:    vslidedown.vx v8, v14, a1
+; CHECK-NEXT:    vslidedown.vx v9, v11, a1
 ; CHECK-NEXT:    vsetvli a1, zero, e64, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v19, v11, v8
+; CHECK-NEXT:    vrgatherei16.vv v15, v8, v9
 ; CHECK-NEXT:    vsetivli zero, 8, e64, m4, ta, ma
-; CHECK-NEXT:    vse64.v v16, (a0)
+; CHECK-NEXT:    vse64.v v12, (a0)
 ; CHECK-NEXT:    ret
   %a = load <4 x i128>, ptr %p
   %res = shufflevector <4 x i128> %a, <4 x i128> poison, <4 x i32> <i32 0, i32 0, i32 0, i32 0>
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
index b7b5ca870bd90..e4b6e5c47fd98 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
@@ -243,7 +243,7 @@ define <8 x i32> @v8i32_v4i32(<4 x i32>) {
 ; CHECK-NEXT:    srli a0, a0, 2
 ; CHECK-NEXT:    vslidedown.vx v10, v9, a0
 ; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v11, v12, v10
+; CHECK-NEXT:    vrgatherei16.vv v11, v8, v10
 ; CHECK-NEXT:    vrgatherei16.vv v10, v8, v9
 ; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
@@ -255,9 +255,9 @@ define <16 x i32> @v16i32_v4i32(<4 x i32>) {
 ; CHECK-LABEL: v16i32_v4i32:
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    vsetivli zero, 16, e8, m1, ta, ma
-; CHECK-NEXT:    vmv1r.v v10, v8
+; CHECK-NEXT:    vmv1r.v v12, v8
 ; CHECK-NEXT:    lui a0, 2
-; CHECK-NEXT:    vmv.v.i v11, 3
+; CHECK-NEXT:    vmv.v.i v10, 3
 ; CHECK-NEXT:    addi a1, a0, 265
 ; CHECK-NEXT:    vsetvli zero, zero, e16, m2, ta, ma
 ; CHECK-NEXT:    vmv.s.x v0, a1
@@ -269,22 +269,24 @@ define <16 x i32> @v16i32_v4i32(<4 x i32>) {
 ; CHECK-NEXT:    srli a1, a1, 2
 ; CHECK-NEXT:    vmv.s.x v9, a0
 ; CHECK-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
-; CHECK-NEXT:    vmerge.vim v11, v11, 2, v0
+; CHECK-NEXT:    vmerge.vim v10, v10, 2, v0
 ; CHECK-NEXT:    vmv1r.v v0, v8
-; CHECK-NEXT:    vmerge.vim v8, v11, 0, v0
+; CHECK-NEXT:    vmerge.vim v8, v10, 0, v0
 ; CHECK-NEXT:    vmv1r.v v0, v9
 ; CHECK-NEXT:    vmerge.vim v8, v8, 1, v0
 ; CHECK-NEXT:    vsetvli zero, zero, e16, m2, ta, ma
-; CHECK-NEXT:    vsext.vf2 v14, v8
-; CHECK-NEXT:    vslidedown.vx v16, v14, a1
+; CHECK-NEXT:    vsext.vf2 v10, v8
+; CHECK-NEXT:    vslidedown.vx v14, v10, a1
 ; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v9, v12, v16
-; CHECK-NEXT:    vrgatherei16.vv v8, v10, v14
+; CHECK-NEXT:    vrgatherei16.vv v9, v12, v14
 ; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, ma
-; CHECK-NEXT:    vslidedown.vx v12, v16, a1
-; CHECK-NEXT:    vslidedown.vx v14, v12, a1
+; CHECK-NEXT:    vslidedown.vx v14, v14, a1
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v8, v12, v10
+; CHECK-NEXT:    vrgatherei16.vv v10, v12, v14
+; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v14, v14, a1
 ; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v10, v11, v12
 ; CHECK-NEXT:    vrgatherei16.vv v11, v12, v14
 ; CHECK-NEXT:    ret
   %2 = shufflevector <4 x i32> %0, <4 x i32> poison, <16 x i32> <i32 2, i32 3, i32 0, i32 2, i32 3, i32 0, i32 1, i32 1, i32 2, i32 0, i32 3, i32 1, i32 1, i32 2, i32 0, i32 3>
@@ -294,55 +296,57 @@ define <16 x i32> @v16i32_v4i32(<4 x i32>) {
 define <32 x i32> @v32i32_v4i32(<4 x i32>) {
 ; CHECK-LABEL: v32i32_v4i32:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetivli zero, 1, e32, m4, ta, ma
-; CHECK-NEXT:    vmv1r.v v10, v8
 ; CHECK-NEXT:    li a0, 32
 ; CHECK-NEXT:    lui a1, 135432
 ; CHECK-NEXT:    addi a1, a1, 1161
+; CHECK-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
 ; CHECK-NEXT:    vmv.s.x v0, a1
 ; CHECK-NEXT:    lui a1, 270865
 ; CHECK-NEXT:    addi a1, a1, 548
-; CHECK-NEXT:    vmv.s.x v8, a1
+; CHECK-NEXT:    vmv.s.x v9, a1
 ; CHECK-NEXT:    lui a1, 100550
 ; CHECK-NEXT:    addi a1, a1, 64
-; CHECK-NEXT:    vmv.s.x v9, a1
+; CHECK-NEXT:    vmv.s.x v10, a1
 ; CHECK-NEXT:    csrr a1, vlenb
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m2, ta, ma
 ; CHECK-NEXT:    vmv.v.i v12, 3
 ; CHECK-NEXT:    srli a1, a1, 2
 ; CHECK-NEXT:    vmerge.vim v12, v12, 2, v0
-; CHECK-NEXT:    vmv1r.v v0, v8
-; CHECK-NEXT:    vmerge.vim v12, v12, 0, v0
 ; CHECK-NEXT:    vmv1r.v v0, v9
-; CHECK-NEXT:    vmerge.vim v8, v12, 1, v0
+; CHECK-NEXT:    vmerge.vim v12, v12, 0, v0
+; CHECK-NEXT:    vmv1r.v v0, v10
+; CHECK-NEXT:    vmerge.vim v10, v12, 1, v0
 ; CHECK-NEXT:    vsetvli zero, zero, e16, m4, ta, ma
-; CHECK-NEXT:    vsext.vf2 v16, v8
-; CHECK-NEXT:    vslidedown.vx v12, v16, a1
+; CHECK-NEXT:    vsext.vf2 v12, v10
+; CHECK-NEXT:    vslidedown.vx v20, v12, a1
 ; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v9, v11, v12
-; CHECK-NEXT:    vrgatherei16.vv v8, v10, v16
+; CHECK-NEXT:    vrgatherei16.vv v17, v8, v20
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
-; CHECK-NEXT:    vslidedown.vx v12, v12, a1
+; CHECK-NEXT:    vslidedown.vx v20, v20, a1
 ; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v10, v11, v12
+; CHECK-NEXT:    vrgatherei16.vv v16, v8, v12
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
-; CHECK-NEXT:    vslidedown.vx v12, v12, a1
+; CHECK-NEXT:    vslidedown.vx v12, v20, a1
 ; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v11, v16, v12
+; CHECK-NEXT:    vrgatherei16.vv v18, v8, v20
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
-; CHECK-NEXT:    vslidedown.vx v20, v12, a1
+; CHECK-NEXT:    vslidedown.vx v24, v12, a1
 ; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v12, v17, v20
+; CHECK-NEXT:    vrgatherei16.vv v19, v8, v12
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
-; CHECK-NEXT:    vslidedown.vx v20, v20, a1
+; CHECK-NEXT:    vslidedown.vx v12, v24, a1
 ; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v13, v18, v20
+; CHECK-NEXT:    vrgatherei16.vv v20, v8, v24
 ; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
-; CHECK-NEXT:    vslidedown.vx v20, v20, a1
-; CHECK-NEXT:    vslidedown.vx v24, v20, a1
+; CHECK-NEXT:    vslidedown.vx v24, v12, a1
+; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v21, v8, v12
+; CHECK-NEXT:    vrgatherei16.vv v22, v8, v24
+; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v12, v24, a1
 ; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v14, v19, v20
-; CHECK-NEXT:    vrgatherei16.vv v15, v16, v24
+; CHECK-NEXT:    vrgatherei16.vv v23, v8, v12
+; CHECK-NEXT:    vmv8r.v v8, v16
 ; CHECK-NEXT:    ret
   %2 = shufflevector <4 x i32> %0, <4 x i32> poison, <32 x i32> <i32 2, i32 3, i32 0, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 2, i32 3, i32 0, i32 1, i32 1, i32 2, i32 0, i32 3, i32 1, i32 1, i32 2, i32 0, i32 3, i32 1, i32 2, i32 0, i32 3, i32 1, i32 1, i32 2, i32 0, i32 3>
   ret <32 x i32> %2


