[llvm] [RISCV] Reduce the LMUL for a vrgather operation if legal (PR #125768)
Luke Lau via llvm-commits
llvm-commits at lists.llvm.org
Wed Feb 5 02:13:01 PST 2025
================
@@ -874,27 +874,30 @@ define <16 x i8> @reverse_v16i8_2(<8 x i8> %a, <8 x i8> %b) {
define <32 x i8> @reverse_v32i8_2(<16 x i8> %a, <16 x i8> %b) {
; CHECK-LABEL: reverse_v32i8_2:
; CHECK: # %bb.0:
-; CHECK-NEXT: vsetvli a0, zero, e16, m2, ta, ma
-; CHECK-NEXT: vmv1r.v v10, v9
; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: vid.v v12
+; CHECK-NEXT: vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT: vid.v v10
; CHECK-NEXT: addi a1, a0, -1
-; CHECK-NEXT: vrsub.vx v12, v12, a1
+; CHECK-NEXT: vrsub.vx v10, v10, a1
; CHECK-NEXT: lui a1, 16
; CHECK-NEXT: addi a1, a1, -1
; CHECK-NEXT: vsetvli zero, zero, e8, m1, ta, ma
-; CHECK-NEXT: vrgatherei16.vv v15, v8, v12
-; CHECK-NEXT: vrgatherei16.vv v14, v9, v12
+; CHECK-NEXT: vrgatherei16.vv v15, v8, v10
+; CHECK-NEXT: vrgatherei16.vv v14, v12, v10
; CHECK-NEXT: vsetvli zero, zero, e32, m4, ta, ma
; CHECK-NEXT: vmv.s.x v0, a1
; CHECK-NEXT: li a1, 32
-; CHECK-NEXT: slli a0, a0, 1
-; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, mu
+; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vid.v v8
+; CHECK-NEXT: slli a0, a0, 1
+; CHECK-NEXT: vrsub.vi v8, v8, 15
; CHECK-NEXT: addi a0, a0, -32
-; CHECK-NEXT: vrsub.vi v12, v8, 15
-; CHECK-NEXT: vslidedown.vx v8, v14, a0
-; CHECK-NEXT: vrgather.vv v8, v10, v12, v0.t
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
+; CHECK-NEXT: vslidedown.vx v10, v14, a0
+; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
+; CHECK-NEXT: vrgather.vv v12, v9, v8
+; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
+; CHECK-NEXT: vmerge.vvm v8, v10, v12, v0
; CHECK-NEXT: ret
%res = shufflevector <16 x i8> %a, <16 x i8> %b, <32 x i32> <i32 31, i32 30, i32 29, i32 28, i32 27, i32 26, i32 25, i32 24, i32 23, i32 22, i32 21, i32 20, i32 19, i32 18, i32 17, i32 16, i32 15, i32 14, i32 13, i32 12, i32 11, i32 10, i32 9, i32 8, i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
----------------
lukel97 wrote:
Why aren't we able to do the single-source version of the reverse lowering here, which is just LMUL * M1 vrgathers and a single vslidedown? E.g. see earlier in the tests:
```asm
reverse_v8i32:
# %bb.0:
csrr a0, vlenb
vsetvli a1, zero, e32, m1, ta, ma
vid.v v10
srli a1, a0, 2
srli a0, a0, 1
addi a1, a1, -1
vrsub.vx v10, v10, a1
vrgather.vv v13, v8, v10
vrgather.vv v12, v9, v10
addi a0, a0, -8
vsetivli zero, 8, e32, m2, ta, ma
vslidedown.vx v8, v12, a0
ret
```
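To make the index math in that lowering concrete, here's a rough Python model of it (function and variable names are mine, not the backend's): each M1 register is reversed in place by a vrgather, the vrgathers write the chunks in reversed register order, and the final vslidedown shifts by (register-group capacity - VL) to align the result to element 0.

```python
def reverse_via_m1_chunks(chunks, vl):
    """Model the single-source reverse lowering: per-M1-register
    vrgathers reverse each chunk and place the chunks in reversed
    register order; a vslidedown then aligns the result to element 0."""
    elts_per_m1 = len(chunks[0])
    group = []
    for chunk in reversed(chunks):           # chunks land in reversed register order
        group.extend(reversed(chunk))        # vrgather reverses within each chunk
    slide = len(chunks) * elts_per_m1 - vl   # slide amount = capacity - VL
    return group[slide:slide + vl]

# E.g. VLEN=256, e32, m2: two M1 registers of 8 elements each, VL=8,
# so the 8 source elements sit in the low half of the group.
chunks = [[0, 1, 2, 3, 4, 5, 6, 7], [None] * 8]  # tail slots unused
print(reverse_via_m1_chunks(chunks, 8))  # [7, 6, 5, 4, 3, 2, 1, 0]
```

The same formula covers the VLEN=128 case (slide amount 0) since the source then fills the whole m2 group.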
It looks like the reverse mask is getting all messed up when the shufflevector goes from IR to SDAG:
```
t9: v32i8 = concat_vectors t4, undef:v16i8
t10: v32i8 = concat_vectors t7, undef:v16i8
t11: v32i8 = vector_shuffle<47,46,45,44,43,42,41,40,39,38,37,36,35,34,33,32,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0> t9, t10
```
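The odd-looking mask falls out of how the two v16i8 operands get widened: each is concat'd with undef into a v32i8, so any mask index that selected from the second operand (>= 16) is rebased to start at 32. A small sketch of that remapping (names are illustrative, not SelectionDAG's):

```python
def rebase_mask_after_widening(mask, old_n, new_n):
    # Indices < old_n select from op0 and stay put; indices that
    # selected from op1 are rebased, since op1 now starts at new_n
    # once op0 has been widened from old_n to new_n elements.
    return [i if i < old_n else new_n + (i - old_n) for i in mask]

reverse_mask = list(range(31, -1, -1))  # <31,30,...,1,0>
print(rebase_mask_after_widening(reverse_mask, 16, 32))
# <47,46,...,33,32,15,14,...,1,0> -- matching the t11 dump above

# If the two sources were instead concatenated into one operand,
# the original mask would be a plain full reverse of 32 elements:
combined = list(range(32))  # models concat_vectors of both sources
assert [combined[i] for i in reverse_mask] == list(reversed(combined))
```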
Would we get better lowering if we combined this to:
```
t10: v32i8 = concat_vectors t4, t7
t11: v32i8 = vector_shuffle<31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0> t10, undef
```
https://github.com/llvm/llvm-project/pull/125768