[llvm] [RISCV] Support llvm.masked.expandload intrinsic (PR #101954)

Pengcheng Wang via llvm-commits llvm-commits at lists.llvm.org
Tue Aug 6 06:05:48 PDT 2024


================
@@ -250,50 +102,13 @@ declare <4 x i16> @llvm.masked.expandload.v4i16(ptr, <4 x i1>, <4 x i16>)
 define <4 x i16> @expandload_v4i16(ptr %base, <4 x i16> %src0, <4 x i1> %mask) {
 ; CHECK-LABEL: expandload_v4i16:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetivli zero, 1, e8, m1, ta, ma
-; CHECK-NEXT:    vmv.x.s a1, v0
-; CHECK-NEXT:    andi a2, a1, 1
-; CHECK-NEXT:    bnez a2, .LBB6_5
-; CHECK-NEXT:  # %bb.1: # %else
-; CHECK-NEXT:    andi a2, a1, 2
-; CHECK-NEXT:    bnez a2, .LBB6_6
-; CHECK-NEXT:  .LBB6_2: # %else2
-; CHECK-NEXT:    andi a2, a1, 4
-; CHECK-NEXT:    bnez a2, .LBB6_7
-; CHECK-NEXT:  .LBB6_3: # %else6
-; CHECK-NEXT:    andi a1, a1, 8
-; CHECK-NEXT:    bnez a1, .LBB6_8
-; CHECK-NEXT:  .LBB6_4: # %else10
-; CHECK-NEXT:    ret
-; CHECK-NEXT:  .LBB6_5: # %cond.load
-; CHECK-NEXT:    lh a2, 0(a0)
-; CHECK-NEXT:    vsetvli zero, zero, e16, m2, tu, ma
-; CHECK-NEXT:    vmv.s.x v8, a2
-; CHECK-NEXT:    addi a0, a0, 2
-; CHECK-NEXT:    andi a2, a1, 2
-; CHECK-NEXT:    beqz a2, .LBB6_2
-; CHECK-NEXT:  .LBB6_6: # %cond.load1
-; CHECK-NEXT:    lh a2, 0(a0)
-; CHECK-NEXT:    vsetvli zero, zero, e16, m2, ta, ma
-; CHECK-NEXT:    vmv.s.x v9, a2
-; CHECK-NEXT:    vsetivli zero, 2, e16, mf2, tu, ma
-; CHECK-NEXT:    vslideup.vi v8, v9, 1
-; CHECK-NEXT:    addi a0, a0, 2
-; CHECK-NEXT:    andi a2, a1, 4
-; CHECK-NEXT:    beqz a2, .LBB6_3
-; CHECK-NEXT:  .LBB6_7: # %cond.load5
-; CHECK-NEXT:    lh a2, 0(a0)
-; CHECK-NEXT:    vsetivli zero, 3, e16, mf2, tu, ma
-; CHECK-NEXT:    vmv.s.x v9, a2
-; CHECK-NEXT:    vslideup.vi v8, v9, 2
-; CHECK-NEXT:    addi a0, a0, 2
-; CHECK-NEXT:    andi a1, a1, 8
-; CHECK-NEXT:    beqz a1, .LBB6_4
-; CHECK-NEXT:  .LBB6_8: # %cond.load9
-; CHECK-NEXT:    lh a0, 0(a0)
-; CHECK-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
-; CHECK-NEXT:    vmv.s.x v9, a0
-; CHECK-NEXT:    vslideup.vi v8, v9, 3
+; CHECK-NEXT:    vsetivli zero, 4, e8, mf4, ta, ma
+; CHECK-NEXT:    vcpop.m a1, v0
+; CHECK-NEXT:    vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT:    vle16.v v9, (a0)
+; CHECK-NEXT:    vsetivli zero, 4, e16, mf2, ta, mu
+; CHECK-NEXT:    viota.m v10, v0
+; CHECK-NEXT:    vrgather.vv v8, v9, v10, v0.t
----------------
wangpc-pp wrote:

Thanks for the discussion! I think we can leave a TODO here to investigate which approach is better (a scalar sketch of each sequence follows the two options below):
1. `vcpop.m+vleN.v+viota.m+vrgather.vv`
```asm
vsetivli zero, 16, e8, m1, ta, ma
vcpop.m a1, v0
vsetvli zero, a1, e32, m4, ta, ma
vle32.v v12, (a0)
vsetivli zero, 16, e32, m4, ta, mu
viota.m v16, v0
vrgather.vv v8, v12, v16, v0.t
ret
```
It has:
* A consecutive unit-stride load, which is fast.
* Higher register pressure: we need to keep both v12 and v16 live in the example above.
* More vtype toggles (2).
* vrgather.vv may be slow at large LMUL.
2. `viota.m+vsll.vi(optional)+vluxeiN.v`
```asm
vsetivli zero, 16, e32, m4, ta, ma
viota.m v12, v0
vsll.vi v12, v12, 2, v0.t
vsetvli zero, zero, e32, m4, ta, mu
vluxei32.v v8, (a0), v12, v0.t
ret
```
It has:
* An indexed load, which is slower.
* Lower register pressure: only v12 is used in the example above.
* Fewer instructions (2 or 3) and fewer vtype toggles (only 1).
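
For reference, here is a scalar C sketch of what the two sequences above compute for the 16 x i32 case. The function names are made up and this only illustrates the element-wise semantics, not the actual codegen; in both variants the inactive lanes keep the pass-thru value, matching the mask-undisturbed policy of the masked instructions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Strategy 1 (vcpop.m + vle32.v + viota.m + vrgather.vv): count the active
 * lanes, load that many consecutive elements, then scatter them into the
 * active lanes using the exclusive prefix sum of the mask (what viota.m
 * computes). */
static void expandload_via_gather(uint32_t dst[16], const uint32_t *base,
                                  const bool mask[16]) {
  uint32_t tmp[16];
  unsigned vl = 0;                  /* vcpop.m a1, v0 */
  for (unsigned i = 0; i < 16; ++i)
    vl += mask[i];
  for (unsigned i = 0; i < vl; ++i) /* vle32.v v12, (a0) with vl = popcount */
    tmp[i] = base[i];
  unsigned iota = 0;                /* viota.m v16, v0 */
  for (unsigned i = 0; i < 16; ++i) {
    if (mask[i])                    /* vrgather.vv v8, v12, v16, v0.t */
      dst[i] = tmp[iota];
    iota += mask[i];
  }
}

/* Strategy 2 (viota.m + vsll.vi + vluxei32.v): turn the same prefix sum into
 * per-lane byte offsets and load the active lanes directly with a masked
 * indexed load. */
static void expandload_via_indexed_load(uint32_t dst[16], const char *base,
                                        const bool mask[16]) {
  unsigned iota = 0;                /* viota.m v12, v0 */
  for (unsigned i = 0; i < 16; ++i) {
    unsigned off = iota << 2;       /* vsll.vi v12, v12, 2, v0.t */
    if (mask[i])                    /* vluxei32.v v8, (a0), v12, v0.t */
      dst[i] = *(const uint32_t *)(base + off);
    iota += mask[i];
  }
}
```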

Currently I prefer the second approach, but I don't have performance data to support this.

https://github.com/llvm/llvm-project/pull/101954

