[llvm] [RISCV] Remove vmv.s.x and vmv.x.s lmul pseudo variants (PR #71501)

Luke Lau via llvm-commits llvm-commits at lists.llvm.org
Wed Nov 15 03:27:33 PST 2023


================
@@ -1921,24 +1921,28 @@ define void @mscatter_v8i32(<8 x i32> %val, <8 x ptr> %ptrs, <8 x i1> %m) {
 ; RV64ZVE32F-NEXT:  .LBB28_13: # %cond.store7
 ; RV64ZVE32F-NEXT:    vsetivli zero, 1, e32, m2, ta, ma
 ; RV64ZVE32F-NEXT:    vslidedown.vi v10, v8, 4
+; RV64ZVE32F-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
----------------
lukel97 wrote:

I could add a check in here like:
```c++
SDValue Src = Val.getOperand(0);
if (Src.getOpcode() == ISD::EXTRACT_SUBVECTOR &&
    isNullConstant(Src.getOperand(1)))
  Src = Src.getOperand(0);
```
but then we actually end up increasing LMUL in some cases, e.g.:

```diff
 define void @store_vfmv_f_s_nxv8f64(<vscale x 8 x double>* %x, double* %p) {
 ; CHECK-LABEL: store_vfmv_f_s_nxv8f64:
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    vl8re64.v v8, (a0)
-; CHECK-NEXT:    vsetivli zero, 1, e64, m1, ta, ma
+; CHECK-NEXT:    vsetivli zero, 1, e64, m8, ta, ma
 ; CHECK-NEXT:    vse64.v v8, (a1)
 ; CHECK-NEXT:    ret
   %a = load <vscale x 8 x double>, <vscale x 8 x double>* %x
   %b = call double @llvm.riscv.vfmv.f.s.nxv8f64(<vscale x 8 x double> %a)
   store double %b, double* %p
   ret void
 }
```

The extra vsetvli in this diff actually decreases LMUL, though: does a lower LMUL have any effect on the performance of a vse32.v if VL is the same?
cc @preames @topperc

https://github.com/llvm/llvm-project/pull/71501