[llvm] [RISCV] Use vsetvli instead of vlenb in Prologue/Epilogue (PR #113756)
Kito Cheng via llvm-commits
llvm-commits at lists.llvm.org
Sat Oct 26 00:41:25 PDT 2024
https://github.com/kito-cheng created https://github.com/llvm/llvm-project/pull/113756
Currently, we use `csrr` with `vlenb` to obtain `VLENB` (the vector register length in bytes), but this is not the only option. We can also use `vsetvli` with `e8`/`m1` to get `VLMAX`, which is equal to `VLENB`. This method is preferable on some microarchitectures and makes it easier to obtain values like `VLENB * 2`, `VLENB * 4`, or `VLENB * 8` by selecting `m2`, `m4`, or `m8`, reducing the number of instructions needed to calculate these multiples.
However, this approach is *NOT* always interchangeable, as it changes the state of `VTYPE` and `VL`, which can alter the behavior of vector instructions and potentially cause incorrect code generation if applied after vsetvli insertion. Therefore, we limit its use to the prologue/epilogue for now, since those sequences contain no vector operations.
With further analysis, we may extend this approach beyond the prologue/epilogue in the future, but starting here should be a good first step.
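For a concrete picture of the effect, here is a minimal sketch distilled from the updated tests below (e.g. `allocate-lmul-2-4-8.ll`; the scratch register choice is incidental). A prologue that reserves `VLENB * 2` bytes of stack changes as follows:

```asm
# Before: read vlenb, scale it by 2, then adjust sp.
csrr    a0, vlenb
slli    a0, a0, 1
sub     sp, sp, a0

# After: vsetvli with e8/m2 writes VLMAX = VLENB * 2 into a0 directly
# (clobbering VL/VTYPE, which is safe inside the prologue/epilogue).
vsetvli a0, zero, e8, m2, ta, ma
sub     sp, sp, a0
```

For multiples other than 1, 2, 4, or 8, the prologue/epilogue path still reads the base value with a `vsetvli` at `e8`/`m1` and then scales it, as the `adjustReg` change below shows.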
From ad8b6025550438f8238645808147402cfef2e185 Mon Sep 17 00:00:00 2001
From: Kito Cheng <kito.cheng at sifive.com>
Date: Sat, 26 Oct 2024 12:06:43 +0800
Subject: [PATCH] [RISCV] Use vsetvli instead of vlenb in Prologue/Epilogue
Currently, we use `csrr` with `vlenb` to obtain `VLENB` (the vector register
length in bytes), but this is not the only option. We can also use `vsetvli`
with `e8`/`m1` to get `VLMAX`, which is equal to `VLENB`. This method is
preferable on some microarchitectures and makes it easier to obtain values
like `VLENB * 2`, `VLENB * 4`, or `VLENB * 8` by selecting `m2`, `m4`, or
`m8`, reducing the number of instructions needed to calculate these multiples.
However, this approach is *NOT* always interchangeable, as it changes the
state of `VTYPE` and `VL`, which can alter the behavior of vector
instructions and potentially cause incorrect code generation if applied
after vsetvli insertion. Therefore, we limit its use to the
prologue/epilogue for now, since those sequences contain no vector
operations.
With further analysis, we may extend this approach beyond the prologue/epilogue
in the future, but starting here should be a good first step.
---
.../Target/RISCV/RISCVExpandPseudoInsts.cpp | 37 +++
llvm/lib/Target/RISCV/RISCVFrameLowering.cpp | 28 +-
.../Target/RISCV/RISCVInstrInfoVPseudos.td | 5 +
llvm/lib/Target/RISCV/RISCVRegisterInfo.cpp | 52 +++-
llvm/lib/Target/RISCV/RISCVRegisterInfo.h | 4 +-
.../RISCV/calling-conv-vector-on-stack.ll | 3 +-
.../early-clobber-tied-def-subreg-liveness.ll | 4 +-
.../RISCV/intrinsic-cttz-elts-vscale.ll | 6 +-
llvm/test/CodeGen/RISCV/pr69586.ll | 4 +-
...regalloc-last-chance-recoloring-failure.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv-cfi-info.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/abs-vp.ll | 4 +-
.../RISCV/rvv/access-fixed-objects-by-rvv.ll | 4 +-
.../RISCV/rvv/addi-scalable-offset.mir | 2 +-
.../rvv/alloca-load-store-scalable-array.ll | 4 +-
.../rvv/alloca-load-store-scalable-struct.ll | 6 +-
.../rvv/alloca-load-store-vector-tuple.ll | 18 +-
.../CodeGen/RISCV/rvv/allocate-lmul-2-4-8.ll | 229 ++++----------
.../CodeGen/RISCV/rvv/bitreverse-sdnode.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll | 36 +--
llvm/test/CodeGen/RISCV/rvv/bswap-sdnode.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll | 36 +--
.../CodeGen/RISCV/rvv/callee-saved-regs.ll | 8 +-
.../CodeGen/RISCV/rvv/calling-conv-fastcc.ll | 26 +-
llvm/test/CodeGen/RISCV/rvv/calling-conv.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll | 30 +-
llvm/test/CodeGen/RISCV/rvv/compressstore.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll | 12 +-
llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll | 16 +-
.../test/CodeGen/RISCV/rvv/emergency-slot.mir | 2 +-
llvm/test/CodeGen/RISCV/rvv/extractelt-fp.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/extractelt-i1.ll | 4 +-
.../CodeGen/RISCV/rvv/extractelt-int-rv32.ll | 4 +-
.../CodeGen/RISCV/rvv/extractelt-int-rv64.ll | 4 +-
.../RISCV/rvv/fixed-vectors-bitreverse-vp.ll | 36 +--
.../RISCV/rvv/fixed-vectors-bswap-vp.ll | 36 +--
.../RISCV/rvv/fixed-vectors-ceil-vp.ll | 6 +-
.../RISCV/rvv/fixed-vectors-ctlz-vp.ll | 40 +--
.../RISCV/rvv/fixed-vectors-ctpop-vp.ll | 20 +-
.../RISCV/rvv/fixed-vectors-cttz-vp.ll | 40 +--
.../RISCV/rvv/fixed-vectors-floor-vp.ll | 6 +-
.../RISCV/rvv/fixed-vectors-fmaximum-vp.ll | 14 +-
.../RISCV/rvv/fixed-vectors-fminimum-vp.ll | 14 +-
.../rvv/fixed-vectors-fp-buildvec-bf16.ll | 8 +-
.../RISCV/rvv/fixed-vectors-fp-buildvec.ll | 8 +-
.../RISCV/rvv/fixed-vectors-fp-interleave.ll | 6 +-
.../RISCV/rvv/fixed-vectors-fshr-fshl-vp.ll | 12 +-
.../rvv/fixed-vectors-insert-subvector.ll | 4 +-
.../RISCV/rvv/fixed-vectors-int-interleave.ll | 6 +-
.../rvv/fixed-vectors-interleaved-access.ll | 8 +-
.../CodeGen/RISCV/rvv/fixed-vectors-llrint.ll | 35 +--
.../rvv/fixed-vectors-masked-store-fp.ll | 16 +-
.../rvv/fixed-vectors-masked-store-int.ll | 16 +-
.../RISCV/rvv/fixed-vectors-reduction-fp.ll | 40 +--
.../RISCV/rvv/fixed-vectors-reduction-int.ll | 16 +-
.../RISCV/rvv/fixed-vectors-rint-vp.ll | 6 +-
.../RISCV/rvv/fixed-vectors-round-vp.ll | 6 +-
.../RISCV/rvv/fixed-vectors-roundeven-vp.ll | 6 +-
.../RISCV/rvv/fixed-vectors-roundtozero-vp.ll | 6 +-
.../RISCV/rvv/fixed-vectors-setcc-fp-vp.ll | 8 +-
.../RISCV/rvv/fixed-vectors-setcc-int-vp.ll | 8 +-
.../RISCV/rvv/fixed-vectors-trunc-vp.ll | 4 +-
.../RISCV/rvv/fixed-vectors-vcopysign-vp.ll | 4 +-
.../RISCV/rvv/fixed-vectors-vfma-vp.ll | 8 +-
.../RISCV/rvv/fixed-vectors-vfmax-vp.ll | 4 +-
.../RISCV/rvv/fixed-vectors-vfmin-vp.ll | 4 +-
.../RISCV/rvv/fixed-vectors-vfmuladd-vp.ll | 8 +-
.../CodeGen/RISCV/rvv/fixed-vectors-vfwadd.ll | 8 +-
.../CodeGen/RISCV/rvv/fixed-vectors-vfwmul.ll | 8 +-
.../CodeGen/RISCV/rvv/fixed-vectors-vfwsub.ll | 8 +-
.../RISCV/rvv/fixed-vectors-vpmerge.ll | 4 +-
.../RISCV/rvv/fixed-vectors-vpscatter.ll | 20 +-
.../RISCV/rvv/fixed-vectors-vscale-range.ll | 4 +-
.../RISCV/rvv/fixed-vectors-vselect-vp.ll | 24 +-
.../CodeGen/RISCV/rvv/fixed-vectors-vwadd.ll | 12 +-
.../CodeGen/RISCV/rvv/fixed-vectors-vwaddu.ll | 12 +-
.../CodeGen/RISCV/rvv/fixed-vectors-vwmul.ll | 12 +-
.../RISCV/rvv/fixed-vectors-vwmulsu.ll | 12 +-
.../CodeGen/RISCV/rvv/fixed-vectors-vwmulu.ll | 12 +-
.../CodeGen/RISCV/rvv/fixed-vectors-vwsub.ll | 12 +-
.../CodeGen/RISCV/rvv/fixed-vectors-vwsubu.ll | 12 +-
llvm/test/CodeGen/RISCV/rvv/floor-vp.ll | 30 +-
.../test/CodeGen/RISCV/rvv/fmaximum-sdnode.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll | 60 ++--
.../test/CodeGen/RISCV/rvv/fminimum-sdnode.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll | 60 ++--
.../CodeGen/RISCV/rvv/fnearbyint-sdnode.ll | 12 +-
.../CodeGen/RISCV/rvv/fpclamptosat_vec.ll | 108 +++----
llvm/test/CodeGen/RISCV/rvv/frm-insert.ll | 40 +--
llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll | 68 ++---
.../CodeGen/RISCV/rvv/get-vlen-debugloc.mir | 6 +-
.../RISCV/rvv/large-rvv-stack-size.mir | 2 +-
llvm/test/CodeGen/RISCV/rvv/localvar.ll | 41 +--
llvm/test/CodeGen/RISCV/rvv/memory-args.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll | 6 +-
.../test/CodeGen/RISCV/rvv/mscatter-sdnode.ll | 4 +-
.../RISCV/rvv/named-vector-shuffle-reverse.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll | 24 +-
.../CodeGen/RISCV/rvv/no-reserved-frame.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/pr88576.ll | 2 +-
.../CodeGen/RISCV/rvv/reg-alloc-reserve-bp.ll | 3 +-
llvm/test/CodeGen/RISCV/rvv/remat.ll | 54 ++--
llvm/test/CodeGen/RISCV/rvv/rint-vp.ll | 30 +-
llvm/test/CodeGen/RISCV/rvv/round-vp.ll | 30 +-
llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll | 30 +-
llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll | 30 +-
.../RISCV/rvv/rv32-spill-vector-csr.ll | 12 +-
.../CodeGen/RISCV/rvv/rv32-spill-vector.ll | 52 ++--
.../CodeGen/RISCV/rvv/rv32-spill-zvlsseg.ll | 54 ++--
.../RISCV/rvv/rv64-spill-vector-csr.ll | 12 +-
.../CodeGen/RISCV/rvv/rv64-spill-vector.ll | 44 +--
.../CodeGen/RISCV/rvv/rv64-spill-zvlsseg.ll | 54 ++--
.../test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll | 2 +-
.../test/CodeGen/RISCV/rvv/rvv-framelayout.ll | 9 +-
.../CodeGen/RISCV/rvv/rvv-out-arguments.ll | 6 +-
.../CodeGen/RISCV/rvv/rvv-stack-align.mir | 30 +-
.../CodeGen/RISCV/rvv/scalar-stack-align.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/setcc-fp-vp.ll | 12 +-
llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/stack-folding.ll | 48 +--
.../test/CodeGen/RISCV/rvv/strided-vpstore.ll | 6 +-
.../RISCV/rvv/vector-deinterleave-load.ll | 4 +-
.../CodeGen/RISCV/rvv/vector-deinterleave.ll | 14 +-
.../RISCV/rvv/vector-interleave-store.ll | 4 +-
.../CodeGen/RISCV/rvv/vector-interleave.ll | 24 +-
llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll | 40 +--
llvm/test/CodeGen/RISCV/rvv/vfdiv-vp.ll | 40 +--
llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll | 284 ++++++++----------
.../RISCV/rvv/vfmadd-constrained-sdnode.ll | 40 +--
llvm/test/CodeGen/RISCV/rvv/vfmadd-sdnode.ll | 36 +--
llvm/test/CodeGen/RISCV/rvv/vfmax-vp.ll | 24 +-
llvm/test/CodeGen/RISCV/rvv/vfmin-vp.ll | 24 +-
.../RISCV/rvv/vfmsub-constrained-sdnode.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/vfmul-vp.ll | 20 +-
llvm/test/CodeGen/RISCV/rvv/vfmuladd-vp.ll | 8 +-
.../RISCV/rvv/vfnmadd-constrained-sdnode.ll | 8 +-
.../RISCV/rvv/vfnmsub-constrained-sdnode.ll | 8 +-
llvm/test/CodeGen/RISCV/rvv/vfptosi-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vfptoui-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vfptrunc-vp.ll | 10 +-
llvm/test/CodeGen/RISCV/rvv/vfsub-vp.ll | 40 +--
llvm/test/CodeGen/RISCV/rvv/vfwmacc-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vfwnmacc-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vfwnmsac-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vp-reverse-int.ll | 2 +-
llvm/test/CodeGen/RISCV/rvv/vpmerge-sdnode.ll | 6 +-
.../CodeGen/RISCV/rvv/vpscatter-sdnode.ll | 14 +-
llvm/test/CodeGen/RISCV/rvv/vpstore.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll | 12 +-
.../RISCV/rvv/vsetvli-insert-crossbb.ll | 12 +-
llvm/test/CodeGen/RISCV/rvv/vsitofp-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vtrunc-vp.ll | 4 +-
llvm/test/CodeGen/RISCV/rvv/vuitofp-vp.ll | 6 +-
llvm/test/CodeGen/RISCV/rvv/vxrm-insert.ll | 8 +-
.../rvv/wrong-stack-offset-for-rvv-object.mir | 2 +-
.../RISCV/rvv/wrong-stack-slot-rv32.mir | 9 +-
.../RISCV/rvv/wrong-stack-slot-rv64.mir | 6 +-
llvm/test/CodeGen/RISCV/rvv/zvlsseg-spill.mir | 6 +-
.../CodeGen/RISCV/srem-seteq-illegal-types.ll | 6 +-
159 files changed, 1287 insertions(+), 1802 deletions(-)
diff --git a/llvm/lib/Target/RISCV/RISCVExpandPseudoInsts.cpp b/llvm/lib/Target/RISCV/RISCVExpandPseudoInsts.cpp
index 5dcec078856ead..299537e5047d2b 100644
--- a/llvm/lib/Target/RISCV/RISCVExpandPseudoInsts.cpp
+++ b/llvm/lib/Target/RISCV/RISCVExpandPseudoInsts.cpp
@@ -56,6 +56,8 @@ class RISCVExpandPseudo : public MachineFunctionPass {
MachineBasicBlock::iterator MBBI);
bool expandRV32ZdinxLoad(MachineBasicBlock &MBB,
MachineBasicBlock::iterator MBBI);
+ bool expandPseudoReadMulVLENB(MachineBasicBlock &MBB,
+ MachineBasicBlock::iterator MBBI);
#ifndef NDEBUG
unsigned getInstSizeInBytes(const MachineFunction &MF) const {
unsigned Size = 0;
@@ -164,6 +166,8 @@ bool RISCVExpandPseudo::expandMI(MachineBasicBlock &MBB,
case RISCV::PseudoVMSET_M_B64:
// vmset.m vd => vmxnor.mm vd, vd, vd
return expandVMSET_VMCLR(MBB, MBBI, RISCV::VMXNOR_MM);
+ case RISCV::PseudoReadMulVLENB:
+ return expandPseudoReadMulVLENB(MBB, MBBI);
}
return false;
@@ -410,6 +414,39 @@ bool RISCVExpandPseudo::expandRV32ZdinxLoad(MachineBasicBlock &MBB,
return true;
}
+bool RISCVExpandPseudo::expandPseudoReadMulVLENB(
+ MachineBasicBlock &MBB, MachineBasicBlock::iterator MBBI) {
+ DebugLoc DL = MBBI->getDebugLoc();
+ Register Dst = MBBI->getOperand(0).getReg();
+ unsigned Mul = MBBI->getOperand(1).getImm();
+ RISCVII::VLMUL VLMUL = RISCVII::VLMUL::LMUL_1;
+ switch (Mul) {
+ case 1:
+ VLMUL = RISCVII::VLMUL::LMUL_1;
+ break;
+ case 2:
+ VLMUL = RISCVII::VLMUL::LMUL_2;
+ break;
+ case 4:
+ VLMUL = RISCVII::VLMUL::LMUL_4;
+ break;
+ case 8:
+ VLMUL = RISCVII::VLMUL::LMUL_8;
+ break;
+ default:
+ llvm_unreachable("Unexpected VLENB value");
+ }
+ unsigned VTypeImm = RISCVVType::encodeVTYPE(
+ VLMUL, /*SEW*/ 8, /*TailAgnostic*/ true, /*MaskAgnostic*/ true);
+
+ BuildMI(MBB, MBBI, DL, TII->get(RISCV::VSETVLI), Dst)
+ .addReg(RISCV::X0)
+ .addImm(VTypeImm);
+
+ MBBI->eraseFromParent();
+ return true;
+}
+
class RISCVPreRAExpandPseudo : public MachineFunctionPass {
public:
const RISCVSubtarget *STI;
diff --git a/llvm/lib/Target/RISCV/RISCVFrameLowering.cpp b/llvm/lib/Target/RISCV/RISCVFrameLowering.cpp
index b49cbab1876d79..b76b8e1df9996e 100644
--- a/llvm/lib/Target/RISCV/RISCVFrameLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVFrameLowering.cpp
@@ -436,8 +436,8 @@ void RISCVFrameLowering::adjustStackForRVV(MachineFunction &MF,
const RISCVRegisterInfo &RI = *STI.getRegisterInfo();
// We must keep the stack pointer aligned through any intermediate
// updates.
- RI.adjustReg(MBB, MBBI, DL, SPReg, SPReg, Offset,
- Flag, getStackAlign());
+ RI.adjustReg(MBB, MBBI, DL, SPReg, SPReg, Offset, Flag, getStackAlign(),
+ /*IsPrologueOrEpilogue*/ true);
}
static void appendScalableVectorExpression(const TargetRegisterInfo &TRI,
@@ -621,7 +621,7 @@ void RISCVFrameLowering::emitPrologue(MachineFunction &MF,
// Allocate space on the stack if necessary.
RI->adjustReg(MBB, MBBI, DL, SPReg, SPReg,
StackOffset::getFixed(-StackSize), MachineInstr::FrameSetup,
- getStackAlign());
+ getStackAlign(), /*IsPrologueOrEpilogue*/ true);
}
// Emit ".cfi_def_cfa_offset RealStackSize"
@@ -666,9 +666,11 @@ void RISCVFrameLowering::emitPrologue(MachineFunction &MF,
// The frame pointer does need to be reserved from register allocation.
assert(MF.getRegInfo().isReserved(FPReg) && "FP not reserved");
- RI->adjustReg(MBB, MBBI, DL, FPReg, SPReg,
- StackOffset::getFixed(RealStackSize - RVFI->getVarArgsSaveSize()),
- MachineInstr::FrameSetup, getStackAlign());
+ RI->adjustReg(
+ MBB, MBBI, DL, FPReg, SPReg,
+ StackOffset::getFixed(RealStackSize - RVFI->getVarArgsSaveSize()),
+ MachineInstr::FrameSetup, getStackAlign(),
+ /*IsPrologueOrEpilogue*/ true);
// Emit ".cfi_def_cfa $fp, RVFI->getVarArgsSaveSize()"
unsigned CFIIndex = MF.addFrameInst(MCCFIInstruction::cfiDefCfa(
@@ -686,7 +688,8 @@ void RISCVFrameLowering::emitPrologue(MachineFunction &MF,
"SecondSPAdjustAmount should be greater than zero");
RI->adjustReg(MBB, MBBI, DL, SPReg, SPReg,
StackOffset::getFixed(-SecondSPAdjustAmount),
- MachineInstr::FrameSetup, getStackAlign());
+ MachineInstr::FrameSetup, getStackAlign(),
+ /*IsPrologueOrEpilogue*/ true);
// If we are using a frame-pointer, and thus emitted ".cfi_def_cfa fp, 0",
// don't emit an sp-based .cfi_def_cfa_offset
@@ -765,7 +768,8 @@ void RISCVFrameLowering::deallocateStack(MachineFunction &MF,
Register SPReg = getSPReg(STI);
RI->adjustReg(MBB, MBBI, DL, SPReg, SPReg, StackOffset::getFixed(StackSize),
- MachineInstr::FrameDestroy, getStackAlign());
+ MachineInstr::FrameDestroy, getStackAlign(),
+ /*IsPrologueOrEpilogue*/ true);
}
void RISCVFrameLowering::emitEpilogue(MachineFunction &MF,
@@ -839,7 +843,8 @@ void RISCVFrameLowering::emitEpilogue(MachineFunction &MF,
if (!RestoreFP)
RI->adjustReg(MBB, LastFrameDestroy, DL, SPReg, SPReg,
StackOffset::getFixed(SecondSPAdjustAmount),
- MachineInstr::FrameDestroy, getStackAlign());
+ MachineInstr::FrameDestroy, getStackAlign(),
+ /*IsPrologueOrEpilogue*/ true);
}
// Restore the stack pointer using the value of the frame pointer. Only
@@ -857,7 +862,7 @@ void RISCVFrameLowering::emitEpilogue(MachineFunction &MF,
RI->adjustReg(MBB, LastFrameDestroy, DL, SPReg, FPReg,
StackOffset::getFixed(-FPOffset), MachineInstr::FrameDestroy,
- getStackAlign());
+ getStackAlign(), /*IsPrologueOrEpilogue*/ true);
}
bool ApplyPop = RVFI->isPushable(MF) && MBBI != MBB.end() &&
@@ -1348,7 +1353,8 @@ MachineBasicBlock::iterator RISCVFrameLowering::eliminateCallFramePseudoInstr(
const RISCVRegisterInfo &RI = *STI.getRegisterInfo();
RI.adjustReg(MBB, MI, DL, SPReg, SPReg, StackOffset::getFixed(Amount),
- MachineInstr::NoFlags, getStackAlign());
+ MachineInstr::NoFlags, getStackAlign(),
+ /*IsPrologueOrEpilogue*/ true);
}
}
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td b/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
index 6b308bc8c9aa0f..fa8cde09be696b 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
@@ -6096,6 +6096,11 @@ let hasSideEffects = 0, mayLoad = 0, mayStore = 0, isCodeGenOnly = 1 in {
[(set GPR:$rd, (riscv_read_vlenb))]>,
PseudoInstExpansion<(CSRRS GPR:$rd, SysRegVLENB.Encoding, X0)>,
Sched<[WriteRdVLENB]>;
+ let Defs = [VL, VTYPE] in {
+ def PseudoReadMulVLENB : Pseudo<(outs GPR:$rd), (ins uimm5:$shamt),
+ []>,
+ Sched<[WriteVSETVLI, ReadVSETVLI]>;
+ }
}
let hasSideEffects = 0, mayLoad = 0, mayStore = 0, isCodeGenOnly = 1,
diff --git a/llvm/lib/Target/RISCV/RISCVRegisterInfo.cpp b/llvm/lib/Target/RISCV/RISCVRegisterInfo.cpp
index 26195ef721db39..b37899b148c283 100644
--- a/llvm/lib/Target/RISCV/RISCVRegisterInfo.cpp
+++ b/llvm/lib/Target/RISCV/RISCVRegisterInfo.cpp
@@ -175,7 +175,8 @@ void RISCVRegisterInfo::adjustReg(MachineBasicBlock &MBB,
const DebugLoc &DL, Register DestReg,
Register SrcReg, StackOffset Offset,
MachineInstr::MIFlag Flag,
- MaybeAlign RequiredAlign) const {
+ MaybeAlign RequiredAlign,
+ bool IsPrologueOrEpilogue) const {
if (DestReg == SrcReg && !Offset.getFixed() && !Offset.getScalable())
return;
@@ -205,21 +206,43 @@ void RISCVRegisterInfo::adjustReg(MachineBasicBlock &MBB,
assert(isInt<32>(ScalableValue / (RISCV::RVVBitsPerBlock / 8)) &&
"Expect the number of vector registers within 32-bits.");
uint32_t NumOfVReg = ScalableValue / (RISCV::RVVBitsPerBlock / 8);
- BuildMI(MBB, II, DL, TII->get(RISCV::PseudoReadVLENB), ScratchReg)
- .setMIFlag(Flag);
-
- if (ScalableAdjOpc == RISCV::ADD && ST.hasStdExtZba() &&
- (NumOfVReg == 2 || NumOfVReg == 4 || NumOfVReg == 8)) {
- unsigned Opc = NumOfVReg == 2 ? RISCV::SH1ADD :
- (NumOfVReg == 4 ? RISCV::SH2ADD : RISCV::SH3ADD);
- BuildMI(MBB, II, DL, TII->get(Opc), DestReg)
- .addReg(ScratchReg, RegState::Kill).addReg(SrcReg)
+ // Only use vsetvli rather than vlenb if adjusting in the prologue or
+ // epilogue, otherwise it may disturb the VTYPE and VL status.
+ bool UseVsetvliRatherThanVlenb = IsPrologueOrEpilogue;
+ if (UseVsetvliRatherThanVlenb && (NumOfVReg == 1 || NumOfVReg == 2 ||
+ NumOfVReg == 4 || NumOfVReg == 8)) {
+ BuildMI(MBB, II, DL, TII->get(RISCV::PseudoReadMulVLENB), ScratchReg)
+ .addImm(NumOfVReg)
.setMIFlag(Flag);
- } else {
- TII->mulImm(MF, MBB, II, DL, ScratchReg, NumOfVReg, Flag);
BuildMI(MBB, II, DL, TII->get(ScalableAdjOpc), DestReg)
- .addReg(SrcReg).addReg(ScratchReg, RegState::Kill)
+ .addReg(SrcReg)
+ .addReg(ScratchReg, RegState::Kill)
.setMIFlag(Flag);
+ } else {
+ if (UseVsetvliRatherThanVlenb)
+ BuildMI(MBB, II, DL, TII->get(RISCV::PseudoReadMulVLENB), ScratchReg)
+ .addImm(1)
+ .setMIFlag(Flag);
+ else
+ BuildMI(MBB, II, DL, TII->get(RISCV::PseudoReadVLENB), ScratchReg)
+ .setMIFlag(Flag);
+
+ if (ScalableAdjOpc == RISCV::ADD && ST.hasStdExtZba() &&
+ (NumOfVReg == 2 || NumOfVReg == 4 || NumOfVReg == 8)) {
+ unsigned Opc = NumOfVReg == 2
+ ? RISCV::SH1ADD
+ : (NumOfVReg == 4 ? RISCV::SH2ADD : RISCV::SH3ADD);
+ BuildMI(MBB, II, DL, TII->get(Opc), DestReg)
+ .addReg(ScratchReg, RegState::Kill)
+ .addReg(SrcReg)
+ .setMIFlag(Flag);
+ } else {
+ TII->mulImm(MF, MBB, II, DL, ScratchReg, NumOfVReg, Flag);
+ BuildMI(MBB, II, DL, TII->get(ScalableAdjOpc), DestReg)
+ .addReg(SrcReg)
+ .addReg(ScratchReg, RegState::Kill)
+ .setMIFlag(Flag);
+ }
}
SrcReg = DestReg;
KillSrcReg = true;
@@ -526,7 +549,8 @@ bool RISCVRegisterInfo::eliminateFrameIndex(MachineBasicBlock::iterator II,
else
DestReg = MRI.createVirtualRegister(&RISCV::GPRRegClass);
adjustReg(*II->getParent(), II, DL, DestReg, FrameReg, Offset,
- MachineInstr::NoFlags, std::nullopt);
+ MachineInstr::NoFlags, std::nullopt,
+ /*IsPrologueOrEpilogue*/ false);
MI.getOperand(FIOperandNum).ChangeToRegister(DestReg, /*IsDef*/false,
/*IsImp*/false,
/*IsKill*/true);
diff --git a/llvm/lib/Target/RISCV/RISCVRegisterInfo.h b/llvm/lib/Target/RISCV/RISCVRegisterInfo.h
index 6ddb1eb9c14d5e..b7aa120935747a 100644
--- a/llvm/lib/Target/RISCV/RISCVRegisterInfo.h
+++ b/llvm/lib/Target/RISCV/RISCVRegisterInfo.h
@@ -72,10 +72,12 @@ struct RISCVRegisterInfo : public RISCVGenRegisterInfo {
// used during frame layout, and we may need to ensure that if we
// split the offset internally that the DestReg is always aligned,
// assuming that source reg was.
+ // If IsPrologueOrEpilogue is true, the function is called during prologue
+ // or epilogue generation.
void adjustReg(MachineBasicBlock &MBB, MachineBasicBlock::iterator II,
const DebugLoc &DL, Register DestReg, Register SrcReg,
StackOffset Offset, MachineInstr::MIFlag Flag,
- MaybeAlign RequiredAlign) const;
+ MaybeAlign RequiredAlign, bool IsPrologueOrEpilogue) const;
bool eliminateFrameIndex(MachineBasicBlock::iterator MI, int SPAdj,
unsigned FIOperandNum,
diff --git a/llvm/test/CodeGen/RISCV/calling-conv-vector-on-stack.ll b/llvm/test/CodeGen/RISCV/calling-conv-vector-on-stack.ll
index 70cdb6cec2449f..28a4d9166b1ef9 100644
--- a/llvm/test/CodeGen/RISCV/calling-conv-vector-on-stack.ll
+++ b/llvm/test/CodeGen/RISCV/calling-conv-vector-on-stack.ll
@@ -11,8 +11,7 @@ define void @bar() nounwind {
; CHECK-NEXT: sd s0, 80(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s1, 72(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 96
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -64
; CHECK-NEXT: mv s1, sp
diff --git a/llvm/test/CodeGen/RISCV/early-clobber-tied-def-subreg-liveness.ll b/llvm/test/CodeGen/RISCV/early-clobber-tied-def-subreg-liveness.ll
index 0c2b809c0be20c..3704d0a5e20edb 100644
--- a/llvm/test/CodeGen/RISCV/early-clobber-tied-def-subreg-liveness.ll
+++ b/llvm/test/CodeGen/RISCV/early-clobber-tied-def-subreg-liveness.ll
@@ -16,7 +16,7 @@ define void @_Z3foov() {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a0, 3
; CHECK-NEXT: add a0, a1, a0
; CHECK-NEXT: sub sp, sp, a0
@@ -82,7 +82,7 @@ define void @_Z3foov() {
; CHECK-NEXT: lui a0, %hi(var_47)
; CHECK-NEXT: addi a0, a0, %lo(var_47)
; CHECK-NEXT: vsseg4e16.v v8, (a0)
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a0, 3
; CHECK-NEXT: add a0, a1, a0
; CHECK-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/intrinsic-cttz-elts-vscale.ll b/llvm/test/CodeGen/RISCV/intrinsic-cttz-elts-vscale.ll
index 8116d138d288e2..cc426ce3cad1a1 100644
--- a/llvm/test/CodeGen/RISCV/intrinsic-cttz-elts-vscale.ll
+++ b/llvm/test/CodeGen/RISCV/intrinsic-cttz-elts-vscale.ll
@@ -59,8 +59,7 @@ define i64 @ctz_nxv8i1_no_range(<vscale x 8 x i16> %a) {
; RV32-NEXT: .cfi_def_cfa_offset 48
; RV32-NEXT: sw ra, 44(sp) # 4-byte Folded Spill
; RV32-NEXT: .cfi_offset ra, -4
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 1
+; RV32-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x02, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 2 * vlenb
; RV32-NEXT: addi a0, sp, 32
@@ -97,8 +96,7 @@ define i64 @ctz_nxv8i1_no_range(<vscale x 8 x i16> %a) {
; RV32-NEXT: sub a1, a1, a4
; RV32-NEXT: sub a1, a1, a3
; RV32-NEXT: sub a0, a0, a2
-; RV32-NEXT: csrr a2, vlenb
-; RV32-NEXT: slli a2, a2, 1
+; RV32-NEXT: vsetvli a2, zero, e8, m2, ta, ma
; RV32-NEXT: add sp, sp, a2
; RV32-NEXT: lw ra, 44(sp) # 4-byte Folded Reload
; RV32-NEXT: addi sp, sp, 48
diff --git a/llvm/test/CodeGen/RISCV/pr69586.ll b/llvm/test/CodeGen/RISCV/pr69586.ll
index 7084c04805be72..3a62d8c2980802 100644
--- a/llvm/test/CodeGen/RISCV/pr69586.ll
+++ b/llvm/test/CodeGen/RISCV/pr69586.ll
@@ -35,7 +35,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: .cfi_offset s9, -88
; NOREMAT-NEXT: .cfi_offset s10, -96
; NOREMAT-NEXT: .cfi_offset s11, -104
-; NOREMAT-NEXT: csrr a2, vlenb
+; NOREMAT-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; NOREMAT-NEXT: li a3, 6
; NOREMAT-NEXT: mul a2, a2, a3
; NOREMAT-NEXT: sub sp, sp, a2
@@ -759,7 +759,7 @@ define void @test(ptr %0, ptr %1, i64 %2) {
; NOREMAT-NEXT: vse32.v v8, (a0)
; NOREMAT-NEXT: sf.vc.v.i 2, 0, v8, 0
; NOREMAT-NEXT: sf.vc.v.i 2, 0, v8, 0
-; NOREMAT-NEXT: csrr a0, vlenb
+; NOREMAT-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOREMAT-NEXT: li a1, 6
; NOREMAT-NEXT: mul a0, a0, a1
; NOREMAT-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/regalloc-last-chance-recoloring-failure.ll b/llvm/test/CodeGen/RISCV/regalloc-last-chance-recoloring-failure.ll
index 6a0dbbe356a165..8708f766130c6a 100644
--- a/llvm/test/CodeGen/RISCV/regalloc-last-chance-recoloring-failure.ll
+++ b/llvm/test/CodeGen/RISCV/regalloc-last-chance-recoloring-failure.ll
@@ -19,7 +19,7 @@ define void @last_chance_recoloring_failure() {
; CHECK-NEXT: sd s0, 16(sp) # 8-byte Folded Spill
; CHECK-NEXT: .cfi_offset ra, -8
; CHECK-NEXT: .cfi_offset s0, -16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x20, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 32 + 16 * vlenb
@@ -59,7 +59,7 @@ define void @last_chance_recoloring_failure() {
; CHECK-NEXT: vsetvli zero, zero, e32, m8, tu, mu
; CHECK-NEXT: vfdiv.vv v8, v24, v8, v0.t
; CHECK-NEXT: vse32.v v8, (a0)
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: ld ra, 24(sp) # 8-byte Folded Reload
@@ -75,7 +75,7 @@ define void @last_chance_recoloring_failure() {
; SUBREGLIVENESS-NEXT: sd s0, 16(sp) # 8-byte Folded Spill
; SUBREGLIVENESS-NEXT: .cfi_offset ra, -8
; SUBREGLIVENESS-NEXT: .cfi_offset s0, -16
-; SUBREGLIVENESS-NEXT: csrr a0, vlenb
+; SUBREGLIVENESS-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SUBREGLIVENESS-NEXT: slli a0, a0, 4
; SUBREGLIVENESS-NEXT: sub sp, sp, a0
; SUBREGLIVENESS-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x20, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 32 + 16 * vlenb
@@ -115,7 +115,7 @@ define void @last_chance_recoloring_failure() {
; SUBREGLIVENESS-NEXT: vsetvli zero, zero, e32, m8, tu, mu
; SUBREGLIVENESS-NEXT: vfdiv.vv v8, v24, v8, v0.t
; SUBREGLIVENESS-NEXT: vse32.v v8, (a0)
-; SUBREGLIVENESS-NEXT: csrr a0, vlenb
+; SUBREGLIVENESS-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SUBREGLIVENESS-NEXT: slli a0, a0, 4
; SUBREGLIVENESS-NEXT: add sp, sp, a0
; SUBREGLIVENESS-NEXT: ld ra, 24(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/rvv-cfi-info.ll b/llvm/test/CodeGen/RISCV/rvv-cfi-info.ll
index 225680e846bac7..af88e39f18e195 100644
--- a/llvm/test/CodeGen/RISCV/rvv-cfi-info.ll
+++ b/llvm/test/CodeGen/RISCV/rvv-cfi-info.ll
@@ -9,7 +9,7 @@ define riscv_vector_cc <vscale x 1 x i32> @test_vector_callee_cfi(<vscale x 1 x
; OMIT-FP: # %bb.0: # %entry
; OMIT-FP-NEXT: addi sp, sp, -16
; OMIT-FP-NEXT: .cfi_def_cfa_offset 16
-; OMIT-FP-NEXT: csrr a0, vlenb
+; OMIT-FP-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; OMIT-FP-NEXT: slli a1, a0, 3
; OMIT-FP-NEXT: sub a0, a1, a0
; OMIT-FP-NEXT: sub sp, sp, a0
@@ -49,7 +49,7 @@ define riscv_vector_cc <vscale x 1 x i32> @test_vector_callee_cfi(<vscale x 1 x
; OMIT-FP-NEXT: vl2r.v v2, (a0) # Unknown-size Folded Reload
; OMIT-FP-NEXT: addi a0, sp, 16
; OMIT-FP-NEXT: vl4r.v v4, (a0) # Unknown-size Folded Reload
-; OMIT-FP-NEXT: csrr a0, vlenb
+; OMIT-FP-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; OMIT-FP-NEXT: slli a1, a0, 3
; OMIT-FP-NEXT: sub a0, a1, a0
; OMIT-FP-NEXT: add sp, sp, a0
@@ -66,7 +66,7 @@ define riscv_vector_cc <vscale x 1 x i32> @test_vector_callee_cfi(<vscale x 1 x
; NO-OMIT-FP-NEXT: .cfi_offset s0, -16
; NO-OMIT-FP-NEXT: addi s0, sp, 32
; NO-OMIT-FP-NEXT: .cfi_def_cfa s0, 0
-; NO-OMIT-FP-NEXT: csrr a0, vlenb
+; NO-OMIT-FP-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NO-OMIT-FP-NEXT: slli a1, a0, 3
; NO-OMIT-FP-NEXT: sub a0, a1, a0
; NO-OMIT-FP-NEXT: sub sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/abs-vp.ll b/llvm/test/CodeGen/RISCV/rvv/abs-vp.ll
index cd2208e31eb6d3..3a8100c57b26f7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/abs-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/abs-vp.ll
@@ -563,7 +563,7 @@ define <vscale x 16 x i64> @vp_abs_nxv16i64(<vscale x 16 x i64> %va, <vscale x 1
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -600,7 +600,7 @@ define <vscale x 16 x i64> @vp_abs_nxv16i64(<vscale x 16 x i64> %va, <vscale x 1
; CHECK-NEXT: vmax.vv v8, v8, v16, v0.t
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/access-fixed-objects-by-rvv.ll b/llvm/test/CodeGen/RISCV/rvv/access-fixed-objects-by-rvv.ll
index 8640ac2da53030..5599589ba55984 100644
--- a/llvm/test/CodeGen/RISCV/rvv/access-fixed-objects-by-rvv.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/access-fixed-objects-by-rvv.ll
@@ -32,7 +32,7 @@ define <vscale x 1 x i64> @access_fixed_and_vector_objects(ptr %val) {
; RV64IV: # %bb.0:
; RV64IV-NEXT: addi sp, sp, -528
; RV64IV-NEXT: .cfi_def_cfa_offset 528
-; RV64IV-NEXT: csrr a0, vlenb
+; RV64IV-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64IV-NEXT: sub sp, sp, a0
; RV64IV-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0x90, 0x04, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 528 + 1 * vlenb
; RV64IV-NEXT: addi a0, sp, 8
@@ -42,7 +42,7 @@ define <vscale x 1 x i64> @access_fixed_and_vector_objects(ptr %val) {
; RV64IV-NEXT: ld a0, 520(sp)
; RV64IV-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; RV64IV-NEXT: vadd.vv v8, v8, v9
-; RV64IV-NEXT: csrr a0, vlenb
+; RV64IV-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64IV-NEXT: add sp, sp, a0
; RV64IV-NEXT: addi sp, sp, 528
; RV64IV-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/addi-scalable-offset.mir b/llvm/test/CodeGen/RISCV/rvv/addi-scalable-offset.mir
index 43fb0c10ca46f6..be65fb8937e7f8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/addi-scalable-offset.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/addi-scalable-offset.mir
@@ -37,7 +37,7 @@ body: |
; CHECK-NEXT: $x8 = frame-setup ADDI $x2, 2032
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa $x8, 0
; CHECK-NEXT: $x2 = frame-setup ADDI $x2, -240
- ; CHECK-NEXT: $x12 = frame-setup PseudoReadVLENB
+ ; CHECK-NEXT: $x12 = frame-setup PseudoReadMulVLENB 1, implicit-def $vl, implicit-def $vtype
; CHECK-NEXT: $x2 = frame-setup SUB $x2, killed $x12
; CHECK-NEXT: dead $x0 = PseudoVSETVLI killed renamable $x11, 216 /* e64, m1, ta, ma */, implicit-def $vl, implicit-def $vtype
; CHECK-NEXT: renamable $v8 = PseudoVLE64_V_M1 undef renamable $v8, killed renamable $x10, $noreg, 6 /* e64 */, 0 /* tu, mu */, implicit $vl, implicit $vtype :: (load unknown-size from %ir.pa, align 8)
diff --git a/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-scalable-array.ll b/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-scalable-array.ll
index 2e70c3395090ec..e27a47d2d64a6c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-scalable-array.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-scalable-array.ll
@@ -10,7 +10,7 @@ define void @test(ptr %addr) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrrs a1, vlenb, zero
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a1, 1
; CHECK-NEXT: add a1, a2, a1
; CHECK-NEXT: sub sp, sp, a1
@@ -28,7 +28,7 @@ define void @test(ptr %addr) {
; CHECK-NEXT: vs1r.v v10, (a2)
; CHECK-NEXT: add a0, a0, a1
; CHECK-NEXT: vs1r.v v8, (a0)
-; CHECK-NEXT: csrrs a0, vlenb, zero
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a0, 1
; CHECK-NEXT: add a0, a1, a0
; CHECK-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-scalable-struct.ll b/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-scalable-struct.ll
index a9a680d54d5897..c978c6a099f346 100644
--- a/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-scalable-struct.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-scalable-struct.ll
@@ -11,8 +11,7 @@ define <vscale x 1 x double> @test(ptr %addr, i64 %vl) {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrrs a2, vlenb, zero
-; CHECK-NEXT: slli a2, a2, 1
+; CHECK-NEXT: vsetvli a2, zero, e8, m2, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x02, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 2 * vlenb
; CHECK-NEXT: csrrs a2, vlenb, zero
@@ -27,8 +26,7 @@ define <vscale x 1 x double> @test(ptr %addr, i64 %vl) {
; CHECK-NEXT: vl1re64.v v9, (a0)
; CHECK-NEXT: vsetvli zero, a1, e64, m1, ta, ma
; CHECK-NEXT: vfadd.vv v8, v9, v8
-; CHECK-NEXT: csrrs a0, vlenb, zero
-; CHECK-NEXT: slli a0, a0, 1
+; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: jalr zero, 0(ra)
diff --git a/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-vector-tuple.ll b/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-vector-tuple.ll
index 82a3ac4a74d174..2a4822fa791f97 100644
--- a/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-vector-tuple.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/alloca-load-store-vector-tuple.ll
@@ -9,8 +9,7 @@ define target("riscv.vector.tuple", <vscale x 8 x i8>, 5) @load_store_m1x5(targe
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrrs a0, vlenb, zero
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a0, sp, 16
@@ -31,8 +30,7 @@ define target("riscv.vector.tuple", <vscale x 8 x i8>, 5) @load_store_m1x5(targe
; CHECK-NEXT: vl1re8.v v10, (a3)
; CHECK-NEXT: vl1re8.v v11, (a4)
; CHECK-NEXT: vl1re8.v v12, (a1)
-; CHECK-NEXT: csrrs a0, vlenb, zero
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: jalr zero, 0(ra)
@@ -50,8 +48,7 @@ define target("riscv.vector.tuple", <vscale x 16 x i8>, 2) @load_store_m2x2(targ
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrrs a0, vlenb, zero
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; CHECK-NEXT: addi a0, sp, 16
@@ -64,8 +61,7 @@ define target("riscv.vector.tuple", <vscale x 16 x i8>, 2) @load_store_m2x2(targ
; CHECK-NEXT: #NO_APP
; CHECK-NEXT: vl2re8.v v8, (a0)
; CHECK-NEXT: vl2re8.v v10, (a1)
-; CHECK-NEXT: csrrs a0, vlenb, zero
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: jalr zero, 0(ra)
@@ -83,8 +79,7 @@ define target("riscv.vector.tuple", <vscale x 32 x i8>, 2) @load_store_m4x2(targ
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrrs a0, vlenb, zero
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a0, sp, 16
@@ -97,8 +92,7 @@ define target("riscv.vector.tuple", <vscale x 32 x i8>, 2) @load_store_m4x2(targ
; CHECK-NEXT: #NO_APP
; CHECK-NEXT: vl4re8.v v8, (a0)
; CHECK-NEXT: vl4re8.v v12, (a1)
-; CHECK-NEXT: csrrs a0, vlenb, zero
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: jalr zero, 0(ra)
diff --git a/llvm/test/CodeGen/RISCV/rvv/allocate-lmul-2-4-8.ll b/llvm/test/CodeGen/RISCV/rvv/allocate-lmul-2-4-8.ll
index 35e269b9119025..31f55fbb61b1a3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/allocate-lmul-2-4-8.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/allocate-lmul-2-4-8.ll
@@ -9,9 +9,9 @@
define void @lmul1() nounwind {
; CHECK-LABEL: lmul1:
; CHECK: # %bb.0:
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: sub sp, sp, a0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: ret
%v = alloca <vscale x 1 x i64>
@@ -19,34 +19,13 @@ define void @lmul1() nounwind {
}
define void @lmul2() nounwind {
-; NOZBA-LABEL: lmul2:
-; NOZBA: # %bb.0:
-; NOZBA-NEXT: csrr a0, vlenb
-; NOZBA-NEXT: slli a0, a0, 1
-; NOZBA-NEXT: sub sp, sp, a0
-; NOZBA-NEXT: csrr a0, vlenb
-; NOZBA-NEXT: slli a0, a0, 1
-; NOZBA-NEXT: add sp, sp, a0
-; NOZBA-NEXT: ret
-;
-; ZBA-LABEL: lmul2:
-; ZBA: # %bb.0:
-; ZBA-NEXT: csrr a0, vlenb
-; ZBA-NEXT: slli a0, a0, 1
-; ZBA-NEXT: sub sp, sp, a0
-; ZBA-NEXT: csrr a0, vlenb
-; ZBA-NEXT: sh1add sp, a0, sp
-; ZBA-NEXT: ret
-;
-; NOMUL-LABEL: lmul2:
-; NOMUL: # %bb.0:
-; NOMUL-NEXT: csrr a0, vlenb
-; NOMUL-NEXT: slli a0, a0, 1
-; NOMUL-NEXT: sub sp, sp, a0
-; NOMUL-NEXT: csrr a0, vlenb
-; NOMUL-NEXT: slli a0, a0, 1
-; NOMUL-NEXT: add sp, sp, a0
-; NOMUL-NEXT: ret
+; CHECK-LABEL: lmul2:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
+; CHECK-NEXT: sub sp, sp, a0
+; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
+; CHECK-NEXT: add sp, sp, a0
+; CHECK-NEXT: ret
%v = alloca <vscale x 2 x i64>
ret void
}
@@ -58,8 +37,7 @@ define void @lmul4() nounwind {
; CHECK-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 48
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -32
; CHECK-NEXT: addi sp, s0, -48
@@ -78,8 +56,7 @@ define void @lmul8() nounwind {
; CHECK-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 80
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -64
; CHECK-NEXT: addi sp, s0, -80
@@ -92,34 +69,13 @@ define void @lmul8() nounwind {
}
define void @lmul1_and_2() nounwind {
-; NOZBA-LABEL: lmul1_and_2:
-; NOZBA: # %bb.0:
-; NOZBA-NEXT: csrr a0, vlenb
-; NOZBA-NEXT: slli a0, a0, 2
-; NOZBA-NEXT: sub sp, sp, a0
-; NOZBA-NEXT: csrr a0, vlenb
-; NOZBA-NEXT: slli a0, a0, 2
-; NOZBA-NEXT: add sp, sp, a0
-; NOZBA-NEXT: ret
-;
-; ZBA-LABEL: lmul1_and_2:
-; ZBA: # %bb.0:
-; ZBA-NEXT: csrr a0, vlenb
-; ZBA-NEXT: slli a0, a0, 2
-; ZBA-NEXT: sub sp, sp, a0
-; ZBA-NEXT: csrr a0, vlenb
-; ZBA-NEXT: sh2add sp, a0, sp
-; ZBA-NEXT: ret
-;
-; NOMUL-LABEL: lmul1_and_2:
-; NOMUL: # %bb.0:
-; NOMUL-NEXT: csrr a0, vlenb
-; NOMUL-NEXT: slli a0, a0, 2
-; NOMUL-NEXT: sub sp, sp, a0
-; NOMUL-NEXT: csrr a0, vlenb
-; NOMUL-NEXT: slli a0, a0, 2
-; NOMUL-NEXT: add sp, sp, a0
-; NOMUL-NEXT: ret
+; CHECK-LABEL: lmul1_and_2:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
+; CHECK-NEXT: sub sp, sp, a0
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
+; CHECK-NEXT: add sp, sp, a0
+; CHECK-NEXT: ret
%v1 = alloca <vscale x 1 x i64>
%v2 = alloca <vscale x 2 x i64>
ret void
@@ -132,8 +88,7 @@ define void @lmul2_and_4() nounwind {
; CHECK-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 48
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -32
; CHECK-NEXT: addi sp, s0, -48
@@ -153,8 +108,7 @@ define void @lmul1_and_4() nounwind {
; CHECK-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 48
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -32
; CHECK-NEXT: addi sp, s0, -48
@@ -170,11 +124,11 @@ define void @lmul1_and_4() nounwind {
define void @lmul2_and_1() nounwind {
; NOZBA-LABEL: lmul2_and_1:
; NOZBA: # %bb.0:
-; NOZBA-NEXT: csrr a0, vlenb
+; NOZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOZBA-NEXT: slli a1, a0, 1
; NOZBA-NEXT: add a0, a1, a0
; NOZBA-NEXT: sub sp, sp, a0
-; NOZBA-NEXT: csrr a0, vlenb
+; NOZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOZBA-NEXT: slli a1, a0, 1
; NOZBA-NEXT: add a0, a1, a0
; NOZBA-NEXT: add sp, sp, a0
@@ -182,21 +136,21 @@ define void @lmul2_and_1() nounwind {
;
; ZBA-LABEL: lmul2_and_1:
; ZBA: # %bb.0:
-; ZBA-NEXT: csrr a0, vlenb
+; ZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZBA-NEXT: sh1add a0, a0, a0
; ZBA-NEXT: sub sp, sp, a0
-; ZBA-NEXT: csrr a0, vlenb
+; ZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZBA-NEXT: sh1add a0, a0, a0
; ZBA-NEXT: add sp, sp, a0
; ZBA-NEXT: ret
;
; NOMUL-LABEL: lmul2_and_1:
; NOMUL: # %bb.0:
-; NOMUL-NEXT: csrr a0, vlenb
+; NOMUL-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOMUL-NEXT: slli a1, a0, 1
; NOMUL-NEXT: add a0, a1, a0
; NOMUL-NEXT: sub sp, sp, a0
-; NOMUL-NEXT: csrr a0, vlenb
+; NOMUL-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOMUL-NEXT: slli a1, a0, 1
; NOMUL-NEXT: add a0, a1, a0
; NOMUL-NEXT: add sp, sp, a0
@@ -213,7 +167,7 @@ define void @lmul4_and_1() nounwind {
; NOZBA-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; NOZBA-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; NOZBA-NEXT: addi s0, sp, 48
-; NOZBA-NEXT: csrr a0, vlenb
+; NOZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOZBA-NEXT: li a1, 6
; NOZBA-NEXT: mul a0, a0, a1
; NOZBA-NEXT: sub sp, sp, a0
@@ -230,7 +184,7 @@ define void @lmul4_and_1() nounwind {
; ZBA-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; ZBA-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; ZBA-NEXT: addi s0, sp, 48
-; ZBA-NEXT: csrr a0, vlenb
+; ZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZBA-NEXT: slli a0, a0, 1
; ZBA-NEXT: sh1add a0, a0, a0
; ZBA-NEXT: sub sp, sp, a0
@@ -247,7 +201,7 @@ define void @lmul4_and_1() nounwind {
; NOMUL-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; NOMUL-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; NOMUL-NEXT: addi s0, sp, 48
-; NOMUL-NEXT: csrr a0, vlenb
+; NOMUL-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOMUL-NEXT: slli a0, a0, 1
; NOMUL-NEXT: mv a1, a0
; NOMUL-NEXT: slli a0, a0, 1
@@ -271,7 +225,7 @@ define void @lmul4_and_2() nounwind {
; NOZBA-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; NOZBA-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; NOZBA-NEXT: addi s0, sp, 48
-; NOZBA-NEXT: csrr a0, vlenb
+; NOZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOZBA-NEXT: li a1, 6
; NOZBA-NEXT: mul a0, a0, a1
; NOZBA-NEXT: sub sp, sp, a0
@@ -288,7 +242,7 @@ define void @lmul4_and_2() nounwind {
; ZBA-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; ZBA-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; ZBA-NEXT: addi s0, sp, 48
-; ZBA-NEXT: csrr a0, vlenb
+; ZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZBA-NEXT: slli a0, a0, 1
; ZBA-NEXT: sh1add a0, a0, a0
; ZBA-NEXT: sub sp, sp, a0
@@ -305,7 +259,7 @@ define void @lmul4_and_2() nounwind {
; NOMUL-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; NOMUL-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; NOMUL-NEXT: addi s0, sp, 48
-; NOMUL-NEXT: csrr a0, vlenb
+; NOMUL-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOMUL-NEXT: slli a0, a0, 1
; NOMUL-NEXT: mv a1, a0
; NOMUL-NEXT: slli a0, a0, 1
@@ -329,7 +283,7 @@ define void @lmul4_and_2_x2_0() nounwind {
; NOZBA-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; NOZBA-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; NOZBA-NEXT: addi s0, sp, 48
-; NOZBA-NEXT: csrr a0, vlenb
+; NOZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOZBA-NEXT: li a1, 14
; NOZBA-NEXT: mul a0, a0, a1
; NOZBA-NEXT: sub sp, sp, a0
@@ -346,7 +300,7 @@ define void @lmul4_and_2_x2_0() nounwind {
; ZBA-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; ZBA-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; ZBA-NEXT: addi s0, sp, 48
-; ZBA-NEXT: csrr a0, vlenb
+; ZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZBA-NEXT: li a1, 14
; ZBA-NEXT: mul a0, a0, a1
; ZBA-NEXT: sub sp, sp, a0
@@ -363,7 +317,7 @@ define void @lmul4_and_2_x2_0() nounwind {
; NOMUL-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; NOMUL-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; NOMUL-NEXT: addi s0, sp, 48
-; NOMUL-NEXT: csrr a0, vlenb
+; NOMUL-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOMUL-NEXT: slli a0, a0, 1
; NOMUL-NEXT: mv a1, a0
; NOMUL-NEXT: slli a0, a0, 1
@@ -391,7 +345,7 @@ define void @lmul4_and_2_x2_1() nounwind {
; NOZBA-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; NOZBA-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; NOZBA-NEXT: addi s0, sp, 48
-; NOZBA-NEXT: csrr a0, vlenb
+; NOZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOZBA-NEXT: li a1, 12
; NOZBA-NEXT: mul a0, a0, a1
; NOZBA-NEXT: sub sp, sp, a0
@@ -408,7 +362,7 @@ define void @lmul4_and_2_x2_1() nounwind {
; ZBA-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; ZBA-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; ZBA-NEXT: addi s0, sp, 48
-; ZBA-NEXT: csrr a0, vlenb
+; ZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZBA-NEXT: slli a0, a0, 2
; ZBA-NEXT: sh1add a0, a0, a0
; ZBA-NEXT: sub sp, sp, a0
@@ -425,7 +379,7 @@ define void @lmul4_and_2_x2_1() nounwind {
; NOMUL-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; NOMUL-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; NOMUL-NEXT: addi s0, sp, 48
-; NOMUL-NEXT: csrr a0, vlenb
+; NOMUL-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOMUL-NEXT: slli a0, a0, 2
; NOMUL-NEXT: mv a1, a0
; NOMUL-NEXT: slli a0, a0, 1
@@ -446,46 +400,17 @@ define void @lmul4_and_2_x2_1() nounwind {
define void @gpr_and_lmul1_and_2() nounwind {
-; NOZBA-LABEL: gpr_and_lmul1_and_2:
-; NOZBA: # %bb.0:
-; NOZBA-NEXT: addi sp, sp, -16
-; NOZBA-NEXT: csrr a0, vlenb
-; NOZBA-NEXT: slli a0, a0, 2
-; NOZBA-NEXT: sub sp, sp, a0
-; NOZBA-NEXT: li a0, 3
-; NOZBA-NEXT: sd a0, 8(sp)
-; NOZBA-NEXT: csrr a0, vlenb
-; NOZBA-NEXT: slli a0, a0, 2
-; NOZBA-NEXT: add sp, sp, a0
-; NOZBA-NEXT: addi sp, sp, 16
-; NOZBA-NEXT: ret
-;
-; ZBA-LABEL: gpr_and_lmul1_and_2:
-; ZBA: # %bb.0:
-; ZBA-NEXT: addi sp, sp, -16
-; ZBA-NEXT: csrr a0, vlenb
-; ZBA-NEXT: slli a0, a0, 2
-; ZBA-NEXT: sub sp, sp, a0
-; ZBA-NEXT: li a0, 3
-; ZBA-NEXT: sd a0, 8(sp)
-; ZBA-NEXT: csrr a0, vlenb
-; ZBA-NEXT: sh2add sp, a0, sp
-; ZBA-NEXT: addi sp, sp, 16
-; ZBA-NEXT: ret
-;
-; NOMUL-LABEL: gpr_and_lmul1_and_2:
-; NOMUL: # %bb.0:
-; NOMUL-NEXT: addi sp, sp, -16
-; NOMUL-NEXT: csrr a0, vlenb
-; NOMUL-NEXT: slli a0, a0, 2
-; NOMUL-NEXT: sub sp, sp, a0
-; NOMUL-NEXT: li a0, 3
-; NOMUL-NEXT: sd a0, 8(sp)
-; NOMUL-NEXT: csrr a0, vlenb
-; NOMUL-NEXT: slli a0, a0, 2
-; NOMUL-NEXT: add sp, sp, a0
-; NOMUL-NEXT: addi sp, sp, 16
-; NOMUL-NEXT: ret
+; CHECK-LABEL: gpr_and_lmul1_and_2:
+; CHECK: # %bb.0:
+; CHECK-NEXT: addi sp, sp, -16
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
+; CHECK-NEXT: sub sp, sp, a0
+; CHECK-NEXT: li a0, 3
+; CHECK-NEXT: sd a0, 8(sp)
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
+; CHECK-NEXT: add sp, sp, a0
+; CHECK-NEXT: addi sp, sp, 16
+; CHECK-NEXT: ret
%x1 = alloca i64
%v1 = alloca <vscale x 1 x i64>
%v2 = alloca <vscale x 2 x i64>
@@ -500,8 +425,7 @@ define void @gpr_and_lmul1_and_4() nounwind {
; CHECK-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 32(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 48
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -32
; CHECK-NEXT: li a0, 3
@@ -525,7 +449,7 @@ define void @lmul_1_2_4_8() nounwind {
; CHECK-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 80
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -64
@@ -548,7 +472,7 @@ define void @lmul_1_2_4_8_x2_0() nounwind {
; CHECK-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 80
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -64
@@ -575,7 +499,7 @@ define void @lmul_1_2_4_8_x2_1() nounwind {
; CHECK-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 80
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -64
@@ -596,34 +520,13 @@ define void @lmul_1_2_4_8_x2_1() nounwind {
}
define void @masks() nounwind {
-; NOZBA-LABEL: masks:
-; NOZBA: # %bb.0:
-; NOZBA-NEXT: csrr a0, vlenb
-; NOZBA-NEXT: slli a0, a0, 2
-; NOZBA-NEXT: sub sp, sp, a0
-; NOZBA-NEXT: csrr a0, vlenb
-; NOZBA-NEXT: slli a0, a0, 2
-; NOZBA-NEXT: add sp, sp, a0
-; NOZBA-NEXT: ret
-;
-; ZBA-LABEL: masks:
-; ZBA: # %bb.0:
-; ZBA-NEXT: csrr a0, vlenb
-; ZBA-NEXT: slli a0, a0, 2
-; ZBA-NEXT: sub sp, sp, a0
-; ZBA-NEXT: csrr a0, vlenb
-; ZBA-NEXT: sh2add sp, a0, sp
-; ZBA-NEXT: ret
-;
-; NOMUL-LABEL: masks:
-; NOMUL: # %bb.0:
-; NOMUL-NEXT: csrr a0, vlenb
-; NOMUL-NEXT: slli a0, a0, 2
-; NOMUL-NEXT: sub sp, sp, a0
-; NOMUL-NEXT: csrr a0, vlenb
-; NOMUL-NEXT: slli a0, a0, 2
-; NOMUL-NEXT: add sp, sp, a0
-; NOMUL-NEXT: ret
+; CHECK-LABEL: masks:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
+; CHECK-NEXT: sub sp, sp, a0
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
+; CHECK-NEXT: add sp, sp, a0
+; CHECK-NEXT: ret
%v1 = alloca <vscale x 1 x i1>
%v2 = alloca <vscale x 2 x i1>
%v4 = alloca <vscale x 4 x i1>
@@ -638,7 +541,7 @@ define void @lmul_8_x5() nounwind {
; NOZBA-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; NOZBA-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; NOZBA-NEXT: addi s0, sp, 80
-; NOZBA-NEXT: csrr a0, vlenb
+; NOZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOZBA-NEXT: li a1, 40
; NOZBA-NEXT: mul a0, a0, a1
; NOZBA-NEXT: sub sp, sp, a0
@@ -655,7 +558,7 @@ define void @lmul_8_x5() nounwind {
; ZBA-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; ZBA-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; ZBA-NEXT: addi s0, sp, 80
-; ZBA-NEXT: csrr a0, vlenb
+; ZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZBA-NEXT: slli a0, a0, 3
; ZBA-NEXT: sh2add a0, a0, a0
; ZBA-NEXT: sub sp, sp, a0
@@ -672,7 +575,7 @@ define void @lmul_8_x5() nounwind {
; NOMUL-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; NOMUL-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; NOMUL-NEXT: addi s0, sp, 80
-; NOMUL-NEXT: csrr a0, vlenb
+; NOMUL-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOMUL-NEXT: slli a0, a0, 3
; NOMUL-NEXT: mv a1, a0
; NOMUL-NEXT: slli a0, a0, 2
@@ -699,7 +602,7 @@ define void @lmul_8_x9() nounwind {
; NOZBA-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; NOZBA-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; NOZBA-NEXT: addi s0, sp, 80
-; NOZBA-NEXT: csrr a0, vlenb
+; NOZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOZBA-NEXT: li a1, 72
; NOZBA-NEXT: mul a0, a0, a1
; NOZBA-NEXT: sub sp, sp, a0
@@ -716,7 +619,7 @@ define void @lmul_8_x9() nounwind {
; ZBA-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; ZBA-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; ZBA-NEXT: addi s0, sp, 80
-; ZBA-NEXT: csrr a0, vlenb
+; ZBA-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZBA-NEXT: slli a0, a0, 3
; ZBA-NEXT: sh3add a0, a0, a0
; ZBA-NEXT: sub sp, sp, a0
@@ -733,7 +636,7 @@ define void @lmul_8_x9() nounwind {
; NOMUL-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; NOMUL-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; NOMUL-NEXT: addi s0, sp, 80
-; NOMUL-NEXT: csrr a0, vlenb
+; NOMUL-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOMUL-NEXT: slli a0, a0, 3
; NOMUL-NEXT: mv a1, a0
; NOMUL-NEXT: slli a0, a0, 3
diff --git a/llvm/test/CodeGen/RISCV/rvv/bitreverse-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/bitreverse-sdnode.ll
index a34f06948a762c..c5e267ee7ea9bc 100644
--- a/llvm/test/CodeGen/RISCV/rvv/bitreverse-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/bitreverse-sdnode.ll
@@ -1126,8 +1126,7 @@ define <vscale x 8 x i64> @bitreverse_nxv8i64(<vscale x 8 x i64> %va) {
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV32-NEXT: lui a0, 1044480
@@ -1199,8 +1198,7 @@ define <vscale x 8 x i64> @bitreverse_nxv8i64(<vscale x 8 x i64> %va) {
; RV32-NEXT: vand.vv v8, v8, v24
; RV32-NEXT: vadd.vv v8, v8, v8
; RV32-NEXT: vor.vv v8, v16, v8
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
; RV32-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll b/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll
index afce04d107e728..b1375487065ec0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/bitreverse-vp.ll
@@ -2289,7 +2289,7 @@ define <vscale x 7 x i64> @vp_bitreverse_nxv7i64(<vscale x 7 x i64> %va, <vscale
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 24
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -2395,7 +2395,7 @@ define <vscale x 7 x i64> @vp_bitreverse_nxv7i64(<vscale x 7 x i64> %va, <vscale
; RV32-NEXT: vand.vv v8, v8, v24, v0.t
; RV32-NEXT: vsll.vi v8, v8, 1, v0.t
; RV32-NEXT: vor.vv v8, v16, v8, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 24
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -2406,8 +2406,7 @@ define <vscale x 7 x i64> @vp_bitreverse_nxv7i64(<vscale x 7 x i64> %va, <vscale
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a1, a1, 3
+; RV64-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: lui a1, 4080
@@ -2473,8 +2472,7 @@ define <vscale x 7 x i64> @vp_bitreverse_nxv7i64(<vscale x 7 x i64> %va, <vscale
; RV64-NEXT: vand.vx v8, v8, a0, v0.t
; RV64-NEXT: vsll.vi v8, v8, 1, v0.t
; RV64-NEXT: vor.vv v8, v16, v8, v0.t
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -2493,8 +2491,7 @@ define <vscale x 7 x i64> @vp_bitreverse_nxv7i64_unmasked(<vscale x 7 x i64> %va
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
+; RV32-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV32-NEXT: lui a1, 1044480
@@ -2568,8 +2565,7 @@ define <vscale x 7 x i64> @vp_bitreverse_nxv7i64_unmasked(<vscale x 7 x i64> %va
; RV32-NEXT: vand.vv v8, v8, v24
; RV32-NEXT: vadd.vv v8, v8, v8
; RV32-NEXT: vor.vv v8, v16, v8
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
; RV32-NEXT: ret
@@ -2650,7 +2646,7 @@ define <vscale x 8 x i64> @vp_bitreverse_nxv8i64(<vscale x 8 x i64> %va, <vscale
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 24
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -2756,7 +2752,7 @@ define <vscale x 8 x i64> @vp_bitreverse_nxv8i64(<vscale x 8 x i64> %va, <vscale
; RV32-NEXT: vand.vv v8, v8, v24, v0.t
; RV32-NEXT: vsll.vi v8, v8, 1, v0.t
; RV32-NEXT: vor.vv v8, v16, v8, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 24
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -2767,8 +2763,7 @@ define <vscale x 8 x i64> @vp_bitreverse_nxv8i64(<vscale x 8 x i64> %va, <vscale
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a1, a1, 3
+; RV64-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: lui a1, 4080
@@ -2834,8 +2829,7 @@ define <vscale x 8 x i64> @vp_bitreverse_nxv8i64(<vscale x 8 x i64> %va, <vscale
; RV64-NEXT: vand.vx v8, v8, a0, v0.t
; RV64-NEXT: vsll.vi v8, v8, 1, v0.t
; RV64-NEXT: vor.vv v8, v16, v8, v0.t
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -2854,8 +2848,7 @@ define <vscale x 8 x i64> @vp_bitreverse_nxv8i64_unmasked(<vscale x 8 x i64> %va
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
+; RV32-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV32-NEXT: lui a1, 1044480
@@ -2929,8 +2922,7 @@ define <vscale x 8 x i64> @vp_bitreverse_nxv8i64_unmasked(<vscale x 8 x i64> %va
; RV32-NEXT: vand.vv v8, v8, v24
; RV32-NEXT: vadd.vv v8, v8, v8
; RV32-NEXT: vor.vv v8, v16, v8
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
; RV32-NEXT: ret
@@ -3012,7 +3004,7 @@ define <vscale x 64 x i16> @vp_bitreverse_nxv64i16(<vscale x 64 x i16> %va, <vsc
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -3089,7 +3081,7 @@ define <vscale x 64 x i16> @vp_bitreverse_nxv64i16(<vscale x 64 x i16> %va, <vsc
; CHECK-NEXT: vor.vv v8, v16, v8, v0.t
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/bswap-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/bswap-sdnode.ll
index e8e362b1f042dd..0c3ddb11160cc8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/bswap-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/bswap-sdnode.ll
@@ -507,8 +507,7 @@ define <vscale x 8 x i64> @bswap_nxv8i64(<vscale x 8 x i64> %va) {
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV32-NEXT: lui a0, 1044480
@@ -550,8 +549,7 @@ define <vscale x 8 x i64> @bswap_nxv8i64(<vscale x 8 x i64> %va) {
; RV32-NEXT: addi a0, sp, 16
; RV32-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vor.vv v8, v8, v16
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
; RV32-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll b/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll
index 171de6c2fddf17..a5de4cd960cb00 100644
--- a/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/bswap-vp.ll
@@ -1023,7 +1023,7 @@ define <vscale x 7 x i64> @vp_bswap_nxv7i64(<vscale x 7 x i64> %va, <vscale x 7
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 24
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -1099,7 +1099,7 @@ define <vscale x 7 x i64> @vp_bswap_nxv7i64(<vscale x 7 x i64> %va, <vscale x 7
; RV32-NEXT: addi a0, a0, 16
; RV32-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vor.vv v8, v16, v8, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 24
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -1110,8 +1110,7 @@ define <vscale x 7 x i64> @vp_bswap_nxv7i64(<vscale x 7 x i64> %va, <vscale x 7
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a1, a1, 3
+; RV64-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: lui a1, 4080
@@ -1150,8 +1149,7 @@ define <vscale x 7 x i64> @vp_bswap_nxv7i64(<vscale x 7 x i64> %va, <vscale x 7
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV64-NEXT: vor.vv v8, v16, v8, v0.t
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -1170,8 +1168,7 @@ define <vscale x 7 x i64> @vp_bswap_nxv7i64_unmasked(<vscale x 7 x i64> %va, i32
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
+; RV32-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV32-NEXT: lui a1, 1044480
@@ -1214,8 +1211,7 @@ define <vscale x 7 x i64> @vp_bswap_nxv7i64_unmasked(<vscale x 7 x i64> %va, i32
; RV32-NEXT: vor.vv v8, v8, v24
; RV32-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vor.vv v8, v16, v8
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
; RV32-NEXT: ret
@@ -1269,7 +1265,7 @@ define <vscale x 8 x i64> @vp_bswap_nxv8i64(<vscale x 8 x i64> %va, <vscale x 8
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 24
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -1345,7 +1341,7 @@ define <vscale x 8 x i64> @vp_bswap_nxv8i64(<vscale x 8 x i64> %va, <vscale x 8
; RV32-NEXT: addi a0, a0, 16
; RV32-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vor.vv v8, v16, v8, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 24
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -1356,8 +1352,7 @@ define <vscale x 8 x i64> @vp_bswap_nxv8i64(<vscale x 8 x i64> %va, <vscale x 8
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a1, a1, 3
+; RV64-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: lui a1, 4080
@@ -1396,8 +1391,7 @@ define <vscale x 8 x i64> @vp_bswap_nxv8i64(<vscale x 8 x i64> %va, <vscale x 8
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV64-NEXT: vor.vv v8, v16, v8, v0.t
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -1416,8 +1410,7 @@ define <vscale x 8 x i64> @vp_bswap_nxv8i64_unmasked(<vscale x 8 x i64> %va, i32
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
+; RV32-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV32-NEXT: lui a1, 1044480
@@ -1460,8 +1453,7 @@ define <vscale x 8 x i64> @vp_bswap_nxv8i64_unmasked(<vscale x 8 x i64> %va, i32
; RV32-NEXT: vor.vv v8, v8, v24
; RV32-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vor.vv v8, v16, v8
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
; RV32-NEXT: ret
@@ -1516,7 +1508,7 @@ define <vscale x 64 x i16> @vp_bswap_nxv64i16(<vscale x 64 x i16> %va, <vscale x
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1557,7 +1549,7 @@ define <vscale x 64 x i16> @vp_bswap_nxv64i16(<vscale x 64 x i16> %va, <vscale x
; CHECK-NEXT: vor.vv v8, v8, v16, v0.t
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/callee-saved-regs.ll b/llvm/test/CodeGen/RISCV/rvv/callee-saved-regs.ll
index c0b10be847d1ff..d418c47a53055d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/callee-saved-regs.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/callee-saved-regs.ll
@@ -6,14 +6,14 @@ define <vscale x 1 x i32> @test_vector_std(<vscale x 1 x i32> %va) nounwind {
; SPILL-O2-LABEL: test_vector_std:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a0
; SPILL-O2-NEXT: addi a0, sp, 16
; SPILL-O2-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
; SPILL-O2-NEXT: #APP
; SPILL-O2-NEXT: #NO_APP
; SPILL-O2-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -28,7 +28,7 @@ define riscv_vector_cc <vscale x 1 x i32> @test_vector_callee(<vscale x 1 x i32>
; SPILL-O2-LABEL: test_vector_callee:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: slli a0, a0, 4
; SPILL-O2-NEXT: sub sp, sp, a0
; SPILL-O2-NEXT: csrr a0, vlenb
@@ -80,7 +80,7 @@ define riscv_vector_cc <vscale x 1 x i32> @test_vector_callee(<vscale x 1 x i32>
; SPILL-O2-NEXT: add a0, sp, a0
; SPILL-O2-NEXT: addi a0, a0, 16
; SPILL-O2-NEXT: vl8r.v v24, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: slli a0, a0, 4
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll b/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll
index 427ce9d097135b..b94c4b29fbbbd3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/calling-conv-fastcc.ll
@@ -99,7 +99,7 @@ define fastcc <vscale x 128 x i32> @ret_split_nxv128i32(ptr %x) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 5
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -179,7 +179,7 @@ define fastcc <vscale x 128 x i32> @ret_split_nxv128i32(ptr %x) {
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vs8r.v v8, (a0)
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -233,7 +233,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_param_nxv32i32_nxv32i32_nxv32i32
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -266,7 +266,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_param_nxv32i32_nxv32i32_nxv32i32
; CHECK-NEXT: vadd.vv v24, v24, v16
; CHECK-NEXT: vadd.vx v16, v8, a4
; CHECK-NEXT: vadd.vx v8, v24, a4
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -293,7 +293,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_i32(<vsca
; RV32-NEXT: .cfi_offset s0, -8
; RV32-NEXT: addi s0, sp, 144
; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: andi sp, sp, -128
@@ -327,7 +327,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_i32(<vsca
; RV64-NEXT: .cfi_offset s0, -16
; RV64-NEXT: addi s0, sp, 144
; RV64-NEXT: .cfi_def_cfa s0, 0
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: andi sp, sp, -128
@@ -365,7 +365,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_nxv32i32_
; RV32-NEXT: .cfi_offset s0, -8
; RV32-NEXT: addi s0, sp, 144
; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a3, 48
; RV32-NEXT: mul a1, a1, a3
; RV32-NEXT: sub sp, sp, a1
@@ -433,7 +433,7 @@ define fastcc <vscale x 32 x i32> @ret_nxv32i32_call_nxv32i32_nxv32i32_nxv32i32_
; RV64-NEXT: .cfi_offset s0, -16
; RV64-NEXT: addi s0, sp, 144
; RV64-NEXT: .cfi_def_cfa s0, 0
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: li a3, 48
; RV64-NEXT: mul a1, a1, a3
; RV64-NEXT: sub sp, sp, a1
@@ -527,7 +527,7 @@ define fastcc <vscale x 32 x i32> @pass_vector_arg_indirect_stack(<vscale x 32 x
; RV32-NEXT: .cfi_offset s1, -12
; RV32-NEXT: addi s0, sp, 144
; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 5
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: andi sp, sp, -128
@@ -585,7 +585,7 @@ define fastcc <vscale x 32 x i32> @pass_vector_arg_indirect_stack(<vscale x 32 x
; RV64-NEXT: .cfi_offset s1, -24
; RV64-NEXT: addi s0, sp, 160
; RV64-NEXT: .cfi_def_cfa s0, 0
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 5
; RV64-NEXT: sub sp, sp, a0
; RV64-NEXT: andi sp, sp, -128
@@ -669,8 +669,7 @@ define fastcc <vscale x 16 x i32> @pass_vector_arg_indirect_stack_no_gpr(<vscale
; RV32-NEXT: .cfi_offset s1, -12
; RV32-NEXT: addi s0, sp, 80
; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: andi sp, sp, -64
; RV32-NEXT: mv s1, sp
@@ -714,8 +713,7 @@ define fastcc <vscale x 16 x i32> @pass_vector_arg_indirect_stack_no_gpr(<vscale
; RV64-NEXT: .cfi_offset s1, -24
; RV64-NEXT: addi s0, sp, 96
; RV64-NEXT: .cfi_def_cfa s0, 0
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a0
; RV64-NEXT: andi sp, sp, -64
; RV64-NEXT: mv s1, sp
diff --git a/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll b/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll
index b229af5849fe9d..6ee46db11feb9b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/calling-conv.ll
@@ -31,7 +31,7 @@ define <vscale x 32 x i32> @caller_scalable_vector_split_indirect(<vscale x 32 x
; RV32-NEXT: .cfi_offset s0, -8
; RV32-NEXT: addi s0, sp, 144
; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: andi sp, sp, -128
@@ -62,7 +62,7 @@ define <vscale x 32 x i32> @caller_scalable_vector_split_indirect(<vscale x 32 x
; RV64-NEXT: .cfi_offset s0, -16
; RV64-NEXT: addi s0, sp, 144
; RV64-NEXT: .cfi_def_cfa s0, 0
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: sub sp, sp, a0
; RV64-NEXT: andi sp, sp, -128
diff --git a/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll b/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
index 15cff650765efa..3e243b8b5222d8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/ceil-vp.ll
@@ -275,8 +275,7 @@ define <vscale x 32 x bfloat> @vp_ceil_vv_nxv32bf16(<vscale x 32 x bfloat> %va,
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -340,8 +339,7 @@ define <vscale x 32 x bfloat> @vp_ceil_vv_nxv32bf16(<vscale x 32 x bfloat> %va,
; CHECK-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v24
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -354,8 +352,7 @@ define <vscale x 32 x bfloat> @vp_ceil_vv_nxv32bf16_unmasked(<vscale x 32 x bflo
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -408,8 +405,7 @@ define <vscale x 32 x bfloat> @vp_ceil_vv_nxv32bf16_unmasked(<vscale x 32 x bflo
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -862,8 +858,7 @@ define <vscale x 32 x half> @vp_ceil_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -927,8 +922,7 @@ define <vscale x 32 x half> @vp_ceil_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -956,8 +950,7 @@ define <vscale x 32 x half> @vp_ceil_vv_nxv32f16_unmasked(<vscale x 32 x half> %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -1010,8 +1003,7 @@ define <vscale x 32 x half> @vp_ceil_vv_nxv32f16_unmasked(<vscale x 32 x half> %
; ZVFHMIN-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1461,8 +1453,7 @@ define <vscale x 16 x double> @vp_ceil_vv_nxv16f64(<vscale x 16 x double> %va, <
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -1510,8 +1501,7 @@ define <vscale x 16 x double> @vp_ceil_vv_nxv16f64(<vscale x 16 x double> %va, <
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v8, v24, v8, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/compressstore.ll b/llvm/test/CodeGen/RISCV/rvv/compressstore.ll
index 52811133c53f3d..0336b113746b72 100644
--- a/llvm/test/CodeGen/RISCV/rvv/compressstore.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/compressstore.ll
@@ -199,7 +199,7 @@ define void @test_compresstore_v256i8(ptr %p, <256 x i1> %mask, <256 x i8> %data
; RV64: # %bb.0: # %entry
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a2, vlenb
+; RV64-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; RV64-NEXT: slli a2, a2, 4
; RV64-NEXT: sub sp, sp, a2
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -238,7 +238,7 @@ define void @test_compresstore_v256i8(ptr %p, <256 x i1> %mask, <256 x i8> %data
; RV64-NEXT: add a0, a0, a1
; RV64-NEXT: vsetvli zero, a2, e8, m8, ta, ma
; RV64-NEXT: vse8.v v16, (a0)
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll b/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll
index 7031f93edc2c3e..43ff018415e827 100644
--- a/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/ctpop-vp.ll
@@ -2017,7 +2017,7 @@ define <vscale x 16 x i64> @vp_ctpop_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 56
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -2216,7 +2216,7 @@ define <vscale x 16 x i64> @vp_ctpop_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV32-NEXT: add a0, sp, a0
; RV32-NEXT: addi a0, a0, 16
; RV32-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 56
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -2227,7 +2227,7 @@ define <vscale x 16 x i64> @vp_ctpop_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -2302,7 +2302,7 @@ define <vscale x 16 x i64> @vp_ctpop_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV64-NEXT: vsrl.vx v16, v8, a6, v0.t
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
@@ -2338,7 +2338,7 @@ define <vscale x 16 x i64> @vp_ctpop_nxv16i64_unmasked(<vscale x 16 x i64> %va,
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 5
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -2434,7 +2434,7 @@ define <vscale x 16 x i64> @vp_ctpop_nxv16i64_unmasked(<vscale x 16 x i64> %va,
; RV32-NEXT: vl8r.v v24, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vmul.vv v8, v8, v24
; RV32-NEXT: vsrl.vx v8, v8, a2
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 5
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll b/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll
index d36240e493e41d..beb92f54bd40d0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/cttz-vp.ll
@@ -2241,7 +2241,7 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 56
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -2476,7 +2476,7 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV32-NEXT: add a0, sp, a0
; RV32-NEXT: addi a0, a0, 16
; RV32-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 56
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -2487,7 +2487,7 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -2568,7 +2568,7 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64(<vscale x 16 x i64> %va, <vscale x
; RV64-NEXT: vsrl.vx v8, v8, a7, v0.t
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
@@ -2604,7 +2604,7 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64_unmasked(<vscale x 16 x i64> %va, i
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 5
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -2707,7 +2707,7 @@ define <vscale x 16 x i64> @vp_cttz_nxv16i64_unmasked(<vscale x 16 x i64> %va, i
; RV32-NEXT: vl8r.v v24, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vmul.vv v8, v8, v24
; RV32-NEXT: vsrl.vx v8, v8, a3
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 5
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
@@ -3987,7 +3987,7 @@ define <vscale x 16 x i64> @vp_cttz_zero_undef_nxv16i64(<vscale x 16 x i64> %va,
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -4036,7 +4036,7 @@ define <vscale x 16 x i64> @vp_cttz_zero_undef_nxv16i64(<vscale x 16 x i64> %va,
; CHECK-NEXT: fsrm a0
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/emergency-slot.mir b/llvm/test/CodeGen/RISCV/rvv/emergency-slot.mir
index 1b9ce12af01f96..7dc846097b5ba6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/emergency-slot.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/emergency-slot.mir
@@ -82,7 +82,7 @@ body: |
; CHECK-NEXT: $x8 = frame-setup ADDI $x2, 2032
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa $x8, 0
; CHECK-NEXT: $x2 = frame-setup ADDI $x2, -272
- ; CHECK-NEXT: $x10 = frame-setup PseudoReadVLENB
+ ; CHECK-NEXT: $x10 = frame-setup PseudoReadMulVLENB 1, implicit-def $vl, implicit-def $vtype
; CHECK-NEXT: $x11 = frame-setup ADDI $x0, 51
; CHECK-NEXT: $x10 = frame-setup MUL killed $x10, killed $x11
; CHECK-NEXT: $x2 = frame-setup SUB $x2, killed $x10
diff --git a/llvm/test/CodeGen/RISCV/rvv/extractelt-fp.ll b/llvm/test/CodeGen/RISCV/rvv/extractelt-fp.ll
index 86ef78be97afb0..0ae8158451b064 100644
--- a/llvm/test/CodeGen/RISCV/rvv/extractelt-fp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/extractelt-fp.ll
@@ -1291,7 +1291,7 @@ define double @extractelt_nxv16f64_neg1(<vscale x 16 x double> %v) {
; RV32-NEXT: .cfi_offset s0, -8
; RV32-NEXT: addi s0, sp, 80
; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: andi sp, sp, -64
@@ -1320,7 +1320,7 @@ define double @extractelt_nxv16f64_neg1(<vscale x 16 x double> %v) {
; RV64-NEXT: .cfi_offset s0, -16
; RV64-NEXT: addi s0, sp, 80
; RV64-NEXT: .cfi_def_cfa s0, 0
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: sub sp, sp, a0
; RV64-NEXT: andi sp, sp, -64
@@ -1379,7 +1379,7 @@ define double @extractelt_nxv16f64_idx(<vscale x 16 x double> %v, i32 zeroext %i
; RV32-NEXT: .cfi_offset s0, -8
; RV32-NEXT: addi s0, sp, 80
; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: csrr a2, vlenb
+; RV32-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; RV32-NEXT: slli a2, a2, 4
; RV32-NEXT: sub sp, sp, a2
; RV32-NEXT: andi sp, sp, -64
@@ -1414,7 +1414,7 @@ define double @extractelt_nxv16f64_idx(<vscale x 16 x double> %v, i32 zeroext %i
; RV64-NEXT: .cfi_offset s0, -16
; RV64-NEXT: addi s0, sp, 80
; RV64-NEXT: .cfi_def_cfa s0, 0
-; RV64-NEXT: csrr a2, vlenb
+; RV64-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; RV64-NEXT: slli a2, a2, 4
; RV64-NEXT: sub sp, sp, a2
; RV64-NEXT: andi sp, sp, -64
diff --git a/llvm/test/CodeGen/RISCV/rvv/extractelt-i1.ll b/llvm/test/CodeGen/RISCV/rvv/extractelt-i1.ll
index 14719e190a6934..8478eedca461d3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/extractelt-i1.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/extractelt-i1.ll
@@ -135,7 +135,7 @@ define i1 @extractelt_nxv128i1(ptr %x, i64 %idx) nounwind {
; RV32-NEXT: sw ra, 76(sp) # 4-byte Folded Spill
; RV32-NEXT: sw s0, 72(sp) # 4-byte Folded Spill
; RV32-NEXT: addi s0, sp, 80
-; RV32-NEXT: csrr a3, vlenb
+; RV32-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; RV32-NEXT: slli a3, a3, 4
; RV32-NEXT: sub sp, sp, a3
; RV32-NEXT: andi sp, sp, -64
@@ -175,7 +175,7 @@ define i1 @extractelt_nxv128i1(ptr %x, i64 %idx) nounwind {
; RV64-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; RV64-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; RV64-NEXT: addi s0, sp, 80
-; RV64-NEXT: csrr a3, vlenb
+; RV64-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; RV64-NEXT: slli a3, a3, 4
; RV64-NEXT: sub sp, sp, a3
; RV64-NEXT: andi sp, sp, -64
diff --git a/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv32.ll b/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv32.ll
index 4d6bc349ffacbc..1f48ff5c4dced6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv32.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv32.ll
@@ -871,7 +871,7 @@ define i32 @extractelt_nxv32i32_neg1(<vscale x 32 x i32> %v) {
; CHECK-NEXT: .cfi_offset s0, -8
; CHECK-NEXT: addi s0, sp, 80
; CHECK-NEXT: .cfi_def_cfa s0, 0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -64
@@ -922,7 +922,7 @@ define i32 @extractelt_nxv32i32_idx(<vscale x 32 x i32> %v, i32 %idx) {
; CHECK-NEXT: .cfi_offset s0, -8
; CHECK-NEXT: addi s0, sp, 80
; CHECK-NEXT: .cfi_def_cfa s0, 0
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: andi sp, sp, -64
diff --git a/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv64.ll b/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv64.ll
index aba0ad022005ba..a2f19f2c45b003 100644
--- a/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv64.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/extractelt-int-rv64.ll
@@ -857,7 +857,7 @@ define i64 @extractelt_nxv16i64_neg1(<vscale x 16 x i64> %v) {
; CHECK-NEXT: .cfi_offset s0, -16
; CHECK-NEXT: addi s0, sp, 80
; CHECK-NEXT: .cfi_def_cfa s0, 0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -64
@@ -916,7 +916,7 @@ define i64 @extractelt_nxv16i64_idx(<vscale x 16 x i64> %v, i32 zeroext %idx) {
; CHECK-NEXT: .cfi_offset s0, -16
; CHECK-NEXT: addi s0, sp, 80
; CHECK-NEXT: .cfi_def_cfa s0, 0
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: andi sp, sp, -64
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
index 54265193b09f6e..ce0dd1908db68c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bitreverse-vp.ll
@@ -1651,7 +1651,7 @@ define <15 x i64> @vp_bitreverse_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroex
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 24
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -1766,7 +1766,7 @@ define <15 x i64> @vp_bitreverse_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroex
; RV32-NEXT: vand.vv v8, v8, v24, v0.t
; RV32-NEXT: vsll.vi v8, v8, 1, v0.t
; RV32-NEXT: vor.vv v8, v16, v8, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 24
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -1777,8 +1777,7 @@ define <15 x i64> @vp_bitreverse_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroex
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a1, a1, 3
+; RV64-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: lui a1, 4080
@@ -1844,8 +1843,7 @@ define <15 x i64> @vp_bitreverse_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroex
; RV64-NEXT: vand.vx v8, v8, a0, v0.t
; RV64-NEXT: vsll.vi v8, v8, 1, v0.t
; RV64-NEXT: vor.vv v8, v16, v8, v0.t
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -1858,8 +1856,7 @@ define <15 x i64> @vp_bitreverse_v15i64_unmasked(<15 x i64> %va, i32 zeroext %ev
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
+; RV32-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 8 * vlenb
; RV32-NEXT: lui a1, 1044480
@@ -1942,8 +1939,7 @@ define <15 x i64> @vp_bitreverse_v15i64_unmasked(<15 x i64> %va, i32 zeroext %ev
; RV32-NEXT: vand.vv v8, v8, v16
; RV32-NEXT: vadd.vv v8, v8, v8
; RV32-NEXT: vor.vv v8, v24, v8
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
; RV32-NEXT: ret
@@ -2018,7 +2014,7 @@ define <16 x i64> @vp_bitreverse_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroex
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 24
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -2133,7 +2129,7 @@ define <16 x i64> @vp_bitreverse_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroex
; RV32-NEXT: vand.vv v8, v8, v24, v0.t
; RV32-NEXT: vsll.vi v8, v8, 1, v0.t
; RV32-NEXT: vor.vv v8, v16, v8, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 24
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -2144,8 +2140,7 @@ define <16 x i64> @vp_bitreverse_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroex
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a1, a1, 3
+; RV64-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: lui a1, 4080
@@ -2211,8 +2206,7 @@ define <16 x i64> @vp_bitreverse_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroex
; RV64-NEXT: vand.vx v8, v8, a0, v0.t
; RV64-NEXT: vsll.vi v8, v8, 1, v0.t
; RV64-NEXT: vor.vv v8, v16, v8, v0.t
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -2225,8 +2219,7 @@ define <16 x i64> @vp_bitreverse_v16i64_unmasked(<16 x i64> %va, i32 zeroext %ev
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
+; RV32-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 8 * vlenb
; RV32-NEXT: lui a1, 1044480
@@ -2309,8 +2302,7 @@ define <16 x i64> @vp_bitreverse_v16i64_unmasked(<16 x i64> %va, i32 zeroext %ev
; RV32-NEXT: vand.vv v8, v8, v16
; RV32-NEXT: vadd.vv v8, v8, v8
; RV32-NEXT: vor.vv v8, v24, v8
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
; RV32-NEXT: ret
@@ -2385,7 +2377,7 @@ define <128 x i16> @vp_bitreverse_v128i16(<128 x i16> %va, <128 x i1> %m, i32 ze
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -2460,7 +2452,7 @@ define <128 x i16> @vp_bitreverse_v128i16(<128 x i16> %va, <128 x i1> %m, i32 ze
; CHECK-NEXT: vor.vv v16, v16, v8, v0.t
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap-vp.ll
index b8ddf74c30dbdc..d35592899ac289 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-bswap-vp.ll
@@ -757,7 +757,7 @@ define <15 x i64> @vp_bswap_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %ev
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 24
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -833,7 +833,7 @@ define <15 x i64> @vp_bswap_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %ev
; RV32-NEXT: addi a0, a0, 16
; RV32-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vor.vv v8, v16, v8, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 24
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -844,8 +844,7 @@ define <15 x i64> @vp_bswap_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %ev
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a1, a1, 3
+; RV64-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: lui a1, 4080
@@ -884,8 +883,7 @@ define <15 x i64> @vp_bswap_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %ev
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV64-NEXT: vor.vv v8, v16, v8, v0.t
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -898,8 +896,7 @@ define <15 x i64> @vp_bswap_v15i64_unmasked(<15 x i64> %va, i32 zeroext %evl) {
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
+; RV32-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV32-NEXT: lui a1, 1044480
@@ -942,8 +939,7 @@ define <15 x i64> @vp_bswap_v15i64_unmasked(<15 x i64> %va, i32 zeroext %evl) {
; RV32-NEXT: vor.vv v8, v8, v24
; RV32-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vor.vv v8, v16, v8
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
; RV32-NEXT: ret
@@ -991,7 +987,7 @@ define <16 x i64> @vp_bswap_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %ev
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 24
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -1067,7 +1063,7 @@ define <16 x i64> @vp_bswap_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %ev
; RV32-NEXT: addi a0, a0, 16
; RV32-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vor.vv v8, v16, v8, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 24
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -1078,8 +1074,7 @@ define <16 x i64> @vp_bswap_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %ev
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a1, a1, 3
+; RV64-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: lui a1, 4080
@@ -1118,8 +1113,7 @@ define <16 x i64> @vp_bswap_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %ev
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV64-NEXT: vor.vv v8, v16, v8, v0.t
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -1132,8 +1126,7 @@ define <16 x i64> @vp_bswap_v16i64_unmasked(<16 x i64> %va, i32 zeroext %evl) {
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
-; RV32-NEXT: slli a1, a1, 3
+; RV32-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV32-NEXT: lui a1, 1044480
@@ -1176,8 +1169,7 @@ define <16 x i64> @vp_bswap_v16i64_unmasked(<16 x i64> %va, i32 zeroext %evl) {
; RV32-NEXT: vor.vv v8, v8, v24
; RV32-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vor.vv v8, v16, v8
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 3
+; RV32-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
; RV32-NEXT: ret
@@ -1225,7 +1217,7 @@ define <128 x i16> @vp_bswap_v128i16(<128 x i16> %va, <128 x i1> %m, i32 zeroext
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1264,7 +1256,7 @@ define <128 x i16> @vp_bswap_v128i16(<128 x i16> %va, <128 x i1> %m, i32 zeroext
; CHECK-NEXT: vor.vv v16, v8, v16, v0.t
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
index befbfb88550bad..ea014b7f8bd744 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ceil-vp.ll
@@ -748,8 +748,7 @@ define <32 x double> @vp_ceil_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroex
; CHECK-NEXT: .LBB26_2:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI26_0)
@@ -788,8 +787,7 @@ define <32 x double> @vp_ceil_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroex
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz-vp.ll
index c59b45a1d4f833..f6f6c73949789e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctlz-vp.ll
@@ -1505,7 +1505,7 @@ define <15 x i64> @vp_ctlz_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %evl
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -1589,7 +1589,7 @@ define <15 x i64> @vp_ctlz_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %evl
; RV32-NEXT: vmul.vv v8, v8, v24, v0.t
; RV32-NEXT: li a0, 56
; RV32-NEXT: vsrl.vx v8, v8, a0, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
@@ -1768,7 +1768,7 @@ define <16 x i64> @vp_ctlz_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %evl
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -1852,7 +1852,7 @@ define <16 x i64> @vp_ctlz_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %evl
; RV32-NEXT: vmul.vv v8, v8, v24, v0.t
; RV32-NEXT: li a0, 56
; RV32-NEXT: vsrl.vx v8, v8, a0, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
@@ -2031,7 +2031,7 @@ define <32 x i64> @vp_ctlz_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %evl
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 56
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -2335,7 +2335,7 @@ define <32 x i64> @vp_ctlz_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %evl
; RV32-NEXT: add a0, sp, a0
; RV32-NEXT: addi a0, a0, 48
; RV32-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 56
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -2346,7 +2346,7 @@ define <32 x i64> @vp_ctlz_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %evl
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -2447,7 +2447,7 @@ define <32 x i64> @vp_ctlz_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %evl
; RV64-NEXT: vsrl.vx v16, v8, a6, v0.t
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
@@ -2461,7 +2461,7 @@ define <32 x i64> @vp_ctlz_v32i64_unmasked(<32 x i64> %va, i32 zeroext %evl) {
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -2579,7 +2579,7 @@ define <32 x i64> @vp_ctlz_v32i64_unmasked(<32 x i64> %va, i32 zeroext %evl) {
; RV32-NEXT: vsrl.vx v8, v16, a2
; RV32-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV32-NEXT: vsrl.vx v16, v24, a2
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
@@ -4141,7 +4141,7 @@ define <15 x i64> @vp_ctlz_zero_undef_v15i64(<15 x i64> %va, <15 x i1> %m, i32 z
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -4225,7 +4225,7 @@ define <15 x i64> @vp_ctlz_zero_undef_v15i64(<15 x i64> %va, <15 x i1> %m, i32 z
; RV32-NEXT: vmul.vv v8, v8, v24, v0.t
; RV32-NEXT: li a0, 56
; RV32-NEXT: vsrl.vx v8, v8, a0, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
@@ -4402,7 +4402,7 @@ define <16 x i64> @vp_ctlz_zero_undef_v16i64(<16 x i64> %va, <16 x i1> %m, i32 z
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -4486,7 +4486,7 @@ define <16 x i64> @vp_ctlz_zero_undef_v16i64(<16 x i64> %va, <16 x i1> %m, i32 z
; RV32-NEXT: vmul.vv v8, v8, v24, v0.t
; RV32-NEXT: li a0, 56
; RV32-NEXT: vsrl.vx v8, v8, a0, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
@@ -4663,7 +4663,7 @@ define <32 x i64> @vp_ctlz_zero_undef_v32i64(<32 x i64> %va, <32 x i1> %m, i32 z
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 56
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -4967,7 +4967,7 @@ define <32 x i64> @vp_ctlz_zero_undef_v32i64(<32 x i64> %va, <32 x i1> %m, i32 z
; RV32-NEXT: add a0, sp, a0
; RV32-NEXT: addi a0, a0, 48
; RV32-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 56
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -4978,7 +4978,7 @@ define <32 x i64> @vp_ctlz_zero_undef_v32i64(<32 x i64> %va, <32 x i1> %m, i32 z
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -5079,7 +5079,7 @@ define <32 x i64> @vp_ctlz_zero_undef_v32i64(<32 x i64> %va, <32 x i1> %m, i32 z
; RV64-NEXT: vsrl.vx v16, v8, a6, v0.t
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
@@ -5093,7 +5093,7 @@ define <32 x i64> @vp_ctlz_zero_undef_v32i64_unmasked(<32 x i64> %va, i32 zeroex
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -5211,7 +5211,7 @@ define <32 x i64> @vp_ctlz_zero_undef_v32i64_unmasked(<32 x i64> %va, i32 zeroex
; RV32-NEXT: vsrl.vx v8, v16, a2
; RV32-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV32-NEXT: vsrl.vx v16, v24, a2
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
index ffc1bfd240804e..0f00227823ff1f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-ctpop-vp.ll
@@ -1121,7 +1121,7 @@ define <15 x i64> @vp_ctpop_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %ev
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 24
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -1201,7 +1201,7 @@ define <15 x i64> @vp_ctpop_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %ev
; RV32-NEXT: vmul.vv v8, v8, v24, v0.t
; RV32-NEXT: li a0, 56
; RV32-NEXT: vsrl.vx v8, v8, a0, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 24
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -1338,7 +1338,7 @@ define <16 x i64> @vp_ctpop_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %ev
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 24
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -1418,7 +1418,7 @@ define <16 x i64> @vp_ctpop_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %ev
; RV32-NEXT: vmul.vv v8, v8, v24, v0.t
; RV32-NEXT: li a0, 56
; RV32-NEXT: vsrl.vx v8, v8, a0, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 24
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -1555,7 +1555,7 @@ define <32 x i64> @vp_ctpop_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %ev
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 48
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -1725,7 +1725,7 @@ define <32 x i64> @vp_ctpop_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %ev
; RV32-NEXT: add a0, sp, a0
; RV32-NEXT: addi a0, a0, 48
; RV32-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 48
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -1736,7 +1736,7 @@ define <32 x i64> @vp_ctpop_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %ev
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1810,7 +1810,7 @@ define <32 x i64> @vp_ctpop_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %ev
; RV64-NEXT: vsrl.vx v16, v8, a5, v0.t
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
@@ -1824,7 +1824,7 @@ define <32 x i64> @vp_ctpop_v32i64_unmasked(<32 x i64> %va, i32 zeroext %evl) {
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -1914,7 +1914,7 @@ define <32 x i64> @vp_ctpop_v32i64_unmasked(<32 x i64> %va, i32 zeroext %evl) {
; RV32-NEXT: vsrl.vx v8, v16, a2
; RV32-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV32-NEXT: vsrl.vx v16, v24, a2
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz-vp.ll
index 5b002207729735..86687533b9cfca 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-cttz-vp.ll
@@ -1265,7 +1265,7 @@ define <15 x i64> @vp_cttz_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %evl
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -1339,7 +1339,7 @@ define <15 x i64> @vp_cttz_v15i64(<15 x i64> %va, <15 x i1> %m, i32 zeroext %evl
; RV32-NEXT: vmul.vv v8, v8, v24, v0.t
; RV32-NEXT: li a0, 56
; RV32-NEXT: vsrl.vx v8, v8, a0, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
@@ -1488,7 +1488,7 @@ define <16 x i64> @vp_cttz_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %evl
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -1562,7 +1562,7 @@ define <16 x i64> @vp_cttz_v16i64(<16 x i64> %va, <16 x i1> %m, i32 zeroext %evl
; RV32-NEXT: vmul.vv v8, v8, v24, v0.t
; RV32-NEXT: li a0, 56
; RV32-NEXT: vsrl.vx v8, v8, a0, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
@@ -1711,7 +1711,7 @@ define <32 x i64> @vp_cttz_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %evl
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 56
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -1995,7 +1995,7 @@ define <32 x i64> @vp_cttz_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %evl
; RV32-NEXT: add a0, sp, a0
; RV32-NEXT: addi a0, a0, 48
; RV32-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 56
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -2006,7 +2006,7 @@ define <32 x i64> @vp_cttz_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %evl
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -2087,7 +2087,7 @@ define <32 x i64> @vp_cttz_v32i64(<32 x i64> %va, <32 x i1> %m, i32 zeroext %evl
; RV64-NEXT: vsrl.vx v16, v8, a6, v0.t
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
@@ -2101,7 +2101,7 @@ define <32 x i64> @vp_cttz_v32i64_unmasked(<32 x i64> %va, i32 zeroext %evl) {
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -2199,7 +2199,7 @@ define <32 x i64> @vp_cttz_v32i64_unmasked(<32 x i64> %va, i32 zeroext %evl) {
; RV32-NEXT: vsrl.vx v8, v16, a2
; RV32-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV32-NEXT: vsrl.vx v16, v24, a2
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
@@ -3501,7 +3501,7 @@ define <15 x i64> @vp_cttz_zero_undef_v15i64(<15 x i64> %va, <15 x i1> %m, i32 z
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -3575,7 +3575,7 @@ define <15 x i64> @vp_cttz_zero_undef_v15i64(<15 x i64> %va, <15 x i1> %m, i32 z
; RV32-NEXT: vmul.vv v8, v8, v24, v0.t
; RV32-NEXT: li a0, 56
; RV32-NEXT: vsrl.vx v8, v8, a0, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
@@ -3722,7 +3722,7 @@ define <16 x i64> @vp_cttz_zero_undef_v16i64(<16 x i64> %va, <16 x i1> %m, i32 z
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -3796,7 +3796,7 @@ define <16 x i64> @vp_cttz_zero_undef_v16i64(<16 x i64> %va, <16 x i1> %m, i32 z
; RV32-NEXT: vmul.vv v8, v8, v24, v0.t
; RV32-NEXT: li a0, 56
; RV32-NEXT: vsrl.vx v8, v8, a0, v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
@@ -3943,7 +3943,7 @@ define <32 x i64> @vp_cttz_zero_undef_v32i64(<32 x i64> %va, <32 x i1> %m, i32 z
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: li a2, 56
; RV32-NEXT: mul a1, a1, a2
; RV32-NEXT: sub sp, sp, a1
@@ -4227,7 +4227,7 @@ define <32 x i64> @vp_cttz_zero_undef_v32i64(<32 x i64> %va, <32 x i1> %m, i32 z
; RV32-NEXT: add a0, sp, a0
; RV32-NEXT: addi a0, a0, 48
; RV32-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 56
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -4238,7 +4238,7 @@ define <32 x i64> @vp_cttz_zero_undef_v32i64(<32 x i64> %va, <32 x i1> %m, i32 z
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -4319,7 +4319,7 @@ define <32 x i64> @vp_cttz_zero_undef_v32i64(<32 x i64> %va, <32 x i1> %m, i32 z
; RV64-NEXT: vsrl.vx v16, v8, a6, v0.t
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
@@ -4333,7 +4333,7 @@ define <32 x i64> @vp_cttz_zero_undef_v32i64_unmasked(<32 x i64> %va, i32 zeroex
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: .cfi_def_cfa_offset 48
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 16 * vlenb
@@ -4431,7 +4431,7 @@ define <32 x i64> @vp_cttz_zero_undef_v32i64_unmasked(<32 x i64> %va, i32 zeroex
; RV32-NEXT: vsrl.vx v8, v16, a2
; RV32-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV32-NEXT: vsrl.vx v16, v24, a2
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 48
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
index c1b4c5fda6c640..27f5ec9d7e9a0b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-floor-vp.ll
@@ -748,8 +748,7 @@ define <32 x double> @vp_floor_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroe
; CHECK-NEXT: .LBB26_2:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI26_0)
@@ -788,8 +787,7 @@ define <32 x double> @vp_floor_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroe
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
index 51eb63f5f92212..e9979cb54c2735 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fmaximum-vp.ll
@@ -555,8 +555,7 @@ define <16 x double> @vfmax_vv_v16f64(<16 x double> %va, <16 x double> %vb, <16
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -573,8 +572,7 @@ define <16 x double> @vfmax_vv_v16f64(<16 x double> %va, <16 x double> %vb, <16
; CHECK-NEXT: vmv1r.v v0, v7
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vfmax.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -604,7 +602,7 @@ define <32 x double> @vfmax_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -695,7 +693,7 @@ define <32 x double> @vfmax_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -709,7 +707,7 @@ define <32 x double> @vfmax_vv_v32f64_unmasked(<32 x double> %va, <32 x double>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 24
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -777,7 +775,7 @@ define <32 x double> @vfmax_vv_v32f64_unmasked(<32 x double> %va, <32 x double>
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 24
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
index 03e0ac42c442c4..ce0050a3117f27 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fminimum-vp.ll
@@ -555,8 +555,7 @@ define <16 x double> @vfmin_vv_v16f64(<16 x double> %va, <16 x double> %vb, <16
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -573,8 +572,7 @@ define <16 x double> @vfmin_vv_v16f64(<16 x double> %va, <16 x double> %vb, <16
; CHECK-NEXT: vmv1r.v v0, v7
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vfmin.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -604,7 +602,7 @@ define <32 x double> @vfmin_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -695,7 +693,7 @@ define <32 x double> @vfmin_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -709,7 +707,7 @@ define <32 x double> @vfmin_vv_v32f64_unmasked(<32 x double> %va, <32 x double>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 24
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -777,7 +775,7 @@ define <32 x double> @vfmin_vv_v32f64_unmasked(<32 x double> %va, <32 x double>
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 24
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-buildvec-bf16.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-buildvec-bf16.ll
index bdedc5f33c3a19..3f49e486eb9546 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-buildvec-bf16.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-buildvec-bf16.ll
@@ -25,7 +25,7 @@ define <4 x bfloat> @splat_idx_v4bf16(<4 x bfloat> %v, i64 %idx) {
; RV32-ZFBFMIN-NEXT: .cfi_def_cfa_offset 48
; RV32-ZFBFMIN-NEXT: sw ra, 44(sp) # 4-byte Folded Spill
; RV32-ZFBFMIN-NEXT: .cfi_offset ra, -4
-; RV32-ZFBFMIN-NEXT: csrr a1, vlenb
+; RV32-ZFBFMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-ZFBFMIN-NEXT: sub sp, sp, a1
; RV32-ZFBFMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 1 * vlenb
; RV32-ZFBFMIN-NEXT: addi a1, sp, 32
@@ -41,7 +41,7 @@ define <4 x bfloat> @splat_idx_v4bf16(<4 x bfloat> %v, i64 %idx) {
; RV32-ZFBFMIN-NEXT: vse16.v v8, (a1)
; RV32-ZFBFMIN-NEXT: lh a0, 0(a0)
; RV32-ZFBFMIN-NEXT: vmv.v.x v8, a0
-; RV32-ZFBFMIN-NEXT: csrr a0, vlenb
+; RV32-ZFBFMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-ZFBFMIN-NEXT: add sp, sp, a0
; RV32-ZFBFMIN-NEXT: lw ra, 44(sp) # 4-byte Folded Reload
; RV32-ZFBFMIN-NEXT: addi sp, sp, 48
@@ -53,7 +53,7 @@ define <4 x bfloat> @splat_idx_v4bf16(<4 x bfloat> %v, i64 %idx) {
; RV64-ZFBFMIN-NEXT: .cfi_def_cfa_offset 48
; RV64-ZFBFMIN-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; RV64-ZFBFMIN-NEXT: .cfi_offset ra, -8
-; RV64-ZFBFMIN-NEXT: csrr a1, vlenb
+; RV64-ZFBFMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-ZFBFMIN-NEXT: sub sp, sp, a1
; RV64-ZFBFMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 1 * vlenb
; RV64-ZFBFMIN-NEXT: addi a1, sp, 32
@@ -69,7 +69,7 @@ define <4 x bfloat> @splat_idx_v4bf16(<4 x bfloat> %v, i64 %idx) {
; RV64-ZFBFMIN-NEXT: vse16.v v8, (a1)
; RV64-ZFBFMIN-NEXT: lh a0, 0(a0)
; RV64-ZFBFMIN-NEXT: vmv.v.x v8, a0
-; RV64-ZFBFMIN-NEXT: csrr a0, vlenb
+; RV64-ZFBFMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-ZFBFMIN-NEXT: add sp, sp, a0
; RV64-ZFBFMIN-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; RV64-ZFBFMIN-NEXT: addi sp, sp, 48
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-buildvec.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-buildvec.ll
index 96b9b2bac2993c..7c6ac2275c4b37 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-buildvec.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-buildvec.ll
@@ -205,7 +205,7 @@ define <4 x half> @splat_idx_v4f16(<4 x half> %v, i64 %idx) {
; RV32-ZFHMIN-NEXT: .cfi_def_cfa_offset 48
; RV32-ZFHMIN-NEXT: sw ra, 44(sp) # 4-byte Folded Spill
; RV32-ZFHMIN-NEXT: .cfi_offset ra, -4
-; RV32-ZFHMIN-NEXT: csrr a1, vlenb
+; RV32-ZFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-ZFHMIN-NEXT: sub sp, sp, a1
; RV32-ZFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 1 * vlenb
; RV32-ZFHMIN-NEXT: addi a1, sp, 32
@@ -221,7 +221,7 @@ define <4 x half> @splat_idx_v4f16(<4 x half> %v, i64 %idx) {
; RV32-ZFHMIN-NEXT: vse16.v v8, (a1)
; RV32-ZFHMIN-NEXT: lh a0, 0(a0)
; RV32-ZFHMIN-NEXT: vmv.v.x v8, a0
-; RV32-ZFHMIN-NEXT: csrr a0, vlenb
+; RV32-ZFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-ZFHMIN-NEXT: add sp, sp, a0
; RV32-ZFHMIN-NEXT: lw ra, 44(sp) # 4-byte Folded Reload
; RV32-ZFHMIN-NEXT: addi sp, sp, 48
@@ -233,7 +233,7 @@ define <4 x half> @splat_idx_v4f16(<4 x half> %v, i64 %idx) {
; RV64-ZFHMIN-NEXT: .cfi_def_cfa_offset 48
; RV64-ZFHMIN-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
; RV64-ZFHMIN-NEXT: .cfi_offset ra, -8
-; RV64-ZFHMIN-NEXT: csrr a1, vlenb
+; RV64-ZFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-ZFHMIN-NEXT: sub sp, sp, a1
; RV64-ZFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x30, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 48 + 1 * vlenb
; RV64-ZFHMIN-NEXT: addi a1, sp, 32
@@ -249,7 +249,7 @@ define <4 x half> @splat_idx_v4f16(<4 x half> %v, i64 %idx) {
; RV64-ZFHMIN-NEXT: vse16.v v8, (a1)
; RV64-ZFHMIN-NEXT: lh a0, 0(a0)
; RV64-ZFHMIN-NEXT: vmv.v.x v8, a0
-; RV64-ZFHMIN-NEXT: csrr a0, vlenb
+; RV64-ZFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-ZFHMIN-NEXT: add sp, sp, a0
; RV64-ZFHMIN-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; RV64-ZFHMIN-NEXT: addi sp, sp, 48
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
index f3b124aa34dcb5..5d0a6d75b4e1a1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
@@ -240,8 +240,7 @@ define <64 x float> @interleave_v32f32(<32 x float> %x, <32 x float> %y) {
; V128: # %bb.0:
; V128-NEXT: addi sp, sp, -16
; V128-NEXT: .cfi_def_cfa_offset 16
-; V128-NEXT: csrr a0, vlenb
-; V128-NEXT: slli a0, a0, 3
+; V128-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; V128-NEXT: sub sp, sp, a0
; V128-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; V128-NEXT: vmv8r.v v0, v16
@@ -272,8 +271,7 @@ define <64 x float> @interleave_v32f32(<32 x float> %x, <32 x float> %y) {
; V128-NEXT: vwmaccu.vx v0, a0, v8
; V128-NEXT: vmv8r.v v8, v0
; V128-NEXT: vmv8r.v v16, v24
-; V128-NEXT: csrr a0, vlenb
-; V128-NEXT: slli a0, a0, 3
+; V128-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; V128-NEXT: add sp, sp, a0
; V128-NEXT: addi sp, sp, 16
; V128-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fshr-fshl-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fshr-fshl-vp.ll
index a2ff77625b7584..f6e52aa06477a9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fshr-fshl-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fshr-fshl-vp.ll
@@ -670,8 +670,7 @@ define <16 x i64> @fshr_v16i64(<16 x i64> %a, <16 x i64> %b, <16 x i64> %c, <16
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
@@ -689,8 +688,7 @@ define <16 x i64> @fshr_v16i64(<16 x i64> %a, <16 x i64> %b, <16 x i64> %c, <16
; CHECK-NEXT: vsll.vi v24, v24, 1, v0.t
; CHECK-NEXT: vsll.vv v8, v24, v8, v0.t
; CHECK-NEXT: vor.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -704,8 +702,7 @@ define <16 x i64> @fshl_v16i64(<16 x i64> %a, <16 x i64> %b, <16 x i64> %c, <16
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
@@ -724,8 +721,7 @@ define <16 x i64> @fshl_v16i64(<16 x i64> %a, <16 x i64> %b, <16 x i64> %c, <16
; CHECK-NEXT: vsrl.vi v24, v24, 1, v0.t
; CHECK-NEXT: vsrl.vv v16, v24, v16, v0.t
; CHECK-NEXT: vor.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll
index e81f686a283032..65810964b4bfcc 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-insert-subvector.ll
@@ -743,7 +743,7 @@ define void @insert_v2i64_nxv16i64_hi(ptr %psv, ptr %out) {
; RV32VLA-NEXT: .cfi_offset s0, -8
; RV32VLA-NEXT: addi s0, sp, 80
; RV32VLA-NEXT: .cfi_def_cfa s0, 0
-; RV32VLA-NEXT: csrr a2, vlenb
+; RV32VLA-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; RV32VLA-NEXT: slli a2, a2, 4
; RV32VLA-NEXT: sub sp, sp, a2
; RV32VLA-NEXT: andi sp, sp, -64
@@ -776,7 +776,7 @@ define void @insert_v2i64_nxv16i64_hi(ptr %psv, ptr %out) {
; RV64VLA-NEXT: .cfi_offset s0, -16
; RV64VLA-NEXT: addi s0, sp, 80
; RV64VLA-NEXT: .cfi_def_cfa s0, 0
-; RV64VLA-NEXT: csrr a2, vlenb
+; RV64VLA-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; RV64VLA-NEXT: slli a2, a2, 4
; RV64VLA-NEXT: sub sp, sp, a2
; RV64VLA-NEXT: andi sp, sp, -64
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
index 2ea90203b21030..67614083a3985f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
@@ -405,8 +405,7 @@ define <64 x i32> @interleave_v32i32(<32 x i32> %x, <32 x i32> %y) {
; V128: # %bb.0:
; V128-NEXT: addi sp, sp, -16
; V128-NEXT: .cfi_def_cfa_offset 16
-; V128-NEXT: csrr a0, vlenb
-; V128-NEXT: slli a0, a0, 3
+; V128-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; V128-NEXT: sub sp, sp, a0
; V128-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; V128-NEXT: vmv8r.v v0, v16
@@ -437,8 +436,7 @@ define <64 x i32> @interleave_v32i32(<32 x i32> %x, <32 x i32> %y) {
; V128-NEXT: vwmaccu.vx v0, a0, v8
; V128-NEXT: vmv8r.v v8, v0
; V128-NEXT: vmv8r.v v16, v24
-; V128-NEXT: csrr a0, vlenb
-; V128-NEXT: slli a0, a0, 3
+; V128-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; V128-NEXT: add sp, sp, a0
; V128-NEXT: addi sp, sp, 16
; V128-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-interleaved-access.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-interleaved-access.ll
index 5911e8248f2995..1d56a64459f3df 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-interleaved-access.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-interleaved-access.ll
@@ -158,7 +158,7 @@ define {<8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>} @load_
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a2, vlenb
+; RV32-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; RV32-NEXT: li a3, 84
; RV32-NEXT: mul a2, a2, a3
; RV32-NEXT: sub sp, sp, a2
@@ -629,7 +629,7 @@ define {<8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>} @load_
; RV32-NEXT: addi a1, a1, 16
; RV32-NEXT: vl4r.v v8, (a1) # Unknown-size Folded Reload
; RV32-NEXT: vse32.v v8, (a0)
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: li a1, 84
; RV32-NEXT: mul a0, a0, a1
; RV32-NEXT: add sp, sp, a0
@@ -640,7 +640,7 @@ define {<8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>} @load_
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a2, vlenb
+; RV64-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; RV64-NEXT: slli a3, a2, 6
; RV64-NEXT: add a2, a3, a2
; RV64-NEXT: sub sp, sp, a2
@@ -1064,7 +1064,7 @@ define {<8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>} @load_
; RV64-NEXT: addi a1, a1, 16
; RV64-NEXT: vl4r.v v8, (a1) # Unknown-size Folded Reload
; RV64-NEXT: vse64.v v8, (a0)
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a0, 6
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-llrint.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-llrint.ll
index 7f1493544eabcf..6a00b768bb76e0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-llrint.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-llrint.ll
@@ -43,8 +43,7 @@ define <2 x i64> @llrint_v2i64_v2f32(<2 x float> %x) {
; RV32-NEXT: .cfi_def_cfa_offset 32
; RV32-NEXT: sw ra, 28(sp) # 4-byte Folded Spill
; RV32-NEXT: .cfi_offset ra, -4
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 1
+; RV32-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x20, 0x22, 0x11, 0x02, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 32 + 2 * vlenb
; RV32-NEXT: addi a0, sp, 16
@@ -72,8 +71,7 @@ define <2 x i64> @llrint_v2i64_v2f32(<2 x float> %x) {
; RV32-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; RV32-NEXT: vslide1down.vx v8, v8, a0
; RV32-NEXT: vslide1down.vx v8, v8, a1
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 1
+; RV32-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: lw ra, 28(sp) # 4-byte Folded Reload
; RV32-NEXT: addi sp, sp, 32
@@ -103,7 +101,7 @@ define <3 x i64> @llrint_v3i64_v3f32(<3 x float> %x) {
; RV32-NEXT: .cfi_def_cfa_offset 32
; RV32-NEXT: sw ra, 28(sp) # 4-byte Folded Spill
; RV32-NEXT: .cfi_offset ra, -4
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a0, 1
; RV32-NEXT: add a0, a1, a0
; RV32-NEXT: sub sp, sp, a0
@@ -167,7 +165,7 @@ define <3 x i64> @llrint_v3i64_v3f32(<3 x float> %x) {
; RV32-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV32-NEXT: vslide1down.vx v8, v8, a0
; RV32-NEXT: vslide1down.vx v8, v8, a1
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a0, 1
; RV32-NEXT: add a0, a1, a0
; RV32-NEXT: add sp, sp, a0
@@ -211,7 +209,7 @@ define <4 x i64> @llrint_v4i64_v4f32(<4 x float> %x) {
; RV32-NEXT: .cfi_def_cfa_offset 32
; RV32-NEXT: sw ra, 28(sp) # 4-byte Folded Spill
; RV32-NEXT: .cfi_offset ra, -4
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a0, 1
; RV32-NEXT: add a0, a1, a0
; RV32-NEXT: sub sp, sp, a0
@@ -275,7 +273,7 @@ define <4 x i64> @llrint_v4i64_v4f32(<4 x float> %x) {
; RV32-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV32-NEXT: vslide1down.vx v8, v8, a0
; RV32-NEXT: vslide1down.vx v8, v8, a1
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a0, 1
; RV32-NEXT: add a0, a1, a0
; RV32-NEXT: add sp, sp, a0
@@ -323,8 +321,7 @@ define <8 x i64> @llrint_v8i64_v8f32(<8 x float> %x) {
; RV32-NEXT: .cfi_offset s0, -8
; RV32-NEXT: addi s0, sp, 208
; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 1
+; RV32-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: andi sp, sp, -64
; RV32-NEXT: addi a0, sp, 192
@@ -467,8 +464,7 @@ define <16 x i64> @llrint_v16i64_v16f32(<16 x float> %x) {
; RV32-NEXT: .cfi_offset s0, -8
; RV32-NEXT: addi s0, sp, 400
; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 2
+; RV32-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: andi sp, sp, -128
; RV32-NEXT: addi a0, sp, 384
@@ -703,8 +699,7 @@ define <2 x i64> @llrint_v2i64_v2f64(<2 x double> %x) {
; RV32-NEXT: .cfi_def_cfa_offset 32
; RV32-NEXT: sw ra, 28(sp) # 4-byte Folded Spill
; RV32-NEXT: .cfi_offset ra, -4
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 1
+; RV32-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x20, 0x22, 0x11, 0x02, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 32 + 2 * vlenb
; RV32-NEXT: addi a0, sp, 16
@@ -732,8 +727,7 @@ define <2 x i64> @llrint_v2i64_v2f64(<2 x double> %x) {
; RV32-NEXT: vsetivli zero, 4, e32, m1, ta, ma
; RV32-NEXT: vslide1down.vx v8, v8, a0
; RV32-NEXT: vslide1down.vx v8, v8, a1
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 1
+; RV32-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: lw ra, 28(sp) # 4-byte Folded Reload
; RV32-NEXT: addi sp, sp, 32
@@ -762,8 +756,7 @@ define <4 x i64> @llrint_v4i64_v4f64(<4 x double> %x) {
; RV32-NEXT: .cfi_def_cfa_offset 32
; RV32-NEXT: sw ra, 28(sp) # 4-byte Folded Spill
; RV32-NEXT: .cfi_offset ra, -4
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 2
+; RV32-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x20, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 32 + 4 * vlenb
; RV32-NEXT: csrr a0, vlenb
@@ -825,8 +818,7 @@ define <4 x i64> @llrint_v4i64_v4f64(<4 x double> %x) {
; RV32-NEXT: vsetivli zero, 8, e32, m2, ta, ma
; RV32-NEXT: vslide1down.vx v8, v8, a0
; RV32-NEXT: vslide1down.vx v8, v8, a1
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 2
+; RV32-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: lw ra, 28(sp) # 4-byte Folded Reload
; RV32-NEXT: addi sp, sp, 32
@@ -868,8 +860,7 @@ define <8 x i64> @llrint_v8i64_v8f64(<8 x double> %x) {
; RV32-NEXT: .cfi_offset s0, -8
; RV32-NEXT: addi s0, sp, 272
; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: csrr a0, vlenb
-; RV32-NEXT: slli a0, a0, 2
+; RV32-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: andi sp, sp, -64
; RV32-NEXT: addi a0, sp, 256
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-store-fp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-store-fp.ll
index a1e81ea41c249b..ab54554cf29f23 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-store-fp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-store-fp.ll
@@ -364,7 +364,7 @@ define void @masked_store_v32f64(<32 x double>* %val_ptr, <32 x double>* %a, <32
; RV32-LABEL: masked_store_v32f64:
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
-; RV32-NEXT: csrr a3, vlenb
+; RV32-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; RV32-NEXT: slli a3, a3, 4
; RV32-NEXT: sub sp, sp, a3
; RV32-NEXT: vsetivli zero, 16, e64, m8, ta, ma
@@ -395,7 +395,7 @@ define void @masked_store_v32f64(<32 x double>* %val_ptr, <32 x double>* %a, <32
; RV32-NEXT: addi a1, sp, 16
; RV32-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; RV32-NEXT: vse64.v v8, (a0), v0.t
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
@@ -404,7 +404,7 @@ define void @masked_store_v32f64(<32 x double>* %val_ptr, <32 x double>* %a, <32
; RV64-LABEL: masked_store_v32f64:
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
-; RV64-NEXT: csrr a3, vlenb
+; RV64-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; RV64-NEXT: slli a3, a3, 4
; RV64-NEXT: sub sp, sp, a3
; RV64-NEXT: vsetivli zero, 16, e64, m8, ta, ma
@@ -435,7 +435,7 @@ define void @masked_store_v32f64(<32 x double>* %val_ptr, <32 x double>* %a, <32
; RV64-NEXT: addi a1, sp, 16
; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; RV64-NEXT: vse64.v v8, (a0), v0.t
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
@@ -471,7 +471,7 @@ define void @masked_store_v64f32(<64 x float>* %val_ptr, <64 x float>* %a, <64 x
; CHECK-LABEL: masked_store_v64f32:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a3, vlenb
+; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a3, a3, 4
; CHECK-NEXT: sub sp, sp, a3
; CHECK-NEXT: li a3, 32
@@ -503,7 +503,7 @@ define void @masked_store_v64f32(<64 x float>* %val_ptr, <64 x float>* %a, <64 x
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vse32.v v8, (a0), v0.t
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -520,7 +520,7 @@ define void @masked_store_v128f16(<128 x half>* %val_ptr, <128 x half>* %a, <128
; CHECK-LABEL: masked_store_v128f16:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a3, vlenb
+; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a3, a3, 4
; CHECK-NEXT: sub sp, sp, a3
; CHECK-NEXT: li a3, 64
@@ -552,7 +552,7 @@ define void @masked_store_v128f16(<128 x half>* %val_ptr, <128 x half>* %a, <128
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vse16.v v8, (a0), v0.t
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-store-int.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-store-int.ll
index 90690bbc8e2085..dd87123ee5cc3c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-store-int.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-store-int.ll
@@ -400,7 +400,7 @@ define void @masked_store_v32i64(ptr %val_ptr, ptr %a, ptr %m_ptr) nounwind {
; CHECK-LABEL: masked_store_v32i64:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a3, vlenb
+; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a3, a3, 4
; CHECK-NEXT: sub sp, sp, a3
; CHECK-NEXT: vsetivli zero, 16, e64, m8, ta, ma
@@ -430,7 +430,7 @@ define void @masked_store_v32i64(ptr %val_ptr, ptr %a, ptr %m_ptr) nounwind {
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vse64.v v8, (a0), v0.t
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -483,7 +483,7 @@ define void @masked_store_v64i32(ptr %val_ptr, ptr %a, ptr %m_ptr) nounwind {
; CHECK-LABEL: masked_store_v64i32:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a3, vlenb
+; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a3, a3, 4
; CHECK-NEXT: sub sp, sp, a3
; CHECK-NEXT: li a3, 32
@@ -514,7 +514,7 @@ define void @masked_store_v64i32(ptr %val_ptr, ptr %a, ptr %m_ptr) nounwind {
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vse32.v v8, (a0), v0.t
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -549,7 +549,7 @@ define void @masked_store_v128i16(ptr %val_ptr, ptr %a, ptr %m_ptr) nounwind {
; CHECK-LABEL: masked_store_v128i16:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a3, vlenb
+; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a3, a3, 4
; CHECK-NEXT: sub sp, sp, a3
; CHECK-NEXT: li a3, 64
@@ -580,7 +580,7 @@ define void @masked_store_v128i16(ptr %val_ptr, ptr %a, ptr %m_ptr) nounwind {
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vse16.v v8, (a0), v0.t
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -597,7 +597,7 @@ define void @masked_store_v256i8(ptr %val_ptr, ptr %a, ptr %m_ptr) nounwind {
; CHECK-LABEL: masked_store_v256i8:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a3, vlenb
+; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a3, a3, 4
; CHECK-NEXT: sub sp, sp, a3
; CHECK-NEXT: li a3, 128
@@ -628,7 +628,7 @@ define void @masked_store_v256i8(ptr %val_ptr, ptr %a, ptr %m_ptr) nounwind {
; CHECK-NEXT: addi a1, sp, 16
; CHECK-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vse8.v v8, (a0), v0.t
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-fp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-fp.ll
index 4be680e272e5b9..31a8cd73c583f2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-fp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-fp.ll
@@ -1950,8 +1950,7 @@ define float @vreduce_fminimum_v64f32(ptr %x) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a1, a0, 128
@@ -1979,8 +1978,7 @@ define float @vreduce_fminimum_v64f32(ptr %x) {
; CHECK-NEXT: vfredmin.vs v8, v8, v8
; CHECK-NEXT: vfmv.f.s fa0, v8
; CHECK-NEXT: .LBB119_3:
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -2013,7 +2011,7 @@ define float @vreduce_fminimum_v128f32(ptr %x) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: mv a2, a1
; CHECK-NEXT: slli a1, a1, 1
@@ -2101,7 +2099,7 @@ define float @vreduce_fminimum_v128f32(ptr %x) {
; CHECK-NEXT: vfredmin.vs v8, v8, v8
; CHECK-NEXT: vfmv.f.s fa0, v8
; CHECK-NEXT: .LBB121_3:
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 3
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: slli a0, a0, 1
@@ -2288,8 +2286,7 @@ define double @vreduce_fminimum_v32f64(ptr %x) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a1, a0, 128
@@ -2316,8 +2313,7 @@ define double @vreduce_fminimum_v32f64(ptr %x) {
; CHECK-NEXT: vfredmin.vs v8, v8, v8
; CHECK-NEXT: vfmv.f.s fa0, v8
; CHECK-NEXT: .LBB131_3:
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -2349,7 +2345,7 @@ define double @vreduce_fminimum_v64f64(ptr %x) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: mv a2, a1
; CHECK-NEXT: slli a1, a1, 1
@@ -2436,7 +2432,7 @@ define double @vreduce_fminimum_v64f64(ptr %x) {
; CHECK-NEXT: vfredmin.vs v8, v8, v8
; CHECK-NEXT: vfmv.f.s fa0, v8
; CHECK-NEXT: .LBB133_3:
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 3
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: slli a0, a0, 1
@@ -2703,8 +2699,7 @@ define float @vreduce_fmaximum_v64f32(ptr %x) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a1, a0, 128
@@ -2732,8 +2727,7 @@ define float @vreduce_fmaximum_v64f32(ptr %x) {
; CHECK-NEXT: vfredmax.vs v8, v8, v8
; CHECK-NEXT: vfmv.f.s fa0, v8
; CHECK-NEXT: .LBB147_3:
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -2766,7 +2760,7 @@ define float @vreduce_fmaximum_v128f32(ptr %x) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: mv a2, a1
; CHECK-NEXT: slli a1, a1, 1
@@ -2854,7 +2848,7 @@ define float @vreduce_fmaximum_v128f32(ptr %x) {
; CHECK-NEXT: vfredmax.vs v8, v8, v8
; CHECK-NEXT: vfmv.f.s fa0, v8
; CHECK-NEXT: .LBB149_3:
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 3
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: slli a0, a0, 1
@@ -3041,8 +3035,7 @@ define double @vreduce_fmaximum_v32f64(ptr %x) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a1, a0, 128
@@ -3069,8 +3062,7 @@ define double @vreduce_fmaximum_v32f64(ptr %x) {
; CHECK-NEXT: vfredmax.vs v8, v8, v8
; CHECK-NEXT: vfmv.f.s fa0, v8
; CHECK-NEXT: .LBB159_3:
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -3102,7 +3094,7 @@ define double @vreduce_fmaximum_v64f64(ptr %x) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: mv a2, a1
; CHECK-NEXT: slli a1, a1, 1
@@ -3189,7 +3181,7 @@ define double @vreduce_fmaximum_v64f64(ptr %x) {
; CHECK-NEXT: vfredmax.vs v8, v8, v8
; CHECK-NEXT: vfmv.f.s fa0, v8
; CHECK-NEXT: .LBB161_3:
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 3
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: slli a0, a0, 1
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int.ll
index 56944e2aa5074d..ce907fb4123134 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int.ll
@@ -1541,7 +1541,7 @@ define i64 @vwreduce_add_v64i64(ptr %x) {
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1578,7 +1578,7 @@ define i64 @vwreduce_add_v64i64(ptr %x) {
; RV32-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; RV32-NEXT: vsrl.vx v8, v8, a2
; RV32-NEXT: vmv.x.s a1, v8
-; RV32-NEXT: csrr a2, vlenb
+; RV32-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; RV32-NEXT: slli a2, a2, 4
; RV32-NEXT: add sp, sp, a2
; RV32-NEXT: addi sp, sp, 16
@@ -1588,7 +1588,7 @@ define i64 @vwreduce_add_v64i64(ptr %x) {
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1622,7 +1622,7 @@ define i64 @vwreduce_add_v64i64(ptr %x) {
; RV64-NEXT: vmv.s.x v16, zero
; RV64-NEXT: vredsum.vs v8, v8, v16
; RV64-NEXT: vmv.x.s a0, v8
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: add sp, sp, a1
; RV64-NEXT: addi sp, sp, 16
@@ -1638,7 +1638,7 @@ define i64 @vwreduce_uadd_v64i64(ptr %x) {
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: slli a1, a1, 4
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1675,7 +1675,7 @@ define i64 @vwreduce_uadd_v64i64(ptr %x) {
; RV32-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; RV32-NEXT: vsrl.vx v8, v8, a2
; RV32-NEXT: vmv.x.s a1, v8
-; RV32-NEXT: csrr a2, vlenb
+; RV32-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; RV32-NEXT: slli a2, a2, 4
; RV32-NEXT: add sp, sp, a2
; RV32-NEXT: addi sp, sp, 16
@@ -1685,7 +1685,7 @@ define i64 @vwreduce_uadd_v64i64(ptr %x) {
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1719,7 +1719,7 @@ define i64 @vwreduce_uadd_v64i64(ptr %x) {
; RV64-NEXT: vmv.s.x v16, zero
; RV64-NEXT: vredsum.vs v8, v8, v16
; RV64-NEXT: vmv.x.s a0, v8
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a1, 4
; RV64-NEXT: add sp, sp, a1
; RV64-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
index 1f856d04ca89fd..a4ddb0a3fc7288 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-rint-vp.ll
@@ -528,8 +528,7 @@ define <32 x double> @vp_rint_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroex
; CHECK-NEXT: .LBB26_2:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI26_0)
@@ -563,8 +562,7 @@ define <32 x double> @vp_rint_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroex
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
index 0f587232680df6..ce444e4d86511f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-round-vp.ll
@@ -748,8 +748,7 @@ define <32 x double> @vp_round_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroe
; CHECK-NEXT: .LBB26_2:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI26_0)
@@ -788,8 +787,7 @@ define <32 x double> @vp_round_v32f64(<32 x double> %va, <32 x i1> %m, i32 zeroe
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
index 0fb7e6a7de5696..a049b359faddfa 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundeven-vp.ll
@@ -748,8 +748,7 @@ define <32 x double> @vp_roundeven_v32f64(<32 x double> %va, <32 x i1> %m, i32 z
; CHECK-NEXT: .LBB26_2:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI26_0)
@@ -788,8 +787,7 @@ define <32 x double> @vp_roundeven_v32f64(<32 x double> %va, <32 x i1> %m, i32 z
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
index 927f96b6442274..346fdb4e0a722a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-roundtozero-vp.ll
@@ -748,8 +748,7 @@ define <32 x double> @vp_roundtozero_v32f64(<32 x double> %va, <32 x i1> %m, i32
; CHECK-NEXT: .LBB26_2:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: lui a2, %hi(.LCPI26_0)
@@ -788,8 +787,7 @@ define <32 x double> @vp_roundtozero_v32f64(<32 x double> %va, <32 x i1> %m, i32
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-fp-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-fp-vp.ll
index 8e2a225622eec2..e6ac246fff9a35 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-fp-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-fp-vp.ll
@@ -1062,7 +1062,7 @@ define <128 x i1> @fcmp_oeq_vv_v128f16(<128 x half> %va, <128 x half> %vb, <128
; ZVFH: # %bb.0:
; ZVFH-NEXT: addi sp, sp, -16
; ZVFH-NEXT: .cfi_def_cfa_offset 16
-; ZVFH-NEXT: csrr a1, vlenb
+; ZVFH-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFH-NEXT: slli a1, a1, 4
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1104,7 +1104,7 @@ define <128 x i1> @fcmp_oeq_vv_v128f16(<128 x half> %va, <128 x half> %vb, <128
; ZVFH-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; ZVFH-NEXT: vslideup.vi v7, v8, 8
; ZVFH-NEXT: vmv.v.v v0, v7
-; ZVFH-NEXT: csrr a0, vlenb
+; ZVFH-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFH-NEXT: slli a0, a0, 4
; ZVFH-NEXT: add sp, sp, a0
; ZVFH-NEXT: addi sp, sp, 16
@@ -2769,7 +2769,7 @@ define <32 x i1> @fcmp_oeq_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32 x
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -2811,7 +2811,7 @@ define <32 x i1> @fcmp_oeq_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32 x
; CHECK-NEXT: vsetivli zero, 4, e8, mf4, ta, ma
; CHECK-NEXT: vslideup.vi v7, v8, 2
; CHECK-NEXT: vmv1r.v v0, v7
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll
index 0cecec31e2bda3..87c07f6bb1c404 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-setcc-int-vp.ll
@@ -594,7 +594,7 @@ define <256 x i1> @icmp_eq_vv_v256i8(<256 x i8> %va, <256 x i8> %vb, <256 x i1>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -634,7 +634,7 @@ define <256 x i1> @icmp_eq_vv_v256i8(<256 x i8> %va, <256 x i8> %vb, <256 x i1>
; CHECK-NEXT: vmseq.vv v16, v8, v24, v0.t
; CHECK-NEXT: vmv1r.v v0, v16
; CHECK-NEXT: vmv1r.v v8, v6
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -1247,7 +1247,7 @@ define <64 x i1> @icmp_eq_vv_v64i32(<64 x i32> %va, <64 x i32> %vb, <64 x i1> %m
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1289,7 +1289,7 @@ define <64 x i1> @icmp_eq_vv_v64i32(<64 x i32> %va, <64 x i32> %vb, <64 x i1> %m
; CHECK-NEXT: vsetivli zero, 8, e8, mf2, ta, ma
; CHECK-NEXT: vslideup.vi v7, v8, 4
; CHECK-NEXT: vmv1r.v v0, v7
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-vp.ll
index 7513d31b54bd1a..1bf96ccf89e1fe 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-vp.ll
@@ -226,7 +226,7 @@ define <128 x i32> @vtrunc_v128i32_v128i64(<128 x i64> %a, <128 x i1> %m, i32 ze
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 6
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0xc0, 0x00, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 64 * vlenb
@@ -504,7 +504,7 @@ define <128 x i32> @vtrunc_v128i32_v128i64(<128 x i64> %a, <128 x i1> %m, i32 ze
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vse32.v v8, (a0)
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 6
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vcopysign-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vcopysign-vp.ll
index 77a095303675f5..2e4efb58d36981 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vcopysign-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vcopysign-vp.ll
@@ -297,7 +297,7 @@ define <32 x double> @vfsgnj_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -336,7 +336,7 @@ define <32 x double> @vfsgnj_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vfsgnj.vv v16, v16, v24, v0.t
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfma-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfma-vp.ll
index 6dcebc9763d82b..e1119131cb2d9c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfma-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfma-vp.ll
@@ -818,7 +818,7 @@ define <32 x double> @vfma_vv_v32f64(<32 x double> %va, <32 x double> %b, <32 x
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -886,7 +886,7 @@ define <32 x double> @vfma_vv_v32f64(<32 x double> %va, <32 x double> %b, <32 x
; CHECK-NEXT: vfmadd.vv v16, v24, v8, v0.t
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -900,7 +900,7 @@ define <32 x double> @vfma_vv_v32f64_unmasked(<32 x double> %va, <32 x double> %
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 24
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -952,7 +952,7 @@ define <32 x double> @vfma_vv_v32f64_unmasked(<32 x double> %va, <32 x double> %
; CHECK-NEXT: vfmadd.vv v24, v16, v8
; CHECK-NEXT: vmv8r.v v8, v0
; CHECK-NEXT: vmv.v.v v16, v24
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 24
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmax-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmax-vp.ll
index c83a298cb501ee..2489cef2db6d65 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmax-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmax-vp.ll
@@ -389,7 +389,7 @@ define <32 x double> @vfmax_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -428,7 +428,7 @@ define <32 x double> @vfmax_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vfmax.vv v16, v16, v24, v0.t
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmin-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmin-vp.ll
index 60dbededb90a5c..39fc19dc5effbe 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmin-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmin-vp.ll
@@ -389,7 +389,7 @@ define <32 x double> @vfmin_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -428,7 +428,7 @@ define <32 x double> @vfmin_vv_v32f64(<32 x double> %va, <32 x double> %vb, <32
; CHECK-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vfmin.vv v16, v16, v24, v0.t
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmuladd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmuladd-vp.ll
index 6c695b43d2718e..d306f2d01af7b2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmuladd-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfmuladd-vp.ll
@@ -606,7 +606,7 @@ define <32 x double> @vfma_vv_v32f64(<32 x double> %va, <32 x double> %b, <32 x
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -674,7 +674,7 @@ define <32 x double> @vfma_vv_v32f64(<32 x double> %va, <32 x double> %b, <32 x
; CHECK-NEXT: vfmadd.vv v16, v24, v8, v0.t
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -688,7 +688,7 @@ define <32 x double> @vfma_vv_v32f64_unmasked(<32 x double> %va, <32 x double> %
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 24
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -740,7 +740,7 @@ define <32 x double> @vfma_vv_v32f64_unmasked(<32 x double> %va, <32 x double> %
; CHECK-NEXT: vfmadd.vv v24, v16, v8
; CHECK-NEXT: vmv8r.v v8, v0
; CHECK-NEXT: vmv.v.v v16, v24
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 24
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwadd.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwadd.ll
index afea1dc6d3c2a3..09a437e2b2f70d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwadd.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwadd.ll
@@ -90,7 +90,7 @@ define <64 x float> @vfwadd_v64f16(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -119,7 +119,7 @@ define <64 x float> @vfwadd_v64f16(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -201,7 +201,7 @@ define <32 x double> @vfwadd_v32f32(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -229,7 +229,7 @@ define <32 x double> @vfwadd_v32f32(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwmul.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwmul.ll
index 319994d2655651..9558d40ad34d29 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwmul.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwmul.ll
@@ -90,7 +90,7 @@ define <64 x float> @vfwmul_v64f16(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -119,7 +119,7 @@ define <64 x float> @vfwmul_v64f16(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -201,7 +201,7 @@ define <32 x double> @vfwmul_v32f32(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -229,7 +229,7 @@ define <32 x double> @vfwmul_v32f32(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwsub.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwsub.ll
index 2c706cad9742ff..6da0c7a4cce1ba 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwsub.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vfwsub.ll
@@ -90,7 +90,7 @@ define <64 x float> @vfwsub_v64f16(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -119,7 +119,7 @@ define <64 x float> @vfwsub_v64f16(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -201,7 +201,7 @@ define <32 x double> @vfwsub_v32f32(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -229,7 +229,7 @@ define <32 x double> @vfwsub_v32f32(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpmerge.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpmerge.ll
index df1c84a9e05d81..e8eda44ff7e484 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpmerge.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpmerge.ll
@@ -1167,7 +1167,7 @@ define <32 x double> @vpmerge_vv_v32f64(<32 x double> %va, <32 x double> %vb, <3
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1206,7 +1206,7 @@ define <32 x double> @vpmerge_vv_v32f64(<32 x double> %va, <32 x double> %vb, <3
; CHECK-NEXT: vsetvli zero, a0, e64, m8, tu, ma
; CHECK-NEXT: vmerge.vvm v16, v16, v8, v0
; CHECK-NEXT: vmv8r.v v8, v24
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpscatter.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpscatter.ll
index c0550398761915..2bccd226626880 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpscatter.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vpscatter.ll
@@ -1697,8 +1697,7 @@ define void @vpscatter_v32f64(<32 x double> %val, <32 x ptr> %ptrs, <32 x i1> %m
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a1, a1, 3
+; RV64-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: addi a1, a0, 128
@@ -1725,8 +1724,7 @@ define void @vpscatter_v32f64(<32 x double> %val, <32 x ptr> %ptrs, <32 x i1> %m
; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v16, (zero), v8, v0.t
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -1765,8 +1763,7 @@ define void @vpscatter_baseidx_v32i32_v32f64(<32 x double> %val, ptr %base, <32
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a3, vlenb
-; RV64-NEXT: slli a3, a3, 3
+; RV64-NEXT: vsetvli a3, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a3
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: li a3, 32
@@ -1802,8 +1799,7 @@ define void @vpscatter_baseidx_v32i32_v32f64(<32 x double> %val, ptr %base, <32
; RV64-NEXT: vl8r.v v8, (a2) # Unknown-size Folded Reload
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -1843,7 +1839,7 @@ define void @vpscatter_baseidx_sext_v32i32_v32f64(<32 x double> %val, ptr %base,
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a3, vlenb
+; RV64-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; RV64-NEXT: slli a4, a3, 3
; RV64-NEXT: add a3, a4, a3
; RV64-NEXT: sub sp, sp, a3
@@ -1887,7 +1883,7 @@ define void @vpscatter_baseidx_sext_v32i32_v32f64(<32 x double> %val, ptr %base,
; RV64-NEXT: vl8r.v v8, (a2) # Unknown-size Folded Reload
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a0, 3
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: add sp, sp, a0
@@ -1930,7 +1926,7 @@ define void @vpscatter_baseidx_zext_v32i32_v32f64(<32 x double> %val, ptr %base,
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a3, vlenb
+; RV64-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; RV64-NEXT: slli a4, a3, 3
; RV64-NEXT: add a3, a4, a3
; RV64-NEXT: sub sp, sp, a3
@@ -1974,7 +1970,7 @@ define void @vpscatter_baseidx_zext_v32i32_v32f64(<32 x double> %val, ptr %base,
; RV64-NEXT: vl8r.v v8, (a2) # Unknown-size Folded Reload
; RV64-NEXT: vsetvli zero, a1, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a0, 3
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vscale-range.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vscale-range.ll
index 4f533f2055bf34..d00582c5d78696 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vscale-range.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vscale-range.ll
@@ -7,7 +7,7 @@ define <512 x i8> @vadd_v512i8_zvl128(<512 x i8> %a, <512 x i8> %b) #0 {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: li a4, 48
; CHECK-NEXT: mul a2, a2, a4
; CHECK-NEXT: sub sp, sp, a2
@@ -103,7 +103,7 @@ define <512 x i8> @vadd_v512i8_zvl128(<512 x i8> %a, <512 x i8> %b) #0 {
; CHECK-NEXT: vse8.v v8, (a1)
; CHECK-NEXT: addi a0, a0, 128
; CHECK-NEXT: vse8.v v24, (a0)
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 48
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll
index 0a2ed3eb1ffbf7..d6e7dbb15c60cb 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vselect-vp.ll
@@ -157,7 +157,7 @@ define <256 x i8> @select_v256i8(<256 x i1> %a, <256 x i8> %b, <256 x i8> %c, i3
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -198,7 +198,7 @@ define <256 x i8> @select_v256i8(<256 x i1> %a, <256 x i8> %b, <256 x i8> %c, i3
; CHECK-NEXT: vsetvli zero, a3, e8, m8, ta, ma
; CHECK-NEXT: vmerge.vvm v8, v16, v8, v0
; CHECK-NEXT: vmv8r.v v16, v24
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -212,7 +212,7 @@ define <256 x i8> @select_evl_v256i8(<256 x i1> %a, <256 x i8> %b, <256 x i8> %c
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 24
; CHECK-NEXT: mul a2, a2, a3
; CHECK-NEXT: sub sp, sp, a2
@@ -255,7 +255,7 @@ define <256 x i8> @select_evl_v256i8(<256 x i1> %a, <256 x i8> %b, <256 x i8> %c
; CHECK-NEXT: vsetvli zero, a2, e8, m8, ta, ma
; CHECK-NEXT: vmerge.vvm v8, v8, v16, v0
; CHECK-NEXT: vmv8r.v v16, v24
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 24
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
@@ -416,8 +416,7 @@ define <32 x i64> @select_v32i64(<32 x i1> %a, <32 x i64> %b, <32 x i64> %c, i32
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a1, a0, 128
@@ -444,8 +443,7 @@ define <32 x i64> @select_v32i64(<32 x i1> %a, <32 x i64> %b, <32 x i64> %c, i32
; CHECK-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vmerge.vvm v16, v24, v16, v0
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -458,7 +456,7 @@ define <32 x i64> @select_evl_v32i64(<32 x i1> %a, <32 x i64> %b, <32 x i64> %c)
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -487,7 +485,7 @@ define <32 x i64> @select_evl_v32i64(<32 x i1> %a, <32 x i64> %b, <32 x i64> %c)
; CHECK-NEXT: vl8r.v v24, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsetivli zero, 1, e64, m8, ta, ma
; CHECK-NEXT: vmerge.vvm v16, v24, v16, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -599,8 +597,7 @@ define <64 x float> @select_v64f32(<64 x i1> %a, <64 x float> %b, <64 x float> %
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: addi a1, a0, 128
@@ -627,8 +624,7 @@ define <64 x float> @select_v64f32(<64 x i1> %a, <64 x float> %b, <64 x float> %
; CHECK-NEXT: vl8r.v v24, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, a0, e32, m8, ta, ma
; CHECK-NEXT: vmerge.vvm v16, v24, v16, v0
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwadd.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwadd.ll
index 50184796b38f53..89a6360069c1a5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwadd.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwadd.ll
@@ -249,7 +249,7 @@ define <128 x i16> @vwadd_v128i16(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwadd_v128i16:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 128
@@ -277,7 +277,7 @@ define <128 x i16> @vwadd_v128i16(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -294,7 +294,7 @@ define <64 x i32> @vwadd_v64i32(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwadd_v64i32:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 64
@@ -322,7 +322,7 @@ define <64 x i32> @vwadd_v64i32(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -339,7 +339,7 @@ define <32 x i64> @vwadd_v32i64(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwadd_v32i64:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 32
@@ -366,7 +366,7 @@ define <32 x i64> @vwadd_v32i64(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwaddu.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwaddu.ll
index 98f246b8741dcc..7af26a8c0de087 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwaddu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwaddu.ll
@@ -249,7 +249,7 @@ define <128 x i16> @vwaddu_v128i16(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwaddu_v128i16:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 128
@@ -277,7 +277,7 @@ define <128 x i16> @vwaddu_v128i16(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -294,7 +294,7 @@ define <64 x i32> @vwaddu_v64i32(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwaddu_v64i32:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 64
@@ -322,7 +322,7 @@ define <64 x i32> @vwaddu_v64i32(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -339,7 +339,7 @@ define <32 x i64> @vwaddu_v32i64(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwaddu_v32i64:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 32
@@ -366,7 +366,7 @@ define <32 x i64> @vwaddu_v32i64(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmul.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmul.ll
index 01f2fe506e85f7..91a9630517ee54 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmul.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmul.ll
@@ -274,7 +274,7 @@ define <128 x i16> @vwmul_v128i16(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -303,7 +303,7 @@ define <128 x i16> @vwmul_v128i16(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -321,7 +321,7 @@ define <64 x i32> @vwmul_v64i32(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -350,7 +350,7 @@ define <64 x i32> @vwmul_v64i32(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -368,7 +368,7 @@ define <32 x i64> @vwmul_v32i64(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -396,7 +396,7 @@ define <32 x i64> @vwmul_v32i64(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulsu.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulsu.ll
index db2f544ab30669..3ee4059c066d1c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulsu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulsu.ll
@@ -266,7 +266,7 @@ define <128 x i16> @vwmulsu_v128i16(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -295,7 +295,7 @@ define <128 x i16> @vwmulsu_v128i16(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -313,7 +313,7 @@ define <64 x i32> @vwmulsu_v64i32(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -342,7 +342,7 @@ define <64 x i32> @vwmulsu_v64i32(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -360,7 +360,7 @@ define <32 x i64> @vwmulsu_v32i64(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -388,7 +388,7 @@ define <32 x i64> @vwmulsu_v32i64(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulu.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulu.ll
index 17a76ae5e7f75e..729a1e40bbda78 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwmulu.ll
@@ -250,7 +250,7 @@ define <128 x i16> @vwmulu_v128i16(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -279,7 +279,7 @@ define <128 x i16> @vwmulu_v128i16(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -297,7 +297,7 @@ define <64 x i32> @vwmulu_v64i32(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -326,7 +326,7 @@ define <64 x i32> @vwmulu_v64i32(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -344,7 +344,7 @@ define <32 x i64> @vwmulu_v32i64(ptr %x, ptr %y) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -372,7 +372,7 @@ define <32 x i64> @vwmulu_v32i64(ptr %x, ptr %y) {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsub.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsub.ll
index 7a925165d98163..98d0a54e7d3e57 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsub.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsub.ll
@@ -249,7 +249,7 @@ define <128 x i16> @vwsub_v128i16(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwsub_v128i16:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 128
@@ -277,7 +277,7 @@ define <128 x i16> @vwsub_v128i16(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -294,7 +294,7 @@ define <64 x i32> @vwsub_v64i32(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwsub_v64i32:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 64
@@ -322,7 +322,7 @@ define <64 x i32> @vwsub_v64i32(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -339,7 +339,7 @@ define <32 x i64> @vwsub_v32i64(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwsub_v32i64:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 32
@@ -366,7 +366,7 @@ define <32 x i64> @vwsub_v32i64(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsubu.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsubu.ll
index 4c08a8c15a388e..2b94d951f1dfe0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsubu.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-vwsubu.ll
@@ -249,7 +249,7 @@ define <128 x i16> @vwsubu_v128i16(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwsubu_v128i16:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 128
@@ -277,7 +277,7 @@ define <128 x i16> @vwsubu_v128i16(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -294,7 +294,7 @@ define <64 x i32> @vwsubu_v64i32(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwsubu_v64i32:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 64
@@ -322,7 +322,7 @@ define <64 x i32> @vwsubu_v64i32(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -339,7 +339,7 @@ define <32 x i64> @vwsubu_v32i64(ptr %x, ptr %y) nounwind {
; CHECK-LABEL: vwsubu_v32i64:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: li a2, 32
@@ -366,7 +366,7 @@ define <32 x i64> @vwsubu_v32i64(ptr %x, ptr %y) nounwind {
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll b/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
index 03d1fb6c8d297f..3fe1151856406e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/floor-vp.ll
@@ -275,8 +275,7 @@ define <vscale x 32 x bfloat> @vp_floor_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -340,8 +339,7 @@ define <vscale x 32 x bfloat> @vp_floor_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v24
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -354,8 +352,7 @@ define <vscale x 32 x bfloat> @vp_floor_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -408,8 +405,7 @@ define <vscale x 32 x bfloat> @vp_floor_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -862,8 +858,7 @@ define <vscale x 32 x half> @vp_floor_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -927,8 +922,7 @@ define <vscale x 32 x half> @vp_floor_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -956,8 +950,7 @@ define <vscale x 32 x half> @vp_floor_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -1010,8 +1003,7 @@ define <vscale x 32 x half> @vp_floor_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1461,8 +1453,7 @@ define <vscale x 16 x double> @vp_floor_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -1510,8 +1501,7 @@ define <vscale x 16 x double> @vp_floor_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v8, v24, v8, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fmaximum-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/fmaximum-sdnode.ll
index d8c3ab27cfad12..f063d1a4fa3153 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fmaximum-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fmaximum-sdnode.ll
@@ -111,8 +111,7 @@ define <vscale x 16 x bfloat> @vfmax_nxv16bf16_vv(<vscale x 16 x bfloat> %a, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
@@ -130,8 +129,7 @@ define <vscale x 16 x bfloat> @vfmax_nxv16bf16_vv(<vscale x 16 x bfloat> %a, <vs
; CHECK-NEXT: vfmax.vv v16, v8, v16
; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -145,7 +143,7 @@ define <vscale x 32 x bfloat> @vfmax_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; CHECK-LABEL: vfmax_nxv32bf16_vv:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: addi a0, sp, 16
@@ -186,7 +184,7 @@ define <vscale x 32 x bfloat> @vfmax_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v24
; CHECK-NEXT: vfncvtbf16.f.f.w v12, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -345,8 +343,7 @@ define <vscale x 16 x half> @vfmax_nxv16f16_vv(<vscale x 16 x half> %a, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
@@ -364,8 +361,7 @@ define <vscale x 16 x half> @vfmax_nxv16f16_vv(<vscale x 16 x half> %a, <vscale
; ZVFHMIN-NEXT: vfmax.vv v16, v8, v16
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -390,7 +386,7 @@ define <vscale x 32 x half> @vfmax_nxv32f16_vv(<vscale x 32 x half> %a, <vscale
; ZVFHMIN-LABEL: vfmax_nxv32f16_vv:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 4
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: addi a0, sp, 16
@@ -431,7 +427,7 @@ define <vscale x 32 x half> @vfmax_nxv32f16_vv(<vscale x 32 x half> %a, <vscale
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 4
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
index dd01e1c1ee66d0..30983e93c77a00 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fmaximum-vp.ll
@@ -217,8 +217,7 @@ define <vscale x 16 x bfloat> @vfmax_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -242,8 +241,7 @@ define <vscale x 16 x bfloat> @vfmax_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <v
; CHECK-NEXT: vfmax.vv v16, v8, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -256,8 +254,7 @@ define <vscale x 16 x bfloat> @vfmax_vv_nxv16bf16_unmasked(<vscale x 16 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
@@ -277,8 +274,7 @@ define <vscale x 16 x bfloat> @vfmax_vv_nxv16bf16_unmasked(<vscale x 16 x bfloat
; CHECK-NEXT: vfmax.vv v16, v8, v16
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -293,7 +289,7 @@ define <vscale x 32 x bfloat> @vfmax_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a1, 5
; CHECK-NEXT: add a1, a2, a1
; CHECK-NEXT: sub sp, sp, a1
@@ -425,7 +421,7 @@ define <vscale x 32 x bfloat> @vfmax_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a0, 5
; CHECK-NEXT: add a0, a1, a0
; CHECK-NEXT: add sp, sp, a0
@@ -440,7 +436,7 @@ define <vscale x 32 x bfloat> @vfmax_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -528,7 +524,7 @@ define <vscale x 32 x bfloat> @vfmax_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v0, v16
; CHECK-NEXT: vmv8r.v v8, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -852,8 +848,7 @@ define <vscale x 16 x half> @vfmax_vv_nxv16f16(<vscale x 16 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vmv1r.v v7, v0
@@ -877,8 +872,7 @@ define <vscale x 16 x half> @vfmax_vv_nxv16f16(<vscale x 16 x half> %va, <vscale
; ZVFHMIN-NEXT: vfmax.vv v16, v8, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -902,8 +896,7 @@ define <vscale x 16 x half> @vfmax_vv_nxv16f16_unmasked(<vscale x 16 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
@@ -923,8 +916,7 @@ define <vscale x 16 x half> @vfmax_vv_nxv16f16_unmasked(<vscale x 16 x half> %va
; ZVFHMIN-NEXT: vfmax.vv v16, v8, v16
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -939,8 +931,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFH: # %bb.0:
; ZVFH-NEXT: addi sp, sp, -16
; ZVFH-NEXT: .cfi_def_cfa_offset 16
-; ZVFH-NEXT: csrr a1, vlenb
-; ZVFH-NEXT: slli a1, a1, 3
+; ZVFH-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFH-NEXT: vmv1r.v v7, v0
@@ -957,8 +948,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFH-NEXT: vmv1r.v v0, v7
; ZVFH-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; ZVFH-NEXT: vfmax.vv v8, v8, v16, v0.t
-; ZVFH-NEXT: csrr a0, vlenb
-; ZVFH-NEXT: slli a0, a0, 3
+; ZVFH-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFH-NEXT: add sp, sp, a0
; ZVFH-NEXT: addi sp, sp, 16
; ZVFH-NEXT: ret
@@ -967,7 +957,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a1, 5
; ZVFHMIN-NEXT: add a1, a2, a1
; ZVFHMIN-NEXT: sub sp, sp, a1
@@ -1099,7 +1089,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a0, 5
; ZVFHMIN-NEXT: add a0, a1, a0
; ZVFHMIN-NEXT: add sp, sp, a0
@@ -1125,7 +1115,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -1213,7 +1203,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v0, v16
; ZVFHMIN-NEXT: vmv8r.v v8, v0
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -1475,8 +1465,7 @@ define <vscale x 8 x double> @vfmax_vv_nxv8f64(<vscale x 8 x double> %va, <vscal
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -1493,8 +1482,7 @@ define <vscale x 8 x double> @vfmax_vv_nxv8f64(<vscale x 8 x double> %va, <vscal
; CHECK-NEXT: vmv1r.v v0, v7
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vfmax.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -1524,7 +1512,7 @@ define <vscale x 16 x double> @vfmax_vv_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 35
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -1682,7 +1670,7 @@ define <vscale x 16 x double> @vfmax_vv_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 35
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
@@ -1697,7 +1685,7 @@ define <vscale x 16 x double> @vfmax_vv_nxv16f64_unmasked(<vscale x 16 x double>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 24
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -1753,7 +1741,7 @@ define <vscale x 16 x double> @vfmax_vv_nxv16f64_unmasked(<vscale x 16 x double>
; CHECK-NEXT: vfmax.vv v8, v8, v24
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 24
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fminimum-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/fminimum-sdnode.ll
index 2371840002f40b..e4889a0d264760 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fminimum-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fminimum-sdnode.ll
@@ -111,8 +111,7 @@ define <vscale x 16 x bfloat> @vfmin_nxv16bf16_vv(<vscale x 16 x bfloat> %a, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
@@ -130,8 +129,7 @@ define <vscale x 16 x bfloat> @vfmin_nxv16bf16_vv(<vscale x 16 x bfloat> %a, <vs
; CHECK-NEXT: vfmin.vv v16, v8, v16
; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -145,7 +143,7 @@ define <vscale x 32 x bfloat> @vfmin_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; CHECK-LABEL: vfmin_nxv32bf16_vv:
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: addi a0, sp, 16
@@ -186,7 +184,7 @@ define <vscale x 32 x bfloat> @vfmin_nxv32bf16_vv(<vscale x 32 x bfloat> %a, <vs
; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v24
; CHECK-NEXT: vfncvtbf16.f.f.w v12, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -345,8 +343,7 @@ define <vscale x 16 x half> @vfmin_nxv16f16_vv(<vscale x 16 x half> %a, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
@@ -364,8 +361,7 @@ define <vscale x 16 x half> @vfmin_nxv16f16_vv(<vscale x 16 x half> %a, <vscale
; ZVFHMIN-NEXT: vfmin.vv v16, v8, v16
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -390,7 +386,7 @@ define <vscale x 32 x half> @vfmin_nxv32f16_vv(<vscale x 32 x half> %a, <vscale
; ZVFHMIN-LABEL: vfmin_nxv32f16_vv:
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 4
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: addi a0, sp, 16
@@ -431,7 +427,7 @@ define <vscale x 32 x half> @vfmin_nxv32f16_vv(<vscale x 32 x half> %a, <vscale
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 4
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
index 85cac8d1870594..1aab10fc17a02e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fminimum-vp.ll
@@ -217,8 +217,7 @@ define <vscale x 16 x bfloat> @vfmin_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -242,8 +241,7 @@ define <vscale x 16 x bfloat> @vfmin_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <v
; CHECK-NEXT: vfmin.vv v16, v8, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -256,8 +254,7 @@ define <vscale x 16 x bfloat> @vfmin_vv_nxv16bf16_unmasked(<vscale x 16 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
@@ -277,8 +274,7 @@ define <vscale x 16 x bfloat> @vfmin_vv_nxv16bf16_unmasked(<vscale x 16 x bfloat
; CHECK-NEXT: vfmin.vv v16, v8, v16
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -293,7 +289,7 @@ define <vscale x 32 x bfloat> @vfmin_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a1, 5
; CHECK-NEXT: add a1, a2, a1
; CHECK-NEXT: sub sp, sp, a1
@@ -425,7 +421,7 @@ define <vscale x 32 x bfloat> @vfmin_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a0, 5
; CHECK-NEXT: add a0, a1, a0
; CHECK-NEXT: add sp, sp, a0
@@ -440,7 +436,7 @@ define <vscale x 32 x bfloat> @vfmin_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -528,7 +524,7 @@ define <vscale x 32 x bfloat> @vfmin_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v0, v16
; CHECK-NEXT: vmv8r.v v8, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -852,8 +848,7 @@ define <vscale x 16 x half> @vfmin_vv_nxv16f16(<vscale x 16 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vmv1r.v v7, v0
@@ -877,8 +872,7 @@ define <vscale x 16 x half> @vfmin_vv_nxv16f16(<vscale x 16 x half> %va, <vscale
; ZVFHMIN-NEXT: vfmin.vv v16, v8, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -902,8 +896,7 @@ define <vscale x 16 x half> @vfmin_vv_nxv16f16_unmasked(<vscale x 16 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
@@ -923,8 +916,7 @@ define <vscale x 16 x half> @vfmin_vv_nxv16f16_unmasked(<vscale x 16 x half> %va
; ZVFHMIN-NEXT: vfmin.vv v16, v8, v16
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -939,8 +931,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFH: # %bb.0:
; ZVFH-NEXT: addi sp, sp, -16
; ZVFH-NEXT: .cfi_def_cfa_offset 16
-; ZVFH-NEXT: csrr a1, vlenb
-; ZVFH-NEXT: slli a1, a1, 3
+; ZVFH-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFH-NEXT: vmv1r.v v7, v0
@@ -957,8 +948,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFH-NEXT: vmv1r.v v0, v7
; ZVFH-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; ZVFH-NEXT: vfmin.vv v8, v8, v16, v0.t
-; ZVFH-NEXT: csrr a0, vlenb
-; ZVFH-NEXT: slli a0, a0, 3
+; ZVFH-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFH-NEXT: add sp, sp, a0
; ZVFH-NEXT: addi sp, sp, 16
; ZVFH-NEXT: ret
@@ -967,7 +957,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a1, 5
; ZVFHMIN-NEXT: add a1, a2, a1
; ZVFHMIN-NEXT: sub sp, sp, a1
@@ -1099,7 +1089,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a0, 5
; ZVFHMIN-NEXT: add a0, a1, a0
; ZVFHMIN-NEXT: add sp, sp, a0
@@ -1125,7 +1115,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -1213,7 +1203,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v0, v16
; ZVFHMIN-NEXT: vmv8r.v v8, v0
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -1475,8 +1465,7 @@ define <vscale x 8 x double> @vfmin_vv_nxv8f64(<vscale x 8 x double> %va, <vscal
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -1493,8 +1482,7 @@ define <vscale x 8 x double> @vfmin_vv_nxv8f64(<vscale x 8 x double> %va, <vscal
; CHECK-NEXT: vmv1r.v v0, v7
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vfmin.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -1524,7 +1512,7 @@ define <vscale x 16 x double> @vfmin_vv_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 35
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -1682,7 +1670,7 @@ define <vscale x 16 x double> @vfmin_vv_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 35
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
@@ -1697,7 +1685,7 @@ define <vscale x 16 x double> @vfmin_vv_nxv16f64_unmasked(<vscale x 16 x double>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 24
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -1753,7 +1741,7 @@ define <vscale x 16 x double> @vfmin_vv_nxv16f64_unmasked(<vscale x 16 x double>
; CHECK-NEXT: vfmin.vv v8, v8, v24
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 24
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/fnearbyint-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/fnearbyint-sdnode.ll
index 9498c65ba9a176..6fd718d6a950e6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fnearbyint-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fnearbyint-sdnode.ll
@@ -132,8 +132,7 @@ define <vscale x 32 x bfloat> @nearbyint_nxv32bf16(<vscale x 32 x bfloat> %x) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
@@ -168,8 +167,7 @@ define <vscale x 32 x bfloat> @nearbyint_nxv32bf16(<vscale x 32 x bfloat> %x) {
; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v12, v24
; CHECK-NEXT: fsflags a0
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -392,8 +390,7 @@ define <vscale x 32 x half> @nearbyint_nxv32f16(<vscale x 32 x half> %x) {
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
@@ -428,8 +425,7 @@ define <vscale x 32 x half> @nearbyint_nxv32f16(<vscale x 32 x half> %x) {
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
; ZVFHMIN-NEXT: fsflags a0
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/fpclamptosat_vec.ll b/llvm/test/CodeGen/RISCV/rvv/fpclamptosat_vec.ll
index 5a1f7f54305846..408049d4af04c5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fpclamptosat_vec.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fpclamptosat_vec.ll
@@ -431,7 +431,7 @@ define <4 x i32> @stest_f16i32(<4 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
; CHECK-V-NEXT: .cfi_offset s2, -32
-; CHECK-V-NEXT: csrr a1, vlenb
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a2, a1, 1
; CHECK-V-NEXT: add a1, a2, a1
; CHECK-V-NEXT: sub sp, sp, a1
@@ -482,7 +482,7 @@ define <4 x i32> @stest_f16i32(<4 x half> %x) {
; CHECK-V-NEXT: vslideup.vi v10, v8, 2
; CHECK-V-NEXT: vsetvli zero, zero, e32, m1, ta, ma
; CHECK-V-NEXT: vnclip.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a1, a0, 1
; CHECK-V-NEXT: add a0, a1, a0
; CHECK-V-NEXT: add sp, sp, a0
@@ -594,7 +594,7 @@ define <4 x i32> @utesth_f16i32(<4 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
; CHECK-V-NEXT: .cfi_offset s2, -32
-; CHECK-V-NEXT: csrr a1, vlenb
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a2, a1, 1
; CHECK-V-NEXT: add a1, a2, a1
; CHECK-V-NEXT: sub sp, sp, a1
@@ -645,7 +645,7 @@ define <4 x i32> @utesth_f16i32(<4 x half> %x) {
; CHECK-V-NEXT: vslideup.vi v10, v8, 2
; CHECK-V-NEXT: vsetvli zero, zero, e32, m1, ta, ma
; CHECK-V-NEXT: vnclipu.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a1, a0, 1
; CHECK-V-NEXT: add a0, a1, a0
; CHECK-V-NEXT: add sp, sp, a0
@@ -767,7 +767,7 @@ define <4 x i32> @ustest_f16i32(<4 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
; CHECK-V-NEXT: .cfi_offset s2, -32
-; CHECK-V-NEXT: csrr a1, vlenb
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a2, a1, 1
; CHECK-V-NEXT: add a1, a2, a1
; CHECK-V-NEXT: sub sp, sp, a1
@@ -819,7 +819,7 @@ define <4 x i32> @ustest_f16i32(<4 x half> %x) {
; CHECK-V-NEXT: vmax.vx v10, v8, zero
; CHECK-V-NEXT: vsetvli zero, zero, e32, m1, ta, ma
; CHECK-V-NEXT: vnclipu.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a1, a0, 1
; CHECK-V-NEXT: add a0, a1, a0
; CHECK-V-NEXT: add sp, sp, a0
@@ -1373,8 +1373,7 @@ define <8 x i16> @stest_f16i16(<8 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s4, -48
; CHECK-V-NEXT: .cfi_offset s5, -56
; CHECK-V-NEXT: .cfi_offset s6, -64
-; CHECK-V-NEXT: csrr a1, vlenb
-; CHECK-V-NEXT: slli a1, a1, 2
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; CHECK-V-NEXT: sub sp, sp, a1
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xd0, 0x00, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 80 + 4 * vlenb
; CHECK-V-NEXT: lhu s0, 0(a0)
@@ -1491,8 +1490,7 @@ define <8 x i16> @stest_f16i16(<8 x half> %x) {
; CHECK-V-NEXT: vslideup.vi v10, v8, 4
; CHECK-V-NEXT: vsetvli zero, zero, e16, m1, ta, ma
; CHECK-V-NEXT: vnclip.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
-; CHECK-V-NEXT: slli a0, a0, 2
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 72(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 64(sp) # 8-byte Folded Reload
@@ -1686,8 +1684,7 @@ define <8 x i16> @utesth_f16i16(<8 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s4, -48
; CHECK-V-NEXT: .cfi_offset s5, -56
; CHECK-V-NEXT: .cfi_offset s6, -64
-; CHECK-V-NEXT: csrr a1, vlenb
-; CHECK-V-NEXT: slli a1, a1, 2
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; CHECK-V-NEXT: sub sp, sp, a1
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xd0, 0x00, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 80 + 4 * vlenb
; CHECK-V-NEXT: lhu s0, 0(a0)
@@ -1804,8 +1801,7 @@ define <8 x i16> @utesth_f16i16(<8 x half> %x) {
; CHECK-V-NEXT: vslideup.vi v10, v8, 4
; CHECK-V-NEXT: vsetvli zero, zero, e16, m1, ta, ma
; CHECK-V-NEXT: vnclipu.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
-; CHECK-V-NEXT: slli a0, a0, 2
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 72(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 64(sp) # 8-byte Folded Reload
@@ -2021,8 +2017,7 @@ define <8 x i16> @ustest_f16i16(<8 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s4, -48
; CHECK-V-NEXT: .cfi_offset s5, -56
; CHECK-V-NEXT: .cfi_offset s6, -64
-; CHECK-V-NEXT: csrr a1, vlenb
-; CHECK-V-NEXT: slli a1, a1, 2
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; CHECK-V-NEXT: sub sp, sp, a1
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xd0, 0x00, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 80 + 4 * vlenb
; CHECK-V-NEXT: lhu s0, 0(a0)
@@ -2140,8 +2135,7 @@ define <8 x i16> @ustest_f16i16(<8 x half> %x) {
; CHECK-V-NEXT: vmax.vx v10, v8, zero
; CHECK-V-NEXT: vsetvli zero, zero, e16, m1, ta, ma
; CHECK-V-NEXT: vnclipu.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
-; CHECK-V-NEXT: slli a0, a0, 2
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 72(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 64(sp) # 8-byte Folded Reload
@@ -2255,7 +2249,7 @@ define <2 x i64> @stest_f64i64(<2 x double> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -2323,7 +2317,7 @@ define <2 x i64> @stest_f64i64(<2 x double> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a0
; CHECK-V-NEXT: vmv.s.x v9, s0
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
@@ -2383,7 +2377,7 @@ define <2 x i64> @utest_f64i64(<2 x double> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -2406,7 +2400,7 @@ define <2 x i64> @utest_f64i64(<2 x double> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a0
; CHECK-V-NEXT: vmv.s.x v9, a2
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
@@ -2490,7 +2484,7 @@ define <2 x i64> @ustest_f64i64(<2 x double> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -2540,7 +2534,7 @@ define <2 x i64> @ustest_f64i64(<2 x double> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a0
; CHECK-V-NEXT: vmv.s.x v9, a2
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
@@ -2647,7 +2641,7 @@ define <2 x i64> @stest_f32i64(<2 x float> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -2715,7 +2709,7 @@ define <2 x i64> @stest_f32i64(<2 x float> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a0
; CHECK-V-NEXT: vmv.s.x v9, s0
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
@@ -2775,7 +2769,7 @@ define <2 x i64> @utest_f32i64(<2 x float> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -2798,7 +2792,7 @@ define <2 x i64> @utest_f32i64(<2 x float> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a0
; CHECK-V-NEXT: vmv.s.x v9, a2
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
@@ -2882,7 +2876,7 @@ define <2 x i64> @ustest_f32i64(<2 x float> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -2932,7 +2926,7 @@ define <2 x i64> @ustest_f32i64(<2 x float> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a0
; CHECK-V-NEXT: vmv.s.x v9, a2
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
@@ -3760,7 +3754,7 @@ define <4 x i32> @stest_f16i32_mm(<4 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
; CHECK-V-NEXT: .cfi_offset s2, -32
-; CHECK-V-NEXT: csrr a1, vlenb
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a2, a1, 1
; CHECK-V-NEXT: add a1, a2, a1
; CHECK-V-NEXT: sub sp, sp, a1
@@ -3811,7 +3805,7 @@ define <4 x i32> @stest_f16i32_mm(<4 x half> %x) {
; CHECK-V-NEXT: vslideup.vi v10, v8, 2
; CHECK-V-NEXT: vsetvli zero, zero, e32, m1, ta, ma
; CHECK-V-NEXT: vnclip.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a1, a0, 1
; CHECK-V-NEXT: add a0, a1, a0
; CHECK-V-NEXT: add sp, sp, a0
@@ -3921,7 +3915,7 @@ define <4 x i32> @utesth_f16i32_mm(<4 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
; CHECK-V-NEXT: .cfi_offset s2, -32
-; CHECK-V-NEXT: csrr a1, vlenb
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a2, a1, 1
; CHECK-V-NEXT: add a1, a2, a1
; CHECK-V-NEXT: sub sp, sp, a1
@@ -3972,7 +3966,7 @@ define <4 x i32> @utesth_f16i32_mm(<4 x half> %x) {
; CHECK-V-NEXT: vslideup.vi v10, v8, 2
; CHECK-V-NEXT: vsetvli zero, zero, e32, m1, ta, ma
; CHECK-V-NEXT: vnclipu.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a1, a0, 1
; CHECK-V-NEXT: add a0, a1, a0
; CHECK-V-NEXT: add sp, sp, a0
@@ -4093,7 +4087,7 @@ define <4 x i32> @ustest_f16i32_mm(<4 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
; CHECK-V-NEXT: .cfi_offset s2, -32
-; CHECK-V-NEXT: csrr a1, vlenb
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a2, a1, 1
; CHECK-V-NEXT: add a1, a2, a1
; CHECK-V-NEXT: sub sp, sp, a1
@@ -4145,7 +4139,7 @@ define <4 x i32> @ustest_f16i32_mm(<4 x half> %x) {
; CHECK-V-NEXT: vmax.vx v10, v8, zero
; CHECK-V-NEXT: vsetvli zero, zero, e32, m1, ta, ma
; CHECK-V-NEXT: vnclipu.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: slli a1, a0, 1
; CHECK-V-NEXT: add a0, a1, a0
; CHECK-V-NEXT: add sp, sp, a0
@@ -4687,8 +4681,7 @@ define <8 x i16> @stest_f16i16_mm(<8 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s4, -48
; CHECK-V-NEXT: .cfi_offset s5, -56
; CHECK-V-NEXT: .cfi_offset s6, -64
-; CHECK-V-NEXT: csrr a1, vlenb
-; CHECK-V-NEXT: slli a1, a1, 2
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; CHECK-V-NEXT: sub sp, sp, a1
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xd0, 0x00, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 80 + 4 * vlenb
; CHECK-V-NEXT: lhu s0, 0(a0)
@@ -4805,8 +4798,7 @@ define <8 x i16> @stest_f16i16_mm(<8 x half> %x) {
; CHECK-V-NEXT: vslideup.vi v10, v8, 4
; CHECK-V-NEXT: vsetvli zero, zero, e16, m1, ta, ma
; CHECK-V-NEXT: vnclip.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
-; CHECK-V-NEXT: slli a0, a0, 2
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 72(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 64(sp) # 8-byte Folded Reload
@@ -4998,8 +4990,7 @@ define <8 x i16> @utesth_f16i16_mm(<8 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s4, -48
; CHECK-V-NEXT: .cfi_offset s5, -56
; CHECK-V-NEXT: .cfi_offset s6, -64
-; CHECK-V-NEXT: csrr a1, vlenb
-; CHECK-V-NEXT: slli a1, a1, 2
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; CHECK-V-NEXT: sub sp, sp, a1
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xd0, 0x00, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 80 + 4 * vlenb
; CHECK-V-NEXT: lhu s0, 0(a0)
@@ -5116,8 +5107,7 @@ define <8 x i16> @utesth_f16i16_mm(<8 x half> %x) {
; CHECK-V-NEXT: vslideup.vi v10, v8, 4
; CHECK-V-NEXT: vsetvli zero, zero, e16, m1, ta, ma
; CHECK-V-NEXT: vnclipu.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
-; CHECK-V-NEXT: slli a0, a0, 2
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 72(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 64(sp) # 8-byte Folded Reload
@@ -5332,8 +5322,7 @@ define <8 x i16> @ustest_f16i16_mm(<8 x half> %x) {
; CHECK-V-NEXT: .cfi_offset s4, -48
; CHECK-V-NEXT: .cfi_offset s5, -56
; CHECK-V-NEXT: .cfi_offset s6, -64
-; CHECK-V-NEXT: csrr a1, vlenb
-; CHECK-V-NEXT: slli a1, a1, 2
+; CHECK-V-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; CHECK-V-NEXT: sub sp, sp, a1
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xd0, 0x00, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 80 + 4 * vlenb
; CHECK-V-NEXT: lhu s0, 0(a0)
@@ -5451,8 +5440,7 @@ define <8 x i16> @ustest_f16i16_mm(<8 x half> %x) {
; CHECK-V-NEXT: vmax.vx v10, v8, zero
; CHECK-V-NEXT: vsetvli zero, zero, e16, m1, ta, ma
; CHECK-V-NEXT: vnclipu.wi v8, v10, 0
-; CHECK-V-NEXT: csrr a0, vlenb
-; CHECK-V-NEXT: slli a0, a0, 2
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 72(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 64(sp) # 8-byte Folded Reload
@@ -5567,7 +5555,7 @@ define <2 x i64> @stest_f64i64_mm(<2 x double> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -5638,7 +5626,7 @@ define <2 x i64> @stest_f64i64_mm(<2 x double> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a0
; CHECK-V-NEXT: vmv.s.x v9, s0
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
@@ -5696,7 +5684,7 @@ define <2 x i64> @utest_f64i64_mm(<2 x double> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -5722,7 +5710,7 @@ define <2 x i64> @utest_f64i64_mm(<2 x double> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a1
; CHECK-V-NEXT: vmv.s.x v9, a0
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
@@ -5794,7 +5782,7 @@ define <2 x i64> @ustest_f64i64_mm(<2 x double> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -5833,7 +5821,7 @@ define <2 x i64> @ustest_f64i64_mm(<2 x double> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a0
; CHECK-V-NEXT: vmv.s.x v9, a1
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
@@ -5941,7 +5929,7 @@ define <2 x i64> @stest_f32i64_mm(<2 x float> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -6012,7 +6000,7 @@ define <2 x i64> @stest_f32i64_mm(<2 x float> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a0
; CHECK-V-NEXT: vmv.s.x v9, s0
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
@@ -6070,7 +6058,7 @@ define <2 x i64> @utest_f32i64_mm(<2 x float> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -6096,7 +6084,7 @@ define <2 x i64> @utest_f32i64_mm(<2 x float> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a1
; CHECK-V-NEXT: vmv.s.x v9, a0
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
@@ -6168,7 +6156,7 @@ define <2 x i64> @ustest_f32i64_mm(<2 x float> %x) {
; CHECK-V-NEXT: .cfi_offset ra, -8
; CHECK-V-NEXT: .cfi_offset s0, -16
; CHECK-V-NEXT: .cfi_offset s1, -24
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: sub sp, sp, a0
; CHECK-V-NEXT: .cfi_escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xc0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 64 + 1 * vlenb
; CHECK-V-NEXT: addi a0, sp, 32
@@ -6207,7 +6195,7 @@ define <2 x i64> @ustest_f32i64_mm(<2 x float> %x) {
; CHECK-V-NEXT: vmv.s.x v8, a0
; CHECK-V-NEXT: vmv.s.x v9, a1
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
-; CHECK-V-NEXT: csrr a0, vlenb
+; CHECK-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-V-NEXT: add sp, sp, a0
; CHECK-V-NEXT: ld ra, 56(sp) # 8-byte Folded Reload
; CHECK-V-NEXT: ld s0, 48(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/rvv/frm-insert.ll b/llvm/test/CodeGen/RISCV/rvv/frm-insert.ll
index ccfe94ecad286b..f0472e4974cb41 100644
--- a/llvm/test/CodeGen/RISCV/rvv/frm-insert.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/frm-insert.ll
@@ -86,14 +86,14 @@ define <vscale x 1 x float> @just_call(<vscale x 1 x float> %0) nounwind {
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -48
; CHECK-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: addi a0, sp, 32
; CHECK-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
; CHECK-NEXT: call foo
; CHECK-NEXT: addi a0, sp, 32
; CHECK-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; CHECK-NEXT: addi sp, sp, 48
@@ -103,14 +103,14 @@ define <vscale x 1 x float> @just_call(<vscale x 1 x float> %0) nounwind {
; UNOPT: # %bb.0: # %entry
; UNOPT-NEXT: addi sp, sp, -48
; UNOPT-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; UNOPT-NEXT: csrr a0, vlenb
+; UNOPT-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; UNOPT-NEXT: sub sp, sp, a0
; UNOPT-NEXT: addi a0, sp, 32
; UNOPT-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
; UNOPT-NEXT: call foo
; UNOPT-NEXT: addi a0, sp, 32
; UNOPT-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; UNOPT-NEXT: csrr a0, vlenb
+; UNOPT-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; UNOPT-NEXT: add sp, sp, a0
; UNOPT-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; UNOPT-NEXT: addi sp, sp, 48
@@ -125,7 +125,7 @@ define <vscale x 1 x float> @before_call1(<vscale x 1 x float> %0, <vscale x 1 x
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -48
; CHECK-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: fsrmi a1, 0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
@@ -136,7 +136,7 @@ define <vscale x 1 x float> @before_call1(<vscale x 1 x float> %0, <vscale x 1 x
; CHECK-NEXT: call foo
; CHECK-NEXT: addi a0, sp, 32
; CHECK-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; CHECK-NEXT: addi sp, sp, 48
@@ -146,7 +146,7 @@ define <vscale x 1 x float> @before_call1(<vscale x 1 x float> %0, <vscale x 1 x
; UNOPT: # %bb.0: # %entry
; UNOPT-NEXT: addi sp, sp, -48
; UNOPT-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; UNOPT-NEXT: csrr a1, vlenb
+; UNOPT-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; UNOPT-NEXT: sub sp, sp, a1
; UNOPT-NEXT: fsrmi a1, 0
; UNOPT-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
@@ -157,7 +157,7 @@ define <vscale x 1 x float> @before_call1(<vscale x 1 x float> %0, <vscale x 1 x
; UNOPT-NEXT: call foo
; UNOPT-NEXT: addi a0, sp, 32
; UNOPT-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; UNOPT-NEXT: csrr a0, vlenb
+; UNOPT-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; UNOPT-NEXT: add sp, sp, a0
; UNOPT-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; UNOPT-NEXT: addi sp, sp, 48
@@ -177,7 +177,7 @@ define <vscale x 1 x float> @before_call2(<vscale x 1 x float> %0, <vscale x 1 x
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -48
; CHECK-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vfadd.vv v8, v8, v9
@@ -186,7 +186,7 @@ define <vscale x 1 x float> @before_call2(<vscale x 1 x float> %0, <vscale x 1 x
; CHECK-NEXT: call foo
; CHECK-NEXT: addi a0, sp, 32
; CHECK-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; CHECK-NEXT: addi sp, sp, 48
@@ -196,7 +196,7 @@ define <vscale x 1 x float> @before_call2(<vscale x 1 x float> %0, <vscale x 1 x
; UNOPT: # %bb.0: # %entry
; UNOPT-NEXT: addi sp, sp, -48
; UNOPT-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; UNOPT-NEXT: csrr a1, vlenb
+; UNOPT-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; UNOPT-NEXT: sub sp, sp, a1
; UNOPT-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; UNOPT-NEXT: vfadd.vv v8, v8, v9
@@ -205,7 +205,7 @@ define <vscale x 1 x float> @before_call2(<vscale x 1 x float> %0, <vscale x 1 x
; UNOPT-NEXT: call foo
; UNOPT-NEXT: addi a0, sp, 32
; UNOPT-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; UNOPT-NEXT: csrr a0, vlenb
+; UNOPT-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; UNOPT-NEXT: add sp, sp, a0
; UNOPT-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; UNOPT-NEXT: addi sp, sp, 48
@@ -225,7 +225,7 @@ define <vscale x 1 x float> @after_call1(<vscale x 1 x float> %0, <vscale x 1 x
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -48
; CHECK-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: fsrmi a1, 0
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
@@ -236,7 +236,7 @@ define <vscale x 1 x float> @after_call1(<vscale x 1 x float> %0, <vscale x 1 x
; CHECK-NEXT: call foo
; CHECK-NEXT: addi a0, sp, 32
; CHECK-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; CHECK-NEXT: addi sp, sp, 48
@@ -246,7 +246,7 @@ define <vscale x 1 x float> @after_call1(<vscale x 1 x float> %0, <vscale x 1 x
; UNOPT: # %bb.0: # %entry
; UNOPT-NEXT: addi sp, sp, -48
; UNOPT-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; UNOPT-NEXT: csrr a1, vlenb
+; UNOPT-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; UNOPT-NEXT: sub sp, sp, a1
; UNOPT-NEXT: fsrmi a1, 0
; UNOPT-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
@@ -257,7 +257,7 @@ define <vscale x 1 x float> @after_call1(<vscale x 1 x float> %0, <vscale x 1 x
; UNOPT-NEXT: call foo
; UNOPT-NEXT: addi a0, sp, 32
; UNOPT-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; UNOPT-NEXT: csrr a0, vlenb
+; UNOPT-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; UNOPT-NEXT: add sp, sp, a0
; UNOPT-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; UNOPT-NEXT: addi sp, sp, 48
@@ -277,7 +277,7 @@ define <vscale x 1 x float> @after_call2(<vscale x 1 x float> %0, <vscale x 1 x
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -48
; CHECK-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; CHECK-NEXT: vfadd.vv v8, v8, v9
@@ -286,7 +286,7 @@ define <vscale x 1 x float> @after_call2(<vscale x 1 x float> %0, <vscale x 1 x
; CHECK-NEXT: call foo
; CHECK-NEXT: addi a0, sp, 32
; CHECK-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; CHECK-NEXT: addi sp, sp, 48
@@ -296,7 +296,7 @@ define <vscale x 1 x float> @after_call2(<vscale x 1 x float> %0, <vscale x 1 x
; UNOPT: # %bb.0: # %entry
; UNOPT-NEXT: addi sp, sp, -48
; UNOPT-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; UNOPT-NEXT: csrr a1, vlenb
+; UNOPT-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; UNOPT-NEXT: sub sp, sp, a1
; UNOPT-NEXT: vsetvli zero, a0, e32, mf2, ta, ma
; UNOPT-NEXT: vfadd.vv v8, v8, v9
@@ -305,7 +305,7 @@ define <vscale x 1 x float> @after_call2(<vscale x 1 x float> %0, <vscale x 1 x
; UNOPT-NEXT: call foo
; UNOPT-NEXT: addi a0, sp, 32
; UNOPT-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; UNOPT-NEXT: csrr a0, vlenb
+; UNOPT-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; UNOPT-NEXT: add sp, sp, a0
; UNOPT-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; UNOPT-NEXT: addi sp, sp, 48
diff --git a/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
index 2c5a3dfffc2cfc..6a816fa1e4dd77 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fshr-fshl-vp.ll
@@ -212,8 +212,7 @@ define <vscale x 64 x i8> @fshr_v64i8(<vscale x 64 x i8> %a, <vscale x 64 x i8>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8r.v v24, (a0)
@@ -228,8 +227,7 @@ define <vscale x 64 x i8> @fshr_v64i8(<vscale x 64 x i8> %a, <vscale x 64 x i8>
; CHECK-NEXT: vl8r.v v24, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsrl.vv v16, v24, v16, v0.t
; CHECK-NEXT: vor.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -243,8 +241,7 @@ define <vscale x 64 x i8> @fshl_v64i8(<vscale x 64 x i8> %a, <vscale x 64 x i8>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8r.v v24, (a0)
@@ -259,8 +256,7 @@ define <vscale x 64 x i8> @fshl_v64i8(<vscale x 64 x i8> %a, <vscale x 64 x i8>
; CHECK-NEXT: vl8r.v v24, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsll.vv v16, v24, v16, v0.t
; CHECK-NEXT: vor.vv v8, v16, v8, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -444,8 +440,7 @@ define <vscale x 32 x i16> @fshr_v32i16(<vscale x 32 x i16> %a, <vscale x 32 x i
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8re16.v v24, (a0)
@@ -460,8 +455,7 @@ define <vscale x 32 x i16> @fshr_v32i16(<vscale x 32 x i16> %a, <vscale x 32 x i
; CHECK-NEXT: vl8r.v v24, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsrl.vv v16, v24, v16, v0.t
; CHECK-NEXT: vor.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -475,8 +469,7 @@ define <vscale x 32 x i16> @fshl_v32i16(<vscale x 32 x i16> %a, <vscale x 32 x i
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8re16.v v24, (a0)
@@ -491,8 +484,7 @@ define <vscale x 32 x i16> @fshl_v32i16(<vscale x 32 x i16> %a, <vscale x 32 x i
; CHECK-NEXT: vl8r.v v24, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsll.vv v16, v24, v16, v0.t
; CHECK-NEXT: vor.vv v8, v16, v8, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -650,8 +642,7 @@ define <vscale x 16 x i32> @fshr_v16i32(<vscale x 16 x i32> %a, <vscale x 16 x i
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8re32.v v24, (a0)
@@ -668,8 +659,7 @@ define <vscale x 16 x i32> @fshr_v16i32(<vscale x 16 x i32> %a, <vscale x 16 x i
; CHECK-NEXT: vsll.vi v24, v24, 1, v0.t
; CHECK-NEXT: vsll.vv v8, v24, v8, v0.t
; CHECK-NEXT: vor.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -683,8 +673,7 @@ define <vscale x 16 x i32> @fshl_v16i32(<vscale x 16 x i32> %a, <vscale x 16 x i
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8re32.v v24, (a0)
@@ -702,8 +691,7 @@ define <vscale x 16 x i32> @fshl_v16i32(<vscale x 16 x i32> %a, <vscale x 16 x i
; CHECK-NEXT: vsrl.vi v24, v24, 1, v0.t
; CHECK-NEXT: vsrl.vv v16, v24, v16, v0.t
; CHECK-NEXT: vor.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -825,8 +813,7 @@ define <vscale x 7 x i64> @fshr_v7i64(<vscale x 7 x i64> %a, <vscale x 7 x i64>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8re64.v v24, (a0)
@@ -843,8 +830,7 @@ define <vscale x 7 x i64> @fshr_v7i64(<vscale x 7 x i64> %a, <vscale x 7 x i64>
; CHECK-NEXT: vsll.vi v24, v24, 1, v0.t
; CHECK-NEXT: vsll.vv v8, v24, v8, v0.t
; CHECK-NEXT: vor.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -858,8 +844,7 @@ define <vscale x 7 x i64> @fshl_v7i64(<vscale x 7 x i64> %a, <vscale x 7 x i64>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8re64.v v24, (a0)
@@ -877,8 +862,7 @@ define <vscale x 7 x i64> @fshl_v7i64(<vscale x 7 x i64> %a, <vscale x 7 x i64>
; CHECK-NEXT: vsrl.vi v24, v24, 1, v0.t
; CHECK-NEXT: vsrl.vv v16, v24, v16, v0.t
; CHECK-NEXT: vor.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -892,8 +876,7 @@ define <vscale x 8 x i64> @fshr_v8i64(<vscale x 8 x i64> %a, <vscale x 8 x i64>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8re64.v v24, (a0)
@@ -910,8 +893,7 @@ define <vscale x 8 x i64> @fshr_v8i64(<vscale x 8 x i64> %a, <vscale x 8 x i64>
; CHECK-NEXT: vsll.vi v24, v24, 1, v0.t
; CHECK-NEXT: vsll.vv v8, v24, v8, v0.t
; CHECK-NEXT: vor.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -925,8 +907,7 @@ define <vscale x 8 x i64> @fshl_v8i64(<vscale x 8 x i64> %a, <vscale x 8 x i64>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 3
+; CHECK-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8re64.v v24, (a0)
@@ -944,8 +925,7 @@ define <vscale x 8 x i64> @fshl_v8i64(<vscale x 8 x i64> %a, <vscale x 8 x i64>
; CHECK-NEXT: vsrl.vi v24, v24, 1, v0.t
; CHECK-NEXT: vsrl.vv v16, v24, v16, v0.t
; CHECK-NEXT: vor.vv v8, v8, v16, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -959,7 +939,7 @@ define <vscale x 16 x i64> @fshr_v16i64(<vscale x 16 x i64> %a, <vscale x 16 x i
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 48
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -1110,7 +1090,7 @@ define <vscale x 16 x i64> @fshr_v16i64(<vscale x 16 x i64> %a, <vscale x 16 x i
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 48
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
@@ -1126,7 +1106,7 @@ define <vscale x 16 x i64> @fshl_v16i64(<vscale x 16 x i64> %a, <vscale x 16 x i
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 40
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -1266,7 +1246,7 @@ define <vscale x 16 x i64> @fshl_v16i64(<vscale x 16 x i64> %a, <vscale x 16 x i
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 40
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/get-vlen-debugloc.mir b/llvm/test/CodeGen/RISCV/rvv/get-vlen-debugloc.mir
index 3b41e92f437309..8d1c9943b7a300 100644
--- a/llvm/test/CodeGen/RISCV/rvv/get-vlen-debugloc.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/get-vlen-debugloc.mir
@@ -27,14 +27,12 @@ body: |
; CHECK-NEXT: successors: %bb.1(0x80000000)
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 0
- ; CHECK-NEXT: $x10 = frame-setup PseudoReadVLENB
- ; CHECK-NEXT: $x10 = frame-setup SLLI killed $x10, 1
+ ; CHECK-NEXT: $x10 = frame-setup PseudoReadMulVLENB 2, implicit-def $vl, implicit-def $vtype
; CHECK-NEXT: $x2 = frame-setup SUB $x2, killed $x10
; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0a, 0x72, 0x00, 0x11, 0x02, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: bb.1:
- ; CHECK-NEXT: $x10 = frame-destroy PseudoReadVLENB
- ; CHECK-NEXT: $x10 = frame-destroy SLLI killed $x10, 1
+ ; CHECK-NEXT: $x10 = frame-destroy PseudoReadMulVLENB 2, implicit-def $vl, implicit-def $vtype
; CHECK-NEXT: $x2 = frame-destroy ADD $x2, killed $x10
; CHECK-NEXT: PseudoRET
bb.0:
diff --git a/llvm/test/CodeGen/RISCV/rvv/large-rvv-stack-size.mir b/llvm/test/CodeGen/RISCV/rvv/large-rvv-stack-size.mir
index 22a7425bf98b8e..6243af0d32f23e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/large-rvv-stack-size.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/large-rvv-stack-size.mir
@@ -18,7 +18,7 @@
; CHECK-NEXT: .cfi_def_cfa s0, 0
; CHECK-NEXT: addi sp, sp, -272
; CHECK-NEXT: sd a0, 8(sp)
- ; CHECK-NEXT: csrr a0, vlenb
+ ; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: sd a1, 0(sp)
; CHECK-NEXT: li a1, 3
; CHECK-NEXT: slli a1, a1, 10
diff --git a/llvm/test/CodeGen/RISCV/rvv/localvar.ll b/llvm/test/CodeGen/RISCV/rvv/localvar.ll
index 90bf29d7760112..c3eaa63928bdda 100644
--- a/llvm/test/CodeGen/RISCV/rvv/localvar.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/localvar.ll
@@ -7,8 +7,7 @@ define void @local_var_mf8() {
; RV64IV: # %bb.0:
; RV64IV-NEXT: addi sp, sp, -16
; RV64IV-NEXT: .cfi_def_cfa_offset 16
-; RV64IV-NEXT: csrr a0, vlenb
-; RV64IV-NEXT: slli a0, a0, 1
+; RV64IV-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV64IV-NEXT: sub sp, sp, a0
; RV64IV-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x02, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 2 * vlenb
; RV64IV-NEXT: csrr a0, vlenb
@@ -18,8 +17,7 @@ define void @local_var_mf8() {
; RV64IV-NEXT: vle8.v v8, (a0)
; RV64IV-NEXT: addi a0, sp, 16
; RV64IV-NEXT: vle8.v v8, (a0)
-; RV64IV-NEXT: csrr a0, vlenb
-; RV64IV-NEXT: slli a0, a0, 1
+; RV64IV-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV64IV-NEXT: add sp, sp, a0
; RV64IV-NEXT: addi sp, sp, 16
; RV64IV-NEXT: ret
@@ -35,8 +33,7 @@ define void @local_var_m1() {
; RV64IV: # %bb.0:
; RV64IV-NEXT: addi sp, sp, -16
; RV64IV-NEXT: .cfi_def_cfa_offset 16
-; RV64IV-NEXT: csrr a0, vlenb
-; RV64IV-NEXT: slli a0, a0, 1
+; RV64IV-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV64IV-NEXT: sub sp, sp, a0
; RV64IV-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x02, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 2 * vlenb
; RV64IV-NEXT: csrr a0, vlenb
@@ -45,8 +42,7 @@ define void @local_var_m1() {
; RV64IV-NEXT: vl1r.v v8, (a0)
; RV64IV-NEXT: addi a0, sp, 16
; RV64IV-NEXT: vl1r.v v8, (a0)
-; RV64IV-NEXT: csrr a0, vlenb
-; RV64IV-NEXT: slli a0, a0, 1
+; RV64IV-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV64IV-NEXT: add sp, sp, a0
; RV64IV-NEXT: addi sp, sp, 16
; RV64IV-NEXT: ret
@@ -62,8 +58,7 @@ define void @local_var_m2() {
; RV64IV: # %bb.0:
; RV64IV-NEXT: addi sp, sp, -16
; RV64IV-NEXT: .cfi_def_cfa_offset 16
-; RV64IV-NEXT: csrr a0, vlenb
-; RV64IV-NEXT: slli a0, a0, 2
+; RV64IV-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; RV64IV-NEXT: sub sp, sp, a0
; RV64IV-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; RV64IV-NEXT: csrr a0, vlenb
@@ -73,8 +68,7 @@ define void @local_var_m2() {
; RV64IV-NEXT: vl2r.v v8, (a0)
; RV64IV-NEXT: addi a0, sp, 16
; RV64IV-NEXT: vl2r.v v8, (a0)
-; RV64IV-NEXT: csrr a0, vlenb
-; RV64IV-NEXT: slli a0, a0, 2
+; RV64IV-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; RV64IV-NEXT: add sp, sp, a0
; RV64IV-NEXT: addi sp, sp, 16
; RV64IV-NEXT: ret
@@ -96,8 +90,7 @@ define void @local_var_m4() {
; RV64IV-NEXT: .cfi_offset s0, -16
; RV64IV-NEXT: addi s0, sp, 48
; RV64IV-NEXT: .cfi_def_cfa s0, 0
-; RV64IV-NEXT: csrr a0, vlenb
-; RV64IV-NEXT: slli a0, a0, 3
+; RV64IV-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64IV-NEXT: sub sp, sp, a0
; RV64IV-NEXT: andi sp, sp, -32
; RV64IV-NEXT: csrr a0, vlenb
@@ -130,7 +123,7 @@ define void @local_var_m8() {
; RV64IV-NEXT: .cfi_offset s0, -16
; RV64IV-NEXT: addi s0, sp, 80
; RV64IV-NEXT: .cfi_def_cfa s0, 0
-; RV64IV-NEXT: csrr a0, vlenb
+; RV64IV-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64IV-NEXT: slli a0, a0, 4
; RV64IV-NEXT: sub sp, sp, a0
; RV64IV-NEXT: andi sp, sp, -64
@@ -158,8 +151,7 @@ define void @local_var_m2_mix_local_scalar() {
; RV64IV: # %bb.0:
; RV64IV-NEXT: addi sp, sp, -16
; RV64IV-NEXT: .cfi_def_cfa_offset 16
-; RV64IV-NEXT: csrr a0, vlenb
-; RV64IV-NEXT: slli a0, a0, 2
+; RV64IV-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; RV64IV-NEXT: sub sp, sp, a0
; RV64IV-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; RV64IV-NEXT: lw zero, 12(sp)
@@ -171,8 +163,7 @@ define void @local_var_m2_mix_local_scalar() {
; RV64IV-NEXT: addi a0, sp, 16
; RV64IV-NEXT: vl2r.v v8, (a0)
; RV64IV-NEXT: lw zero, 8(sp)
-; RV64IV-NEXT: csrr a0, vlenb
-; RV64IV-NEXT: slli a0, a0, 2
+; RV64IV-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; RV64IV-NEXT: add sp, sp, a0
; RV64IV-NEXT: addi sp, sp, 16
; RV64IV-NEXT: ret
@@ -200,8 +191,7 @@ define void @local_var_m2_with_varsize_object(i64 %n) {
; RV64IV-NEXT: .cfi_offset s1, -24
; RV64IV-NEXT: addi s0, sp, 32
; RV64IV-NEXT: .cfi_def_cfa s0, 0
-; RV64IV-NEXT: csrr a1, vlenb
-; RV64IV-NEXT: slli a1, a1, 2
+; RV64IV-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; RV64IV-NEXT: sub sp, sp, a1
; RV64IV-NEXT: addi a0, a0, 15
; RV64IV-NEXT: andi a0, a0, -16
@@ -252,8 +242,7 @@ define void @local_var_m2_with_bp(i64 %n) {
; RV64IV-NEXT: .cfi_offset s2, -32
; RV64IV-NEXT: addi s0, sp, 256
; RV64IV-NEXT: .cfi_def_cfa s0, 0
-; RV64IV-NEXT: csrr a1, vlenb
-; RV64IV-NEXT: slli a1, a1, 2
+; RV64IV-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; RV64IV-NEXT: sub sp, sp, a1
; RV64IV-NEXT: andi sp, sp, -128
; RV64IV-NEXT: mv s1, sp
@@ -301,15 +290,13 @@ define i64 @fixed_object(i64 %0, i64 %1, i64 %2, i64 %3, i64 %4, i64 %5, i64 %6,
; RV64IV-LABEL: fixed_object:
; RV64IV: # %bb.0:
; RV64IV-NEXT: addi sp, sp, -16
-; RV64IV-NEXT: csrr a0, vlenb
-; RV64IV-NEXT: slli a0, a0, 3
+; RV64IV-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64IV-NEXT: sub sp, sp, a0
; RV64IV-NEXT: csrr a0, vlenb
; RV64IV-NEXT: slli a0, a0, 3
; RV64IV-NEXT: add a0, sp, a0
; RV64IV-NEXT: ld a0, 16(a0)
-; RV64IV-NEXT: csrr a1, vlenb
-; RV64IV-NEXT: slli a1, a1, 3
+; RV64IV-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV64IV-NEXT: add sp, sp, a1
; RV64IV-NEXT: addi sp, sp, 16
; RV64IV-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/memory-args.ll b/llvm/test/CodeGen/RISCV/rvv/memory-args.ll
index 87dd0048d928a9..7eaef2c5a0da3f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/memory-args.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/memory-args.ll
@@ -35,7 +35,7 @@ define <vscale x 64 x i8> @caller() {
; RV64IV-NEXT: .cfi_offset s0, -16
; RV64IV-NEXT: addi s0, sp, 80
; RV64IV-NEXT: .cfi_def_cfa s0, 0
-; RV64IV-NEXT: csrr a0, vlenb
+; RV64IV-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64IV-NEXT: slli a0, a0, 5
; RV64IV-NEXT: sub sp, sp, a0
; RV64IV-NEXT: andi sp, sp, -64
diff --git a/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll
index 189ba08dddc7aa..66a65959cbb568 100644
--- a/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/mgather-sdnode.ll
@@ -1226,8 +1226,7 @@ define void @mgather_nxv16i64(<vscale x 8 x ptr> %ptrs0, <vscale x 8 x ptr> %ptr
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a3, vlenb
-; RV64-NEXT: slli a3, a3, 3
+; RV64-NEXT: vsetvli a3, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a3
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: addi a3, sp, 16
@@ -1249,8 +1248,7 @@ define void @mgather_nxv16i64(<vscale x 8 x ptr> %ptrs0, <vscale x 8 x ptr> %ptr
; RV64-NEXT: add a0, a2, a0
; RV64-NEXT: vs8r.v v8, (a0)
; RV64-NEXT: vs8r.v v24, (a2)
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/mscatter-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/mscatter-sdnode.ll
index 29db67b4b0a41f..9ab5c979535ac9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/mscatter-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/mscatter-sdnode.ll
@@ -1893,7 +1893,7 @@ define void @mscatter_nxv16f64(<vscale x 8 x double> %val0, <vscale x 8 x double
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a2, vlenb
+; RV64-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; RV64-NEXT: slli a2, a2, 4
; RV64-NEXT: sub sp, sp, a2
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1922,7 +1922,7 @@ define void @mscatter_nxv16f64(<vscale x 8 x double> %val0, <vscale x 8 x double
; RV64-NEXT: addi a0, sp, 16
; RV64-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
; RV64-NEXT: vsoxei64.v v8, (zero), v16, v0.t
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/named-vector-shuffle-reverse.ll b/llvm/test/CodeGen/RISCV/rvv/named-vector-shuffle-reverse.ll
index a6c6db345032ec..a4517b87d1b8a9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/named-vector-shuffle-reverse.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/named-vector-shuffle-reverse.ll
@@ -1962,7 +1962,7 @@ define <vscale x 12 x i64> @reverse_nxv12i64(<vscale x 12 x i64> %a) {
; RV32-NEXT: .cfi_offset s0, -8
; RV32-NEXT: addi s0, sp, 80
; RV32-NEXT: .cfi_def_cfa s0, 0
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: slli a0, a0, 4
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: andi sp, sp, -64
@@ -2007,7 +2007,7 @@ define <vscale x 12 x i64> @reverse_nxv12i64(<vscale x 12 x i64> %a) {
; RV64-NEXT: .cfi_offset s0, -16
; RV64-NEXT: addi s0, sp, 80
; RV64-NEXT: .cfi_def_cfa s0, 0
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: sub sp, sp, a0
; RV64-NEXT: andi sp, sp, -64
diff --git a/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
index 5aa773b01e6926..39b26ad3f4af82 100644
--- a/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/nearbyint-vp.ll
@@ -275,8 +275,7 @@ define <vscale x 32 x bfloat> @vp_nearbyint_nxv32bf16(<vscale x 32 x bfloat> %va
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -332,8 +331,7 @@ define <vscale x 32 x bfloat> @vp_nearbyint_nxv32bf16(<vscale x 32 x bfloat> %va
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v24
; CHECK-NEXT: fsflags a0
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -346,8 +344,7 @@ define <vscale x 32 x bfloat> @vp_nearbyint_nxv32bf16_unmasked(<vscale x 32 x bf
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -400,8 +397,7 @@ define <vscale x 32 x bfloat> @vp_nearbyint_nxv32bf16_unmasked(<vscale x 32 x bf
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
; CHECK-NEXT: fsflags a0
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -854,8 +850,7 @@ define <vscale x 32 x half> @vp_nearbyint_nxv32f16(<vscale x 32 x half> %va, <vs
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vmv1r.v v7, v0
@@ -911,8 +906,7 @@ define <vscale x 32 x half> @vp_nearbyint_nxv32f16(<vscale x 32 x half> %va, <vs
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: fsflags a0
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -940,8 +934,7 @@ define <vscale x 32 x half> @vp_nearbyint_nxv32f16_unmasked(<vscale x 32 x half>
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -994,8 +987,7 @@ define <vscale x 32 x half> @vp_nearbyint_nxv32f16_unmasked(<vscale x 32 x half>
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
; ZVFHMIN-NEXT: fsflags a0
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/no-reserved-frame.ll b/llvm/test/CodeGen/RISCV/rvv/no-reserved-frame.ll
index 59ba857dca8a5f..ebb5851e4b8454 100644
--- a/llvm/test/CodeGen/RISCV/rvv/no-reserved-frame.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/no-reserved-frame.ll
@@ -15,7 +15,7 @@ define signext i32 @foo(i32 signext %aa) #0 {
; CHECK-NEXT: .cfi_offset s1, -24
; CHECK-NEXT: addi s0, sp, 96
; CHECK-NEXT: .cfi_def_cfa s0, 0
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: andi sp, sp, -16
; CHECK-NEXT: mv s1, sp
diff --git a/llvm/test/CodeGen/RISCV/rvv/pr88576.ll b/llvm/test/CodeGen/RISCV/rvv/pr88576.ll
index b6e0d1e2ff4ae7..e12ca05c9b3249 100644
--- a/llvm/test/CodeGen/RISCV/rvv/pr88576.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/pr88576.ll
@@ -19,7 +19,7 @@ define i1 @foo(<vscale x 16 x i8> %x, i64 %y) {
; CHECK-NEXT: .cfi_offset s0, -16
; CHECK-NEXT: addi s0, sp, 80
; CHECK-NEXT: .cfi_def_cfa s0, 0
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 4
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: andi sp, sp, -64
diff --git a/llvm/test/CodeGen/RISCV/rvv/reg-alloc-reserve-bp.ll b/llvm/test/CodeGen/RISCV/rvv/reg-alloc-reserve-bp.ll
index 600ac594380b39..31ae1b0f1d5c38 100644
--- a/llvm/test/CodeGen/RISCV/rvv/reg-alloc-reserve-bp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/reg-alloc-reserve-bp.ll
@@ -17,8 +17,7 @@ define void @foo(ptr nocapture noundef %p1) {
; CHECK-NEXT: .cfi_offset s2, -32
; CHECK-NEXT: addi s0, sp, 192
; CHECK-NEXT: .cfi_def_cfa s0, 0
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 1
+; CHECK-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: andi sp, sp, -64
; CHECK-NEXT: mv s1, sp
diff --git a/llvm/test/CodeGen/RISCV/rvv/remat.ll b/llvm/test/CodeGen/RISCV/rvv/remat.ll
index 4f58ccb5188d31..f52ff5f005cbc1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/remat.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/remat.ll
@@ -24,8 +24,7 @@ define void @vid(ptr %p) {
; PRERA: # %bb.0:
; PRERA-NEXT: addi sp, sp, -16
; PRERA-NEXT: .cfi_def_cfa_offset 16
-; PRERA-NEXT: csrr a1, vlenb
-; PRERA-NEXT: slli a1, a1, 3
+; PRERA-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; PRERA-NEXT: sub sp, sp, a1
; PRERA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; PRERA-NEXT: vsetvli a1, zero, e64, m8, ta, ma
@@ -43,8 +42,7 @@ define void @vid(ptr %p) {
; PRERA-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
; PRERA-NEXT: vs8r.v v16, (a0)
; PRERA-NEXT: vs8r.v v8, (a0)
-; PRERA-NEXT: csrr a0, vlenb
-; PRERA-NEXT: slli a0, a0, 3
+; PRERA-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; PRERA-NEXT: add sp, sp, a0
; PRERA-NEXT: addi sp, sp, 16
; PRERA-NEXT: ret
@@ -70,8 +68,7 @@ define void @vid_passthru(ptr %p, <vscale x 8 x i64> %v) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vsetivli zero, 1, e64, m8, tu, ma
@@ -89,8 +86,7 @@ define void @vid_passthru(ptr %p, <vscale x 8 x i64> %v) {
; CHECK-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vs8r.v v16, (a0)
; CHECK-NEXT: vs8r.v v8, (a0)
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -132,8 +128,7 @@ define void @vmv.v.i(ptr %p) {
; PRERA: # %bb.0:
; PRERA-NEXT: addi sp, sp, -16
; PRERA-NEXT: .cfi_def_cfa_offset 16
-; PRERA-NEXT: csrr a1, vlenb
-; PRERA-NEXT: slli a1, a1, 3
+; PRERA-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; PRERA-NEXT: sub sp, sp, a1
; PRERA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; PRERA-NEXT: vsetvli a1, zero, e64, m8, ta, ma
@@ -151,8 +146,7 @@ define void @vmv.v.i(ptr %p) {
; PRERA-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
; PRERA-NEXT: vs8r.v v16, (a0)
; PRERA-NEXT: vs8r.v v8, (a0)
-; PRERA-NEXT: csrr a0, vlenb
-; PRERA-NEXT: slli a0, a0, 3
+; PRERA-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; PRERA-NEXT: add sp, sp, a0
; PRERA-NEXT: addi sp, sp, 16
; PRERA-NEXT: ret
@@ -179,8 +173,7 @@ define void @vmv.v.x_needs_extended(ptr %p, i64 %x) {
; POSTRA: # %bb.0:
; POSTRA-NEXT: addi sp, sp, -16
; POSTRA-NEXT: .cfi_def_cfa_offset 16
-; POSTRA-NEXT: csrr a2, vlenb
-; POSTRA-NEXT: slli a2, a2, 3
+; POSTRA-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; POSTRA-NEXT: sub sp, sp, a2
; POSTRA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; POSTRA-NEXT: vsetvli a2, zero, e64, m8, ta, ma
@@ -198,8 +191,7 @@ define void @vmv.v.x_needs_extended(ptr %p, i64 %x) {
; POSTRA-NEXT: vs8r.v v16, (a0)
; POSTRA-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; POSTRA-NEXT: vs8r.v v8, (a0)
-; POSTRA-NEXT: csrr a0, vlenb
-; POSTRA-NEXT: slli a0, a0, 3
+; POSTRA-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; POSTRA-NEXT: add sp, sp, a0
; POSTRA-NEXT: addi sp, sp, 16
; POSTRA-NEXT: ret
@@ -208,8 +200,7 @@ define void @vmv.v.x_needs_extended(ptr %p, i64 %x) {
; PRERA: # %bb.0:
; PRERA-NEXT: addi sp, sp, -16
; PRERA-NEXT: .cfi_def_cfa_offset 16
-; PRERA-NEXT: csrr a2, vlenb
-; PRERA-NEXT: slli a2, a2, 3
+; PRERA-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; PRERA-NEXT: sub sp, sp, a2
; PRERA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; PRERA-NEXT: vsetvli a2, zero, e64, m8, ta, ma
@@ -227,8 +218,7 @@ define void @vmv.v.x_needs_extended(ptr %p, i64 %x) {
; PRERA-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
; PRERA-NEXT: vs8r.v v16, (a0)
; PRERA-NEXT: vs8r.v v8, (a0)
-; PRERA-NEXT: csrr a0, vlenb
-; PRERA-NEXT: slli a0, a0, 3
+; PRERA-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; PRERA-NEXT: add sp, sp, a0
; PRERA-NEXT: addi sp, sp, 16
; PRERA-NEXT: ret
@@ -271,8 +261,7 @@ define void @vmv.v.x_live(ptr %p, i64 %x) {
; PRERA: # %bb.0:
; PRERA-NEXT: addi sp, sp, -16
; PRERA-NEXT: .cfi_def_cfa_offset 16
-; PRERA-NEXT: csrr a2, vlenb
-; PRERA-NEXT: slli a2, a2, 3
+; PRERA-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; PRERA-NEXT: sub sp, sp, a2
; PRERA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; PRERA-NEXT: vsetvli a2, zero, e64, m8, ta, ma
@@ -291,8 +280,7 @@ define void @vmv.v.x_live(ptr %p, i64 %x) {
; PRERA-NEXT: vs8r.v v16, (a0)
; PRERA-NEXT: vs8r.v v8, (a0)
; PRERA-NEXT: sd a1, 0(a0)
-; PRERA-NEXT: csrr a0, vlenb
-; PRERA-NEXT: slli a0, a0, 3
+; PRERA-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; PRERA-NEXT: add sp, sp, a0
; PRERA-NEXT: addi sp, sp, 16
; PRERA-NEXT: ret
@@ -336,8 +324,7 @@ define void @vfmv.v.f(ptr %p, double %x) {
; PRERA: # %bb.0:
; PRERA-NEXT: addi sp, sp, -16
; PRERA-NEXT: .cfi_def_cfa_offset 16
-; PRERA-NEXT: csrr a1, vlenb
-; PRERA-NEXT: slli a1, a1, 3
+; PRERA-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; PRERA-NEXT: sub sp, sp, a1
; PRERA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; PRERA-NEXT: vsetvli a1, zero, e64, m8, ta, ma
@@ -356,8 +343,7 @@ define void @vfmv.v.f(ptr %p, double %x) {
; PRERA-NEXT: vs8r.v v16, (a0)
; PRERA-NEXT: vs8r.v v8, (a0)
; PRERA-NEXT: fsd fa0, 0(a0)
-; PRERA-NEXT: csrr a0, vlenb
-; PRERA-NEXT: slli a0, a0, 3
+; PRERA-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; PRERA-NEXT: add sp, sp, a0
; PRERA-NEXT: addi sp, sp, 16
; PRERA-NEXT: ret
@@ -401,8 +387,7 @@ define void @vmv.s.x(ptr %p, i64 %x) {
; PRERA: # %bb.0:
; PRERA-NEXT: addi sp, sp, -16
; PRERA-NEXT: .cfi_def_cfa_offset 16
-; PRERA-NEXT: csrr a2, vlenb
-; PRERA-NEXT: slli a2, a2, 3
+; PRERA-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; PRERA-NEXT: sub sp, sp, a2
; PRERA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; PRERA-NEXT: vsetvli a2, zero, e64, m1, ta, ma
@@ -421,8 +406,7 @@ define void @vmv.s.x(ptr %p, i64 %x) {
; PRERA-NEXT: vs8r.v v16, (a0)
; PRERA-NEXT: vs8r.v v8, (a0)
; PRERA-NEXT: sd a1, 0(a0)
-; PRERA-NEXT: csrr a0, vlenb
-; PRERA-NEXT: slli a0, a0, 3
+; PRERA-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; PRERA-NEXT: add sp, sp, a0
; PRERA-NEXT: addi sp, sp, 16
; PRERA-NEXT: ret
@@ -466,8 +450,7 @@ define void @vfmv.s.f(ptr %p, double %x) {
; PRERA: # %bb.0:
; PRERA-NEXT: addi sp, sp, -16
; PRERA-NEXT: .cfi_def_cfa_offset 16
-; PRERA-NEXT: csrr a1, vlenb
-; PRERA-NEXT: slli a1, a1, 3
+; PRERA-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; PRERA-NEXT: sub sp, sp, a1
; PRERA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; PRERA-NEXT: vsetvli a1, zero, e64, m1, ta, ma
@@ -486,8 +469,7 @@ define void @vfmv.s.f(ptr %p, double %x) {
; PRERA-NEXT: vs8r.v v16, (a0)
; PRERA-NEXT: vs8r.v v8, (a0)
; PRERA-NEXT: fsd fa0, 0(a0)
-; PRERA-NEXT: csrr a0, vlenb
-; PRERA-NEXT: slli a0, a0, 3
+; PRERA-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; PRERA-NEXT: add sp, sp, a0
; PRERA-NEXT: addi sp, sp, 16
; PRERA-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll b/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
index a454f9dd97cebf..e29f32ce22a31f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rint-vp.ll
@@ -255,8 +255,7 @@ define <vscale x 32 x bfloat> @vp_rint_nxv32bf16(<vscale x 32 x bfloat> %va, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v16, v0
@@ -313,8 +312,7 @@ define <vscale x 32 x bfloat> @vp_rint_nxv32bf16(<vscale x 32 x bfloat> %va, <vs
; CHECK-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v24
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -327,8 +325,7 @@ define <vscale x 32 x bfloat> @vp_rint_nxv32bf16_unmasked(<vscale x 32 x bfloat>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -377,8 +374,7 @@ define <vscale x 32 x bfloat> @vp_rint_nxv32bf16_unmasked(<vscale x 32 x bfloat>
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -789,8 +785,7 @@ define <vscale x 32 x half> @vp_rint_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vmv1r.v v16, v0
@@ -847,8 +842,7 @@ define <vscale x 32 x half> @vp_rint_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -874,8 +868,7 @@ define <vscale x 32 x half> @vp_rint_nxv32f16_unmasked(<vscale x 32 x half> %va,
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -924,8 +917,7 @@ define <vscale x 32 x half> @vp_rint_nxv32f16_unmasked(<vscale x 32 x half> %va,
; ZVFHMIN-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1335,8 +1327,7 @@ define <vscale x 16 x double> @vp_rint_nxv16f64(<vscale x 16 x double> %va, <vsc
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -1379,8 +1370,7 @@ define <vscale x 16 x double> @vp_rint_nxv16f64(<vscale x 16 x double> %va, <vsc
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v8, v24, v8, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/round-vp.ll b/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
index a4936483e8a152..34325a0320bf9f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/round-vp.ll
@@ -275,8 +275,7 @@ define <vscale x 32 x bfloat> @vp_round_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -340,8 +339,7 @@ define <vscale x 32 x bfloat> @vp_round_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v24
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -354,8 +352,7 @@ define <vscale x 32 x bfloat> @vp_round_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -408,8 +405,7 @@ define <vscale x 32 x bfloat> @vp_round_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -862,8 +858,7 @@ define <vscale x 32 x half> @vp_round_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -927,8 +922,7 @@ define <vscale x 32 x half> @vp_round_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -956,8 +950,7 @@ define <vscale x 32 x half> @vp_round_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -1010,8 +1003,7 @@ define <vscale x 32 x half> @vp_round_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1461,8 +1453,7 @@ define <vscale x 16 x double> @vp_round_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -1510,8 +1501,7 @@ define <vscale x 16 x double> @vp_round_nxv16f64(<vscale x 16 x double> %va, <vs
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v8, v24, v8, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll b/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
index 9857009002eb90..815896dfa8db05 100644
--- a/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/roundeven-vp.ll
@@ -275,8 +275,7 @@ define <vscale x 32 x bfloat> @vp_roundeven_nxv32bf16(<vscale x 32 x bfloat> %va
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -340,8 +339,7 @@ define <vscale x 32 x bfloat> @vp_roundeven_nxv32bf16(<vscale x 32 x bfloat> %va
; CHECK-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v24
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -354,8 +352,7 @@ define <vscale x 32 x bfloat> @vp_roundeven_nxv32bf16_unmasked(<vscale x 32 x bf
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -408,8 +405,7 @@ define <vscale x 32 x bfloat> @vp_roundeven_nxv32bf16_unmasked(<vscale x 32 x bf
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -862,8 +858,7 @@ define <vscale x 32 x half> @vp_roundeven_nxv32f16(<vscale x 32 x half> %va, <vs
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -927,8 +922,7 @@ define <vscale x 32 x half> @vp_roundeven_nxv32f16(<vscale x 32 x half> %va, <vs
; ZVFHMIN-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -956,8 +950,7 @@ define <vscale x 32 x half> @vp_roundeven_nxv32f16_unmasked(<vscale x 32 x half>
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -1010,8 +1003,7 @@ define <vscale x 32 x half> @vp_roundeven_nxv32f16_unmasked(<vscale x 32 x half>
; ZVFHMIN-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1461,8 +1453,7 @@ define <vscale x 16 x double> @vp_roundeven_nxv16f64(<vscale x 16 x double> %va,
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -1510,8 +1501,7 @@ define <vscale x 16 x double> @vp_roundeven_nxv16f64(<vscale x 16 x double> %va,
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v8, v24, v8, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll b/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
index 11830c924867b4..83332447f63ce9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/roundtozero-vp.ll
@@ -275,8 +275,7 @@ define <vscale x 32 x bfloat> @vp_roundtozero_nxv32bf16(<vscale x 32 x bfloat> %
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -340,8 +339,7 @@ define <vscale x 32 x bfloat> @vp_roundtozero_nxv32bf16(<vscale x 32 x bfloat> %
; CHECK-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v24
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -354,8 +352,7 @@ define <vscale x 32 x bfloat> @vp_roundtozero_nxv32bf16_unmasked(<vscale x 32 x
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -408,8 +405,7 @@ define <vscale x 32 x bfloat> @vp_roundtozero_nxv32bf16_unmasked(<vscale x 32 x
; CHECK-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -862,8 +858,7 @@ define <vscale x 32 x half> @vp_roundtozero_nxv32f16(<vscale x 32 x half> %va, <
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -927,8 +922,7 @@ define <vscale x 32 x half> @vp_roundtozero_nxv32f16(<vscale x 32 x half> %va, <
; ZVFHMIN-NEXT: vfsgnj.vv v24, v16, v24, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -956,8 +950,7 @@ define <vscale x 32 x half> @vp_roundtozero_nxv32f16_unmasked(<vscale x 32 x hal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -1010,8 +1003,7 @@ define <vscale x 32 x half> @vp_roundtozero_nxv32f16_unmasked(<vscale x 32 x hal
; ZVFHMIN-NEXT: vfsgnj.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1461,8 +1453,7 @@ define <vscale x 16 x double> @vp_roundtozero_nxv16f64(<vscale x 16 x double> %v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -1510,8 +1501,7 @@ define <vscale x 16 x double> @vp_roundtozero_nxv16f64(<vscale x 16 x double> %v
; CHECK-NEXT: vfcvt.f.x.v v24, v24, v0.t
; CHECK-NEXT: vsetvli zero, zero, e64, m8, ta, mu
; CHECK-NEXT: vfsgnj.vv v8, v24, v8, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll
index ac74a82e79e6d0..d2cd53ff449e99 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll
@@ -11,8 +11,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0: # %bb.0:
; SPILL-O0-NEXT: addi sp, sp, -32
; SPILL-O0-NEXT: sw ra, 28(sp) # 4-byte Folded Spill
-; SPILL-O0-NEXT: csrr a1, vlenb
-; SPILL-O0-NEXT: slli a1, a1, 1
+; SPILL-O0-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a1
; SPILL-O0-NEXT: sw a0, 8(sp) # 4-byte Folded Spill
; SPILL-O0-NEXT: vmv1r.v v10, v9
@@ -40,8 +39,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0-NEXT: # implicit-def: $v8
; SPILL-O0-NEXT: vsetvli zero, a0, e64, m1, tu, ma
; SPILL-O0-NEXT: vfadd.vv v8, v9, v10
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 1
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: lw ra, 28(sp) # 4-byte Folded Reload
; SPILL-O0-NEXT: addi sp, sp, 32
@@ -52,8 +50,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O2-NEXT: addi sp, sp, -32
; SPILL-O2-NEXT: sw ra, 28(sp) # 4-byte Folded Spill
; SPILL-O2-NEXT: sw s0, 24(sp) # 4-byte Folded Spill
-; SPILL-O2-NEXT: csrr a1, vlenb
-; SPILL-O2-NEXT: slli a1, a1, 1
+; SPILL-O2-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a1
; SPILL-O2-NEXT: mv s0, a0
; SPILL-O2-NEXT: addi a1, sp, 16
@@ -75,8 +72,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O2-NEXT: vl1r.v v9, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: vsetvli zero, s0, e64, m1, ta, ma
; SPILL-O2-NEXT: vfadd.vv v8, v9, v8
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 1
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: lw ra, 28(sp) # 4-byte Folded Reload
; SPILL-O2-NEXT: lw s0, 24(sp) # 4-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector.ll b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector.ll
index f0cd067fd0448d..edecb07498e67f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector.ll
@@ -8,7 +8,7 @@ define <vscale x 1 x i32> @spill_lmul_mf2(<vscale x 1 x i32> %va) nounwind {
; SPILL-O0-LABEL: spill_lmul_mf2:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a0, vlenb
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a0
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
@@ -16,7 +16,7 @@ define <vscale x 1 x i32> @spill_lmul_mf2(<vscale x 1 x i32> %va) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -24,14 +24,14 @@ define <vscale x 1 x i32> @spill_lmul_mf2(<vscale x 1 x i32> %va) nounwind {
; SPILL-O2-LABEL: spill_lmul_mf2:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a0
; SPILL-O2-NEXT: addi a0, sp, 16
; SPILL-O2-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
; SPILL-O2-NEXT: #APP
; SPILL-O2-NEXT: #NO_APP
; SPILL-O2-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -46,7 +46,7 @@ define <vscale x 2 x i32> @spill_lmul_1(<vscale x 2 x i32> %va) nounwind {
; SPILL-O0-LABEL: spill_lmul_1:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a0, vlenb
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a0
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
@@ -54,7 +54,7 @@ define <vscale x 2 x i32> @spill_lmul_1(<vscale x 2 x i32> %va) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -62,14 +62,14 @@ define <vscale x 2 x i32> @spill_lmul_1(<vscale x 2 x i32> %va) nounwind {
; SPILL-O2-LABEL: spill_lmul_1:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a0
; SPILL-O2-NEXT: addi a0, sp, 16
; SPILL-O2-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
; SPILL-O2-NEXT: #APP
; SPILL-O2-NEXT: #NO_APP
; SPILL-O2-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -84,8 +84,7 @@ define <vscale x 4 x i32> @spill_lmul_2(<vscale x 4 x i32> %va) nounwind {
; SPILL-O0-LABEL: spill_lmul_2:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 1
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a0
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs2r.v v8, (a0) # Unknown-size Folded Spill
@@ -93,8 +92,7 @@ define <vscale x 4 x i32> @spill_lmul_2(<vscale x 4 x i32> %va) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 1
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -102,16 +100,14 @@ define <vscale x 4 x i32> @spill_lmul_2(<vscale x 4 x i32> %va) nounwind {
; SPILL-O2-LABEL: spill_lmul_2:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 1
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a0
; SPILL-O2-NEXT: addi a0, sp, 16
; SPILL-O2-NEXT: vs2r.v v8, (a0) # Unknown-size Folded Spill
; SPILL-O2-NEXT: #APP
; SPILL-O2-NEXT: #NO_APP
; SPILL-O2-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 1
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -126,8 +122,7 @@ define <vscale x 8 x i32> @spill_lmul_4(<vscale x 8 x i32> %va) nounwind {
; SPILL-O0-LABEL: spill_lmul_4:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 2
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a0
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs4r.v v8, (a0) # Unknown-size Folded Spill
@@ -135,8 +130,7 @@ define <vscale x 8 x i32> @spill_lmul_4(<vscale x 8 x i32> %va) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl4r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 2
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -144,16 +138,14 @@ define <vscale x 8 x i32> @spill_lmul_4(<vscale x 8 x i32> %va) nounwind {
; SPILL-O2-LABEL: spill_lmul_4:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 2
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a0
; SPILL-O2-NEXT: addi a0, sp, 16
; SPILL-O2-NEXT: vs4r.v v8, (a0) # Unknown-size Folded Spill
; SPILL-O2-NEXT: #APP
; SPILL-O2-NEXT: #NO_APP
; SPILL-O2-NEXT: vl4r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 2
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -168,8 +160,7 @@ define <vscale x 16 x i32> @spill_lmul_8(<vscale x 16 x i32> %va) nounwind {
; SPILL-O0-LABEL: spill_lmul_8:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 3
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a0
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
@@ -177,8 +168,7 @@ define <vscale x 16 x i32> @spill_lmul_8(<vscale x 16 x i32> %va) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 3
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -186,16 +176,14 @@ define <vscale x 16 x i32> @spill_lmul_8(<vscale x 16 x i32> %va) nounwind {
; SPILL-O2-LABEL: spill_lmul_8:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 3
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a0
; SPILL-O2-NEXT: addi a0, sp, 16
; SPILL-O2-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
; SPILL-O2-NEXT: #APP
; SPILL-O2-NEXT: #NO_APP
; SPILL-O2-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 3
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-zvlsseg.ll b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-zvlsseg.ll
index b34952b64f09e2..0106649864a8e7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv32-spill-zvlsseg.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv32-spill-zvlsseg.ll
@@ -10,7 +10,7 @@ define <vscale x 1 x i32> @spill_zvlsseg_nxv1i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-LABEL: spill_zvlsseg_nxv1i32:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a2, vlenb
+; SPILL-O0-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: # implicit-def: $v8_v9
; SPILL-O0-NEXT: vsetvli zero, a1, e32, mf2, tu, ma
@@ -22,7 +22,7 @@ define <vscale x 1 x i32> @spill_zvlsseg_nxv1i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -30,8 +30,7 @@ define <vscale x 1 x i32> @spill_zvlsseg_nxv1i32(ptr %base, i32 %vl) nounwind {
; SPILL-O2-LABEL: spill_zvlsseg_nxv1i32:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a2, vlenb
-; SPILL-O2-NEXT: slli a2, a2, 1
+; SPILL-O2-NEXT: vsetvli a2, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a2
; SPILL-O2-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
; SPILL-O2-NEXT: vlseg2e32.v v8, (a0)
@@ -46,8 +45,7 @@ define <vscale x 1 x i32> @spill_zvlsseg_nxv1i32(ptr %base, i32 %vl) nounwind {
; SPILL-O2-NEXT: vl1r.v v7, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: add a0, a0, a1
; SPILL-O2-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 1
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -85,7 +83,7 @@ define <vscale x 2 x i32> @spill_zvlsseg_nxv2i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-LABEL: spill_zvlsseg_nxv2i32:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a2, vlenb
+; SPILL-O0-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: # implicit-def: $v8_v9
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m1, tu, ma
@@ -97,7 +95,7 @@ define <vscale x 2 x i32> @spill_zvlsseg_nxv2i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -105,8 +103,7 @@ define <vscale x 2 x i32> @spill_zvlsseg_nxv2i32(ptr %base, i32 %vl) nounwind {
; SPILL-O2-LABEL: spill_zvlsseg_nxv2i32:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a2, vlenb
-; SPILL-O2-NEXT: slli a2, a2, 1
+; SPILL-O2-NEXT: vsetvli a2, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a2
; SPILL-O2-NEXT: vsetvli zero, a1, e32, m1, ta, ma
; SPILL-O2-NEXT: vlseg2e32.v v8, (a0)
@@ -121,8 +118,7 @@ define <vscale x 2 x i32> @spill_zvlsseg_nxv2i32(ptr %base, i32 %vl) nounwind {
; SPILL-O2-NEXT: vl1r.v v7, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: add a0, a0, a1
; SPILL-O2-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 1
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -160,8 +156,7 @@ define <vscale x 4 x i32> @spill_zvlsseg_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-LABEL: spill_zvlsseg_nxv4i32:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a2, vlenb
-; SPILL-O0-NEXT: slli a2, a2, 1
+; SPILL-O0-NEXT: vsetvli a2, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: # implicit-def: $v8m2_v10m2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
@@ -173,8 +168,7 @@ define <vscale x 4 x i32> @spill_zvlsseg_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 1
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -182,8 +176,7 @@ define <vscale x 4 x i32> @spill_zvlsseg_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O2-LABEL: spill_zvlsseg_nxv4i32:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a2, vlenb
-; SPILL-O2-NEXT: slli a2, a2, 2
+; SPILL-O2-NEXT: vsetvli a2, zero, e8, m4, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a2
; SPILL-O2-NEXT: vsetvli zero, a1, e32, m2, ta, ma
; SPILL-O2-NEXT: vlseg2e32.v v8, (a0)
@@ -201,8 +194,7 @@ define <vscale x 4 x i32> @spill_zvlsseg_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O2-NEXT: vl2r.v v6, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: add a0, a0, a1
; SPILL-O2-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 2
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -240,8 +232,7 @@ define <vscale x 8 x i32> @spill_zvlsseg_nxv8i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-LABEL: spill_zvlsseg_nxv8i32:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a2, vlenb
-; SPILL-O0-NEXT: slli a2, a2, 2
+; SPILL-O0-NEXT: vsetvli a2, zero, e8, m4, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: # implicit-def: $v8m4_v12m4
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m4, tu, ma
@@ -253,8 +244,7 @@ define <vscale x 8 x i32> @spill_zvlsseg_nxv8i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl4r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 2
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -262,8 +252,7 @@ define <vscale x 8 x i32> @spill_zvlsseg_nxv8i32(ptr %base, i32 %vl) nounwind {
; SPILL-O2-LABEL: spill_zvlsseg_nxv8i32:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a2, vlenb
-; SPILL-O2-NEXT: slli a2, a2, 3
+; SPILL-O2-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a2
; SPILL-O2-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; SPILL-O2-NEXT: vlseg2e32.v v8, (a0)
@@ -281,8 +270,7 @@ define <vscale x 8 x i32> @spill_zvlsseg_nxv8i32(ptr %base, i32 %vl) nounwind {
; SPILL-O2-NEXT: vl4r.v v4, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: add a0, a0, a1
; SPILL-O2-NEXT: vl4r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 3
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -320,8 +308,7 @@ define <vscale x 4 x i32> @spill_zvlsseg3_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-LABEL: spill_zvlsseg3_nxv4i32:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a2, vlenb
-; SPILL-O0-NEXT: slli a2, a2, 1
+; SPILL-O0-NEXT: vsetvli a2, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: # implicit-def: $v8m2_v10m2_v12m2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
@@ -333,8 +320,7 @@ define <vscale x 4 x i32> @spill_zvlsseg3_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 1
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -342,7 +328,7 @@ define <vscale x 4 x i32> @spill_zvlsseg3_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O2-LABEL: spill_zvlsseg3_nxv4i32:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a2, vlenb
+; SPILL-O2-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: li a3, 6
; SPILL-O2-NEXT: mul a2, a2, a3
; SPILL-O2-NEXT: sub sp, sp, a2
@@ -366,7 +352,7 @@ define <vscale x 4 x i32> @spill_zvlsseg3_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O2-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: add a0, a0, a1
; SPILL-O2-NEXT: vl2r.v v10, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: li a1, 6
; SPILL-O2-NEXT: mul a0, a0, a1
; SPILL-O2-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll
index 9054048f2f747a..09cf87d3aff578 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll
@@ -14,8 +14,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0: # %bb.0:
; SPILL-O0-NEXT: addi sp, sp, -48
; SPILL-O0-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; SPILL-O0-NEXT: csrr a1, vlenb
-; SPILL-O0-NEXT: slli a1, a1, 1
+; SPILL-O0-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a1
; SPILL-O0-NEXT: sd a0, 16(sp) # 8-byte Folded Spill
; SPILL-O0-NEXT: vmv1r.v v10, v9
@@ -43,8 +42,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0-NEXT: # implicit-def: $v8
; SPILL-O0-NEXT: vsetvli zero, a0, e64, m1, tu, ma
; SPILL-O0-NEXT: vfadd.vv v8, v9, v10
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 1
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; SPILL-O0-NEXT: addi sp, sp, 48
@@ -55,8 +53,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O2-NEXT: addi sp, sp, -32
; SPILL-O2-NEXT: sd ra, 24(sp) # 8-byte Folded Spill
; SPILL-O2-NEXT: sd s0, 16(sp) # 8-byte Folded Spill
-; SPILL-O2-NEXT: csrr a1, vlenb
-; SPILL-O2-NEXT: slli a1, a1, 1
+; SPILL-O2-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a1
; SPILL-O2-NEXT: mv s0, a0
; SPILL-O2-NEXT: addi a1, sp, 16
@@ -78,8 +75,7 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O2-NEXT: vl1r.v v9, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: vsetvli zero, s0, e64, m1, ta, ma
; SPILL-O2-NEXT: vfadd.vv v8, v9, v8
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 1
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: ld ra, 24(sp) # 8-byte Folded Reload
; SPILL-O2-NEXT: ld s0, 16(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector.ll b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector.ll
index 1e6ff0baddaef2..eccc4cf8ad430f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector.ll
@@ -8,7 +8,7 @@ define <vscale x 1 x i64> @spill_lmul_1(<vscale x 1 x i64> %va) nounwind {
; SPILL-O0-LABEL: spill_lmul_1:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a0, vlenb
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a0
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
@@ -16,7 +16,7 @@ define <vscale x 1 x i64> @spill_lmul_1(<vscale x 1 x i64> %va) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -24,14 +24,14 @@ define <vscale x 1 x i64> @spill_lmul_1(<vscale x 1 x i64> %va) nounwind {
; SPILL-O2-LABEL: spill_lmul_1:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a0
; SPILL-O2-NEXT: addi a0, sp, 16
; SPILL-O2-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
; SPILL-O2-NEXT: #APP
; SPILL-O2-NEXT: #NO_APP
; SPILL-O2-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -46,8 +46,7 @@ define <vscale x 2 x i64> @spill_lmul_2(<vscale x 2 x i64> %va) nounwind {
; SPILL-O0-LABEL: spill_lmul_2:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 1
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a0
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs2r.v v8, (a0) # Unknown-size Folded Spill
@@ -55,8 +54,7 @@ define <vscale x 2 x i64> @spill_lmul_2(<vscale x 2 x i64> %va) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 1
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -64,16 +62,14 @@ define <vscale x 2 x i64> @spill_lmul_2(<vscale x 2 x i64> %va) nounwind {
; SPILL-O2-LABEL: spill_lmul_2:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 1
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a0
; SPILL-O2-NEXT: addi a0, sp, 16
; SPILL-O2-NEXT: vs2r.v v8, (a0) # Unknown-size Folded Spill
; SPILL-O2-NEXT: #APP
; SPILL-O2-NEXT: #NO_APP
; SPILL-O2-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 1
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -88,8 +84,7 @@ define <vscale x 4 x i64> @spill_lmul_4(<vscale x 4 x i64> %va) nounwind {
; SPILL-O0-LABEL: spill_lmul_4:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 2
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a0
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs4r.v v8, (a0) # Unknown-size Folded Spill
@@ -97,8 +92,7 @@ define <vscale x 4 x i64> @spill_lmul_4(<vscale x 4 x i64> %va) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl4r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 2
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -106,16 +100,14 @@ define <vscale x 4 x i64> @spill_lmul_4(<vscale x 4 x i64> %va) nounwind {
; SPILL-O2-LABEL: spill_lmul_4:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 2
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a0
; SPILL-O2-NEXT: addi a0, sp, 16
; SPILL-O2-NEXT: vs4r.v v8, (a0) # Unknown-size Folded Spill
; SPILL-O2-NEXT: #APP
; SPILL-O2-NEXT: #NO_APP
; SPILL-O2-NEXT: vl4r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 2
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -130,8 +122,7 @@ define <vscale x 8 x i64> @spill_lmul_8(<vscale x 8 x i64> %va) nounwind {
; SPILL-O0-LABEL: spill_lmul_8:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 3
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a0
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
@@ -139,8 +130,7 @@ define <vscale x 8 x i64> @spill_lmul_8(<vscale x 8 x i64> %va) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 3
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -148,16 +138,14 @@ define <vscale x 8 x i64> @spill_lmul_8(<vscale x 8 x i64> %va) nounwind {
; SPILL-O2-LABEL: spill_lmul_8:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 3
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a0
; SPILL-O2-NEXT: addi a0, sp, 16
; SPILL-O2-NEXT: vs8r.v v8, (a0) # Unknown-size Folded Spill
; SPILL-O2-NEXT: #APP
; SPILL-O2-NEXT: #NO_APP
; SPILL-O2-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 3
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-zvlsseg.ll b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-zvlsseg.ll
index 361adb55ef12f1..4923ba1df4e471 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rv64-spill-zvlsseg.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rv64-spill-zvlsseg.ll
@@ -10,7 +10,7 @@ define <vscale x 1 x i32> @spill_zvlsseg_nxv1i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-LABEL: spill_zvlsseg_nxv1i32:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a2, vlenb
+; SPILL-O0-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: # implicit-def: $v8_v9
; SPILL-O0-NEXT: vsetvli zero, a1, e32, mf2, tu, ma
@@ -22,7 +22,7 @@ define <vscale x 1 x i32> @spill_zvlsseg_nxv1i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -30,8 +30,7 @@ define <vscale x 1 x i32> @spill_zvlsseg_nxv1i32(ptr %base, i64 %vl) nounwind {
; SPILL-O2-LABEL: spill_zvlsseg_nxv1i32:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a2, vlenb
-; SPILL-O2-NEXT: slli a2, a2, 1
+; SPILL-O2-NEXT: vsetvli a2, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a2
; SPILL-O2-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
; SPILL-O2-NEXT: vlseg2e32.v v8, (a0)
@@ -46,8 +45,7 @@ define <vscale x 1 x i32> @spill_zvlsseg_nxv1i32(ptr %base, i64 %vl) nounwind {
; SPILL-O2-NEXT: vl1r.v v7, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: add a0, a0, a1
; SPILL-O2-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 1
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -85,7 +83,7 @@ define <vscale x 2 x i32> @spill_zvlsseg_nxv2i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-LABEL: spill_zvlsseg_nxv2i32:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a2, vlenb
+; SPILL-O0-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: # implicit-def: $v8_v9
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m1, tu, ma
@@ -97,7 +95,7 @@ define <vscale x 2 x i32> @spill_zvlsseg_nxv2i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -105,8 +103,7 @@ define <vscale x 2 x i32> @spill_zvlsseg_nxv2i32(ptr %base, i64 %vl) nounwind {
; SPILL-O2-LABEL: spill_zvlsseg_nxv2i32:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a2, vlenb
-; SPILL-O2-NEXT: slli a2, a2, 1
+; SPILL-O2-NEXT: vsetvli a2, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a2
; SPILL-O2-NEXT: vsetvli zero, a1, e32, m1, ta, ma
; SPILL-O2-NEXT: vlseg2e32.v v8, (a0)
@@ -121,8 +118,7 @@ define <vscale x 2 x i32> @spill_zvlsseg_nxv2i32(ptr %base, i64 %vl) nounwind {
; SPILL-O2-NEXT: vl1r.v v7, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: add a0, a0, a1
; SPILL-O2-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 1
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -160,8 +156,7 @@ define <vscale x 4 x i32> @spill_zvlsseg_nxv4i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-LABEL: spill_zvlsseg_nxv4i32:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a2, vlenb
-; SPILL-O0-NEXT: slli a2, a2, 1
+; SPILL-O0-NEXT: vsetvli a2, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: # implicit-def: $v8m2_v10m2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
@@ -173,8 +168,7 @@ define <vscale x 4 x i32> @spill_zvlsseg_nxv4i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 1
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -182,8 +176,7 @@ define <vscale x 4 x i32> @spill_zvlsseg_nxv4i32(ptr %base, i64 %vl) nounwind {
; SPILL-O2-LABEL: spill_zvlsseg_nxv4i32:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a2, vlenb
-; SPILL-O2-NEXT: slli a2, a2, 2
+; SPILL-O2-NEXT: vsetvli a2, zero, e8, m4, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a2
; SPILL-O2-NEXT: vsetvli zero, a1, e32, m2, ta, ma
; SPILL-O2-NEXT: vlseg2e32.v v8, (a0)
@@ -201,8 +194,7 @@ define <vscale x 4 x i32> @spill_zvlsseg_nxv4i32(ptr %base, i64 %vl) nounwind {
; SPILL-O2-NEXT: vl2r.v v6, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: add a0, a0, a1
; SPILL-O2-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 2
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -240,8 +232,7 @@ define <vscale x 8 x i32> @spill_zvlsseg_nxv8i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-LABEL: spill_zvlsseg_nxv8i32:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a2, vlenb
-; SPILL-O0-NEXT: slli a2, a2, 2
+; SPILL-O0-NEXT: vsetvli a2, zero, e8, m4, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: # implicit-def: $v8m4_v12m4
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m4, tu, ma
@@ -253,8 +244,7 @@ define <vscale x 8 x i32> @spill_zvlsseg_nxv8i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl4r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 2
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -262,8 +252,7 @@ define <vscale x 8 x i32> @spill_zvlsseg_nxv8i32(ptr %base, i64 %vl) nounwind {
; SPILL-O2-LABEL: spill_zvlsseg_nxv8i32:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a2, vlenb
-; SPILL-O2-NEXT: slli a2, a2, 3
+; SPILL-O2-NEXT: vsetvli a2, zero, e8, m8, ta, ma
; SPILL-O2-NEXT: sub sp, sp, a2
; SPILL-O2-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; SPILL-O2-NEXT: vlseg2e32.v v8, (a0)
@@ -281,8 +270,7 @@ define <vscale x 8 x i32> @spill_zvlsseg_nxv8i32(ptr %base, i64 %vl) nounwind {
; SPILL-O2-NEXT: vl4r.v v4, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: add a0, a0, a1
; SPILL-O2-NEXT: vl4r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
-; SPILL-O2-NEXT: slli a0, a0, 3
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; SPILL-O2-NEXT: add sp, sp, a0
; SPILL-O2-NEXT: addi sp, sp, 16
; SPILL-O2-NEXT: ret
@@ -320,8 +308,7 @@ define <vscale x 4 x i32> @spill_zvlsseg3_nxv4i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-LABEL: spill_zvlsseg3_nxv4i32:
; SPILL-O0: # %bb.0: # %entry
; SPILL-O0-NEXT: addi sp, sp, -16
-; SPILL-O0-NEXT: csrr a2, vlenb
-; SPILL-O0-NEXT: slli a2, a2, 1
+; SPILL-O0-NEXT: vsetvli a2, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: # implicit-def: $v8m2_v10m2_v12m2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
@@ -333,8 +320,7 @@ define <vscale x 4 x i32> @spill_zvlsseg3_nxv4i32(ptr %base, i64 %vl) nounwind {
; SPILL-O0-NEXT: #NO_APP
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
-; SPILL-O0-NEXT: csrr a0, vlenb
-; SPILL-O0-NEXT: slli a0, a0, 1
+; SPILL-O0-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; SPILL-O0-NEXT: add sp, sp, a0
; SPILL-O0-NEXT: addi sp, sp, 16
; SPILL-O0-NEXT: ret
@@ -342,7 +328,7 @@ define <vscale x 4 x i32> @spill_zvlsseg3_nxv4i32(ptr %base, i64 %vl) nounwind {
; SPILL-O2-LABEL: spill_zvlsseg3_nxv4i32:
; SPILL-O2: # %bb.0: # %entry
; SPILL-O2-NEXT: addi sp, sp, -16
-; SPILL-O2-NEXT: csrr a2, vlenb
+; SPILL-O2-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: li a3, 6
; SPILL-O2-NEXT: mul a2, a2, a3
; SPILL-O2-NEXT: sub sp, sp, a2
@@ -366,7 +352,7 @@ define <vscale x 4 x i32> @spill_zvlsseg3_nxv4i32(ptr %base, i64 %vl) nounwind {
; SPILL-O2-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
; SPILL-O2-NEXT: add a0, a0, a1
; SPILL-O2-NEXT: vl2r.v v10, (a0) # Unknown-size Folded Reload
-; SPILL-O2-NEXT: csrr a0, vlenb
+; SPILL-O2-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; SPILL-O2-NEXT: li a1, 6
; SPILL-O2-NEXT: mul a0, a0, a1
; SPILL-O2-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll b/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll
index 7a2e40e86f0027..d5502941d5f39a 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rvv-args-by-mem.ll
@@ -32,7 +32,7 @@ define <vscale x 16 x i32> @foo(i32 %0, i32 %1, i32 %2, i32 %3, i32 %4, i32 %5,
; CHECK-NEXT: .cfi_offset s1, -24
; CHECK-NEXT: addi s0, sp, 96
; CHECK-NEXT: .cfi_def_cfa s0, 0
-; CHECK-NEXT: csrr t0, vlenb
+; CHECK-NEXT: vsetvli t0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli t0, t0, 4
; CHECK-NEXT: sub sp, sp, t0
; CHECK-NEXT: andi sp, sp, -64
diff --git a/llvm/test/CodeGen/RISCV/rvv/rvv-framelayout.ll b/llvm/test/CodeGen/RISCV/rvv/rvv-framelayout.ll
index ab644599448856..fe37bc669d9a10 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rvv-framelayout.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rvv-framelayout.ll
@@ -8,8 +8,7 @@ define void @rvv_vla(i64 %n, i64 %i) nounwind {
; CHECK-NEXT: sd ra, 24(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 16(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 32
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 2
+; CHECK-NEXT: vsetvli a2, zero, e8, m4, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: slli a0, a0, 2
; CHECK-NEXT: addi a0, a0, 15
@@ -53,8 +52,7 @@ define void @rvv_overaligned() nounwind {
; CHECK-NEXT: sd ra, 120(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 112(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 128
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: andi sp, sp, -64
; CHECK-NEXT: csrr a0, vlenb
@@ -91,8 +89,7 @@ define void @rvv_vla_and_overaligned(i64 %n, i64 %i) nounwind {
; CHECK-NEXT: sd s0, 128(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s1, 120(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 144
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 2
+; CHECK-NEXT: vsetvli a2, zero, e8, m4, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: andi sp, sp, -64
; CHECK-NEXT: mv s1, sp
diff --git a/llvm/test/CodeGen/RISCV/rvv/rvv-out-arguments.ll b/llvm/test/CodeGen/RISCV/rvv/rvv-out-arguments.ll
index 3850d5d6d2c74e..782fe2094b44df 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rvv-out-arguments.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/rvv-out-arguments.ll
@@ -9,8 +9,7 @@ define dso_local void @lots_args(i32 signext %x0, i32 signext %x1, <vscale x 16
; CHECK-NEXT: sd ra, 56(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 48(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 64
-; CHECK-NEXT: csrr t0, vlenb
-; CHECK-NEXT: slli t0, t0, 3
+; CHECK-NEXT: vsetvli t0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, t0
; CHECK-NEXT: ld t0, 8(s0)
; CHECK-NEXT: ld t1, 0(s0)
@@ -69,8 +68,7 @@ define dso_local signext i32 @main() #0 {
; CHECK-NEXT: sd s0, 96(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s1, 88(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 112
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: sw zero, -36(s0)
; CHECK-NEXT: vsetivli zero, 2, e64, m1, ta, ma
diff --git a/llvm/test/CodeGen/RISCV/rvv/rvv-stack-align.mir b/llvm/test/CodeGen/RISCV/rvv/rvv-stack-align.mir
index 749bd4c13879b6..2753e1e4089a25 100644
--- a/llvm/test/CodeGen/RISCV/rvv/rvv-stack-align.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/rvv-stack-align.mir
@@ -18,15 +18,13 @@
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: sw ra, 44(sp) # 4-byte Folded Spill
- ; RV32-NEXT: csrr a0, vlenb
- ; RV32-NEXT: slli a0, a0, 1
+ ; RV32-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: addi a0, sp, 32
; RV32-NEXT: addi a1, sp, 16
; RV32-NEXT: addi a2, sp, 8
; RV32-NEXT: call extern
- ; RV32-NEXT: csrr a0, vlenb
- ; RV32-NEXT: slli a0, a0, 1
+ ; RV32-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: lw ra, 44(sp) # 4-byte Folded Reload
; RV32-NEXT: addi sp, sp, 48
@@ -36,15 +34,13 @@
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -48
; RV64-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
- ; RV64-NEXT: csrr a0, vlenb
- ; RV64-NEXT: slli a0, a0, 1
+ ; RV64-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV64-NEXT: sub sp, sp, a0
; RV64-NEXT: addi a0, sp, 32
; RV64-NEXT: addi a1, sp, 16
; RV64-NEXT: addi a2, sp, 8
; RV64-NEXT: call extern
- ; RV64-NEXT: csrr a0, vlenb
- ; RV64-NEXT: slli a0, a0, 1
+ ; RV64-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; RV64-NEXT: addi sp, sp, 48
@@ -61,15 +57,13 @@
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -48
; RV32-NEXT: sw ra, 44(sp) # 4-byte Folded Spill
- ; RV32-NEXT: csrr a0, vlenb
- ; RV32-NEXT: slli a0, a0, 1
+ ; RV32-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: addi a0, sp, 32
; RV32-NEXT: addi a1, sp, 16
; RV32-NEXT: addi a2, sp, 8
; RV32-NEXT: call extern
- ; RV32-NEXT: csrr a0, vlenb
- ; RV32-NEXT: slli a0, a0, 1
+ ; RV32-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: lw ra, 44(sp) # 4-byte Folded Reload
; RV32-NEXT: addi sp, sp, 48
@@ -79,15 +73,13 @@
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -48
; RV64-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
- ; RV64-NEXT: csrr a0, vlenb
- ; RV64-NEXT: slli a0, a0, 1
+ ; RV64-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV64-NEXT: sub sp, sp, a0
; RV64-NEXT: addi a0, sp, 32
; RV64-NEXT: addi a1, sp, 16
; RV64-NEXT: addi a2, sp, 8
; RV64-NEXT: call extern
- ; RV64-NEXT: csrr a0, vlenb
- ; RV64-NEXT: slli a0, a0, 1
+ ; RV64-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; RV64-NEXT: addi sp, sp, 48
@@ -106,8 +98,7 @@
; RV32-NEXT: sw ra, 44(sp) # 4-byte Folded Spill
; RV32-NEXT: sw s0, 40(sp) # 4-byte Folded Spill
; RV32-NEXT: addi s0, sp, 48
- ; RV32-NEXT: csrr a0, vlenb
- ; RV32-NEXT: slli a0, a0, 2
+ ; RV32-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; RV32-NEXT: sub sp, sp, a0
; RV32-NEXT: andi sp, sp, -32
; RV32-NEXT: addi a0, sp, 32
@@ -126,8 +117,7 @@
; RV64-NEXT: sd ra, 72(sp) # 8-byte Folded Spill
; RV64-NEXT: sd s0, 64(sp) # 8-byte Folded Spill
; RV64-NEXT: addi s0, sp, 80
- ; RV64-NEXT: csrr a0, vlenb
- ; RV64-NEXT: slli a0, a0, 2
+ ; RV64-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; RV64-NEXT: sub sp, sp, a0
; RV64-NEXT: andi sp, sp, -32
; RV64-NEXT: addi a0, sp, 64
diff --git a/llvm/test/CodeGen/RISCV/rvv/scalar-stack-align.ll b/llvm/test/CodeGen/RISCV/rvv/scalar-stack-align.ll
index fcb5f07664aa5a..b1f5d63ed8080f 100644
--- a/llvm/test/CodeGen/RISCV/rvv/scalar-stack-align.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/scalar-stack-align.ll
@@ -13,14 +13,12 @@ define ptr @scalar_stack_align16() nounwind {
; RV32-ZVE64: # %bb.0:
; RV32-ZVE64-NEXT: addi sp, sp, -48
; RV32-ZVE64-NEXT: sw ra, 44(sp) # 4-byte Folded Spill
-; RV32-ZVE64-NEXT: csrr a0, vlenb
-; RV32-ZVE64-NEXT: slli a0, a0, 1
+; RV32-ZVE64-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32-ZVE64-NEXT: sub sp, sp, a0
; RV32-ZVE64-NEXT: addi a0, sp, 32
; RV32-ZVE64-NEXT: call extern
; RV32-ZVE64-NEXT: addi a0, sp, 16
-; RV32-ZVE64-NEXT: csrr a1, vlenb
-; RV32-ZVE64-NEXT: slli a1, a1, 1
+; RV32-ZVE64-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; RV32-ZVE64-NEXT: add sp, sp, a1
; RV32-ZVE64-NEXT: lw ra, 44(sp) # 4-byte Folded Reload
; RV32-ZVE64-NEXT: addi sp, sp, 48
@@ -30,14 +28,12 @@ define ptr @scalar_stack_align16() nounwind {
; RV64-ZVE64: # %bb.0:
; RV64-ZVE64-NEXT: addi sp, sp, -48
; RV64-ZVE64-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; RV64-ZVE64-NEXT: csrr a0, vlenb
-; RV64-ZVE64-NEXT: slli a0, a0, 1
+; RV64-ZVE64-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV64-ZVE64-NEXT: sub sp, sp, a0
; RV64-ZVE64-NEXT: addi a0, sp, 32
; RV64-ZVE64-NEXT: call extern
; RV64-ZVE64-NEXT: addi a0, sp, 16
-; RV64-ZVE64-NEXT: csrr a1, vlenb
-; RV64-ZVE64-NEXT: slli a1, a1, 1
+; RV64-ZVE64-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; RV64-ZVE64-NEXT: add sp, sp, a1
; RV64-ZVE64-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; RV64-ZVE64-NEXT: addi sp, sp, 48
@@ -47,12 +43,12 @@ define ptr @scalar_stack_align16() nounwind {
; RV32-V: # %bb.0:
; RV32-V-NEXT: addi sp, sp, -48
; RV32-V-NEXT: sw ra, 44(sp) # 4-byte Folded Spill
-; RV32-V-NEXT: csrr a0, vlenb
+; RV32-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-V-NEXT: sub sp, sp, a0
; RV32-V-NEXT: addi a0, sp, 32
; RV32-V-NEXT: call extern
; RV32-V-NEXT: addi a0, sp, 16
-; RV32-V-NEXT: csrr a1, vlenb
+; RV32-V-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-V-NEXT: add sp, sp, a1
; RV32-V-NEXT: lw ra, 44(sp) # 4-byte Folded Reload
; RV32-V-NEXT: addi sp, sp, 48
@@ -62,12 +58,12 @@ define ptr @scalar_stack_align16() nounwind {
; RV64-V: # %bb.0:
; RV64-V-NEXT: addi sp, sp, -48
; RV64-V-NEXT: sd ra, 40(sp) # 8-byte Folded Spill
-; RV64-V-NEXT: csrr a0, vlenb
+; RV64-V-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-V-NEXT: sub sp, sp, a0
; RV64-V-NEXT: addi a0, sp, 32
; RV64-V-NEXT: call extern
; RV64-V-NEXT: addi a0, sp, 16
-; RV64-V-NEXT: csrr a1, vlenb
+; RV64-V-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-V-NEXT: add sp, sp, a1
; RV64-V-NEXT: ld ra, 40(sp) # 8-byte Folded Reload
; RV64-V-NEXT: addi sp, sp, 48
diff --git a/llvm/test/CodeGen/RISCV/rvv/setcc-fp-vp.ll b/llvm/test/CodeGen/RISCV/rvv/setcc-fp-vp.ll
index 5ba4efa8458c79..2c4617c42f9a60 100644
--- a/llvm/test/CodeGen/RISCV/rvv/setcc-fp-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/setcc-fp-vp.ll
@@ -1469,7 +1469,7 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64bf16(<vscale x 64 x bfloat> %va, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 1
; CHECK-NEXT: mv a3, a1
; CHECK-NEXT: slli a1, a1, 4
@@ -1648,7 +1648,7 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64bf16(<vscale x 64 x bfloat> %va, <vs
; CHECK-NEXT: vsetvli zero, a0, e8, m1, ta, ma
; CHECK-NEXT: vslideup.vx v8, v6, a1
; CHECK-NEXT: vmv.v.v v0, v8
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 1
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: slli a0, a0, 4
@@ -3733,7 +3733,7 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64f16(<vscale x 64 x half> %va, <vscal
; ZVFH: # %bb.0:
; ZVFH-NEXT: addi sp, sp, -16
; ZVFH-NEXT: .cfi_def_cfa_offset 16
-; ZVFH-NEXT: csrr a1, vlenb
+; ZVFH-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFH-NEXT: slli a1, a1, 4
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -3778,7 +3778,7 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64f16(<vscale x 64 x half> %va, <vscal
; ZVFH-NEXT: vsetvli zero, a0, e8, m1, ta, ma
; ZVFH-NEXT: vslideup.vx v16, v7, a1
; ZVFH-NEXT: vmv.v.v v0, v16
-; ZVFH-NEXT: csrr a0, vlenb
+; ZVFH-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFH-NEXT: slli a0, a0, 4
; ZVFH-NEXT: add sp, sp, a0
; ZVFH-NEXT: addi sp, sp, 16
@@ -3788,7 +3788,7 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64f16(<vscale x 64 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 1
; ZVFHMIN-NEXT: mv a3, a1
; ZVFHMIN-NEXT: slli a1, a1, 4
@@ -3967,7 +3967,7 @@ define <vscale x 64 x i1> @fcmp_oeq_vv_nxv64f16(<vscale x 64 x half> %va, <vscal
; ZVFHMIN-NEXT: vsetvli zero, a0, e8, m1, ta, ma
; ZVFHMIN-NEXT: vslideup.vx v8, v6, a1
; ZVFHMIN-NEXT: vmv.v.v v0, v8
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 1
; ZVFHMIN-NEXT: mv a1, a0
; ZVFHMIN-NEXT: slli a0, a0, 4
diff --git a/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll b/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll
index 7315fd6cfbbecb..29be916fec27d7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/setcc-int-vp.ll
@@ -1088,7 +1088,7 @@ define <vscale x 128 x i1> @icmp_eq_vv_nxv128i8(<vscale x 128 x i8> %va, <vscale
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1129,7 +1129,7 @@ define <vscale x 128 x i1> @icmp_eq_vv_nxv128i8(<vscale x 128 x i8> %va, <vscale
; CHECK-NEXT: vmseq.vv v16, v8, v24, v0.t
; CHECK-NEXT: vmv1r.v v0, v16
; CHECK-NEXT: vmv1r.v v8, v6
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -2238,7 +2238,7 @@ define <vscale x 32 x i1> @icmp_eq_vv_nxv32i32(<vscale x 32 x i32> %va, <vscale
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -2283,7 +2283,7 @@ define <vscale x 32 x i1> @icmp_eq_vv_nxv32i32(<vscale x 32 x i32> %va, <vscale
; CHECK-NEXT: vsetvli zero, a0, e8, mf2, ta, ma
; CHECK-NEXT: vslideup.vx v16, v7, a1
; CHECK-NEXT: vmv1r.v v0, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/stack-folding.ll b/llvm/test/CodeGen/RISCV/rvv/stack-folding.ll
index 70fcabe59889fb..64637a278ca334 100644
--- a/llvm/test/CodeGen/RISCV/rvv/stack-folding.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/stack-folding.ll
@@ -9,7 +9,7 @@ define i64 @i64(<vscale x 1 x i64> %v, i1 %c) {
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; RV32-NEXT: addi a1, sp, 16
@@ -29,7 +29,7 @@ define i64 @i64(<vscale x 1 x i64> %v, i1 %c) {
; RV32-NEXT: .LBB0_2: # %falsebb
; RV32-NEXT: li a1, 0
; RV32-NEXT: .LBB0_3: # %falsebb
-; RV32-NEXT: csrr a2, vlenb
+; RV32-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; RV32-NEXT: add sp, sp, a2
; RV32-NEXT: addi sp, sp, 16
; RV32-NEXT: ret
@@ -38,7 +38,7 @@ define i64 @i64(<vscale x 1 x i64> %v, i1 %c) {
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; RV64-NEXT: addi a1, sp, 16
@@ -50,7 +50,7 @@ define i64 @i64(<vscale x 1 x i64> %v, i1 %c) {
; RV64-NEXT: # %bb.1: # %truebb
; RV64-NEXT: ld a0, 16(sp) # 8-byte Folded Reload
; RV64-NEXT: .LBB0_2: # %falsebb
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: add sp, sp, a1
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -68,7 +68,7 @@ define i32 @i32(<vscale x 2 x i32> %v, i1 %c) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; CHECK-NEXT: addi a1, sp, 16
@@ -80,7 +80,7 @@ define i32 @i32(<vscale x 2 x i32> %v, i1 %c) {
; CHECK-NEXT: # %bb.1: # %truebb
; CHECK-NEXT: lw a0, 16(sp) # 8-byte Folded Reload
; CHECK-NEXT: .LBB1_2: # %falsebb
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: add sp, sp, a1
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -98,7 +98,7 @@ define i16 @i16(<vscale x 4 x i16> %v, i1 %c) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; CHECK-NEXT: addi a1, sp, 16
@@ -110,7 +110,7 @@ define i16 @i16(<vscale x 4 x i16> %v, i1 %c) {
; CHECK-NEXT: # %bb.1: # %truebb
; CHECK-NEXT: lh a0, 16(sp) # 8-byte Folded Reload
; CHECK-NEXT: .LBB2_2: # %falsebb
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: add sp, sp, a1
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -128,7 +128,7 @@ define i8 @i8(<vscale x 8 x i8> %v, i1 %c) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; CHECK-NEXT: addi a1, sp, 16
@@ -140,7 +140,7 @@ define i8 @i8(<vscale x 8 x i8> %v, i1 %c) {
; CHECK-NEXT: # %bb.1: # %truebb
; CHECK-NEXT: lb a0, 16(sp) # 8-byte Folded Reload
; CHECK-NEXT: .LBB3_2: # %falsebb
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: add sp, sp, a1
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -158,7 +158,7 @@ define double @f64(<vscale x 1 x double> %v, i1 %c) {
; RV32: # %bb.0:
; RV32-NEXT: addi sp, sp, -16
; RV32-NEXT: .cfi_def_cfa_offset 16
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; RV32-NEXT: addi a1, sp, 16
@@ -173,7 +173,7 @@ define double @f64(<vscale x 1 x double> %v, i1 %c) {
; RV32-NEXT: .LBB4_2: # %falsebb
; RV32-NEXT: fcvt.d.w fa0, zero
; RV32-NEXT: .LBB4_3: # %falsebb
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: addi sp, sp, 16
; RV32-NEXT: ret
@@ -182,7 +182,7 @@ define double @f64(<vscale x 1 x double> %v, i1 %c) {
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; RV64-NEXT: addi a1, sp, 16
@@ -197,7 +197,7 @@ define double @f64(<vscale x 1 x double> %v, i1 %c) {
; RV64-NEXT: .LBB4_2: # %falsebb
; RV64-NEXT: fmv.d.x fa0, zero
; RV64-NEXT: .LBB4_3: # %falsebb
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -215,7 +215,7 @@ define float @f32(<vscale x 2 x float> %v, i1 %c) {
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; CHECK-NEXT: addi a1, sp, 16
@@ -230,7 +230,7 @@ define float @f32(<vscale x 2 x float> %v, i1 %c) {
; CHECK-NEXT: .LBB5_2: # %falsebb
; CHECK-NEXT: fmv.w.x fa0, zero
; CHECK-NEXT: .LBB5_3: # %falsebb
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -248,7 +248,7 @@ define half @f16(<vscale x 1 x half> %v, i1 %c) {
; ZFMIN: # %bb.0:
; ZFMIN-NEXT: addi sp, sp, -16
; ZFMIN-NEXT: .cfi_def_cfa_offset 16
-; ZFMIN-NEXT: csrr a1, vlenb
+; ZFMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZFMIN-NEXT: sub sp, sp, a1
; ZFMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; ZFMIN-NEXT: addi a1, sp, 16
@@ -263,7 +263,7 @@ define half @f16(<vscale x 1 x half> %v, i1 %c) {
; ZFMIN-NEXT: .LBB6_2: # %falsebb
; ZFMIN-NEXT: fmv.h.x fa0, zero
; ZFMIN-NEXT: .LBB6_3: # %falsebb
-; ZFMIN-NEXT: csrr a0, vlenb
+; ZFMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZFMIN-NEXT: add sp, sp, a0
; ZFMIN-NEXT: addi sp, sp, 16
; ZFMIN-NEXT: ret
@@ -272,7 +272,7 @@ define half @f16(<vscale x 1 x half> %v, i1 %c) {
; NOZFMIN: # %bb.0:
; NOZFMIN-NEXT: addi sp, sp, -16
; NOZFMIN-NEXT: .cfi_def_cfa_offset 16
-; NOZFMIN-NEXT: csrr a1, vlenb
+; NOZFMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; NOZFMIN-NEXT: sub sp, sp, a1
; NOZFMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; NOZFMIN-NEXT: addi a1, sp, 16
@@ -290,7 +290,7 @@ define half @f16(<vscale x 1 x half> %v, i1 %c) {
; NOZFMIN-NEXT: lui a0, 1048560
; NOZFMIN-NEXT: .LBB6_3: # %falsebb
; NOZFMIN-NEXT: fmv.w.x fa0, a0
-; NOZFMIN-NEXT: csrr a0, vlenb
+; NOZFMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOZFMIN-NEXT: add sp, sp, a0
; NOZFMIN-NEXT: addi sp, sp, 16
; NOZFMIN-NEXT: ret
@@ -308,7 +308,7 @@ define bfloat @bf16(<vscale x 2 x bfloat> %v, i1 %c) {
; ZFMIN: # %bb.0:
; ZFMIN-NEXT: addi sp, sp, -16
; ZFMIN-NEXT: .cfi_def_cfa_offset 16
-; ZFMIN-NEXT: csrr a1, vlenb
+; ZFMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZFMIN-NEXT: sub sp, sp, a1
; ZFMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; ZFMIN-NEXT: addi a1, sp, 16
@@ -324,7 +324,7 @@ define bfloat @bf16(<vscale x 2 x bfloat> %v, i1 %c) {
; ZFMIN-NEXT: .LBB7_2: # %falsebb
; ZFMIN-NEXT: fmv.h.x fa0, zero
; ZFMIN-NEXT: .LBB7_3: # %falsebb
-; ZFMIN-NEXT: csrr a0, vlenb
+; ZFMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZFMIN-NEXT: add sp, sp, a0
; ZFMIN-NEXT: addi sp, sp, 16
; ZFMIN-NEXT: ret
@@ -333,7 +333,7 @@ define bfloat @bf16(<vscale x 2 x bfloat> %v, i1 %c) {
; NOZFMIN: # %bb.0:
; NOZFMIN-NEXT: addi sp, sp, -16
; NOZFMIN-NEXT: .cfi_def_cfa_offset 16
-; NOZFMIN-NEXT: csrr a1, vlenb
+; NOZFMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; NOZFMIN-NEXT: sub sp, sp, a1
; NOZFMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 1 * vlenb
; NOZFMIN-NEXT: addi a1, sp, 16
@@ -351,7 +351,7 @@ define bfloat @bf16(<vscale x 2 x bfloat> %v, i1 %c) {
; NOZFMIN-NEXT: lui a0, 1048560
; NOZFMIN-NEXT: .LBB7_3: # %falsebb
; NOZFMIN-NEXT: fmv.w.x fa0, a0
-; NOZFMIN-NEXT: csrr a0, vlenb
+; NOZFMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; NOZFMIN-NEXT: add sp, sp, a0
; NOZFMIN-NEXT: addi sp, sp, 16
; NOZFMIN-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll b/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll
index abdf9ab09bb9ae..9a0ade65dbb0a1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/strided-vpstore.ll
@@ -624,8 +624,7 @@ define void @strided_store_nxv17f64(<vscale x 17 x double> %v, ptr %ptr, i32 sig
; CHECK-NEXT: .LBB48_4:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr t0, vlenb
-; CHECK-NEXT: slli t0, t0, 3
+; CHECK-NEXT: vsetvli t0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, t0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8re64.v v0, (a0)
@@ -662,8 +661,7 @@ define void @strided_store_nxv17f64(<vscale x 17 x double> %v, ptr %ptr, i32 sig
; CHECK-NEXT: vl8r.v v8, (a3) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vsse64.v v8, (a1), a2, v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave-load.ll b/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave-load.ll
index 54373d94f8f5f3..d702ccebb560a6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave-load.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave-load.ll
@@ -103,7 +103,7 @@ define {<vscale x 8 x i64>, <vscale x 8 x i64>} @vector_deinterleave_load_nxv8i6
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -177,7 +177,7 @@ define {<vscale x 8 x i64>, <vscale x 8 x i64>} @vector_deinterleave_load_nxv8i6
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vmv4r.v v20, v8
; CHECK-NEXT: vmv8r.v v8, v24
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave.ll b/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave.ll
index 28f7eb4329e3b9..ae80f431100fe7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-deinterleave.ll
@@ -95,8 +95,7 @@ define {<vscale x 64 x i1>, <vscale x 64 x i1>} @vector_deinterleave_nxv64i1_nxv
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v12, v8
@@ -121,8 +120,7 @@ define {<vscale x 64 x i1>, <vscale x 64 x i1>} @vector_deinterleave_nxv64i1_nxv
; CHECK-NEXT: vnsrl.wi v20, v24, 8
; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: vmsne.vi v8, v16, 0
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -181,7 +179,7 @@ define {<vscale x 8 x i64>, <vscale x 8 x i64>} @vector_deinterleave_nxv8i64_nxv
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -240,7 +238,7 @@ define {<vscale x 8 x i64>, <vscale x 8 x i64>} @vector_deinterleave_nxv8i64_nxv
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv8r.v v8, v16
; CHECK-NEXT: vmv8r.v v16, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -435,7 +433,7 @@ define {<vscale x 8 x double>, <vscale x 8 x double>} @vector_deinterleave_nxv8f
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -494,7 +492,7 @@ define {<vscale x 8 x double>, <vscale x 8 x double>} @vector_deinterleave_nxv8f
; CHECK-NEXT: vmv4r.v v4, v8
; CHECK-NEXT: vmv8r.v v8, v16
; CHECK-NEXT: vmv8r.v v16, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-interleave-store.ll b/llvm/test/CodeGen/RISCV/rvv/vector-interleave-store.ll
index a06aa2d02b11b5..072b0db38d08ac 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-interleave-store.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-interleave-store.ll
@@ -97,7 +97,7 @@ define void @vector_interleave_store_nxv16i64_nxv8i64(<vscale x 8 x i64> %a, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -133,7 +133,7 @@ define void @vector_interleave_store_nxv16i64_nxv8i64(<vscale x 8 x i64> %a, <vs
; CHECK-NEXT: addi a1, a1, 16
; CHECK-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; CHECK-NEXT: vs8r.v v8, (a0)
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vector-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/vector-interleave.ll
index 83c235d8e87ab7..603520b7bd87fd 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vector-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vector-interleave.ll
@@ -289,8 +289,7 @@ define <vscale x 16 x i64> @vector_interleave_nxv16i64_nxv8i64(<vscale x 8 x i64
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv8r.v v0, v8
@@ -314,8 +313,7 @@ define <vscale x 16 x i64> @vector_interleave_nxv16i64_nxv8i64(<vscale x 8 x i64
; CHECK-NEXT: vrgatherei16.vv v24, v16, v8
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vmv.v.v v16, v24
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -324,8 +322,7 @@ define <vscale x 16 x i64> @vector_interleave_nxv16i64_nxv8i64(<vscale x 8 x i64
; ZVBB: # %bb.0:
; ZVBB-NEXT: addi sp, sp, -16
; ZVBB-NEXT: .cfi_def_cfa_offset 16
-; ZVBB-NEXT: csrr a0, vlenb
-; ZVBB-NEXT: slli a0, a0, 3
+; ZVBB-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVBB-NEXT: sub sp, sp, a0
; ZVBB-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVBB-NEXT: vmv8r.v v0, v8
@@ -349,8 +346,7 @@ define <vscale x 16 x i64> @vector_interleave_nxv16i64_nxv8i64(<vscale x 8 x i64
; ZVBB-NEXT: vrgatherei16.vv v24, v16, v8
; ZVBB-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; ZVBB-NEXT: vmv.v.v v16, v24
-; ZVBB-NEXT: csrr a0, vlenb
-; ZVBB-NEXT: slli a0, a0, 3
+; ZVBB-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVBB-NEXT: add sp, sp, a0
; ZVBB-NEXT: addi sp, sp, 16
; ZVBB-NEXT: ret
@@ -693,8 +689,7 @@ define <vscale x 16 x double> @vector_interleave_nxv16f64_nxv8f64(<vscale x 8 x
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv8r.v v0, v8
@@ -718,8 +713,7 @@ define <vscale x 16 x double> @vector_interleave_nxv16f64_nxv8f64(<vscale x 8 x
; CHECK-NEXT: vrgatherei16.vv v24, v16, v8
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vmv.v.v v16, v24
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -728,8 +722,7 @@ define <vscale x 16 x double> @vector_interleave_nxv16f64_nxv8f64(<vscale x 8 x
; ZVBB: # %bb.0:
; ZVBB-NEXT: addi sp, sp, -16
; ZVBB-NEXT: .cfi_def_cfa_offset 16
-; ZVBB-NEXT: csrr a0, vlenb
-; ZVBB-NEXT: slli a0, a0, 3
+; ZVBB-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVBB-NEXT: sub sp, sp, a0
; ZVBB-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVBB-NEXT: vmv8r.v v0, v8
@@ -753,8 +746,7 @@ define <vscale x 16 x double> @vector_interleave_nxv16f64_nxv8f64(<vscale x 8 x
; ZVBB-NEXT: vrgatherei16.vv v24, v16, v8
; ZVBB-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; ZVBB-NEXT: vmv.v.v v16, v24
-; ZVBB-NEXT: csrr a0, vlenb
-; ZVBB-NEXT: slli a0, a0, 3
+; ZVBB-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVBB-NEXT: add sp, sp, a0
; ZVBB-NEXT: addi sp, sp, 16
; ZVBB-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll
index 4c298ab2b5e6db..64122ee52c2ad9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll
@@ -407,8 +407,7 @@ define <vscale x 32 x bfloat> @vfadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -443,8 +442,7 @@ define <vscale x 32 x bfloat> @vfadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: vfadd.vv v16, v24, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -457,8 +455,7 @@ define <vscale x 32 x bfloat> @vfadd_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -493,8 +490,7 @@ define <vscale x 32 x bfloat> @vfadd_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vfadd.vv v16, v24, v16
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -507,7 +503,7 @@ define <vscale x 32 x bfloat> @vfadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bf
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a1, 4
; CHECK-NEXT: add a1, a2, a1
; CHECK-NEXT: sub sp, sp, a1
@@ -574,7 +570,7 @@ define <vscale x 32 x bfloat> @vfadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bf
; CHECK-NEXT: vfadd.vv v16, v16, v24, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a0, 4
; CHECK-NEXT: add a0, a1, a0
; CHECK-NEXT: add sp, sp, a0
@@ -591,7 +587,7 @@ define <vscale x 32 x bfloat> @vfadd_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -641,7 +637,7 @@ define <vscale x 32 x bfloat> @vfadd_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vfadd.vv v16, v16, v24
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -1184,8 +1180,7 @@ define <vscale x 32 x half> @vfadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vmv1r.v v7, v0
@@ -1220,8 +1215,7 @@ define <vscale x 32 x half> @vfadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vfadd.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1240,8 +1234,7 @@ define <vscale x 32 x half> @vfadd_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -1276,8 +1269,7 @@ define <vscale x 32 x half> @vfadd_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfadd.vv v16, v24, v16
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1296,7 +1288,7 @@ define <vscale x 32 x half> @vfadd_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a1, 4
; ZVFHMIN-NEXT: add a1, a2, a1
; ZVFHMIN-NEXT: sub sp, sp, a1
@@ -1363,7 +1355,7 @@ define <vscale x 32 x half> @vfadd_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN-NEXT: vfadd.vv v16, v16, v24, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a0, 4
; ZVFHMIN-NEXT: add a0, a1, a0
; ZVFHMIN-NEXT: add sp, sp, a0
@@ -1386,7 +1378,7 @@ define <vscale x 32 x half> @vfadd_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 4
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1436,7 +1428,7 @@ define <vscale x 32 x half> @vfadd_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfadd.vv v16, v16, v24
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 4
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfdiv-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfdiv-vp.ll
index 0fe6c5dec42649..d18d7516219bb7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfdiv-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfdiv-vp.ll
@@ -369,8 +369,7 @@ define <vscale x 32 x bfloat> @vfdiv_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -405,8 +404,7 @@ define <vscale x 32 x bfloat> @vfdiv_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: vfdiv.vv v16, v24, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -419,8 +417,7 @@ define <vscale x 32 x bfloat> @vfdiv_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -455,8 +452,7 @@ define <vscale x 32 x bfloat> @vfdiv_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vfdiv.vv v16, v24, v16
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -469,7 +465,7 @@ define <vscale x 32 x bfloat> @vfdiv_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bf
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a1, 4
; CHECK-NEXT: add a1, a2, a1
; CHECK-NEXT: sub sp, sp, a1
@@ -536,7 +532,7 @@ define <vscale x 32 x bfloat> @vfdiv_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bf
; CHECK-NEXT: vfdiv.vv v16, v16, v24, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a0, 4
; CHECK-NEXT: add a0, a1, a0
; CHECK-NEXT: add sp, sp, a0
@@ -553,7 +549,7 @@ define <vscale x 32 x bfloat> @vfdiv_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -603,7 +599,7 @@ define <vscale x 32 x bfloat> @vfdiv_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vfdiv.vv v16, v16, v24
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -1096,8 +1092,7 @@ define <vscale x 32 x half> @vfdiv_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vmv1r.v v7, v0
@@ -1132,8 +1127,7 @@ define <vscale x 32 x half> @vfdiv_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vfdiv.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1152,8 +1146,7 @@ define <vscale x 32 x half> @vfdiv_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -1188,8 +1181,7 @@ define <vscale x 32 x half> @vfdiv_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfdiv.vv v16, v24, v16
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1208,7 +1200,7 @@ define <vscale x 32 x half> @vfdiv_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a1, 4
; ZVFHMIN-NEXT: add a1, a2, a1
; ZVFHMIN-NEXT: sub sp, sp, a1
@@ -1275,7 +1267,7 @@ define <vscale x 32 x half> @vfdiv_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN-NEXT: vfdiv.vv v16, v16, v24, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a0, 4
; ZVFHMIN-NEXT: add a0, a1, a0
; ZVFHMIN-NEXT: add sp, sp, a0
@@ -1298,7 +1290,7 @@ define <vscale x 32 x half> @vfdiv_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 4
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1348,7 +1340,7 @@ define <vscale x 32 x half> @vfdiv_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfdiv.vv v16, v16, v24
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 4
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
index f0c74d064016a8..b5ec6939c9ec68 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfma-vp.ll
@@ -475,8 +475,7 @@ define <vscale x 16 x bfloat> @vfma_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma
@@ -490,8 +489,7 @@ define <vscale x 16 x bfloat> @vfma_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <vs
; CHECK-NEXT: vfmadd.vv v16, v24, v8, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -520,8 +518,7 @@ define <vscale x 16 x bfloat> @vfma_vf_nxv16bf16(<vscale x 16 x bfloat> %va, bfl
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: fmv.x.h a1, fa0
@@ -537,8 +534,7 @@ define <vscale x 16 x bfloat> @vfma_vf_nxv16bf16(<vscale x 16 x bfloat> %va, bfl
; CHECK-NEXT: vfmadd.vv v16, v24, v8, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -573,8 +569,7 @@ define <vscale x 16 x bfloat> @vfma_vf_nxv16bf16_unmasked(<vscale x 16 x bfloat>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 2
+; CHECK-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; CHECK-NEXT: fmv.x.h a1, fa0
@@ -590,8 +585,7 @@ define <vscale x 16 x bfloat> @vfma_vf_nxv16bf16_unmasked(<vscale x 16 x bfloat>
; CHECK-NEXT: vfmadd.vv v16, v0, v24
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -606,8 +600,7 @@ define <vscale x 16 x bfloat> @vfma_vf_nxv16bf16_unmasked_commute(<vscale x 16 x
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 2
+; CHECK-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; CHECK-NEXT: fmv.x.h a1, fa0
@@ -623,8 +616,7 @@ define <vscale x 16 x bfloat> @vfma_vf_nxv16bf16_unmasked_commute(<vscale x 16 x
; CHECK-NEXT: vfmadd.vv v16, v0, v24
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -641,7 +633,7 @@ define <vscale x 32 x bfloat> @vfma_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: mv a3, a2
; CHECK-NEXT: slli a2, a2, 3
; CHECK-NEXT: add a3, a3, a2
@@ -774,7 +766,7 @@ define <vscale x 32 x bfloat> @vfma_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <vs
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: slli a0, a0, 3
; CHECK-NEXT: add a1, a1, a0
@@ -792,7 +784,7 @@ define <vscale x 32 x bfloat> @vfma_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a2, vlenb
+; CHECK-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a2, 5
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -880,7 +872,7 @@ define <vscale x 32 x bfloat> @vfma_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat>
; CHECK-NEXT: vfmacc.vv v0, v16, v24
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -894,7 +886,7 @@ define <vscale x 32 x bfloat> @vfma_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bfl
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: mv a2, a1
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: add a2, a2, a1
@@ -1034,7 +1026,7 @@ define <vscale x 32 x bfloat> @vfma_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bfl
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: slli a0, a0, 3
; CHECK-NEXT: add a1, a1, a0
@@ -1054,7 +1046,7 @@ define <vscale x 32 x bfloat> @vfma_vf_nxv32bf16_commute(<vscale x 32 x bfloat>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: mv a2, a1
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: add a2, a2, a1
@@ -1194,7 +1186,7 @@ define <vscale x 32 x bfloat> @vfma_vf_nxv32bf16_commute(<vscale x 32 x bfloat>
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: slli a0, a0, 3
; CHECK-NEXT: add a1, a1, a0
@@ -1214,7 +1206,7 @@ define <vscale x 32 x bfloat> @vfma_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -1303,7 +1295,7 @@ define <vscale x 32 x bfloat> @vfma_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat>
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v16, v0
; CHECK-NEXT: vmv8r.v v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -1319,7 +1311,7 @@ define <vscale x 32 x bfloat> @vfma_vf_nxv32bf16_unmasked_commute(<vscale x 32 x
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -1407,7 +1399,7 @@ define <vscale x 32 x bfloat> @vfma_vf_nxv32bf16_unmasked_commute(<vscale x 32 x
; CHECK-NEXT: vfmadd.vv v0, v24, v16
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -2036,8 +2028,7 @@ define <vscale x 16 x half> @vfma_vv_nxv16f16(<vscale x 16 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma
@@ -2051,8 +2042,7 @@ define <vscale x 16 x half> @vfma_vv_nxv16f16(<vscale x 16 x half> %va, <vscale
; ZVFHMIN-NEXT: vfmadd.vv v16, v24, v8, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -2093,8 +2083,7 @@ define <vscale x 16 x half> @vfma_vf_nxv16f16(<vscale x 16 x half> %va, half %b,
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
@@ -2110,8 +2099,7 @@ define <vscale x 16 x half> @vfma_vf_nxv16f16(<vscale x 16 x half> %va, half %b,
; ZVFHMIN-NEXT: vfmadd.vv v16, v24, v8, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -2158,8 +2146,7 @@ define <vscale x 16 x half> @vfma_vf_nxv16f16_unmasked(<vscale x 16 x half> %va,
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 2
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
@@ -2175,8 +2162,7 @@ define <vscale x 16 x half> @vfma_vf_nxv16f16_unmasked(<vscale x 16 x half> %va,
; ZVFHMIN-NEXT: vfmadd.vv v16, v0, v24
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -2197,8 +2183,7 @@ define <vscale x 16 x half> @vfma_vf_nxv16f16_unmasked_commute(<vscale x 16 x ha
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 2
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
@@ -2214,8 +2199,7 @@ define <vscale x 16 x half> @vfma_vf_nxv16f16_unmasked_commute(<vscale x 16 x ha
; ZVFHMIN-NEXT: vfmadd.vv v16, v0, v24
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -2240,7 +2224,7 @@ define <vscale x 32 x half> @vfma_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: mv a3, a2
; ZVFHMIN-NEXT: slli a2, a2, 3
; ZVFHMIN-NEXT: add a3, a3, a2
@@ -2373,7 +2357,7 @@ define <vscale x 32 x half> @vfma_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: mv a1, a0
; ZVFHMIN-NEXT: slli a0, a0, 3
; ZVFHMIN-NEXT: add a1, a1, a0
@@ -2398,7 +2382,7 @@ define <vscale x 32 x half> @vfma_vv_nxv32f16_unmasked(<vscale x 32 x half> %va,
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a2, 5
; ZVFHMIN-NEXT: sub sp, sp, a2
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -2486,7 +2470,7 @@ define <vscale x 32 x half> @vfma_vv_nxv32f16_unmasked(<vscale x 32 x half> %va,
; ZVFHMIN-NEXT: vfmacc.vv v0, v16, v24
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v0
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -2506,7 +2490,7 @@ define <vscale x 32 x half> @vfma_vf_nxv32f16(<vscale x 32 x half> %va, half %b,
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: mv a2, a1
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: add a2, a2, a1
@@ -2646,7 +2630,7 @@ define <vscale x 32 x half> @vfma_vf_nxv32f16(<vscale x 32 x half> %va, half %b,
; ZVFHMIN-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: mv a1, a0
; ZVFHMIN-NEXT: slli a0, a0, 3
; ZVFHMIN-NEXT: add a1, a1, a0
@@ -2672,7 +2656,7 @@ define <vscale x 32 x half> @vfma_vf_nxv32f16_commute(<vscale x 32 x half> %va,
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: mv a2, a1
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: add a2, a2, a1
@@ -2812,7 +2796,7 @@ define <vscale x 32 x half> @vfma_vf_nxv32f16_commute(<vscale x 32 x half> %va,
; ZVFHMIN-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: mv a1, a0
; ZVFHMIN-NEXT: slli a0, a0, 3
; ZVFHMIN-NEXT: add a1, a1, a0
@@ -2838,7 +2822,7 @@ define <vscale x 32 x half> @vfma_vf_nxv32f16_unmasked(<vscale x 32 x half> %va,
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -2927,7 +2911,7 @@ define <vscale x 32 x half> @vfma_vf_nxv32f16_unmasked(<vscale x 32 x half> %va,
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v16, v0
; ZVFHMIN-NEXT: vmv8r.v v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -2949,7 +2933,7 @@ define <vscale x 32 x half> @vfma_vf_nxv32f16_unmasked_commute(<vscale x 32 x ha
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -3037,7 +3021,7 @@ define <vscale x 32 x half> @vfma_vf_nxv32f16_unmasked_commute(<vscale x 32 x ha
; ZVFHMIN-NEXT: vfmadd.vv v0, v24, v16
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v0
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -3723,7 +3707,7 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64(<vscale x 16 x double> %va, <vsc
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: mv a3, a1
; CHECK-NEXT: slli a1, a1, 2
@@ -3822,7 +3806,7 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64(<vscale x 16 x double> %va, <vsc
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 3
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: slli a0, a0, 2
@@ -3839,7 +3823,7 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64_unmasked(<vscale x 16 x double>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 3
; CHECK-NEXT: mv a3, a1
; CHECK-NEXT: slli a1, a1, 1
@@ -3891,7 +3875,7 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64_unmasked(<vscale x 16 x double>
; CHECK-NEXT: vsetvli zero, a4, e64, m8, ta, ma
; CHECK-NEXT: vfmadd.vv v0, v24, v8
; CHECK-NEXT: vmv.v.v v8, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 3
; CHECK-NEXT: mv a1, a0
; CHECK-NEXT: slli a0, a0, 1
@@ -7702,8 +7686,7 @@ define <vscale x 16 x half> @vfmsub_vf_nxv16f16_unmasked(<vscale x 16 x half> %v
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 2
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
@@ -7724,8 +7707,7 @@ define <vscale x 16 x half> @vfmsub_vf_nxv16f16_unmasked(<vscale x 16 x half> %v
; ZVFHMIN-NEXT: vfmadd.vv v16, v0, v24
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -7747,8 +7729,7 @@ define <vscale x 16 x half> @vfmsub_vf_nxv16f16_unmasked_commute(<vscale x 16 x
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 2
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
@@ -7769,8 +7750,7 @@ define <vscale x 16 x half> @vfmsub_vf_nxv16f16_unmasked_commute(<vscale x 16 x
; ZVFHMIN-NEXT: vfmadd.vv v16, v0, v24
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -7822,8 +7802,7 @@ define <vscale x 16 x half> @vfnmadd_vv_nxv16f16_commuted(<vscale x 16 x half> %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: lui a1, 8
@@ -7841,8 +7820,7 @@ define <vscale x 16 x half> @vfnmadd_vv_nxv16f16_commuted(<vscale x 16 x half> %
; ZVFHMIN-NEXT: vfmadd.vv v16, v24, v8, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -7919,8 +7897,7 @@ define <vscale x 16 x half> @vfnmadd_vf_nxv16f16(<vscale x 16 x half> %va, half
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
@@ -7941,8 +7918,7 @@ define <vscale x 16 x half> @vfnmadd_vf_nxv16f16(<vscale x 16 x half> %va, half
; ZVFHMIN-NEXT: vfmadd.vv v16, v24, v8, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -7998,8 +7974,7 @@ define <vscale x 16 x half> @vfnmadd_vf_nxv16f16_unmasked(<vscale x 16 x half> %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 2
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
@@ -8021,8 +7996,7 @@ define <vscale x 16 x half> @vfnmadd_vf_nxv16f16_unmasked(<vscale x 16 x half> %
; ZVFHMIN-NEXT: vfmadd.vv v16, v0, v24
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -8045,8 +8019,7 @@ define <vscale x 16 x half> @vfnmadd_vf_nxv16f16_unmasked_commute(<vscale x 16 x
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 2
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
@@ -8068,8 +8041,7 @@ define <vscale x 16 x half> @vfnmadd_vf_nxv16f16_unmasked_commute(<vscale x 16 x
; ZVFHMIN-NEXT: vfmadd.vv v16, v0, v24
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -8092,8 +8064,7 @@ define <vscale x 16 x half> @vfnmadd_vf_nxv16f16_neg_splat(<vscale x 16 x half>
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 2
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: addi a1, sp, 16
@@ -8115,8 +8086,7 @@ define <vscale x 16 x half> @vfnmadd_vf_nxv16f16_neg_splat(<vscale x 16 x half>
; ZVFHMIN-NEXT: vfmadd.vv v16, v8, v24, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -8269,8 +8239,7 @@ define <vscale x 16 x half> @vfnmsub_vv_nxv16f16_commuted(<vscale x 16 x half> %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: lui a1, 8
@@ -8288,8 +8257,7 @@ define <vscale x 16 x half> @vfnmsub_vv_nxv16f16_commuted(<vscale x 16 x half> %
; ZVFHMIN-NEXT: vfmadd.vv v16, v24, v8, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -8429,8 +8397,7 @@ define <vscale x 16 x half> @vfnmsub_vf_nxv16f16_unmasked(<vscale x 16 x half> %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 2
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
@@ -8451,8 +8418,7 @@ define <vscale x 16 x half> @vfnmsub_vf_nxv16f16_unmasked(<vscale x 16 x half> %
; ZVFHMIN-NEXT: vfmadd.vv v16, v24, v0
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -8474,8 +8440,7 @@ define <vscale x 16 x half> @vfnmsub_vf_nxv16f16_unmasked_commute(<vscale x 16 x
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 2
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a1, fa0
@@ -8496,8 +8461,7 @@ define <vscale x 16 x half> @vfnmsub_vf_nxv16f16_unmasked_commute(<vscale x 16 x
; ZVFHMIN-NEXT: vfmadd.vv v16, v24, v0
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -8650,7 +8614,7 @@ define <vscale x 32 x half> @vfmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: mv a3, a2
; ZVFHMIN-NEXT: slli a2, a2, 3
; ZVFHMIN-NEXT: add a3, a3, a2
@@ -8788,7 +8752,7 @@ define <vscale x 32 x half> @vfmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: mv a1, a0
; ZVFHMIN-NEXT: slli a0, a0, 3
; ZVFHMIN-NEXT: add a1, a1, a0
@@ -8814,7 +8778,7 @@ define <vscale x 32 x half> @vfmsub_vv_nxv32f16_unmasked(<vscale x 32 x half> %v
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a2, 3
; ZVFHMIN-NEXT: mv a3, a2
; ZVFHMIN-NEXT: slli a2, a2, 2
@@ -8917,7 +8881,7 @@ define <vscale x 32 x half> @vfmsub_vv_nxv32f16_unmasked(<vscale x 32 x half> %v
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v0, v24
; ZVFHMIN-NEXT: vmv8r.v v8, v0
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 3
; ZVFHMIN-NEXT: mv a1, a0
; ZVFHMIN-NEXT: slli a0, a0, 2
@@ -8941,7 +8905,7 @@ define <vscale x 32 x half> @vfmsub_vf_nxv32f16(<vscale x 32 x half> %va, half %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -9050,7 +9014,7 @@ define <vscale x 32 x half> @vfmsub_vf_nxv32f16(<vscale x 32 x half> %va, half %
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -9073,7 +9037,7 @@ define <vscale x 32 x half> @vfmsub_vf_nxv32f16_commute(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -9182,7 +9146,7 @@ define <vscale x 32 x half> @vfmsub_vf_nxv32f16_commute(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -9205,7 +9169,7 @@ define <vscale x 32 x half> @vfmsub_vf_nxv32f16_unmasked(<vscale x 32 x half> %v
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -9304,7 +9268,7 @@ define <vscale x 32 x half> @vfmsub_vf_nxv32f16_unmasked(<vscale x 32 x half> %v
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v16, v24
; ZVFHMIN-NEXT: vmv8r.v v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -9327,7 +9291,7 @@ define <vscale x 32 x half> @vfmsub_vf_nxv32f16_unmasked_commute(<vscale x 32 x
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -9428,7 +9392,7 @@ define <vscale x 32 x half> @vfmsub_vf_nxv32f16_unmasked_commute(<vscale x 32 x
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v16, v24
; ZVFHMIN-NEXT: vmv8r.v v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -9453,7 +9417,7 @@ define <vscale x 32 x half> @vfnmadd_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a2, 5
; ZVFHMIN-NEXT: sub sp, sp, a2
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -9551,7 +9515,7 @@ define <vscale x 32 x half> @vfnmadd_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -9574,7 +9538,7 @@ define <vscale x 32 x half> @vfnmadd_vv_nxv32f16_commuted(<vscale x 32 x half> %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a2, 3
; ZVFHMIN-NEXT: mv a3, a2
; ZVFHMIN-NEXT: slli a2, a2, 2
@@ -9698,7 +9662,7 @@ define <vscale x 32 x half> @vfnmadd_vv_nxv32f16_commuted(<vscale x 32 x half> %
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 3
; ZVFHMIN-NEXT: mv a1, a0
; ZVFHMIN-NEXT: slli a0, a0, 2
@@ -9724,7 +9688,7 @@ define <vscale x 32 x half> @vfnmadd_vv_nxv32f16_unmasked(<vscale x 32 x half> %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a2, 5
; ZVFHMIN-NEXT: sub sp, sp, a2
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -9823,7 +9787,7 @@ define <vscale x 32 x half> @vfnmadd_vv_nxv32f16_unmasked(<vscale x 32 x half> %
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v24, v0
; ZVFHMIN-NEXT: vmv8r.v v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -9846,7 +9810,7 @@ define <vscale x 32 x half> @vfnmadd_vv_nxv32f16_unmasked_commuted(<vscale x 32
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a2, 5
; ZVFHMIN-NEXT: sub sp, sp, a2
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -9945,7 +9909,7 @@ define <vscale x 32 x half> @vfnmadd_vv_nxv32f16_unmasked_commuted(<vscale x 32
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v24, v8
; ZVFHMIN-NEXT: vmv8r.v v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -9967,7 +9931,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16(<vscale x 32 x half> %va, half
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: mv a2, a1
; ZVFHMIN-NEXT: slli a1, a1, 2
@@ -10093,7 +10057,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16(<vscale x 32 x half> %va, half
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 3
; ZVFHMIN-NEXT: mv a1, a0
; ZVFHMIN-NEXT: slli a0, a0, 2
@@ -10120,7 +10084,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_commute(<vscale x 32 x half> %v
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -10230,7 +10194,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_commute(<vscale x 32 x half> %v
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -10254,7 +10218,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_unmasked(<vscale x 32 x half> %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -10355,7 +10319,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_unmasked(<vscale x 32 x half> %
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v24, v8
; ZVFHMIN-NEXT: vmv8r.v v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -10379,7 +10343,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_unmasked_commute(<vscale x 32 x
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -10481,7 +10445,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_unmasked_commute(<vscale x 32 x
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v24, v16
; ZVFHMIN-NEXT: vmv8r.v v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -10505,7 +10469,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_neg_splat(<vscale x 32 x half>
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -10609,7 +10573,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_neg_splat(<vscale x 32 x half>
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -10633,7 +10597,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_neg_splat_commute(<vscale x 32
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -10743,7 +10707,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_neg_splat_commute(<vscale x 32
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -10767,7 +10731,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_neg_splat_unmasked(<vscale x 32
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -10869,7 +10833,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_neg_splat_unmasked(<vscale x 32
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v24, v16
; ZVFHMIN-NEXT: vmv8r.v v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -10893,7 +10857,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_neg_splat_unmasked_commute(<vsc
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -10994,7 +10958,7 @@ define <vscale x 32 x half> @vfnmadd_vf_nxv32f16_neg_splat_unmasked_commute(<vsc
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v24, v8
; ZVFHMIN-NEXT: vmv8r.v v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -11020,7 +10984,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a2, 5
; ZVFHMIN-NEXT: sub sp, sp, a2
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -11118,7 +11082,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -11141,7 +11105,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16_commuted(<vscale x 32 x half> %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a2, 3
; ZVFHMIN-NEXT: mv a3, a2
; ZVFHMIN-NEXT: slli a2, a2, 2
@@ -11265,7 +11229,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16_commuted(<vscale x 32 x half> %
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 3
; ZVFHMIN-NEXT: mv a1, a0
; ZVFHMIN-NEXT: slli a0, a0, 2
@@ -11291,7 +11255,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16_unmasked(<vscale x 32 x half> %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a2, 5
; ZVFHMIN-NEXT: sub sp, sp, a2
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -11390,7 +11354,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16_unmasked(<vscale x 32 x half> %
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v24, v0
; ZVFHMIN-NEXT: vmv8r.v v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -11413,7 +11377,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16_unmasked_commuted(<vscale x 32
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a2, vlenb
+; ZVFHMIN-NEXT: vsetvli a2, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a2, 5
; ZVFHMIN-NEXT: sub sp, sp, a2
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -11512,7 +11476,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16_unmasked_commuted(<vscale x 32
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v24, v8
; ZVFHMIN-NEXT: vmv8r.v v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -11534,7 +11498,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16(<vscale x 32 x half> %va, half
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 3
; ZVFHMIN-NEXT: mv a2, a1
; ZVFHMIN-NEXT: slli a1, a1, 2
@@ -11643,7 +11607,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16(<vscale x 32 x half> %va, half
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 3
; ZVFHMIN-NEXT: mv a1, a0
; ZVFHMIN-NEXT: slli a0, a0, 2
@@ -11669,7 +11633,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_commute(<vscale x 32 x half> %v
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -11773,7 +11737,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_commute(<vscale x 32 x half> %v
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -11796,7 +11760,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_unmasked(<vscale x 32 x half> %
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -11896,7 +11860,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_unmasked(<vscale x 32 x half> %
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v16, v24
; ZVFHMIN-NEXT: vmv8r.v v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -11919,7 +11883,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_unmasked_commute(<vscale x 32 x
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -12018,7 +11982,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_unmasked_commute(<vscale x 32 x
; ZVFHMIN-NEXT: vfmadd.vv v0, v24, v16
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v0
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -12041,7 +12005,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_neg_splat(<vscale x 32 x half>
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -12145,7 +12109,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_neg_splat(<vscale x 32 x half>
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -12168,7 +12132,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_neg_splat_commute(<vscale x 32
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a1, 5
; ZVFHMIN-NEXT: add a1, a2, a1
; ZVFHMIN-NEXT: sub sp, sp, a1
@@ -12294,7 +12258,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_neg_splat_commute(<vscale x 32
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a0, 5
; ZVFHMIN-NEXT: add a0, a1, a0
; ZVFHMIN-NEXT: add sp, sp, a0
@@ -12318,7 +12282,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_neg_splat_unmasked(<vscale x 32
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -12418,7 +12382,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_neg_splat_unmasked(<vscale x 32
; ZVFHMIN-NEXT: vfmadd.vv v0, v24, v16
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v0
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -12441,7 +12405,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_neg_splat_unmasked_commute(<vsc
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -12541,7 +12505,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16_neg_splat_unmasked_commute(<vsc
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v16, v24
; ZVFHMIN-NEXT: vmv8r.v v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmadd-constrained-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vfmadd-constrained-sdnode.ll
index dea411348ce542..187a809d80f515 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmadd-constrained-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmadd-constrained-sdnode.ll
@@ -170,8 +170,7 @@ define <vscale x 16 x bfloat> @vfmadd_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; CHECK-NEXT: addi a0, sp, 16
@@ -186,8 +185,7 @@ define <vscale x 16 x bfloat> @vfmadd_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <
; CHECK-NEXT: vfmadd.vv v24, v0, v16
; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v24
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -200,8 +198,7 @@ define <vscale x 16 x bfloat> @vfmadd_vf_nxv16bf16(<vscale x 16 x bfloat> %va, <
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; CHECK-NEXT: fmv.x.h a0, fa0
@@ -217,8 +214,7 @@ define <vscale x 16 x bfloat> @vfmadd_vf_nxv16bf16(<vscale x 16 x bfloat> %va, <
; CHECK-NEXT: vfmadd.vv v16, v0, v24
; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -234,7 +230,7 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -315,7 +311,7 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v0
; CHECK-NEXT: vfncvtbf16.f.f.w v12, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -329,7 +325,7 @@ define <vscale x 32 x bfloat> @vfmadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, <
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 24
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: sub sp, sp, a0
@@ -385,7 +381,7 @@ define <vscale x 32 x bfloat> @vfmadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, <
; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v0
; CHECK-NEXT: vfncvtbf16.f.f.w v12, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 24
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
@@ -610,8 +606,7 @@ define <vscale x 16 x half> @vfmadd_vv_nxv16f16(<vscale x 16 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: addi a0, sp, 16
@@ -626,8 +621,7 @@ define <vscale x 16 x half> @vfmadd_vv_nxv16f16(<vscale x 16 x half> %va, <vscal
; ZVFHMIN-NEXT: vfmadd.vv v24, v0, v16
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -646,8 +640,7 @@ define <vscale x 16 x half> @vfmadd_vf_nxv16f16(<vscale x 16 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a0, fa0
@@ -663,8 +656,7 @@ define <vscale x 16 x half> @vfmadd_vf_nxv16f16(<vscale x 16 x half> %va, <vscal
; ZVFHMIN-NEXT: vfmadd.vv v16, v0, v24
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -688,7 +680,7 @@ define <vscale x 32 x half> @vfmadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -769,7 +761,7 @@ define <vscale x 32 x half> @vfmadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v0
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -789,7 +781,7 @@ define <vscale x 32 x half> @vfmadd_vf_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: sub sp, sp, a0
@@ -845,7 +837,7 @@ define <vscale x 32 x half> @vfmadd_vf_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v0
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmadd-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vfmadd-sdnode.ll
index 2df2212c43db09..e4b254e9dcfa68 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmadd-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmadd-sdnode.ll
@@ -200,8 +200,7 @@ define <vscale x 16 x bfloat> @vfmadd_vf_nxv16bf16(<vscale x 16 x bfloat> %va, <
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: sub sp, sp, a0
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; CHECK-NEXT: fmv.x.h a0, fa0
@@ -217,8 +216,7 @@ define <vscale x 16 x bfloat> @vfmadd_vf_nxv16bf16(<vscale x 16 x bfloat> %va, <
; CHECK-NEXT: vfmadd.vv v16, v0, v24
; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 2
+; CHECK-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -235,7 +233,7 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFH: # %bb.0:
; ZVFH-NEXT: addi sp, sp, -16
; ZVFH-NEXT: .cfi_def_cfa_offset 16
-; ZVFH-NEXT: csrr a1, vlenb
+; ZVFH-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFH-NEXT: slli a1, a1, 5
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -324,7 +322,7 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v0
; ZVFH-NEXT: vfncvtbf16.f.f.w v12, v16
-; ZVFH-NEXT: csrr a0, vlenb
+; ZVFH-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFH-NEXT: slli a0, a0, 5
; ZVFH-NEXT: add sp, sp, a0
; ZVFH-NEXT: addi sp, sp, 16
@@ -334,7 +332,7 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -415,7 +413,7 @@ define <vscale x 32 x bfloat> @vfmadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v0
; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -429,7 +427,7 @@ define <vscale x 32 x bfloat> @vfmadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFH: # %bb.0:
; ZVFH-NEXT: addi sp, sp, -16
; ZVFH-NEXT: .cfi_def_cfa_offset 16
-; ZVFH-NEXT: csrr a0, vlenb
+; ZVFH-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFH-NEXT: slli a0, a0, 3
; ZVFH-NEXT: mv a1, a0
; ZVFH-NEXT: slli a0, a0, 1
@@ -487,7 +485,7 @@ define <vscale x 32 x bfloat> @vfmadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v0
; ZVFH-NEXT: vfncvtbf16.f.f.w v12, v24
-; ZVFH-NEXT: csrr a0, vlenb
+; ZVFH-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFH-NEXT: slli a0, a0, 3
; ZVFH-NEXT: mv a1, a0
; ZVFH-NEXT: slli a0, a0, 1
@@ -500,7 +498,7 @@ define <vscale x 32 x bfloat> @vfmadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: sub sp, sp, a0
@@ -556,7 +554,7 @@ define <vscale x 32 x bfloat> @vfmadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, <
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v0
; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: add sp, sp, a0
@@ -825,8 +823,7 @@ define <vscale x 16 x half> @vfmadd_vf_nxv16f16(<vscale x 16 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a0, fa0
@@ -842,8 +839,7 @@ define <vscale x 16 x half> @vfmadd_vf_nxv16f16(<vscale x 16 x half> %va, <vscal
; ZVFHMIN-NEXT: vfmadd.vv v16, v0, v24
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -867,7 +863,7 @@ define <vscale x 32 x half> @vfmadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -948,7 +944,7 @@ define <vscale x 32 x half> @vfmadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v0
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -968,7 +964,7 @@ define <vscale x 32 x half> @vfmadd_vf_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: sub sp, sp, a0
@@ -1024,7 +1020,7 @@ define <vscale x 32 x half> @vfmadd_vf_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v0
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmax-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfmax-vp.ll
index 6e38881b4d60fb..655127d0715f35 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmax-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmax-vp.ll
@@ -179,8 +179,7 @@ define <vscale x 32 x bfloat> @vfmax_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -215,8 +214,7 @@ define <vscale x 32 x bfloat> @vfmax_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: vfmax.vv v16, v24, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -229,8 +227,7 @@ define <vscale x 32 x bfloat> @vfmax_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -265,8 +262,7 @@ define <vscale x 32 x bfloat> @vfmax_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vfmax.vv v16, v24, v16
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -506,8 +502,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vmv1r.v v7, v0
@@ -542,8 +537,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vfmax.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -562,8 +556,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -598,8 +591,7 @@ define <vscale x 32 x half> @vfmax_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfmax.vv v16, v24, v16
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmin-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfmin-vp.ll
index f1d6b2100ae980..17cdcc7f4dd455 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmin-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmin-vp.ll
@@ -179,8 +179,7 @@ define <vscale x 32 x bfloat> @vfmin_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -215,8 +214,7 @@ define <vscale x 32 x bfloat> @vfmin_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: vfmin.vv v16, v24, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -229,8 +227,7 @@ define <vscale x 32 x bfloat> @vfmin_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -265,8 +262,7 @@ define <vscale x 32 x bfloat> @vfmin_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vfmin.vv v16, v24, v16
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -506,8 +502,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vmv1r.v v7, v0
@@ -542,8 +537,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vfmin.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -562,8 +556,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -598,8 +591,7 @@ define <vscale x 32 x half> @vfmin_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfmin.vv v16, v24, v16
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmsub-constrained-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vfmsub-constrained-sdnode.ll
index 7ec241bf74247a..a70abc9e4b356e 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmsub-constrained-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmsub-constrained-sdnode.ll
@@ -248,8 +248,7 @@ define <vscale x 16 x half> @vfmsub_vv_nxv16f16(<vscale x 16 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: addi a0, sp, 16
@@ -266,8 +265,7 @@ define <vscale x 16 x half> @vfmsub_vv_nxv16f16(<vscale x 16 x half> %va, <vscal
; ZVFHMIN-NEXT: vfmadd.vv v24, v0, v16
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -287,8 +285,7 @@ define <vscale x 16 x half> @vfmsub_vf_nxv16f16(<vscale x 16 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x04, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 4 * vlenb
; ZVFHMIN-NEXT: fmv.x.h a0, fa0
@@ -307,8 +304,7 @@ define <vscale x 16 x half> @vfmsub_vf_nxv16f16(<vscale x 16 x half> %va, <vscal
; ZVFHMIN-NEXT: vfmadd.vv v16, v0, v24
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 2
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m4, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -333,7 +329,7 @@ define <vscale x 32 x half> @vfmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: li a2, 24
; ZVFHMIN-NEXT: mul a1, a1, a2
; ZVFHMIN-NEXT: sub sp, sp, a1
@@ -392,7 +388,7 @@ define <vscale x 32 x half> @vfmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v0
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: add sp, sp, a0
@@ -414,7 +410,7 @@ define <vscale x 32 x half> @vfmsub_vf_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: sub sp, sp, a0
@@ -472,7 +468,7 @@ define <vscale x 32 x half> @vfmsub_vf_nxv32f16(<vscale x 32 x half> %va, <vscal
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v0
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: li a1, 24
; ZVFHMIN-NEXT: mul a0, a0, a1
; ZVFHMIN-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmul-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfmul-vp.ll
index 3114fb5d3bfa3e..e8578a59998be1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmul-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmul-vp.ll
@@ -491,8 +491,7 @@ define <vscale x 32 x half> @vfmul_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vmv1r.v v7, v0
@@ -527,8 +526,7 @@ define <vscale x 32 x half> @vfmul_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vfmul.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -547,8 +545,7 @@ define <vscale x 32 x half> @vfmul_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -583,8 +580,7 @@ define <vscale x 32 x half> @vfmul_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfmul.vv v16, v24, v16
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -603,7 +599,7 @@ define <vscale x 32 x half> @vfmul_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a1, 4
; ZVFHMIN-NEXT: add a1, a2, a1
; ZVFHMIN-NEXT: sub sp, sp, a1
@@ -670,7 +666,7 @@ define <vscale x 32 x half> @vfmul_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN-NEXT: vfmul.vv v16, v16, v24, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a0, 4
; ZVFHMIN-NEXT: add a0, a1, a0
; ZVFHMIN-NEXT: add sp, sp, a0
@@ -693,7 +689,7 @@ define <vscale x 32 x half> @vfmul_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 4
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -743,7 +739,7 @@ define <vscale x 32 x half> @vfmul_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfmul.vv v16, v16, v24
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 4
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfmuladd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfmuladd-vp.ll
index abda6750e5a8a5..21f7fc61d6ad90 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfmuladd-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfmuladd-vp.ll
@@ -1107,7 +1107,7 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64(<vscale x 16 x double> %va, <vsc
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 40
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -1196,7 +1196,7 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64(<vscale x 16 x double> %va, <vsc
; CHECK-NEXT: add a0, sp, a0
; CHECK-NEXT: addi a0, a0, 16
; CHECK-NEXT: vl8r.v v16, (a0) # Unknown-size Folded Reload
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 40
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
@@ -1211,7 +1211,7 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64_unmasked(<vscale x 16 x double>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: li a3, 24
; CHECK-NEXT: mul a1, a1, a3
; CHECK-NEXT: sub sp, sp, a1
@@ -1261,7 +1261,7 @@ define <vscale x 16 x double> @vfma_vv_nxv16f64_unmasked(<vscale x 16 x double>
; CHECK-NEXT: vsetvli zero, a4, e64, m8, ta, ma
; CHECK-NEXT: vfmadd.vv v0, v24, v8
; CHECK-NEXT: vmv.v.v v8, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: li a1, 24
; CHECK-NEXT: mul a0, a0, a1
; CHECK-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfnmadd-constrained-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vfnmadd-constrained-sdnode.ll
index 5ec089a2dcac86..39d704db0e41b3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfnmadd-constrained-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfnmadd-constrained-sdnode.ll
@@ -325,7 +325,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -397,7 +397,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -419,7 +419,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -494,7 +494,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v0
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfnmsub-constrained-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vfnmsub-constrained-sdnode.ll
index 286492bce2960c..b02328d860ff26 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfnmsub-constrained-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfnmsub-constrained-sdnode.ll
@@ -305,7 +305,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 5
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -377,7 +377,7 @@ define <vscale x 32 x half> @vfnmsub_vv_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
@@ -398,7 +398,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: sub sp, sp, a0
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -470,7 +470,7 @@ define <vscale x 32 x half> @vfnmsub_vf_nxv32f16(<vscale x 32 x half> %va, <vsca
; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v24
; ZVFHMIN-NEXT: vfncvt.f.f.w v12, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 5
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfptosi-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfptosi-vp.ll
index f42b603509c22a..1ce0d836cee1b4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfptosi-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfptosi-vp.ll
@@ -510,8 +510,7 @@ define <vscale x 32 x i16> @vfptosi_nxv32i16_nxv32f32(<vscale x 32 x float> %va,
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -538,8 +537,7 @@ define <vscale x 32 x i16> @vfptosi_nxv32i16_nxv32f32(<vscale x 32 x float> %va,
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; CHECK-NEXT: vfncvt.rtz.x.f.w v16, v8, v0.t
; CHECK-NEXT: vmv8r.v v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfptoui-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfptoui-vp.ll
index 403bc595b9bbd1..06bbe1796675b2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfptoui-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfptoui-vp.ll
@@ -510,8 +510,7 @@ define <vscale x 32 x i16> @vfptoui_nxv32i16_nxv32f32(<vscale x 32 x float> %va,
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -538,8 +537,7 @@ define <vscale x 32 x i16> @vfptoui_nxv32i16_nxv32f32(<vscale x 32 x float> %va,
; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; CHECK-NEXT: vfncvt.rtz.xu.f.w v16, v8, v0.t
; CHECK-NEXT: vmv8r.v v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfptrunc-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfptrunc-vp.ll
index da16feeddecd71..15689aafdf6db5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfptrunc-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfptrunc-vp.ll
@@ -98,8 +98,7 @@ define <vscale x 16 x float> @vfptrunc_nxv16f32_nxv16f64(<vscale x 16 x double>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -125,8 +124,7 @@ define <vscale x 16 x float> @vfptrunc_nxv16f32_nxv16f64(<vscale x 16 x double>
; CHECK-NEXT: vsetvli zero, a0, e32, m4, ta, ma
; CHECK-NEXT: vfncvt.f.f.w v16, v8, v0.t
; CHECK-NEXT: vmv8r.v v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -141,7 +139,7 @@ define <vscale x 32 x float> @vfptrunc_nxv32f32_nxv32f64(<vscale x 32 x double>
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -210,7 +208,7 @@ define <vscale x 32 x float> @vfptrunc_nxv32f32_nxv32f64(<vscale x 32 x double>
; CHECK-NEXT: vsetvli zero, a2, e32, m4, ta, ma
; CHECK-NEXT: vfncvt.f.f.w v24, v8, v0.t
; CHECK-NEXT: vmv8r.v v8, v24
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfsub-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfsub-vp.ll
index dd57b65b50f4f1..3d660854458008 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfsub-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfsub-vp.ll
@@ -369,8 +369,7 @@ define <vscale x 32 x bfloat> @vfsub_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -405,8 +404,7 @@ define <vscale x 32 x bfloat> @vfsub_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <v
; CHECK-NEXT: vfsub.vv v16, v24, v16, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -419,8 +417,7 @@ define <vscale x 32 x bfloat> @vfsub_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: csrr a2, vlenb
@@ -455,8 +452,7 @@ define <vscale x 32 x bfloat> @vfsub_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vfsub.vv v16, v24, v16
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
@@ -469,7 +465,7 @@ define <vscale x 32 x bfloat> @vfsub_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bf
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a2, a1, 4
; CHECK-NEXT: add a1, a2, a1
; CHECK-NEXT: sub sp, sp, a1
@@ -536,7 +532,7 @@ define <vscale x 32 x bfloat> @vfsub_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bf
; CHECK-NEXT: vfsub.vv v16, v16, v24, v0.t
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a0, 4
; CHECK-NEXT: add a0, a1, a0
; CHECK-NEXT: add sp, sp, a0
@@ -553,7 +549,7 @@ define <vscale x 32 x bfloat> @vfsub_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -603,7 +599,7 @@ define <vscale x 32 x bfloat> @vfsub_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat
; CHECK-NEXT: vfsub.vv v16, v16, v24
; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -1096,8 +1092,7 @@ define <vscale x 32 x half> @vfsub_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: vmv1r.v v7, v0
@@ -1132,8 +1127,7 @@ define <vscale x 32 x half> @vfsub_vv_nxv32f16(<vscale x 32 x half> %va, <vscale
; ZVFHMIN-NEXT: vfsub.vv v16, v24, v16, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1152,8 +1146,7 @@ define <vscale x 32 x half> @vfsub_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: csrr a2, vlenb
@@ -1188,8 +1181,7 @@ define <vscale x 32 x half> @vfsub_vv_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfsub.vv v16, v24, v16
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
@@ -1208,7 +1200,7 @@ define <vscale x 32 x half> @vfsub_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a2, a1, 4
; ZVFHMIN-NEXT: add a1, a2, a1
; ZVFHMIN-NEXT: sub sp, sp, a1
@@ -1275,7 +1267,7 @@ define <vscale x 32 x half> @vfsub_vf_nxv32f16(<vscale x 32 x half> %va, half %b
; ZVFHMIN-NEXT: vfsub.vv v16, v16, v24, v0.t
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a0, 4
; ZVFHMIN-NEXT: add a0, a1, a0
; ZVFHMIN-NEXT: add sp, sp, a0
@@ -1298,7 +1290,7 @@ define <vscale x 32 x half> @vfsub_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a1, a1, 4
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -1348,7 +1340,7 @@ define <vscale x 32 x half> @vfsub_vf_nxv32f16_unmasked(<vscale x 32 x half> %va
; ZVFHMIN-NEXT: vfsub.vv v16, v16, v24
; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma
; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16
-; ZVFHMIN-NEXT: csrr a0, vlenb
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; ZVFHMIN-NEXT: slli a0, a0, 4
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfwmacc-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfwmacc-vp.ll
index 80ada4670562d7..e84d04cff8fae5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfwmacc-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfwmacc-vp.ll
@@ -635,8 +635,7 @@ define <vscale x 16 x float> @vfmacc_vv_nxv16f32(<vscale x 16 x half> %a, <vscal
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: addi a1, sp, 16
@@ -649,8 +648,7 @@ define <vscale x 16 x float> @vfmacc_vv_nxv16f32(<vscale x 16 x half> %a, <vscal
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
; ZVFHMIN-NEXT: vfmadd.vv v24, v16, v8, v0.t
; ZVFHMIN-NEXT: vmv.v.v v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfwnmacc-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfwnmacc-vp.ll
index 6ea58a4e768736..2bfbdeb4bd2dd2 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfwnmacc-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfwnmacc-vp.ll
@@ -592,8 +592,7 @@ define <vscale x 16 x float> @vfnmacc_vv_nxv16f32(<vscale x 16 x half> %a, <vsca
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: addi a1, sp, 16
@@ -606,8 +605,7 @@ define <vscale x 16 x float> @vfnmacc_vv_nxv16f32(<vscale x 16 x half> %a, <vsca
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
; ZVFHMIN-NEXT: vfnmadd.vv v24, v16, v8, v0.t
; ZVFHMIN-NEXT: vmv.v.v v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vfwnmsac-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfwnmsac-vp.ll
index 0afbe58038c76f..55611933c08c62 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vfwnmsac-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vfwnmsac-vp.ll
@@ -568,8 +568,7 @@ define <vscale x 16 x float> @vfnmsac_vv_nxv16f32(<vscale x 16 x half> %a, <vsca
; ZVFHMIN: # %bb.0:
; ZVFHMIN-NEXT: addi sp, sp, -16
; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16
-; ZVFHMIN-NEXT: csrr a1, vlenb
-; ZVFHMIN-NEXT: slli a1, a1, 3
+; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: sub sp, sp, a1
; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFHMIN-NEXT: addi a1, sp, 16
@@ -582,8 +581,7 @@ define <vscale x 16 x float> @vfnmsac_vv_nxv16f32(<vscale x 16 x half> %a, <vsca
; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma
; ZVFHMIN-NEXT: vfnmsub.vv v24, v16, v8, v0.t
; ZVFHMIN-NEXT: vmv.v.v v8, v24
-; ZVFHMIN-NEXT: csrr a0, vlenb
-; ZVFHMIN-NEXT: slli a0, a0, 3
+; ZVFHMIN-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFHMIN-NEXT: add sp, sp, a0
; ZVFHMIN-NEXT: addi sp, sp, 16
; ZVFHMIN-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vp-reverse-int.ll b/llvm/test/CodeGen/RISCV/rvv/vp-reverse-int.ll
index 717b3a551d219f..f0d32a11f88c9c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vp-reverse-int.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vp-reverse-int.ll
@@ -513,7 +513,7 @@ define <vscale x 128 x i8> @test_vp_reverse_nxv128i8(<vscale x 128 x i8> %src, i
; CHECK-NEXT: .cfi_offset s0, -16
; CHECK-NEXT: addi s0, sp, 80
; CHECK-NEXT: .cfi_def_cfa s0, 0
-; CHECK-NEXT: csrr a3, vlenb
+; CHECK-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a3, a3, 4
; CHECK-NEXT: sub sp, sp, a3
; CHECK-NEXT: andi sp, sp, -64
diff --git a/llvm/test/CodeGen/RISCV/rvv/vpmerge-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vpmerge-sdnode.ll
index ea7bf65fc5644d..7a2ee5285924f9 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vpmerge-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vpmerge-sdnode.ll
@@ -357,8 +357,7 @@ define <vscale x 128 x i8> @vpmerge_vv_nxv128i8(<vscale x 128 x i8> %va, <vscale
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 3
+; CHECK-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vmv1r.v v7, v0
@@ -387,8 +386,7 @@ define <vscale x 128 x i8> @vpmerge_vv_nxv128i8(<vscale x 128 x i8> %va, <vscale
; CHECK-NEXT: vl8r.v v24, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, a3, e8, m8, tu, ma
; CHECK-NEXT: vmerge.vvm v8, v8, v24, v0
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vpscatter-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vpscatter-sdnode.ll
index 0028f3035c2734..ae347f77b58d7c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vpscatter-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vpscatter-sdnode.ll
@@ -2319,8 +2319,7 @@ define void @vpscatter_nxv16f64(<vscale x 16 x double> %val, <vscale x 16 x ptr>
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a1, vlenb
-; RV64-NEXT: slli a1, a1, 3
+; RV64-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; RV64-NEXT: csrr a1, vlenb
@@ -2348,8 +2347,7 @@ define void @vpscatter_nxv16f64(<vscale x 16 x double> %val, <vscale x 16 x ptr>
; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; RV64-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v16, (zero), v8, v0.t
-; RV64-NEXT: csrr a0, vlenb
-; RV64-NEXT: slli a0, a0, 3
+; RV64-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
; RV64-NEXT: ret
@@ -2387,7 +2385,7 @@ define void @vpscatter_baseidx_nxv16i16_nxv16f64(<vscale x 16 x double> %val, pt
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a3, vlenb
+; RV64-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; RV64-NEXT: slli a3, a3, 4
; RV64-NEXT: sub sp, sp, a3
; RV64-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -2428,7 +2426,7 @@ define void @vpscatter_baseidx_nxv16i16_nxv16f64(<vscale x 16 x double> %val, pt
; RV64-NEXT: vl8r.v v16, (a1) # Unknown-size Folded Reload
; RV64-NEXT: vsetvli zero, a2, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a0, a0, 4
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: addi sp, sp, 16
@@ -2468,7 +2466,7 @@ define void @vpscatter_baseidx_sext_nxv16i16_nxv16f64(<vscale x 16 x double> %va
; RV64: # %bb.0:
; RV64-NEXT: addi sp, sp, -16
; RV64-NEXT: .cfi_def_cfa_offset 16
-; RV64-NEXT: csrr a3, vlenb
+; RV64-NEXT: vsetvli a3, zero, e8, m1, ta, ma
; RV64-NEXT: slli a4, a3, 3
; RV64-NEXT: add a3, a4, a3
; RV64-NEXT: sub sp, sp, a3
@@ -2508,7 +2506,7 @@ define void @vpscatter_baseidx_sext_nxv16i16_nxv16f64(<vscale x 16 x double> %va
; RV64-NEXT: vl8r.v v8, (a1) # Unknown-size Folded Reload
; RV64-NEXT: vsetvli zero, a2, e64, m8, ta, ma
; RV64-NEXT: vsoxei64.v v8, (a0), v16, v0.t
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: slli a1, a0, 3
; RV64-NEXT: add a0, a1, a0
; RV64-NEXT: add sp, sp, a0
diff --git a/llvm/test/CodeGen/RISCV/rvv/vpstore.ll b/llvm/test/CodeGen/RISCV/rvv/vpstore.ll
index d935e52149d207..e9bdeba27b4006 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vpstore.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vpstore.ll
@@ -465,8 +465,7 @@ define void @vpstore_nxv17f64(<vscale x 17 x double> %val, ptr %ptr, <vscale x 1
; CHECK-NEXT: .LBB35_4:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a7, vlenb
-; CHECK-NEXT: slli a7, a7, 3
+; CHECK-NEXT: vsetvli a7, zero, e8, m8, ta, ma
; CHECK-NEXT: sub sp, sp, a7
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; CHECK-NEXT: vl8re64.v v0, (a0)
@@ -503,8 +502,7 @@ define void @vpstore_nxv17f64(<vscale x 17 x double> %val, ptr %ptr, <vscale x 1
; CHECK-NEXT: vl8r.v v8, (a2) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, a0, e64, m8, ta, ma
; CHECK-NEXT: vse64.v v8, (a1), v0.t
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 3
+; CHECK-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
; CHECK-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll
index ee0617c9314801..7945496b8986a3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vselect-vp.ll
@@ -353,7 +353,7 @@ define <vscale x 32 x i32> @select_nxv32i32(<vscale x 32 x i1> %a, <vscale x 32
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -416,7 +416,7 @@ define <vscale x 32 x i32> @select_nxv32i32(<vscale x 32 x i1> %a, <vscale x 32
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, a2, e32, m8, ta, ma
; CHECK-NEXT: vmerge.vvm v8, v8, v24, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -432,7 +432,7 @@ define <vscale x 32 x i32> @select_evl_nxv32i32(<vscale x 32 x i1> %a, <vscale x
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 5
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x20, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 32 * vlenb
@@ -495,7 +495,7 @@ define <vscale x 32 x i32> @select_evl_nxv32i32(<vscale x 32 x i1> %a, <vscale x
; CHECK-NEXT: vl8r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, a1, e32, m8, ta, ma
; CHECK-NEXT: vmerge.vvm v8, v8, v24, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 5
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
@@ -741,7 +741,7 @@ define <vscale x 16 x double> @select_nxv16f64(<vscale x 16 x i1> %a, <vscale x
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -781,7 +781,7 @@ define <vscale x 16 x double> @select_nxv16f64(<vscale x 16 x i1> %a, <vscale x
; CHECK-NEXT: vl8r.v v24, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, a2, e64, m8, ta, ma
; CHECK-NEXT: vmerge.vvm v8, v24, v8, v0
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll
index 027c81180d5f19..706bba8a0c237b 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll
@@ -331,8 +331,7 @@ define <vscale x 1 x double> @test8(i64 %avl, i8 zeroext %cond, <vscale x 1 x do
; CHECK-NEXT: addi sp, sp, -32
; CHECK-NEXT: sd ra, 24(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 16(sp) # 8-byte Folded Spill
-; CHECK-NEXT: csrr a1, vlenb
-; CHECK-NEXT: slli a1, a1, 1
+; CHECK-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: mv s0, a0
; CHECK-NEXT: csrr a0, vlenb
@@ -350,8 +349,7 @@ define <vscale x 1 x double> @test8(i64 %avl, i8 zeroext %cond, <vscale x 1 x do
; CHECK-NEXT: vl1r.v v9, (a0) # Unknown-size Folded Reload
; CHECK-NEXT: vsetvli zero, s0, e64, m1, ta, ma
; CHECK-NEXT: vfsub.vv v8, v9, v8
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 1
+; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: ld ra, 24(sp) # 8-byte Folded Reload
; CHECK-NEXT: ld s0, 16(sp) # 8-byte Folded Reload
@@ -384,8 +382,7 @@ define <vscale x 1 x double> @test9(i64 %avl, i8 zeroext %cond, <vscale x 1 x do
; CHECK-NEXT: addi sp, sp, -32
; CHECK-NEXT: sd ra, 24(sp) # 8-byte Folded Spill
; CHECK-NEXT: sd s0, 16(sp) # 8-byte Folded Spill
-; CHECK-NEXT: csrr a2, vlenb
-; CHECK-NEXT: slli a2, a2, 1
+; CHECK-NEXT: vsetvli a2, zero, e8, m2, ta, ma
; CHECK-NEXT: sub sp, sp, a2
; CHECK-NEXT: mv s0, a0
; CHECK-NEXT: vsetvli zero, a0, e64, m1, ta, ma
@@ -411,8 +408,7 @@ define <vscale x 1 x double> @test9(i64 %avl, i8 zeroext %cond, <vscale x 1 x do
; CHECK-NEXT: .LBB7_3: # %if.end
; CHECK-NEXT: vsetvli zero, s0, e64, m1, ta, ma
; CHECK-NEXT: vfmul.vv v8, v9, v8
-; CHECK-NEXT: csrr a0, vlenb
-; CHECK-NEXT: slli a0, a0, 1
+; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: ld ra, 24(sp) # 8-byte Folded Reload
; CHECK-NEXT: ld s0, 16(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsitofp-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vsitofp-vp.ll
index d163988b3d41cd..d6728f8eb46730 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsitofp-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsitofp-vp.ll
@@ -510,8 +510,7 @@ define <vscale x 32 x half> @vsitofp_nxv32f16_nxv32i32(<vscale x 32 x i32> %va,
; ZVFH: # %bb.0:
; ZVFH-NEXT: addi sp, sp, -16
; ZVFH-NEXT: .cfi_def_cfa_offset 16
-; ZVFH-NEXT: csrr a1, vlenb
-; ZVFH-NEXT: slli a1, a1, 3
+; ZVFH-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFH-NEXT: vmv1r.v v7, v0
@@ -538,8 +537,7 @@ define <vscale x 32 x half> @vsitofp_nxv32f16_nxv32i32(<vscale x 32 x i32> %va,
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; ZVFH-NEXT: vfncvt.f.x.w v16, v8, v0.t
; ZVFH-NEXT: vmv8r.v v8, v16
-; ZVFH-NEXT: csrr a0, vlenb
-; ZVFH-NEXT: slli a0, a0, 3
+; ZVFH-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFH-NEXT: add sp, sp, a0
; ZVFH-NEXT: addi sp, sp, 16
; ZVFH-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vtrunc-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vtrunc-vp.ll
index 27755c166cc52d..eb35eb71f8e6cf 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vtrunc-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vtrunc-vp.ll
@@ -284,7 +284,7 @@ define <vscale x 32 x i32> @vtrunc_nxv32i64_nxv32i32(<vscale x 32 x i64> %a, <vs
; CHECK: # %bb.0:
; CHECK-NEXT: addi sp, sp, -16
; CHECK-NEXT: .cfi_def_cfa_offset 16
-; CHECK-NEXT: csrr a1, vlenb
+; CHECK-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a1, a1, 4
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb
@@ -353,7 +353,7 @@ define <vscale x 32 x i32> @vtrunc_nxv32i64_nxv32i32(<vscale x 32 x i64> %a, <vs
; CHECK-NEXT: vsetvli zero, a2, e32, m4, ta, ma
; CHECK-NEXT: vnsrl.wi v24, v8, 0, v0.t
; CHECK-NEXT: vmv8r.v v8, v24
-; CHECK-NEXT: csrr a0, vlenb
+; CHECK-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; CHECK-NEXT: slli a0, a0, 4
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: addi sp, sp, 16
diff --git a/llvm/test/CodeGen/RISCV/rvv/vuitofp-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vuitofp-vp.ll
index 7c96a9e9e10f65..0941bddd210db7 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vuitofp-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vuitofp-vp.ll
@@ -502,8 +502,7 @@ define <vscale x 32 x half> @vuitofp_nxv32f16_nxv32i32(<vscale x 32 x i32> %va,
; ZVFH: # %bb.0:
; ZVFH-NEXT: addi sp, sp, -16
; ZVFH-NEXT: .cfi_def_cfa_offset 16
-; ZVFH-NEXT: csrr a1, vlenb
-; ZVFH-NEXT: slli a1, a1, 3
+; ZVFH-NEXT: vsetvli a1, zero, e8, m8, ta, ma
; ZVFH-NEXT: sub sp, sp, a1
; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb
; ZVFH-NEXT: vmv1r.v v7, v0
@@ -530,8 +529,7 @@ define <vscale x 32 x half> @vuitofp_nxv32f16_nxv32i32(<vscale x 32 x i32> %va,
; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma
; ZVFH-NEXT: vfncvt.f.xu.w v16, v8, v0.t
; ZVFH-NEXT: vmv8r.v v8, v16
-; ZVFH-NEXT: csrr a0, vlenb
-; ZVFH-NEXT: slli a0, a0, 3
+; ZVFH-NEXT: vsetvli a0, zero, e8, m8, ta, ma
; ZVFH-NEXT: add sp, sp, a0
; ZVFH-NEXT: addi sp, sp, 16
; ZVFH-NEXT: ret
diff --git a/llvm/test/CodeGen/RISCV/rvv/vxrm-insert.ll b/llvm/test/CodeGen/RISCV/rvv/vxrm-insert.ll
index 72f25268109a13..c848e5501699c0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vxrm-insert.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vxrm-insert.ll
@@ -74,7 +74,7 @@ define <vscale x 1 x i8> @test3(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, <vsc
; RV32-NEXT: addi sp, sp, -32
; RV32-NEXT: sw ra, 28(sp) # 4-byte Folded Spill
; RV32-NEXT: sw s0, 24(sp) # 4-byte Folded Spill
-; RV32-NEXT: csrr a1, vlenb
+; RV32-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV32-NEXT: sub sp, sp, a1
; RV32-NEXT: mv s0, a0
; RV32-NEXT: addi a1, sp, 16
@@ -88,7 +88,7 @@ define <vscale x 1 x i8> @test3(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, <vsc
; RV32-NEXT: vl1r.v v9, (a0) # Unknown-size Folded Reload
; RV32-NEXT: vsetvli zero, s0, e8, mf8, ta, ma
; RV32-NEXT: vaadd.vv v8, v8, v9
-; RV32-NEXT: csrr a0, vlenb
+; RV32-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV32-NEXT: add sp, sp, a0
; RV32-NEXT: lw ra, 28(sp) # 4-byte Folded Reload
; RV32-NEXT: lw s0, 24(sp) # 4-byte Folded Reload
@@ -100,7 +100,7 @@ define <vscale x 1 x i8> @test3(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, <vsc
; RV64-NEXT: addi sp, sp, -32
; RV64-NEXT: sd ra, 24(sp) # 8-byte Folded Spill
; RV64-NEXT: sd s0, 16(sp) # 8-byte Folded Spill
-; RV64-NEXT: csrr a1, vlenb
+; RV64-NEXT: vsetvli a1, zero, e8, m1, ta, ma
; RV64-NEXT: sub sp, sp, a1
; RV64-NEXT: mv s0, a0
; RV64-NEXT: addi a1, sp, 16
@@ -114,7 +114,7 @@ define <vscale x 1 x i8> @test3(<vscale x 1 x i8> %0, <vscale x 1 x i8> %1, <vsc
; RV64-NEXT: vl1r.v v9, (a0) # Unknown-size Folded Reload
; RV64-NEXT: vsetvli zero, s0, e8, mf8, ta, ma
; RV64-NEXT: vaadd.vv v8, v8, v9
-; RV64-NEXT: csrr a0, vlenb
+; RV64-NEXT: vsetvli a0, zero, e8, m1, ta, ma
; RV64-NEXT: add sp, sp, a0
; RV64-NEXT: ld ra, 24(sp) # 8-byte Folded Reload
; RV64-NEXT: ld s0, 16(sp) # 8-byte Folded Reload
diff --git a/llvm/test/CodeGen/RISCV/rvv/wrong-stack-offset-for-rvv-object.mir b/llvm/test/CodeGen/RISCV/rvv/wrong-stack-offset-for-rvv-object.mir
index 2ec51911a65f76..295eef51a90d3d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/wrong-stack-offset-for-rvv-object.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/wrong-stack-offset-for-rvv-object.mir
@@ -132,7 +132,7 @@ body: |
; CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $x1, -24
; CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $x8, -32
; CHECK-NEXT: frame-setup CFI_INSTRUCTION offset $x9, -40
- ; CHECK-NEXT: $x10 = frame-setup PseudoReadVLENB
+ ; CHECK-NEXT: $x10 = frame-setup PseudoReadMulVLENB 1, implicit-def $vl, implicit-def $vtype
; CHECK-NEXT: $x2 = frame-setup SUB $x2, killed $x10
; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0e, 0x72, 0x00, 0x11, 0xd0, 0x00, 0x22, 0x11, 0x01, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22
; CHECK-NEXT: renamable $x8 = COPY $x14
diff --git a/llvm/test/CodeGen/RISCV/rvv/wrong-stack-slot-rv32.mir b/llvm/test/CodeGen/RISCV/rvv/wrong-stack-slot-rv32.mir
index 268e21b0732ab7..c2ae6dc9cb2bb5 100644
--- a/llvm/test/CodeGen/RISCV/rvv/wrong-stack-slot-rv32.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/wrong-stack-slot-rv32.mir
@@ -11,14 +11,12 @@
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -32
; CHECK-NEXT: sw s9, 28(sp) # 4-byte Folded Spill
- ; CHECK-NEXT: csrr a1, vlenb
- ; CHECK-NEXT: slli a1, a1, 1
+ ; CHECK-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: sw a0, 8(sp) # 4-byte Folded Spill
; CHECK-NEXT: addi a0, sp, 16
; CHECK-NEXT: vs2r.v v30, (a0) # Unknown-size Folded Spill
- ; CHECK-NEXT: csrr a0, vlenb
- ; CHECK-NEXT: slli a0, a0, 1
+ ; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: lw s9, 28(sp) # 4-byte Folded Reload
; CHECK-NEXT: addi sp, sp, 32
@@ -35,8 +33,7 @@
; CHECK-NEXT: sw s0, 72(sp) # 4-byte Folded Spill
; CHECK-NEXT: sw s9, 68(sp) # 4-byte Folded Spill
; CHECK-NEXT: addi s0, sp, 80
- ; CHECK-NEXT: csrr a1, vlenb
- ; CHECK-NEXT: slli a1, a1, 1
+ ; CHECK-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: andi sp, sp, -32
; CHECK-NEXT: sw a0, 32(sp) # 4-byte Folded Spill
diff --git a/llvm/test/CodeGen/RISCV/rvv/wrong-stack-slot-rv64.mir b/llvm/test/CodeGen/RISCV/rvv/wrong-stack-slot-rv64.mir
index 07e7ad5184f084..86d2c6da2f7184 100644
--- a/llvm/test/CodeGen/RISCV/rvv/wrong-stack-slot-rv64.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/wrong-stack-slot-rv64.mir
@@ -11,14 +11,12 @@
; CHECK: # %bb.0: # %entry
; CHECK-NEXT: addi sp, sp, -48
; CHECK-NEXT: sd s9, 40(sp) # 8-byte Folded Spill
- ; CHECK-NEXT: csrr a1, vlenb
- ; CHECK-NEXT: slli a1, a1, 1
+ ; CHECK-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; CHECK-NEXT: sub sp, sp, a1
; CHECK-NEXT: sd a0, 16(sp) # 8-byte Folded Spill
; CHECK-NEXT: addi a0, sp, 32
; CHECK-NEXT: vs2r.v v30, (a0) # Unknown-size Folded Spill
- ; CHECK-NEXT: csrr a0, vlenb
- ; CHECK-NEXT: slli a0, a0, 1
+ ; CHECK-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; CHECK-NEXT: add sp, sp, a0
; CHECK-NEXT: ld s9, 40(sp) # 8-byte Folded Reload
; CHECK-NEXT: addi sp, sp, 48
diff --git a/llvm/test/CodeGen/RISCV/rvv/zvlsseg-spill.mir b/llvm/test/CodeGen/RISCV/rvv/zvlsseg-spill.mir
index fcd852f1210df5..5e8cda7b4504f1 100644
--- a/llvm/test/CodeGen/RISCV/rvv/zvlsseg-spill.mir
+++ b/llvm/test/CodeGen/RISCV/rvv/zvlsseg-spill.mir
@@ -23,8 +23,7 @@ body: |
; CHECK-NEXT: {{ $}}
; CHECK-NEXT: $x2 = frame-setup ADDI $x2, -16
; CHECK-NEXT: frame-setup CFI_INSTRUCTION def_cfa_offset 16
- ; CHECK-NEXT: $x12 = frame-setup PseudoReadVLENB
- ; CHECK-NEXT: $x12 = frame-setup SLLI killed $x12, 3
+ ; CHECK-NEXT: $x12 = frame-setup PseudoReadMulVLENB 8, implicit-def $vl, implicit-def $vtype
; CHECK-NEXT: $x2 = frame-setup SUB $x2, killed $x12
; CHECK-NEXT: frame-setup CFI_INSTRUCTION escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22
; CHECK-NEXT: dead $x0 = PseudoVSETVLI killed renamable $x11, 216 /* e64, m1, ta, ma */, implicit-def $vl, implicit-def $vtype
@@ -60,8 +59,7 @@ body: |
; CHECK-NEXT: $x11 = ADD killed $x11, killed $x12
; CHECK-NEXT: $v13 = VL1RE8_V killed $x11 :: (load unknown-size from %stack.0, align 8)
; CHECK-NEXT: VS1R_V killed $v8, killed renamable $x10
- ; CHECK-NEXT: $x10 = frame-destroy PseudoReadVLENB
- ; CHECK-NEXT: $x10 = frame-destroy SLLI killed $x10, 3
+ ; CHECK-NEXT: $x10 = frame-destroy PseudoReadMulVLENB 8, implicit-def $vl, implicit-def $vtype
; CHECK-NEXT: $x2 = frame-destroy ADD $x2, killed $x10
; CHECK-NEXT: $x2 = frame-destroy ADDI $x2, 16
; CHECK-NEXT: PseudoRET
diff --git a/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll b/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
index 307a0531cf0296..a6e3ba1ee28e36 100644
--- a/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
+++ b/llvm/test/CodeGen/RISCV/srem-seteq-illegal-types.ll
@@ -606,8 +606,7 @@ define void @test_srem_vec(ptr %X) nounwind {
; RV32MV-NEXT: sw s2, 48(sp) # 4-byte Folded Spill
; RV32MV-NEXT: sw s3, 44(sp) # 4-byte Folded Spill
; RV32MV-NEXT: sw s4, 40(sp) # 4-byte Folded Spill
-; RV32MV-NEXT: csrr a1, vlenb
-; RV32MV-NEXT: slli a1, a1, 1
+; RV32MV-NEXT: vsetvli a1, zero, e8, m2, ta, ma
; RV32MV-NEXT: sub sp, sp, a1
; RV32MV-NEXT: mv s0, a0
; RV32MV-NEXT: lbu a1, 12(a0)
@@ -712,8 +711,7 @@ define void @test_srem_vec(ptr %X) nounwind {
; RV32MV-NEXT: sw a2, 4(s0)
; RV32MV-NEXT: sw a0, 8(s0)
; RV32MV-NEXT: sb a3, 12(s0)
-; RV32MV-NEXT: csrr a0, vlenb
-; RV32MV-NEXT: slli a0, a0, 1
+; RV32MV-NEXT: vsetvli a0, zero, e8, m2, ta, ma
; RV32MV-NEXT: add sp, sp, a0
; RV32MV-NEXT: lw ra, 60(sp) # 4-byte Folded Reload
; RV32MV-NEXT: lw s0, 56(sp) # 4-byte Folded Reload